Science.gov

Sample records for multisensor array processing

  1. Sensor fusion with on-line gas emission multisensor arrays and standard process measuring devices in baker's yeast manufacturing process.

    PubMed

    Mandenius, C F; Eklöv, T; Lundström, I

    1997-07-20

    The use of a multisensor array for measuring the emission from a production-scale baker's yeast manufacturing process is reported. The sensor array, containing 14 different gas-sensitive semiconductor devices and an infrared gas sensor, was used to monitor the gas emission from a yeast culture bioreactor during fed-batch operation. The signal pattern from the sensors was evaluated in relation to two key process variables, the cell mass and the ethanol concentrations. Fusion with the on-line sensor signals for reactor weight and aeration rate made it possible to estimate cell mass and ethanol concentration using computation with backpropagating artificial neural nets. Identification of process states with the same fusion of sensor signals was realized using principal component analysis. (c) 1997 John Wiley & Sons, Inc. Biotechnol Bioeng 55: 427-438, 1997.
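
    The estimation step described above (sensor fusion feeding a backpropagation network that predicts cell mass and ethanol) can be illustrated with a minimal regression sketch. This is not the authors' code; the data shapes, the 17-input fusion vector (14 semiconductor sensors, one IR sensor, reactor weight, aeration rate) and the network size are assumptions for illustration only.

    ```python
    # Hedged sketch: multi-output backpropagation regression from fused sensor signals.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 17))   # placeholder fused sensor readings per time point
    y = rng.normal(size=(200, 2))    # placeholder reference [cell mass, ethanol] values

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    )
    model.fit(X, y)
    cell_mass, ethanol = model.predict(X[:1])[0]   # estimates for one new sample
    ```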

  2. Multisensor Arrays for Greater Reliability and Accuracy

    NASA Technical Reports Server (NTRS)

    Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff

    2004-01-01

    Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that
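
    The weighted-average step of the MSA algorithm lends itself to a short sketch. The abstract does not give the exact weighting formula, so the reliability score below (penalizing sensors whose ratios to the others drift away from 1) and the running-sum variant are one plausible reading, not the published implementation.

    ```python
    # Hedged sketch of an MSA-style reliability-weighted average (assumed formula).
    import numpy as np

    def msa_reading(readings, running_weights=None):
        r = np.asarray(readings, dtype=float)
        n = len(r)
        ratios = r[:, None] / r[None, :]                   # every reading vs. every other
        deviation = np.abs(ratios - 1.0).sum(axis=1) / (n - 1)
        reliability = 1.0 / (1.0 + deviation)              # near 1 when a sensor agrees with the rest
        if running_weights is not None:                    # optional variant: accumulate over time so a
            running_weights += reliability                 # continually failing sensor loses influence
            weights = running_weights
        else:
            weights = reliability
        return float(np.average(r, weights=weights)), reliability
    ```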

  3. Information coding in artificial olfaction multisensor arrays.

    PubMed

    Albert, Keith J; Walt, David R

    2003-08-15

    High-density sensor arrays were prepared with microbead vapor sensors to explore and compare the information coded in sensor response profiles following odor stimulus. The coded information in the sensor-odor response profiles, which is used for odor discrimination purposes, was extracted from the microsensor arrays via two different approaches. In the first approach, the responses from individual microsensors were separated (decoded array) and independently processed. In the second approach, response profiles from all microsensors within the entire array, i.e., the sensor ensemble, were combined to create one response per odor stimulus (nondecoded array). Although the amount of response data is markedly reduced in the second approach, the system shows comparable odor discrimination rates for the two signal extraction methods. The ensemble approach streamlines system resources without decreasing system performance. These signal compression approaches may simulate or parallel information coding in the mammalian olfactory system. PMID:14632130

  4. Graphene- and graphene oxide- based multisensor arrays for selective gas analysis

    NASA Astrophysics Data System (ADS)

    Lipatov, Alexey; Varezhnikov, Alexey; Sysoev, Victor; Kolmakov, Andrei; Sinitskii, Alexander

    2014-03-01

    Arrays of nearly identical graphene devices on Si/SiO2 exhibit substantial device-to-device variation, even in the case of high-quality chemical vapor deposition (CVD) or mechanically exfoliated graphene. We propose that such device-to-device variation could provide a platform for highly selective multisensor electronic olfactory systems. We fabricated a multielectrode array of CVD graphene devices on a Si/SiO2 substrate and demonstrated that the diversity of these devices is sufficient to reliably discriminate different short-chain alcohols: methanol, ethanol and isopropanol. The diversity of graphene devices on Si/SiO2 could possibly be used to construct multisensor systems trained to recognize other analytes as well. Similar multisensor arrays based on graphene oxide (GO) devices are also capable of discriminating these short-chain alcohols. We discuss the possibility of chemically modifying GO to further increase the selectivity of GO multisensor arrays.

  5. Breath analysis system for early detection of lung diseases based on multi-sensor array

    NASA Astrophysics Data System (ADS)

    Jeon, Jin-Young; Yu, Joon-Boo; Shin, Jeong-Suk; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Expiratory breath contains various VOCs (volatile organic compounds) produced by the human body. When a certain disease is present, the exhaled breath contains specific VOCs that may be generated by that disease. Many researchers have been actively working to find different types of biomarkers that are characteristic of particular diseases, and research on identifying specific diseases from exhaled breath is still in progress. The aim of this research is the early detection of lung diseases such as lung cancer and COPD (chronic obstructive pulmonary disease), which ranked sixth among domestic causes of death in 2010, based on a multi-sensor array system. The system was used to acquire sampled expiratory gas data, and PCA (principal component analysis) was applied to analyze the signals from the multi-sensor array. Throughout the experimental trials, a clearly distinguishable difference between lung disease patients and healthy controls was found from the measurement and analysis of their respective expiratory gases.
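
    A minimal sketch of the PCA step follows. The breath-sensor responses and group sizes are synthetic placeholders, not the study's data; the point is only that projecting the multi-sensor responses onto the first two principal components is how the patient/control separation would typically be visualized.

    ```python
    # Hedged sketch: PCA projection of multi-sensor breath responses (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    healthy = rng.normal(0.0, 1.0, size=(20, 8))    # 8-sensor array responses, controls
    patients = rng.normal(1.5, 1.0, size=(20, 8))   # shifted responses, patients
    X = np.vstack([healthy, patients])

    scores = PCA(n_components=2).fit_transform(X)
    # Plotting scores[:20] against scores[20:] would show the kind of separation
    # between healthy controls and lung-disease patients reported above.
    ```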

  6. Could We Apply a NeuroProcessor For Analyzing a Gas Response Of Multisensor Arrays?

    SciTech Connect

    Sysoev, V. V.; Musatov, V. Yu.; Maschenko, A. A.; Varegnikov, A. S.; Chrizostomov, A. A.; Kiselev, I.; Schneider, T.; Bruns, M.; Sommer, M.

    2009-05-23

    We describe an effort to implement a hardware neuroprocessor for pattern recognition of signals generated by a multisensor microarray of the electronic nose type. The multisensor microarray is designed with a SnO2 thin film segmented by co-planar electrodes according to the KAMINA (KArlsruhe Micro NAse) e-nose architecture. The response of this microarray to reducing gases mixed with synthetic air is processed by a principal component analysis technique implemented on a PC (Matlab software) and by the neural microprocessor NeuroMatrix NM6403. It is shown that the neuroprocessor is able to successfully carry out the gas-recognition algorithm in real time.

  7. Initial Field Measurements with the Multisensor Airborne Radiation Survey (MARS) High Purity Germanium (HPGe) Detector Array

    SciTech Connect

    Fast, James E.; Bonebrake, Christopher A.; Dorow, Kevin E.; Glasgow, Brian D.; Jensen, Jeffrey L.; Morris, Scott J.; Orrell, John L.; Pitts, W. Karl; Rohrer, John S.; Todd, Lindsay C.

    2010-06-29

    The Multi-sensor Airborne Radiation Survey (MARS) project has developed a new single cryostat detector array design for high purity germanium (HPGe) gamma ray spectrometers that achieves the high detection efficiency required for stand-off detection and actionable characterization of radiological threats. This approach is necessary since a high efficiency HPGe detector can only be built as an array due to limitations in growing large germanium crystals. The system is ruggedized and shock mounted for use in a variety of field applications. This paper reports on results from initial field measurements conducted in a truck and on two different boats.

  8. A Radiosonde Using a Humidity Sensor Array with a Platinum Resistance Heater and Multi-Sensor Data Fusion

    PubMed Central

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-01-01

    This paper describes the design and implementation of a radiosonde which can measure meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, a microwave module, a temperature sensor, a pressure sensor and a humidity sensor array. In order to effectively solve the humidity sensor condensation problem caused by the low temperatures of the high-altitude environment, a capacitive humidity sensor comprising four humidity sensors to collect meteorological humidity and a platinum resistance heater was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array improves measurement accuracy and provides reliable initial meteorological humidity data, while the multi-sensor data fusion technique reduces the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes. PMID:23857263
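
    The abstract does not spell out the fusion rule, so the sketch below shows one common choice, inverse-variance weighting of the four redundant humidity readings, purely as an assumed illustration of how multi-sensor fusion reduces measurement uncertainty.

    ```python
    # Hedged sketch: inverse-variance fusion of redundant humidity-sensor readings.
    import numpy as np

    def fuse_humidity(readings, variances):
        w = 1.0 / np.asarray(variances, dtype=float)        # noisier sensors get lower weight
        fused = float(np.sum(w * np.asarray(readings)) / np.sum(w))
        fused_variance = float(1.0 / np.sum(w))             # always <= the best single sensor
        return fused, fused_variance

    print(fuse_humidity([52.1, 51.8, 52.4, 53.0], [0.2, 0.2, 0.3, 0.5]))
    ```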

  9. Humanitarian multisensor hand-held mine detector: design of a GPR array

    NASA Astrophysics Data System (ADS)

    Crisp, Graeme N.; Hill, Andrew

    2002-08-01

    At present the most effective mechanical aids for the post-conflict hand clearance of anti-personnel mines are metal detectors and probes. These are effective against the majority of current mine threats, but clearance rates are limited because of the high incidence of false targets in post-conflict areas. Such false targets must be exposed and removed with the same care required for handling genuine ordnance. Clearance rates would be substantially improved if false targets detected by metal detectors could be distinguished from mine threats and thus left in place. One possible approach to the problem of differentiating between metal fragments and anti-personnel land mines is the use of multiple sensors. In this paper we discuss the design of a GPR for such a multi-sensor detector head. One of the challenges for combined metal detectors and GPR is the design of the GPR antenna so that it can operate effectively in the presence of metal detector coils. For a practicable device the GPR antennas must operate with the metal detector coils in their near field, and coupling between sensors is of primary importance. The antennas must also be designed so that their influence on the metal detector's sensitivity is minimized. In this paper we present one solution to this problem and present experimental results showing how the proposed GPR design operates in the presence of metal detector coils and in the presence of a resistive transducer located below the antenna array. The GPR concerned uses a 3x3 antenna array and post-reception synthetic aperture processing to provide a 3D image of the ground underneath the sensor. Focused images of various targets are presented, and images demonstrating the effects of the other sensors on the GPR are shown.
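
    The post-reception synthetic aperture focusing mentioned above is, at its core, delay-and-sum imaging. The sketch below is an assumed, simplified single-voxel version; the array geometry, sampling rate and soil propagation speed are illustrative parameters, not the design values from this work.

    ```python
    # Hedged sketch: delay-and-sum focusing of one voxel from a small GPR antenna array.
    import numpy as np

    def focus_voxel(traces, antenna_xy, voxel, fs, v):
        """traces: (n_antennas, n_samples) A-scans; antenna_xy: (n_antennas, 2) positions
        at the surface; voxel: (x, y, z) point below the array; fs: sample rate;
        v: wave speed in the soil (set by the estimated dielectric constant)."""
        x, y, z = voxel
        out = 0.0
        for trace, (ax, ay) in zip(traces, antenna_xy):
            dist = np.sqrt((x - ax) ** 2 + (y - ay) ** 2 + z ** 2)
            idx = int(round(2.0 * dist / v * fs))           # two-way travel time in samples
            if idx < trace.shape[0]:
                out += trace[idx]
        return out                                          # coherent sum: large at a reflector
    ```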

  10. Optical sensors and multisensor arrays containing thin film electroluminescent devices

    DOEpatents

    Aylott, Jonathan W.; Chen-Esterlit, Zoe; Friedl, Jon H.; Kopelman, Raoul; Savvateev, Vadim N.; Shinar, Joseph

    2001-12-18

    Optical sensor, probe and array devices for detecting chemical, biological, and physical analytes. The devices include an analyte-sensitive layer optically coupled to a thin film electroluminescent layer which activates the analyte-sensitive layer to provide an optical response. The optical response varies depending upon the presence of an analyte and is detected by a photodetector and analyzed to determine the properties of the analyte.

  11. Multi-Sensor Arrays for Online Monitoring of Cell Dynamics in in vitro Studies with Choroid Plexus Epithelial Cells

    PubMed Central

    Mestres-Ventura, Pedro; Morguet, Andrea; de las Heras, Soledad García Gómez

    2012-01-01

    Sensors and multi-sensor arrays are the basis of new technologies for the label-free monitoring of cell activity. In this paper we show that choroid plexus cells can be cultured on silicon chips and that the sensors register changes in their activity in real time, constituting an interesting experimental paradigm for cell biology and medical research. To validate the signals recorded (metabolism = peri-cellular acidification; oxygen consumption = respiration; impedance = adhesion, cell shape and motility) we performed experiments with compounds that act on cells in a well-known way, influencing these parameters. Our in vitro model demonstrates the advantages of multi-sensor arrays in the assessment and experimental characterization of dynamic cellular events, in this case in choroid plexus functions, with applicability to other cell types as well. PMID:22438715

  12. Multi-sensor Array for High Altitude Balloon Missions to the Stratosphere

    NASA Astrophysics Data System (ADS)

    Davis, Tim; McClurg, Bryce; Sohl, John

    2008-10-01

    We have designed and built a microprocessor-controlled and expandable multi-sensor array for data collection on near-space missions. Weber State University has started a high altitude research balloon program called HARBOR. The array has been designed to log a base set of measurements on every flight and has room for six guest instruments. The base measurements are absolute pressure, on-board temperature, a 3-axis accelerometer for attitude measurement, and a 2-axis compensated magnetic compass. The system also contains a real-time clock and circuitry for logging data directly to a USB memory stick. In typical operation the measurements are cycled through in sequence and saved to the memory stick along with the clock's time stamp. The microprocessor can be reprogrammed to adapt to guest experiments with either analog or digital interfacing. This system will fly on every mission and will provide backup data collection for other instrumentation whose primary task is measuring atmospheric pressure and temperature. The attitude data will be used to determine the orientation of the onboard camera systems to aid in identifying features in the images. This will make these images easier to use for any future GIS (geographic information system) remote sensing missions.

  13. Concept of data processing in multisensor system for perimeter protection

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Kastek, M.; Trzaskawka, P.; Piątkowski, T.; Szustakowski, M.; Życzkowski, M.

    2011-06-01

    The nature of recent terrorist attacks and military conflicts, as well as the necessity of protecting bases, convoys and patrols, has given serious impetus to the development of more effective security systems. The zone-sensor concepts of perimeter protection that have been widely used so far will be replaced in the near future by multi-sensor systems. Such systems can utilize day/night cameras, uncooled IR thermal cameras and millimeter-wave radars that detect radiation reflected from the target. The ranges of detection, recognition and identification for all targets depend on the parameters of the sensors used and on the observed scene itself. Apart from the sensors, the elements that most influence system effectiveness are intelligent data analysis and a proper data fusion algorithm. A multi-sensor protection system allows a significant improvement in the probability of detecting an intruder. The concept of data fusion in a multi-sensor system is introduced. It is based on an image fusion algorithm which allows intruders to be visualized and tracked under any conditions.

  14. Humanitarian multisensor hand-held mine detector: exploitation of ancillary data in GPR processing

    NASA Astrophysics Data System (ADS)

    Crisp, Graeme N.; Hill, Andrew

    2002-08-01

    QinetiQ is developing a hand-held multi-sensor mine detector prototype for humanitarian applications. The sensor consists of a GPR, a metal detector and ancillary sensors. This paper describes how data produced by the ancillary sensors can be exploited in order to assist the GPR processing. The GPR consists of a 3x3 array of antennas, and focused images of the volume beneath the sensor are formed by post-reception synthetic aperture processing. The mine detector is intended to detect subsurface targets, so an accurate knowledge of the ground surface position relative to the sensor is required. The high-frequency dielectric constant of the ground medium is also required in order to produce focused images. This paper analyses the requirements for good post-reception synthetic aperture processing. The required accuracy of the ground surface position data and of the dielectric constant estimation is determined. A model for soil dielectric constant is used to derive the sensitivity of post-reception synthetic aperture processing to unknown soil texture. It is shown that for the GPR configuration considered, a wide range of texture variations is tolerable provided the soil moisture can be accurately estimated. Variations in soil composition are also tolerable.

  15. Array signal processing

    SciTech Connect

    Haykin, S.; Justice, J.H.; Owsley, N.L.; Yen, J.L.; Kak, A.C.

    1985-01-01

    This is the first book to be devoted completely to array signal processing, a subject that has become increasingly important in recent years. The book consists of six chapters. Chapter 1, which is introductory, reviews some basic concepts in wave propagation. The remaining five chapters deal with the theory and applications of array signal processing in (a) exploration seismology, (b) passive sonar, (c) radar, (d) radio astronomy, and (e) tomographic imaging. The various chapters of the book are self-contained. The book is written by a team of five active researchers, who are specialists in the individual fields covered by the pertinent chapters.

  16. Multisensor Network System for Wildfire Detection Using Infrared Image Processing

    PubMed Central

    Bosch, I.; Serrano, A.; Vergara, L.

    2013-01-01

    This paper presents the next step in the evolution of multi-sensor wireless network systems for the early automatic detection of forest fires. The network allows remote monitoring of each of the locations as well as communication between each of the sensors and with the control stations. The result is an increased coverage area with quicker and safer responses. To determine the presence of a forest wildfire, the system employs decision fusion in thermal imaging, which can exploit various expected characteristics of a real fire, including short-term persistence and long-term increases over time. Results from testing in the laboratory and in a real environment are presented to verify the accuracy of the operation of the proposed system. The system performance is gauged by the number of alarms and the time to the first alarm (corresponding to a real fire) for different probabilities of false alarm (PFA). The necessity of including decision fusion is thereby demonstrated. PMID:23843734
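
    The decision-fusion idea (combine short-term persistence with a long-term upward trend before raising an alarm) can be sketched as below. The thresholds, window lengths and per-pixel logic are placeholders chosen for illustration, not the rules used in the published system.

    ```python
    # Hedged sketch: per-pixel fire decision fusing persistence and long-term increase.
    import numpy as np

    def fire_decision(frames, hot_thresh=310.0, persist=5, trend_thresh=2.0):
        """frames: (n_frames, H, W) thermal images. A pixel alarms only if it is hot in
        the last `persist` frames AND its mean level rose by `trend_thresh` between the
        first and second half of the observation window."""
        frames = np.asarray(frames, dtype=float)
        persistent = (frames[-persist:] > hot_thresh).all(axis=0)
        half = frames.shape[0] // 2
        trend = frames[half:].mean(axis=0) - frames[:half].mean(axis=0)
        return persistent & (trend > trend_thresh)          # boolean alarm map
    ```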

  17. Metal oxide based multisensor array and portable database for field analysis of antioxidants

    PubMed Central

    Sharpe, Erica; Bradley, Ryan; Frasco, Thalia; Jayathilaka, Dilhani; Marsh, Amanda; Andreescu, Silvana

    2014-01-01

    We report a novel chemical sensing array based on metal oxide nanoparticles as a portable and inexpensive paper-based colorimetric method for polyphenol detection and field characterization of antioxidant containing samples. Multiple metal oxide nanoparticles with various polyphenol binding properties were used as active sensing materials to develop the sensor array and establish a database of polyphenol standards that include epigallocatechin gallate, gallic acid, resveratrol, and Trolox among others. Unique charge-transfer complexes are formed between each polyphenol and each metal oxide on the surface of individual sensors in the array, creating distinct optically detectable signals which have been quantified and logged into a reference database for polyphenol identification. The field-portable Pantone/X-Rite© CapSure® color reader was used to create this database and to facilitate rapid colorimetric analysis. The use of multiple metal-oxide sensors allows for cross-validation of results and increases accuracy of analysis. The database has enabled successful identification and quantification of antioxidant constituents within real botanical extractions including green tea. Formation of charge-transfer complexes is also correlated with antioxidant activity exhibiting electron transfer capabilities of each polyphenol. The antioxidant activity of each sample was calculated and validated against the oxygen radical absorbance capacity (ORAC) assay showing good comparability. The results indicate that this method can be successfully used for a more comprehensive analysis of antioxidant containing samples as compared to conventional methods. This technology can greatly simplify investigations into plant phenolics and make possible the on-site determination of antioxidant composition and activity in remote locations. PMID:24610993

  18. Metal oxide based multisensor array and portable database for field analysis of antioxidants.

    PubMed

    Sharpe, Erica; Bradley, Ryan; Frasco, Thalia; Jayathilaka, Dilhani; Marsh, Amanda; Andreescu, Silvana

    2014-03-31

    We report a novel chemical sensing array based on metal oxide nanoparticles as a portable and inexpensive paper-based colorimetric method for polyphenol detection and field characterization of antioxidant containing samples. Multiple metal oxide nanoparticles with various polyphenol binding properties were used as active sensing materials to develop the sensor array and establish a database of polyphenol standards that include epigallocatechin gallate, gallic acid, resveratrol, and Trolox among others. Unique charge-transfer complexes are formed between each polyphenol and each metal oxide on the surface of individual sensors in the array, creating distinct optically detectable signals which have been quantified and logged into a reference database for polyphenol identification. The field-portable Pantone/X-Rite© CapSure® color reader was used to create this database and to facilitate rapid colorimetric analysis. The use of multiple metal-oxide sensors allows for cross-validation of results and increases accuracy of analysis. The database has enabled successful identification and quantification of antioxidant constituents within real botanical extractions including green tea. Formation of charge-transfer complexes is also correlated with antioxidant activity exhibiting electron transfer capabilities of each polyphenol. The antioxidant activity of each sample was calculated and validated against the oxygen radical absorbance capacity (ORAC) assay showing good comparability. The results indicate that this method can be successfully used for a more comprehensive analysis of antioxidant containing samples as compared to conventional methods. This technology can greatly simplify investigations into plant phenolics and make possible the on-site determination of antioxidant composition and activity in remote locations. PMID:24610993
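
    Identification against the reference database described above amounts to matching a sample's color-response pattern to the closest stored pattern. The sketch below assumes a simple nearest-neighbor (Euclidean) match; the reference values are invented placeholders, not the published database entries.

    ```python
    # Hedged sketch: nearest-neighbor matching of an array's color responses to a database.
    import numpy as np

    reference = {                                       # placeholder color-change signatures,
        "gallic acid":              [12.0,  5.5, 30.1,  8.2],   # one value per metal-oxide sensor
        "epigallocatechin gallate": [22.4,  9.1, 14.0,  3.3],
        "resveratrol":              [ 4.2, 18.7,  7.5, 11.9],
    }

    def identify(sample_response):
        sample = np.asarray(sample_response, dtype=float)
        return min(reference, key=lambda name: np.linalg.norm(sample - reference[name]))

    print(identify([11.5, 6.0, 29.0, 8.0]))             # -> "gallic acid" for this synthetic sample
    ```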

  19. Multi-sensor fusion development

    NASA Astrophysics Data System (ADS)

    Bish, Sheldon; Rohrer, Matthew; Scheffel, Peter; Bennett, Kelly

    2016-05-01

    The U.S. Army Research Laboratory (ARL) and McQ Inc. are developing a generic sensor fusion architecture that involves several diverse processes working in combination to create a dynamic task-oriented, real-time informational capability. Processes include sensor data collection, persistent and observational data storage, and multimodal and multisensor fusion that includes the flexibility to modify the fusion program rules for each mission. Such a fusion engine lends itself to a diverse set of sensing applications and architectures while using open-source software technologies. In this paper, we describe a fusion engine architecture that combines multimodal and multi-sensor fusion within an Open Standard for Unattended Sensors (OSUS) framework. The modular, plug-and-play architecture of OSUS allows future fusion plugin methodologies to have seamless integration into the fusion architecture at the conceptual and implementation level. Although beyond the scope of this paper, this architecture allows for data and information manipulation and filtering for an array of applications.

  20. Distributed multisensor processing, decision making, and control under constrained resources for remote health and environmental monitoring

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Sheikh, Tanwir; Chandramouli, Lavanya

    2004-04-01

    Previous field-deployable distributed sensing systems for health/biomedical applications and environmental sensing have been designed for data collection and data transmission at pre-set intervals, rather than for on-board processing. These previous sensing systems lack autonomous capabilities and have limited lifespans. We propose the use of an integrated machine learning architecture, with automated planning-scheduling and resource management capabilities, that can be used for a variety of autonomous sensing applications with very limited computing, power, and bandwidth resources. We lay out general solutions for efficient processing in a multi-tiered (three-tier) machine learning framework that is suited for remote, mobile sensing systems. Novel dimensionality reduction techniques that are designed for classification are used to compress each individual sensor's data and pass only relevant information to the mobile multisensor fusion module (second tier). Statistical classifiers that are capable of handling missing/partial sensory data due to sensor failure or power loss are used to detect critical events and pass the information to the third tier (central server) for manual analysis and/or analysis by advanced pattern recognition techniques. Genetic optimisation algorithms are used to control the system in the presence of dynamic events and also ensure that system requirements (i.e. minimum life of the system) are met. This tight integration of control optimisation and machine learning algorithms results in a highly efficient sensor network with intelligent decision making capabilities. The applicability of our technology to remote health monitoring and environmental monitoring is shown. Other uses of our solution are also discussed.

  1. Multi-sensor magnetoencephalography with atomic magnetometers

    NASA Astrophysics Data System (ADS)

    Johnson, Cort N.; Schwindt, P. D. D.; Weisend, M.

    2013-09-01

    The authors have detected magnetic fields from the human brain with two independent, simultaneously operating rubidium spin-exchange-relaxation-free magnetometers. Evoked responses from auditory stimulation were recorded from multiple subjects with two multi-channel magnetometers located on opposite sides of the head. Signal processing techniques enabled by multi-channel measurements were used to improve signal quality. This is the first demonstration of multi-sensor atomic magnetometer magnetoencephalography and provides a framework for developing a non-cryogenic, whole-head magnetoencephalography array for source localization.

  2. Multi-sensor magnetoencephalography with atomic magnetometers

    PubMed Central

    Johnson, Cort N; Schwindt, P D D; Weisend, M

    2014-01-01

    The authors have detected magnetic fields from the human brain with two independent, simultaneously operating rubidium spin-exchange-relaxation-free magnetometers. Evoked responses from auditory stimulation were recorded from multiple subjects with two multi-channel magnetometers located on opposite sides of the head. Signal processing techniques enabled by multi-channel measurements were used to improve signal quality. This is the first demonstration of multi-sensor atomic magnetometer magnetoencephalography and provides a framework for developing a non-cryogenic, whole-head magnetoencephalography array for source localization. PMID:23939051

  3. Proposed MIDAS II processing array

    SciTech Connect

    Meng, J.

    1982-03-01

    MIDAS (Modular Interactive Data Analysis System) is a ganged processor scheme used to interactively process large data bases occurring as a finite sequence of similar events. The existing device uses a system of eight ganged minicomputer central processor boards servicing a rotating group of 16 memory blocks. A proposal for MIDAS II, the successor to MIDAS, is to use a much larger number of ganged processors, one per memory block, avoiding the necessity of switching memories from processor to processor. To be economic, MIDAS II must use a small, relatively fast and inexpensive microprocessor, such as the TMS 9995. This paper analyzes the use of the TMS 9995 applied to the MIDAS II processing array, emphasizing computational, architectural and physical characteristics which make the use of the TMS 9995 attractive for this application.

  4. Airborne multisensor pod system (AMPS) data: Multispectral data integration and processing hints

    SciTech Connect

    Leary, T.J.; Lamb, A.

    1996-11-01

    The Department of Energy's Office of Arms Control and Non-Proliferation (NN-20) has developed a suite of airborne remote sensing systems that simultaneously collect coincident data from a US Navy P-3 aircraft. The primary objective of the Airborne Multisensor Pod System (AMPS) Program is "to collect multisensor data that can be used for data research, both to reduce interpretation problems associated with data overload and to develop information products more complete than can be obtained from any single sensor." The sensors are housed in wing-mounted pods and include: a Ku-Band Synthetic Aperture Radar; a CASI Hyperspectral Imager; a Daedalus 3600 Airborne Multispectral Scanner; a Wild Heerbrugg RC-30 motion-compensated large-format camera; various high-resolution, light-intensified and thermal video cameras; and several experimental sensors (e.g., the Portable Hyperspectral Imager for Low-Light Spectroscopy (PHILLS)). Over the past year or so, the Coastal Marine Resource Assessment (CAMRA) group at the Florida Department of Environmental Protection's Marine Research Institute (FMRI) has been working with the Department of Energy through the Naval Research Laboratory to develop applications and products from existing data. Considerable effort has been spent identifying image formats and integration parameters.

  5. Integrating Scientific Array Processing into Standard SQL

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and, in an extension, also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  6. Array algebra estimation in signal processing

    NASA Astrophysics Data System (ADS)

    Rauhala, U. A.

    A general theory of linear estimators called array algebra estimation is interpreted in terms of multidimensional digital signal processing, mathematical statistics, and numerical analysis. The theory has emerged during the past decade from the new field of a unified vector, matrix and tensor algebra called array algebra. The broad concepts of array algebra and its estimation theory cover several modern computerized sciences and technologies, converting their established notations and terminology into one common language. Some concepts of digital signal processing are adopted into this language after a review of the principles of array algebra estimation and its predecessors in the mathematical surveying sciences.

  7. Characterizing the Propagation of Uterine Electrophysiological Signals Recorded with a Multi-Sensor Abdominal Array in Term Pregnancies.

    PubMed

    Escalona-Vargas, Diana; Govindan, Rathinaswamy B; Furdea, Adrian; Murphy, Pam; Lowery, Curtis L; Eswaran, Hari

    2015-01-01

    The objective of this study was to quantify the number of segments that have contractile activity and to determine the propagation speed from uterine electrophysiological signals recorded over the abdomen. The uterine magnetomyographic (MMG) signals were recorded with a 151-channel SARA (SQUID Array for Reproductive Assessment) system from 36 pregnant women between 37 and 40 weeks of gestational age. The MMG signals were scored and segments were classified based on the presence of uterine contractile burst activity. The sensor space was then split into four quadrants, and in each quadrant the signal strength at each sample was calculated using the center-of-gravity (COG). Cross-correlation analysis of the COG signals was performed to calculate the delay between pairwise combinations of quadrants. The relationship in propagation across the quadrants was quantified and propagation speeds were calculated from the delays. MMG recordings were successfully processed from 25 subjects, and the average propagation speeds ranged from 1.3 to 9.5 cm/s, which is within the physiological range. Propagation was observed between both vertical and horizontal quadrants, confirming multidirectional propagation. After the multiple pairwise test (99% CI), significant differences in speed were observed between certain vertical or horizontal combinations and the crossed pair combinations. The number of segments containing contractile activity in any given quadrant pair with a detectable delay was significantly higher in the lower abdominal pairwise combination as compared to all others. The quadrant-based approach using MMG signals provided us with high spatial-temporal information on the uterine contractile activity and will help us in the future to optimize abdominal electromyographic (EMG) recordings that are practical in a clinical setting. PMID:26505624
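
    The delay-and-speed computation described above can be sketched with a standard cross-correlation lag estimate. The function below is an assumed simplification (one quadrant pair, uniform sampling); the separation distance and sampling rate are placeholders.

    ```python
    # Hedged sketch: propagation speed from the cross-correlation lag of two COG signals.
    import numpy as np

    def propagation_speed(cog_a, cog_b, fs, distance_cm):
        """cog_a, cog_b: center-of-gravity time series from two quadrants; fs: sampling
        rate in Hz; distance_cm: separation between the quadrant centers."""
        a = cog_a - np.mean(cog_a)
        b = cog_b - np.mean(cog_b)
        xcorr = np.correlate(a, b, mode="full")
        lag = np.argmax(xcorr) - (len(b) - 1)           # lag in samples; sign gives direction
        delay_s = abs(lag) / fs
        return distance_cm / delay_s if delay_s > 0 else float("inf")
    ```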

  8. Solid-State Multi-Sensor Array System for Real Time Imaging of Magnetic Fields and Ferrous Objects

    NASA Astrophysics Data System (ADS)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2008-02-01

    In this paper the development of a solid-state-sensor-based system for real-time imaging of magnetic fields and ferrous objects is described. The system comprises 1089 magneto-inductive solid-state sensors arranged in a 2D matrix of 33×33 rows and columns, equally spaced in order to cover an approximate area of 300 by 300 mm. The sensor array is located within a large current-carrying coil. Data are sampled from the sensors by several DSP control units and streamed to a host computer via a USB 2.0 interface, and the image is generated and displayed at a rate of 20 frames per minute. The development of the instrumentation has been complemented by extensive numerical modeling of field distribution patterns using boundary element methods. The system was originally intended for deployment in the non-destructive evaluation (NDE) of reinforced concrete. Nevertheless, the system is not only capable of producing real-time, live video images of a metal target embedded within any opaque medium, it also allows the real-time visualization and determination of the magnetic field distribution emitted by permanent magnets or by current-carrying geometries. Although this system was initially developed for the NDE arena, it could also have potential applications in many other fields, including medicine, security, manufacturing, quality assurance and design involving magnetic fields.

  9. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, Param H.; Hunt, Arlon J.

    1986-01-01

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after the alcogels are formed, of substituting a solvent, such as CO2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40 °C instead of at about 270 °C. The improved process provides increased yields of large-scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs as made in accordance with the improved process, can replace the air gap within a double-glazed window, for example, to provide a substantial reduction in heat transfer. The thus-formed transparent aerogel arrays may also be utilized, for example, in windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high-energy elementary particles or cosmic rays.

  10. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, P.H.; Hunt, A.J.

    1985-09-04

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after the alcogels are formed, of substituting a solvent, such as CO2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40 °C instead of at about 270 °C. The improved process provides increased yields of large-scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs as made in accordance with the improved process, can replace the air gap within a double-glazed window, for example, to provide a substantial reduction in heat transfer. The thus-formed transparent aerogel arrays may also be utilized, for example, in windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high-energy elementary particles or cosmic rays.

  11. Barrow real-time sea ice mass balance data: ingestion, processing, dissemination and archival of multi-sensor data

    NASA Astrophysics Data System (ADS)

    Grimes, J.; Mahoney, A. R.; Heinrichs, T. A.; Eicken, H.

    2012-12-01

    Sensor data can be highly variable in nature and also varied depending on the physical quantity being observed, the sensor hardware and the sampling parameters. The sea ice mass balance site (MBS) operated in Barrow by the University of Alaska Fairbanks (http://seaice.alaska.edu/gi/observatories/barrow_sealevel) is a multisensor platform consisting of a thermistor string, air and water temperature sensors, acoustic altimeters above and below the ice and a humidity sensor. Each sensor has a unique specification and configuration. The data from multiple sensors are combined to generate sea ice data products. For example, ice thickness is calculated from the positions of the upper and lower ice surfaces, which are determined using data from downward-looking and upward-looking acoustic altimeters above and below the ice, respectively. As a data clearinghouse, the Geographic Information Network of Alaska (GINA) processes real-time data from many sources, including the Barrow MBS. Doing so requires a system that is easy to use, yet also offers the flexibility to handle data from multisensor observing platforms. In the case of the Barrow MBS, the metadata system needs to accommodate the addition of new sensors and the retirement of old sensors from year to year, as well as instrument configuration changes caused by, for example, spring melt or inquisitive polar bears. We also require ease of use for both administrators and end users. Here we present the data and processing steps of a sensor data system powered by the NoSQL storage engine MongoDB. The system has been developed to ingest, process, disseminate and archive data from the Barrow MBS. Storing sensor data in a generalized format, from many different sources, is a challenging task, especially for traditional SQL databases with a set schema. MongoDB is a NoSQL (not only SQL) database that does not require a fixed schema. There are several advantages to using this model over the traditional relational database management system (RDBMS
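
    As an illustration of the schemaless ingestion the abstract argues for, the pymongo sketch below stores one multi-sensor sample and derives an ice-thickness product. The database name, field names and the thickness arithmetic are assumptions for illustration, not GINA's actual schema or processing code.

    ```python
    # Illustrative sketch only: ingesting one mass-balance-site sample into MongoDB.
    from datetime import datetime, timezone
    from pymongo import MongoClient

    samples = MongoClient("mongodb://localhost:27017")["barrow_mbs"]["samples"]

    samples.insert_one({
        "timestamp": datetime.now(timezone.utc),
        "air_temp_c": -12.4,
        "water_temp_c": -1.7,
        "surface_above_m": 0.32,        # upper ice surface from the downward-looking altimeter
        "surface_below_m": -1.05,       # lower ice surface from the upward-looking altimeter
        "thermistor_string_c": [-12.1, -9.4, -5.2, -1.8],
    })

    # Derived product: ice thickness as the distance between the two surfaces.
    doc = samples.find_one(sort=[("timestamp", -1)])
    ice_thickness_m = doc["surface_above_m"] - doc["surface_below_m"]
    ```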

  12. Array Signal Processing for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Veen, Alle Jan; Leshem, Amir; Boonstra, Albert Jan

    2004-06-01

    Radio astronomy forms an interesting application area for array signal processing techniques. Current synthesis imaging telescopes consist of a small number of identical dishes, which track a fixed patch in the sky and produce estimates of the time-varying spatial covariance matrix. The observations sometimes are distorted by interference, e.g., from radio, TV, radar or satellite transmissions. We describe some of the tools that array signal processing offers to filter out the interference, based on eigenvalue decompositions and factor analysis, which is a more general technique applicable to partially calibrated arrays. We consider detection of interference, spatial filtering techniques using projections, and discuss how a reference antenna pointed at the interferer can improve the performance. We also consider image formation and its relation to beamforming.
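
    The projection-based spatial filtering mentioned above can be sketched as follows: estimate the interference subspace from the dominant eigenvectors of the spatial covariance matrix and project the array data onto its orthogonal complement. This is a generic textbook version, assuming the number of interferers is known, not the specific estimators developed in the paper.

    ```python
    # Hedged sketch: eigenvector-based projection to null out strong interference.
    import numpy as np

    def project_out_interference(R, X, n_interferers=1):
        """R: (p, p) estimated spatial covariance; X: (p, n_samples) array snapshots."""
        eigvals, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
        U_i = eigvecs[:, -n_interferers:]                  # dominant (interference) subspace
        P = np.eye(R.shape[0]) - U_i @ U_i.conj().T        # projector onto its complement
        return P @ X, P @ R @ P.conj().T                   # filtered snapshots and covariance
    ```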

  13. Semiotic foundation for multisensor-multilook fusion

    NASA Astrophysics Data System (ADS)

    Myler, Harley R.

    1998-07-01

    This paper explores the application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative rather than quantitative methods. The term semiotic refers to signs, or signatory data, that encapsulate information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of the signs into meaningful interpretations of the information content of the source. The multisensor fusion problem, predicated on a semiotic system structure and incorporating semiotic analysis techniques, is examined, and the design of a multisensor system as an information fusion system is explored. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and how a semiotic system for multisensor fusion could be realized is outlined. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not yet been constructed.

  14. Photorefractive processing for large adaptive phased arrays.

    PubMed

    Weverka, R T; Wagner, K; Sarto, A

    1996-03-10

    An adaptive null-steering phased-array optical processor that utilizes a photorefractive crystal to time-integrate the adaptive weights and null out correlated jammers is described. This is a beam-steering processor in which the temporal waveform of the desired signal is known but the look direction is not. The processor computes the angle(s) of arrival of the desired signal and steers the array to look in that direction while rotating the nulls of the antenna pattern toward any narrow-band jammers that may be present. We have experimentally demonstrated a simplified version of this adaptive phased-array-radar processor that nulls out the narrow-band jammers by using feedback-correlation detection. In this processor it is assumed that we know a priori only that the signal is broadband and the jammers are narrow band. These are examples of a class of optical processors that use the angular selectivity of volume holograms to form the nulls and look directions in an adaptive phased-array-radar pattern and thereby harness the computational abilities of three-dimensional parallelism in the volume of photorefractive crystals. The development of this processing in volume holographic systems has led to a new algorithm for phased-array-radar processing that uses fewer tapped delay lines than the classic time-domain beamformer. The optical implementation of the new algorithm has the further advantage of utilizing a single photorefractive crystal to implement as many as a million adaptive weights, allowing the radar system to scale to large size with no increase in processing hardware.

  15. Intelligent multi-sensor integrations

    NASA Technical Reports Server (NTRS)

    Volz, Richard A.; Jain, Ramesh; Weymouth, Terry

    1989-01-01

    Growth in the intelligence of space systems requires the use and integration of data from multiple sensors. Generic tools are being developed for extracting and integrating information obtained from multiple sources. The full spectrum is addressed for issues ranging from data acquisition, to characterization of sensor data, to adaptive systems for utilizing the data. In particular, there are three major aspects to the project, multisensor processing, an adaptive approach to object recognition, and distributed sensor system integration.

  16. Image processing on MPP-like arrays

    SciTech Connect

    Coletti, N.B.

    1983-01-01

    The desirability and suitability of using very large arrays of processors such as the Massively Parallel Processor (MPP) for processing remotely sensed images is investigated. The dissertation can be broken into two areas. The first area is the mathematical analysis of emulating the bitonic sorting network on an array of processors. This sort is useful in histogramming images that have a very large number of pixel values (or gray levels). The optimal number of routing steps required to emulate an N = 2^k x 2^k element network on a 2^n x 2^n array (k <= n <= 7), provided each processor contains one element before and after every merge sequence, is proved to be 14√N - 4 log2(N) - 14. Several already existing emulations achieve this lower bound. The number of elements sorted dictates a particular sorting network, and hence the number of routing steps. It is established that the cardinality N = (3/4) x 2^(2n) elements uses the absolute minimum number of routing steps, 8√3·√N - 4 log2(N) - (20 - 4 log2(3)). An algorithm achieving this bound is presented. The second area covers the implementations of the image processing tasks. In particular, the histogramming of large numbers of gray levels, geometric distortion determination and its efficient correction, fast Fourier transforms, and statistical clustering are investigated.

  17. Compact multisensor laser scanning head for processing and monitoring microspot welding

    NASA Astrophysics Data System (ADS)

    Hafez, Moustapha; Julliard, Karin; Grossmann, Sylvain; Olivetta, Lino; Sidler, Thomas C.; Salathe, Rene-Paul; Schwob, Hans-Peter; Blom, Toon; Hoving, Willem

    2000-11-01

    In order to improve the reliability of micro-spot welding of metal parts in production, such as in electron guns for TV picture tubes, real-time information about the evolution of the welding process should be available to allow on-line modification of the laser parameters. Such information can be derived from a set of sensors that are mounted on a laser scanning head. Different sensors are used to monitor the optical fiber output power, to determine the evolution of temperature during the spot welding process, and to measure plasma emission and back-reflected laser light. A vision channel and a CCD camera are used to control the position of the laser spot on the parts to be processed. The compact scanning head is composed of a tip/tilt laser scanner, a collimating lens and a focusing lens. The scanner is fast steering, with a bandwidth of 700 Hz, and can tilt by ±3.5° with a repeatability better than 50 µrad. The settling time for maximum deflection is less than 10 ms. The scanning lens is a newly developed focusing lens designed to replace cumbersome commercial scanning lenses such as F-theta lenses, which have large volume, weight and price. This lens is based on the well-known Cooke triplet design, guarantees a constant spot shape over the entire scan surface and is especially well suited for high-power beam delivery. The scan field achieved by the system is limited to 25 mm x 25 mm. The laser used for this application is a pulsed Nd:YAG laser delivered through an optical fiber to the optical head; however, the system can be adapted to different types of lasers. Laser micro-spot welding on a copper substrate has been performed in the frame of the Brite-Euram project MAIL. Smaller tolerances (a factor of 2 less) on the spot diameters were obtained in the case of sensor-controlled operation compared to the case where sensor control is not used.

  18. Multisensor surveillance data augmentation and prediction with optical multipath signal processing

    NASA Astrophysics Data System (ADS)

    Bush, G. T., III

    1980-12-01

    The spatial characteristics of an oil spill on the high seas are examined in the interest of determining whether linear-shift-invariant data processing implemented on an optical computer would be a useful tool in analyzing spill behavior. Simulations were performed on a digital computer using data obtained from a 25,000 gallon spill of soy bean oil in the open ocean. Marked changes occurred in the observed spatial frequencies when the oil spill was encountered. An optical detector may readily be developed to sound an alarm automatically when this happens. The average extent of oil spread between sequential observations was quantified by a simulation of non-holographic optical computation. Because a zero crossover was available in this computation, it may be possible to construct a system to measure automatically the amount of spread. Oil images were subjected to deconvolutional filtering to reveal the force field which acted upon the oil to cause spreading. Some features of spill-size prediction were observed. Calculations based on two sequential photos produced an image which exhibited characteristics of the third photo in that sequence.

  19. Gallium arsenide processing for gate array logic

    NASA Technical Reports Server (NTRS)

    Cole, Eric D.

    1989-01-01

    The development of a reliable and reproducible GaAs process was initiated for applications in gate array logic. Gallium arsenide is an extremely important material for high-speed electronic applications in both digital and analog circuits: its electron mobility is 3 to 5 times that of silicon, which allows faster switching times for devices fabricated with it. Unfortunately, GaAs is an extremely difficult material to process compared with silicon, and since it contains arsenic, GaAs can be quite dangerous (toxic), especially during some heating steps. The first stage of the research was directed at developing a simple process to produce GaAs MESFETs. The MESFET (MEtal Semiconductor Field Effect Transistor) is the most useful, practical and simple active device which can be fabricated in GaAs. It utilizes ohmic source and drain contacts separated by a Schottky gate. The gate width is typically a few microns. Several process steps were required to produce a good working device, including ion implantation, photolithography, thermal annealing, and metal deposition. The process was designed to reduce the total number of steps to a minimum so as to reduce possible errors. The first run produced no good devices. The problem occurred during an aluminum etch step while defining the gate contacts. It was found that the chemical etchant attacked the GaAs, causing trenching and subsequent severing of the active gate region from the rest of the device, so all devices appeared as open circuits. This problem is being corrected, and since it was the last step in the process, the correction should be successful. The second planned stage involves the circuit assembly of the discrete MESFETs into logic gates for test and analysis. Finally, the third stage is to incorporate the designed process with the tested circuit in a layout that would produce the gate array as a GaAs integrated circuit.

  20. Array signal processing in the NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we will describe the benefits of arraying and past as well as expected future use of this application. The signal processing aspects of array system are described. Field measurements via actual tracking spacecraft are also presented.

  1. Northeast Artificial Intelligence Consortium (NAIC). Volume 16. Intelligent signal-processing techniques for multi-sensor surveillance systems. Final report, Sep 84-Dec 89

    SciTech Connect

    Rhody, H.E.; Gayvert, R.T.

    1990-12-01

    The Northeast Artificial Intelligence Consortium (NAIC) was created by the Air Force Systems Command, Rome Air Development Center, and Office of Scientific Research. Its purpose was to conduct pertinent research in artificial intelligence and to perform activities ancillary to this research. This report describes progress during the existence of the NAIC on the technical research tasks undertaken at the member universities. The topics covered in general are: versatile expert system for equipment maintenance, distributed AI for communications system control, automatic photointerpretation, time-oriented problem solving, speech understanding systems, knowledge based reasoning and planning, and a knowledge acquisition, assistance, and explanation system. The specific topic for this volume is intelligent signal processing techniques for multi-sensor surveillance systems.

  2. Research on a Defects Detection Method in the Ferrite Phase Shifter Cementing Process Based on a Multi-Sensor Prognostic and Health Management (PHM) System

    PubMed Central

    Wan, Bo; Fu, Guicui; Li, Yanruoyue; Zhao, Youhu

    2016-01-01

    The cementing step in manufacturing ferrite phase shifters suffers from defects in which cementing strength is insufficient and fractures appear. A method for detecting these defects was studied using multi-sensor Prognostic and Health Management (PHM) theory. The causes of the defects are analyzed in this paper, the key process parameters were determined, and Differential Scanning Calorimetry (DSC) tests during the cure of the cementing resin were carried out. In addition, to obtain data on how cementing strength changes, multiple groups of cementing process tests with different key process parameters were designed and conducted. A relational model of cementing strength as a function of cure temperature, time, and pressure was established by combining the DSC and process-test data with the Avrami formula. Through a sensitivity analysis of the three process parameters, the on-line detection decision criterion and the process parameters with the strongest influence on cementing strength were determined. A PHM system with multiple temperature and pressure sensors was then established, enabling on-line detection, diagnosis, and control of ferrite phase shifter cementing defects. Subsequent production verified that the on-line detection system improved the reliability of the cementing process and reduced the incidence of insufficient-cementing-strength defects. PMID:27517935
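
    The abstract does not report the fitted coefficients; a minimal sketch of an Avrami-type degree-of-cure model with an Arrhenius rate constant (all constants below are hypothetical placeholders, not the paper's values) is:

      import numpy as np

      R_GAS = 8.314  # J/(mol*K)

      def avrami_conversion(t_min, temp_k, n=2.0, k0=4.0e5, ea=5.0e4):
          # Avrami relation X = 1 - exp(-(k t)^n) with k(T) = k0 exp(-Ea / (R T)).
          k = k0 * np.exp(-ea / (R_GAS * temp_k))
          return 1.0 - np.exp(-(k * t_min) ** n)

      # Degree of cure after 30, 60 and 90 minutes at 353 K (80 degC).
      for minutes in (30.0, 60.0, 90.0):
          print(minutes, round(float(avrami_conversion(minutes, 353.0)), 3))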

  5. Multisensor image fusion techniques in remote sensing

    NASA Astrophysics Data System (ADS)

    Ehlers, Manfred

    Current and future remote sensing programs such as Landsat, SPOT, MOS, ERS, JERS, and the space platform's Earth Observing System (Eos) are based on a variety of imaging sensors that will provide timely and repetitive multisensor earth observation data on a global scale. Visible, infrared and microwave images of high spatial and spectral resolution will eventually be available for all parts of the earth. It is essential that efficient processing techniques be developed to cope with the large multisensor data volumes. This paper discusses data fusion techniques that have proved successful for synergistic merging of SPOT HRV, Landsat TM and SIR-B images. It is demonstrated that these techniques can be used to improve rectification accuracies, to depict greater cartographic detail, and to enhance spatial resolution in multisensor image data sets.
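
    As a generic illustration of the resolution-merging idea (a simple Brovey-style merge, not necessarily the technique evaluated in the paper; co-registered and upsampled band data are assumed), one could write:

      import numpy as np

      def brovey_fuse(ms, pan, eps=1e-6):
          # Scale each co-registered, upsampled multispectral band by the ratio of
          # the high-resolution panchromatic band to the summed band intensity.
          intensity = ms.sum(axis=0) + eps
          return ms * (pan / intensity)[None, :, :]

      rng = np.random.default_rng(1)
      ms = rng.random((3, 64, 64))   # toy 3-band multispectral stack
      pan = rng.random((64, 64))     # toy higher-resolution panchromatic band
      fused = brovey_fuse(ms, pan)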

  6. Hierarchical Robot Control In A Multisensor Environment

    NASA Astrophysics Data System (ADS)

    Bhanu, Bir; Thune, Nils; Lee, Jih Kun; Thune, Mari

    1987-03-01

    Automatic recognition, inspection, manipulation and assembly of objects will be a common denominator in most of tomorrow's highly automated factories. These tasks will be handled by intelligent computer controlled robots with multisensor capabilities which contribute to desired flexibility and adaptability. The control of a robot in such a multisensor environment becomes of crucial importance as the complexity of the problem grows exponentially with the number of sensors, tasks, commands and objects. In this paper we present an approach which uses CAD (Computer-Aided Design) based geometric and functional models of objects together with action oriented neuroschemas to recognize and manipulate objects by a robot in a multisensor environment. The hierarchical robot control system is being implemented on a BBN Butterfly multiprocessor. Index terms: CAD, Hierarchical Control, Hypothesis Generation and Verification, Parallel Processing, Schemas

  7. An introduction to multisensor data fusion

    SciTech Connect

    Hall, D.L.; Llinas, J.

    1997-01-01

    Multisensor data fusion is an emerging technology applied to Department of Defense (DoD) areas such as automated target recognition, battlefield surveillance, and guidance and control of autonomous vehicles, and to non-DoD applications such as monitoring of complex machinery, medical diagnosis, and smart buildings. Techniques for multisensor data fusion are drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation, and other areas. This paper provides a tutorial on data fusion, introducing data fusion applications, process models, and identification of applicable techniques. Comments are made on the state-of-the-art in data fusion.

  8. Square Kilometre Array Science Data Processing

    NASA Astrophysics Data System (ADS)

    Nikolic, Bojan; SDP Consortium, SKA

    2014-04-01

    The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further derived data products, archiving and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of SDP are: - Identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput - Managing efficiently the high internal data flow rates - A conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases - System management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system. I will also present possible initial architectures for the SDP system that attempt to address these and other challenges.
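
    To make the quoted data rates concrete, a back-of-envelope visibility-rate estimate is easy to write down; the telescope parameters below are illustrative assumptions, not SKA1 design figures:

      # N antennas give N*(N-1)/2 baselines; each baseline produces
      # (channels * polarisation products) complex visibilities per dump.
      n_ant = 512          # hypothetical station count
      n_chan = 65536       # hypothetical frequency channels
      n_pol = 4            # polarisation products
      dump_s = 0.1         # integration (dump) time, seconds
      bytes_per_vis = 8    # complex64

      baselines = n_ant * (n_ant - 1) // 2
      rate = baselines * n_chan * n_pol * bytes_per_vis / dump_s
      print(f"{rate / 1e12:.2f} TB/s")   # ~2.7 TB/s for these assumptions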

  9. Multisensor Investigation of Deep Convection

    NASA Astrophysics Data System (ADS)

    Houze, R.; Yuan, J.; Barnes, H. C.; Brodzik, S. R.

    2012-12-01

    The array of sensors for studying cloud systems from space provides the opportunity to globally map the occurrence of various types of deep convective cloud systems more precisely than ever before. The revolutionary TRMM satellite has not only determined rainfall from space but also identified the structures of storms producing the rainfall and how the different types of convective structures relate to features of the global circulation. The multiple sensors of the A-Train constellation have added more capacity to globally map convective cloud system types. By simultaneously using Aqua's MODIS 11-micron brightness temperature sensor to map cloud-top size and coldness, Aqua's AMSR-E passive microwave to detect rainfall, and CloudSat's cloud radar observations to see the internal structure of the nonprecipitating anvil clouds extending laterally from the precipitating cores of mesoscale convective systems (MCSs), we have objectively identified and mapped different types of MCSs. This multisensor analysis has determined the degrees to which MCSs vary according to size, amount of anvil cloud, and whether or not they occur separately or in merged complexes. Using these multisensor-derived quantities, we have established the patterns in which tropical MCSs occur over land, ocean, or the maritime continent. Ongoing work is integrating more sensors and other innovative global datasets into the analysis of A-Train data to further knowledge of MCSs and their variability over the Earth. Global lightning data are being integrated with the A-Train data to better understand convective intensity in different types of MCSs. Environments of the MCSs identified by multisensor A-Train analysis are being further analyzed using AIRS temperature profiles and MODIS and CALIPSO aerosol fields to better document the influence of environmental properties on the different types of mesoscale system. The integration of aerosol loading into the global analysis of the patterns of occurrence of

  10. Implementation and use of systolic array processes

    SciTech Connect

    Kung, H.T.

    1983-01-01

    Major efforts are now underway to use systolic array processors in large, real-life applications. The author examines various implementation issues and alternatives, the latter from the viewpoints of flexibility and interconnection topologies. He then identifies some work that is essential to the eventual wide use of systolic array processors, such as the development of building blocks, system support and suitable algorithms. 24 references.

  11. Multisensor system for mine detection

    NASA Astrophysics Data System (ADS)

    Duvoisin, Herbert A., III; Steinway, William J.; Tomassi, Mark S.; Thomas, James E.; Morris, Carl A.; Kahn, Barry A.; Stern, Peter H.; Krywick, Scott; Johnson, Kevin; Dennis, Kevin; Betts, George; Blood, Ben; Simoneaux, Wanda; Miller, John L.

    1998-10-01

    A multi-sensor approach to buried object discrimination has been developed by Coleman Research Corporation (CRC) as a practical successor to currently prevalent metal detectors. The CRC multi-sensor unit integrates with and complements standard metal detectors to enable the detection of low-metallic and non-metallic anti-tank and anti-personnel mines as well as the older metallic-jacketed mines. The added sensors include Ground Penetrating Radar (GPR) and Infrared (IR). The GPR consists of a lightweight (less than 1 lb) snap-on antenna unit, a belt-attached electronics unit (less than 5 lb) and batteries. The IR consists of a lightweight (less than 3 lb) head-mounted camera, a heads-up virtual display, and a belt-attached processing unit (Figure 1.1). The outputs from Automatic Target Recognition algorithms provide detection of metallic and non-metallic mines in real time on the IR display and as an audio alert from the GPR and MD.

  12. Regular Arrays of QDs by Solution Processing

    NASA Astrophysics Data System (ADS)

    Oliva, Brittany L.

    2011-12-01

    Hydrophilic silicon and germanium quantum dots were synthesized by a "bottom-up" method utilizing micelles to control particle size. Liquid phase deposition of silica on these quantum dots was successful with and without DTAB (dodecyltrimethylammonium bromide) as a surfactant to yield uniform spheres. Coating the quantum dots in the presence of DTAB allowed for better size control. The silica coated quantum dots were then arrayed in three dimensions using a vertical deposition technique on quartz slides or ITO glass. UV-vis absorbance, AFM, SEM, and TEM images were used to analyze the particles at every stage. The photoconductivity of the arrays was tested, and the cells were found to be conductive in areas.

  13. The application of systolic arrays to radar signal processing

    NASA Astrophysics Data System (ADS)

    Spearman, R.; Spracklen, C. T.; Miles, J. H.

    The design of a systolic array processor radar system is examined, and its performance is compared to that of a conventional radar processor. It is shown how systolic arrays can be used to replace the boards of high speed logic normally associated with a high performance radar and to implement all of the normal processing functions associated with such a system. Multifunctional systolic arrays are presented that have the flexibility associated with a general purpose digital processor but the speed associated with fixed function logic arrays.

  14. Fabrication of Nanohole Array via Nanodot Array Using Simple Self-Assembly Process of Diblock Copolymer

    NASA Astrophysics Data System (ADS)

    Matsuyama, Tsuyoshi; Kawata, Yoshimasa

    2007-06-01

    We present a simple self-assembly process for fabricating a nanohole array via a nanodot array on a glass substrate by dripping ethanol onto the nanodot array. It is found that well-aligned arrays of nanoholes as well as nanodots are formed on the whole surface of the glass. A dot is transformed into a hole, and the alignment of the nanodots strongly reflects that of the nanoholes. We find that the change in the depth of holes agrees well with the change in the surface energy with the ethanol concentration in the aqueous solution. We believe that the interfacial energy between the nanodots and the dripped ethanol causes the transformation from nanodots into nanoholes. The nanohole arrays are directly applicable to molds for nanopatterned media used in high-density near-field optical data storage. The bit data can be stored and read out using probes with small apertures.

  15. Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T; Gibbons, S J; Ringdal, F; Harris, D B

    2007-02-09

    The principal objective of this two-year study is to develop and test a new advanced, automatic approach to seismic detection/location using array processing. We address a strategy to obtain significantly improved precision in the location of low-magnitude events compared with current fully-automatic approaches, combined with a low false alarm rate. We have developed and evaluated a prototype automatic system which uses as a basis regional array processing with fixed, carefully calibrated, site-specific parameters in conjunction with improved automatic phase onset time estimation. We have in parallel developed tools for Matched Field Processing for optimized detection and source-region identification of seismic signals. This narrow-band procedure aims to mitigate some of the causes of difficulty encountered using the standard array processing system, specifically complicated source-time histories of seismic events and shortcomings in the plane-wave approximation for seismic phase arrivals at regional arrays.
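
    For readers unfamiliar with the array-processing vocabulary used here, a minimal plane-wave delay-and-sum beam for a small array (hypothetical station geometry and trial slowness vector) can be formed as follows; this is generic textbook beamforming, not the authors' calibrated pipeline:

      import numpy as np

      def delay_and_sum(traces, offsets_km, slowness_s_per_km, fs):
          # Advance each trace by the plane-wave moveout (slowness . offset) using
          # a frequency-domain phase shift, then average across stations.
          nsta, nsamp = traces.shape
          freqs = np.fft.rfftfreq(nsamp, d=1.0 / fs)
          beam = np.zeros(nsamp)
          for trace, r in zip(traces, offsets_km):
              tau = float(np.dot(slowness_s_per_km, r))
              spec = np.fft.rfft(trace) * np.exp(2j * np.pi * freqs * tau)
              beam += np.fft.irfft(spec, nsamp)
          return beam / nsta

      offsets = np.array([[0.0, 0.0], [1.5, 0.0], [0.0, 1.5], [1.5, 1.5]])
      slowness = np.array([0.06, 0.02])                 # s/km, trial value
      traces = np.random.default_rng(2).standard_normal((4, 2000))
      beam = delay_and_sum(traces, offsets, slowness, fs=40.0)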

  16. Study Of Adaptive-Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Satorius, Edgar H.; Griffiths, Lloyd

    1990-01-01

    Report describes study of adaptive signal-processing techniques for suppression of mutual satellite interference in a mobile (on ground)/satellite communication system. Presents analyses and numerical simulations of the performance of two approaches to signal processing for suppression of interference: one approach is known as "adaptive sidelobe canceling"; the second is called "adaptive temporal processing".
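
    The report itself is not reproduced here, but the first of the two approaches can be illustrated with a generic complex LMS sidelobe canceller (main channel minus adaptively weighted auxiliary channels); the channel data and step size below are hypothetical:

      import numpy as np

      def lms_sidelobe_canceller(main, aux, mu=1e-3):
          # Adaptively subtract a weighted combination of auxiliary-antenna samples
          # from the main channel so that correlated interference is cancelled.
          naux, nsamp = aux.shape
          w = np.zeros(naux, dtype=complex)
          out = np.zeros(nsamp, dtype=complex)
          for k in range(nsamp):
              x = aux[:, k]
              out[k] = main[k] - np.vdot(w, x)   # error = cleaned output sample
              w += mu * np.conj(out[k]) * x      # complex LMS weight update
          return out, w

      rng = np.random.default_rng(3)
      jammer = rng.standard_normal(5000)
      main = 0.1 * rng.standard_normal(5000) + jammer          # signal + interference
      aux = (0.8 * jammer + 0.05 * rng.standard_normal(5000))[None, :]
      cleaned, w = lms_sidelobe_canceller(main.astype(complex), aux.astype(complex))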

  17. Evaluation of the warm cloud microphysical processes in global models using the CloudSat/A-Train multi-sensor satellite observations

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Bodas-Salcedo, A.; Golaz, J.; Yokohata, T.; Wang, M.; Stephens, G. L.

    2012-12-01

    Warm cloud microphysical processes in global models are evaluated using the CloudSat and A-Train multi-sensor satellite observations to characterize the behaviors of microphysics parameterizations and to identify the fundamental model biases in representing the processes. Methodologies recently developed to analyze the CloudSat and A-Train satellite observations are employed to construct the statistics that dictate process-level signatures of the cloud-to-rain water conversion. The methodologies include the analyses of (i) the probability of precipitation as a function of liquid water path describing how the water conversion process occurs, (ii) the interrelationships between the radar reflectivity and the particle size as a proxy for the condensation and coalescence processes, and (iii) the vertical microphysical structures depicted by the radar reflectivity profiles re-scaled as a function of the cloud optical depth. We apply the methodologies to both the satellite observations and the global model results to compare the statistics among different models as well as between the models and the observations. The models studied include the state-of-the-art global climate models (i.e. the UKMO, GFDL, and MIROC models) and a multi-scale modeling framework (MMF) model (i.e. the PNNL-MMF model), which are all implemented with the CFMIP Observation Simulator Package (COSP) satellite signal simulators for appropriate comparisons to the satellite observations. Given the capability of the methodologies to depict the process-level characteristics of the warm rain formation, their applications to the COSP-based model results reveal how the warm rain processes are represented in the models. Their comparisons to the corresponding statistics from satellite observations then characterize the model behaviors against the observations in terms of the liquid cloud microphysical processes. A possible way of understanding and reducing the model biases is also discussed with the aid of a
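
    Statistic (i) above is simple to reproduce on any paired liquid-water-path / radar dataset; a synthetic-data sketch (the threshold and distributions are arbitrary illustrations) is:

      import numpy as np

      def precip_probability_by_lwp(lwp, max_dbz, bin_edges, dbz_threshold=0.0):
          # Fraction of profiles whose column-maximum reflectivity exceeds the
          # threshold, evaluated within each liquid-water-path bin.
          precipitating = max_dbz > dbz_threshold
          idx = np.digitize(lwp, bin_edges)
          return np.array([precipitating[idx == i].mean() if np.any(idx == i) else np.nan
                           for i in range(1, len(bin_edges))])

      rng = np.random.default_rng(4)
      lwp = rng.gamma(2.0, 80.0, size=5000)                    # g m^-2
      max_dbz = -15.0 + 0.05 * lwp + 5.0 * rng.standard_normal(5000)
      print(precip_probability_by_lwp(lwp, max_dbz, np.arange(0.0, 601.0, 100.0)))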

  18. Multi-Sensor Distributive On-Line Processing, Visualization, and Analysis Infrastructure for an Agricultural Information System at the NASA Goddard Earth Sciences DAAC

    NASA Technical Reports Server (NTRS)

    Teng, William; Berrick, Steve; Leptuokh, Gregory; Liu, Zhong; Rui, Hualan; Pham, Long; Shen, Suhung; Zhu, Tong

    2004-01-01

    The Goddard Space Flight Center Earth Sciences Data and Information Services Center (GES DISC) Distributed Active Archive Center (DAAC) is developing an Agricultural Information System (AIS), evolved from an existing TRMM On-line Visualization and Analysis System, to provide precipitation and other satellite data products and services. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as that of the U.N. World Food Program. The ability to use the raw data stored in the GES DAAC archives is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding is a time-consuming process and not a productive investment of the user's time. This is an especially difficult challenge when users need to deal with multi-sensor data that usually are of different structures and resolutions. The AIS has taken a major step towards meeting this challenge by incorporating an underlying infrastructure, called the GES-DISC Interactive Online Visualization and Analysis Infrastructure or "Giovanni," that integrates various components to support web interfaces that allow users to perform interactive analysis on-line without downloading any data. Several instances of the Giovanni-based interface have been or are being created to serve users of TRMM precipitation, MODIS aerosol, and SeaWiFS ocean color data, as well as agricultural applications users. Giovanni-based interfaces are simple to use but powerful. The user selects geophysical parameters, the area of interest, and the time period; the system then generates an output on screen in a matter of seconds.

  19. Sonar array processing borrows from geophysics

    SciTech Connect

    Chen, K.

    1989-09-01

    The author reports a recent advance in sonar signal processing that has potential military application. It improves signal extraction by modifying a technique devised by a geophysicist. Sonar signal processing is used to track submarine and surface targets, such as aircraft carriers, oil tankers, and, in commercial applications, schools of fish or sunken treasure. Similar signal-processing techniques help radio astronomers track galaxies, physicians see images of the body interior, and geophysicists map the ocean floor or find oil. This hybrid technique, applied in an experimental system, can help resolve strong signals as well as weak ones in the same step.

  20. A smart multisensor approach to assist blind people in specific urban navigation tasks.

    PubMed

    Ando, B

    2008-12-01

    Visually impaired people are often discouraged from using electronic aids due to complexity of operation, the large amount of training required, the nonoptimized degree of information provided to the user, and high cost. In this paper, a new multisensor architecture is discussed that would help blind people to perform urban mobility tasks. The device is based on a multisensor strategy and adopts smart signal processing.

  1. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  2. Removing Background Noise with Phased Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by OptiNav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
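
    Conceptually, cross-spectral-matrix subtraction removes a background-only matrix before beamforming; a toy narrowband version (conventional beamforming rather than the Functional Beamforming used in the test, with made-up data and steering vectors) is:

      import numpy as np

      def csm(snapshots):
          # Cross-spectral matrix from an (nsnap, nmic) block of narrowband data.
          return snapshots.conj().T @ snapshots / snapshots.shape[0]

      def beamform_power(csm_total, csm_background, steering):
          # Conventional beamformer power w^H (C_total - C_background) w for each
          # steering vector (rows of `steering`).
          c = csm_total - csm_background
          return np.real(np.einsum('km,mn,kn->k', steering.conj(), c, steering))

      rng = np.random.default_rng(5)
      noise = rng.standard_normal((200, 8)) + 1j * rng.standard_normal((200, 8))
      source = 0.5 * (rng.standard_normal((200, 1)) + 1j * rng.standard_normal((200, 1)))
      with_source = noise + source * np.ones((1, 8))      # source in phase at all mics
      steer = np.ones((1, 8)) / np.sqrt(8)                # look direction: broadside
      print(beamform_power(csm(with_source), csm(noise), steer))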

  3. 50 years of progress in microphone arrays for speech processing

    NASA Astrophysics Data System (ADS)

    Elko, Gary W.; Frisk, George V.

    2004-10-01

    In the early 1980s, Jim Flanagan had a dream of covering the walls of a room with microphones. He occasionally referred to this concept as acoustic wallpaper. Being a new graduate in the field of acoustics and signal processing, it was fortunate that Bell Labs was looking for someone to investigate this area of microphone arrays for telecommunication. The job interview was exciting, with all of the big names in speech signal processing and acoustics sitting in the audience, many of whom were the authors of books and articles that were seminal contributions to the fields of acoustics and signal processing. If there ever was an opportunity of a lifetime, this was it. Fortunately, some of the work had already begun, and Sessler and West had already laid the groundwork for directional electret microphones. This talk will describe some of the very early work done at Bell Labs on microphone arrays and reflect on some of the many systems, from large 400-element arrays, to small two-microphone arrays. These microphone array systems were built under Jim Flanagan's leadership in an attempt to realize his vision of seamless hands-free speech communication between people and the communication of people with machines.

  4. Generic nano-imprint process for fabrication of nanowire arrays.

    PubMed

    Pierret, Aurélie; Hocevar, Moïra; Diedenhofen, Silke L; Algra, Rienk E; Vlieg, E; Timmering, Eugene C; Verschuuren, Marc A; Immink, George W G; Verheijen, Marcel A; Bakkers, Erik P A M

    2010-02-10

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning of the samples before growth with piranha solution in combination with a thermal anneal at 550 degrees C for InP and 700 degrees C for GaP results in uniform nanowire arrays with 1% variation in nanowire length, and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  5. Generic nano-imprint process for fabrication of nanowire arrays

    NASA Astrophysics Data System (ADS)

    Pierret, Aurélie; Hocevar, Moïra; Diedenhofen, Silke L.; Algra, Rienk E.; Vlieg, E.; Timmering, Eugene C.; Verschuuren, Marc A.; Immink, George W. G.; Verheijen, Marcel A.; Bakkers, Erik P. A. M.

    2010-02-01

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning of the samples before growth with piranha solution in combination with a thermal anneal at 550 °C for InP and 700 °C for GaP results in uniform nanowire arrays with 1% variation in nanowire length, and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  6. Parallel Processing of Large Scale Microphone Arrays for Sound Capture

    NASA Astrophysics Data System (ADS)

    Jan, Ea-Ee.

    1995-01-01

    Performance of microphone sound pick up is degraded by deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise. The degradation becomes more prominent in a teleconferencing environment in which the microphone is positioned far away from the speaker. Besides, the ideal teleconference should feel as easy and natural as face-to-face communication with another person. This suggests hands-free sound capture with no tether or encumbrance by hand-held or body-worn sound equipment. Microphone arrays for this application represent an appropriate approach. This research develops new microphone array and signal processing techniques for high quality hands-free sound capture in noisy, reverberant enclosures. The new techniques combine matched-filtering of individual sensors and parallel processing to provide acute spatial volume selectivity which is capable of mitigating the deleterious effects of noise interference and multipath distortion. The new method outperforms traditional delay-and-sum beamformers which provide only directional spatial selectivity. The research additionally explores truncated matched-filtering and random distribution of transducers to reduce complexity and improve sound capture quality. All designs are first established by computer simulation of array performance in reverberant enclosures. The simulation is achieved by a room model which can efficiently calculate the acoustic multipath in a rectangular enclosure up to a prescribed order of images. It also calculates the incident angle of the arriving signal. Experimental arrays were constructed and their performance was measured in real rooms. Real room data were collected in a hard-walled laboratory and a controllable variable acoustics enclosure of similar size, approximately 6 x 6 x 3 m. An extensive speech database was also collected in these two enclosures for future research on microphone arrays. The simulation results are shown to be
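
    The matched-filter-and-sum idea can be sketched with synthetic, pure-delay room responses (a drastic simplification of real reverberant impulse responses):

      import numpy as np

      def matched_filter_sum(mics, impulse_responses):
          # Filter each microphone with the time-reversed talker-to-microphone
          # impulse response, then sum across the array.
          outputs = [np.convolve(x, h[::-1]) for x, h in zip(mics, impulse_responses)]
          return np.sum(outputs, axis=0)

      rng = np.random.default_rng(6)
      src = rng.standard_normal(1000)
      # Equal-length, pure-delay "room" responses (delays of 3 and 7 samples).
      hs = [np.r_[np.zeros(d), 1.0, np.zeros(29 - d)] for d in (3, 7)]
      mics = [np.convolve(src, h)[:1000] for h in hs]
      output = matched_filter_sum(mics, hs)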

  7. True-Time-Delay Adaptive Array Processing Using Photorefractive Crystals

    NASA Astrophysics Data System (ADS)

    Kriehn, G. R.; Wagner, K.

    Radio frequency (RF) signal processing has proven to be a fertile application area for photorefractive-based optical processing techniques. This is due to a photorefractive material's capability to record gratings and diffract off these gratings with optically modulated beams that contain a wide RF bandwidth; applications include the bias-free time-integrating correlator [1], adaptive signal processing, and jammer excision [2, 3, 4]. Photorefractive processing of signals from RF antenna arrays is especially appropriate because of the massive parallelism that is readily achievable in a photorefractive crystal (in which many resolvable beams can be incident on a single crystal simultaneously, each coming from an optical modulator driven by a separate RF antenna element), and because a number of approaches for adaptive array processing using photorefractive crystals have been successfully investigated [5, 6]. In these types of applications, the adaptive weight coefficients are represented by the amplitude and phase of the holographic gratings, and many millions of such adaptive weights can be multiplexed within the volume of a photorefractive crystal. RF-modulated optical signals from each array element are diffracted from the adaptively recorded photorefractive gratings (which can be multiplexed either angularly or spatially), and are then coherently combined with the appropriate amplitude weights and phase shifts to effectively steer the angular receptivity pattern of the antenna array toward the desired arriving signal. Likewise, the antenna nulls can also be rotated toward unwanted narrowband jammers for extinction, thereby optimizing the signal-to-interference-plus-noise ratio.

  8. Change detection in very high resolution multisensor optical images

    NASA Astrophysics Data System (ADS)

    Solano Correa, Yady T.; Bovolo, Francesca; Bruzzone, Lorenzo

    2014-10-01

    This work aims at developing an approach to the detection of changes in multisensor multitemporal VHR optical images. The main steps of the proposed method are: i) multisensor data homogenization; and ii) change detection in multisensor multitemporal VHR optical images. The proposed approach takes advantage of the conversion to physical quantities suggested by Pacifici et al.1, the framework for the design of systems for change detection in VHR images presented by Bruzzone and Bovolo2, and the framework for unsupervised change detection presented by Bovolo and Bruzzone3. Multisensor data homogenization is achieved during pre-processing by taking into account differences in both radiometric and geometric dimensions. Change detection, in turn, is approached by extracting features from the multisensor images such that they are comparable (at a given level of abstraction) even if extracted from images acquired by different sensors. To illustrate the results, a data set made up of QuickBird and WorldView-2 images, acquired in 2006 and 2010 respectively over an area located in the Trentino region of Italy, was used. However, the proposed approach is expected to be exportable to multitemporal images coming from passive sensors other than the two mentioned above. The experimental results obtained on the QuickBird and WorldView-2 image pair are accurate, thus opening the way to further experiments on multitemporal images acquired by other sensors.
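
    The abstract does not list the extracted features; as a generic illustration of unsupervised change detection on two co-registered, radiometrically comparable feature stacks, a change-vector-magnitude threshold can be computed as:

      import numpy as np

      def change_map(features_t1, features_t2, threshold=None):
          # Per-pixel magnitude of the change vector between two (bands, H, W)
          # feature stacks; the default threshold is mean + 2 standard deviations.
          magnitude = np.sqrt(((features_t2 - features_t1) ** 2).sum(axis=0))
          if threshold is None:
              threshold = magnitude.mean() + 2.0 * magnitude.std()
          return magnitude, magnitude > threshold

      rng = np.random.default_rng(7)
      t1 = rng.random((4, 128, 128))
      t2 = t1.copy()
      t2[:, 40:60, 40:60] += 0.5          # simulate a changed patch
      magnitude, changed = change_map(t1, t2)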

  9. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea
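
    The incoherent approach amounts to beamforming smoothed power envelopes rather than raw waveforms; a much-simplified sketch (moving-average envelopes, integer-sample delays, hypothetical geometry) is:

      import numpy as np

      def power_envelope(trace, win=25):
          # Crude envelope: moving average of the squared trace.
          return np.convolve(trace ** 2, np.ones(win) / win, mode='same')

      def incoherent_beam(traces, offsets_km, slowness_s_per_km, fs):
          # Delay-and-sum of station envelopes for one trial slowness vector;
          # envelopes need only align to within a few seconds, not a wave period.
          shifted = []
          for trace, r in zip(traces, offsets_km):
              shift = int(round(float(np.dot(slowness_s_per_km, r)) * fs))
              shifted.append(np.roll(power_envelope(trace), shift))
          return np.mean(shifted, axis=0)

      traces = np.random.default_rng(8).standard_normal((4, 4000))
      offsets = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      beam = incoherent_beam(traces, offsets, np.array([0.08, 0.03]), fs=40.0)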

  10. Multisensor configurations for early sniper detection

    NASA Astrophysics Data System (ADS)

    Lindgren, D.; Bank, D.; Carlsson, L.; Dulski, R.; Duval, Y.; Fournier, G.; Grasser, R.; Habberstad, H.; Jacquelard, C.; Kastek, M.; Otterlei, R.; Piau, G.-P.; Pierre, F.; Renhorn, I.; Sjöqvist, L.; Steinvall, O.; Trzaskawka, P.

    2011-11-01

    This contribution reports some of the fusion results from the EDA SNIPOD project, where different multisensor configurations for sniper detection and localization have been studied. A project aim has been to cover the whole time line from sniper transport and establishment to shot. To do so, different optical sensors with and without laser illumination have been tested, as well as acoustic arrays and solid state projectile radar. A sensor fusion node collects detections and background statistics from all sensors and employs hypothesis testing and multisensor estimation programs to produce unified and reliable sniper alarms and accurate sniper localizations. Operator interfaces that connect to the fusion node should be able to support both sniper countermeasures and the guidance of personnel to safety. Although the integrated platform has not been actually built, sensors have been evaluated at common field trials with military ammunitions in the caliber range 5.56 to 12.7 mm, and at sniper distances up to 900 m. It is concluded that integrating complementary sensors for pre- and postshot sniper detection in a common system with automatic detection and fusion will give superior performance, compared to stand alone sensors. A practical system is most likely designed with a cost effective subset of available complementary sensors.

  11. Frequency-wavenumber processing for infrasound distributed arrays.

    PubMed

    Costley, R Daniel; Frazier, W Garth; Dillion, Kevin; Picucci, Jennifer R; Williams, Jay E; McKenna, Mihan H

    2013-10-01

    The work described herein discusses the application of a frequency-wavenumber signal processing technique to signals from rectangular infrasound arrays for detection and estimation of the direction of travel of infrasound. Arrays of 100 sensors were arranged in square configurations with sensor spacing of 2 m. Wind noise data were collected at one site. Synthetic infrasound signals were superposed on top of the wind noise to determine the accuracy and sensitivity of the technique with respect to signal-to-noise ratio. The technique was then applied to an impulsive event recorded at a different site. Preliminary results demonstrated the feasibility of this approach. PMID:24116535
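
    On a regular rectangular grid, frequency-wavenumber analysis reduces to a multidimensional FFT; a minimal sketch for a hypothetical 10 x 10 array with 2 m spacing (placeholder data standing in for the recorded wind noise) is:

      import numpy as np

      def fk_power(data, fs, dx):
          # data: (nt, ny, nx) samples on a square grid. Returns the power in
          # frequency-wavenumber space and the frequency / wavenumber axes.
          spec = np.fft.fftshift(np.fft.fftn(data))
          freqs = np.fft.fftshift(np.fft.fftfreq(data.shape[0], d=1.0 / fs))
          wavenumbers = np.fft.fftshift(np.fft.fftfreq(data.shape[1], d=dx))  # cycles/m
          return np.abs(spec) ** 2, freqs, wavenumbers

      cube = np.random.default_rng(9).standard_normal((256, 10, 10))
      power, freqs, k = fk_power(cube, fs=50.0, dx=2.0)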

  12. Design and programming of systolic array cells for signal processing

    SciTech Connect

    Smith, R.A.W.

    1989-01-01

    This thesis presents a new methodology for the design, simulation, and programming of systolic arrays in which the algorithms and architecture are simultaneously optimized. The algorithms determine the initial architecture, and simulation is used to optimize the architecture. The simulator provides a register-transfer level model of a complete systolic array computation. To establish the validity of this design methodology two novel programmable systolic array cells were designed and programmed. The cells were targeted for applications in high-speed signal processing and associated matrix computations. A two-chip programmable systolic array cell using a 16-bit multiplier-accumulator chip and a semi-custom VLSI controller chip was designed and fabricated. A low chip count allows large arrays to be constructed, but the cell is flexible enough to be a building-block for either one- or two-dimensional systolic arrays. Another more flexible and powerful cell using a 32-bit floating-point processor and a second VLSI controller chip was also designed. It contains several architectural features that are unique in a systolic array cell: (1) each instruction is 32 bits, yet all resources can be updated every cycle, (2) two on-chip interchangeable memories are used, and (3) one input port can be used as either a global or local port. The key issues involved in programming the cells are analyzed in detail. A set of modules is developed which can be used to construct large programs in an effective manner. The utility of this programming approach is demonstrated with several important examples.

  13. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  14. IN-SITU IONIC CHEMICAL ANALYSIS OF FRESH WATER VIA A NOVEL COMBINED MULTI-SENSOR / SIGNAL PROCESSING ARCHITECTURE

    NASA Astrophysics Data System (ADS)

    Mueller, A. V.; Hemond, H.

    2009-12-01

    The capability for comprehensive, real-time, in-situ characterization of the chemical constituents of natural waters is a powerful tool for the advancement of the ecological and geochemical sciences, e.g. by facilitating rapid high-resolution adaptive sampling campaigns and avoiding the potential errors and high costs related to traditional grab sample collection, transportation and analysis. Portable field-ready instrumentation also promotes the goals of large-scale monitoring networks, such as CUAHSI and WATERS, without the financial and human resources overhead required for traditional sampling at this scale. Problems of environmental remediation and monitoring of industrial waste waters would additionally benefit from such instrumental capacity. In-situ measurement of all major ions contributing to the charge makeup of natural fresh water is thus pursued via a combined multi-sensor/multivariate signal processing architecture. The instrument is based primarily on commercial electrochemical sensors, e.g. ion selective electrodes (ISEs) and ion selective field-effect transistors (ISFETs), to promote low cost as well as easy maintenance and reproduction. The system employs a novel architecture of multivariate signal processing to extract accurate information from in-situ data streams via an "unmixing" process that accounts for sensor non-linearities at low concentrations, as well as sensor cross-reactivities. Conductivity, charge neutrality and temperature are applied as additional mathematical constraints on the chemical state of the system. Including such non-ionic information assists in obtaining accurate and useful calibrations even in the non-linear portion of the sensor response curves, and measurements can be made without the traditionally-required standard additions or ionic strength adjustment. Initial work demonstrates the effectiveness of this methodology at predicting inorganic cations (Na+, NH4+, H+, Ca2+, and K+) in a simplified system containing
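
    In spirit, the "unmixing" step is a multivariate calibration problem solved subject to physical constraints; a greatly simplified linear sketch that appends a weighted charge-neutrality equation to a hypothetical sensor-response model is:

      import numpy as np

      # Hypothetical linear responses of 4 sensors to 3 ionic species (rows: sensors).
      response = np.array([[1.0, 0.2, 0.0],
                           [0.1, 1.0, 0.1],
                           [0.0, 0.3, 1.0],
                           [0.5, 0.5, 0.2]])
      charges = np.array([1.0, 1.0, -2.0])     # species charges (sign and magnitude)
      readings = np.array([0.9, 1.2, 0.8, 1.0])

      # Append charge neutrality (sum of charge * concentration = 0) as a heavily
      # weighted extra equation and solve the whole system by least squares.
      A = np.vstack([response, 100.0 * charges])
      b = np.append(readings, 0.0)
      concentrations, *_ = np.linalg.lstsq(A, b, rcond=None)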

  15. Efficient true-time-delay adaptive array processing

    NASA Astrophysics Data System (ADS)

    Wagner, Kelvin H.; Kraut, Shawn; Griffiths, Lloyd J.; Weaver, Samuel P.; Weverka, Robert T.; Sarto, Anthony W.

    1996-11-01

    We present a novel and efficient approach to true-time-delay (TTD) beamforming for large adaptive phased arrays with N elements, for application in radar, sonar, and communication. This broadband, efficient adaptive time-delay array processing algorithm decreases the number of tapped delay lines required for N-element arrays from N to only 2, producing an enormous savings in optical hardware, especially for large arrays. The new adaptive system provides the full NM degrees of freedom of a conventional N-element time-delay beamformer with M taps each, enabling it to fully and optimally adapt to an arbitrary complex spatio-temporal signal environment that can contain broadband signals, noise, and narrowband and broadband jammers, all of which can arrive from arbitrary angles onto an arbitrarily shaped array. The photonic implementation of this algorithm uses index gratings produced in the volume of photorefractive crystals as the adaptive weights in a TTD beamforming network, 1 or 2 acousto-optic devices for signal injection, and 1 or 2 time-delay-and-integrate detectors for signal extraction. This approach achieves a significant reduction in hardware complexity when compared to systems employing discrete RF hardware for the weights or when compared to alternative optical systems that typically use N-channel acousto-optic deflectors.
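
    The N-element, M-tap space-time beamformer referred to here has a straightforward digital counterpart (weights assumed already adapted; the data below are placeholders); a minimal sketch is:

      import numpy as np

      def space_time_beamform(element_signals, weights):
          # element_signals: (N, nsamp); weights: (N, M) FIR taps per element.
          # Each element feeds its own tapped delay line; the outputs are summed.
          return sum(np.convolve(x, w, mode='same')
                     for x, w in zip(element_signals, weights))

      rng = np.random.default_rng(10)
      x = rng.standard_normal((8, 1024))       # 8 elements
      w = rng.standard_normal((8, 16)) / 16.0  # 16 taps per element (placeholder weights)
      y = space_time_beamform(x, w)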

  16. Processing difficulties and instability of carbohydrate microneedle arrays

    PubMed Central

    Donnelly, Ryan F.; Morrow, Desmond I.J.; Singh, Thakur R.R.; Migalska, Katarzyna; McCarron, Paul A.; O’Mahony, Conor; Woolfson, A. David

    2010-01-01

    Background A number of reports have suggested that many of the problems currently associated with the use of microneedle (MN) arrays for transdermal drug delivery could be addressed by using drug-loaded MN arrays prepared by moulding hot melts of carbohydrate materials. Methods In this study, we explored the processing, handling, and storage of MN arrays prepared from galactose with a view to clinical application. Results Galactose required a high processing temperature (160°C), and molten galactose was difficult to work with. Substantial losses of the model drugs 5-aminolevulinic acid (ALA) and bovine serum albumin were incurred during processing. While relatively small forces caused significant reductions in MN height when applied to an aluminium block, this was not observed during their relatively facile insertion into heat-stripped epidermis. Drug release experiments using ALA-loaded MN arrays revealed that less than 0.05% of the total drug loading was released across a model silicone membrane. Similarly, only low amounts of ALA (approximately 0.13%) and undetectable amounts of bovine serum albumin were delivered when galactose arrays were combined with aqueous vehicles. Microscopic inspection of the membrane following release studies revealed that no holes could be observed in the membrane, indicating that the partially dissolved galactose sealed the MN-induced holes, thus limiting drug delivery. Indeed, depth penetration studies into excised porcine skin revealed that there was no significant increase in ALA delivery using galactose MN arrays, compared to control (P value < 0.05). Galactose MNs were unstable at ambient relative humidities and became adhesive. Conclusion The processing difficulties and instability encountered in this study are likely to preclude successful clinical application of carbohydrate MNs. The findings of this study are of particular importance to those in the pharmaceutical industry involved in the design and formulation of

  17. Advancing archaeological geophysics: Interpreting the archaeological landscape, ground-penetrating radar data processing, and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Ernenwein, Eileen G.

    The human past has been the subject of scientific inquiry for centuries, and has long been approached by the study of material remains from traditional archaeological excavations. In recent decades the advancing fields of geophysics and geographic information systems have greatly improved the archaeological toolkit, and research to improve these methods is ongoing. This dissertation focuses on important aspects of geophysical survey as an approach to landscape-scale archaeology, each presented as stand-alone scientific papers that utilize a 1.2 hectare four-dimensional (ground-penetrating radar, magnetometry, magnetic susceptibility, and conductivity) dataset collected at Pueblo Escondido, a large prehistoric village of the Mogollon culture in southern New Mexico. Chapter 2 presents a case study showing the benefits of multidimensional geophysical surveys over large areas at archaeological sites. When paired with traditional archaeological excavations, it is possible to interpret the archaeological landscape on a much broader scale than is possible using excavations alone. At Pueblo Escondido, this approach led to a revised understanding of the architectural remains with broad regional significance. Chapter 3 describes new problems related to GPR surveys over large areas or extended periods of time, including issues related to correcting trace misalignments, edge discontinuities, and striping. Data processing solutions are offered. Chapter 4 presents an exploration of image classification methods for integrating multiple geophysical datasets. Unsupervised classification utilizing K-means cluster analysis and supervised classification using Mahalanobis Distance are described. The latter yielded a predictive model of archaeological features and identified some features that were not easily identified in the original datasets.

  18. Vehicle passes detector based on multi-sensor analysis

    NASA Astrophysics Data System (ADS)

    Bocharov, D.; Sidorchuk, D.; Konovalenko, I.; Koptelov, I.

    2015-02-01

    This study deals with a new approach to the problem of detecting vehicle passes in a vision-based automatic vehicle classification system. Essential non-affine image variations and signals from an induction loop are events that can be treated as indicators of an object's presence. We propose several vehicle detection techniques based on image processing and induction loop signal analysis. We also suggest a combined method based on multi-sensor analysis to improve vehicle detection performance. Experimental results in complex outdoor environments show that the proposed multi-sensor algorithm is effective for vehicle detection.

  19. Multisensor Fire Observations

    NASA Technical Reports Server (NTRS)

    Boquist, C.

    2004-01-01

    This DVD includes animations of multisensor fire observations from the following satellite sources: Landsat, GOES, TOMS, Terra, QuikSCAT, and TRMM. Some of the animations are included in multiple versions of a short video presentation on the DVD which focuses on the Hayman, Rodeo-Chediski, and Biscuit fires during the 2002 North American fire season. In one version of the presentation, MODIS, TRMM, GOES, and QuikSCAT data are incorporated into the animations of these wildfires. These data products provided rain, wind, cloud, and aerosol data on the fires, and monitored the smoke and destruction created by them. Another presentation on the DVD consists of a panel discussion, in which experts from academia, NASA, and the U.S. Forest Service answer questions on the role of NASA in fighting forest fires, the role of the Terra satellite and its instruments, including the Moderate Resolution Imaging Spectroradiometer (MODIS), in fire fighting decision making, and the role of fire in the Earth's climate. The third section of the DVD features several animations of fires over the years 2001-2003, including animations of global and North American fires, and specific fires from 2003 in California, Washington, Montana, and Arizona.

  20. Multisensor image cueing (MUSIC)

    NASA Astrophysics Data System (ADS)

    Rodvold, David; Patterson, Tim J.

    2002-07-01

    There have been many years of research and development in the Automatic Target Recognition (ATR) community. This development has resulted in numerous algorithms to perform target detection automatically. The morphing of the ATR acronym to Aided Target Recognition provides a succinct commentary regarding the success of the automatic target recognition research. Now that the goal is aided recognition, many of the algorithms which were not able to provide autonomous recognition may now provide valuable assistance in cueing a human analyst where to look in the images under consideration. This paper describes the MUSIC system being developed for the US Air Force to provide multisensor image cueing. The tool works across multiple image phenomenologies and fuses the evidence across the set of available imagery. MUSIC is designed to work with a wide variety of sensors and platforms, and provide cueing to an image analyst in an information-rich environment. The paper concentrates on the current integration of algorithms into an extensible infrastructure to allow cueing in multiple image types.

  1. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousands of electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable. PMID:26737215
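
    The per-channel independence that such tools exploit is easy to illustrate; a toy version that filters and threshold-detects spikes on each channel in a process pool (pure NumPy filtering, an arbitrary detection rule) is:

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def detect_spikes(channel, k=5.0):
          # Crude high-pass by subtracting a moving average, then flag samples
          # exceeding k times a robust (median-based) noise estimate.
          baseline = np.convolve(channel, np.ones(201) / 201, mode='same')
          hp = channel - baseline
          sigma = np.median(np.abs(hp)) / 0.6745
          return np.flatnonzero(hp < -k * sigma)      # negative-going spike samples

      if __name__ == "__main__":
          data = np.random.default_rng(11).standard_normal((64, 200000))  # 64 channels
          with ProcessPoolExecutor() as pool:
              spike_indices = list(pool.map(detect_spikes, data))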

  3. Multi-sensor analysis of urban ecosystems

    USGS Publications Warehouse

    Gallo, K.; Ji, L.

    2004-01-01

    This study examines the synthesis of multiple space-based sensors to characterize the urban environment. Single-scene data (e.g., ASTER visible and near-IR surface reflectance, and land surface temperature data), multi-temporal data (e.g., one year of 16-day MODIS and AVHRR vegetation index data), and DMSP-OLS nighttime light data acquired in the early 1990s and 2000 were evaluated for urban ecosystem analysis. The advantages of a multi-sensor approach for the analysis of urban ecosystem processes are discussed.

  4. Signal Processing for a Lunar Array: Minimizing Power Consumption

    NASA Technical Reports Server (NTRS)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study: (1) a Lunar Radio Array for low-frequency, high-redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high-precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the Intergalactic Medium and the evolution of large-scale structures; and asking whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array concept is a radio interferometer based on the far side of the Moon, which is necessary for precision measurements because it provides shielding from Earth-based and solar RFI and there is no permanent ionosphere; it requires a minimum collecting area of approximately 1 square km and a brightness sensitivity of 10 mK, and several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  5. Physics-based signal processing algorithms for micromachined cantilever arrays

    DOEpatents

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
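    Since the record above describes the method as modeling the cantilever deflection, sensing the actual deflection, and comparing the two, a small sketch of that compare-against-model step may be useful. The saturating-exponential deflection model, noise level, and acceptance threshold below are assumptions for illustration, not the patented algorithm.

      # Sketch of comparing measured cantilever deflection against a deflection model.
      # The exponential-saturation model and threshold are assumptions for illustration.
      import numpy as np

      def deflection_model(t, d_max=1.0, tau=5.0):
          """Assumed analyte-binding response: saturating exponential deflection."""
          return d_max * (1.0 - np.exp(-t / tau))

      t = np.linspace(0, 30, 300)                      # seconds
      measured = deflection_model(t, d_max=0.9, tau=4.5) + 0.02 * np.random.randn(t.size)

      residual = measured - deflection_model(t)        # measurement minus model
      consistent = np.abs(residual).mean() < 0.1       # crude goodness-of-fit test
      print("model consistent with measurement:", consistent)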

  6. Superresolution with seismic arrays using empirical matched field processing

    NASA Astrophysics Data System (ADS)

    Harris, David B.; Kvaerna, Tormod

    2010-09-01

    Scattering and refraction of seismic waves can be exploited with empirical-matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term `superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2 per cent accuracy of 549 explosions conducted by closely spaced mines in northwest Russia. The mines are observed at 340-410 km range and are separated by as little as 3 km. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber (FK) and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane-wave model.
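    The power-estimation step described above resembles a Bartlett-style matched field processor in which calibrated array responses play the role of steering vectors. The sketch below scores one narrowband array snapshot against a dictionary of calibrated responses and picks the best-matching source; the array size, calibration data, and single-band treatment are assumptions for illustration only.

      # Sketch of an empirical matched-field (Bartlett-style) power estimate:
      # calibrated narrowband array responses act as steering vectors and the
      # candidate source with the highest power is selected. Illustrative only.
      import numpy as np

      def mfp_power(snapshot, calibrations):
          """snapshot: complex array spectrum (n_sensors,) in one frequency band.
          calibrations: dict source_id -> calibrated response (n_sensors,)."""
          power = {}
          for source, w in calibrations.items():
              w = w / np.linalg.norm(w)
              power[source] = np.abs(np.vdot(w, snapshot)) ** 2   # |w^H x|^2
          return power

      rng = np.random.default_rng(0)
      cals = {"mine_A": rng.standard_normal(9) + 1j * rng.standard_normal(9),
              "mine_B": rng.standard_normal(9) + 1j * rng.standard_normal(9)}
      obs = cals["mine_A"] + 0.1 * (rng.standard_normal(9) + 1j * rng.standard_normal(9))
      scores = mfp_power(obs, cals)
      print(max(scores, key=scores.get))   # expected: "mine_A"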

  7. TRIGA: Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Pingree, Paula J.; Torgerson, J. Leigh

    2006-01-01

    We present the Telecommunications protocol processing subsystem using Reconfigurable Interoperable Gate Arrays (TRIGA), a novel approach that unifies fault tolerance, error correction coding and interplanetary communication protocol off-loading to implement CCSDS File Delivery Protocol and Datalink layers. The new reconfigurable architecture offers more than one order of magnitude throughput increase while reducing footprint requirements in memory, command and data handling processor utilization, communication system interconnects and power consumption.

  8. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    SciTech Connect

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2% accuracy of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.

  9. Flat-plate solar array project. Volume 5: Process development

    NASA Technical Reports Server (NTRS)

    Gallagher, B.; Alexander, P.; Burger, D.

    1986-01-01

    The goal of the Process Development Area, as part of the Flat-Plate Solar Array (FSA) Project, was to develop and demonstrate solar cell fabrication and module assembly process technologies required to meet the cost, lifetime, production capacity, and performance goals of the FSA Project. R&D efforts expended by Government, Industry, and Universities in developing processes capable of meeting the project's goals during volume production conditions are summarized. The cost goals allocated for processing were demonstrated by small volume quantities that were extrapolated by cost analysis to large volume production. To provide proper focus and coverage of the process development effort, four separate technology sections are discussed: surface preparation, junction formation, metallization, and module assembly.

  10. Multisensor classification of sedimentary rocks

    NASA Technical Reports Server (NTRS)

    Evans, Diane

    1988-01-01

    A comparison is made between linear discriminant analysis and supervised classification results based on signatures from the Landsat TM, the Thermal Infrared Multispectral Scanner (TIMS), and airborne SAR, alone and combined into extended spectral signatures for seven sedimentary rock units exposed on the margin of the Wind River Basin, Wyoming. Results from a linear discriminant analysis showed that training-area classification accuracies based on the multisensor data were improved an average of 15 percent over TM alone, 24 percent over TIMS alone, and 46 percent over SAR alone, with similar improvement resulting when supervised multisensor classification maps were compared to supervised, individual sensor classification maps. When training area signatures were used to map spectrally similar materials in an adjacent area, the average classification accuracy improved 19 percent using the multisensor data over TM alone, 2 percent over TIMS alone, and 11 percent over SAR alone. It is concluded that certain sedimentary lithologies may be accurately mapped using a single sensor, but classification of a variety of rock types can be improved using multisensor data sets that are sensitive to different characteristics such as mineralogy and surface roughness.
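    Because the comparison above rests on supervised classification of "extended" signatures formed by stacking TM, TIMS, and SAR channels, a minimal sketch of that workflow may be helpful. The snippet uses scikit-learn's linear discriminant analysis on synthetic per-pixel features; the band counts, class count, and random data are assumptions, not the study's data.

      # Sketch of supervised classification on "extended" multisensor signatures:
      # per-pixel TM, TIMS and SAR features are concatenated before training.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)
      n = 300
      tm   = rng.normal(size=(n, 6))               # 6 TM bands
      tims = rng.normal(size=(n, 6))               # 6 TIMS bands
      sar  = rng.normal(size=(n, 1))               # 1 SAR backscatter channel
      labels = rng.integers(0, 7, size=n)          # 7 sedimentary rock units

      extended = np.hstack([tm, tims, sar])        # multisensor "extended" signature
      clf = LinearDiscriminantAnalysis().fit(extended, labels)
      print("training accuracy:", clf.score(extended, labels))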

  11. Post-digital image processing based on microlens array

    NASA Astrophysics Data System (ADS)

    Shi, Chaiyuan; Xu, Feng

    2014-10-01

    Benefiting from attractive features such as compact volume, thinness, and light weight, imaging systems based on microlens arrays have become an active area of research. However, current imaging systems based on microlens arrays have insufficient imaging quality and cannot meet the practical requirements of most applications. As a result, post-digital image processing for image reconstruction from the low-resolution sub-image sequence becomes particularly important. In general, the post-digital image processing mainly includes two parts: the accurate estimation of the motion parameters between the sub-images and the reconstruction of the high-resolution image. In this paper, given that preprocessing of the unit images can make the edges of the reconstructed high-resolution image clearer, the low-resolution images are preprocessed before the post-digital image processing. Then, after processing with the pixel rearrange method, a high-resolution image is obtained. From the result, we find that the edges of the reconstructed high-resolution image are clearer than those obtained without preprocessing.
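    The pixel rearrange step mentioned above can be pictured as interleaving sub-images with known sub-pixel offsets onto a finer grid. The sketch below assumes four sub-images with exact half-pixel shifts and a 2x magnification; real data would require the estimated motion parameters and non-integer interpolation.

      # Sketch of pixel-rearrange reconstruction: four low-resolution sub-images
      # with known half-pixel offsets are interleaved onto a 2x finer grid.
      import numpy as np

      def pixel_rearrange(subs):
          """subs: dict (dy, dx) -> low-res image; offsets in high-res pixel units."""
          h, w = next(iter(subs.values())).shape
          hr = np.zeros((2 * h, 2 * w))
          for (dy, dx), img in subs.items():
              hr[dy::2, dx::2] = img                # place each sub-image on its sub-grid
          return hr

      lr = {(dy, dx): np.random.rand(64, 64) for dy in (0, 1) for dx in (0, 1)}
      print(pixel_rearrange(lr).shape)              # (128, 128)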

  12. Solution processed semiconductor alloy nanowire arrays for optoelectronic applications

    NASA Astrophysics Data System (ADS)

    Shimpi, Paresh R.

    In this dissertation, we use ZnO nanowire as a model system to investigate the potential of solution routes for bandgap engineering in semiconductor nanowires. Excitingly, successful Mg-alloying into ZnO nanowire arrays has been achieved using a two-step sequential hydrothermal method at low temperature (<155°C) without a post-annealing process. Evidently, both room temperature and 40 K photoluminescence (PL) spectroscopy revealed enhanced and blue-shifted near-band-edge ultraviolet (NBE UV) emission in the Mg-alloyed ZnO (ZnMgO) nanowire arrays, compared with ZnO nanowires. The specific template of densely packed ZnO nanowires is found to be instrumental in achieving Mg alloying in the low-temperature solution process. By optimizing the density of ZnO nanowires and the precursor concentration, 8-10 at.% Mg content has been achieved in ZnMgO nanowires. Post-annealing treatment is conducted in oxygen-rich and oxygen-deficient environments at different temperatures and time durations on silicon and quartz substrates in order to study the structural and optical property evolution in ZnMgO nanowire arrays. Vacuum-annealed ZnMgO nanowires on both substrates retained their hexagonal structures, and PL results showed enhanced but red-shifted NBE UV emission compared to ZnO nanowires with the visible emission nearly suppressed, suggesting a reduced defect concentration and improved crystallinity of the nanowires. In contrast, for ambient-annealed ZnMgO nanowires on a silicon substrate, as the annealing temperature increased from 400°C to 900°C, the intensity of the visible emission peak across the blue-green-yellow-red band (˜400-660 nm) increased whereas the intensity of the NBE UV peak decreased and was eventually quenched completely. This might be due to interface diffusion of oxidized Si (SiOx) and the formation of (Zn,Mg)1.7SiO4 epitaxially overcoated around individual ZnMgO nanowires. On the other hand, ambient-annealed ZnMgO nanowires grown on quartz showed a ˜6-10 nm blue-shift in

  13. SAR processing with stepped chirps and phased array antennas.

    SciTech Connect

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with their own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.
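    As a rough illustration of assembling an equivalent wideband LFM chirp from lesser-bandwidth segments, the sketch below generates stepped narrow-band chirp segments and concatenates them with the phase carried across segment boundaries. The sample rate, segment bandwidth, and durations are arbitrary illustrative values, not parameters from the report.

      # Sketch of assembling a wideband LFM chirp from stepped narrow-band segments,
      # keeping frequency and phase continuous across segment boundaries.
      import numpy as np

      fs = 200e6                                  # receiver sample rate (assumed)
      seg_bw, seg_dur, n_seg = 25e6, 10e-6, 4     # four 25 MHz segments -> 100 MHz total
      chirp_rate = seg_bw / seg_dur               # common LFM slope for every segment
      t = np.arange(int(seg_dur * fs)) / fs

      segments, phase0 = [], 0.0
      for i in range(n_seg):
          f0 = (i - n_seg / 2) * seg_bw           # stepped segment start frequency
          phase = phase0 + 2 * np.pi * (f0 * t + 0.5 * chirp_rate * t ** 2)
          segments.append(np.exp(1j * phase))
          phase0 += 2 * np.pi * (f0 * seg_dur + 0.5 * seg_bw * seg_dur)  # carry phase on

      wideband = np.concatenate(segments)         # equivalent wideband LFM chirp
      print(wideband.size, "samples spanning", n_seg * seg_bw / 1e6, "MHz")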

  14. Array Processing in the Cloud: the rasdaman Approach

    NASA Astrophysics Data System (ADS)

    Merticariu, Vlad; Dumitru, Alex

    2015-04-01

    The multi-dimensional array data model is gaining more and more attention when dealing with Big Data challenges in a variety of domains such as climate simulations, geographic information systems, medical imaging or astronomical observations. Solutions provided by classical Big Data tools such as Key-Value Stores and MapReduce, as well as traditional relational databases, proved to be limited in domains associated with multi-dimensional data. This problem has been addressed by the field of array databases, in which systems provide database services for raster data, without imposing limitations on the number of dimensions that a dataset can have. Examples of datasets commonly handled by array databases include 1-dimensional sensor data, 2-D satellite imagery, 3-D x/y/t image time series as well as x/y/z geophysical voxel data, and 4-D x/y/z/t weather data. And this can grow as large as simulations of the whole universe when it comes to astrophysics. rasdaman is a well established array database, which implements many optimizations for dealing with large data volumes and operation complexity. Among those, the latest one is intra-query parallelization support: a network of machines collaborate for answering a single array database query, by dividing it into independent sub-queries sent to different servers. This enables massive processing speed-ups, which promise solutions to research challenges on multi-Petabyte data cubes. There are several correlated factors which influence the speedup that intra-query parallelisation brings: the number of servers, the capabilities of each server, the quality of the network, the availability of the data to the server that needs it in order to compute the result and many more. In the effort of adapting the engine to cloud processing patterns, two main components have been identified: one that handles communication and gathers information about the arrays sitting on every server, and a processing unit responsible with dividing work

  15. Multi-sensor electrometer

    NASA Technical Reports Server (NTRS)

    Gompf, Raymond (Inventor); Buehler, Martin C. (Inventor)

    2003-01-01

    An array of triboelectric sensors is used for testing the electrostatic properties of a remote environment. The sensors may be mounted in the heel of a robot arm scoop. To determine the triboelectric properties of a planet surface, the robot arm scoop may be rubbed on the soil of the planet and the triboelectrically developed charge measured. By having an array of sensors, different insulating materials may be measured simultaneously. The insulating materials may be selected so their triboelectric properties cover a desired range. By mounting the sensor on a robot arm scoop, the measurements can be obtained during an unmanned mission.

  16. Room geometry inference based on spherical microphone array eigenbeam processing.

    PubMed

    Mabande, Edwin; Kowalczyk, Konrad; Sun, Haohai; Kellermann, Walter

    2013-10-01

    The knowledge of parameters characterizing an acoustic environment, such as the geometric information about a room, can be used to enhance the performance of several audio applications. In this paper, a novel method for three-dimensional room geometry inference based on robust and high-resolution beamforming techniques for spherical microphone arrays is presented. Unlike other approaches that are based on the measurement and processing of multiple room impulse responses, here, microphone array signal processing techniques for uncontrolled broadband acoustic signals are applied. First, the directions of arrival (DOAs) and time differences of arrival (TDOAs) of the direct signal and room reflections are estimated using high-resolution robust broadband beamforming techniques and cross-correlation analysis. In this context, the main challenges include the low reflected-signal to background-noise power ratio, the low energy of reflected signals relative to the direct signal, and their strong correlation with the direct signal and among each other. Second, the DOA and TDOA information is combined to infer the room geometry using geometric relations. The high accuracy of the proposed room geometry inference technique is confirmed by experimental evaluations based on both simulated and measured data for moderately reverberant rooms. PMID:24116416
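    To make the "combine DOA and TDOA using geometric relations" step above concrete, the sketch below applies the image-source picture to a single first-order reflection in 2-D: the reflection's DOA and excess path length locate the image source, and the wall is the perpendicular bisector of the source and its image. The known source range and the specific angles and delay are assumptions for illustration and do not reproduce the paper's full 3-D method.

      # Sketch of locating one wall from direct-path DOA plus a reflection's DOA
      # and TDOA under the image-source model (simplified 2-D geometry).
      import numpy as np

      c = 343.0                                  # speed of sound, m/s
      r_src = 2.0                                # known direct-path range (assumed)
      doa_direct = np.deg2rad(0.0)               # DOA of direct sound
      doa_refl = np.deg2rad(55.0)                # DOA of the wall reflection
      tdoa = 4.1e-3                              # reflection arrives 4.1 ms later

      src = r_src * np.array([np.cos(doa_direct), np.sin(doa_direct)])
      r_img = r_src + c * tdoa                   # total reflected path length
      img = r_img * np.array([np.cos(doa_refl), np.sin(doa_refl)])

      wall_point = 0.5 * (src + img)             # wall passes through the midpoint...
      wall_normal = (img - src) / np.linalg.norm(img - src)  # ...normal to this axis
      print("wall point:", wall_point.round(2), "normal:", wall_normal.round(2))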

  17. A facile processing way of silica needle arrays with tunable orientation by tube arrays fabrication and etching method

    SciTech Connect

    Zhu Mingwei; Gao Haigen; Li Hongwei; Xu Jiao; Chen Yanfeng

    2010-03-15

    A simple method to fabricate silica micro/nano-needle arrays (SNAs) is presented based on a tube-etching mechanism. Using silica fibers as templates, highly aligned and free-standing needle arrays are created over large areas by simple processes of polymer infiltration, cutting, chemical etching and polymer removal. Their sizes and orientations can be arbitrarily and precisely tuned by simply selecting the fiber sizes and the cutting directions, respectively. This technique enables needle arrays with special morphology to be fabricated in a facile way, thereby offering potential for various applications such as optics, energy harvesting, and sensors. As a demonstration, the superhydrophobic property of PDMS-treated SNAs is examined. - Graphical abstract: Silica needle arrays are fabricated by a tube-array fabrication and etching method. They show superhydrophobic properties after being treated with PDMS.

  18. Multisensor Fusion for Change Detection

    NASA Astrophysics Data System (ADS)

    Schenk, T.; Csatho, B.

    2005-12-01

    Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor invariant features. Such features are caused by the same object space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting and grouping the features into more abstract entities, we discuss ways on how to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable to deal with linear and area features. In contrast to traditional, point-based registration methods, lineal and areal features lend themselves to a more robust and more accurate registration. More important, the chances to automate the registration process increases significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning of extracted information more versatile; reasoning can be performed in sensor space or in 3-D space where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed. We demonstrate the feasibility of the proposed multisensor fusion approach

  19. A model for the distributed storage and processing of large arrays

    NASA Technical Reports Server (NTRS)

    Mehrota, P.; Pratt, T. W.

    1983-01-01

    A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.

  20. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric

    2014-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these problems are Data Intensive computing so the data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in a hybrid Cloud (private eucalyptus & public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the

  1. Non-linear, adaptive array processing for acoustic interference suppression.

    PubMed

    Hoppe, Elizabeth; Roan, Michael

    2009-06-01

    A method is introduced where blind source separation of acoustical sources is combined with spatial processing to remove non-Gaussian, broadband interferers from space-time displays such as bearing track recorder displays. This differs from most standard techniques such as generalized sidelobe cancellers in that the separation of signals is not done spatially. The algorithm performance is compared to adaptive beamforming techniques such as minimum variance distortionless response beamforming. Simulations and experiments using two acoustic sources were used to verify the performance of the algorithm. Simulations were also used to determine the effectiveness of the algorithm under various signal to interference, signal to noise, and array geometry conditions. A voice activity detection algorithm was used to benchmark the performance of the source isolation.
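    Since the record above combines blind source separation with spatial processing rather than separating sources spatially, a minimal two-sensor separation sketch may clarify the first stage. The snippet uses scikit-learn's FastICA to unmix a tone from a hard-clipped (non-Gaussian) interferer; the mixing matrix, signals, and kurtosis-based component selection are illustrative assumptions, not the paper's algorithm.

      # Sketch of removing a non-Gaussian interferer via blind source separation.
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(2)
      t = np.linspace(0, 1, 8000)
      target = np.sin(2 * np.pi * 60 * t)                   # target tone
      interferer = np.sign(np.sin(2 * np.pi * 97 * t))      # clipped, non-Gaussian source
      mix = np.c_[target + 0.8 * interferer,
                  0.5 * target + interferer]                # two-sensor mixture

      ica = FastICA(n_components=2, random_state=0)
      sources = ica.fit_transform(mix)                      # estimated independent sources
      # keep the component that is least strongly sub-Gaussian: the clipped
      # interferer has a lower kurtosis than the sinusoidal target
      kurt = [np.mean(s ** 4) / np.mean(s ** 2) ** 2 for s in sources.T]
      cleaned = sources[:, int(np.argmax(kurt))]
      print("kept component", int(np.argmax(kurt)), "with kurtosis", round(max(kurt), 2))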

  2. Joint multisensor exploitation for mine detection

    NASA Astrophysics Data System (ADS)

    Beaven, Scott G.; Stocker, Alan D.; Winter, Edwin M.

    2004-09-01

    Robust, timely, and remote detection of mines and minefields is central to both tactical and humanitarian demining efforts, yet remains elusive for single-sensor systems. Here we present an approach to jointly exploit multisensor data for detection of mines from remotely sensed imagery. LWIR, MWIR, laser, multispectral, and radar sensors have been applied individually to mine detection, and each has shown promise for supporting automated detection. However, none of these sources individually provides a full solution for automated mine detection under all expected mine, background and environmental conditions. Under support from the Night Vision and Electronic Sensors Directorate (NVESD) we have developed an approach that, through joint exploitation of multiple sensors, improves detection performance over that achieved from a single sensor. In this paper we describe the joint exploitation method, which is based on fundamental detection theoretic principles, demonstrate the strength of the approach on imagery from minefields, and discuss extensions of the method to additional sensing modalities. The approach uses pre-threshold anomaly detector outputs to formulate accurate models for marginal and joint statistics across multiple detection or sensor features. This joint decision space is modeled and decision boundaries are computed from measured statistics. Since the approach adapts the decision criteria based on the measured statistics and no prior target training information is used, it provides a robust multi-algorithm or multisensor detection statistic. Results from the joint exploitation processing using two different imaging sensors over surface mines acquired by NVESD will be presented to illustrate the process. The potential of the approach to incorporate additional sensor sources, such as radar, multispectral and hyperspectral imagery, is also illustrated.

  3. Array processing of teleseismic body waves with the USArray

    NASA Astrophysics Data System (ADS)

    Pavlis, Gary L.; Vernon, Frank L.

    2010-07-01

    We introduce a novel method of array processing for measuring arrival times and relative amplitudes of teleseismic body waves recorded on large aperture seismic arrays. The algorithm uses a robust stacking algorithm with three features: (1) an initial 'reference' signal is required for initial alignment by cross-correlation; (2) a robust stacking method is used that penalizes signals that are not well matched to the stack; and (3) an iterative procedure alternates between cross-correlation with the current stack and the robust stacking algorithm. This procedure always converges in a few iterations making it well suited for interactive processing. We describe concepts behind a graphical interface developed to utilize this algorithm for processing body waves. We found it was important to compute several data quality metrics and allow the analyst to sort on these metrics. This is combined with a 'pick cutoff' function that simplifies data editing. Application of the algorithm to data from the USArray show four features of this method. (1) The program can produce superior results to that produced by a skilled analyst in approximately 1/5 of the time required for conventional interactive picking. (2) We show an illustrative example comparing residuals from S and SS for an event from northern Chile. The SS data show a remarkable ±10 s residual pattern across the USArray that we argue is caused by propagation approximately parallel to the subduction zones in Central and South America. (3) Quality metrics were found to be useful in identifying data problems. (4) We analyzed 50 events from the Tonga-Fiji region to compare residuals produced by this new algorithm with those measured by interactive picking. Both sets of residuals are approximately normally distributed, but corrupted by about 5% outliers. The scatter of the data estimated by waveform correlation was found to be approximately 1/2 that of the hand picked data. The outlier populations of both data sets are
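    To illustrate the three-step loop described above (cross-correlation alignment against a reference, robust stacking that down-weights poorly matched traces, and iteration), a simplified sketch is given below. It uses a plain correlation weight as the penalty and synthetic shifted wavelets; the weighting scheme and convergence criterion are assumptions, not the authors' implementation.

      # Sketch of iterative cross-correlation alignment plus robust stacking.
      import numpy as np

      def robust_stack(traces, n_iter=5):
          stack = traces[0].copy()                        # initial "reference" signal
          aligned = traces.copy()
          for _ in range(n_iter):
              for i, tr in enumerate(traces):
                  xc = np.correlate(stack, tr, mode="full")
                  lag = int(xc.argmax()) - (len(tr) - 1)  # best-fit time shift
                  aligned[i] = np.roll(tr, lag)           # circular shift (sketch only)
              # weight each trace by its correlation with the current stack
              w = np.array([max(np.corrcoef(stack, tr)[0, 1], 0.0) for tr in aligned])
              stack = (w[:, None] * aligned).sum(axis=0) / w.sum()
          return stack, aligned

      rng = np.random.default_rng(3)
      wavelet = np.exp(-np.linspace(-3, 3, 200) ** 2) * np.sin(np.linspace(0, 20, 200))
      traces = np.array([np.roll(wavelet, s) + 0.2 * rng.standard_normal(200)
                         for s in rng.integers(-15, 15, size=12)])
      stack, _ = robust_stack(traces)
      print(stack.shape)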

  4. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected with their local neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  5. Adaptive beamforming for array signal processing in aeroacoustic measurements.

    PubMed

    Huang, Xun; Bai, Long; Vinogradov, Igor; Peers, Edward

    2012-03-01

    Phased microphone arrays have become an important tool in the localization of noise sources for aeroacoustic applications. In most practical aerospace cases the conventional beamforming algorithm of the delay-and-sum type has been adopted. Conventional beamforming cannot take advantage of knowledge of the noise field, and thus has poorer resolution in the presence of noise and interference. Adaptive beamforming has been used for more than three decades to address these issues and has already achieved various degrees of success in areas of communication and sonar. In this work an adaptive beamforming algorithm designed specifically for aeroacoustic applications is discussed and applied to practical experimental data. It shows that the adaptive beamforming method could save significant amounts of post-processing time for a deconvolution method. For example, the adaptive beamforming method is able to reduce the DAMAS computation time by at least 60% for the practical case considered in this work. Therefore, adaptive beamforming can be considered as a promising signal processing method for aeroacoustic measurements.
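    As a concrete example of the class of adaptive methods discussed above, the sketch below computes minimum variance distortionless response (MVDR) weights for a narrowband uniform linear array, with diagonal loading added for robustness. The array geometry, wavelength, and snapshot data are illustrative assumptions; this is not claimed to be the paper's specific aeroacoustic algorithm or the DAMAS deconvolution step.

      # Sketch of MVDR (Capon) weights: w = R^-1 d / (d^H R^-1 d), with diagonal loading.
      import numpy as np

      def mvdr_weights(R, steering, loading=1e-3):
          R = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
          Rinv_d = np.linalg.solve(R, steering)
          return Rinv_d / (steering.conj() @ Rinv_d)

      def steering_vector(n_mics, spacing, wavelength, theta):
          k = 2 * np.pi / wavelength
          positions = np.arange(n_mics) * spacing
          return np.exp(-1j * k * positions * np.sin(theta))

      rng = np.random.default_rng(4)
      d = steering_vector(16, 0.05, 0.17, np.deg2rad(20))       # look direction 20 deg
      snapshots = rng.standard_normal((16, 500)) + 1j * rng.standard_normal((16, 500))
      R = snapshots @ snapshots.conj().T / 500                  # sample covariance
      w = mvdr_weights(R, d)
      print(abs(w.conj() @ d))   # distortionless constraint: ~1 in the look direction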

  6. Damage Detection in Composite Structures with Wavenumber Array Data Processing

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Guided ultrasonic waves (GUW) have the potential to be an efficient and cost-effective method for rapid damage detection and quantification of large structures. Attractive features include sensitivity to a variety of damage types and the capability of traveling relatively long distances. They have proven to be an efficient approach for crack detection and localization in isotropic materials. However, techniques must be pushed beyond isotropic materials in order to be valid for composite aircraft components. This paper presents our study on GUW propagation and interaction with delamination damage in composite structures using wavenumber array data processing, together with advanced wave propagation simulations. Parallel elastodynamic finite integration technique (EFIT) is used for the example simulations. Multi-dimensional Fourier transform is used to convert time-space wavefield data into frequency-wavenumber domain. Wave propagation in the wavenumber-frequency domain shows clear distinction among the guided wave modes that are present. This allows for extracting a guided wave mode through filtering and reconstruction techniques. Presence of delamination causes spectral change accordingly. Results from 3D CFRP guided wave simulations with delamination damage in flat-plate specimens are used for wave interaction with structural defect study.
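    As a concrete illustration of converting time-space wavefield data into the frequency-wavenumber domain and filtering a mode, the sketch below applies a 2-D FFT to a synthetic single-mode line scan and masks a wavenumber band before inverse transforming. The sampling intervals, wave speed, and pass band are assumptions; the study itself uses 3-D simulated wavefields of CFRP plates.

      # Sketch of wavenumber-domain processing of a 1-D line scan u(x, t):
      # 2-D FFT -> mask a wavenumber band -> inverse FFT to isolate one mode.
      import numpy as np

      dx, dt = 1e-3, 1e-6                       # 1 mm spatial, 1 us temporal sampling
      nx, nt = 256, 1024
      x = np.arange(nx) * dx
      t = np.arange(nt) * dt
      # synthetic single-mode wave: 200 kHz tone travelling at 3000 m/s
      u = np.sin(2 * np.pi * 200e3 * (t[None, :] - x[:, None] / 3000.0))

      U = np.fft.fft2(u)                                      # space-time -> k-f domain
      k = np.fft.fftfreq(nx, dx)                              # wavenumbers, cycles per metre
      mask = (np.abs(k) > 40) & (np.abs(k) < 100)             # keep one wavenumber band
      u_mode = np.real(np.fft.ifft2(U * mask[:, None]))       # reconstructed single mode
      print(u_mode.shape)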

  7. A Systolic Array Architecture For Processing Sonar Narrowband Signals

    NASA Astrophysics Data System (ADS)

    Mintzer, L.

    1988-07-01

    Modern sonars rely more upon visual than aural contacts. Lofargrams presenting a time history of hydrophone spectral content are standard means of observing narrowband signals. However, the frequency signal "tracks" are often embedded in noise, sometimes rendering their detection difficult and time-consuming. Image enhancement algorithms applied to the 'grams can yield improvements in target data presented to the observer. A systolic array based on the NCR Geometric Arithmetic Parallel Processor (GAPP), a CMOS chip that contains 72 single-bit processors controlled in parallel, has been designed for evaluating image enhancement algorithms. With the processing nodes of the GAPP bearing a one-to-one correspondence with the pixels displayed on the 'gram, a very efficient SIMD architecture is realized. The low data rate of sonar displays, i.e., one line of 1000-4000 pixels per second, and the 10-MHz control clock of the GAPP provide the possibility of 10^7 operations per pixel in real-time applications. However, this architecture cannot handle data-dependent operations efficiently. To this end a companion processor capable of efficiently executing branch operations has been designed. A simple spoke filter is simulated and applied to laboratory data with noticeable improvements in the resulting lofargram display.

  8. Multisensor evaluation research: enhancement techniques and sensor evaluation results

    NASA Astrophysics Data System (ADS)

    Duncan, Gary A.; Heidbreder, William H.; Hammack, James; Szpak, Casimir

    1996-06-01

    This paper compares imagery from four mapping sensors and evaluates the utility of the imagery to support the function of cartographic feature analysis. The four sensors examined are: Landsat TM, SPOT, 5 M Landsat (simulated), and 1 M electro-optical. The feature analysis process is described, and a proposed experiment designed to compare feature analysis utility is discussed. The proposed experiment includes the use of both monoscopic and stereo imagery, as well as application of visual image enhancement techniques and supporting algorithms that facilitate image interpretation. The described techniques represent an initial basis for study of more automated multispectral and multisensor techniques. Also, the applicability of using multiresolution and multisensor techniques is discussed.

  9. Model-based Processing of Micro-cantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Lee, C L; Rudd, R E; Burnham, A K

    2004-11-17

    We develop a model-based processor (MBP) for a micro-cantilever array sensor to detect target species in solution. After discussing the generalized framework for this problem, we develop the specific model used in this study. We perform a proof-of-concept experiment, fit the model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: (1) averaged deflection data, and (2) multi-channel data. In both cases the evaluation proceeds by first performing a model-based parameter estimation to extract the model parameters, next performing a Gauss-Markov simulation, designing the optimal MBP and finally applying it to measured experimental data. The simulation is used to evaluate the performance of the MBP in the multi-channel case and compare it to a "smoother" ("averager") typically used in this application. It was shown that the MBP not only provides a significant gain (approximately 80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, though it includes a correctable systematic bias error. The project's primary accomplishment was the successful application of model-based processing to signals from micro-cantilever arrays: 40-60 dB improvement vs. the smoother algorithm was demonstrated. This result was achieved through the development of appropriate mathematical descriptions for the chemical and mechanical phenomena, and incorporation of these descriptions directly into the model-based signal processor. A significant challenge was the development of the framework which would maximize the usefulness of the signal processing algorithms while ensuring the accuracy of the mathematical description of the chemical-mechanical signal. Experimentally, the difficulty was to identify and characterize the non
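    A model-based processor of the kind described above can be pictured, in its simplest form, as a Kalman filter built on a Gauss-Markov state model, with the innovation sequence carrying the detection information. The scalar sketch below uses assumed transition and noise parameters and synthetic deflection data; the actual processor, model order, and fitted parameters in the report differ.

      # Sketch of a scalar Gauss-Markov model-based processor (Kalman filter)
      # applied to noisy deflection data; the innovation sequence is collected.
      import numpy as np

      a, q, r = 0.98, 1e-4, 1e-2          # state transition, process and measurement noise
      x_est, p = 0.0, 1.0                  # initial state estimate and covariance

      rng = np.random.default_rng(5)
      truth = np.cumsum(rng.normal(0, 0.01, 500)) + 0.5     # synthetic drifting deflection
      meas = truth + rng.normal(0, 0.1, 500)

      innovations = []
      for z in meas:
          x_pred, p_pred = a * x_est, a * a * p + q         # predict
          innov = z - x_pred                                # innovation (measurement residual)
          gain = p_pred / (p_pred + r)                      # Kalman gain
          x_est, p = x_pred + gain * innov, (1 - gain) * p_pred
          innovations.append(innov)

      print("innovation variance:", np.var(innovations))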

  10. Proceedings of the array signal processing symposium: Treaty Verification Program

    SciTech Connect

    Harris, D.B.

    1988-02-01

    A common theme underlying the research these groups conduct is the use of propagating waves to detect, locate, image or otherwise identify features of the environment significant to their applications. The applications considered in this symposium are verification of nuclear test ban treaties, non-destructive evaluation (NDE) of manufactured components, and sonar and electromagnetic target acquisition and tracking. These proceedings cover just the first two topics. In these applications, arrays of sensors are used to detect propagating waves and to measure the characteristics that permit interpretation. The reason for using sensor arrays, which are inherently more expensive than single-sensor systems, is twofold. First, by combining the signals from multiple sensors, it is usually possible to suppress unwanted noise, which permits detection and analysis of weaker signals. Secondly, in complicated situations in which many waves are present, arrays make it possible to separate the waves and to measure their individual characteristics (direction, velocity, etc.). Other systems (such as three-component sensors in the seismic application) can perform these functions to some extent, but none are so effective and versatile as arrays. The objectives of test ban treaty verification are to detect, locate and identify underground nuclear explosions, and to discriminate them from earthquakes and conventional chemical explosions. Two physical modes of treaty verification are considered: monitoring with arrays of seismic stations (solid-earth propagation), and monitoring with arrays of acoustic (infrasound) stations (atmospheric propagation). The majority of the presentations represented in these proceedings address various aspects of the seismic verification problem.

  11. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEASURES grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  12. Simulation and Data Processing for Ultrasonic Phased-Arrays Applications

    NASA Astrophysics Data System (ADS)

    Chaffaï-Gargouri, S.; Chatillon, S.; Mahaut, S.; Le Ber, L.

    2007-03-01

    The use of phased-array techniques has contributed considerably to extending the range of applications and the performance of ultrasonic methods on complex configurations. Their adaptability offers great freedom in conceiving the inspection, leading to a wide range of functionalities including electronic commutation, the application of different delay laws, and so on. This advantage makes it possible to circumvent the difficulties encountered with more classical techniques, especially when the inspection is assisted by simulation at the different stages: probe design (optimization of the number and characteristics of the elements), evaluation of performance in terms of flaw detection (zone coverage) and characterization, driving the array (computation of adapted delay laws) and finally analysis of the results (versatile model-based imaging tools allowing, in particular, the data to be located in real space). The CEA is strongly involved in the development of efficient simulation-based tools adapted to these needs. In this communication we present recent advances made at CEA in this field and show several examples of complex NDT phased-array applications. For these cases we show the interest and performance of simulation-assisted array design, array driving, and data analysis.
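    The "computation of adapted delay laws" mentioned above reduces, in the simplest case, to timing each element so that all wavefronts arrive at the focal point simultaneously. The sketch below computes such a focusing delay law for a linear array in a homogeneous medium; the element count, pitch, focal point, and wave speed are illustrative assumptions, not the CEA implementation.

      # Sketch of a focusing delay law for a linear ultrasonic array: each element
      # is fired early by its time-of-flight advantage to the focal point.
      import numpy as np

      def focal_delay_law(n_elements, pitch, focus_xz, c=5900.0):
          """Return per-element firing delays (s) focusing at focus_xz = (x, z)."""
          x_elem = (np.arange(n_elements) - (n_elements - 1) / 2.0) * pitch
          tof = np.hypot(focus_xz[0] - x_elem, focus_xz[1]) / c   # time of flight
          return tof.max() - tof                   # farthest element fires first (delay 0)

      delays = focal_delay_law(16, 0.6e-3, (0.0, 20e-3))          # focus 20 mm deep
      print(np.round(delays * 1e9, 1))                            # delays in nanoseconds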

  13. MITAS: multisensor imaging technology for airborne surveillance

    NASA Astrophysics Data System (ADS)

    Thomas, John D.

    1991-08-01

    MITAS, a unique and low-cost solution to the problem of collecting and processing multisensor imaging data for airborne surveillance operations, has been developed. MITAS results from integrating the established and proven real-time video processing, target tracking, and sensor management software of TAU with commercially available image exploitation and map processing software. The MITAS image analysis station (IAS) supports airborne day/night reconnaissance and surveillance missions involving low-altitude collection platforms employing a suite of sensors to perform reconnaissance functions against a variety of ground and sea targets. The system will detect, locate, and recognize threats likely to be encountered in support of counternarcotic operations and in low-intensity conflict areas. The IAS is capable of autonomous, near real-time target exploitation and has the appropriate communication links to remotely located IAS systems for more extended analysis of sensor data. The IAS supports the collection, fusion, and processing of three main imaging sensors: daylight imagery (DIS), forward looking infrared (FLIR), and infrared line scan (IRLS). The MITAS IAS provides support to all aspects of the airborne surveillance mission, including sensor control, real-time image enhancement, automatic target tracking, sensor fusion, freeze-frame capture, image exploitation, target database management, map processing, remote image transmission, and report generation.

  14. Structure and Process of Infrared Hot Electron Transistor Arrays

    PubMed Central

    Fu, Richard

    2012-01-01

    An infrared hot-electron transistor (IHET) 5 × 8 array with a common base configuration that allows two-terminal readout integration was investigated and fabricated for the first time. The IHET structure provides a maximum factor of six in improvement in the photocurrent to dark current ratio compared to the basic quantum well infrared photodetector (QWIP), and hence it improved the array S/N ratio by the same factor. The study also showed for the first time that there is no electrical cross-talk among individual detectors, even though they share the same emitter and base contacts. Thus, the IHET structure is compatible with existing electronic readout circuits for photoconductors in producing sensitive focal plane arrays. PMID:22778655

  15. Multisensor robot navigation system

    NASA Astrophysics Data System (ADS)

    Persa, Stelian; Jonker, Pieter P.

    2002-02-01

    Almost all robot navigation systems work indoors. Outdoor robot navigation systems offer the potential for new application areas. The biggest single obstacle to building effective robot navigation systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. The hybrid tracker combines rate gyros and accelerometers with a compass, a tilt orientation sensor, and a DGPS system. Sensor distortions, delays and drift required compensation to achieve good results. The measurements from the sensors are fused together to compensate for each other's limitations. Analysis and experimental results demonstrate the system's effectiveness. The paper presents a field experiment for a low-cost strapdown-IMU (Inertial Measurement Unit)/DGPS combination, with data processing for the determination of 2-D components of position (trajectory), velocity and heading. In the present approach we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost ISA (Inertial Sensor Assembly) and because of the relatively small area of the trajectory. The scope of this experiment was to test the feasibility of an integrated DGPS/IMU system of this type and to develop a field evaluation procedure for such a combination.

  16. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays

    PubMed Central

    Lutton, Rebecca E.M.; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A.David; Donnelly, Ryan F.

    2015-01-01

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14 × 14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. PMID:26302858

  17. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    PubMed

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. PMID:26302858

  18. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
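    As a concrete picture of the wavelet-based fusion compared above, the sketch below decomposes two co-registered images with a 2-D DWT, merges detail coefficients by a maximum-magnitude rule, averages the approximations, and inverts the transform. The wavelet choice, merge rules, and random test images are assumptions; the report's algorithm and evaluation differ in detail.

      # Sketch of wavelet-based image fusion with a max-magnitude detail rule.
      import numpy as np
      import pywt

      def fuse_dwt(img_a, img_b, wavelet="db2"):
          ca, (cha, cva, cda) = pywt.dwt2(img_a, wavelet)
          cb, (chb, cvb, cdb) = pywt.dwt2(img_b, wavelet)
          def pick(p, q):                                   # keep the stronger detail
              return np.where(np.abs(p) >= np.abs(q), p, q)
          fused = (0.5 * (ca + cb),                         # average the approximations
                   (pick(cha, chb), pick(cva, cvb), pick(cda, cdb)))
          return pywt.idwt2(fused, wavelet)

      a = np.random.rand(128, 128)   # stand-ins for co-registered multispectral / pan bands
      b = np.random.rand(128, 128)
      print(fuse_dwt(a, b).shape)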

  19. Micromachined Thermoelectric Sensors and Arrays and Process for Producing

    NASA Technical Reports Server (NTRS)

    Foote, Marc C. (Inventor); Jones, Eric W. (Inventor); Caillat, Thierry (Inventor)

    2000-01-01

    Linear arrays with up to 63 micromachined thermopile infrared detectors on silicon substrates have been constructed and tested. Each detector consists of a suspended silicon nitride membrane with 11 thermocouples of sputtered Bi-Te and Bi-Sb-Te thermoelectric films. At room temperature and under vacuum these detectors exhibit response times of 99 ms, zero-frequency D* values of 1.4 x 10^9 cm Hz^(1/2)/W and responsivity values of 1100 V/W when viewing a 1000 K blackbody source. The only measured source of noise above 20 mHz is Johnson noise from the detector resistance. These results represent the best performance reported to date for an array of thermopile detectors. The arrays are well suited for uncooled dispersive point spectrometers. In another embodiment, also with Bi-Te and Bi-Sb-Te thermoelectric materials on micromachined silicon nitride membranes, detector arrays have been produced with D* values as high as 2.2 x 10^9 cm Hz^(1/2)/W for 83 ms response times.

  20. On the design of systolic-array architectures with applications to signal processing

    SciTech Connect

    Niamat, M.Y.

    1989-01-01

    Systolic arrays are networks of processors that rhythmically compute and pass data through systems. These arrays feature the important properties of modularity, regularity, local interconnections, and a high degree of pipelining and multiprocessing. In this dissertation, several systolic arrays are proposed with applications to real-time signal processing. Specifically, these arrays are designed for the rapid computation of positions, velocities, accelerations, and jerks associated with motion. Real-time computations of these parameters arise in many applications, notably in the areas of robotics, image processing, remote signal processing, and computer-controlled machines. The systolic arrays proposed in this dissertation can be classified into the linear, the triangular, and the mesh-connected types. In the linear category, six different systolic designs are presented. The relative merits of these designs are discussed in detail. It is found from the analysis of these designs that each of these arrays achieves a proportional increase in time. Also, by interleaving the input data items in some of these designs, the throughput rate is further doubled. This also increases the processor utilization rate to 100%. The triangular-type systolic array is found to be useful when all three parameters are to be computed simultaneously, and the mesh type, when the number of signals to be processed is extremely large. The effect of direct broadcasting of data to the processing cells is also investigated. Finally, the utility of the proposed systolic arrays is illustrated by a practical design example.

  1. Application of Seismic Array Processing to Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for the near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implement this method in a simulated real-time environment, and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 min, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800
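    The step above in which the back-projected rupture area is turned into a simple slip model via empirical scaling can be illustrated with the moment definition M0 = mu * A * D. The sketch below assumes an illustrative moment magnitude, rigidity, and elliptical rupture dimensions; these numbers are not taken from the paper, and the result feeds a tsunami solver such as COMCOT only conceptually here.

      # Sketch of converting a back-projected rupture area into an average slip
      # using the seismic moment definition M0 = mu * A * D.
      import numpy as np

      def average_slip(Mw, area_km2, mu=30e9):
          """Average slip (m) over a rupture of given area for moment magnitude Mw."""
          M0 = 10 ** (1.5 * Mw + 9.1)          # seismic moment in N*m
          return M0 / (mu * area_km2 * 1e6)    # area converted to m^2

      # ellipse roughly the size of a Tohoku-class rupture (500 km x 200 km axes)
      area_km2 = np.pi * (500.0 / 2) * (200.0 / 2)
      print(f"average slip: {average_slip(9.0, area_km2):.1f} m")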

  2. Body-Attachable and Stretchable Multisensors Integrated with Wirelessly Rechargeable Energy Storage Devices.

    PubMed

    Kim, Daeil; Kim, Doyeon; Lee, Hyunkyu; Jeong, Yu Ra; Lee, Seung-Jung; Yang, Gwangseok; Kim, Hyoungjun; Lee, Geumbee; Jeon, Sanggeun; Zi, Goangseup; Kim, Jihyun; Ha, Jeong Sook

    2016-01-27

    A stretchable multisensor system is successfully demonstrated with an integrated energy-storage device, an array of microsupercapacitors that can be repeatedly charged via a wireless radio-frequency power receiver on the same stretchable polymer substrate. The integrated devices are interconnected by liquid-metal interconnections and operate stably, without noticeable performance degradation, under the strain induced by skin attachment and under uniaxial strains of up to 50%. PMID:26641239

  3. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher-resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher-resolution reference. Simulated imagery was made by blurring higher-resolution color-infrared photography with the TM sensors' point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between fused images and the full-resolution reference. Image examples with TM and 10-m SPOT panchromatic data illustrate the reduction in artifacts due to the SIDWT-based fusion.
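
    A minimal sketch of shift-invariant wavelet fusion with a point-wise maximum selection rule, using the stationary wavelet transform from PyWavelets as a stand-in for the SIDWT; the HSV color-space step and the actual TM/SPOT data handling are omitted, and the synthetic inputs are placeholders.

    ```python
    import numpy as np
    import pywt

    def fuse_swt(img_a, img_b, wavelet="db2", level=2):
        """Fuse two coregistered grayscale images of equal, SWT-compatible size.

        Detail coefficients are combined with a point-wise maximum-magnitude
        rule; approximation coefficients are averaged.
        """
        ca = pywt.swt2(img_a, wavelet, level=level)
        cb = pywt.swt2(img_b, wavelet, level=level)
        fused = []
        for (a_app, a_det), (b_app, b_det) in zip(ca, cb):
            f_app = 0.5 * (a_app + b_app)
            f_det = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                          for da, db in zip(a_det, b_det))
            fused.append((f_app, f_det))
        return pywt.iswt2(fused, wavelet)

    # Example with synthetic data (sizes must be divisible by 2**level).
    rng = np.random.default_rng(0)
    low_res = rng.normal(size=(64, 64))
    high_res = rng.normal(size=(64, 64))
    print(fuse_swt(low_res, high_res).shape)  # (64, 64)
    ```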

  4. Characterizing frozen ground with multisensor remote sensing

    NASA Astrophysics Data System (ADS)

    Csatho, B. M.; Ping, C.; Everett, L. R.; Kimble, J. M.; Michaelson, G.; Tremper, C.

    2006-12-01

    We have a physically based, conceptual understanding of many of the significant interactions that impact permafrost-affected soils. Our observationally based knowledge, however, is inadequate in many cases to quantify these interactions or to predict their net impact. To pursue key goals, such as understanding the response of permafrost-affected soil systems to global environmental changes and their role in the carbon balance, and to transform our conceptual understanding of these processes into quantitative knowledge, it is necessary to acquire geographically diverse sets of fundamental observations at high spatial and often temporal resolution. The main goals of the research presented here are developing methods for mapping soil and permafrost distributions in polar environments as well as characterizing glacial and periglacial geomorphology from multisensor, multiresolution remotely sensed data. The sheer amount of data and the disparate data sets (e.g., LIDAR, stereo imagery, multi- and hyperspectral, and SAR imagery) make the joint interpretation (fusion) a daunting task. We combine remote sensing, pattern recognition and landscape analysis techniques for the delineation of soil landscape units and other geomorphic features, for inferring the physical properties and composition of the surface, and for generating numerical measurements of geomorphic features from remotely sensed data. Examples illustrating the concept are presented from the North Slope of Alaska and from the McMurdo Sound region in Antarctica. (1) On the North Slope of Alaska, we separated different vegetation, soil and landscape units along the Haul Road. Point-source soils (pedon) data and field spectrometry data have been acquired at different units to provide ground-truth for the satellite image interpretation. (2) A vast amount of remote sensing data, such as multi- and hyperspectral (Landsat, SPOT, ASTER, HYPERION) and SAR satellite imagery (ERS, RADARSAT and JERS), high resolution topographic

  5. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at a European longitude, each with 400 12-m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitor and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.

  6. A solar array module fabrication process for HALE solar electric UAVs

    SciTech Connect

    Carey, P.G.; Aceves, R.C.; Colella, N.J.; Thompson, J.B.; Williams, K.A.

    1993-12-01

    We describe a fabrication process to manufacture high power-to-weight-ratio flexible solar array modules for use on high altitude long endurance (HALE) solar electric unmanned air vehicles (UAVs). A span-loaded flying wing vehicle, known as the RAPTOR Pathfinder, is being employed as a flying test bed to expand the envelope of solar powered flight to high altitudes. It requires multiple lightweight, flexible solar array modules able to endure adverse environmental conditions. At high altitudes the solar UV flux is significantly enhanced relative to sea level, and extreme thermal variations occur. Our process involves first electrically interconnecting solar cells into an array, followed by laminating them between top and bottom laminate layers to form a solar array module. After careful evaluation of candidate polymers, fluoropolymer materials have been selected as the array laminate layers because of their inherent abilities to withstand the hostile conditions imposed by the environment.

  7. Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine Nagib

    1994-01-01

    We address various issues dealing with the use of disk arrays in transaction processing environments. We look at the problem of transaction undo recovery and propose a scheme for using the redundancy in disk arrays to support undo recovery. The scheme uses twin page storage for the parity information in the array. It speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters as well as temporary site failures and disk crashes is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.
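
    The parity redundancy such schemes build on can be illustrated with a toy sketch (not the twin-page parity layout itself): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

    ```python
    import functools
    import os

    def xor_blocks(blocks):
        """Bytewise XOR of equally sized blocks."""
        return functools.reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data_blocks = [os.urandom(16) for _ in range(4)]   # blocks striped across 4 disks
    parity = xor_blocks(data_blocks)                   # stored as redundant information

    # Simulate losing disk 2 and rebuilding its block from parity plus survivors.
    survivors = data_blocks[:2] + data_blocks[3:]
    rebuilt = xor_blocks(survivors + [parity])
    print("block 2 recovered:", rebuilt == data_blocks[2])
    ```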

  8. Design, processing and testing of LSI arrays, hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.; Rothrock, C. W.

    1979-01-01

    Mathematical cost models previously developed for hybrid microelectronic subsystems were refined and expanded. Rework terms related to substrate fabrication, nonrecurring developmental and manufacturing operations, and prototype production are included. Sample computer programs were written to demonstrate hybrid microelectronic applications of these cost models. Computer programs were generated to calculate and analyze values for the total microelectronics costs. Large scale integrated (LSI) chips utilizing tape chip carrier technology were studied. The feasibility of interconnecting arrays of LSI chips utilizing tape chip carrier and semiautomatic wire bonding technology was demonstrated.

  9. Visual programming environment for multisensor data fusion

    NASA Astrophysics Data System (ADS)

    Hall, David L.; Kasmala, Gerald

    1996-06-01

    In recent years, numerous multisensor data fusion systems have been developed for a wide variety of applications. Defense related applications include: automatic target recognition systems, identification-friend-foe-neutral, automated situation assessment and threat assessment systems, and systems for smart weapons. Non-defense applications include: robotics, condition-based maintenance, environmental monitoring, and medical diagnostics. For each of these applications, multiple sensor data are combined to achieve inferences which are not generally possible using only a single sensor. Implementation of these data fusion systems often involves a significant amount of effort. In particular, software must be developed for components such as data base access, human computer interfaces and displays, communication software, and data fusion algorithms. While commercial software packages exist to assist development of data bases, communications, and human computer interfaces, there are no general purpose packages available to support the implementation of the data fusion algorithms. This paper describes a visual programming tool developed to assist in rapid prototyping of data fusion systems. This toolkit is modeled after the popular tool, Khoros, used by the image processing community. The tool described here is written in visual C, and provides the capability to rapidly implement and apply data fusion algorithms. An application to condition based maintenance is described.

  10. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

    The architecture of an autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is described in this paper. In order to acquire more intelligence and robustness, accurate real-time collection and processing of information are realized by using this technology. The method to achieve robot vision and multi-sensor fusion is discussed in detail. Simulation results for several operating modes show that this intelligent vehicle performs well in obstacle identification and avoidance and in path planning, which provides higher reliability during vehicle operation.

  11. Radar imaging and high-resolution array processing applied to a classical VHF-ST profiler

    NASA Astrophysics Data System (ADS)

    Hélal, D.; Crochet, M.; Luce, H.; Spano, E.

    2001-01-01

    Among the spaced antenna methods used in the field of atmospheric studies, radar interferometry has been of great interest to many authors. A first approach is to use the phase information contained in the cross-spectra between antenna output signals and to retrieve the direction of arrival (DOA) of discrete scatterers. The second one introduces a phase shift between the antenna signals in order to steer the main beam of the antenna towards a desired direction. This paper deals with the latter technique and presents a variant of postset beam steering (PBS) which does not require a multi-receiver system. Indeed, the data samples are taken alternately on each antenna by means of high-commutation-rate switches inserted before a unique receiver. This low-cost technique is called "sequential PBS" (SPBS) and has been implemented on two classical VHF-ST radars. The present paper shows that the high flexibility of SPBS in angular scanning allows radar imaging to be performed. Despite a limited maximum range due to the antennas' scanning, the collected data give a view of the boundary layer and the lower troposphere over a wide horizontal extent, with characteristic horizontally stratified structures in the lower troposphere. These structures are also detected by application of high-resolution imaging processing such as Capon's beamforming or the Multiple Signal Classification algorithm. The proposed method can be a simple way to enhance the versatility of classical DBS radars in order to extend them for multi-sensor applications and local meteorology.
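
    The core of the post-set beam steering idea can be sketched with numpy for a uniform linear array: narrowband signals recorded on each antenna are phase-shifted and summed to point the main beam in a chosen direction. The sequential sampling hardware and the Capon/MUSIC imaging steps are not modeled, and all numbers are illustrative.

    ```python
    import numpy as np

    c = 3e8                           # propagation speed (m/s)
    f = 50e6                          # assumed VHF carrier (Hz)
    lam = c / f
    d = lam / 2                       # element spacing
    n_ant = 8
    angle_true = np.deg2rad(12.0)     # direction of a single scatterer

    # Synthetic narrowband snapshots received on each antenna.
    t = np.arange(256) / 1e3
    steer_true = np.exp(1j * 2 * np.pi * d * np.arange(n_ant) * np.sin(angle_true) / lam)
    x = steer_true[:, None] * np.exp(1j * 2 * np.pi * 5.0 * t)[None, :]
    x += 0.1 * (np.random.randn(n_ant, t.size) + 1j * np.random.randn(n_ant, t.size))

    # Post-set beam steering: apply conjugate phase shifts and sum over antennas.
    scan = np.deg2rad(np.arange(-30, 31, 1.0))
    power = []
    for th in scan:
        w = np.exp(-1j * 2 * np.pi * d * np.arange(n_ant) * np.sin(th) / lam)
        y = w @ x                                  # steered beam output
        power.append(np.mean(np.abs(y) ** 2))
    print("estimated arrival angle:", np.rad2deg(scan[int(np.argmax(power))]), "deg")
    ```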

  12. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  13. Monolithic optical phased-array transceiver in a standard SOI CMOS process.

    PubMed

    Abediasl, Hooman; Hashemi, Hossein

    2015-03-01

    Monolithic microwave phased arrays are turning mainstream in automotive radars and high-speed wireless communications, fulfilling Gordon Moore's 1965 prophecy to this effect. Optical phased arrays enable imaging, lidar, display, sensing, and holography. Advancements in fabrication technology have led to monolithic nanophotonic phased arrays, albeit without independent phase and amplitude control ability, integration with electronic circuitry, or receive and transmit functions. We report the first monolithic optical phased-array transceiver with independent control of amplitude and phase for each element using electronic circuitry that is tightly integrated with the nanophotonic components on one substrate using a commercial foundry CMOS SOI process. The 8 × 8 phased array chip includes thermo-optical tunable phase shifters and attenuators, nano-photonic antennas, and dedicated control electronics realized using CMOS transistors. The complex chip includes over 300 distinct optical components and over 74,000 distinct electrical components, achieving the highest level of integration for any electronic-photonic system. PMID:25836869

  15. Post-processing of guided wave array data for high resolution pipe inspection.

    PubMed

    Velichko, Alexander; Wilcox, Paul D

    2009-12-01

    This paper describes a method for processing data from a guided wave transducer array on a pipe. The raw data set from such an array contains the full matrix of time-domain signals from each transmitter-receiver combination. It is shown that for certain configurations of an array, the total focusing method can be applied, which allows the array to be focused at every point on a pipe in both transmission and reception. The effect of array configuration parameters on the sensitivity of the proposed method to random and coherent noise is discussed. Experimental results are presented using electromagnetic acoustic transducers for exciting and detecting the S(0) Lamb wave mode in a 12-in. diameter steel pipe at 200 kHz excitation frequency. The results show that using the imaging algorithm, a 2-mm (0.08 wavelength) diameter half-thickness hole can be detected.
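
    A generic total focusing method (TFM) sketch for a small array in a homogeneous medium: each transmit-receive A-scan is sampled at the round-trip delay for every image pixel and summed. Guided-wave dispersion and the pipe geometry of the paper are not included; the scatterer, array, and sampling parameters below are made up for illustration.

    ```python
    import numpy as np

    def tfm_image(fmc, el_pos, grid_x, grid_z, c, fs):
        """fmc[i, j, :] is the A-scan for transmitter i and receiver j."""
        n_el = len(el_pos)
        image = np.zeros((len(grid_z), len(grid_x)))
        for ix, x in enumerate(grid_x):
            for iz, z in enumerate(grid_z):
                dist = np.hypot(el_pos - x, z)       # element-to-pixel distances
                acc = 0.0
                for i in range(n_el):
                    for j in range(n_el):
                        k = int(round((dist[i] + dist[j]) / c * fs))
                        if k < fmc.shape[2]:
                            acc += fmc[i, j, k]
                image[iz, ix] = abs(acc)
        return image

    # Tiny synthetic example: 4 elements, one point scatterer at (0, 20 mm).
    c, fs = 6000.0, 25e6
    el_pos = np.linspace(-3e-3, 3e-3, 4)
    fmc = np.zeros((4, 4, 600))
    scat = np.array([0.0, 20e-3])
    for i in range(4):
        for j in range(4):
            tof = (np.hypot(el_pos[i] - scat[0], scat[1]) +
                   np.hypot(el_pos[j] - scat[0], scat[1])) / c
            fmc[i, j, int(round(tof * fs))] = 1.0
    img = tfm_image(fmc, el_pos, np.linspace(-5e-3, 5e-3, 21),
                    np.linspace(15e-3, 25e-3, 21), c, fs)
    print("peak at grid index (iz, ix):", np.unravel_index(img.argmax(), img.shape))
    ```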

  16. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is markedly reduced, to 1/3 of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  17. View and sensor planning for multi-sensor surface inspection

    NASA Astrophysics Data System (ADS)

    Gronle, Marc; Osten, Wolfgang

    2016-06-01

    Modern manufacturing processes enable the fabrication of high-value parts with high precision and performance. At the same time, the demand for flexible on-demand production of individual objects is continuously increasing. These requirements can only be met if inspection systems provide appropriate answers. One solution is the use of flexible, multi-sensor setups where multiple optical sensors with different fields of application are combined in one system. However, the challenge is then to assist the user in planning the inspection for individual parts. Manual planning requires expert knowledge of the performance and functionality of every sensor. Therefore, software assistant systems help the user to objectively select the right sensors for a given inspection task. The planning step becomes still more difficult if the manufactured part has a complex form. The implication is that a sensor’s position must also be part of the planning process since it significantly influences the quality of the inspection. This paper describes a view and sensor planning approach for a multi-sensor surface inspection system in the context of optical topography measurements in the micro- and meso-scale range. In order to realize online processing of the assistant system, a significant part of the calculations are done on the graphics processing unit (GPU).

  18. Programmable hyperspectral image mapper with on-array processing

    NASA Technical Reports Server (NTRS)

    Cutts, James A. (Inventor)

    1995-01-01

    A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction, an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time, an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time and an electronic controller connected to the focal plane, to the electronic shutter and to the electronic spectral filter for controlling (1) the controllable rate at which the recording is transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band so as to record a selected portion of the scene through M spectral bands with a respective exposure time t(sub q) for each respective spectral band q.

  19. Signal and array processing techniques for RFID readers

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

    Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement directions of tagged RFID items through a portal are of interest and must be determined. Doppler estimation may prove complicated or impractical for RFID readers. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. Particularly, we examine the use of a pair of RFID antennas to track moving RFID tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way of identifying the direction of the tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking of the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments are conducted in the Radar Imaging and RFID Labs at Villanova University for validating the proposed technique for target movement direction estimation.
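
    A minimal sketch of the two-antenna idea: under a far-field, narrowband assumption, the phase of the spatial cross-correlation between the two reader channels gives the electrical phase difference, which maps to an arrival angle. Tag modulation, antenna patterns, and near-field effects are ignored, and the carrier frequency and spacing are assumed values.

    ```python
    import numpy as np

    c = 3e8
    f = 915e6                        # assumed UHF RFID carrier
    lam = c / f
    d = lam / 2                      # antenna separation
    theta_true = np.deg2rad(20.0)

    # Complex baseband samples at the two reader antennas (single tag plus noise).
    n = 2000
    s = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n))
    noise = lambda: 0.05 * (np.random.randn(n) + 1j * np.random.randn(n))
    x1 = s + noise()
    x2 = s * np.exp(-1j * 2 * np.pi * d * np.sin(theta_true) / lam) + noise()

    # Spatial cross-correlation between the two channels.
    r12 = np.mean(x1 * np.conj(x2))
    theta_est = np.arcsin(np.angle(r12) * lam / (2 * np.pi * d))
    print("estimated angle:", np.rad2deg(theta_est), "deg")
    ```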

  20. Model-based Processing of Microcantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Sinensky, A K; Lee, C L; Rudd, R E; Burnham, A K

    2005-04-27

    We have developed a model-based processor (MBP) for a microcantilever-array sensor to detect target species in solution. We perform a proof-of-concept experiment, fit model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest, averaged deflection data and multi-channel data. For this evaluation we extract model parameters via a model-based estimation, perform a Gauss-Markov simulation, design the optimal MBP and apply it to measured experimental data. The performance of the MBP in the multi-channel case is evaluated by comparison to a "smoother" (averager) typically used for microcantilever signal analysis. It is shown that the MBP not only provides a significant gain (approximately 80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, apart from a correctable systematic bias error.
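
    The contrast between a model-based processor and a simple smoother can be sketched generically with a scalar Gauss-Markov signal and a one-state Kalman filter; this is an illustrative stand-in, not the MBP or the data of the report, and the noise variances are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, a = 500, 0.99
    q, r = 1e-4, 0.05                  # process and measurement noise variances

    # Simulate a scalar Gauss-Markov "deflection" signal and noisy measurements.
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    y = x + np.sqrt(r) * rng.standard_normal(n)

    # Model-based (Kalman) estimate.
    xhat, p, est = 0.0, 1.0, np.zeros(n)
    for k in range(n):
        xhat, p = a * xhat, a * a * p + q                    # predict
        g = p / (p + r)                                      # gain
        xhat, p = xhat + g * (y[k] - xhat), (1 - g) * p      # update
        est[k] = xhat

    # Simple smoother (moving average) for comparison.
    win = 15
    smooth = np.convolve(y, np.ones(win) / win, mode="same")

    print(f"MSE smoother = {np.mean((smooth - x) ** 2):.5f}, "
          f"MSE Kalman   = {np.mean((est - x) ** 2):.5f}")
    ```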

  1. Advanced techniques for array processing. Final report, 1 Mar 89-30 Apr 91

    SciTech Connect

    Friedlander, B.

    1991-05-30

    Array processing technology is expected to be a key element in communication systems designed for the crowded and hostile environment of the future battlefield. While advanced array processing techniques have been under development for some time, their practical use has been very limited. This project addressed some of the issues which need to be resolved for a successful transition of these promising techniques from theory into practice. The main problem which was studied was that of finding the directions of multiple co-channel transmitters from measurements collected by an antenna array. Two key issues related to high-resolution direction finding were addressed: effects of system calibration errors, and effects of correlation between the received signals due to multipath propagation. A number of useful theoretical performance analysis results were derived, and computationally efficient direction estimation algorithms were developed. These results include: self-calibration techniques for antenna arrays, sensitivity analysis for high-resolution direction finding, extensions of the root-MUSIC algorithm to arbitrary arrays and to arrays with polarization diversity, and new techniques for direction finding in the presence of multipath based on array interpolation. (Author)
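
    A compact MUSIC direction-finding sketch for a uniform linear array is shown below; the project's extensions (self-calibration, root-MUSIC for arbitrary arrays, polarization diversity, array interpolation) go beyond this toy example, and all scenario parameters are invented.

    ```python
    import numpy as np

    def music_spectrum(X, n_src, d_over_lam, scan_deg):
        """X: (n_antennas, n_snapshots) complex array data."""
        n_ant = X.shape[0]
        R = X @ X.conj().T / X.shape[1]                 # sample covariance
        _, V = np.linalg.eigh(R)                        # eigenvalues ascending
        En = V[:, : n_ant - n_src]                      # noise subspace
        p = []
        for th in np.deg2rad(scan_deg):
            a = np.exp(-1j * 2 * np.pi * d_over_lam * np.arange(n_ant) * np.sin(th))
            p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
        return np.array(p)

    rng = np.random.default_rng(2)
    n_ant, n_snap, d_over_lam = 8, 400, 0.5
    angles = np.deg2rad([-15.0, 25.0])
    A = np.exp(-1j * 2 * np.pi * d_over_lam * np.outer(np.arange(n_ant), np.sin(angles)))
    S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
    N = rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap))
    X = A @ S + 0.1 * N
    scan = np.arange(-60, 61, 0.5)
    spec = music_spectrum(X, n_src=2, d_over_lam=d_over_lam, scan_deg=scan)
    print("spectrum peaks near:", np.sort(scan[np.argsort(spec)[-2:]]), "deg")
    ```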

  2. Hybridization process for back-illuminated silicon Geiger-mode avalanche photodiode arrays

    NASA Astrophysics Data System (ADS)

    Schuette, Daniel R.; Westhoff, Richard C.; Loomis, Andrew H.; Young, Douglas J.; Ciampi, Joseph S.; Aull, Brian F.; Reich, Robert K.

    2010-04-01

    We present a unique hybridization process that permits high-performance back-illuminated silicon Geiger-mode avalanche photodiodes (GM-APDs) to be bonded to custom CMOS readout integrated circuits (ROICs) - a hybridization approach that enables independent optimization of the GM-APD arrays and the ROICs. The process includes oxide bonding of silicon GM-APD arrays to a transparent support substrate followed by indium bump bonding of this layer to a signal-processing ROIC. This hybrid detector approach can be used to fabricate imagers with high-fill-factor pixels and enhanced quantum efficiency in the near infrared as well as large-pixel-count, small-pixel-pitch arrays with pixel-level signal processing. In addition, the oxide bonding is compatible with high-temperature processing steps that can be used to lower dark current and improve optical response in the ultraviolet.

  3. Airborne Multisensor Pod System (AMPS) data management overview

    SciTech Connect

    Wiberg, J.D.; Blough, D.K.; Daugherty, W.R.; Hucks, J.A.; Gerhardstein, L.H.; Meitzler, W.D.; Melton, R.B.; Shoemaker, S.V.

    1994-09-01

    An overview of the Data Management Plan for the Airborne Multisensor Pod System (AMPS) program is provided in this document. The Pacific Northwest Laboratory (PNL) has been assigned the responsibility of data management for the program, which includes defining procedures for data management and data quality assessment. Data management is defined as the process of planning, acquiring, organizing, qualifying and disseminating data. The AMPS program was established by the U.S. Department of Energy (DOE), Office of Arms Control and Non-Proliferation (DOE/AN) and is integrated into the overall DOE AN-10.1 technology development program. Sensors used for collecting the data were developed under the on-site inspection, effluence analysis, and standoff sensor programs, and the AMPS program interacts with other technology programs of DOE/NN-20. This research will be conducted by both government and private industry. AMPS is a research and development program, and it is not intended for operational deployment, although the sensors and techniques developed could be used in follow-on operational systems. For a complete description of the AMPS program, see "Airborne Multisensor Pod System (AMPS) Program Plan". The primary purpose of the AMPS is to collect high-quality multisensor data to be used in data fusion research to reduce interpretation problems associated with data overload and to derive better information than can be derived from any single sensor. To collect the data for the program, three wing-mounted pods containing instruments with sensors for collecting data will be flight certified on a U.S. Navy RP-3A aircraft. Secondary objectives of the AMPS program are sensor development and technology demonstration. Pod system integrators and instrument developers will be interested in the performance of their deployed sensors and their supporting data acquisition equipment.

  4. Fiber-optic remote multisensor system based on an acousto-optic tunable filter (AOTF)

    SciTech Connect

    Moreau, F.; Moreau, S.M.; Hueber, D.M.; Vo-dinh, T.

    1996-10-01

    This paper describes a new fiber-optic multisensor based on an acousto-optic tunable filter (AOTF) and capable of remote sensing using a multioptical fiber array (MOFA). A two-dimensional charge-coupled device (CCD) was used as a detector, and the AOTF was used as a wavelength selector. Unlike a tunable grating or prism-based monochromator, an AOTF has no moving parts, and an AOTF can be rapidly tuned to any wavelength in its operating range within microseconds. The large aperture of the AOTF allows the optical signal from over 100 fiber-optic sensors to be measured simultaneously. These characteristics, combined with their small size, make AOTFs an important new alternative to conventional monochromators, especially for spectral multisensing and imaging. A prototype fiber-optic multisensor system has been developed, and its feasibility for simultaneous detection of molecular luminescence signal via fiber-optic probes is demonstrated. © 1996 Society for Applied Spectroscopy

  5. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems involve data-intensive computing, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will
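
    The map/reduce-over-named-arrays pattern described above can be caricatured in a few lines of Python; this is a toy, not SciReduce itself, and the shard contents, variable name, and statistic are invented.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    # Hypothetical shards: one bundle of named arrays per year (stand-ins for
    # netCDF4/HDF5 granules that a real system would pull on demand).
    shards = {year: {"water_vapor": np.random.rand(365, 18, 36)}
              for year in range(2003, 2006)}

    def map_year(item):
        year, bundle = item
        # Per-shard statistic: mean water vapor field for that year.
        return year, bundle["water_vapor"].mean(axis=0)

    def reduce_all(partials):
        # Combine the per-year means into a multi-year climatology.
        return np.mean([field for _, field in partials], axis=0)

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            partials = list(pool.map(map_year, shards.items()))
        climatology = reduce_all(partials)
        print(climatology.shape)  # (18, 36)
    ```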

  6. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2013-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems involve data-intensive computing, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  7. Multisensor fusion for 3-D defect characterization using wavelet basis function neural networks

    NASA Astrophysics Data System (ADS)

    Lim, Jaein; Udpa, Satish S.; Udpa, Lalita; Afzal, Muhammad

    2001-04-01

    The primary objective of multi-sensor data fusion, which offers both quantitative and qualitative benefits, is to draw inferences that may not be feasible with data from a single sensor alone. In this paper, data from two sets of sensors are fused to estimate the defect profile from magnetic flux leakage (MFL) inspection data. The two sensors measure the axial and circumferential components of the MFL. Data are fused at the signal level. If the flux is oriented axially, the samples of the axial signal are measured along a direction parallel to the flaw, while the circumferential signal is measured in a direction that is perpendicular to the flaw. The two signals are combined as the real and imaginary components of a complex-valued signal. Signals from an array of sensors are arranged in contiguous rows to obtain a complex-valued image. A boundary extraction algorithm is used to extract the defect areas in the image. Signals from the defect regions are then processed to minimize noise and the effects of lift-off. Finally, a wavelet basis function (WBF) neural network is employed to map the complex-valued image appropriately to obtain the geometrical profile of the defect. The feasibility of the approach was evaluated using data obtained from the MFL inspection of natural gas transmission pipelines. Results show the effectiveness of the approach.
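
    The signal-level fusion step described above reduces to a one-line combination once the channels are aligned; the sketch below uses random placeholder data and omits the boundary extraction and the WBF network.

    ```python
    import numpy as np

    n_sensors, n_samples = 16, 256
    rng = np.random.default_rng(3)

    # Hypothetical MFL scans: one row per sensor in the array.
    axial = rng.normal(size=(n_sensors, n_samples))            # parallel to the flaw
    circumferential = rng.normal(size=(n_sensors, n_samples))  # perpendicular to the flaw

    # Signal-level fusion: real part = axial, imaginary part = circumferential.
    complex_image = axial + 1j * circumferential

    magnitude = np.abs(complex_image)    # combined flaw indication
    phase = np.angle(complex_image)      # relative orientation information
    print(complex_image.shape, magnitude.shape, phase.shape)
    ```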

  8. Electro-optical processing of phased array data

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1973-01-01

    An on-line spatial light modulator for application as the input transducer for a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with real time radar data with the input data returns recorded on the input crystal, processed by the optical system, and the output plane pattern digitized, thresholded, and outputted to a display and storage in the computer memory. The correlation of theoretical and experimental results is discussed.

  9. CD uniformity improvement of dense contact array in negative tone development process

    NASA Astrophysics Data System (ADS)

    Tsai, Fengnien; Yeh, Teng-hao; Yang, C. C.; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2015-03-01

    Layout pattern density impacts mask critical dimension uniformity (MCDU) as well as wafer critical dimension uniformity (WCDU) performance in several ways. In patterning a dense contact array with a negative tone development (NTD) process, the abrupt pattern density change around the array edge of an NTD clear tone reticle arises as a very challenging issue for achieving satisfactory WCDU. Around the array boundary, apart from the MCDU being greatly impacted by the abrupt pattern density change, WCDU in the lithographic process is also significantly influenced by optical flare and chemical flare effects. This study investigates pattern-density-induced MCD and WCD variations. Various pattern densities are generated by the combination of a fixed array pattern and various sub-resolution assist feature (SRAF) extension regions for quantifying the separate WCD variation budgets contributed by MCD variation, the chemical flare effect and the optical flare effect. With proper pattern density modulation outside the array pattern on a clear tone reticle, MCD variation across the array can be eliminated, and the WCD variation induced by optical flare and chemical flare effects is also greatly suppressed.

  10. Critical Dimension Control for 32 nm Node Random Contact Hole Array Using Resist Reflow Process

    NASA Astrophysics Data System (ADS)

    Park, Joon-Min; Kang, Young-Min; Hong, Joo-Yoo; Oh, Hye-Keun

    2008-02-01

    A 50 nm contact hole (CH) random array fabricated by the resist reflow process (RRP) was studied to produce 32 nm node devices. RRP is widely used for mass production of semiconductor devices, but RRP has some restrictions because the reflow strongly depends on the array, pitch, and shape of the CHs. Thus, we must have full knowledge of pattern dependency after RRP, and we need an optimum optical proximity corrected mask including RRP to compensate for the pattern dependency in a random array. To fabricate an optimum optical proximity- and RRP-corrected mask, we must have a better understanding of how much the resist flows and where the CHs are located after RRP. A simulation is carried out to correctly predict the RRP result by including RRP parameters such as viscosity, adhesion force, surface tension, and location of the CH. As a result, we obtained uniform 50 nm CH patterns even for random and differently shaped CH arrays by optical proximity-corrected RRP.

  11. Reducing multi-sensor data to a single time course that reveals experimental effects

    PubMed Central

    2013-01-01

    Background Multi-sensor technologies such as EEG, MEG, and ECoG result in high-dimensional data sets. Given the high temporal resolution of such techniques, scientific questions very often focus on the time-course of an experimental effect. In many studies, researchers focus on a single sensor or the average over a subset of sensors covering a “region of interest” (ROI). However, single-sensor or ROI analyses ignore the fact that the spatial focus of activity is constantly changing, and fail to make full use of the information distributed over the sensor array. Methods We describe a technique that exploits the optimality and simplicity of matched spatial filters in order to reduce experimental effects in multivariate time series data to a single time course. Each (multi-sensor) time sample of each trial is replaced with its projection onto a spatial filter that is matched to an observed experimental effect, estimated from the remaining trials (Effect-Matched Spatial filtering, or EMS filtering). The resulting set of time courses (one per trial) can be used to reveal the temporal evolution of an experimental effect, which distinguishes this approach from techniques that reveal the temporal evolution of an anatomical source or region of interest. Results We illustrate the technique with data from a dual-task experiment and use it to track the temporal evolution of brain activity during the psychological refractory period. We demonstrate its effectiveness in separating the means of two experimental conditions, and in significantly improving the signal-to-noise ratio at the single-trial level. It is fast to compute and results in readily-interpretable time courses and topographies. The technique can be applied to any data-analysis question that can be posed independently at each sensor, and we provide one example, using linear regression, that highlights the versatility of the technique. Conclusion The approach described here combines established techniques in a
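
    A condensed sketch of the effect-matched spatial filtering idea for two conditions: for each held-out trial, the spatial filter is the difference of condition means computed from the remaining trials, and the trial is projected onto it at every time sample. Normalization details and the cross-validation scheme of the published method are simplified, and the synthetic 'effect' is invented.

    ```python
    import numpy as np

    def ems_filter(trials, labels):
        """trials: (n_trials, n_sensors, n_times); labels: 0/1 per trial."""
        n_trials = trials.shape[0]
        out = np.zeros((n_trials, trials.shape[2]))
        for i in range(n_trials):
            keep = np.arange(n_trials) != i                       # leave one trial out
            diff = (trials[keep & (labels == 1)].mean(axis=0) -
                    trials[keep & (labels == 0)].mean(axis=0))    # (n_sensors, n_times)
            w = diff / np.linalg.norm(diff, axis=0, keepdims=True)  # unit filter per sample
            out[i] = np.einsum("st,st->t", trials[i], w)          # project held-out trial
        return out

    rng = np.random.default_rng(4)
    n_trials, n_sensors, n_times = 40, 32, 100
    labels = np.repeat([0, 1], n_trials // 2)
    effect = np.outer(rng.normal(size=n_sensors), np.hanning(n_times))  # fixed topography
    trials = rng.normal(size=(n_trials, n_sensors, n_times)) + labels[:, None, None] * effect
    surrogate = ems_filter(trials, labels)
    print("condition means separated:",
          surrogate[labels == 1].mean() > surrogate[labels == 0].mean())
    ```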

  12. Design, processing and testing of LSI arrays: Hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.

    1979-01-01

    Mathematical cost factors were generated for both hybrid microcircuit and printed wiring board packaging methods. A mathematical cost model was created for analysis of microcircuit fabrication costs. The costing factors were refined and reduced to formulae for computerization. Efficient methods were investigated for low cost packaging of LSI devices as a function of density and reliability. Technical problem areas such as wafer bumping, inner/outer leading bonding, testing on tape, and tape processing, were investigated.

  13. Process development for cell aggregate arrays encapsulated in a synthetic hydrogel using negative dielectrophoresis.

    PubMed

    Abdallat, Rula G; Ahmad Tajuddin, Aziela S; Gould, David H; Hughes, Michael P; Fatoyinbo, Henry O; Labeed, Fatima H

    2013-04-01

    Spatial patterning of cells is of great importance in tissue engineering and biotechnology, enabling, for example, the creation of bottom-up histoarchitectures of heterogeneous cells, or cell aggregates for in vitro high-throughput toxicological and therapeutic studies within 3D microenvironments. In this paper, a single-step process for creating peelable and resilient hydrogels, encapsulating arrays of biological cell aggregates formed by negative DEP, has been devised. The dielectrophoretic trapping within low-energy regions of the DEP-dot array reduces cell exposure to high field stresses while creating distinguishable, evenly spaced arrays of aggregates. In addition to using an optimal combination of PEG diacrylate pre-polymer solution concentration and a novel UV exposure mechanism, total processing time was reduced. With a continuous phase medium of PEG diacrylate at 15% v/v concentration, effective dielectrophoretic cell patterning and photo-polymerisation of the mixture were achieved within a 4 min period. This unique single-step process was achieved using a 30 s UV exposure time frame within a dedicated, wide exposure area DEP light box system. To demonstrate the developed process, aggregates of yeast, human leukemic (K562) and HeLa cells were immobilised in an array format within the hydrogel. Relative viability of the encapsulated cells within the hydrogels, maintained in appropriate iso-osmotic media over a one-week period, was greater than 90%. PMID:23436271

  14. Array Processing for Radar Clutter Reduction and Imaging of Ice-Bed Interface

    NASA Astrophysics Data System (ADS)

    Gogineni, P.; Leuschen, C.; Li, J.; Hoch, A.; Rodriguez-Morales, F.; Ledford, J.; Jezek, K.

    2007-12-01

    A major challenge in sounding of fast-flowing glaciers in Greenland and Antarctica is surface clutter, which masks weak returns from the ice-bed interface. The surface clutter is also a major problem in sounding and imaging sub-surface interfaces on Mars and other planets. We successfully applied array-processing techniques to reduce clutter and image ice-bed interfaces of polar ice sheets. These techniques and tools have potential applications to planetary observations. We developed a radar with array-processing capability to measure thickness of fast-flowing outlet glaciers and image the ice-bed interface. The radar operates over the frequency range from 140 to 160 MHz with about an 800- Watt peak transmit power with transmit and receive antenna arrays. The radar is designed such that pulse width and duration are programmable. The transmit-antenna array is fed with a beamshaping network to obtain low sidelobes. We designed the receiver such that it can process and digitize signals for each element of an eight- channel array. We collected data over several fast-flowing glaciers using a five-element antenna array, limited by available hardpoints to mount antennas, on a Twin Otter aircraft during the 2006 field season and a four-element array on a NASA P-3 aircraft during the 2007 field season. We used both adaptive and non-adaptive signal-processing algorithms to reduce clutter. We collected data over the Jacobshavn Isbrae and other fast-flowing outlet glaciers, and successfully measured the ice thickness and imaged the ice-bed interface. In this paper, we will provide a brief description of the radar, discuss clutter-reduction algorithms, present sample results, and discuss the application of these techniques to planetary observations.

  15. Design, processing, and testing of LSI arrays for space station

    NASA Technical Reports Server (NTRS)

    Lile, W. R.; Hollingsworth, R. J.

    1972-01-01

    The design of a MOS 256-bit Random Access Memory (RAM) is discussed. Technological achievements comprise computer simulations that accurately predict performance; aluminum-gate COS/MOS devices including a 256-bit RAM with current sensing; and a silicon-gate process that is being used in the construction of a 256-bit RAM with voltage sensing. The Si-gate process increases speed by reducing the overlap capacitance between gate and source-drain, thus reducing the crossover capacitance and allowing shorter interconnections. The design of a Si-gate RAM, which is pin-for-pin compatible with an RCA bulk silicon COS/MOS memory (type TA 5974), is discussed in full. The Integrated Circuit Tester (ICT) is limited to dc evaluation, but the diagnostics and data collecting are under computer control. The Silicon-on-Sapphire Memory Evaluator (SOS-ME, previously called SOS Memory Exerciser) measures power supply drain and performs a minimum number of tests to establish operation of the memory devices. The Macrodata MD-100 is a microprogrammable tester which has capabilities of extensive testing at speeds up to 5 MHz. Beam-lead technology was successfully integrated with SOS technology to make a simple device with beam leads. This device and the scribing are discussed.

  16. a Post-Processing Technique for Guided Wave Array Data for the Inspection of Plate Structures

    NASA Astrophysics Data System (ADS)

    Velichko, A.; Wilcox, P. D.

    2008-02-01

    The paper describes a general approach for processing data from a guided wave transducer array on a plate-like structure. It is shown that improvements in resolution are obtained at the expense of sensitivity to noise. A method of quantifying this sensitivity is presented. Experimental data obtained from a guided wave array containing electromagnetic acoustic transducers (EMAT) elements for exciting and detecting the S0 Lamb wave mode in a 5-mm thick aluminium plate are processed with different algorithms and the results are discussed. Generalization of the technique for the case of multimode media is suggested.

  17. New three-dimensional fiber probe for multisensor coordinate measurement

    NASA Astrophysics Data System (ADS)

    Ettemeyer, Andreas

    2012-08-01

    Increasing manufacturing accuracy requirements drive the development of innovative and highly sensitive measuring tools. Especially for measurement with submicrometer accuracy, the sensor principle has to be chosen appropriately for each measurement surface. Modern multisensor coordinate measurement systems allow automatic selection of different sensor heads to measure different areas or properties of a sample. As an example, different types of optical sensors as well as tactile sensors can be used within the same measuring system. I describe different principles of optical sensors used in multisensor coordinate measurement systems as well as a new approach for tactile measurement with submicrometer accuracy. A special fiber probe has been developed. The tip of the fiber probe is formed as a sphere. The lateral position of this sphere is observed by a microscope objective and can be determined to within a fraction of a micrometer. Additionally, a novel optical setup now allows the determination of the z-position of the fiber tip with submicrometer accuracy. For this purpose, an interferometer setup is used. The laser light is coupled into the optical fiber. The light exiting the fiber tip is collected by the microscope objective and superposed with a reference wave, generated directly from the laser. The result is an interference signal that is recorded by the camera and processed by a computer. With this setup, the z-displacement of the fiber sphere can be measured with an accuracy of a fraction of the laser wavelength used.

  18. Infrared-optical multisensor for autonomous landing guidance

    NASA Astrophysics Data System (ADS)

    Kerr, Richard; Pond, Duane P.; Inman, Scott

    1995-06-01

    Infrared sensors at the nominal 8 - 12 and 3 - 5 micron wavebands respectively can be shown to have complementary performance characteristics when used over a range of meteorological conditions. The infrared/optical multisensor for the autonomous landing guidance system integrates staring longwave, midwave, and visible sensors into an environmentally sealed and purged assembly. The infrared modules include specific enhancements for the detection of runways under adverse weather conditions. The sensors incorporate pixel-for-pixel overlap registration, and the fields of view match a conformal head-up display with sensor/display boresighting to within a fraction of a pixel. Tower tests will be used to characterize the sensors and gather data to support simulation and image processing efforts. After integration with other elements of the autonomous landing guidance system, flight tests will be conducted on Air Force and commercial transport aircraft. In addition to display and analog video recording, the multisensor data will be digitally captured during critical flight test phases.

  19. Multisensor fusion for vehicle-mounted mine detection

    NASA Astrophysics Data System (ADS)

    Aponte, Hilda I.; Campana, Stephen B.; McGovern, Meghan A.

    2001-10-01

    Mine Hunter/Killer (MH/K) is an Advanced Technology Demonstration (ATD) program directed by the Army Night Vision Electronic Sensors Directorate (NVESD). The MH/K system consists of a vehicle-mounted system that detects and neutralizes surface and buried anti-tank (AT) mines. The detection element in this program consists of a Close-In Detection (CID) System that relies on a multi-sensor configuration. The CID System consists of three sensors: a ground penetrating radar (GPR), a metal detector (MD) and a forward-looking IR imaging system. TRW S and ITG has provided support for analysis, testing and algorithm development for Automatic Target Recognition and sensor fusion processing. This paper presents a multi-sensor fusion approach developed by TRW under this effort. In this approach, the incoming alarms from the three sensors are segregated into five classes, based on spatial coincidence of GPR and MD alarms, and on the presence of a surface null in the GPR depth profile. This GPR null, or 'notch', is indicative of shallowly buried objects or clutter, and helps in the discrimination against clutter. An adaptive threshold, based on local false alarm density, attempts to maintain a constant false alarm rate. This paper will describe this fusion methodology and the adaptive threshold method in detail, show the target and clutter probability density functions for each class, and show results from recent field tests. Fused results will be compared with single sensor performance, and strengths and weaknesses of each sensor will be discussed.

  20. Calibrating a novel multi-sensor physical activity measurement system

    PubMed Central

    John, D; Liu, S; Sasaki, J E; Howe, C A; Staudenmayer, J; Gao, R X; Freedson, P S

    2011-01-01

    Advancing the field of physical activity (PA) monitoring requires the development of innovative multi-sensor measurement systems that are feasible in the free-living environment. The use of novel analytical techniques to combine and process these multiple sensor signals is equally important. This paper describes a novel multi-sensor ‘Integrated PA Measurement System’ (IMS), the lab-based methodology used to calibrate the IMS, techniques used to predict multiple variables from the sensor signals, and proposes design changes to improve the feasibility of deploying the IMS in the free-living environment. The IMS consists of hip and wrist acceleration sensors, two piezoelectric respiration sensors on the torso, and an ultraviolet radiation sensor to obtain contextual information (indoors vs. outdoors) about PA. During lab-based calibration of the IMS, data were collected on participants performing a PA routine consisting of seven different ambulatory and free-living activities while wearing a portable metabolic unit (criterion measure) and the IMS. Data analyses on the first 50 adult participants are presented. These analyses were used to determine if the IMS can be used to predict the variables of interest. Finally, physical modifications for the IMS that could enhance feasibility of free-living use are proposed and refinement of the prediction techniques is discussed. PMID:21813941

  1. Critical dimension control for 32 nm random contact hole array with resist reflow process

    NASA Astrophysics Data System (ADS)

    Park, Joon-Min; Kang, Young-Min; Park, Seung-Wook; Hong, Joo-Yoo; Oh, Hye-Keun

    2007-10-01

    A 50 nm random contact hole array fabricated by the resist reflow process (RRP) was studied to make 32 nm node devices. Patterning a small contact hole array is harder than patterning lines and spaces. RRP has a lot of advantages, but RRP strongly depends on the pattern array, pitch, and shape. Thus, we must have full knowledge of pattern dependency after RRP, and then we need an optimum optical proximity corrected mask including RRP to compensate for the pattern dependency in a random array. To make an optimum optical proximity- and RRP-corrected mask, we must have a better understanding of how much the resist flows and where the contact holes are located after RRP. A simulation is made to correctly predict the RRP result by including the RRP parameters such as viscosity, adhesion force, surface tension and location of the contact hole. As a result, we made uniform 50 nm contact hole patterns even for random contact hole arrays and for differently shaped contact hole arrays by optical proximity corrected RRP.

  2. Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge

    1999-01-01

    A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.

  3. DAMAS Processing for a Phased Array Study in the NASA Langley Jet Noise Laboratory

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.; Plassman, Gerald E.

    2010-01-01

    A jet noise measurement study was conducted using a phased microphone array system for a range of jet nozzle configurations and flow conditions. The test effort included convergent and convergent/divergent single flow nozzles, as well as conventional and chevron dual-flow core and fan configurations. Cold jets were tested with and without wind tunnel co-flow, whereas hot jets were tested only with co-flow. The intent of the measurement effort was to allow evaluation of new phased array technologies for their ability to separate and quantify distributions of jet noise sources. In the present paper, the array post-processing method focused upon is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) for the quantitative determination of spatial distributions of noise sources. Jet noise is highly complex with stationary and convecting noise sources, convecting flows that are the sources themselves, and shock-related and screech noise for supersonic flow. The analysis presented in this paper addresses some processing details with DAMAS, for the array positioned at 90° (normal) to the jet. The paper demonstrates the applicability of DAMAS and how it indicates when strong coherence is present. Also, a new approach to calibrating the array focus and position is introduced and demonstrated.

  4. Adaptive multisensor fusion for planetary exploration rovers

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri

    1992-01-01

    The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the needs of perception for space robotics. Based on knowledge of the illumination conditions and the sensor capabilities, the designed perception system should automatically select the best subset of sensors and their sensing modalities that will allow the perception and interpretation of the environment. Then, based on reflectance and emittance theoretical models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design and first results of the multisensor perception system are presented.

  5. Effects of process parameters on the molding quality of the micro-needle array

    NASA Astrophysics Data System (ADS)

    Qiu, Z. J.; Ma, Z.; Gao, S.

    2016-07-01

    Micro-needle arrays, which are used in medical applications, are typical injection-molded products with microstructures. Due to their tiny micro-feature sizes and high aspect ratios, they are prone to short-shot defects, leading to poor molding quality. The injection molding process of the micro-needle array was studied in this paper to find the effects of the process parameters on the molding quality of the micro-needle array and to provide theoretical guidance for practical production of high-quality products. With the shrinkage ratio and warpage of micro needles as the evaluation indices of the molding quality, an orthogonal experiment was conducted and an analysis of variance was carried out. From the results, contribution rates were calculated to determine the influence of each process parameter on molding quality. The single-parameter method was used to analyse the main process parameter. It was found that the contribution rate of the holding pressure to shrinkage ratio and warpage reached 83.55% and 94.71% respectively, far higher than that of the other parameters. The study revealed that the holding pressure is the main factor affecting the molding quality of the micro-needle array, and it should therefore be the focus when seeking high-quality plastic parts in practical production.
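
    The contribution rates quoted above come from an orthogonal-experiment analysis of variance; a minimal sketch of that calculation (factor sum of squares divided by total sum of squares) is given below, with placeholder level codes and responses rather than the paper's data.

```python
# Contribution-rate sketch for an orthogonal experiment: the share of total
# sum of squares attributable to each factor. Inputs are placeholders.
import numpy as np

def contribution_rates(levels, response):
    """levels: (n_runs, n_factors) integer level codes; response: (n_runs,) quality index."""
    levels = np.asarray(levels)
    response = np.asarray(response, dtype=float)
    grand_mean = response.mean()
    ss_total = ((response - grand_mean) ** 2).sum()
    rates = []
    for f in range(levels.shape[1]):
        ss_factor = 0.0
        for lv in np.unique(levels[:, f]):
            group = response[levels[:, f] == lv]
            ss_factor += len(group) * (group.mean() - grand_mean) ** 2
        rates.append(ss_factor / ss_total)
    return rates
```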

  6. An Undergraduate Course and Laboratory in Digital Signal Processing with Field Programmable Gate Arrays

    ERIC Educational Resources Information Center

    Meyer-Base, U.; Vera, A.; Meyer-Base, A.; Pattichis, M. S.; Perry, R. J.

    2010-01-01

    In this paper, an innovative educational approach to introducing undergraduates to both digital signal processing (DSP) and field programmable gate array (FPGA)-based design in a one-semester course and laboratory is described. While both DSP and FPGA-based courses are currently present in different curricula, this integrated approach reduces the…

  7. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  8. Assembly, integration, and verification (AIV) in ALMA: series processing of array elements

    NASA Astrophysics Data System (ADS)

    Lopez, Bernhard; Jager, Rieks; Whyborn, Nicholas D.; Knee, Lewis B. G.; McMullin, Joseph P.

    2012-09-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and East Asia, in collaboration with the Republic of Chile. ALMA will consist of at least 54 twelve-meter antennas and 12 seven-meter antennas operating as an aperture synthesis array in the (sub)millimeter wavelength range. It is the responsibility of ALMA AIV to deliver the fully assembled, integrated, and verified antennas (array elements) to the telescope array. After an initial phase of infrastructure setup, AIV activities began when the first ALMA antenna and subsystems became available in mid-2008. During the second semester of 2009, a project-wide effort was made to put into operation a first 3-antenna interferometer at the Array Operations Site (AOS). In 2010, the AIV focus was the transition from event-driven activities towards routine series production. Also, due to the ramp-up of operations activities, AIV underwent an organizational change from an autonomous department into a project within a strong matrix management structure. When the subsystem deliveries stabilized in early 2011, steady-state series processing could be achieved in an efficient and reliable manner. The challenge today is to maintain this production pace until completion towards the end of 2013. This paper describes the way ALMA AIV evolved successfully from the initial phase to the present steady-state of array element series processing. It elaborates on the different project phases and their relationships, presents processing statistics, illustrates the lessons learned and relevant best practices, and concludes with an outlook of the path towards completion.

  9. An eigenvector-based test for local stationarity applied to array processing.

    PubMed

    Quijano, Jorge E; Zurk, Lisa M

    2014-06-01

    In sonar array processing, a challenging problem is the estimation of the data covariance matrix in the presence of moving targets in the water column, since the time interval of data local stationarity is limited. This work describes an eigenvector-based method for proper data segmentation into intervals that exhibit local stationarity, providing data-driven upper bounds for the number of snapshots available for computation of time-varying sample covariance matrices. Application of the test is illustrated with simulated data in a horizontal array for the detection of a quiet source in the presence of a loud interferer.
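
    A hedged sketch of the underlying idea follows: grow a segment of array snapshots while the dominant eigenvector of the sample covariance stays aligned with that of the previous block. The block size and alignment threshold are illustrative choices, not values from the paper.

```python
# Eigenvector-alignment sketch of a local-stationarity check for array data.
import numpy as np

def stationary_segment_length(snapshots, block=16, min_alignment=0.95):
    """snapshots: (n_sensors, n_snapshots) real or complex array data."""
    n = snapshots.shape[1]
    ref = None
    for end in range(block, n + 1, block):
        R = np.cov(snapshots[:, end - block:end])   # sample covariance of one block
        _, vecs = np.linalg.eigh(R)
        v = vecs[:, -1]                              # dominant eigenvector
        if ref is not None and abs(np.vdot(ref, v)) < min_alignment:
            return end - block                       # stationarity lost before this block
        ref = v
    return n
```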

  10. Multisensor benchmark data for riot control

    NASA Astrophysics Data System (ADS)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data with well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be demonstrated. This paper describes a multisensor benchmark which exactly serves this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  11. Adaptive and mobile ground sensor array.

    SciTech Connect

    Holzrichter, Michael Warren; O'Rourke, William T.; Zenner, Jennifer; Maish, Alexander B.

    2003-12-01

    The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.

  12. High Density Crossbar Arrays with Sub-15 nm Single Cells via Liftoff Process Only.

    PubMed

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing the fabrication boundaries to their limits, for leveraging an even higher density of nano-devices towards reaching a 4F²/cell footprint in 3D arrays. Here, we study the liftoff process limits to achieve extremely dense nanowires while ensuring preservation of thin film quality. The proposed method is optimized for attaining a multiple layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells that have sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer. PMID:27585643

  13. High Density Crossbar Arrays with Sub-15 nm Single Cells via Liftoff Process Only

    NASA Astrophysics Data System (ADS)

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-09-01

    Emerging nano-scale technologies are pushing the fabrication boundaries to their limits, for leveraging an even higher density of nano-devices towards reaching a 4F²/cell footprint in 3D arrays. Here, we study the liftoff process limits to achieve extremely dense nanowires while ensuring preservation of thin film quality. The proposed method is optimized for attaining a multiple layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells that have sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer.

  14. High Density Crossbar Arrays with Sub-15 nm Single Cells via Liftoff Process Only

    PubMed Central

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing the fabrication boundaries to their limits, for leveraging an even higher density of nano-devices towards reaching a 4F²/cell footprint in 3D arrays. Here, we study the liftoff process limits to achieve extremely dense nanowires while ensuring preservation of thin film quality. The proposed method is optimized for attaining a multiple layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells that have sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer. PMID:27585643

  15. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    NASA Astrophysics Data System (ADS)

    Warburton, W. K.; Harris, J. T.; Friedrich, S.

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100-2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays - currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I-V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.
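
    The operating-point selection idea mentioned above can be sketched compactly: sweep the offset (bias) voltage, record a noise figure at each point, and choose the minimum. The measurement routine below is a placeholder for instrument I/O, not the authors' scanning algorithm.

```python
# Operating-point selection sketch: pick the bias voltage with the lowest
# measured noise. `measure_noise` stands in for real instrument access.
import numpy as np

def select_operating_point(bias_voltages, measure_noise):
    """measure_noise(v) -> measured noise (e.g., baseline RMS) at bias voltage v."""
    noise = np.array([measure_noise(v) for v in bias_voltages])
    best = int(np.argmin(noise))
    return bias_voltages[best], noise[best]
```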

  16. Applying Convolution-Based Processing Methods To A Dual-Channel, Large Array Artificial Olfactory Mucosa

    NASA Astrophysics Data System (ADS)

    Taylor, J. E.; Che Harun, F. K.; Covington, J. A.; Gardner, J. W.

    2009-05-01

    Our understanding of the human olfactory system, particularly with respect to the phenomenon of nasal chromatography, has led us to develop a new generation of novel odour-sensitive instruments (or electronic noses). This novel instrument is in need of new approaches to data processing so that the information rich signals can be fully exploited; here, we apply a novel time-series based technique for processing such data. The dual-channel, large array artificial olfactory mucosa consists of 3 arrays of 300 sensors each. The sensors are divided into 24 groups, with each group made from a particular type of polymer. The first array is connected to the other two arrays by a pair of retentive columns. One channel is coated with Carbowax 20 M, and the other with OV-1. This configuration partly mimics the nasal chromatography effect, and partly augments it by utilizing not only polar (mucus layer) but also non-polar (artificial) coatings. Such a device presents several challenges to multi-variate data processing: a large, redundant dataset, spatio-temporal output, and small sample space. By applying a novel convolution approach to this problem, it has been demonstrated that these problems can be overcome. The artificial mucosa signals have been classified using a probabilistic neural network and gave an accuracy of 85%. Even better results should be possible through the selection of other sensors with lower correlation.

  17. Multiplexed optical operation of nanoelectromechanical systems (NEMS) arrays for sensing and signal-processing applications

    NASA Astrophysics Data System (ADS)

    Sampathkumar, Ashwin

    2014-06-01

    NEMS are rapidly being developed for a variety of sensing applications as well as for exploring interesting regimes in fundamental physics. In most of these endeavors, operation of a NEMS device involves actuating the device harmonically around its fundamental resonance and detecting subsequent motion while the device interacts with its environment. Even though a single NEMS resonator is exceptionally sensitive, a typical application, such as sensing or signal processing, requires the detection of signals from many resonators distributed over the surface of a chip. Therefore, one of the key technological challenges in the field of NEMS is development of multiplexed measurement techniques to detect the motion of a large number of NEMS resonators simultaneously. In this work, we address the important and difficult problem of interfacing with a large number of NEMS devices and facilitating the use of such arrays in, for example, sensing and signal processing applications. We report a versatile, all-optical technique to excite and read-out a distributed NEMS array. The NEMS array is driven by a distributed, intensity-modulated, optical pump through the photothermal effect. The ensuing vibrational response of the array is multiplexed onto a single, probe beam as a high-frequency phase modulation. The phase modulation is optically down-converted to a low-frequency, intensity modulation using an adaptive full-field interferometer, and subsequently is detected using a charge-coupled device (CCD) array. Rapid and single-step mechanical characterization of approximately 60 nominally identical, high-frequency resonators is demonstrated. The technique may enable sensitivity improvements over single NEMS resonators by averaging signals coming from a multitude of devices in the array. In addition, the diffraction-limited spatial resolution may allow for position-dependent read-out of NEMS sensor chips for sensing multiple analytes or spatially inhomogeneous forces.

  18. Hybrid Arrays for Chemical Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  19. A novel method using adaptive hidden semi-Markov model for multi-sensor monitoring equipment health prognosis

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Dong, Ming; Lv, Wenyuan; Geng, Xiuli; Li, Yupeng

    2015-12-01

    Health prognosis for equipment is considered a key process of the condition-based maintenance strategy. This paper presents an integrated framework for multi-sensor equipment diagnosis and prognosis based on an adaptive hidden semi-Markov model (AHSMM). Unlike the hidden semi-Markov model (HSMM), the basic algorithms in an AHSMM are first modified in order to decrease computational and space complexity. Then, the maximum likelihood linear regression transformation method is used to train the output and duration distributions to re-estimate all unknown parameters. The AHSMM is used to identify the hidden degradation state and obtain the transition probabilities among health states and durations. Finally, through the proposed hazard rate equations, one can predict the remaining useful life of equipment with multi-sensor information. Our main results are verified in real world applications: monitoring hydraulic pumps from Caterpillar Inc. The results show that the proposed methods are more effective for multi-sensor monitoring equipment health prognosis.

  20. Hollow polymer microneedle array fabricated by photolithography process combined with micromolding technique.

    PubMed

    Wang, Po-Chun; Wester, Brock A; Rajaraman, Swaminathan; Paik, Seung-Joon; Kim, Seong-Hyok; Allen, Mark G

    2009-01-01

    Transdermal drug delivery through microneedles is a minimally invasive procedure causing little or no pain, and is a potentially attractive alternative to intramuscular and subdermal drug delivery methods. This paper demonstrates the fabrication of a hollow microneedle array using a polymer-based process combining UV photolithography and replica molding techniques. The key characteristic of the proposed fabrication process is to define a hollow lumen for microfluidic access via photopatterning, allowing a batch process as well as high throughput. A hollow SU-8 microneedle array, consisting of 825 μm tall and 400 μm wide microneedles with 15-25 μm tip diameters and 120 μm diameter hollow lumens, was designed, fabricated and characterized. PMID:19964192

  1. Extension of DAMAS Phased Array Processing for Spatial Coherence Determination (DAMAS-C)

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M., Jr.

    2006-01-01

    The present study reports a new development of the DAMAS microphone phased array processing methodology that allows the determination and separation of coherent and incoherent noise source distributions. In 2004, a Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) was developed which decoupled the array design and processing influence from the noise being measured, using a simple and robust algorithm. In 2005, three-dimensional applications of DAMAS were examined. DAMAS has been shown to render an unambiguous quantitative determination of acoustic source position and strength. However, an underlying premise of DAMAS, as well as that of classical array beamforming methodology, is that the noise regions under study are distributions of statistically independent sources. The present development, called DAMAS-C, extends the basic approach to include coherence definition between noise sources. The solutions incorporate cross-beamforming array measurements over the survey region. While the resulting inverse problem can be large and the iteration solution computationally demanding, it solves problems no other technique can approach. DAMAS-C is validated using noise source simulations and is applied to airframe flap noise test results.
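
    For readers unfamiliar with the baseline method, the core DAMAS deconvolution step can be sketched as a non-negative Gauss-Seidel solve of A x = y, where A holds the array point-spread functions and y the conventional beamform map. The iteration count and sweep order below are illustrative; this is a simplified sketch, not the NASA processing code, and it omits the cross-beamforming extensions of DAMAS-C.

```python
# DAMAS-style deconvolution sketch: solve A x = y for non-negative source
# strengths x with forward/backward Gauss-Seidel sweeps.
import numpy as np

def damas_like(A, y, n_iter=100):
    x = np.zeros_like(y, dtype=float)
    n = len(y)
    for _ in range(n_iter):
        for order in (range(n), reversed(range(n))):      # forward then backward sweep
            for i in order:
                # residual excludes the diagonal contribution of point i itself
                residual = y[i] - A[i] @ x + A[i, i] * x[i]
                x[i] = max(0.0, residual / A[i, i])        # enforce non-negativity
    return x
```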

  2. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel.

    PubMed

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate's pre-patterning process. By modifying the substrate's wettability, the conducting polymer's contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics.

  3. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    PubMed Central

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate’s pre-patterning process. By modifying the substrate’s wettability, the conducting polymer’s contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics. PMID:27378163

  4. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel.

    PubMed

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate's pre-patterning process. By modifying the substrate's wettability, the conducting polymer's contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics. PMID:27378163

  5. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-07-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate’s pre-patterning process. By modifying the substrate’s wettability, the conducting polymer’s contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics.

  6. Multi-sensor data fusion framework for CNC machining monitoring

    NASA Astrophysics Data System (ADS)

    Duro, João A.; Padget, Julian A.; Bowen, Chris R.; Kim, H. Alicia; Nassehi, Aydin

    2016-01-01

    Reliable machining monitoring systems are essential for lowering production time and manufacturing costs. Existing expensive monitoring systems focus on prevention/detection of tool malfunctions and provide information for process optimisation by force measurement. An alternative and cost-effective approach is to monitor acoustic emissions (AEs) from machining operations, which act as a robust proxy for force measurement. The limitations of AEs include high sensitivity to sensor position and cutting parameters. In this paper, a novel multi-sensor data fusion framework is proposed to enable identification of the best sensor locations for monitoring cutting operations, identification of the sensors that provide the best signal, and derivation of signals with an enhanced periodic component. Our experimental results reveal that by utilising the framework, and using only three sensors, signal interpretation improves substantially and the monitoring system reliability is enhanced for a wide range of machining parameters. The framework provides a route to overcoming the major limitations of AE-based monitoring.

  7. Reliability measurement during software development. [for a multisensor tracking system

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Sturm, W. A.; Trattner, S.

    1977-01-01

    During the development of data base software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.

  8. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including dual-color affordable focal planes, on-focal-plane-array biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor Midwave IR/Longwave IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very-large-scale-integration (VLSI) integrated circuit developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  9. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    PubMed Central

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-01-01

    Printing fully solution-processed organic electronic devices may potentially revolutionize production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm2 V−1 s−1 in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm2 V−1 s−1 and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays. PMID:24492785

  10. Registering a non-rigid multi-sensor ensemble of images.

    PubMed

    Kim, Hwa Young; Orchard, Jeff

    2010-01-01

    The majority of image registration methods deal with registering only two images at a time. Recently, a clustering method that concurrently registers more than two multi-sensor images was proposed, dubbed ensemble clustering. In this paper, we apply the ensemble clustering method to a deformable registration scenario for the first time. Non-rigid deformation is implemented by a free-form deformation model based on B-splines with a regularization term. However, the increased degrees of freedom in the transformations caused the Newton-type optimization process to become ill-conditioned. This made the registration process unstable. We solved this problem by using the matrix approximation afforded by the singular value decomposition (SVD). Experiments show that the method is successfully applied to non-rigid multi-sensor ensembles and overall yields better registration results than methods that register only two images at a time.
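
    The stabilization idea, solving the ill-conditioned Newton-type step with a truncated-SVD pseudo-inverse, can be sketched as follows; the truncation tolerance is an illustrative choice, not the authors' value.

```python
# Truncated-SVD pseudo-inverse for an ill-conditioned Newton step:
# delta = -H^+ g, discarding near-zero singular values of the Hessian H.
import numpy as np

def truncated_svd_step(hessian, gradient, rel_tol=1e-6):
    U, s, Vt = np.linalg.svd(hessian, full_matrices=False)
    keep = s > rel_tol * s[0]                        # keep well-conditioned directions
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return -(Vt.T * s_inv) @ (U.T @ gradient)        # regularized update direction
```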

  11. A Passive Wireless Multi-Sensor SAW Technology Device and System Perspectives

    PubMed Central

    Malocha, Donald C.; Gallagher, Mark; Fisher, Brian; Humphries, James; Gallagher, Daniel; Kozlovski, Nikolai

    2013-01-01

    This paper will discuss a SAW passive, wireless multi-sensor system under development by our group for the past several years. The device focus is on orthogonal frequency coded (OFC) SAW sensors, which use both frequency diversity and pulse position reflectors to encode the device ID and will be briefly contrasted to other embodiments. A synchronous correlator transceiver is used for the hardware and post processing and correlation techniques of the received signal to extract the sensor information will be presented. Critical device and system parameters addressed include encoding, operational range, SAW device parameters, post-processing, and antenna-SAW device integration. A fully developed 915 MHz OFC SAW multi-sensor system is used to show experimental results. The system is based on a software radio approach that provides great flexibility for future enhancements and diverse sensor applications. Several different sensor types using the OFC SAW platform are shown. PMID:23666124

  12. Implementation of a Digital Signal Processing Subsystem for a Long Wavelength Array Station

    NASA Technical Reports Server (NTRS)

    Soriano, Melissa; Navarro, Robert; D'Addario, Larry; Sigman, Elliott; Wang, Douglas

    2011-01-01

    This paper describes the implementation of a Digital Signal Processing (DP) subsystem for a single Long Wavelength Array (LWA) station. The LWA is a radio telescope that will consist of many phased array stations. Each LWA station consists of 256 pairs of dipole-like antennas operating over the 10-88 MHz frequency range. The Digital Signal Processing subsystem digitizes up to 260 dual-polarization signals at 196 MHz from the LWA Analog Receiver, adjusts the delay and amplitude of each signal, and forms four independent beams. Coarse delay is implemented using a first-in-first-out buffer and fine delay is implemented using a finite impulse response filter. Amplitude adjustment and polarization corrections are implemented using a 2x2 matrix multiplication.
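
    A minimal sketch of the per-signal corrections described above follows: an integer (coarse) sample delay, a fractional (fine) delay approximated by a short windowed-sinc FIR filter, and a 2x2 matrix applied across the two polarizations. The filter length and design are assumptions for illustration, not the LWA firmware design.

```python
# Coarse/fine delay and 2x2 polarization correction sketch (illustrative only).
import numpy as np

def fractional_delay_fir(frac, taps=16):
    """Windowed-sinc FIR approximating a sub-sample delay of `frac` samples."""
    n = np.arange(taps) - (taps - 1) / 2.0
    h = np.sinc(n - frac) * np.hamming(taps)
    return h / h.sum()

def correct_signal(x, coarse, frac, gain):
    delayed = np.roll(x, coarse)                 # FIFO-style coarse delay (circular here)
    return gain * np.convolve(delayed, fractional_delay_fir(frac), mode="same")

def polarization_correct(x_pol, y_pol, jones):
    """Apply a 2x2 correction matrix across the two polarization streams."""
    return jones @ np.vstack([x_pol, y_pol])
```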

  13. Process development for automated solar cell and module production. Task 4: Automated array assembly

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use was developed. The process sequence was then critically analyzed from a technical and economic standpoint to determine the technological readiness of certain process steps for implementation. The steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect, both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development.

  14. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    NASA Astrophysics Data System (ADS)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals, but instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
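
    One simple way to quantify the dimensionality idea is an "effective rank" computed from the entropy of the normalized covariance eigenvalues over a data window; a malfunctioning channel that decorrelates from the rest tends to shift this number. The sketch below is illustrative and is not the authors' detector.

```python
# Effective dimensionality of an array-wide data window via eigenvalue entropy.
import numpy as np

def effective_dimension(window):
    """window: (n_channels, n_samples) seismic array segment."""
    R = np.cov(window)
    eigvals = np.clip(np.linalg.eigvalsh(R), 1e-12, None)
    p = eigvals / eigvals.sum()                       # normalized eigenvalue spectrum
    return float(np.exp(-(p * np.log(p)).sum()))      # exp(spectral entropy)
```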

  15. Lightweight solar array blanket tooling, laser welding and cover process technology. Final Report

    SciTech Connect

    Dillard, P.A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  16. Lightweight solar array blanket tooling, laser welding and cover process technology

    NASA Technical Reports Server (NTRS)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  17. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    PubMed Central

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts, during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained either for pure or contaminated samples with controlled concentrations of standard ethanol solutions of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could be potentially applied during wine production as an auxiliary qualitative control instrument. PMID:25184490

  18. Monitoring and evaluation of alcoholic fermentation processes using a chemocapacitor sensor array.

    PubMed

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-09-02

    The alcoholic fermentation of Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts, during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained either for pure or contaminated samples with controlled concentrations of standard ethanol solutions of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could be potentially applied during wine production as an auxiliary qualitative control instrument.
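
    The PCA step used for discrimination can be sketched in a few lines: mean-centre the sensor-array response vectors and project them onto the leading principal components for inspection of the score plot. The response matrix below is a placeholder for the eight-IDC measurements, not the study's data.

```python
# Minimal PCA score computation for sensor-array response vectors.
import numpy as np

def pca_scores(responses, n_components=2):
    """responses: (n_samples, n_sensors) array of steady-state sensor responses."""
    centred = responses - responses.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ Vt[:n_components].T      # scores on the leading components
```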

  19. A distributed general multi-sensor cardinalized probability hypothesis density (CPHD) filter for sensor networks

    NASA Astrophysics Data System (ADS)

    Datta Gupta, S.; Nannuru, S.; Coates, M.; Rabbat, M.

    2015-05-01

    We develop a distributed cardinalized probability hypothesis density (CPHD) filter that can be deployed in a sensor network to process the measurements of multiple sensors that make conditionally independent measurements. In contrast to the majority of the related work, which involves performing local filter updates and then exchanging data to fuse the local intensity functions and cardinality distributions, we strive to approximate the update step that a centralized multi-sensor CPHD filter would perform.

  20. Development of subminiature multi-sensor hot-wire probes

    NASA Technical Reports Server (NTRS)

    Westphal, Russell V.; Ligrani, Phillip M.; Lemos, Fred R.

    1988-01-01

    Limitations on the spatial resolution of multisensor hot wire probes have precluded accurate measurements of Reynolds stresses very near solid surfaces in wind tunnels and in many practical aerodynamic flows. The fabrication, calibration, and qualification testing of very small single horizontal and X-array hot-wire probes, which are intended to be used near solid boundaries in turbulent flows where length scales are particularly small, are described. Details of the sensor fabrication procedure are reported, along with information needed to successfully operate the probes. As compared with conventional probes, manufacture of the subminiature probes is more complex, requiring special equipment and careful handling. The subminiature probes tested were more fragile and shorter-lived than conventional probes; they obeyed the same calibration laws but with slightly larger experimental uncertainty. In spite of these disadvantages, measurements of mean statistical quantities and spectra demonstrate the ability of the subminiature sensors to provide measurements in the near-wall region of turbulent boundary layers that are more accurate than those from conventionally sized probes.

  1. Portable nuclear material detector and process

    DOEpatents

    Hofstetter, Kenneth J; Fulghum, Charles K; Harpring, Lawrence J; Huffman, Russell K; Varble, Donald L

    2008-04-01

    A portable, hand held, multi-sensor radiation detector is disclosed. The detection apparatus has a plurality of spaced sensor locations which are contained within a flexible housing. The detection apparatus, when suspended from an elevation, will readily assume a substantially straight, vertical orientation and may be used to monitor radiation levels from shipping containers. The flexible detection array can also assume a variety of other orientations to facilitate any unique container shapes or to conform to various physical requirements with respect to deployment of the detection array. The output of each sensor within the array is processed by at least one CPU which provides information in a usable form to a user interface. The user interface is used to provide the power requirements and operating instructions to the operational components within the detection array.

  2. Sub-threshold signal processing in arrays of non-identical nanostructures.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafé, Salvador

    2011-10-28

    Weak input signals are routinely processed by molecular-scaled biological networks composed of non-identical units that operate correctly in a noisy environment. In order to show that artificial nanostructures can mimic this behavior, we explore theoretically noise-assisted signal processing in arrays of metallic nanoparticles functionalized with organic ligands that act as tunneling junctions connecting the nanoparticle to the external electrodes. The electronic transfer through the nanostructure is based on the Coulomb blockade and tunneling effects. Because of the fabrication uncertainties, these nanostructures are expected to show a high variability in their physical characteristics and a diversity-induced static noise should be considered together with the dynamic noise caused by thermal fluctuations. This static noise originates from the hardware variability and produces fluctuations in the threshold potential of the individual nanoparticles arranged in a parallel array. The correlation between different input (potential) and output (current) signals in the array is analyzed as a function of temperature, applied voltage, and the variability in the electrical properties of the nanostructures. Extensive kinetic Monte Carlo simulations with nanostructures whose basic properties have been demonstrated experimentally show that variability can enhance the correlation, even for the case of weak signals and high variability, provided that the signal is processed by a sufficiently high number of nanostructures. Moderate redundancy permits us not only to minimize the adverse effects of the hardware variability but also to take advantage of the nanoparticles' threshold fluctuations to increase the detection range at low temperatures. This conclusion holds for the average behavior of a moderately large statistical ensemble of non-identical nanostructures processing different types of input signals and suggests that variability could be beneficial for signal processing
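
    The behaviour studied above can be illustrated with a hedged Monte Carlo sketch: an array of simple threshold elements whose thresholds vary from device to device (static, fabrication-induced noise) and which also see thermal (dynamic) noise, with the input-output correlation as the figure of merit. The device model and parameter values are illustrative only, not the paper's Coulomb-blockade simulation.

```python
# Monte Carlo sketch of noise-assisted processing by an array of non-identical
# threshold elements; correlation of the summed output with the input signal.
import numpy as np

def array_correlation(signal, n_devices=64, threshold_spread=0.3,
                      thermal_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    thresholds = rng.normal(0.0, threshold_spread, n_devices)   # static variability
    outputs = np.zeros_like(signal)
    for th in thresholds:
        noisy = signal + rng.normal(0.0, thermal_sigma, signal.size)  # dynamic noise
        outputs += (noisy > th).astype(float)                    # each device fires or not
    return np.corrcoef(signal, outputs)[0, 1]

# Example: a weak sub-threshold sinusoid processed by the array
t = np.linspace(0.0, 1.0, 2000)
print(array_correlation(0.2 * np.sin(2 * np.pi * 5 * t)))
```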

  3. An adaptive Hidden Markov model for activity recognition based on a wearable multi-sensor device.

    PubMed

    Li, Zhen; Wei, Zhiqiang; Yue, Yaofeng; Wang, Hao; Jia, Wenyan; Burke, Lora E; Baranowski, Thomas; Sun, Mingui

    2015-05-01

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based on multi-sensor data is presented. In order to utilize these data efficiently and overcome the big data problem, an offline adaptive-Hidden Markov Model (HMM) is proposed. A sensor selection scheme is implemented based on an improved Viterbi algorithm. A new method is proposed that incorporates personal experience into the HMM model as a priori information. Experiments are conducted using a personal wearable computer eButton consisting of multiple sensors. Our comparative study with the standard HMM and other alternative methods in processing the eButton data have shown that our method is more robust and efficient, providing a useful tool to evaluate human activity and lifestyle.
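
    For context, the HMM decoding step that underlies such activity-recognition pipelines is typically a Viterbi search; a generic log-domain version is sketched below. The adaptive and sensor-selection extensions described above are not reproduced here.

```python
# Generic log-domain Viterbi decoder for a hidden Markov model.
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """log_pi: (S,) initial log-probs, log_A: (S,S) transition log-probs,
    log_B: (T,S) per-frame emission log-likelihoods."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # score of moving (from, to)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack the best state sequence
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```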

  4. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input.

    PubMed

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-09-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter.
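
    The consensus-processing stage can be illustrated with a standard average-consensus sketch: each node repeatedly replaces its estimate with a Metropolis-weighted average of its own and its neighbours' estimates. The bias and unknown-input estimation stages of the proposed filter are not reproduced; the graph and weights below are illustrative.

```python
# Average-consensus sketch over a sensor network with Metropolis weights.
# `adjacency` is assumed to be a 0/1 matrix with zero diagonal.
import numpy as np

def metropolis_weights(adjacency):
    deg = adjacency.sum(axis=1)
    n = len(deg)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i and adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()               # rows sum to one
    return W

def consensus(estimates, adjacency, iterations=50):
    """estimates: (n_nodes, state_dim) local state estimates."""
    W = metropolis_weights(adjacency)
    x = np.array(estimates, dtype=float)
    for _ in range(iterations):
        x = W @ x                                 # repeated neighbourhood averaging
    return x
```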

  5. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input

    PubMed Central

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-01-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter. PMID:27598156

  6. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input.

    PubMed

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-01-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter. PMID:27598156

  7. Multisensor fusion for system identification

    NASA Astrophysics Data System (ADS)

    Sim, Sung-Han; Cho, Soojin; Park, Jong-Woong; Kim, Hyunjun

    2014-04-01

    System identification is a fundamental process for developing a numerical model of a physical structure. The system identification process typically involves data acquisition; particularly in civil engineering applications, accelerometers are preferred due to their cost-effectiveness, low noise, and installation convenience. Because the measured acceleration responses correspond to translational degrees of freedom (DOF) in the numerical model, moment-resisting structures such as beams and plates are not appropriately represented by such models. This study suggests a system identification process that considers both translational and rotational DOFs by using accelerometers and gyroscopes. The proposed approach suggests a systematic way of obtaining dynamic characteristics as well as the flexibility matrix from two different measurements of acceleration and angular velocity. Numerical simulations and laboratory experiments are conducted to validate the efficacy of the proposed system identification process.

  8. Liquid-crystalline processing of highly oriented carbon nanotube arrays for thin-film transistors.

    PubMed

    Ko, Hyunhyub; Tsukruk, Vladimir V

    2006-07-01

    We introduce a simple solution-based method for the fabrication of highly oriented carbon nanotube (CNT) arrays to be used for thin-film transistors. We exploit the liquid-crystalline behavior of a CNT solution near the receding contact line during tilted-drop casting and produce long-range nematic-like ordering of carbon nanotube stripes caused by the confined micropatterned geometry. We further demonstrate that the performance of thin-film transistors based on these densely packed and uniformly oriented CNT arrays is greatly improved compared to that of devices based on random CNTs. This approach has great potential for low-cost, large-scale processing of high-performance electronic devices based on high-density oriented CNT films with record electrical characteristics such as high conductance, low resistivity, and high carrier mobility.

  9. Process development for automated solar cell and module production. Task 4: automated array assembly

    SciTech Connect

    Hagerty, J.J.

    1980-06-30

    The scope of work under this contract involves specifying a process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use. This process sequence is then critically analyzed from a technical and economic standpoint to determine the technological readiness of each process step for implementation. The process steps are ranked according to the degree of development effort required and according to their significance to the overall process. Under this contract the steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development. Economic analysis using the SAMICS system has been performed during these studies to assure that development efforts have been directed towards the ultimate goal of price reduction. Details are given. (WHK)

  10. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    NASA Astrophysics Data System (ADS)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology combining CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used to fabricate a high-aspect-ratio microneedle master mold instead of the conventional time-consuming and expensive photolithography and etching processes. Flexible polydimethylsiloxane (PDMS) is crucial for detaching the PLGA. However, directly CO2-laser-ablated PDMS can exhibit poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined polymethyl methacrylate (PMMA) ablation with a two-step PDMS casting process to form a PDMS female microneedle mold and thereby avoid direct ablation of the PDMS. A self-assembled monolayer of polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step of the PDMS-to-PDMS replication. The PLGA microneedle array was then successfully released by bending the second-cast PDMS mold, which is flexible and hydrophobic. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters; it is determined by the PMMA pattern profile and can be adjusted via the CO2 laser power and scanning speed. The proposed integrated process is maskless, simple and low-cost, making it suitable for rapid prototyping with a reusable mold.

  11. An Asynchronous Multi-Sensor Micro Control Unit for Wireless Body Sensor Networks (WBSNs)

    PubMed Central

    Chen, Chiung-An; Chen, Shih-Lun; Huang, Hong-Yi; Luo, Ching-Hsing

    2011-01-01

    In this work, an asynchronous multi-sensor micro control unit (MCU) core is proposed for wireless body sensor networks (WBSNs). It consists of asynchronous interfaces, a power management unit, a multi-sensor controller, a data encoder (DE), and an error-correcting coder (ECC). To improve system performance and expandability, the asynchronous interface handles handshaking across the different clock domains between the ADC, the RF front end, and the MCU. To extend the operating time of the WBSN system, a power management technique is developed to reduce power consumption. In addition, the multi-sensor controller is designed to handle various biomedical signals. To guard against losses during wireless transmission, an error-correction coding technique is important in biomedical applications. The data encoder provides lossless compression of various biomedical signals with a compression ratio of almost three. The design was successfully tested on an FPGA board. The VLSI implementation of this work contains 2.68 K gates and consumes 496 μW at a 133 MHz processing rate in a TSMC 0.13-μm CMOS process. Compared with previous techniques, this work offers higher performance, more functions, and lower hardware cost than other microcontroller designs. PMID:22164000
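
    The abstract reports a lossless data encoder with a compression ratio of almost three. The sketch below is a generic, hedged illustration of lossless compression for slowly varying biomedical samples (delta coding with simple variable-length packing); it is not the paper's DE design, and the achievable ratio depends entirely on the signal.

```python
import numpy as np

# A hedged illustration of lossless compression for slowly varying biomedical
# samples: first-difference (delta) coding followed by a simple variable-length
# byte packing.  This is not the paper's encoder.

def delta_encode(samples):
    deltas = np.diff(samples, prepend=0)   # first delta is the first sample
    out = bytearray()
    for d in deltas.astype(int):
        if -64 <= d <= 63:                 # small delta -> 1 byte
            out.append((d + 64) & 0x7F)
        else:                              # large delta -> 3-byte escape
            out.append(0xFF)
            out += int(d).to_bytes(2, "big", signed=True)
    return bytes(out)

def delta_decode(blob):
    deltas, i = [], 0
    while i < len(blob):
        b = blob[i]
        if b == 0xFF:
            deltas.append(int.from_bytes(blob[i + 1:i + 3], "big", signed=True))
            i += 3
        else:
            deltas.append(b - 64)
            i += 1
    return np.cumsum(deltas)

ecg_like = (100 * np.sin(np.linspace(0, 6 * np.pi, 500))).astype(int)
encoded = delta_encode(ecg_like)
decoded = delta_decode(encoded)
assert np.array_equal(decoded, ecg_like)
# Ratio computed assuming 16-bit (2-byte) raw samples.
print("compression ratio ~", round(2 * ecg_like.size / len(encoded), 2))
```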

  12. Modular multisensor ground system architecture

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2002-11-01

    Modern conflicts are highly dependent on the information flow and on the ability not only to acquire raw information but also to process it and deliver intelligence. This is especially evident in the domains of image intelligence and signal intelligence. The sensors are there, on different kinds of platforms and with different technologies, and each platform and each sensor has its own physical and operational characteristics. Given the large number of available sensors and platforms, and hence the number of possible configurations, there is an increasing requirement to process all of these data and deliver a sound intelligence report as soon as possible. Technology evolution and budget restrictions are narrowing the gap between civilian and military sensor systems, and the data processing has to take this aspect into account.

  13. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has been a long-standing problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of this paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm that combines directionally-adaptive constraints with a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures.
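
    Of the components listed above, the IHS fusion step is simple enough to sketch. The following hedged example implements only fast IHS-style detail injection (replace the intensity of an upsampled color image with a higher-resolution intensity channel); the directionally-adaptive regularization and NLM filtering of the paper are not reproduced, and the input images are synthetic.

```python
import numpy as np

def ihs_fuse(rgb_lr_upsampled, intensity_hr):
    """Fast IHS-style fusion: replace the intensity component of an upsampled
    low-resolution RGB image with a higher-resolution intensity channel."""
    i_old = rgb_lr_upsampled.mean(axis=2)              # I = (R + G + B) / 3
    detail = intensity_hr - i_old                      # injected spatial detail
    fused = rgb_lr_upsampled + detail[..., None]       # add detail to each band
    return np.clip(fused, 0.0, 1.0)

rng = np.random.default_rng(2)
rgb = rng.random((64, 64, 3))          # stand-in for an upsampled LR color image
pan = rgb.mean(axis=2) + 0.05 * rng.standard_normal((64, 64))  # HR intensity
print(ihs_fuse(rgb, pan).shape)
```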

  14. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has been a long-standing problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of this paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm that combines directionally-adaptive constraints with a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744

  15. Distinctive Order Based Self-Similarity descriptor for multi-sensor remote sensing image matching

    NASA Astrophysics Data System (ADS)

    Sedaghat, Amin; Ebadi, Hamid

    2015-10-01

    Robust, well-distributed and accurate feature matching in multi-sensor remote sensing images is a difficult task due to significant geometric and illumination differences. In this paper, a robust and effective image matching approach is presented for multi-sensor remote sensing images. The proposed approach consists of three main steps. In the first step, the UR-SIFT (uniform robust scale-invariant feature transform) algorithm is applied for uniform and dense local feature extraction. In the second step, a novel descriptor, the Distinctive Order Based Self-Similarity (DOBSS) descriptor, is computed for each extracted feature. Finally, a cross-matching process followed by a consistency check under a projective transformation model is performed for feature correspondence and mismatch elimination. The proposed method was successfully applied to matching various multi-sensor satellite images, including ETM+, SPOT 4, SPOT 5, ASTER, IRS, SPOT 6, QuickBird, GeoEye and WorldView imagery, and the results demonstrate its robustness and capability compared to common image matching techniques such as SIFT, PIIFD, GLOH, LIOP and LSS.
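
    The cross-matching step can be illustrated independently of the specific descriptors. The hedged sketch below accepts a correspondence only when two descriptors are mutual nearest neighbors; the random feature vectors stand in for UR-SIFT/DOBSS descriptors and are assumptions for demonstration.

```python
import numpy as np

# A minimal sketch of the "cross matching" idea: accept a correspondence only
# when descriptor i's nearest neighbor is j AND descriptor j's nearest
# neighbor is i (mutual nearest neighbors).  The descriptors here are random
# stand-ins, not UR-SIFT or DOBSS features.

def cross_match(desc_a, desc_b):
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)          # best match in B for each A descriptor
    b_to_a = d.argmin(axis=0)          # best match in A for each B descriptor
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

rng = np.random.default_rng(3)
base = rng.random((50, 16))
desc_a = base + 0.01 * rng.standard_normal((50, 16))
desc_b = base + 0.01 * rng.standard_normal((50, 16))
print("mutual matches:", len(cross_match(desc_a, desc_b)), "of 50")
```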

  16. Scalable processing and capacity of Si microwire array anodes for Li ion batteries

    PubMed Central

    2014-01-01

    Si microwire array anodes have been prepared by an economical, microelectronics compatible method based on macropore etching. In the present report, evidence of the scalability of the process and the areal capacity of the anodes is presented. The anodes exhibit record areal capacities for Si-based anodes. The gravimetric capacity of longer anodes is comparable to the one of shorter anodes at moderate lithiation/delithiation rates. The diffusion limitation of the lithium ions through the electrolyte in depth among the wires is the limiting factor for cycling longer wires at high rates. PACS 82.47.Aa; 82.45.Vp; 81.16.-c PMID:25177226

  17. Scalable processing and capacity of Si microwire array anodes for Li ion batteries

    NASA Astrophysics Data System (ADS)

    Quiroga-González, Enrique; Carstensen, Jürgen; Föll, Helmut

    2014-08-01

    Si microwire array anodes have been prepared by an economical, microelectronics compatible method based on macropore etching. In the present report, evidence of the scalability of the process and the areal capacity of the anodes is presented. The anodes exhibit record areal capacities for Si-based anodes. The gravimetric capacity of longer anodes is comparable to the one of shorter anodes at moderate lithiation/delithiation rates. The diffusion limitation of the lithium ions through the electrolyte in depth among the wires is the limiting factor for cycling longer wires at high rates.

  18. Process Development for Automated Solar Cell and Module Production. Task 4: Automated Array Assembly

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A baseline sequence for the manufacture of solar cell modules was specified. Starting with silicon wafers, the process goes through damage etching, texture etching, junction formation, plasma edge etch, aluminum back surface field formation, and screen-printed metallization to produce finished solar cells. The cells were then series-connected on a ribbon and bonded into a finished glass/Tedlar module. A number of steps required additional developmental effort to verify technical and economic feasibility. These steps include texture etching, plasma edge etch, aluminum back surface field formation, array layup and interconnect, and module edge sealing and framing.

  19. Evaluation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Array

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Liddicoat, Albert; Ralston, Jesse; Pingree, Paula

    2006-01-01

    The current implementation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays (TRIGA) is equipped with CFDP protocol and CCSDS Telemetry and Telecommand framing schemes to replace the CPU intensive software counterpart implementation for reliable deep space communication. We present the hardware/software co-design methodology used to accomplish high data rate throughput. The hardware CFDP protocol stack implementation is then compared against the two recent flight implementations. The results from our experiments show that TRIGA offers more than 3 orders of magnitude throughput improvement with less than one-tenth of the power consumption.

  20. An Evaluation of Signal Processing Tools for Improving Phased Array Ultrasonic Weld Inspection

    SciTech Connect

    Ramuhalli, Pradeep; Cinson, Anthony D.; Crawford, Susan L.; Harris, Robert V.; Diaz, Aaron A.; Anderson, Michael T.

    2011-03-24

    Cast austenitic stainless steel (CASS) commonly used in U.S. nuclear power plants is a coarse-grained, elastically anisotropic material. The coarse-grained nature of CASS makes ultrasonic inspection of in-service components difficult. Recently, low-frequency phased array ultrasound has emerged as a candidate for the CASS piping weld inspection. However, issues such as low signal-to-noise ratio and difficulty in discriminating between flaw and non-flaw signals remain. This paper discusses the evaluation of a number of signal processing algorithms for improving flaw detection in CASS materials. The full paper provides details of the algorithms being evaluated, along with preliminary results.

  1. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    PubMed Central

    Liu, Yi-Hung; Chen, Yan-Jen

    2011-01-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement has been shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms. PMID:22016625
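
    SVDD with an RBF kernel is closely related to the one-class SVM, so a generic one-class SVM can stand in to illustrate the detection setup (train on defect-free samples, flag outliers as defects). The sketch below is a hedged analogue using scikit-learn with synthetic feature vectors; it is not the QK-SVDD of the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hedged sketch: SVDD with an RBF kernel is closely related to the one-class
# SVM, so a OneClassSVM stands in here for the (QK-)SVDD defect detector.
# "Feature vectors" below are synthetic; a real system would extract texture
# or intensity features from LCD panel images.

rng = np.random.default_rng(4)
normal_train = rng.normal(0.0, 1.0, size=(200, 8))     # defect-free samples
normal_test = rng.normal(0.0, 1.0, size=(50, 8))
defect_test = rng.normal(4.0, 1.0, size=(50, 8))        # shifted = defective

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_train)
print("normal flagged as defect:", np.mean(detector.predict(normal_test) == -1))
print("defects detected:        ", np.mean(detector.predict(defect_test) == -1))
```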

  2. The Role of Water Vapor and Dissociative Recombination Processes in Solar Array Arc Initiation

    NASA Technical Reports Server (NTRS)

    Galofar, J.; Vayner, B.; Degroot, W.; Ferguson, D.

    2002-01-01

    Experimental plasma arc investigations involving the onset of arc initiation for a negatively biased solar array immersed in low-density plasma have been performed. Previous studies into the arc initiation process have shown that the most probable arcing sites tend to occur at the triple junction involving the conductor, dielectric and plasma. More recently our own experiments have led us to believe that water vapor is the main causal factor behind the arc initiation process. Assuming the main component of the expelled plasma cloud by weight is water, the fastest process available is dissociative recombination (H2O+ + e- → H* + OH*). A model that agrees with the observed dependency of arc current pulse width on the square root of capacitance is presented. A 400 MHz digital storage scope and current probe were used to detect arcs at the triple junction of a solar array. Simultaneous measurements of the arc trigger pulse, the gate pulse, the arc current and the arc voltage were then obtained. Finally, a large number of measurements of individual arc spectra were obtained in very short time intervals, ranging from 10 to 30 microseconds, using a 1/4-m spectrometer coupled with a gated intensified CCD. The spectrometer was systematically tuned to obtain optical arc spectra over the entire wavelength range of 260 to 680 nanometers. All relevant atomic lines and molecular bands were then identified.

  3. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    NASA Astrophysics Data System (ADS)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  4. Fabrication of microlens arrays on a glass substrate by roll-to-roll process with PDMS mold

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Nying; Su, Guo-Dung J.

    2009-08-01

    This paper presents a roll-to-roll method for fabricating microlens arrays on a glass substrate using a cost-effective PDMS (polydimethylsiloxane) mold. A microlens array mold was fabricated from photoresist (AZ4620) on a silicon substrate by a thermal reflow process, and the pattern was transferred to a PDMS film. The roll-to-roll system is a standard printing process in which the roller is an acrylic cylinder wrapped with the PDMS mold. A UV-curable resin was chosen as the microlens material and cured by UV light during the rolling process. We investigated the quality of the microlens arrays while varying parameters such as embossing pressure and rolling speed to ensure good microlens quality.

  5. Sub-threshold signal processing in arrays of non-identical nanostructures.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafé, Salvador

    2011-10-28

    Weak input signals are routinely processed by molecular-scaled biological networks composed of non-identical units that operate correctly in a noisy environment. In order to show that artificial nanostructures can mimic this behavior, we explore theoretically noise-assisted signal processing in arrays of metallic nanoparticles functionalized with organic ligands that act as tunneling junctions connecting the nanoparticle to the external electrodes. The electronic transfer through the nanostructure is based on the Coulomb blockade and tunneling effects. Because of the fabrication uncertainties, these nanostructures are expected to show a high variability in their physical characteristics and a diversity-induced static noise should be considered together with the dynamic noise caused by thermal fluctuations. This static noise originates from the hardware variability and produces fluctuations in the threshold potential of the individual nanoparticles arranged in a parallel array. The correlation between different input (potential) and output (current) signals in the array is analyzed as a function of temperature, applied voltage, and the variability in the electrical properties of the nanostructures. Extensive kinetic Monte Carlo simulations with nanostructures whose basic properties have been demonstrated experimentally show that variability can enhance the correlation, even for the case of weak signals and high variability, provided that the signal is processed by a sufficiently high number of nanostructures. Moderate redundancy permits us not only to minimize the adverse effects of the hardware variability but also to take advantage of the nanoparticles' threshold fluctuations to increase the detection range at low temperatures. This conclusion holds for the average behavior of a moderately large statistical ensemble of non-identical nanostructures processing different types of input signals and suggests that variability could be beneficial for signal processing
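
    The qualitative setup (a weak signal fed in parallel to many threshold units whose thresholds are spread by hardware variability and jittered by thermal noise, with the summed output correlated against the input) can be sketched with a toy Monte Carlo model. The example below is a hedged illustration only; it is not the paper's kinetic Monte Carlo simulation of Coulomb-blockade transport, and all parameter values are assumptions.

```python
import numpy as np

# Toy model of the generic setup the abstract evokes: N threshold units with
# "static" threshold spread (hardware variability) and "dynamic" noise jitter.
# The summed 0/1 outputs are correlated with the weak sub-threshold input.

rng = np.random.default_rng(5)
t = np.linspace(0, 4 * np.pi, 2000)
signal = 0.2 * np.sin(t)                       # weak sub-threshold input

def array_correlation(n_units, static_spread, dynamic_noise):
    thresholds = 0.5 + static_spread * rng.standard_normal(n_units)
    outputs = np.zeros_like(signal)
    for th in thresholds:
        noisy = signal + dynamic_noise * rng.standard_normal(signal.size)
        outputs += (noisy > th).astype(float)  # each unit fires 0/1
    if outputs.std() == 0:                     # guard against a silent array
        return 0.0
    return np.corrcoef(signal, outputs)[0, 1]

for n in (1, 8, 64):
    print(f"N={n:3d}  corr={array_correlation(n, 0.3, 0.2):.3f}")
```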

  6. Phase velocity tomography of surface waves using ambient noise cross correlation and array processing

    NASA Astrophysics Data System (ADS)

    Boué, Pierre; Roux, Philippe; Campillo, Michel; Briand, Xavier

    2014-01-01

    Continuous recordings of ambient seismic noise across large seismic arrays allow a new type of processing using the cross-correlation technique on broadband data. We propose to apply double beamforming (DBF) to cross correlations to extract a particular wave component of the reconstructed signals. We focus here on the extraction of the surface waves to measure phase velocity variations with great accuracy. DBF acts as a spatial filter between two distant subarrays after cross correlation of the wavefield between each single receiver pair. During the DBF process, horizontal slowness and azimuth are used to select the wavefront on both subarray sides. DBF increases the signal-to-noise ratio, which improves the extraction of the dispersive wave packets. This combination of cross correlation and DBF is applied to the Transportable Array (USArray) for the central U.S. region. A standard model of surface wave propagation is constructed from a combination of the DBF and cross correlations at different offsets and for different frequency bands. The perturbation (phase shift) between each beam and the standard model is inverted. High-resolution maps of the phase velocity of Rayleigh and Love waves are then constructed. Finally, the addition of azimuthal information provided by DBF is discussed, to construct curved rays that replace the classical great-circle path assumption.
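
    The two ingredients named above, noise cross-correlation between a station pair and delay-and-sum beamforming over a subarray, can be sketched with synthetic data. The example below is a hedged illustration with assumed geometry and slowness values; it does not reproduce the full double-beamforming tomography.

```python
import numpy as np

# (1) Cross-correlate "ambient noise" records from a station pair to reveal the
#     propagation delay between them.
# (2) Delay-and-sum beamforming over a small subarray for a trial slowness.
# Geometry, delays and slowness values are illustrative assumptions.

rng = np.random.default_rng(6)
fs = 50.0                              # samples per second
noise = rng.standard_normal(4000)      # common ambient-noise wavefield
true_lag = 40                          # 0.8 s propagation delay between stations

sta_a = noise + 0.1 * rng.standard_normal(noise.size)
sta_b = np.roll(noise, true_lag) + 0.1 * rng.standard_normal(noise.size)

# Cross-correlation: the peak lag estimates the inter-station travel time.
xcorr = np.correlate(sta_b, sta_a, mode="full")
lag = xcorr.argmax() - (noise.size - 1)
print("estimated delay:", lag / fs, "s")

# Delay-and-sum beamforming on a 5-station linear subarray (100 m spacing):
# shift each trace by its predicted delay and stack; the stack power is largest
# when the trial slowness matches the true one.
spacing, slowness = 100.0, 2.0e-3      # m, s/m  (illustrative values)
offsets = spacing * np.arange(5)
traces = [np.roll(sta_a, int(round(slowness * x * fs))) for x in offsets]

def beam_power(trial_slowness):
    stack = sum(np.roll(tr, -int(round(trial_slowness * x * fs)))
                for tr, x in zip(traces, offsets))
    return float(np.mean(stack ** 2))

for s in (1.0e-3, 2.0e-3, 4.0e-3):
    print(f"slowness {s:.0e} s/m -> beam power {beam_power(s):.1f}")
```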

  7. Coupled process of plastics pyrolysis and chemical vapor deposition for controllable synthesis of vertically aligned carbon nanotube arrays

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhang, Qiang; Luo, Guohua; Huang, Jia-Qi; Zhao, Meng-Qiang; Wei, Fei

    2010-08-01

    Efficient conversion of waste plastics into advanced materials offers conspicuous environmental, social and economic benefits. A coupled process of plastic pyrolysis and chemical vapor deposition for vertically aligned carbon nanotube (CNT) array growth was proposed. Various kinds of plastics, such as polypropylene, polyethylene, and polyvinyl chloride, were used as carbon sources for the controllable growth of CNT arrays. The relationship between the length of CNT arrays and the growth time was investigated. It was found that the length of aligned CNTs increased with prolonged growth time. CNT arrays with a length of 500 μm were obtained for a 40-min growth and the average growth rate was estimated to be 12 μm/min. The diameter of CNTs in the arrays can be modulated by controlling the growth temperature and the feeding rate of ferrocene. In addition, substrates with larger specific surface areas, such as ceramic spheres, quartz fibers, and quartz particles, were adopted to support the growth of CNT arrays. These results provide strong evidence for the feasibility of conversion from waste plastics into CNT arrays via this reported sustainable materials processing.

  8. Correlation of lattice defects and thermal processing in the crystallization of titania nanotube arrays

    NASA Astrophysics Data System (ADS)

    Hosseinpour, Pegah M.; Yung, Daniel; Panaitescu, Eugen; Heiman, Don; Menon, Latika; Budil, David; Lewis, Laura H.

    2014-12-01

    Titania nanotubes have the potential to be employed in a wide range of energy-related applications such as solar energy-harvesting devices and hydrogen production. As the functionality of titania nanostructures is critically affected by their morphology and crystallinity, it is necessary to understand and control these factors in order to engineer useful materials for green applications. In this study, electrochemically-synthesized titania nanotube arrays were thermally processed in inert and reducing environments to isolate the role of post-synthesis processing conditions on the crystallization behavior, electronic structure and morphology development in titania nanotubes, correlated with the nanotube functionality. Structural and calorimetric studies revealed that as-synthesized amorphous nanotubes crystallize to form the anatase structure in a three-stage process that is facilitated by the creation of structural defects. It is concluded that processing in a reducing gas atmosphere versus in an inert environment provides a larger unit cell volume and a higher concentration of Ti3+ associated with oxygen vacancies, thereby reducing the activation energy of crystallization. Further, post-synthesis annealing in either reducing or inert atmospheres produces pronounced morphological changes, confirming that the nanotube arrays thermally transform into a porous morphology consisting of a fragmented tubular architecture surrounded by a network of connected nanoparticles. This study links explicit data concerning morphology, crystallization and defects, and shows that the annealing gas environment determines the details of the crystal structure, the electronic structure and the morphology of titania nanotubes. These factors, in turn, impact the charge transport and consequently the functionality of these nanotubes as photocatalysts.

  9. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  10. Sensor evaluation study for use with towed arrays for UXO site characterization

    SciTech Connect

    McDonald, J.R.; Robertson, R.

    1996-11-01

    The Naval Research Laboratory is developing a Multi-sensor Towed Array Detection System (MTADS) with support from the DOD Environmental Security Technology Certification Program (ESTCP). In this effort we seek to extend and refine ordnance detection technology to more efficiently characterize OEW sites, identifying nonferrous and smaller items, distinguishing ordnance from clutter and analyzing clustered targets to identify and locate individual targets within complex target fields. Our evaluation shows that these goals are best met by combining magnetic and electromagnetic sensors. We report on field studies, conducted at a prepared test range, of commercial sensors arranged in arrays in various configurations, including cesium vapor magnetometers in single-sensor and gradiometric configurations, fluxgate gradiometers, proton precession magnetometers, and pulsed-induction electromagnetic sensors. The advantages and disadvantages of each technology and their applicability based upon survey requirements are discussed. We also discuss recommended data densities, including horizontal sensor spacings, survey speeds and sensor heights, and make recommendations about the appropriate use of gradiometers and active sensors.

  11. Real-time processing for Fourier domain optical coherence tomography using a field programmable gate array

    PubMed Central

    Ustun, Teoman E.; Iftimia, Nicusor V.; Ferguson, R. Daniel; Hammer, Daniel X.

    2008-01-01

    Real-time display of processed Fourier domain optical coherence tomography (FDOCT) images is important for applications that require instant feedback of image information, for example, systems developed for rapid screening or image-guided surgery. However, the computational requirements for high-speed FDOCT image processing usually exceed the capabilities of most computers, and therefore display rates rarely match acquisition rates for most devices. We have designed and developed an image processing system, including hardware based upon a field-programmable gate array, firmware, and software, that enables real-time display of processed images at rapid line rates. The system was designed to be extremely flexible and inserted in-line between any FDOCT detector and any Camera Link frame grabber. Two versions were developed for spectrometer-based and swept source-based FDOCT systems, the latter having an additional custom high-speed digitizer on the front end but using all the capabilities and features of the former. The system was tested in humans and monkeys using an adaptive optics retinal imager, in zebrafish using a dual-beam Doppler instrument, and in human tissue using a swept source microscope. A display frame rate of 27 fps for fully processed FDOCT images (1024 axial pixels × 512 lateral A-scans) was achieved in the spectrometer-based systems. PMID:19045902
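
    The per-line processing that such hardware typically accelerates (background subtraction, windowing, and an FFT of the spectral interferogram to form an A-scan) can be sketched in software. The example below is a hedged illustration with a synthetic spectrum; the actual FPGA pipeline also performs steps such as k-space resampling.

```python
import numpy as np

# Hedged software sketch of per-line FDOCT processing: subtract the source
# background, apply a window, and FFT the spectral interferogram to obtain the
# depth (A-scan) profile.  The spectrum below is synthetic.

n_pixels = 1024
k = np.linspace(0, 1, n_pixels)                    # detector pixel axis
background = 1000.0 * np.exp(-((k - 0.5) ** 2) / 0.05)
depth_fringe = 50.0 * np.cos(2 * np.pi * 120 * k)  # reflector at "bin 120"
spectrum = background + depth_fringe

a_scan = np.abs(np.fft.rfft((spectrum - background) * np.hanning(n_pixels)))
print("peak depth bin:", int(a_scan.argmax()))     # ~120, as encoded above
```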

  12. Improving GPR Surveys Productivity by Array Technology and Fully Automated Processing

    NASA Astrophysics Data System (ADS)

    Morello, Marco; Ercoli, Emanuele; Mazzucchelli, Paolo; Cottino, Edoardo

    2016-04-01

    The realization of network infrastructures with lower environmental impact and the tendency to use digging technologies that are less invasive in terms of the time and space of road occupation and restoration play a key role in the development of communication networks. However, pre-existing buried utilities must be detected and located in the subsurface to exploit the high productivity of modern digging apparatus. According to SUE quality level B+, both the position and the depth of subsurface utilities must be accurately estimated, demanding 3D GPR surveys. In fact, the advantages of 3D GPR acquisitions (obtained either by multiple 2D recordings or by an antenna array) versus 2D acquisitions are well known. Nonetheless, the amount of acquired data for such 3D acquisitions does not usually allow processing and interpretation to be completed directly in the field and in real time, thus limiting the overall efficiency of the GPR acquisition. As an example, the "low-impact mini-trench" technique (addressed in ITU (International Telecommunication Union) recommendation L.83) requires that the productivity of non-destructive mapping of buried services be enhanced to match the improvements of new digging equipment. Nowadays, multi-antenna and multi-pass GPR acquisitions demand new processing techniques that can obtain high-quality subsurface images while taking full advantage of 3D data: the development of a fully automated and real-time 3D GPR processing system plays a key role in overall optical network deployment profitability. Furthermore, currently available computing power suggests the feasibility of processing schemes that incorporate better focusing algorithms. A novel processing scheme, whose goal is the automated processing and detection of buried targets and which can be applied in real time to 3D GPR array systems, has been developed and fruitfully tested with two different GPR arrays (16 antennas, 900 MHz central frequency, and 34 antennas, 600 MHz central frequency). The proposed processing

  13. A hierarachical data structure representation for fusing multisensor information

    SciTech Connect

    Maren, A.J. . Space Inst.); Pap, R.M.; Harston, C.T. )

    1989-01-01

    A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem through developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  14. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-12-31

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem through developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  15. A hierarachical data structure representation for fusing multisensor information

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-12-31

    A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem through developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  16. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J. . Space Inst.); Pap, R.M.; Harston, C.T. )

    1989-01-01

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem through developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  17. Integrated multi-sensor package (IMSP) for unmanned vehicle operations

    NASA Astrophysics Data System (ADS)

    Crow, Eddie C.; Reichard, Karl; Rogan, Chris; Callen, Jeff; Seifert, Elwood

    2007-10-01

    This paper describes recent efforts to develop integrated multi-sensor payloads for small robotic platforms for improved operator situational awareness and ultimately for greater robot autonomy. The focus is on enhancements to perception through integration of electro-optic, acoustic, and other sensors for navigation and inspection. The goals are to provide easier control and operation of the robot through fusion of multiple sensor outputs, to improve interoperability of the sensor payload package across multiple platforms through the use of open standards and architectures, and to reduce integration costs by embedded sensor data processing and fusion within the sensor payload package. The solutions investigated in this project to be discussed include: improved capture, processing and display of sensor data from multiple, non-commensurate sensors; an extensible architecture to support plug and play of integrated sensor packages; built-in health, power and system status monitoring using embedded diagnostics/prognostics; sensor payload integration into standard product forms for optimized size, weight and power; and the use of the open Joint Architecture for Unmanned Systems (JAUS)/ Society of Automotive Engineers (SAE) AS-4 interoperability standard. This project is in its first of three years. This paper will discuss the applicability of each of the solutions in terms of its projected impact to reducing operational time for the robot and teleoperator.

  18. Statistical Analysis of the Performance of MDL Enumeration for Multiple-Missed Detection in Array Processing

    PubMed Central

    Du, Fei; Li, Yibo; Jin, Shijiu

    2015-01-01

    An accurate performance analysis on the MDL criterion for source enumeration in array processing is presented in this paper. The enumeration results of MDL can be predicted precisely by the proposed procedure via the statistical analysis of the sample eigenvalues, whose distributive properties are investigated with the consideration of their interactions. A novel approach is also developed for the performance evaluation when the source number is underestimated by a number greater than one, which is denoted as “multiple-missed detection”, and the probability of a specific underestimated source number can be estimated by ratio distribution analysis. Simulation results are included to demonstrate the superiority of the presented method over available results and confirm the ability of the proposed approach to perform multiple-missed detection analysis. PMID:26295232
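
    For reference, the classical MDL enumeration criterion that the paper analyzes can be sketched directly from the sample eigenvalues. The example below uses the standard Wax-Kailath form on a simulated two-source scenario; the array size, SNR and snapshot count are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of classical MDL source enumeration from the sample
# eigenvalues (Wax & Kailath form); the paper analyzes the statistics of this
# criterion rather than the criterion itself.

def mdl_enumerate(eigvals, n_snapshots):
    p = eigvals.size
    costs = []
    for k in range(p):
        tail = eigvals[k:]
        geo = np.exp(np.mean(np.log(tail)))        # geometric mean
        ari = np.mean(tail)                         # arithmetic mean
        costs.append(-n_snapshots * (p - k) * np.log(geo / ari)
                     + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(costs))

rng = np.random.default_rng(7)
p, n, n_src = 8, 500, 2
steering = rng.standard_normal((p, n_src)) + 1j * rng.standard_normal((p, n_src))
signals = rng.standard_normal((n_src, n)) + 1j * rng.standard_normal((n_src, n))
noise = 0.3 * (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n)))
x = steering @ signals + noise

R = x @ x.conj().T / n                              # sample covariance matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print("estimated number of sources:", mdl_enumerate(eigvals, n))
```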

  19. A Field-Programmable Analog Array Development Platform for Vestibular Prosthesis Signal Processing

    PubMed Central

    Töreyin, Hakan; Bhatti, Pamela

    2015-01-01

    We report on a vestibular prosthesis signal processor realized using an experimental field-programmable analog array (FPAA). Completing signal processing functions in the analog domain, the processor is designed to help replace a malfunctioning inner ear sensory organ, a semicircular canal. Relying on angular head motion detected by an inertial sensor, the signal processor maps angular velocity into meaningful control signals to drive a current stimulator. To demonstrate biphasic pulse control, a 1 kΩ resistive load was placed across an H-bridge circuit. When connected to a 2.4 V supply, a biphasic current of 100 μA was maintained at stimulation frequencies from 50–350 Hz, pulsewidths from 25–400 μsec, and interphase gaps ranging from 25–250 μsec. PMID:23853331

  20. Enhanced Processing for a Towed Array Using an Optimal Noise Canceling Approach

    SciTech Connect

    Sullivan, E J; Candy, J V

    2005-07-21

    Noise self-generated by a surface ship towing an array in search of a weak target presents a major problem for the signal processing, especially if broadband techniques are being employed. In this paper we discuss the development and application of an adaptive noise canceling processor capable of extracting the weak far-field acoustic target in a noisy ocean acoustic environment. The fundamental idea for this processor is to use a model-based approach incorporating both target and ship noise. Here we briefly describe the underlying theory and then demonstrate through simulation how effectively the canceller and target enhancer perform. The adaptivity of the processor not only enables the "tracking" of the canceller coefficients, but also the estimation of target parameters for localization. This approach, termed "joint" cancellation and enhancement, produces the optimal estimate of both in a minimum (error) variance sense.
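
    The basic cancellation idea (adaptively filter a reference correlated with the self-noise and subtract it from the array output) can be illustrated with a generic LMS noise canceller. The sketch below is a hedged stand-in, not the model-based joint canceller/enhancer of the paper; the coupling filter and noise levels are assumptions.

```python
import numpy as np

# Generic LMS adaptive noise cancellation: a reference channel correlated with
# the ship self-noise is adaptively filtered and subtracted from the primary
# channel, leaving an estimate of the weak target signal.

rng = np.random.default_rng(8)
n, taps, mu = 20000, 16, 0.01
ship_noise = rng.standard_normal(n)
target = 0.1 * np.sin(2 * np.pi * 0.01 * np.arange(n))   # weak target tone

# Primary sensor: target plus a filtered version of the ship noise.
# Reference sensor: (mostly) the ship noise.
coupling = np.array([0.6, -0.3, 0.2, 0.1])
primary = target + np.convolve(ship_noise, coupling)[:n]
reference = ship_noise + 0.05 * rng.standard_normal(n)

w = np.zeros(taps)
cleaned = np.zeros(n)
for i in range(taps, n):
    x = reference[i - taps + 1:i + 1][::-1]   # most recent reference samples
    e = primary[i] - w @ x                    # cancellation error = signal estimate
    w += mu * e * x                           # LMS weight update
    cleaned[i] = e

print("noise power before:", round(np.var(primary - target), 3),
      " after:", round(np.var(cleaned[taps:] - target[taps:]), 3))
```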

  1. Alternative Post-Processing on a CMOS Chip to Fabricate a Planar Microelectrode Array

    PubMed Central

    López-Huerta, Francisco; Herrera-May, Agustín L.; Estrada-López, Johan J.; Zuñiga-Islas, Carlos; Cervantes-Sanchez, Blanca; Soto, Enrique; Soto-Cruz, Blanca S.

    2011-01-01

    We present an alternative post-processing on a CMOS chip to release a planar microelectrode array (pMEA) integrated with its signal readout circuit, which can be used for monitoring the neuronal activity of vestibular ganglion neurons in newborn Wistar strain rats. The chip is fabricated in a standard 0.6 μm CMOS process and has a 12-electrode pMEA arranged in a 4 × 3 matrix. The alternative CMOS post-process includes the development of masks to protect the readout circuit and the power supply pads. A wet etching process removes the aluminum located on the surface of the p+-type silicon. This silicon is used as a transducer for recording the neuronal activity and as an interface between the readout circuit and the neurons. The readout circuit is composed of an amplifier and a tunable bandpass filter, which occupy a 0.015 mm2 silicon area. The tunable bandpass filter has a bandwidth of 98 kHz and a common mode rejection ratio (CMRR) of 87 dB. These characteristics of the readout circuit are appropriate for neuronal recording applications. PMID:22346681

  2. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array

    NASA Astrophysics Data System (ADS)

    Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H.

    2012-04-01

    The monitoring of geomorphic processes during extreme climatic events is of primary interest for estimating their impact on landscape dynamics. However, available techniques for surveying surface activity do not provide adequate temporal and/or spatial resolution. Furthermore, these methods hardly capture the dynamics of the events, since detection is made a posteriori. To increase our knowledge of landscape evolution and of the influence of extreme climatic events on catchment dynamics, we need to develop new tools and procedures. Many past works have shown that seismic signals can be used to detect and locate surface processes (landslides, debris flows). During the 2010 typhoon season, we deployed a network of 12 seismometers dedicated to monitoring the surface processes of the Chenyoulan catchment in Taiwan. We test the ability of a two-dimensional array with small inter-station distances (~11 km) to map the geomorphic activity continuously and at the catchment scale. The spectral analysis of continuous records shows high-frequency (> 1 Hz) seismic energy that is coherent with the occurrence of hillslope and river processes. Using a basic detection algorithm and a location approach based on the analysis of seismic amplitudes, we are able to locate the catchment activity. We mainly observe short-duration events (> 300 occurrences) associated with debris falls and bank collapses during daily convective storms, where 69% of occurrences are coherent with the time distribution of precipitation. We also identify a couple of debris flows during a large tropical storm. In contrast, the FORMOSAT imagery does not detect any activity, which partly reflects the lack of extreme climatic conditions during the experiment. However, high-resolution pictures confirm the existence of links between most geomorphic events and existing structures (landslide scars, gullies...). We thus conclude that the activity is dominated by reactivation processes. It

  3. Adaptive multi-sensor integration for mine detection

    SciTech Connect

    Baker, J.E.

    1997-05-01

    State-of-the-art multi-sensor integration (MSI) applications involve extensive research and development time to understand and characterize the application domain; to determine and define the appropriate sensor suite; to analyze, characterize, and calibrate the individual sensor systems; to recognize and accommodate the various sensor interactions; and to develop and optimize robust merging code. Much of this process can benefit from adaptive learning, i.e., an output-based system can take raw sensor data and desired merged results as input and adaptively develop/determine an effective method of interpretation and merging. This approach significantly reduces the time required to apply MSI to a given application, while increasing the quality of the final result, and provides a quantitative measure for comparing competing MSI techniques and sensor suites. The ability to automatically develop and optimize MSI techniques for new sensor suites and operating environments makes this approach well suited to the detection of mines and mine-like targets. Perhaps more than any other, this application domain is characterized by diverse, innovative, and dynamic sensor suites, whose nature and interactions are not yet well established. This paper presents such an outcome-based multi-image analysis system. An empirical evaluation of its performance and of its application, sensor, and domain robustness is presented.

  4. Multisensor based robotic manipulation in an uncalibrated manufacturing workcell

    SciTech Connect

    Ghosh, B.K.; Xiao, Di; Xi, Ning; Tarn, Tzyh-Jong

    1997-12-31

    The main problem that we address in this paper is how a robot manipulator can track and grasp a part placed arbitrarily on a moving disc conveyor, aided by a single CCD camera and by fusing information from encoders on the conveyor and on the robot manipulator. The important assumption that distinguishes our work from what has been previously reported in the literature is that the position and orientation of the camera relative to the base frame of the robot are assumed a priori to be unknown and are 'visually calibrated' during the operation of the manipulator. Moreover, the part placed on the conveyor is assumed to be non-planar, i.e., the feature points observed on the part are assumed to be located arbitrarily in R^3. The novelties of the proposed approach include (i) a multisensor fusion scheme based on complementary data for the purpose of part localization, and (ii) self-calibration between the turntable and the robot manipulator using visual data and feature points on the end-effector. The principal advantages of the proposed scheme are the following. (i) It makes it possible to reconfigure a manufacturing workcell without recalibrating the relation between the turntable and the robot, which significantly shortens the setup time of the workcell. (ii) It greatly weakens the requirement on the image processing speed.

  5. Investigation on fabrication process of dissolving microneedle arrays to improve effective needle drug distribution.

    PubMed

    Wang, Qingqing; Yao, Gangtao; Dong, Pin; Gong, Zihua; Li, Ge; Zhang, Kejian; Wu, Chuanbin

    2015-01-23

    The dissolving microneedle array (DMNA) offers a novel potential approach for transdermal delivery of biological macromolecular drugs and vaccines, because it can be as efficient as hypodermic injection and as safe and patient compliant as conventional transdermal delivery. However, effective needle drug distribution is the main challenge for clinical application of DMNA. This study focused on the mechanism and control of drug diffusion inside DMNA during the fabrication process in order to improve the drug delivery efficiency. The needle drug loading proportion (NDP) in DMNAs was measured to determine the influences of drug concentration gradient, needle drying step, excipients, and solvent of the base solution on drug diffusion and distribution. The results showed that the evaporation of base solvent was the key factor determining NDP. Slow evaporation of water from the base led to gradual increase of viscosity, and an approximate drug concentration equilibrium was built between the needle and base portions, resulting in NDP as low as about 6%. When highly volatile ethanol was used as the base solvent, the viscosity in the base rose quickly, resulting in NDP more than 90%. Ethanol as base solvent did not impact the insertion capability of DMNAs, but greatly increased the in vitro drug release and transdermal delivery from DMNAs. Furthermore, the drug diffusion process during DMNA fabrication was thoroughly investigated for the first time, and the outcomes can be applied to most two-step molding processes and optimization of the DMNA fabrication. PMID:25446513

  6. Approach to multisensor/multilook information fusion

    NASA Astrophysics Data System (ADS)

    Myler, Harley R.; Patton, Ronald

    1997-07-01

    We are developing a multi-sensor, multi-look Artificial Intelligence Enhanced Information Processor (AIEIP) that combines classification elements of geometric hashing, neural networks and evolutionary algorithms in a synergistic combination. The fusion is coordinated using a piecewise level fusion algorithm that operates on probability data from statistics of the individual classifiers. Further, the AIEIP incorporates a knowledge-based system to aid a user in evaluating target data dynamically. The AIEIP is intended as a semi-autonomous system that not only fuses information from electronic data sources, but also has the capability to include human input derived from battlefield awareness and intelligence sources. The system would be useful either in advanced reconnaissance information fusion tasks where multiple fixed sensors and human observer inputs must be combined or for a dynamic fusion scenario incorporating an unmanned vehicle swarm with dynamic, multiple sensor data inputs. This paper presents our initial results from experiments and data analysis using the individual components of the AIEIP on FLIR target sets of ground vehicles.

  7. Multi-Sensor Aerosol Products Sampling System

    NASA Technical Reports Server (NTRS)

    Petrenko, M.; Ichoku, C.; Leptoukh, G.

    2011-01-01

    Global and local properties of atmospheric aerosols have been extensively observed and measured using both spaceborne and ground-based instruments, especially during the last decade. Unique properties retrieved by the different instruments contribute to an unprecedented availability of the most complete set of complementary aerosol measurements ever acquired. However, some of these measurements remain underutilized, largely due to the complexities involved in analyzing them synergistically. To characterize the inconsistencies and bridge the gap that exists between the sensors, we have established a Multi-sensor Aerosol Products Sampling System (MAPSS), which consistently samples and generates the spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of aerosol products from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS. Samples of satellite aerosol products are extracted over Aerosol Robotic Network (AERONET) locations as well as over other locations of interest such as those with available ground-based aerosol observations. In this way, MAPSS enables a direct cross-characterization and data integration between Level-2 aerosol observations from multiple sensors. In addition, the available well-characterized co-located ground-based data provide the basis for the integrated validation of these products. This paper explains the sampling methodology and concepts used in MAPSS, and demonstrates specific examples of using MAPSS for an integrated analysis of multiple aerosol products.
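
    The kind of per-site sampling statistics described above can be sketched simply: select the aerosol-product pixels from two sensors within a fixed radius of a ground site and compute their mean, standard deviation, and inter-sensor correlation. The example below is a hedged illustration on a synthetic grid; it is not the MAPSS code, and the radius and AOD values are assumptions.

```python
import numpy as np

# Hedged sketch of MAPSS-style sampling: take the aerosol-product pixels from
# two sensors that fall within a fixed radius of a ground (AERONET) site and
# compute simple spatial statistics.  Grids, radius and AOD values are synthetic.

rng = np.random.default_rng(9)
lat = np.linspace(34.0, 36.0, 41)
lon = np.linspace(-119.0, -117.0, 41)
LON, LAT = np.meshgrid(lon, lat)
aod_a = 0.15 + 0.05 * np.sin(LAT * 3) + 0.02 * rng.standard_normal(LAT.shape)
aod_b = aod_a + 0.03 * rng.standard_normal(LAT.shape)   # second sensor, noisier

site_lat, site_lon, radius_deg = 35.0, -118.0, 0.25
mask = (LAT - site_lat) ** 2 + (LON - site_lon) ** 2 <= radius_deg ** 2

sample_a, sample_b = aod_a[mask], aod_b[mask]
print("sensor A: mean %.3f  std %.3f" % (sample_a.mean(), sample_a.std()))
print("sensor B: mean %.3f  std %.3f" % (sample_b.mean(), sample_b.std()))
print("inter-sensor correlation: %.3f" % np.corrcoef(sample_a, sample_b)[0, 1])
```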

  8. Multisensor data fusion for automated guided vehicles

    NASA Astrophysics Data System (ADS)

    Mahmoud, Rachad; Loffeld, Otmar; Hartmann, Klaus

    1994-11-01

    The paper addresses multi-sensor data fusion for the navigation of a four-wheel vehicle with two driven wheels. The main advantage of such a configuration is its flexibility concerning free motion and navigation; this advantage is paid for, however, with increased complexity in the dynamic model of the vehicle. The basic sensors of the vehicle comprise a fiber-optic gyro, continuously delivering angular orientation information (namely the angular velocity), and a landmark sensor, delivering global position information at those instants where a landmark is available and within the reach of the sensor. Optionally, an undriven measuring wheel, which is therefore not subject to slippage, can be added. The control inputs to the vehicle are taken to be nominally known but noisy, subject to errors from measurement inaccuracies and unknown influences. The approach taken in the paper essentially uses Kalman filtering ideas, namely extended Kalman filtering, to implement multi-model filtering. The Kalman filter incorporates the different noisy measurements in order to `fuse' them into one precise position and orientation estimate, copes with global information that is only temporarily available, and automatically realizes dead reckoning where no global information is available. The paper covers the state space formulation of the problem and discusses the different models needed to describe the different motions. Based on a realistic state space model, the corresponding Kalman filter is designed and tested with simulated measurement data delivered by a truth model simulator.
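
    As a rough illustration of the fusion scheme sketched above, the snippet below combines gyro/odometry dead reckoning with intermittent landmark position fixes in a small Kalman-style filter. The unicycle state model, noise levels and landmark schedule are invented for the example and are not taken from the paper.

```python
# Minimal sketch of Kalman-style fusion of gyro, odometry and intermittent
# landmark fixes for planar vehicle navigation (illustrative only; the paper's
# actual state-space and noise models are not reproduced here).
import numpy as np

def predict(x, P, v, omega, dt, q_v=0.05, q_w=0.01):
    """Dead-reckoning prediction from noisy speed v and gyro rate omega."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    Q = np.diag([q_v * dt, q_v * dt, q_w * dt])
    return x_pred, F @ P @ F.T + Q

def landmark_update(x, P, z, R=np.diag([0.1, 0.1])):
    """Fuse a global (x, y) position fix whenever a landmark is in view."""
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    return x_new, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.01
for k in range(100):
    x, P = predict(x, P, v=1.0, omega=0.05, dt=0.1)
    if k % 25 == 0:                      # landmark only occasionally visible
        z = x[:2] + np.random.randn(2) * 0.1   # simulated global fix
        x, P = landmark_update(x, P, z)
```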

  9. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    NASA Astrophysics Data System (ADS)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells (NWSCs) is expected to reach as high as 44% with 1% N in the core and 9% N in the shell. By developing such high-efficiency NWSCs on silicon substrates, the cost of solar photovoltaics could be further reduced to $61/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. Therefore, a suitable NWSC structure and fabrication process need to be developed to achieve this promising NWSC. This thesis is devoted to the development of a fabrication process for GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of gallium nitride phosphide nanowire solar cells. The designs of the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer and metal contacts are discussed and fabricated. The properties of these NWSCs are also characterized to point out future development directions for gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective area growth of nanowires to improve the structure of the core-shell NWSC. The fabrication process of the nano-hole templates and the results are presented. To obtain consistent features of the nano-hole template, the Taguchi method is used to optimize the fabrication process of the nano-hole templates.

  10. Fabrication of dense non-circular nanomagnetic device arrays using self-limiting low-energy glow-discharge processing.

    PubMed

    Zheng, Zhen; Chang, Long; Nekrashevich, Ivan; Ruchhoeft, Paul; Khizroev, Sakhrat; Litvinov, Dmitri

    2013-01-01

    We describe a low-energy glow-discharge process using a reactive ion etching system that enables non-circular device patterns, such as squares or hexagons, to be formed from a precursor array of uniform circular openings in polymethyl methacrylate (PMMA) defined by electron beam lithography. This technique is of particular interest for bit-patterned magnetic recording medium fabrication, where close-packed square magnetic bits may improve recording performance. The process and results of generating close-packed square patterns by self-limiting low-energy glow discharge are investigated. Dense magnetic arrays formed by electrochemical deposition of nickel over the self-limiting molds are demonstrated.

  11. Weighted measurement fusion Kalman estimator for multisensor descriptor system

    NASA Astrophysics Data System (ADS)

    Dou, Yinfeng; Ran, Chenjian; Gao, Yuan

    2016-08-01

    For the multisensor linear stochastic descriptor system with correlated measurement noises, the fused measurement can be obtained based on the weighted least squares (WLS) method, and the reduced-order state components are obtained by applying the singular value decomposition method. The multisensor descriptor system is thereby transformed into a fused reduced-order non-descriptor system with correlated noise, and the weighted measurement fusion (WMF) Kalman estimator of this reduced-order subsystem is presented. From the relationship between the presented non-descriptor system and the original descriptor system, the WMF Kalman estimator and its estimation error variance matrix for the original multisensor descriptor system are obtained. The presented WMF Kalman estimator has global optimality and, compared with the state fusion method, avoids computing the cross-covariances of the local Kalman estimators. A simulation example of a three-sensor stochastic dynamic input-output system in economics verifies its effectiveness.
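
    The snippet below is a minimal sketch of the weighted-least-squares measurement fusion step referred to above: measurements of the same state from several sensors are stacked and weighted by the inverse of their joint noise covariance. The measurement matrices and covariance values are invented for illustration; the descriptor-system reduction itself is not reproduced.

```python
# Toy weighted-least-squares (WLS) fusion of several sensors observing the
# same state x through matrices H_i with joint noise covariance R.
import numpy as np

def wls_fuse(H_list, z_list, R):
    """Stack measurements z_i = H_i x + v_i and solve the WLS problem."""
    H = np.vstack(H_list)
    z = np.concatenate(z_list)
    W = np.linalg.inv(R)                       # weight by inverse covariance
    P = np.linalg.inv(H.T @ W @ H)             # fused error covariance
    x_hat = P @ H.T @ W @ z                    # fused estimate
    return x_hat, P

x_true = np.array([1.0, -2.0])
H_list = [np.eye(2), np.eye(2), np.array([[1.0, 1.0]])]   # three "sensors"
R = np.diag([0.2, 0.2, 0.1, 0.1, 0.5])        # block-diagonal here for brevity
z_list = [H @ x_true + np.random.randn(H.shape[0]) * 0.1 for H in H_list]
x_hat, P = wls_fuse(H_list, z_list, R)
print(x_hat)
```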

  12. Investigation of Proposed Process Sequence for the Array Automated Assembly Task, Phase 2. [low cost silicon solar array fabrication

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Bunyan, S.; Pepe, A.

    1979-01-01

    The technological readiness of the proposed process sequence was reviewed. Process steps evaluated include: (1) plasma etching to establish a standard surface; (2) forming junctions by diffusion from an N-type polymeric spray-on source; (3) forming a p+ back contact by firing a screen printed aluminum paste; (4) forming screen printed front contacts after cleaning the back aluminum and removing the diffusion oxide; (5) cleaning the junction by a laser scribe operation; (6) forming an antireflection coating by baking a polymeric spray-on film; (7) ultrasonically tin padding the cells; and (8) assembling cell strings into solar circuits using ethylene vinyl acetate as an encapsulant and laminating medium.

  13. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004, the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40-samples-per-second seismic and state-of-health data is recorded from the stations. The ANF provides analysts with access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are met by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  14. Biologically inspired large scale chemical sensor arrays and embedded data processing

    NASA Astrophysics Data System (ADS)

    Marco, S.; Gutiérrez-Gálvez, A.; Lansner, A.; Martinez, D.; Rospars, J. P.; Beccherelli, R.; Perera, A.; Pearce, T.; Vershure, P.; Persaud, K.

    2013-05-01

    Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy and efficient combinatorial coding with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. The EU-funded project NEUROCHEM (Bio-ICT-FET-216916) has developed novel computing paradigms and biologically motivated artefacts for chemical sensing, taking inspiration from the biological olfactory pathway. To demonstrate this approach, a biomimetic demonstrator has been built featuring a large-scale sensor array (65K elements) in conducting polymer technology mimicking the olfactory receptor neuron layer, and abstracted biomimetic algorithms have been implemented in an embedded system that interfaces the chemical sensors. The embedded system integrates computational models of the main anatomic building blocks in the olfactory pathway: the olfactory bulb and olfactory cortex in vertebrates (alternatively, the antennal lobe and mushroom bodies in insects). For implementation in the embedded processor, an abstraction phase has been carried out in which their processing capabilities are captured by algorithmic solutions. Finally, the algorithmic models are tested with an odour robot with navigation capabilities in mixed chemical plumes.

  15. Impedimetric real-time monitoring of neural pluripotent stem cell differentiation process on microelectrode arrays.

    PubMed

    Seidel, Diana; Obendorf, Janine; Englich, Beate; Jahnke, Heinz-Georg; Semkova, Vesselina; Haupt, Simone; Girard, Mathilde; Peschanski, Marc; Brüstle, Oliver; Robitzki, Andrea A

    2016-12-15

    In today's research on neural development and disease, human neural stem/progenitor cell-derived networks represent the sole accessible in vitro model possessing a primary phenotype. However, cultivation and, moreover, differentiation as well as maturation of human neural stem/progenitor cells are very complex and time-consuming processes. Therefore, techniques for the sensitive, non-invasive, real-time monitoring of neuronal differentiation and maturation are in high demand. Using impedance spectroscopy, the differentiation of several human neural stem/progenitor cell lines was analyzed in detail. After development of an optimized microelectrode array for reliable and sensitive long-term monitoring, distinct cell-dependent impedimetric parameters that could specifically be associated with the progress and quality of neuronal differentiation were identified. Cellular impedance changes correlated well with the temporal regulation of biomolecular progenitor versus mature neural marker expression as well as the cellular structure changes accompanying neuronal differentiation. More strikingly, the capability of the impedimetric differentiation monitoring system for use as a screening tool was demonstrated by applying compounds that are known to promote neuronal differentiation, such as the γ-secretase inhibitor DAPT. The non-invasive impedance spectroscopy-based measurement system can be used for sensitive and quantitative monitoring of neuronal differentiation processes. Therefore, this technique could be a very useful tool for quality control of neuronal differentiation and, moreover, for neurogenic compound identification and industrial high-content screening demands in the field of safety assessment as well as drug development.

  16. Reduction of mine suspected areas by multisensor airborne measurements: first results

    NASA Astrophysics Data System (ADS)

    Keller, Martin; Milisavljevic, Nada; Suess, Helmut; Acheroy, Marc P. J.

    2002-08-01

    Humanitarian demining is very dangerous, cost- and time-intensive work, in which a lot of effort is usually wasted inspecting suspected areas that turn out to be mine-free. The main goal of the project SMART (Space and airborne Mined Area Reduction Tools) is to apply a multisensor approach to the collection of corresponding signature data and to develop adapted data-understanding and data-processing tools that improve the efficiency and reliability of level 1 minefield surveys by reducing suspected mined areas. As a result, the time needed to release mine-free areas for civilian use should be shortened. In this paper, multisensor signature data collected at four mine-suspected areas in different parts of Croatia are presented, their information content is discussed, and first results are described. The multisensor system consists of a multifrequency multipolarization SAR system (the DLR Experimental Synthetic Aperture Radar, E-SAR), an optical scanner (Daedalus) and a camera (RMK) for color infrared aerial views. E-SAR data were acquired in the X-, C-, L- and P-bands, the latter two being fully polarimetric and interferometric. This provides several pieces of independent information, ranging from high spatial resolution (X-band) to very good penetration abilities (P-band), together with possibilities for polarimetric and interferometric analysis. The Daedalus scanner, with 12 channels between the visible and thermal infrared, has a very high spatial resolution. For each of the sensors, the applied processing, geocoding and registration are described. The information content is analyzed in terms of the capability and reliability of describing conditions inside suspected mined areas, as a first step towards identifying their mine-free parts, with special emphasis on polarimetric and interferometric information.

  17. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming and conventional beamforming methods in order to better understand their advantages and features for studying earthquake rupture processes. We use the Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding window. Based on our tests, we find that CS, which is built on the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake
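
    For readers unfamiliar with these estimators, the sketch below contrasts conventional (Bartlett) beamforming with MUSIC on a synthetic narrowband uniform linear array. The array geometry, wave speed and source directions are invented for illustration and are unrelated to the teleseismic configurations used in the study.

```python
# Sketch of narrowband frequency-domain array processing: conventional
# (Bartlett) beamforming vs. MUSIC on a uniform linear array.
import numpy as np

c, f, M, d = 3000.0, 1.0, 16, 1000.0          # wave speed, freq, sensors, spacing
pos = np.arange(M) * d

def steering(theta):
    tau = pos * np.sin(theta) / c              # plane-wave delays across the array
    return np.exp(-2j * np.pi * f * tau)

# Two plane waves whose waveforms overlap in time, plus noise, over N snapshots
thetas_true = np.radians([20.0, 35.0])
N = 200
A = np.column_stack([steering(t) for t in thetas_true])
S = np.random.randn(2, N) + 1j * np.random.randn(2, N)
X = A @ S + 0.1 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
R = X @ X.conj().T / N                         # sample covariance matrix

# Noise subspace for MUSIC (number of sources assumed known here)
w, V = np.linalg.eigh(R)
En = V[:, :-2]

scan = np.radians(np.linspace(0, 60, 601))
p_bart = [np.real(steering(t).conj() @ R @ steering(t)) for t in scan]
p_music = [1.0 / np.real(np.linalg.norm(En.conj().T @ steering(t)) ** 2)
           for t in scan]
print(np.degrees(scan[np.argmax(p_music)]))    # strongest MUSIC peak (degrees)
```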

  18. Planarized process for resonant leaky-wave coupled phase-locked arrays of mid-IR quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Chang, C.-C.; Kirch, J. D.; Boyle, C.; Sigler, C.; Mawst, L. J.; Botez, D.; Zutter, B.; Buelow, P.; Schulte, K.; Kuech, T.; Earles, T.

    2015-03-01

    On-chip resonant leaky-wave coupling of quantum cascade lasers (QCLs) emitting at 8.36 μm has been realized by selective regrowth of interelement layers in curved trenches defined by dry and wet etching. The fabricated structure provides large index steps (Δn = 0.10) between antiguided-array element and interelement regions. In-phase-mode operation to 5.5 W front-facet emitted power in a near-diffraction-limited far-field beam pattern, with 4.5 W in the main lobe, is demonstrated. A refined fabrication process has been developed to produce phase-locked antiguided arrays of QCLs with planar geometry. The main fabrication steps in this process include non-selective regrowth of Fe:InP in interelement trenches defined by inductively coupled plasma (ICP) etching, a chemical polishing (CP) step to planarize the surface, non-selective regrowth of interelement layers, ICP selective etching of interelement layers, and non-selective regrowth of an InP cladding layer followed by another CP step to form the element regions. This new process results in planar InGaAs/InP interelement regions, which allows for significantly improved control over the array geometry and the dimensions of the element and interelement regions. Such a planar process is highly desirable for realizing shorter-wavelength (4.6 μm) emitting arrays, where fabrication tolerances for single-mode operation are tighter than for 8 μm-emitting devices.

  19. Medical ultrasound digital beamforming on a massively parallel processing array platform

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad

    2008-03-01

    Digital beamforming has been widely used in modern medical ultrasound instruments. Flexibility is the key advantage of a digital beamformer over the traditional analog approach. Unlike analog delay lines, digital delay can be programmed to implement new ways of beam shaping and beam steering without hardware modification. Digital beamformers can also be focused dynamically by tracking the depth and focusing the receive beam as the depth increases. By constantly updating an element weight table, a digital beamformer can dynamically increase the aperture size with depth to maintain constant lateral resolution and reduce sidelobe noise. Because ultrasound digital beamformers have high I/O bandwidth and processing requirements, traditionally they have been implemented using ASICs or FPGAs, which are costly in both time and money. This paper introduces a sample implementation of a digital beamformer that is programmed in software on a Massively Parallel Processor Array (MPPA). The system consists of a host PC and a PCI Express-based beamformer accelerator with an Ambric Am2045 MPPA chip and 512 Mbytes of external memory. The Am2045 has 336 asynchronous RISC-DSP processors that communicate through a configurable structure of channels, using a self-synchronizing communication protocol.
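
    The delay-and-sum operation described above is simple enough to show directly. The sketch below focuses one receive A-line by computing per-element round-trip delays for each depth and summing the delayed channel samples; the array geometry, sampling rate and random channel data are placeholders, and apodization/dynamic aperture is reduced to a single weight vector. It is not the Am2045 implementation.

```python
# Minimal delay-and-sum receive beamformer with dynamic (per-depth) focusing.
import numpy as np

def das_beamform(rf, elem_x, fs, c, depths, channel_weights=None):
    """rf: (n_elements, n_samples) receive data; returns one focused A-line."""
    n_el, n_samp = rf.shape
    w = np.ones(n_el) if channel_weights is None else channel_weights
    line = np.zeros(len(depths))
    for i, z in enumerate(depths):
        # round-trip delay for each element, focus point at (x = 0, depth z)
        t = (z + np.sqrt(z ** 2 + elem_x ** 2)) / c
        idx = np.round(t * fs).astype(int)
        valid = idx < n_samp
        line[i] = np.sum(w[valid] * rf[valid, idx[valid]])
    return line

fs, c = 40e6, 1540.0                           # sampling rate, speed of sound
elem_x = (np.arange(64) - 31.5) * 0.3e-3       # 64-element linear array [m]
rf = np.random.randn(64, 4096)                 # placeholder channel data
depths = np.linspace(5e-3, 40e-3, 512)
a_line = das_beamform(rf, elem_x, fs, c, depths)
```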

  20. Micro-processing of Hybrid Field-Effect Transistor Arrays using Picosecond Lasers

    NASA Astrophysics Data System (ADS)

    Ireland, Robert; Liu, Yu; Spalenka, Josef; Jaiswal, Supriya; Oishi, Shingo; Fukumitsu, Kenshi; Ryosuke, Mochizuki; Gopalan, Padma; Evans, Paul; Katz, Howard

    2014-03-01

    We use a solid-state picosecond laser to pattern thin-film semiconductors that completely cover a substrate and utilize an array of top-contact electrodes, particularly for materials with high chemical sensitivity or resistance. Picosecond laser processing is fully data-driven, both thermally and mechanically non-invasive, and exploits highly localized non-linear optical effects. We investigate FETs composed of p-channel tellurium and organic semiconductor molecules sequentially vapor-deposited onto Si/SiO2 substrates. Secondly, zinc oxide and zinc-tin oxide are used for high-mobility n-channel FETs, cast onto Si/SiO2 by a sol-gel method. Finally, zinc oxide FETs are prepared as photomodulatable devices using rhenium bipyridine as a light-sensitive electron-donating molecule. The laser effectively isolates FETs while charge carrier mobility is maintained, leakage currents through the FET dielectric are drastically reduced, and other functions are enhanced. For instance, the ratio of measured gate current to photocurrent for photomodulatable FETs drops from a factor of five to zero after laser isolation, in both forward and reverse bias. We also observe a threshold voltage shift in organic semiconductors after laser isolation, possibly due to local charging effects.

  1. Solution-Processed Organic Thin-Film Transistor Array for Active-Matrix Organic Light-Emitting Diode

    NASA Astrophysics Data System (ADS)

    Harada, Chihiro; Hata, Takuya; Chuman, Takashi; Ishizuka, Shinichi; Yoshizawa, Atsushi

    2013-05-01

    We developed a 3-in. organic thin-film transistor (OTFT) array with an ink-jetted organic semiconductor. All layers except electrodes were fabricated by solution processes. The OTFT performed well without hysteresis, and the field-effect mobility in the saturation region was 0.45 cm² V⁻¹ s⁻¹, the threshold voltage was 3.3 V, and the on/off current ratio was more than 10⁶. We demonstrated a 3-in. active-matrix organic light-emitting diode (AMOLED) display driven by the OTFT array. The display could provide clear moving images. The peak luminance of the display was 170 cd/m².

  2. Near real-time, on-the-move multisensor integration and computing framework

    NASA Astrophysics Data System (ADS)

    Burnette, Chris; Schneider, Matt; Agarwal, Sanjeev; Deterline, Diane; Geyer, Chris; Phan, Chung D.; Lydic, Richard M.; Green, Kevin; Swett, Bruce

    2015-05-01

    Implanted mines and improvised devices are a persistent threat to Warfighters. Current Army countermine missions for route clearance need on-the-move standoff detection to improve the rate of advance. Vehicle-based forward looking sensors such as electro-optical and infrared (EO/IR) devices can be used to identify potential threats in near real-time (NRT) at safe standoff distance to support route clearance missions. The MOVERS (Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System) is a vehicle-based multi-sensor integration and exploitation system that ingests and processes video and imagery data captured from forward-looking EO/IR and thermal sensors, and also generates target/feature alerts, using the Video Processing and Exploitation Framework (VPEF) "plug and play" video processing toolset. The MOVERS Framework provides an extensible, flexible, and scalable computing and multi-sensor integration GOTS framework that enables the capability to add more vehicles, sensors, processors or displays, and a service architecture that provides low-latency raw video and metadata streams as well as a command and control interface. Functionality in the framework is exposed through the MOVERS SDK which decouples the implementation of the service and client from the specific communication protocols.

  3. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Dey, Dibyendu; Memis, Omer G.; Katsnelson, Alex; Mohseni, Hooman

    2008-03-01

    Fabrication of large-area periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for realizing 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays with a high aspect ratio over large areas on conventional positive and negative photoresists. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of holes of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows good agreement with the experimental results.

  4. The multisensor PHD filter: II. Erroneous solution via Poisson magic

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald

    2009-05-01

    The theoretical foundation for the probability hypothesis density (PHD) filter is the FISST multitarget differential and integral calculus. The "core" PHD filter presumes a single sensor. Theoretically rigorous formulas for the multisensor PHD filter can be derived using the FISST calculus, but are computationally intractable. A less theoretically desirable solution, the iterated-corrector approximation, must be used instead. Recently, it has been argued that an "elementary" methodology, the "Poisson-intensity approach," renders FISST obsolete. It has further been claimed that the iterated-corrector approximation is suspect, and in its place an allegedly superior "general multisensor intensity filter" has been proposed. In this and a companion paper I demonstrate that it is these claims which are erroneous. The companion paper introduces formulas for the actual "general multisensor intensity filter." In this paper I demonstrate that (1) the "general multisensor intensity filter" fails in important special cases; (2) it will perform badly in even the easiest multitarget tracking problems; and (3) these rather serious missteps suggest that the "Poisson-intensity approach" is inherently faulty.

  5. Optimal multisensor decision fusion of mine detection algorithms

    NASA Astrophysics Data System (ADS)

    Liao, Yuwei; Nolte, Loren W.; Collins, Leslie M.

    2003-09-01

    Numerous detection algorithms, using various sensor modalities, have been developed for the detection of mines in cluttered and noisy backgrounds. The performance of each detection algorithm is typically reported in terms of the Receiver Operating Characteristic (ROC), which is a plot of the probability of detection versus the probability of false alarm as a function of the threshold setting on the output decision variable of the algorithm. In this paper we present multi-sensor decision fusion algorithms that combine the local decisions of existing detection algorithms for different sensors. This offers, in certain situations, an expedient, attractive and much simpler alternative to "starting over" with the design of a new algorithm that fuses multiple sensors at the data level. The goal of our multi-sensor decision fusion approach is to exploit the complementary strengths of existing multi-sensor algorithms so as to achieve performance (ROC) that exceeds the performance of any sensor algorithm operating in isolation. Our approach to multi-sensor decision fusion is based on optimal signal detection theory, using the likelihood ratio. We consider the optimal fusion of local decisions for two sensors, GPR (ground penetrating radar) and MD (metal detector). A new robust algorithm for decision fusion is presented that addresses the problem that the statistics of the training data are not likely to exactly match the statistics of the test data. ROCs are presented and compared for real data.
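
    As a toy illustration of likelihood-ratio decision fusion for two sensors, the sketch below combines binary local decisions using the classical log-likelihood-ratio weighting of each sensor's operating point (a Chair-Varshney style rule). The Pd/Pfa numbers are invented, and the paper's robust fusion algorithm and actual GPR/MD statistics are not reproduced.

```python
# Sketch of optimal fusion of binary local decisions from two sensors
# (e.g. GPR and MD) via the log-likelihood ratio of the decision vector.
import numpy as np

def fuse_decisions(u, pd, pfa, log_prior_ratio=0.0):
    """u: array of 0/1 local decisions; pd, pfa: per-sensor operating points.
    Returns 1 (target declared) if the fused log-likelihood ratio is positive."""
    u, pd, pfa = map(np.asarray, (u, pd, pfa))
    llr = np.sum(u * np.log(pd / pfa) + (1 - u) * np.log((1 - pd) / (1 - pfa)))
    return int(llr + log_prior_ratio > 0.0)

pd = [0.90, 0.80]        # assumed detection probabilities of the two sensors
pfa = [0.10, 0.20]       # assumed false-alarm probabilities
print(fuse_decisions([1, 0], pd, pfa))   # fused decision when sensors disagree
```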

  6. Enhanced research in ground-penetrating radar and multisensor fusion with application to the detection and visualization of buried waste. Final report

    SciTech Connect

    Devney, A.J.; DiMarzio, C.; Kokar, M.; Miller, E.L.; Rappaport, C.M.; Weedon, W.H.

    1996-05-14

    Recognizing the difficulty and importance of the landfill remediation problems faced by DOE, and the fact that no one sensor alone can provide complete environmental site characterization, a multidisciplinary team approach was chosen for this project. The authors have developed a multisensor fusion approach that is suitable for the wide variety of sensors available to DOE, that allows separate detection algorithms to be developed and custom-tailored to each sensor. This approach is currently being applied to the Geonics EM-61 and Coleman step-frequency radar data. High-resolution array processing techniques were developed for detecting and localizing buried waste containers. A soil characterization laboratory facility was developed using a HP-8510 network analyzer and near-field coaxial probe. Both internal and external calibration procedures were developed for de-embedding the frequency-dependent soil electrical parameters from the measurements. Dispersive soil propagation modeling algorithms were also developed for simulating wave propagation in dispersive soil media. A study was performed on the application of infrared sensors to the landfill remediation problem, particularly for providing information on volatile organic compounds (VOCs) in the atmosphere. A dust-emission lidar system is proposed for landfill remediation monitoring. Design specifications are outlined for a system which could be used to monitor dust emissions in a landfill remediation effort. The detailed results of the investigations are contained herein.

  7. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T.; Gibbons, S.J.; Ringdal, F.; Harris, D.B.

    2007-01-30

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst-reviewed contexts:
    • one or more fixed frequency bands should be chosen for each phase tested for;
    • the frequency band providing the most stable parameter estimates varies from site to site, and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site;
    • slowness corrections applied depend strongly on the frequency band chosen;
    • the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain.
    For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  8. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.; Aires, F.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  9. Developing Smart Seismic Arrays: A Simulation Environment, Observational Database, and Advanced Signal Processing

    SciTech Connect

    Harben, P E; Harris, D; Myers, S; Larsen, S; Wagoner, J; Trebes, J; Nelson, K

    2003-09-15

    Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization and in full 3D finite difference modeling as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic

  10. Site Specific Evaluation of Multisensor Capacitance Probes

    NASA Astrophysics Data System (ADS)

    Rowland, R. A.; Guber, A. K.; Pachepsky, Y.; Gish, T. J.; Daughtry, C. S.

    2007-12-01

    Multisensor capacitance probes (MCPs) are widely used for measuring soil water content (SWC) at the field scale. Although manufacturers supply a generic MCP calibration, many researchers recognize that MCPs should be calibrated for specific field conditions. MCP measurements are typically associated with small soil volumes, and are subsequently scaled up to the plot or field scale. Research is needed to understand how representative these measurements are for water monitoring studies that operate with elementary areas from one to tens of square meters. The objectives of this study were: (a) to test the accuracy of SWC field measurements using generic and laboratory MCP calibrations; (b) to test the applicability of a single MCP calibration for SWC measurements at different depths; and (c) to compare the accuracy of two- and three-parameter equations using scaled frequency (SF). Four 1x1 m plots were equipped with MCPs to measure SWC at 9 depths at the OPE3 USDA-ARS research site at Beltsville, MD. Within each plot, three undisturbed soil cores were taken with a 100 cm³ soil auger. SWC sampling was performed on three different dates when soil water contents were distinctly different. To compare MCP measurements with observed SWC, the SF was converted into SWC using: (a) the manufacturer's generic calibration; and (b) a calibration obtained in the laboratory for a mesic Aquic Hapludult soil. Parameters of three different calibration equations were also obtained by fitting the equations to the water content measurements at the plots. This fit was done: (a) for all observations regardless of depth, (b) for observations at each genetic horizon, and (c) for each depth separately. Results show that the manufacturer and laboratory calibrations provided a satisfactory fit to the field-measured SWC at depths of 30, 40 and 50 cm. The fit was about two times less accurate at depths of 10, 20, 60, 70, 80 and 90 cm. A minor improvement was obtained at depths of 10 and 20 cm after
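
    A small sketch of the kind of calibration fitting described above is given below: a three-parameter equation relating scaled frequency to volumetric water content is fitted to co-located probe and reference data by nonlinear least squares. The power-law form and all numbers are assumptions for illustration, not the equations used in the study.

```python
# Fit an assumed three-parameter calibration theta = a * SF**b + c to paired
# scaled-frequency / water-content observations, and report the fit error.
import numpy as np
from scipy.optimize import curve_fit

def calib(sf, a, b, c):
    return a * sf ** b + c

sf = np.linspace(0.2, 0.9, 30)                        # scaled-frequency readings
theta_obs = 0.5 * sf ** 1.3 + 0.02 + np.random.normal(0, 0.01, sf.size)  # synthetic SWC

(a, b, c), _ = curve_fit(calib, sf, theta_obs, p0=[0.4, 1.0, 0.0])
rmse = np.sqrt(np.mean((calib(sf, a, b, c) - theta_obs) ** 2))
print(a, b, c, rmse)
```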

  11. RheoStim: Development of an Adaptive Multi-Sensor to Prevent Venous Stasis

    PubMed Central

    Weyer, Sören; Weishaupt, Fabio; Kleeberg, Christian; Leonhardt, Steffen; Teichmann, Daniel

    2016-01-01

    Chronic venous insufficiency of the lower limbs is often underestimated and, in the absence of therapy, results in increasingly severe complications, including therapy-resistant tissue defects. Therefore, early diagnosis and adequate therapy is of particular importance. External counter pulsation (ECP) therapy is a method used to assist the venous system. The main principle of ECP is to squeeze the inner leg vessels by muscle contractions, which are evoked by functional electrical stimulation. A new adaptive trigger method is proposed, which improves and supplements the current therapeutic options by means of pulse synchronous electro-stimulation of the muscle pump. For this purpose, blood flow is determined by multi-sensor plethysmography. The hardware design and signal processing of this novel multi-sensor plethysmography device are introduced. The merged signal is used to determine the phase of the cardiac cycle, to ensure stimulation of the muscle pump during the filling phase of the heart. The pulse detection of the system is validated against a gold standard and provides a sensitivity of 98% and a false-negative rate of 2% after physical exertion. Furthermore, flow enhancement of the system has been validated by duplex ultrasonography. The results show a highly increased blood flow in the popliteal vein at the knee. PMID:27023544

  13. Recognition Time for Letters and Nonletters: Effects of Serial Position, Array Size, and Processing Order.

    ERIC Educational Resources Information Center

    Mason, Mildred

    1982-01-01

    Three experiments report additional evidence that it is a mistake to account for all interletter effects solely in terms of sensory variables. These experiments attest to the importance of structural variables such as retina location, array size, and ordinal position. (Author/PN)

  14. 2D Array of Far-infrared Thermal Detectors: Noise Measurements and Processing Issues

    NASA Technical Reports Server (NTRS)

    Lakew, B.; Aslam, S.; Stevenson, T.

    2008-01-01

    A magnesium diboride (MgB2) detector 2D array for use in future space-based spectrometers is being developed at GSFC. Expected pixel sensitivities and comparison to current state-of-the-art infrared (IR) detectors will be discussed.

  15. Multisensor monitoring of sea surface state of the coastal zone

    NASA Astrophysics Data System (ADS)

    Lavrova, Olga; Mityagina, Marina; Bocharova, Tatina

    Results of many-year monitoring of the state of the coastal zone based on a multisensor approach are presented. The monitoring is aimed at solving the following tasks: operational mapping of parameters characterizing the state and pollution (coastal, ship and biogenic) of water; analysis of the meteorological state and its effect on the drift and spread of pollutants; study of coastal circulation patterns and their impact on the drift and spread of pollutants; and deriving typical pollution distribution patterns in the coastal zone. Processing and analysis are performed using data in the visual, infrared and microwave ranges from the ERS-2 SAR, Envisat ASAR/MERIS, Terra and Aqua MODIS and NOAA AVHRR instruments. These are complemented with ground data from meteorological stations on the shore and results of satellite data processing from previous periods. The main regions of interest are the Russian sectors of the Black and Azov Seas, the southeastern part of the Baltic Sea, and the northern and central regions of the Caspian Sea. The adjacent coasts are extremely populated and have well-developed industry, agriculture and rapidly growing tourist sectors; the necessity of constant monitoring of the sea state there is obvious. The monitoring activities allow us to accumulate extensive material for the study of hydrodynamic processes in the regions, in particular water circulation. Detailing the occurrence, evolution and drift of small- and meso-scale vortex structures is crucial for understanding the mechanisms determining mixing and circulation processes in the coastal zone. These mechanisms play an important role in the ecological, hydrodynamic and meteorological status of a coastal zone. Special attention is paid to the sea surface state in the Kerch Strait, where a tanker catastrophe took place on November 11, 2007, causing a spillage of over 1.5 thousand tons of heavy oil. The Kerch Strait is characterized by a complex current system with current directions changing to their opposites depending on

  16. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety

    PubMed Central

    Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang

    2016-01-01

    Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main modules, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control. First, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Second, an information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L1 norm. Finally, the electronic throttle opening and automatic braking are controlled by the proposed vehicle reversing control strategy prior to any potential collision, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. PMID:27294931

  19. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large UAVs are typically state owned, whereas small UAVs (SUAVs) may take the form of widely available remote-controlled aircraft. The potential threat of these SUAVs to both the military and the civilian populace has led to research efforts to counter these assets via track, ID, and attack. Difficulties arise from the small size and low radar cross section of these targets when attempting to detect and track them with a single sensor such as radar or video cameras. In addition, clutter objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and the future inclusion of a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.

  20. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, or mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).
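
    One of the matching criteria named above, mutual information, is easy to sketch: the snippet below scores candidate integer shifts of a moving image against a reference by the mutual information of their joint histogram. The exhaustive shift search, bin count and synthetic images are simplifications for illustration; the sub-pixel optimization and wavelet features of the actual algorithms are not shown.

```python
# Mutual-information matching criterion evaluated over small integer shifts.
import numpy as np

def mutual_information(a, b, bins=32):
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                                  # joint distribution
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

def best_integer_shift(ref, moving, max_shift=3):
    """Exhaustive search over small integer shifts; returns the MI-optimal one."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(ref, shifted)
            if mi > best_mi:
                best, best_mi = (dy, dx), mi
    return best, best_mi

ref = np.random.rand(128, 128)
moving = np.roll(ref, (2, -1), axis=(0, 1)) + 0.01 * np.random.rand(128, 128)
print(best_integer_shift(ref, moving))
```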

  1. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    NASA Astrophysics Data System (ADS)

    Due-Hansen, J.; Midtbø, K.; Poppe, E.; Summanwar, A.; Jensen, G. U.; Breivik, L.; Wang, D. T.; Schjølberg-Henriksen, K.

    2012-07-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing has been developed that resulted in a film with a sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications such as intravascular cardiology and gastrointestinal imaging.

  2. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    SciTech Connect

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-06-15

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on the Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. A lumped-element circuit model of the wire array was employed to analyze the electromagnetic energy conversion. The inductance as well as the resistance of the wire array during the Z-pinch process was also investigated. Experimental data indicate that the electromagnetic energy is mainly converted to magnetic and kinetic energy, and that ohmic heating can be neglected before the final stagnation. The kinetic energy can account for the x-ray radiation before the peak power. After stagnation, the electromagnetic energy coupled into the load continues to increase, and the resistance of the load reaches its maximum of 0.6–1.0 Ω in about 10–20 ns.
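
    The energy bookkeeping described above can be illustrated with a short numerical sketch: the electromagnetic energy delivered to the load is the time integral of gap voltage times current, the magnetic term is 0.5 L I², and the remainder is attributed to the implosion kinetic energy when ohmic heating is neglected. The waveforms and inductance ramp below are synthetic placeholders, not Qiangguang data.

```python
# Energy balance sketch for a wire-array load: E_em = ∫ V I dt, with
# V ≈ d(L I)/dt when resistance is neglected, and E_mag = 0.5 L I².
import numpy as np

t = np.linspace(0, 100e-9, 1001)                 # 100 ns current rise
i = 1.5e6 * np.sin(0.5 * np.pi * t / t[-1]) ** 2 # current ramp, peak 1.5 MA
L = 20e-9 + 5e-9 * t / t[-1]                     # inductance grows as the array implodes
v = np.gradient(L * i, t)                        # gap voltage ~ d(L i)/dt

# trapezoidal integration of the delivered electromagnetic energy
e_em = np.sum(0.5 * (v[1:] * i[1:] + v[:-1] * i[:-1]) * np.diff(t))
e_mag = 0.5 * L[-1] * i[-1] ** 2                 # magnetic energy at the end of the ramp
e_kin = e_em - e_mag                             # remainder ~ kinetic energy (ohmic neglected)
print(e_em, e_mag, e_kin)
```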

  3. Simultaneous processing of photographic and accelerator array data from sled impact experiment

    NASA Astrophysics Data System (ADS)

    Ash, M. E.

    1982-12-01

    A Quaternion-Kalman filter model is derived to simultaneously analyze accelerometer array and photographic data from sled impact experiments. Formulas are given for the quaternion representation of rotations, the propagation of dynamical states and their partial derivatives, the observables and their partial derivatives, and the Kalman filter update of the state given the observables. The observables are accelerometer and tachometer velocity data of the sled relative to the track, linear accelerometer array and photographic data of the subject relative to the sled, and ideal angular accelerometer data. The quaternion constraints enter through perfect constraint observations and normalization after a state update. Lateral and fore-aft impact tests are analyzed with FORTRAN IV software written using the formulas of this report.

  4. Low-cost, low-loss microlens arrays fabricated by soft-lithography replication process

    NASA Astrophysics Data System (ADS)

    Kunnavakkam, Madanagopal V.; Houlihan, F. M.; Schlax, M.; Liddle, J. A.; Kolodner, P.; Nalamasu, O.; Rogers, J. A.

    2003-02-01

    This letter describes a soft lithographic approach for fabricating low-cost, low-loss microlens arrays. An accurate negative reproduction (stamp) of an existing high-quality lens surface (master) is made by thermally curing a prepolymer to a silicone elastomer against the master. Fabricating the stamp on a rigid backing plate minimizes distortion of its surface relief. Dispensing a liquid photocurable epoxy loaded to high weight percent with functionalized silica nanoparticles into the features of relief on the mold and then curing this material with UV radiation against a quartz substrate generates a replica lens array. The physical and optical characteristics of the resulting lenses suggest that the approach will be suitable for a range of applications in micro and integrated optics.

  5. AGV trace sensing and processing technology based on RGB color sensor array

    NASA Astrophysics Data System (ADS)

    Xu, Kebao; Zhu, Ping; Wang, Juncheng; Yun, Yuliang

    2009-05-01

    The AGV (Automatic Guided Vehicle) is widely used in manufacturing plants, harbors, docks and logistics because of its accurate automatic tracking. An AGV tracking method that detects trace color with an RGB color sensor is presented here. The DR, DG and DB values of the trace color are obtained by the color sensor, from which a hue value characterizing the trace color can be calculated. Combined with a graph-theory algorithm, the hue value can be used as a parameter for tracking deviation and branch identification to implement shortest-path tracking. In addition, considering the discreteness and uncertainty of a single sensor in detecting trace information, a sensor array is adopted for information fusion to achieve accurate tracking. Compared with tracking based on a single intensity sensor, AGV tracking based on an RGB color sensor array gives much better trace-tracking and branch-identification performance on complex roads.
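
    As an illustration of the hue computation mentioned above, the sketch below uses the standard RGB-to-HSV hue formula; the record does not give the exact expression or the scaling of its DR, DG, DB values, so the conversion and value ranges are assumptions.

    ```python
    def hue_from_rgb(r, g, b):
        """Standard RGB-to-hue conversion in degrees (0-360).
        r, g, b are channel readings on a common scale; the paper's exact
        formula for its DR, DG, DB values may differ."""
        mx, mn = max(r, g, b), min(r, g, b)
        if mx == mn:
            return 0.0                 # achromatic trace: hue undefined, return 0
        d = float(mx - mn)
        if mx == r:
            return (60.0 * (g - b) / d) % 360.0
        if mx == g:
            return 60.0 * (b - r) / d + 120.0
        return 60.0 * (r - g) / d + 240.0
    ```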

  6. High-performance liquid chromatographic determination with photodiode array detection of ellagic acid in fresh and processed fruits.

    PubMed

    Amakura, Y; Okada, M; Tsuji, S; Tonogai, Y

    2000-10-27

    A high-performance liquid chromatographic (HPLC) procedure based on isocratic elution with photodiode array detection has been developed for the simple and rapid determination of ellagic acid (EA) in fresh and processed fruits. The homogenized sample was refluxed with methanol and the extract was then refined using a solid-phase cartridge before HPLC. We analyzed EA in 40 kinds of fresh fruits and 11 kinds of processed fruits by the developed method. EA was found in several berries, feijoa, pineapple and pomegranate. This is the first reported detection of EA in bayberry, feijoa and pineapple.

  7. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
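
    For orientation, the sketch below implements the concentrated deterministic ML DOA search for a uniform linear array, maximizing tr(P_A(θ)R) over a coarse angle grid. It uses the ordinary sample covariance R, whereas the t-f ML method described above would instead form R from spatial time-frequency distribution matrices restricted to selected t-f regions; the ULA geometry, half-wavelength spacing and grid step are assumptions.

    ```python
    import numpy as np
    from itertools import combinations

    def steering(theta_deg, n_sensors, spacing=0.5):
        """Steering vector of a uniform linear array (spacing in wavelengths)."""
        k = np.arange(n_sensors)
        return np.exp(-2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

    def ml_doa(R, n_sensors, n_sources, grid=None):
        """Concentrated deterministic ML: maximize tr(P_A R) over DOA combinations."""
        if grid is None:
            grid = np.arange(-90.0, 90.0 + 1e-9, 2.0)
        best_val, best_doas = -np.inf, None
        for combo in combinations(grid, n_sources):
            A = np.column_stack([steering(t, n_sensors) for t in combo])
            P = A @ np.linalg.pinv(A)      # projection onto the steering subspace
            val = np.real(np.trace(P @ R))
            if val > best_val:
                best_val, best_doas = val, combo
        return best_doas
    ```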

  8. A miniature electronic nose system based on an MWNT-polymer microsensor array and a low-power signal-processing chip.

    PubMed

    Chiu, Shih-Wen; Wu, Hsiang-Chiu; Chou, Ting-I; Chen, Hsin; Tang, Kea-Tiong

    2014-06-01

    This article introduces a power-efficient, miniature electronic nose (e-nose) system. The e-nose system primarily comprises two self-developed chips, a multiple-walled carbon nanotube (MWNT)-polymer based microsensor array, and a low-power signal-processing chip. The microsensor array was fabricated on a silicon wafer by using standard photolithography technology. The microsensor array comprised eight interdigitated electrodes surrounded by SU-8 "walls," which restrained the material-solvent liquid in a defined area of 650 × 760 μm². To achieve a reliable sensor-manufacturing process, we used a two-layer deposition method, coating the MWNTs and polymer film as the first and second layers, respectively. The low-power signal-processing chip included array data acquisition circuits and a signal-processing core. The MWNT-polymer microsensor array can directly connect with array data acquisition circuits, which comprise sensor interface circuitry and an analog-to-digital converter; the signal-processing core consists of memory and a microprocessor. The core executes the program, classifying the odor data received from the array data acquisition circuits. The low-power signal-processing chip was designed and fabricated using the Taiwan Semiconductor Manufacturing Company 0.18-μm 1P6M standard complementary metal oxide semiconductor process. The chip consumes only 1.05 mW of power at supply voltages of 1 and 1.8 V for the array data acquisition circuits and the signal-processing core, respectively. The miniature e-nose system, which used a microsensor array, a low-power signal-processing chip, and an embedded k-nearest-neighbor-based pattern recognition algorithm, was developed as a prototype that successfully recognized the complex odors of tincture, sorghum wine, sake, whisky, and vodka.
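
    The embedded classifier is a k-nearest-neighbour vote over the eight-element response vector of the sensor array; a host-side sketch of that decision rule is shown below (the distance metric, k and the array shapes are placeholders, not the values implemented on the chip).

    ```python
    import numpy as np

    def knn_classify(train_X, train_y, x, k=3):
        """k-nearest-neighbour vote: train_X is (n_samples, n_sensors),
        train_y holds the odor labels, x is one new response vector."""
        dists = np.linalg.norm(train_X - x, axis=1)          # Euclidean distances
        nearest = np.asarray(train_y)[np.argsort(dists)[:k]]
        labels, counts = np.unique(nearest, return_counts=True)
        return labels[np.argmax(counts)]
    ```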

  9. Sparse Downscaling and Adaptive Fusion of Multi-sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2011-12-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multi-sensor precipitation data, including data from global spaceborne active and passive sensors, regional ground-based weather surveillance radars and local rain gauges. Resolution enhancement of remotely sensed rainfall and optimal integration of multi-sensor data promise a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydro-meteorological applications. In this context, new frameworks are proposed for resolution enhancement and multiscale multi-sensor precipitation data fusion, which capitalize on two main observations: (1) sparseness of remotely sensed precipitation fields in appropriately chosen transformed domains (e.g., in the wavelet space), which promotes the use of the newly emerged theory of sparse representation and compressive sensing for resolution enhancement; (2) a conditionally Gaussian Scale Mixture (GSM) parameterization in the wavelet domain, which allows exploiting efficient linear estimation methodologies while capturing the non-Gaussian data structure of rainfall. The proposed methodologies are demonstrated using a data set of coincidental observations of precipitation reflectivity images by the spaceborne precipitation radar (PR) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite and ground-based NEXRAD weather surveillance Doppler radars. Uniqueness and stability of the solution, capture of the non-Gaussian singular structure of rainfall, reduced estimation uncertainty and computational efficiency are the main advantages of the proposed methodologies over the commonly used standard Gaussian techniques.

  10. A multi-sensor scenario for coastal surveillance

    NASA Astrophysics Data System (ADS)

    van den Broek, A. C.; van den Broek, S. P.; van den Heuvel, J. C.; Schwering, P. B. W.; van Heijningen, A. W. P.

    2007-10-01

    Maritime borders and coastal zones are susceptible to threats such as drug trafficking, piracy and the undermining of economic activities. At TNO Defence, Security and Safety, various studies aim at improving situational awareness in a coastal zone. In this study we focus on multi-sensor surveillance of the coastal environment. We present a study on improving classification results for small sea-surface targets using an advanced sensor suite and a scenario in which a small boat is approaching the coast. A next-generation sensor suite mounted on a tower has been defined, consisting of a maritime surveillance and tracking radar system capable of producing range profiles and ISAR imagery of ships, an advanced infrared camera and a laser range profiler. For this suite we have developed a multi-sensor classification procedure, which is used to evaluate the capabilities for recognizing and identifying non-cooperative ships in coastal waters. We have found that the different sensors give complementary information. Each sensor has its own specific distance range in which it contributes most. A multi-sensor approach reduces the number of misclassifications, and reliable classification results are obtained earlier compared to a single-sensor approach.

  11. Multi-sensor management for data fusion in target tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiaokun; Chen, Genshe; Blasch, Erik; Patrick, Jim; Yang, Chun; Kadar, Ivan

    2009-05-01

    Multi-sensor management for data fusion in target tracking concerns sensor assignment and scheduling by managing or coordinating the use of multiple sensor resources. Since a centralized sensor management technique has a crucial limitation, in that failure of the central node causes whole-system failure, a decentralized sensor management (DSM) scheme is increasingly important in modern multi-sensor systems. DSM is afforded in modern systems through increased bandwidth, wireless communication, and enhanced power. However, protocols for system control are needed to manage device access. As game theory offers learning models for distributed allocation of surveillance resources and provides mechanisms to handle the uncertainty of the surveillance area, we propose an agent-based negotiable game theoretic approach for decentralized sensor management (ANGADS). With the decentralized sensor management scheme, sensor assignment occurs locally, and there is no central node, which reduces the risk of whole-system failure. Simulation results for a multi-sensor target-tracking scenario demonstrate the applicability of the proposed approach.

  12. Fiber optic based multisensor to brain neurons in awake animals

    NASA Astrophysics Data System (ADS)

    Shen, Zheng; Lin, Shuzhi

    1995-02-01

    The fiber-optic-based multisensor was made from a quartz optical-fiber capillary with an outer diameter of 250 micrometers and an inner diameter of 10 micrometers, in two configurations: a single capillary, and a capillary in parallel with another piece of optical fiber. A thin tungsten membrane (thickness 1 micrometer) covered the outer surface of the optical fiber or capillary. The metal membrane worked as a microelectrode or an electro-osmosis electrode. The nitrogen laser beam and laser-fluorescent pulses were guided in two ways, through the optical fiber or through the wall of the capillary. The advantage of the single capillary was its small tip and the measurement of different physiological indices at the same site, but the intensity of the laser-fluorescent pulses was diminished by the electro-osmotic flow; the parallel optical fiber and capillary avoided this shortcoming, but its tip was larger than that of a single capillary. The multisensor was used to investigate cognitive brain mechanisms in awake animals by simultaneous recording of neuron activity (neuron firing), neuron metabolic rate (laser-fluorescent pulses), and biochemical events through microelectrophoresis in vivo and field-effect electro-osmosis analysis in situ. The effects of nitric oxide biosynthesis-related compounds on neuron efficiency in the cortex were investigated with the multisensor.

  13. Determination of urine ionic composition with potentiometric multisensor system.

    PubMed

    Yaroshenko, Irina; Kirsanov, Dmitry; Kartsova, Lyudmila; Sidorova, Alla; Borisova, Irina; Legin, Andrey

    2015-01-01

    The ionic composition of urine is a good indicator of a patient's general condition and allows for the diagnosis of certain medical problems such as urolithiasis. Due to environmental factors and malnutrition, the number of registered urinary tract cases continuously increases. Most of the methods currently used for urine analysis are expensive, quite laborious and require skilled personnel. The present work is a feasibility study of a potentiometric multisensor system of 18 ion-selective and cross-sensitive sensors as an analytical tool for determination of urine ionic composition. In total, 136 samples from patients of the Urolithiasis Laboratory and from healthy people were analyzed by the multisensor system as well as by capillary electrophoresis as a reference method. Various chemometric approaches were implemented to relate the data from electrochemical measurements to the reference data. Logistic regression (LR) was applied for classification of samples into healthy and unhealthy, producing reasonable misclassification rates. Projection on Latent Structures (PLS) regression was applied for quantitative analysis of the ionic composition from potentiometric data. Mean relative errors of simultaneous prediction of sodium, potassium, ammonium, calcium, magnesium, chloride, sulfate, phosphate, urate and creatinine from the multisensor system response were in the range 3-13% for independent test sets. This shows good promise for the development of a fast and inexpensive alternative method for urine analysis. PMID:25281140
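
    A minimal sketch of the PLS step is given below, assuming scikit-learn and random placeholder arrays in place of the 136 measured samples (18 sensor potentials per sample, 10 target concentrations); the number of latent variables and the train/test split are illustrative and are not the settings used in the study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(136, 18))      # placeholder sensor potentials
    Y = rng.normal(size=(136, 10))      # placeholder reference concentrations

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=6)  # latent-variable count is a tuning choice
    pls.fit(X_tr, Y_tr)
    Y_hat = pls.predict(X_te)
    rmse = np.sqrt(np.mean((Y_hat - Y_te) ** 2, axis=0))   # per-ion test error
    ```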

  14. Sensor fusion for hand-held multisensor landmine detection

    NASA Astrophysics Data System (ADS)

    Agarwal, Sanjeev; Chander, Venkat S.; Palit, Partha P.; Stanley, Joe; Mitchell, O. Robert

    2001-10-01

    Sensor fusion issues in the streamlined assimilation of multi-sensor information for landmine detection are discussed. In particular, multi-sensor fusion in a hand-held landmine detection system with ground penetrating radar (GPR) and metal detector sensors is investigated. The fusion architecture consists of feature extraction for the individual sensors followed by feed-forward neural network training to learn the feature-space representation of the mine/no-mine classification. A correlation feature from the GPR, and slope and energy features from the metal detector, are used for discrimination. Various fusion strategies are discussed and results compared against each other and against individual sensors using ROC curves for the available multi-sensor data. Both feature-level and decision-level fusion have been investigated. Simple decision-level fusion schemes based on Dempster-Shafer evidence accumulation, soft AND, MIN and MAX are compared. Feature-level fusion using neural network training is shown to provide the best results. However, comparable performance is achieved using decision-level sensor fusion based on Dempster-Shafer accumulation. It is noted that the above simple feed-forward fusion scheme lacks a means to verify detections after a decision has been made. New detection algorithms that are more than anomaly detectors are needed. Preliminary results with features based on independent component analysis (ICA) show promise towards this end.
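
    The decision-level combiners mentioned above can be pictured as follows: MIN, MAX and a product-style "soft AND" over per-sensor mine confidences, plus Dempster's rule of combination on the two-class frame {mine, clear} with an explicit "unknown" mass. These are common textbook forms, not necessarily the exact definitions (in particular of soft AND) used in the paper.

    ```python
    def fuse_min(p_gpr, p_md):
        return min(p_gpr, p_md)        # conservative: both sensors must agree

    def fuse_max(p_gpr, p_md):
        return max(p_gpr, p_md)        # permissive: either sensor can trigger

    def fuse_soft_and(p_gpr, p_md):
        return p_gpr * p_md            # one common "soft AND" (product) choice

    def dempster_two_class(m1, m2):
        """Dempster's rule on {mine, clear}; each m is (m_mine, m_clear, m_unknown)."""
        conflict = m1[0] * m2[1] + m1[1] * m2[0]
        norm = 1.0 - conflict
        mine = (m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]) / norm
        clear = (m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]) / norm
        unknown = (m1[2] * m2[2]) / norm
        return mine, clear, unknown
    ```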

  15. Emerging standards for testing of multisensor mine detectors

    NASA Astrophysics Data System (ADS)

    Dibsdall, Ian M.

    2005-06-01

    The standards relating to testing of metal detectors for demining operations are developing well, including (but not limited to) CEN Working Agreement CWA14747:2003, UNMAS Mine Action Standards and others. However, for developing multisensor mine detectors there is no agreed standard method of testing. ITEP, the International Test and Evaluation Program for Humanitarian Demining, is currently drawing together several nations' experience of testing multisensor mine detectors into a "best practice" document that could be used as the basis for standardised testing. This paper outlines the test methodology used during recent multisensor mine detector tests and discusses the issues that have arisen and lessons learned. In particular, the requirements for suitable targets, careful site preparation, measurement of appropriate environmental factors and methods of maximising useful results with limited resources are highlighted. Most recent tests have used a combined Metal Detector (MD) and Ground Penetrating Radar (GPR), but other sensor systems will be considered. An emerging test methodology is presented, along with an invitation for feedback from other researchers for inclusion into the "best practice" document.

  16. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision-based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second, depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and a shared-memory parallel computer.

  17. Hierarchical multisensor analysis for robotic exploration

    NASA Technical Reports Server (NTRS)

    Eberlein, Susan; Yates, Gigi; Majani, Eric

    1991-01-01

    Robotic vehicles for lunar and Mars exploration will carry an array of complex instruments requiring real-time data interpretation and fusion. The system described here uses hierarchical multiresolution analysis of visible and multispectral images to extract information on mineral composition, texture and object shape. This information is used to characterize the site geology and choose interesting samples for acquisition. Neural networks are employed for many data analysis steps. A decision tree progressively integrates information from multiple instruments and performs goal-driven decision making. The system is designed to incorporate more instruments and data types as they become available.

  18. Real-time processing of fast-scan cyclic voltammetry (FSCV) data using a field-programmable gate array (FPGA).

    PubMed

    Bozorgzadeh, Bardia; Covey, Daniel P; Heidenreich, Byron A; Garris, Paul A; Mohseni, Pedram

    2014-01-01

    This paper reports the hardware implementation of a digital signal processing (DSP) unit for real-time processing of data obtained by fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode (CFM), an electrochemical transduction technique for high-resolution monitoring of brain neurochemistry. Implemented on a field-programmable gate array (FPGA), the DSP unit comprises a decimation filter and an embedded processor to process the oversampled FSCV data and obtain in real time a temporal profile of concentration variation along with a chemical signature to identify the target neurotransmitter. Interfaced with an integrated, FSCV-sensing front-end, the DSP unit can successfully process FSCV data obtained by bolus injection of dopamine in a flow cell as well as electrically evoked, transient dopamine release in the dorsal striatum of an anesthetized rat. PMID:25570384
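
    The decimation stage of such a DSP unit can be pictured with the schematic numpy equivalent below: a boxcar (moving-average) filter followed by downsampling of the oversampled FSCV record. The actual FPGA filter order, decimation factor and fixed-point format are not given in the abstract, so these are placeholders.

    ```python
    import numpy as np

    def decimate_fscv(samples, factor=16):
        """Boxcar-average decimation: average non-overlapping blocks of `factor`
        oversampled FSCV samples, reducing the rate by that factor."""
        samples = np.asarray(samples, dtype=float)
        n = (len(samples) // factor) * factor    # drop any incomplete final block
        return samples[:n].reshape(-1, factor).mean(axis=1)
    ```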

  20. Light absorption processes and optimization of ZnO/CdTe core-shell nanowire arrays for nanostructured solar cells.

    PubMed

    Michallon, Jérôme; Bucci, Davide; Morand, Alain; Zanuccoli, Mauro; Consonni, Vincent; Kaminski-Cachopo, Anne

    2015-02-20

    The absorption processes of extremely thin absorber solar cells based on ZnO/CdTe core-shell nanowire (NW) arrays with square, hexagonal or triangular arrangements are investigated through systematic computations of the ideal short-circuit current density using three-dimensional rigorous coupled wave analysis. The geometrical dimensions are optimized for the optical design of these solar cells: the optimal NW diameter, height and array period are 200 ± 10 nm, 1-3 μm and 350-400 nm for the square arrangement with a CdTe shell thickness of 40-60 nm. The effects of the CdTe shell thickness on the absorption of ZnO/CdTe NW arrays are revealed through the study of two key optical modes: the first confines the light within individual NWs, while the second interacts strongly with the NW arrangement. It is also shown that the reflectivity of the substrate can improve Fabry-Perot resonances within the NWs: the ideal short-circuit current density is increased by 10% for the ZnO/fluorine-doped tin oxide (FTO)/ideal reflector as compared to the ZnO/FTO/glass substrate. Furthermore, the optimized square arrangement absorbs light more efficiently than both the optimized hexagonal and triangular arrangements. Finally, the enhancement factor of the ideal short-circuit current density is calculated to be as high as 1.72 with respect to planar layers, demonstrating the high optical potential of ZnO/CdTe core-shell NW arrays. PMID:25629373
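
    The figure of merit used above, the ideal short-circuit current density, is the elementary charge times the absorbed photon flux integrated over wavelength, assuming every absorbed photon yields one collected carrier. A sketch of that integral is below; the absorptance spectrum (e.g. from the RCWA computation) and the reference solar photon flux (e.g. AM1.5G) must be supplied by the user.

    ```python
    import numpy as np

    Q_E = 1.602176634e-19    # elementary charge [C]

    def ideal_jsc(wavelength_nm, absorptance, photon_flux_per_nm):
        """Ideal short-circuit current density [A/m^2]:
        q * integral over wavelength of A(lambda) * photon flux(lambda),
        with the flux given in photons m^-2 s^-1 nm^-1 (trapezoidal rule)."""
        integrand = absorptance * photon_flux_per_nm
        dw = np.diff(wavelength_nm)
        integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dw)
        return Q_E * integral
    ```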

  1. Integration of a detector array with an optical waveguide structure and applications to signal processing

    NASA Astrophysics Data System (ADS)

    Boyd, J. T.; Ramey, D. A.; Chen, C. L.; Naumaan, A.; Dutta, S.

    1981-08-01

    Both planar thin-film and channel optical waveguides have been integrated with charge-coupled devices (CCDs). Coupling of light from the waveguide region to the detector elements utilizes a smooth and uniformly tapered region of SiO2 to minimize scattering. A CCD transfer inefficiency of 1.0 × 10⁻⁴ is consistently obtained for a number of devices. A channel waveguide array formed in a fan-out pattern is introduced as a means of enhancing focal plane resolution in integrated optical devices using optical waveguide lenses. High spatial resolution can thus be obtained without making detector spacings too small, thus avoiding detector problems with regard to fabrication, crosstalk, linearity, and charge transfer inefficiency. Operation of an integrated optical channel waveguide array-CCD transversal filter is reported. Channel waveguides formed in V-grooves couple directly to the sensor elements of the four-phase, double-polysilicon CCD. Experimental results include a filter transfer function having good agreement with theoretical results. The voltage contrast mode of a scanning electron microscope (SEM) is utilized to observe charge-coupled devices (CCDs) which have been cross-sectioned. A new cross-sectioning technique which uses anisotropic etching to accurately define the axis along which fracture occurs is presented.

  2. Phonon processes in vertically aligned silicon nanowire arrays produced by low-cost all-solution galvanic displacement method

    NASA Astrophysics Data System (ADS)

    Banerjee, Debika; Trudeau, Charles; Gerlein, Luis Felipe; Cloutier, Sylvain G.

    2016-03-01

    The nanoscale engineering of silicon can significantly change its bulk optoelectronic properties to make it more favorable for device integration. Phonon process engineering is one way to enhance inter-band transitions in silicon's indirect band structure alignment. This paper demonstrates phonon localization at the tip of silicon nanowires fabricated by galvanic displacement using wet electroless chemical etching of a bulk silicon wafer. High-resolution Raman micro-spectroscopy reveals that such arrayed structures of silicon nanowires display phonon localization behaviors, which could help their integration into the future generations of nano-engineered silicon nanowire-based devices such as photodetectors and solar cells.

  3. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Choi, Yong; Hong, Key Jo; Kang, Jihoon; Jung, Jin Ho; Huh, Youn Suk; Lim, Hyun Keong; Kim, Sang Su; Kim, Byung-Tae; Chung, Yonghyun

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time-to-digital converters (TDC) have been employed to detect gamma-ray signal arrival times, whereas Anger logic circuits and peak-detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. Compared to PMTs, Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias-voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMTs and Anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4×4 array, pixel size: 3 mm×3 mm) coupled 1:1 with LYSO scintillators (4×4 array, pixel size: 3 mm×3 mm×20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce the number of GAPD channels, and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma-ray signal arrival time, energy and position information together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma-ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma-ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and then transferred to a host computer for coincidence sorting and image reconstruction. In order to

  4. Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling.

    PubMed

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of the gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the factors affecting it were analyzed using the steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated with the random walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have similar performance under static conditions. Under dynamic conditions, the test results showed that the first-order Markov process model could reduce the dynamic errors by 20% more than the random walk model.
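
    A compact sketch of such a virtual-gyro Kalman filter is given below: the state holds the common rate, modelled as a first-order Gauss-Markov process, plus one random-walk bias per gyro, and every time step is updated with all simultaneous gyro readings. The sampling interval, correlation time and noise intensities are placeholders rather than the values identified in the paper.

    ```python
    import numpy as np

    def virtual_gyro_kf(z, dt=0.01, tau=10.0, q_rate=1e-2, q_bias=1e-8, r_meas=1e-2):
        """Fuse an (n_steps, n_gyros) array of gyro outputs into one rate estimate.
        State: [rate, bias_1 .. bias_N]; rate is first-order Gauss-Markov,
        biases are random walks. Noise values are illustrative placeholders."""
        n_steps, n_gyros = z.shape
        nx = 1 + n_gyros
        F = np.eye(nx)
        F[0, 0] = 1.0 - dt / tau                       # Gauss-Markov rate dynamics
        Q = np.diag([q_rate] + [q_bias] * n_gyros) * dt
        H = np.hstack([np.ones((n_gyros, 1)), np.eye(n_gyros)])
        R = r_meas * np.eye(n_gyros)
        x, P = np.zeros(nx), np.eye(nx)
        rate = np.empty(n_steps)
        for k in range(n_steps):
            x = F @ x                                  # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                        # update with all gyros at once
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z[k] - H @ x)
            P = (np.eye(nx) - K @ H) @ P
            rate[k] = x[0]
        return rate
    ```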

  5. True-time-delay transmit/receive optical beam-forming system for phased arrays and other signal processing applications

    NASA Astrophysics Data System (ADS)

    Toughlian, Edward N.; Zamuda, H.; Carter, Charity A.

    1994-06-01

    This paper addresses the problem of dynamic optical processing for the control of phased array antennas. The significant result presented is the demonstration of a continuously variable photonic RF/microwave delay line. Specifically, it is shown that by applying spatial frequency dependent optical phase compensation in an optical heterodyne process, variable RF delay can be achieved over a prescribed frequency band. Experimental results which demonstrate the performance of the delay line with regard to both maximum delay and resolution over a broad bandwidth are presented. Additionally, a spatially integrated optical system is proposed for control of phased array antennas. The integrated system provides mechanical stability, essentially eliminates the drift problems associated with free space optical systems, and can provide high packing density. This approach uses a class of spatial light modulator known as a deformable mirror device and leads to a steerable arbitrary antenna radiation pattern of the true time delay type. Also considered is the ability to utilize the delay line as a general photonic signal processing element in an adaptive (reconfigurable) transversal frequency filter configuration. Such systems are widely applicable in jammer/noise canceling systems, broadband ISDN, spread spectrum secure communications and the like.

  6. Using the internet and multisensor data to map oil traps in south Texas

    SciTech Connect

    Moore, J.; Morgan, K.M.; Donovan, N.; Busbey, A.B.

    1996-12-31

    Hydrocarbon-producing structures such as salt domes, serpentine plugs, drape folds, and growth faults exist throughout a large area of south Texas. Surface expressions of known oil-producing fields associated with these subsurface features were analyzed with multisensor remote sensing data for anomalous drainage patterns, surface lineations, and geobotanical hydrocarbon alteration. Space Shuttle color imagery and radar data were analyzed and downloaded over the Internet from NASA's Earth Observation Laboratory at the Johnson Space Center in Houston, Texas. The Internet access saved considerable time in researching the archives for image selection. Selected Shuttle imagery was processed and enhanced for analysis and comparison to Landsat MSS, TM and radar data taken over the exploration study areas. Examples are presented to illustrate the comparisons made among the different sensor types.

  7. A data fusion algorithm for multi-sensor microburst hazard assessment

    NASA Technical Reports Server (NTRS)

    Wanke, Craig R.; Hansman, R. John

    1994-01-01

    A recursive model-based data fusion algorithm for multi-sensor microburst hazard assessment is described. An analytical microburst model is used to approximate the actual windfield, and a set of 'best' model parameters are estimated from measured winds. The winds corresponding to the best parameter set can then be used to compute alerting factors such as microburst position, extent, and intensity. The estimation algorithm is based on an iterated extended Kalman filter which uses the microburst model parameters as state variables. Microburst state dynamic and process noise parameters are chosen based on measured microburst statistics. The estimation method is applied to data from a time-varying computational simulation of a historical microburst event to demonstrate its capabilities and limitations. Selection of filter parameters and initial conditions is discussed. Computational requirements and datalink bandwidth considerations are also addressed.

  8. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Lab

    NASA Technical Reports Server (NTRS)

    Brewster, Linda L.; Howard, Richard T.; Johnston, A. S.; Carrington, Connie; Mitchell, Jennifer D.; Cryan, Scott P.

    2008-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL

  9. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Brewster, L.; Johnston, A.; Howard, R.; Mitchell, J.; Cryan, S.

    2007-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL

  10. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications.

    PubMed

    Wlodarczyk, Krystian L; Bryce, Emma; Schwartz, Noah; Strachan, Mel; Hutson, David; Maier, Robert R J; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil; Kirk, Katherine; Hand, Duncan P

    2014-02-01

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  11. Primary Dendrite Array: Observations from Ground-Based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra N.; Grugel, Richard N.; Erdman, Robert G.; Poirier, David R.

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST). Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at thermal gradients of 18 K per centimeter (MICAST6) and 28 K per centimeter (MICAST7). Directional solidification involved a growth speed step increase (MICAST6, from 5 to 50 micrometers per second) and a speed decrease (MICAST7, from 20 to 10 micrometers per second). The distribution and morphology of primary dendrites are currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  12. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micron/s) and a speed decrease (MICAST7-from 20 to 10 micron/s). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  14. From spin noise to systematics: stochastic processes in the first International Pulsar Timing Array data release

    NASA Astrophysics Data System (ADS)

    Lentati, L.; Shannon, R. M.; Coles, W. A.; Verbiest, J. P. W.; van Haasteren, R.; Ellis, J. A.; Caballero, R. N.; Manchester, R. N.; Arzoumanian, Z.; Babak, S.; Bassa, C. G.; Bhat, N. D. R.; Brem, P.; Burgay, M.; Burke-Spolaor, S.; Champion, D.; Chatterjee, S.; Cognard, I.; Cordes, J. M.; Dai, S.; Demorest, P.; Desvignes, G.; Dolch, T.; Ferdman, R. D.; Fonseca, E.; Gair, J. R.; Gonzalez, M. E.; Graikou, E.; Guillemot, L.; Hessels, J. W. T.; Hobbs, G.; Janssen, G. H.; Jones, G.; Karuppusamy, R.; Keith, M.; Kerr, M.; Kramer, M.; Lam, M. T.; Lasky, P. D.; Lassus, A.; Lazarus, P.; Lazio, T. J. W.; Lee, K. J.; Levin, L.; Liu, K.; Lynch, R. S.; Madison, D. R.; McKee, J.; McLaughlin, M.; McWilliams, S. T.; Mingarelli, C. M. F.; Nice, D. J.; Osłowski, S.; Pennucci, T. T.; Perera, B. B. P.; Perrodin, D.; Petiteau, A.; Possenti, A.; Ransom, S. M.; Reardon, D.; Rosado, P. A.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Siemens, X.; Smits, R.; Stairs, I.; Stappers, B.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Taylor, S. R.; Theureau, G.; Tiburzi, C.; Toomey, L.; Vallisneri, M.; van Straten, W.; Vecchio, A.; Wang, J.-B.; Wang, Y.; You, X. P.; Zhu, W. W.; Zhu, X.-J.

    2016-05-01

    We analyse the stochastic properties of the 49 pulsars that comprise the first International Pulsar Timing Array (IPTA) data release. We use Bayesian methodology, performing model selection to determine the optimal description of the stochastic signals present in each pulsar. In addition to spin-noise and dispersion-measure (DM) variations, these models can include timing noise unique to a single observing system, or frequency band. We show that the improved radio-frequency coverage and the presence of overlapping data from different observing systems in the IPTA data set enable us to separate both system- and band-dependent effects with much greater efficacy than in the individual pulsar timing array (PTA) data sets. For example, we show that PSR J1643-1224 has, in addition to DM variations, significant band-dependent noise that is coherent between PTAs, which we interpret as coming from time-variable scattering or refraction in the ionized interstellar medium. Failing to model these different contributions appropriately can dramatically alter the astrophysical interpretation of the stochastic signals observed in the residuals. In some cases, the spectral exponent of the spin-noise signal can vary from 1.6 to 4 depending upon the model, which has direct implications for the long-term sensitivity of the pulsar to a stochastic gravitational-wave (GW) background. By using a more appropriate model, however, we can greatly improve a pulsar's sensitivity to GWs. For example, including system- and band-dependent signals in the PSR J0437-4715 data set improves the upper limit on a fiducial GW background by ~60 per cent compared to a model that includes DM variations and spin-noise only.

  15. Using seismic array-processing to enhance observations of PcP waves to constrain lowermost mantle structure

    NASA Astrophysics Data System (ADS)

    Ventosa, S.; Romanowicz, B. A.

    2014-12-01

    The topography of the core-mantle boundary (CMB) and the structure and composition of the D" region are essential to understanding the interaction between the earth's mantle and core. A variety of seismic data-processing techniques have been used to detect and measure travel times and amplitudes of weak short-period teleseismic body-wave phases that interact with the CMB and D", which is crucial for constraining properties of the lowermost mantle at short wavelengths. The major challenges in enhancing these observations are (1) increasing the signal-to-noise ratio of the target phases and (2) isolating them from unwanted neighboring phases. Seismic array processing can address these problems by combining signals from groups of seismometers and exploiting information that allows the coherent signals to be separated from the noise. Here, we focus on the study of the Pacific large low shear-velocity province (LLSVP) and surrounding areas using differential travel times and amplitude ratios of the P and PcP phases, and their depth phases. In particular, we design scale-dependent slowness filters that do not compromise time-space resolution. This is a local delay-and-sum (i.e. slant-stack) approach implemented in the time-scale domain using the wavelet transform to enhance time-space resolution (i.e. reduce array aperture). We group stations from USArray and other nearby networks, and from Hi-Net and F-net in Japan, to define many overlapping local arrays. The aperture of each array varies mainly according to (1) the spatial resolution target and (2) the slowness resolution required to isolate the target phases at each period. Once the target phases are well separated, we measure their differential travel times and amplitude ratios, and we project these to the CMB. In this process, we carefully analyze and, when possible and significant, correct for the main sources of bias, i.e., mantle heterogeneities, earthquake mislocation and intrinsic attenuation. We illustrate our approach in a series of
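
    At its core the approach is a local delay-and-sum (slant stack); a stripped-down, broadband time-domain sketch is shown below, shifting each trace by slowness times offset and averaging. The scale-dependent, wavelet-domain version described above, which preserves time-space resolution, is not attempted here; the sampling interval, offsets and slowness are user-supplied and shifts are rounded to whole samples.

    ```python
    import numpy as np

    def slant_stack(traces, dt, offsets_km, slowness_s_per_km):
        """Delay-and-sum over a local array: traces is (n_traces, n_samples),
        each trace recorded at the corresponding offset (km) from a reference."""
        stack = np.zeros(traces.shape[1])
        for trace, x in zip(traces, offsets_km):
            shift = int(round(slowness_s_per_km * x / dt))   # whole-sample delay
            stack += np.roll(trace, -shift)
        return stack / len(offsets_km)
    ```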

  16. NASA 1990 Multisensor Airborne Campaigns (MACs) for ecosystem and watershed studies

    NASA Technical Reports Server (NTRS)

    Wickland, Diane E.; Asrar, Ghassem; Murphy, Robert E.

    1991-01-01

    The Multisensor Airborne Campaign (MAC) focus within NASA's former Land Processes research program was conceived to achieve the following objectives: to acquire relatively complete, multisensor data sets for well-studied field sites, to add a strong remote sensing science component to ecology-, hydrology-, and geology-oriented field projects, to create a research environment that promotes strong interactions among scientists within the program, and to more efficiently utilize and compete for the NASA fleet of remote sensing aircraft. Four new MACs were conducted in 1990: the Oregon Transect Ecosystem Research (OTTER) project along an east-west transect through central Oregon, the Forest Ecosystem Dynamics (FED) project at the Northern Experimental Forest in Howland, Maine, the MACHYDRO project in the Mahantango Creek watershed in central Pennsylvania, and the Walnut Gulch project near Tombstone, Arizona. The OTTER project is testing a model that estimates the major fluxes of carbon, nitrogen, and water through temperate coniferous forest ecosystems. The focus in the project is on short time-scale (days to a year) variations in ecosystem function. The FED project is concerned with modeling vegetation changes of forest ecosystems using remotely sensed observations to extract biophysical properties of forest canopies. The focus in this project is on long time-scale (decades to millennia) changes in ecosystem structure. The MACHYDRO project is studying the role of soil moisture and its regulating effects on hydrologic processes. The focus of the study is to delineate soil moisture differences within a basin and their changes with respect to evapotranspiration, rainfall, and streamflow. The Walnut Gulch project is focused on the effects of soil moisture on the energy and water balance of arid and semiarid ecosystems and their feedbacks to the atmosphere via thermal forcing.

  17. Multisensor airborne imagery collection and processing onboard small unmanned systems

    NASA Astrophysics Data System (ADS)

    Linne von Berg, Dale; Anderson, Scott A.; Bird, Alan; Holt, Niel; Kruer, Melvin; Walls, Thomas J.; Wilson, Michael L.

    2010-04-01

    FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) is an ONR funded effort to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). This program is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL). FEATHAR has developed and integrated EyePod, a combined long-wave infrared (LWIR) and visible to near infrared (VNIR) optical survey & inspection system, with NuSAR, a combined dual band synthetic aperture radar (SAR) system. These sensors are being tested in conjunction with other ground and airborne sensor systems to demonstrate intelligent real-time cross-sensor cueing and in-air data fusion. Results from test flights of the EyePod and NuSAR sensors will be presented.

  18. Sequential growth of zinc oxide nanorod arrays at room temperature via a corrosion process: application in visible light photocatalysis.

    PubMed

    Iqbal, Danish; Kostka, Aleksander; Bashir, Asif; Sarfraz, Adnan; Chen, Ying; Wieck, Andreas D; Erbe, Andreas

    2014-11-12

    Many photocatalyst systems catalyze chemical reactions under ultraviolet (UV) illumination because of its high photon energies. Activating inexpensive, widely available materials as photocatalysts using the intense visible part of the solar spectrum is more challenging. Here, nanorod arrays of the wide-band-gap semiconductor zinc oxide have been shown to act as photocatalysts for the aerobic photo-oxidation of the organic dye Methyl Orange under illumination with red light, which is normally accessible only to narrow-band-gap semiconductors. The homogeneous, 800-1000-nm-thick ZnO nanorod arrays show substantial light absorption (absorbances >1) throughout the visible spectral range. This absorption is caused by defect levels inside the band gap. Multiple scattering processes by the rods make the nanorods appear black. The dominantly crystalline ZnO nanorod structures grow in the (0001) direction, i.e., with the c-axis perpendicular to the surface of the polycrystalline zinc. The room-temperature preparation route relies on controlled cathodic delamination of a weakly bound polymer coating from metallic zinc, an industrially produced and cheaply available substrate. Cathodic delamination is a sequential synthesis process, because it involves the propagation of a delamination front over the base material. Consequently, arbitrarily large sample surfaces can be nanostructured using this approach.

  19. Focal plane array with modular pixel array components for scalability

    DOEpatents

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  20. Satellite Data Simulator Unit: A Multisensor, Multispectral Satellite Simulator Package

    NASA Technical Reports Server (NTRS)

    Masunaga, Hirohiko; Matsui, Toshihisa; Tao, Wei-Kuo; Hou, Arthur Y.; Kummerow, Christian D.; Nakajima, Teruyuki; Bauer, Peter; Olson, William S.; Sekiguchi, Miho; Nakajima, Teruyuki

    2010-01-01

    Several multisensor simulator packages are being developed by different research groups across the world. Such simulator packages [e.g., COSP, CRTM, ECSIM, RTTO, ISSARS (under development), and SDSU (this article), among others] share overall aims, although some are targeted more on particular satellite programs or specific applications (for research purposes or for operational use) than others. The SDSU, or Satellite Data Simulator Unit, is a general-purpose simulator composed of Fortran 90 codes and applicable to spaceborne microwave radiometers, radars, and visible/infrared imagers including, but not limited to, the sensors listed in Table 1, which shows satellite programs particularly suitable for multisensor data analysis: some are single satellite missions carrying two or more instruments, while others are constellations of satellites flying in formation. The TRMM and A-Train are ongoing satellite missions carrying diverse sensors that observe clouds and precipitation, and will be continued or augmented within the decade to come by future multisensor missions such as the GPM and Earth-CARE. The ultimate goals of these present and proposed satellite programs are not restricted to clouds and precipitation but are to better understand their interactions with atmospheric dynamics/chemistry and feedback to climate. The SDSU's applicability is not technically limited to hydrometeor measurements either, but may be extended to air temperature and humidity observations by tuning the SDSU to sounding channels. As such, the SDSU and other multisensor simulators would potentially contribute to a broad area of climate and atmospheric sciences. The SDSU is not optimized to any particular orbital geometry of satellites. The SDSU is applicable not only to low-Earth orbiting platforms as listed in Table 1, but also to geostationary meteorological satellites. Although no geosynchronous satellite carries microwave instruments at present or in the near future, the SDSU would be

  1. Assessment of bitter taste of pharmaceuticals with multisensor system employing 3 way PLS regression.

    PubMed

    Rudnitskaya, Alisa; Kirsanov, Dmitry; Blinova, Yulia; Legin, Evgeny; Seleznev, Boris; Clapham, David; Ives, Robert S; Saunders, Kenneth A; Legin, Andrey

    2013-04-01

    The application of the potentiometric multisensor system (electronic tongue, ET) for quantification of the bitter taste of structurally diverse active pharmaceutical ingredients (API) is reported. The measurements were performed using a set of bitter substances that had been assessed by a professional human sensory panel and the in vivo rat brief access taste aversion (BATA) model to produce bitterness intensity scores for each substance at different concentrations. The set consisted of eight substances, both inorganic and organic - azelastine, caffeine, chlorhexidine, potassium nitrate, naratriptan, paracetamol, quinine, and sumatriptan. With the aim of enhancing the response of the sensors to the studied APIs, measurements were carried out at different pH levels ranging from 2 to 10, thus promoting ionization of the compounds. This experiment yielded a 3-way data array (samples×sensors×pH levels) from which 3-way PLS regression models were constructed with both human panel and rat model reference data. These models revealed that artificial assessment of bitter taste with the ET in the chosen set of APIs is possible with average relative errors of 16% in terms of human panel bitterness score and 25% in terms of inhibition values from in vivo rat model data. Furthermore, these 3-way PLS models were applied for prediction of the bitterness in blind test samples of a further set of APIs. The results of the prediction were compared with the inhibition values obtained from the in vivo rat model. PMID:23498685
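
    The 3-way calibration described above can be approximated, for illustration, by unfolding the samples × sensors × pH cube into an ordinary two-way matrix and fitting standard PLS. The sketch below uses synthetic numbers, hypothetical array sizes, and scikit-learn rather than a dedicated N-PLS package, so it only illustrates the data layout and the regression step, not the authors' actual 3-way PLS models.

    ```python
    # Minimal sketch (not the authors' code): approximate the 3-way PLS idea by
    # unfolding a samples x sensors x pH data cube and fitting ordinary PLS.
    # A true N-PLS fit would preserve the trilinear structure instead of unfolding.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n_samples, n_sensors, n_ph = 40, 20, 5                     # hypothetical dimensions
    X_cube = rng.normal(size=(n_samples, n_sensors, n_ph))      # ET responses
    y_bitterness = rng.normal(size=n_samples)                   # panel scores (synthetic)

    # Unfold the cube: each sample becomes one long vector (sensors x pH levels).
    X_unfolded = X_cube.reshape(n_samples, n_sensors * n_ph)

    pls = PLSRegression(n_components=3)
    y_pred = cross_val_predict(pls, X_unfolded, y_bitterness, cv=5)
    rmse = np.sqrt(np.mean((y_pred.ravel() - y_bitterness) ** 2))
    print(f"cross-validated RMSE on synthetic data: {rmse:.2f}")
    ```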

  2. Elastomeric inverse moulding and vacuum casting process characterization for the fabrication of arrays of concave refractive microlenses

    NASA Astrophysics Data System (ADS)

    Desmet, L.; Van Overmeire, S.; Van Erps, J.; Ottevaere, H.; Debaes, C.; Thienpont, H.

    2007-01-01

    We present a complete and precise quantitative characterization of the different process steps used in an elastomeric inverse moulding and vacuum casting technique. We use the latter replication technique to fabricate concave replicas from an array of convex thermal reflow microlenses. During the inverse elastomeric moulding we obtain a secondary silicone mould of the original silicone mould in which the master component is embedded. Using vacuum casting, we are then able to cast out of the second mould several optical transparent poly-urethane arrays of concave refractive microlenses. We select ten particular representative microlenses on the original, the silicone moulds and replica sample and quantitatively characterize and statistically compare them during the various fabrication steps. For this purpose, we use several state-of-the-art and ultra-precise characterization tools such as a stereo microscope, a stylus surface profilometer, a non-contact optical profilometer, a Mach-Zehnder interferometer, a Twyman-Green interferometer and an atomic force microscope to compare various microlens parameters such as the lens height, the diameter, the paraxial focal length, the radius of curvature, the Strehl ratio, the peak-to-valley and the root-mean-square wave aberrations and the surface roughness. When appropriate, the microlens parameter under test is measured with several different measuring tools to check for consistency in the measurement data. Although none of the lens samples shows diffraction-limited performance, we prove that the obtained replicated arrays of concave microlenses exhibit sufficiently low surface roughness and sufficiently high lens quality for various imaging applications.

  3. Global Arrays

    2006-02-23

    The Global Arrays (GA) toolkit provides an efficient and portable “shared-memory” programming interface for distributed-memory computers. Each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed dense multi-dimensional arrays, without need for explicit cooperation by other processes. Unlike other shared-memory environments, the GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of the high performance computers and acknowledges that access to a remote portion of the shared data is slower than to the local portion. The locality information for the shared data is available, and a direct access to the local portions of shared data is provided. Global Arrays have been designed to complement rather than substitute for the message-passing programming model. The programmer is free to use both the shared-memory and message-passing paradigms in the same program, and to take advantage of existing message-passing software libraries. Global Arrays are compatible with the Message Passing Interface (MPI).

  4. Investigation of proposed process sequence for the array automated assembly task, phases 1 and 2

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Eskenas, K.

    1980-01-01

    Progress was made on the process sequence for module fabrication. A shift from bonding with a conformal coating to laminating with ethylene vinyl acetate and a glass superstrate is recommended for further module fabrication. The processes retained for the selected process sequence (spin-on diffusion; print and fire aluminum p+ back; clean; print and fire silver front contact; apply tin pad to aluminum back) were evaluated for their cost contribution.

  5. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  6. Fabrication and evaluation of a microspring contact array using a reel-to-reel continuous fiber process

    NASA Astrophysics Data System (ADS)

    Khumpuang, S.; Ohtomo, A.; Miyake, K.; Itoh, T.

    2011-10-01

    In this work, a novel patterning technique for the fabrication of a conductive microspring array as an electrical contact structure directly on a fiber substrate is introduced. Using low-temperature compression from the nanoimprinting technique to generate a gradient depth in the desired pattern on a PEDOT:PSS film, the hair-like structures are released as bimorph microspring cantilevers. The microsprings take the form of stress-engineered cantilevers arranged in rows. The microspring contact array is employed in composing the electrical circuit through a large area of woven textile, and functions as the electrical contact between weft ribbon and warp ribbon. The spring itself has a contact resistance of 480 Ω to the plain PEDOT:PSS-coated ribbon, which shows a promising electrical transfer ability within the limitations of materials employed for reel-to-reel continuous processes. The microspring contact structures enhanced the durability, flexibility and stability of electrical contact in the woven textile beyond those of the ribbons without the microspring. The contact experiment was repeated over 500 times, with the resistance changing by only 20 Ω. Furthermore, to realize the spring structure, CYTOP is used as the releasing layer due to its low adhesive force to the fiber substrate. Moreover, the first result of patterning CYTOP using nanoimprint lithography is included.

  7. Signal Processing of MEMS Gyroscope Arrays to Improve Accuracy Using a 1st Order Markov for Rate Signal Modeling

    PubMed Central

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of the gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using a steady-state covariance analysis. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated with the random walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have similar performance under static conditions. Under dynamic conditions, the test results showed that the first-order Markov process model could reduce the dynamic errors by about 20% more than the random walk model. PMID:22438734
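
    As an illustration of the filtering idea summarized above, the sketch below fuses the readings of several nominally identical gyroscopes with a scalar Kalman filter whose rate model is a first-order Gauss-Markov process. The noise parameters, correlation time, and simulated signal are assumptions for demonstration; this is not the paper's implementation.

    ```python
    # Minimal sketch (assumed parameters, not the paper's code): fuse N gyroscopes
    # observing the same angular rate with a Kalman filter whose rate model is a
    # first-order Gauss-Markov process rather than a random walk.
    import numpy as np

    def fuse_gyro_array(z, dt=0.01, tau=1.0, sigma_rate=0.7, sigma_gyro=0.3):
        """z: (T, N) readings from N gyros observing one rate; returns fused estimate."""
        T, N = z.shape
        phi = np.exp(-dt / tau)                  # first-order Markov transition
        Q = sigma_rate**2 * (1.0 - phi**2)       # keeps the rate's steady-state variance
        H = np.ones((N, 1))                      # every gyro measures the same rate
        R = sigma_gyro**2 * np.eye(N)
        x, P = 0.0, sigma_rate**2
        est = np.zeros(T)
        for k in range(T):
            x, P = phi * x, phi * P * phi + Q    # predict
            S = P * (H @ H.T) + R                # innovation covariance
            K = P * H.T @ np.linalg.inv(S)       # 1 x N Kalman gain
            innov = z[k] - H.ravel() * x
            x = x + (K @ innov).item()           # update state estimate
            P = ((1.0 - K @ H) * P).item()       # update error variance
            est[k] = x
        return est

    # Example: six noisy copies of a slowly varying rate signal.
    rng = np.random.default_rng(1)
    true_rate = np.cumsum(rng.normal(0.0, 0.01, 500))
    readings = true_rate[:, None] + rng.normal(0.0, 0.3, (500, 6))
    fused = fuse_gyro_array(readings)
    print("single-gyro RMSE:", np.std(readings[:, 0] - true_rate).round(3),
          "fused RMSE:", np.std(fused - true_rate).round(3))
    ```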

  8. A Module Experimental Process System Development Unit (MEPSDU). [development of low cost solar arrays

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The technical readiness of a cost-effective process sequence with the potential to produce flat-plate photovoltaic modules meeting the 1986 price goal of $0.70 or less per peak watt was demonstrated. The proposed process sequence was reviewed and laboratory verification experiments were conducted. The preliminary process includes the following features: semicrystalline silicon (10 cm by 10 cm) as the silicon input material; spray-on dopant diffusion source; Al paste BSF formation; spray-on AR coating; electroless Ni plate solder dip metallization; laser scribe edges; K & S tabbing and stringing machine; and laminated EVA modules.

  9. Monitoring changes in behaviour from multi-sensor systems.

    PubMed

    Amor, James D; James, Christopher J

    2014-10-01

    Behavioural patterns are important indicators of health status in a number of conditions and changes in behaviour can often indicate a change in health status. Currently, limited behaviour monitoring is carried out using paper-based assessment techniques. As technology becomes more prevalent and low-cost, there is an increasing movement towards automated behaviour-monitoring systems. These systems typically make use of a multi-sensor environment to gather data. Large data volumes are produced in this way, which poses a significant problem in terms of extracting useful indicators. Presented is a novel method for detecting behavioural patterns and calculating a metric for quantifying behavioural change in multi-sensor environments. The data analysis method is shown and an experimental validation of the method is presented which shows that it is possible to detect the difference between weekdays and weekend days. Two participants are analysed, with different sensor configurations and test environments and in both cases, the results show that the behavioural change metric for weekdays and weekend days is significantly different at 95% confidence level, using the methods presented.

  10. Multi-Sensor Characterization of the Boreal Forest: Initial Findings

    NASA Technical Reports Server (NTRS)

    Reith, Ernest; Roberts, Dar A.; Prentiss, Dylan

    2001-01-01

    Results are presented in an initial a priori knowledge approach toward using complementary multi-sensor multi-temporal imagery in characterizing vegetated landscapes over a site in the Boreal Ecosystem-Atmosphere Study (BOREAS). Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data were segmented using multiple endmember spectral mixture analysis and binary decision tree approaches. Individual date/sensor land cover maps had overall accuracies between 55.0% and 69.8%. The best eight land cover layers from all dates and sensors correctly characterized 79.3% of the cover types. An overlay approach was used to create a final land cover map. An overall accuracy of 71.3% was achieved in this multi-sensor approach, a 1.5% improvement over our most accurate single scene technique, but 8% less than the original input. Black spruce was evaluated to be particularly undermapped in the final map possibly because it was also contained within jack pine and muskeg land coverages.
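
    The abstract mentions an overlay approach for combining the best land cover layers without spelling out the rule; one plausible, deliberately simple reading is a per-pixel majority vote over co-registered class maps, sketched below with toy data.

    ```python
    # Hedged sketch: the exact overlay rule is not given in the abstract; a per-pixel
    # majority vote over co-registered class maps is one simple reading of "overlay".
    import numpy as np

    def majority_overlay(layers):
        """layers: (n_layers, rows, cols) integer class maps on a common grid."""
        stack = np.asarray(layers)
        n_classes = stack.max() + 1
        # Count votes per class at every pixel, then take the most frequent class.
        votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
        return votes.argmax(axis=0)

    # Toy example with three 2x2 class maps (classes 0-2).
    maps = np.array([[[0, 1], [2, 2]],
                     [[0, 1], [1, 2]],
                     [[0, 2], [1, 2]]])
    print(majority_overlay(maps))   # [[0 1] [1 2]]; ties resolve to the lower label
    ```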

  11. Flat-plate solar array project process development area process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Three sets of samples were laser processed and then cell processed. The laser processing was carried out on P-type and N-type web at laser power levels from 0.5 joule/sq cm to 2.5 joule/sq cm. Six different liquid dopants were tested (3 phosphorus dopants, 2 boron dopants, 1 aluminum dopant). The laser processed web strips were fabricated into solar cells immediately after laser processing and after various annealing cycles. Spreading resistance measurements made on a number of these samples indicate that the N(+)P (phosphorus doped) junction is approx. 0.2 micrometers deep and suitable for solar cells. However, the P(+)N (or P(+)P) junction is very shallow ( 0.1 micrometers) with a low surface concentration and resulting high resistance. Due to this effect, the fabricated cells are of low efficiency. The maximum efficiency attained was 9.6% on P-type web after a 700 C anneal. The main reason for the low efficiency was a high series resistance in the cell due to a high resistance back contact.

  12. Coal liquefaction process streams characterization and evaluation: High performance liquid chromatography (HPLC) of coal liquefaction process streams using normal-phase separation with uv diode array detection

    SciTech Connect

    Clifford, D.J.; McKinney, D.E.; Hou, Lei; Hatcher, P.G.

    1994-01-01

    This study demonstrated the considerable potential of using two-dimensional, high performance liquid chromatography (HPLC) with normal-phase separation and ultraviolet (UV) diode array detection for the examination of filtered process liquids and the 850°F⁻ distillate materials derived from direct coal liquefaction process streams. A commercially available HPLC column (Hypersil Green PAH-2) provided excellent separation of the complex mixture of polynuclear aromatic hydrocarbons (PAHs) found in coal-derived process streams. Some characteristics of the samples delineated by separation could be attributed to processing parameters. Mass recovery of the process-derived samples was low (5-50 wt %). Penn State believes, however, that improved recovery can be achieved. High resolution mass spectrometry and gas chromatography/mass spectrometry (GC/MS) also were used in this study to characterize the samples and the HPLC fractions. The GC/MS technique was used to preliminarily examine the GC-elutable portion of the samples. The GC/MS data were compared with the data from the HPLC technique. The use of an ultraviolet detector in the HPLC work precludes detecting the aliphatic portion of the sample. The GC/MS allowed for identification and quantification of that portion of the samples. Further development of the 2-D HPLC analytical method as a process development tool appears justified based on the results of this project.

  13. An Approach to Optimize the Fusion Coefficients for Land Cover Information Enhancement with Multisensor Data

    NASA Astrophysics Data System (ADS)

    Garg, Akanksha; Brodu, Nicolas; Yahia, Hussein; Singh, Dharmendra

    2016-04-01

    This paper explores a novel data fusion method that applies a machine learning approach to the optimally weighted fusion of multisensor data, with the goal of extracting the maximum information about any land cover. A considerable amount of research has been carried out on multisensor data fusion, but obtaining an optimal fusion for the enhancement of land cover information with arbitrarily chosen weights remains ambiguous. There is therefore a need for a land cover monitoring system that can provide the maximum information about the land cover, which is generally not possible with single-sensor data, and for techniques by which the information in multisensor data can be utilized optimally. Machine learning is one of the best ways to optimize this type of information. In this paper, the weights assigned to each sensor's data in the fusion have been critically analyzed, and the fusion is observed to be quite sensitive to them. Different combinations of weights have therefore been tested exhaustively in order to develop a relationship between the weights and the classification accuracy of the fused data. This relationship can be optimized through machine learning techniques such as the SVM (Support Vector Machine). In the present study, the experiment has been carried out for PALSAR (Phased Array L-Band Synthetic Aperture RADAR) and MODIS (Moderate Resolution Imaging Spectroradiometer) data. PALSAR provides fully polarimetric data with HH, HV and VV polarizations at good spatial resolution (25 m), while NDVI (Normalized Difference Vegetation Index), computed from the Red and NIR bands of freely available MODIS data at 250 m resolution, is a good indicator of vegetation. First, the resolution of the NDVI was enhanced from 250 m to 25 m (a factor of 10) using a modified discrete wavelet transform (DWT) to bring it to the same scale as PALSAR. The differently polarized PALSAR data (HH, HV, VV) were then fused with the resolution-enhanced NDVI
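
    As a rough illustration of relating fusion weights to classification accuracy, the sketch below fuses two co-registered layers with a weight pair (w, 1-w) and scores each fusion with a cross-validated SVM. The synthetic "PALSAR" and "NDVI" features and the simple linear fusion rule are assumptions for illustration, not the authors' processing chain.

    ```python
    # Hedged sketch (synthetic data): fuse a PALSAR-like backscatter layer and an
    # NDVI-like layer with weights (w, 1-w) and score each fusion by the
    # cross-validated accuracy of an SVM land cover classifier.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_pixels = 600
    labels = rng.integers(0, 3, n_pixels)                 # 3 hypothetical cover classes
    palsar = labels + rng.normal(0, 0.8, n_pixels)        # stand-in HV backscatter
    ndvi = labels + rng.normal(0, 0.5, n_pixels)          # stand-in 25 m NDVI

    best = (None, -1.0)
    for w in np.linspace(0.0, 1.0, 11):
        fused = (w * palsar + (1.0 - w) * ndvi).reshape(-1, 1)
        acc = cross_val_score(SVC(kernel="rbf"), fused, labels, cv=5).mean()
        if acc > best[1]:
            best = (w, acc)
    print(f"best PALSAR weight {best[0]:.1f}, accuracy {best[1]:.2f}")
    ```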

  14. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    NASA Technical Reports Server (NTRS)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ≈1-5 µm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 µm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  15. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory shows that the technology has high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. The paper focuses on an image capturing and processing system for this optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build the image processing core hardware platform around TI's high-performance TMS320DM642 DSP chip and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power-consumption CMOS sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver based on TI's class-mini driver, and the network output program based on the NDK kit, for image capturing, processing, and transmission. The experiments show that the system achieves high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.

  16. Multisensor image fusion guidelines in remote sensing

    NASA Astrophysics Data System (ADS)

    Pohl, C.

    2016-04-01

    Remote sensing delivers multimodal and -temporal data from the Earth's surface. In order to cope with these multidimensional data sources and to make the most of them, image fusion is a valuable tool. It has developed over the past few decades into a usable image processing technique for extracting information of higher quality and reliability. As more sensors and advanced image fusion techniques have become available, researchers have conducted a vast amount of successful studies using image fusion. However, the definition of an appropriate workflow prior to processing the imagery requires knowledge in all related fields - i.e. remote sensing, image fusion and the desired image exploitation processing. From the findings of this research it can be seen that the choice of the appropriate technique, as well as the fine-tuning of the individual parameters of this technique, is crucial. There is still a lack of strategic guidelines due to the complexity and variability of data selection, processing techniques and applications. This paper gives an overview on the state-of-the-art in remote sensing image fusion including sensors and applications. Putting research results in image fusion from the past 15 years into a context provides a new view on the subject and helps other researchers to build their innovation on these findings. Recommendations of experts help to understand further needs to achieve feasible strategies in remote sensing image fusion.

  17. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    NASA Astrophysics Data System (ADS)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  18. Field programmable gate array processing for an improved low-light-level imaging system with higher detection sensibility

    NASA Astrophysics Data System (ADS)

    Tang, Hongying; Yu, Zhengtao

    2014-05-01

    The method that employs frame accumulation and a shaped function is effective in low-light-level imaging. However, it has the drawbacks of lower imaging speed and complex operation. To optimize the method, we present the design of an improved low-light-level imaging system with higher detection sensitivity. The design is developed specifically for faster imaging speed based on field programmable gate arrays. It features the use of a least-squares algorithm and an illumination, varied as a saw-tooth wave, applied to the image sensor. By manipulating the video signal in synchronous dynamic random access memory, a low-light-level image that was previously undetectable can be estimated. The design simplifies the imaging process, doubles the imaging speed, and makes the system suitable for long-range imaging.
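
    One hedged reading of the description above is that, with the scene driven by a known saw-tooth illumination sequence, each pixel's response can be recovered by least squares over the accumulated frames. The sketch below illustrates that reading on synthetic data; the actual FPGA design and its shaped-function details are not reproduced here.

    ```python
    # Hedged, illustrative reading only: recover a weak per-pixel response by least
    # squares of accumulated frames against a known saw-tooth illumination drive.
    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, h, w = 128, 32, 32
    illum = np.mod(np.arange(n_frames), 8) / 7.0             # saw-tooth drive, 0..1
    scene = rng.random((h, w)) * 0.1                          # weak reflectance map
    frames = illum[:, None, None] * scene + rng.normal(0, 0.05, (n_frames, h, w))

    # Per-pixel least squares of frame intensity against the known illumination.
    A = np.stack([illum, np.ones(n_frames)], axis=1)          # slope + offset model
    coef, *_ = np.linalg.lstsq(A, frames.reshape(n_frames, -1), rcond=None)
    estimate = coef[0].reshape(h, w)                           # recovered scene estimate
    print("correlation with truth:",
          np.corrcoef(estimate.ravel(), scene.ravel())[0, 1].round(2))
    ```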

  19. Automating the design of image processing pipelines for novel color filter arrays: local, linear, learned (L3) method

    NASA Astrophysics Data System (ADS)

    Tian, Qiyuan; Lansel, Steven; Farrell, Joyce E.; Wandell, Brian A.

    2014-03-01

    The high density of pixels in modern color sensors provides an opportunity to experiment with new color filter array (CFA) designs. A significant bottleneck in evaluating new designs is the need to create demosaicking, denoising and color transform algorithms tuned for the CFA. To address this issue, we developed a method (local, linear, learned, or L3) for automatically creating an image processing pipeline. In this paper we describe the L3 algorithm and illustrate how we created a pipeline for a CFA organized as a 2×2 RGB/W block containing a clear (W) pixel. Under low light conditions, the L3 pipeline developed for the RGB/W CFA produces images that are superior to those from a matched Bayer RGB sensor. We also use L3 to learn pipelines for other RGB/W CFAs with different spatial layouts. The L3 algorithm shortens the development time for producing a high quality image pipeline for novel CFA designs.
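
    The core of the L3 idea, learning a local linear transform from sensor patches to target output values, can be illustrated with a regularized least-squares fit as below. The patch size, ridge term, and synthetic training data are assumptions for illustration; the published L3 pipeline additionally classifies pixels by CFA position and response level before learning per-class kernels.

    ```python
    # Minimal sketch of the "local, linear, learned" idea (not the published L3 code):
    # for one pixel class, learn a linear kernel that maps a 5x5 sensor patch to the
    # target RGB value at the patch centre by regularized least squares.
    import numpy as np

    def learn_kernel(patches, targets, reg=1e-3):
        """patches: (n, 25) raw CFA patches; targets: (n, 3) ground-truth RGB."""
        X = np.hstack([patches, np.ones((patches.shape[0], 1))])   # affine term
        # Ridge-regularized normal equations: W = (X'X + reg*I)^-1 X'Y
        W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ targets)
        return W                                                    # (26, 3)

    def apply_kernel(patches, W):
        X = np.hstack([patches, np.ones((patches.shape[0], 1))])
        return X @ W

    # Toy training data standing in for simulated sensor/scene pairs.
    rng = np.random.default_rng(0)
    train_patches = rng.random((2000, 25))
    true_W = rng.random((26, 3))
    train_rgb = np.hstack([train_patches, np.ones((2000, 1))]) @ true_W
    W = learn_kernel(train_patches, train_rgb)
    print(np.allclose(apply_kernel(train_patches, W), train_rgb, atol=1e-2))
    ```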

  20. Liquid Chromatography-diode Array Detector-electrospray Mass Spectrometry and Principal Components Analyses of Raw and Processed Moutan Cortex

    PubMed Central

    Deng, Xian-Mei; Yu, Jiang-Yong; Ding, Meng-Jin; Zhao, Ming; Xue, Xing-Yang; Che, Chun-Tao; Wang, Shu-Mei; Zhao, Bin; Meng, Jiang

    2016-01-01

    Background: Raw Moutan Cortex (RMC) is derived from the root bark of Paeonia suffruticosa, and Processed Moutan Cortex (PMC) is obtained from RMC by a stir-frying process. They are indicated for different pharmacodynamic actions in traditional Chinese medicine and have been used in China and other Asian countries for thousands of years. Objective: To establish a method to study RMC and PMC, revealing their different chemical compositions by fingerprint, qualitative, and quantitative approaches. Materials and Methods: High-performance liquid chromatography coupled with diode array detection and electrospray mass spectrometry (HPLC-DAD-ESIMS) was used for the analysis. The analytes were separated on an Ultimate TM XB-C18 analytical column (250 mm × 4.6 mm, 5.0 μm) with a gradient elution program using a mobile phase consisting of acetonitrile and 0.1% (v/v) formic acid in water. The flow rate, injection volume, detection wavelength, and column temperature were set at 1.0 mL/min, 10 μL, 254 nm, and 30°C, respectively. In addition, principal components analysis and tests of significance were applied in the data analysis. Results: The results clearly showed a significant difference between RMC and PMC, indicating significant changes in their chemical compositions before and after the stir-frying process. Conclusion: HPLC-DAD-ESIMS coupled with chemometrics analysis could be used for comprehensive quality evaluation of raw and processed Moutan Cortex. SUMMARY The experiment studied RMC and PMC by HPLC-DAD-ESIMS coupled with chemometrics analysis. The fingerprint, qualitative, and quantitative results all clearly showed significant changes in chemical composition before and after the stir-frying process. Abbreviations used: HPLC-DAD-ESIMS: High-performance Liquid Chromatography-Diode Array Detector-Electrospray Mass Spectrometry, RMC: Raw Moutan Cortex, PMC: Processed Moutan Cortex, TCM: Traditional Chinese medicine
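
    The principal components analysis step can be illustrated as below on synthetic chromatographic fingerprints standing in for the aligned HPLC-DAD traces; it simply shows how raw and processed groups could separate along the leading components, and uses neither the study's data nor its preprocessing.

    ```python
    # Hedged sketch (synthetic fingerprints, not the study's data): score raw and
    # processed samples by PCA of their aligned chromatographic fingerprints.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_points = 400                                                  # points per chromatogram
    raw = rng.normal(0, 0.05, (10, n_points));  raw[:, 120:140] += 1.0        # marker peak
    processed = rng.normal(0, 0.05, (10, n_points)); processed[:, 120:140] += 0.3

    X = np.vstack([raw, processed])
    scores = PCA(n_components=2).fit_transform(X)                   # PCA centers internally
    print("mean PC1 score, raw vs processed:",
          scores[:10, 0].mean().round(2), scores[10:, 0].mean().round(2))
    ```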

  1. Reaction efficiency of diffusion-controlled processes on finite aperiodic planar arrays. II. Potential effects

    NASA Astrophysics Data System (ADS)

    Garza-López, Roberto A.; Brzezinski, Jack; Low, Daniel; Gomez, Ulysses; Raju, Swaroop; Ramirez, Craig; Kozak, John J.

    2009-08-01

    We continue our study of diffusion-reaction processes on finite aperiodic lattices, viz., the Penrose lattice and a Girih tiling. Focusing on bimolecular reactions, we mobilize the theory of finite Markov processes to document the effect of attractive forces on the reaction efficiency. Considering both a short-range square-well potential and a longer-range 1/r^S (S = 4, 6) potential, we find that irreversible reactive encounters between reactants on a Girih platelet are kinetically advantaged relative to processes on a Penrose platelet. This result generalizes the conclusion reached in our earlier study [Roberto A. Garza-López, Aaron Kaufman, Reena Patel, Joseph Chang, Jack Brzezinski, John J. Kozak, Chem. Phys. Lett. 459 (2008) 137] where entropic factors (only) were assessed.
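
    The finite-Markov-process machinery behind such reaction-efficiency calculations reduces to a linear solve: for the hopping matrix Q restricted to non-trap sites, the mean number of steps to absorption is n = (I - Q)^(-1) · 1. The sketch below applies this to a small ring lattice with one trap; the Penrose and Girih platelets of the paper are not reconstructed here.

    ```python
    # Hedged sketch of the underlying finite-Markov calculation: mean walk length to a
    # single deep trap on a toy ring lattice (not the paper's aperiodic platelets).
    import numpy as np

    def mean_walk_length(adjacency, trap):
        """adjacency: (N, N) 0/1 symmetric matrix; trap: index of the absorbing site."""
        N = adjacency.shape[0]
        P = adjacency / adjacency.sum(axis=1, keepdims=True)   # unbiased nearest-neighbour hops
        keep = [i for i in range(N) if i != trap]
        Q = P[np.ix_(keep, keep)]                               # transient-to-transient block
        n = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
        return n.mean()                                         # average over starting sites

    # 12-site ring with a trap at site 0.
    N = 12
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
    print(f"mean walk length: {mean_walk_length(A, trap=0):.2f}")
    ```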

  2. Optimizing laser beam profiles using micro-lens arrays for efficient material processing: applications to solar cells

    NASA Astrophysics Data System (ADS)

    Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij

    2009-02-01

    High-power laser sources are used in various production tools for microelectronic products and solar cells, including annealing, lithography, edge isolation, dicing, and patterning applications. Besides the right choice of the laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of great importance for the right processing speed, quality, and yield. For industrial applications, an adequate understanding of the physics of the light-matter interaction behind the process is equally important. Simulations of the tool performance carried out in advance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own software, founded on the Maxwell equations, taking into account all important physical aspects of the laser-based process: the light source, the beam-shaping optical system, and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optics beam-shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles, and performance results are presented with a special emphasis on resilience, cost reduction, and process reliability.

  3. Coherent-subspace array processing based on wavelet covariance: an application to broad-band, seismo-volcanic signals

    NASA Astrophysics Data System (ADS)

    Saccorotti, G.; Nisii, V.; Del Pezzo, E.

    2008-07-01

    Long-Period (LP) and Very-Long-Period (VLP) signals are the most characteristic seismic signature of volcano dynamics, and provide important information about the physical processes occurring in magmatic and hydrothermal systems. These events are usually characterized by sharp spectral peaks, which may span several frequency decades, by emergent onsets, and by a lack of clear S-wave arrivals. These two latter features make both signal detection and location a challenging task. In this paper, we propose a processing procedure based on Continuous Wavelet Transform of multichannel, broad-band data to simultaneously solve the signal detection and location problems. Our method consists of two steps. First, we apply a frequency-dependent threshold to the estimates of the array-averaged WCO in order to locate the time-frequency regions spanned by coherent arrivals. For these data, we then use the time-series of the complex wavelet coefficients for deriving the elements of the spatial Cross-Spectral Matrix. From the eigenstructure of this matrix, we eventually estimate the kinematic signals' parameters using the MUltiple SIgnal Characterization (MUSIC) algorithm. The whole procedure greatly facilitates the detection and location of weak, broad-band signals, in turn avoiding the time-frequency resolution trade-off and frequency leakage effects which affect conventional covariance estimates based upon Windowed Fourier Transform. The method is applied to explosion signals recorded at Stromboli volcano by either a short-period, small aperture antenna, or a large-aperture, broad-band network. The LP (0.2 < T < 2s) components of the explosive signals are analysed using data from the small-aperture array and under the plane-wave assumption. In this manner, we obtain a precise time- and frequency-localization of the directional properties for waves impinging at the array. We then extend the wavefield decomposition method using a spherical wave front model, and analyse the VLP
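
    The MUSIC step itself, scanning a slowness grid against the noise subspace of the array cross-spectral matrix, can be sketched in its conventional narrow-band form as below. The wavelet-domain covariance estimation of the paper is replaced here by a plain snapshot covariance, and the array geometry, frequency, and slowness grid are toy assumptions.

    ```python
    # Hedged sketch of the MUSIC stage (conventional narrow-band form, not the
    # wavelet-domain implementation of the paper): eigen-decompose the cross-spectral
    # matrix and scan plane-wave slowness trials against the noise subspace.
    import numpy as np

    def music_slowness(snapshots, coords, freq, s_grid, n_sources=1):
        """snapshots: (n_sensors, n_snap) complex data at one frequency;
        coords: (n_sensors, 2) positions [m]; s_grid: (n_s, 2) slowness trials [s/m]."""
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]      # cross-spectral matrix
        w, V = np.linalg.eigh(R)                                     # ascending eigenvalues
        En = V[:, : R.shape[0] - n_sources]                          # noise subspace
        spectrum = []
        for s in s_grid:
            a = np.exp(-2j * np.pi * freq * coords @ s)              # plane-wave steering vector
            spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
        return np.array(spectrum)

    # Toy example: 5-sensor array, one plane wave plus noise.
    rng = np.random.default_rng(0)
    coords = rng.uniform(-200, 200, (5, 2))
    freq, s_true = 1.0, np.array([0.5e-3, 0.2e-3])                   # 1 Hz, slowness in s/m
    delays = coords @ s_true
    snaps = np.exp(2j * np.pi * freq * (rng.uniform(0, 1, (1, 200)) - delays[:, None]))
    snaps += 0.1 * (rng.normal(size=snaps.shape) + 1j * rng.normal(size=snaps.shape))
    grid = np.array([[sx, sy] for sx in np.linspace(-1e-3, 1e-3, 21)
                              for sy in np.linspace(-1e-3, 1e-3, 21)])
    best = grid[np.argmax(music_slowness(snaps, coords, freq, grid))]
    print("estimated slowness [s/m]:", best)
    ```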

  4. Processing of translational and rotational motions of surface waves: performance analysis and applications to single sensor and to array measurements

    NASA Astrophysics Data System (ADS)

    Maranò, Stefano; Fäh, Donat

    2014-01-01

    The analysis of rotational seismic motions has received considerable attention in the last years. Recent advances in sensor technologies allow us to measure directly the rotational components of the seismic wavefield. Today this is achieved with improved accuracy and at an affordable cost. The analysis and the study of rotational motions are, to a certain extent, less developed than other aspects of seismology due to the historical lack of instrumental observations. This is due to both the technical challenges involved in measuring rotational motions and to the widespread belief that rotational motions are insignificant. This paper addresses the joint processing of translational and rotational motions from both the theoretical and the practical perspectives. Our attention focuses on the analysis of motions of both Rayleigh waves and Love waves from recordings of single sensors and from an array of sensors. From the theoretical standpoint, analysis of Fisher information (FI) allows us to understand how the different measurement types contribute to the estimation of quantities of geophysical interest. In addition, we show how rotational measurements resolve ambiguity on parameter estimation in the single sensor setting. We quantify the achievable estimation accuracy by means of Cramér-Rao bound (CRB). From the practical standpoint, a method for the joint processing of rotational and translational recordings to perform maximum likelihood (ML) estimation is presented. The proposed technique estimates parameters of Love waves and Rayleigh waves from single sensor or array recordings. We support and illustrate our findings with a comprehensive collection of numerical examples. Applications to real recordings are also shown.

  5. An Array Processing Theory of Memory, Thought, and Behavior Patterning: A Radically Reconstructive View.

    ERIC Educational Resources Information Center

    Allison, Dennis J.

    A theory of memory is introduced, which seeks to respond to the shortcomings of existing theories based on metaphors. Memory is presented as a mechanism, a comparison process in which information held in some form of immediate storage (whether based on perception or previous cognition or both) is compared to previously stored long-term storage.…

  6. Multi-sensor Evolution Analysis system: how WCS/WCPS technology supports real time exploitation of geospatial data

    NASA Astrophysics Data System (ADS)

    Natali, Stefano; Mantovani, Simone; Folegani, Marco; Barboni, Damiano

    2014-05-01

    EarthServer is a European Framework Program project that aims at developing and demonstrating the usability of open standards (OGC and W3C) in the management of multi-source, any-size, multi-dimensional spatio-temporal data - in short: "Big Earth Data Analytics". In the third and last year of the EarthServer project, the Climate Data Service lighthouse application has been released in its full, consolidated mode. The Multi-sensor Evolution Analysis (MEA) system, the geospatial data analysis tool empowered with OGC standards, has been adopted to handle data manipulation and visualization; the Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) are used to access and process ESA, NASA and third-party products. Tens of terabytes of full-mission, multi-sensor, multi-resolution, multi-projection and cross-domain coverages are already available to user interest groups for Land, Ocean and Atmosphere products. The MEA system is available at https://mea.eo.esa.int. During the live demo, typical test cases implemented by user interest groups within the EarthServer and ESA Image Information Mining projects will be shown, with special emphasis on the comparison of MACC Reanalysis and ESA CCI products.

  7. Microfluidic chemical processing with on-chip washing by deterministic lateral displacement arrays with separator walls

    PubMed Central

    Chen, Yu; D'Silva, Joseph; Austin, Robert H.; Sturm, James C.

    2015-01-01

    We describe a microfluidic device for on-chip chemical processing, such as staining, and subsequent washing of cells. The paper introduces “separator walls” to increase the on-chip incubation time and to improve the quality of washing. Cells of interest are concentrated into a treatment stream of chemical reagents at the first separator wall for extended on-chip incubation without causing excess contamination at the output due to diffusion of the unreacted treatment chemicals, and then are directed to the washing stream before final collections. The second separator wall further reduces the output contamination from diffusion to the washing stream. With this approach, we demonstrate on-chip leukocyte staining with Rhodamine 6G and washing. The results suggest that other conventional biological and analytical processes could be replaced by the proposed device. PMID:26396659

  8. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined and equipment design and specification work was completed. SAMICS cost analysis work accelerated, format A's were prepared and computer simulations completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.

  9. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics for spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming when the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from the high-frame-rate, high-resolution imagery and then used as a basis for temporal upsampling of the slower-frame-rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new flow vector estimates using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, ranging from a high-speed visible camera to a coarser-resolution LWIR camera.
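
    For the temporal-interpolation step, a minimal sketch of flow-based upsampling is given below: a dense optical flow field is estimated between two frames and the earlier frame is back-warped a fraction of the way along the flow vectors. The Farneback flow routine, the crude single-direction warp, and the synthetic moving square are assumptions for illustration, not the report's processing chain.

    ```python
    # Hedged sketch of flow-based temporal upsampling (illustrative approximation only).
    import cv2
    import numpy as np

    def interpolate_frame(frame0, frame1, alpha):
        """Return an approximate frame at fractional time alpha in [0, 1]."""
        flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = frame0.shape
        grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                     np.arange(h, dtype=np.float32))
        # Crude backward warp: sample frame0 a fraction alpha along the flow vectors.
        map_x = grid_x - alpha * flow[..., 0]
        map_y = grid_y - alpha * flow[..., 1]
        return cv2.remap(frame0, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    # Synthetic example: a bright square translating 10 pixels between two frames.
    frame0 = np.zeros((128, 128), np.uint8); frame0[40:60, 40:60] = 255
    frame1 = np.zeros((128, 128), np.uint8); frame1[40:60, 50:70] = 255
    mid = interpolate_frame(frame0, frame1, alpha=0.5)
    cols = np.flatnonzero(mid.max(axis=0) > 127)
    print("bright columns in interpolated frame:", cols.min(), "-", cols.max())
    ```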

  10. Multisensor cargo bay fire detection system

    NASA Astrophysics Data System (ADS)

    Snyder, Brian L.; Anderson, Kaare J.; Renken, Christopher H.; Socha, David M.; Miller, Mark S.

    2004-08-01

    Current aircraft cargo bay fire detection systems are generally based on smoke detection. Smoke detectors in modern aircraft are predominately photoelectric particle detectors that reliably detect smoke, but also detect dust, fog, and most other small particles. False alarms caused by these contaminants can be very costly to the airlines because they can cause flights to be diverted needlessly. To minimize these expenses, a new approach to cargo bay fire detection is needed. This paper describes a novel fire detection system developed by the Goodrich Advanced Sensors Technical Center. The system uses multiple sensors of different technologies to provide a way of discriminating between real fire events and false triggers. The system uses infrared imaging along with multiple, distributed chemical sensors and smoke detectors, all feeding data to a digital signal processor. The processor merges data from the chemical sensors, smoke detectors, and processed images to determine if a fire (or potential fire) is present. Decision algorithms look at all this data in real-time and make the final decision about whether a fire is present. In the paper, we present a short background of the problem we are solving, the reasons for choosing the technologies used, the design of the system, the signal processing methods and results from extensive system testing. We will also show that multiple sensing technologies are crucial to reducing false alarms in such systems.
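
    The decision logic of the system described above is not published in the abstract; purely as an illustration of multi-modality corroboration, a minimal rule of that general kind might look like the following, with all thresholds and sensor names hypothetical.

    ```python
    # Hedged illustration only: require agreement of at least two independent
    # modalities before declaring a fire, so a dust or fog event that trips only
    # the smoke detector does not raise an alarm.  Thresholds are invented.
    def fire_decision(smoke_alarm: bool, co_ppm: float, hotspot_detected: bool,
                      co_threshold: float = 50.0) -> bool:
        votes = [smoke_alarm, co_ppm > co_threshold, hotspot_detected]
        return sum(votes) >= 2

    # Dust cloud: smoke detector trips, but no chemical or thermal corroboration.
    print(fire_decision(True, co_ppm=3.0, hotspot_detected=False))    # False
    # Real fire: smoke plus elevated CO.
    print(fire_decision(True, co_ppm=120.0, hotspot_detected=False))  # True
    ```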

  11. Research on detection method of end gap of piston rings based on area array CCD and image processing

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Wang, Zhong; Liu, Qi; Li, Lin

    2012-01-01

    The piston ring is one of the most important parts in an internal combustion engine, and the width of its end gap is an important parameter that must be inspected piece by piece. In comparison to previous measurements of the end gap, a new, efficient detection method based on computer vision and image processing theory is presented. This paper describes the framework and measuring principle of the measurement system, in which the image processing algorithm is highlighted. Firstly, a partial end gap image of the piston ring is acquired by the area array CCD; secondly, the single-pixel-connected end gap edge contour is obtained by grayscale threshold segmentation, mathematical morphology edge detection, contour tracing and other image processing tools; finally, the distance between the two end gap edge contour lines is calculated using the least-distance method of straight-line fitting. Repeated experiments have shown that the measurement accuracy can reach 0.01 mm. Moreover, the detection efficiency of an automatic inspection instrument for piston ring parameters based on this method can reach 10-12 pieces/min.
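
    A hedged sketch of that measurement chain (threshold, contour extraction, least distance between the two edge contours) is given below on a synthetic back-lit image. OpenCV is used here for convenience, the gap geometry is invented, and the point-pair least distance stands in for the paper's straight-line fitting step, so this illustrates the principle rather than the instrument's software.

    ```python
    # Hedged sketch (synthetic image, not the instrument's code): threshold, extract
    # the two end contours, and take the least distance between them as the gap width.
    import cv2
    import numpy as np

    # Synthetic back-lit view: two ring ends (bright) separated by a 12-pixel gap.
    img = np.zeros((200, 300), np.uint8)
    cv2.rectangle(img, (10, 80), (140, 120), 255, -1)    # left ring end
    cv2.rectangle(img, (152, 80), (290, 120), 255, -1)   # right ring end

    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    assert len(contours) == 2, "expect one contour per ring end"

    # Least distance between the two edge contours approximates the end-gap width.
    pts_a = contours[0].reshape(-1, 2).astype(float)
    pts_b = contours[1].reshape(-1, 2).astype(float)
    dists = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    print(f"end gap: {dists.min():.1f} px")   # convert with a mm/px calibration in practice
    ```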

  12. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. It is composed of a visual sensor, an angle sensor and a series robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. Due to the higher accuracy of the multi-sensor, two efficient data fusion approaches, the Kalman filter (KF) and multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically by 38%∼78% with the multi-sensor data fusion. Comparing with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, increase of the motion constraints and the complicated procedures of the traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of MCMS, the visual sensor repeatability is experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067

  13. A method for improving the pose accuracy of a robot manipulator based on multi-sensor combined measurement and data fusion.

    PubMed

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. It is composed of a visual sensor, an angle sensor and a series robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. Due to the higher accuracy of the multi-sensor, two efficient data fusion approaches, the Kalman filter (KF) and multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically by 38%~78% with the multi-sensor data fusion. Comparing with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, increase of the motion constraints and the complicated procedures of the traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of MCMS, the visual sensor repeatability is experimentally studied. An optimal range of 1 x 0.8 x 1 ~ 2 x 0.8 x 1 m in the field of view (FOV) is indicated by the experimental results.
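
    The fusion step can be illustrated with standard inverse-covariance weighting of two independent estimates of the same quantity, sketched below. This is the textbook optimal-information-fusion formula under those assumptions, not necessarily the exact MOIFA formulation of the paper, and the numbers are invented.

    ```python
    # Hedged sketch: inverse-covariance (optimal information) fusion of two unbiased,
    # independent estimates of the same 3-D position.  Values are illustrative only.
    import numpy as np

    def fuse_estimates(x1, P1, x2, P2):
        """Fuse two estimates with covariances P1, P2; returns fused mean and covariance."""
        P_fused = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
        x_fused = P_fused @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
        return x_fused, P_fused

    # Example: two position estimates of the manipulator tip (units: mm).
    x_vision = np.array([500.2, 300.1, 799.6]);  P_vision = np.diag([0.04, 0.04, 0.09])
    x_model  = np.array([500.9, 299.5, 800.4]);  P_model  = np.diag([0.25, 0.25, 0.25])
    x, P = fuse_estimates(x_vision, P_vision, x_model, P_model)
    print(x, np.diag(P))   # fused estimate lies closer to the lower-variance sensor
    ```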

  14. Chemometric analysis of multisensor hyperspectral images of precipitated atmospheric particulate matter.

    PubMed

    Ofner, Johannes; Kamilli, Katharina A; Eitenberger, Elisabeth; Friedbacher, Gernot; Lendl, Bernhard; Held, Andreas; Lohninger, Hans

    2015-09-15

    The chemometric analysis of multisensor hyperspectral data allows a comprehensive image-based analysis of precipitated atmospheric particles. Atmospheric particulate matter was precipitated on aluminum foils and analyzed by Raman microspectroscopy and subsequently by electron microscopy and energy dispersive X-ray spectroscopy. All obtained images were of the same spot of an area of 100 × 100 μm(2). The two hyperspectral data sets and the high-resolution scanning electron microscope images were fused into a combined multisensor hyperspectral data set. This multisensor data cube was analyzed using principal component analysis, hierarchical cluster analysis, k-means clustering, and vertex component analysis. The detailed chemometric analysis of the multisensor data allowed an extensive chemical interpretation of the precipitated particles, and their structure and composition led to a comprehensive understanding of atmospheric particulate matter.
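
    A minimal sketch of the data-fusion-plus-clustering idea is given below: co-registered spectral cubes are concatenated pixel by pixel into one multisensor feature vector and segmented with k-means. The synthetic Raman/EDX cubes and the choice of two clusters are assumptions; the study additionally used PCA, hierarchical clustering, and vertex component analysis.

    ```python
    # Hedged sketch (synthetic cubes, not the authors' data): fuse two co-registered
    # hyperspectral cubes along the spectral axis and segment the pixels with k-means.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    rows, cols = 50, 50
    raman = rng.normal(size=(rows, cols, 30))        # stand-in Raman spectra per pixel
    edx = rng.normal(size=(rows, cols, 8))           # stand-in EDX channels per pixel
    raman[10:25, 10:25, 5] += 4.0                    # a "particle" with a strong Raman band
    edx[10:25, 10:25, 2] += 4.0                      # ... and a matching elemental signal

    # Fuse: concatenate the two cubes spectrally, then cluster the pixel vectors.
    fused = np.concatenate([raman, edx], axis=2).reshape(rows * cols, -1)
    fused = StandardScaler().fit_transform(fused)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fused)
    segmentation = labels.reshape(rows, cols)
    print("pixels in each cluster:", np.bincount(labels))
    ```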

  15. Development of a Process for a High Capacity Arc Heater Production of Silicon for Solar Arrays

    NASA Technical Reports Server (NTRS)

    Reed, W. H.

    1979-01-01

    A program was established to develop a high temperature silicon production process using existing electric arc heater technology. Silicon tetrachloride and a reductant (sodium) are injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction is expected to occur and proceed essentially to completion, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection were developed. Included in this report are: test system preparation; testing; injection techniques; kinetics; reaction demonstration; conclusions; and the project status.

  16. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  17. Earthquake processes and geologic structure of the San Andreas Fault at Parkfield through the SAFOD seismic array

    NASA Astrophysics Data System (ADS)

    Chavarria, Juan Andres

    The San Andreas Fault Observatory at Depth (SAFOD) has the goal of understanding earthquake processes at hypocentral depths. In July 2002 Duke University installed a vertical array of seismometers in the SAFOD Pilot Hole (PH). Seismograms recorded by the array give insights into the structure of the SAFOD site. The ratios of P- and S-wave velocities (Vp/Vs) along the array suggest the presence of two faults intersecting the PH. The Vp/Vs ratios also depend on source location, with high values for sources to the northwest along the San Andreas, and lower ones to the southeast. This distribution correlates with high and low creep rates along the SAF. Since higher Vp/Vs ratios can be produced by increasing fluid saturation, this effect could be the one guiding the frequent seismicity and creep along this segment of the fault. The SAFOD PH Vertical Seismic Profiling-seismograms from nearby microearthquake and explosion sources also contain secondary signals between the P- and S-waves. These signals are shown to be P and S waves scattered by the local structure. Kirchhoff migration was applied to define the origin points of these scattered signals. Both 2D and 3D analysis of microearthquake and explosion seismograms showed that the collected scattering points form planar surfaces, interpreted as a vertical San Andreas Fault and four other secondary faults forming a flower structure. These structures along with seismicity located in secondary fault strands suggest that stresses along the San Andreas at Parkfield could be distributed in more complex ways, modifying the local earthquake cycle. Modeling of scattered phases indicates strong geologic contrasts that have recently been drilled by SAFOD. A granite-sediment interface may constitute the boundary of a hanging block with sedimentary materials with low electrical resistivities. Shallow earthquakes at Parkfield take place at the interface of the northeastern boundary of this block, adjacent to the San Andreas Fault

  18. ALLFlight: multisensor data fusion for helicopter operations

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Lueken, T.

    2010-04-01

    The objective of the project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites) is to demonstrate and evaluate the characteristics of different sensors for helicopter operations within degraded visual environments, such as brownout or whiteout. The sensor suite, which is mounted onto DLR's research helicopter EC135, consists of standard color or black-and-white TV cameras, an un-cooled thermal infrared camera (EVS-1000, Max-Viz, USA), an optical radar scanner (HELLAS-W, EADS, Germany), and a millimeter wave radar system (AI-130, ICx Radar Systems, Canada). Data processing is designed and realized by a sophisticated, high performance sensor co-computer (SCC) cluster architecture, which is installed into the helicopter's experimental electronic cargo bay. This paper describes applied methods and the software architecture in terms of real time data acquisition, recording, time stamping and sensor data fusion. First concepts for a pilot HMI are presented as well.

  19. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The use of an excimer laser and high temperature short time diffusion experiment were both more successful than the diffusion at standard temperature and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  20. Flat-plate solar array project process development area, process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1984-01-01

    The program is designed to investigate the fabrication of solar cells on N-type base material by a simultaneous diffusion of N-type and P-type dopants to form a P(+)NN(+) structure. The results of simultaneous diffusion experiments are being compared to cells fabricated using sequential diffusion of dopants into N-base material in the same resistivity range. The process used for the fabrication of the simultaneously diffused P(+)NN(+) cells follows the standard Westinghouse baseline sequence for P-base material except that the two diffusion processes (boron and phosphorus) are replaced by a single diffusion step. All experiments are carried out on N-type dendritic web grown in the Westinghouse pre-pilot facility. The resistivities vary from 0.5 Ωcm to 5 Ωcm. The dopant sources used for both the simultaneous and sequential diffusion experiments are commercial metallorganic solutions with phosphorus or boron components. After these liquids are applied to the web surface, they are baked to form a hard glass which acts as a diffusion source at elevated temperatures. In experiments performed thus far, cells produced in sequential diffusion tests have properties essentially equal to the baseline N(+)PP(+) cells. However, the simultaneous diffusions have produced cells with much lower IV characteristics mainly due to cross-doping of the sources at the diffusion temperature. This cross-doping is due to the high vapor pressure phosphorus (applied as a metallorganic to the back surface) diffusing through the SiO2 mask and then acting as a diffusant source for the front surface.

  1. Direct growth of comet-like superstructures of Au-ZnO submicron rod arrays by solvothermal soft chemistry process

    SciTech Connect

    Shen, Liming; Bao, Ningzhong; Yanagisawa, Kazumichi; Zheng, Yanqing; Domen, Kazunari; Gupta, Arunava; Grimes, Craig A.

    2007-01-15

    The synthesis, characterization, and proposed growth process of a new kind of comet-like Au-ZnO superstructure are described here. This Au-ZnO superstructure was directly created by a simple and mild solvothermal reaction, dissolving the reactants zinc acetate dihydrate and hydrogen tetrachloroaurate tetrahydrate (HAuCl4·4H2O) in ethylenediamine and taking advantage of lattice-matched growth between particular ZnO and Au planes and the natural growth habit of ZnO rods along the [001] direction in solution. For a typical comet-like Au-ZnO superstructure, the comet head consists of one hemispherical end of a central thick ZnO rod and an outer Au-ZnO thin layer, and the comet tail consists of radially standing ZnO submicron rod arrays growing on the Au-ZnO thin layer. These ZnO rods have diameters in the range of 0.2-0.5 µm, an average aspect ratio of about 10, and lengths of up to about 4 µm. The morphology, size, and structure of the ZnO superstructures depend on the concentration of reactants and the reaction time. The HAuCl4·4H2O plays a key role in the solvothermal growth of the comet-like superstructure; only ZnO fibers are obtained in the absence of HAuCl4·4H2O. The UV-vis absorption spectrum shows two absorptions at 365-390 nm and 480-600 nm, attributed to the characteristic absorption of the wide-band-gap semiconductor ZnO and the surface plasmon resonance of the Au particles, respectively. - Graphical abstract: One-step solvothermal synthesis of novel comet-like superstructures of radially standing ZnO submicron rod arrays.

  2. Regional Drought Monitoring Based on Multi-Sensor Remote Sensing

    NASA Astrophysics Data System (ADS)

    Rhee, Jinyoung; Im, Jungho; Park, Seonyoung

    2014-05-01

    Drought originates from a deficit of precipitation and, as it persists, impacts the environment, including agriculture and hydrological resources. The assessment and monitoring of drought has traditionally been performed using a variety of drought indices based on meteorological data, and recently the use of remote sensing data has gained much attention due to its vast spatial coverage and cost-effectiveness. Drought information has been successfully derived from remotely sensed data related to some biophysical and meteorological variables, and drought monitoring is advancing with the development of remote sensing-based indices such as the Vegetation Condition Index (VCI), Vegetation Health Index (VHI), and Normalized Difference Water Index (NDWI), to name a few. The Scaled Drought Condition Index (SDCI) has also been proposed for use in humid regions, demonstrating the performance of multi-sensor data for agricultural drought monitoring. In this study, remote sensing-based hydro-meteorological variables related to drought, including precipitation, temperature, evapotranspiration, and soil moisture, were examined, and the SDCI was improved by providing multiple blends of the multi-sensor indices for different types of drought. Multiple indices were examined together since the coupling and feedback between variables are intertwined, and it is not appropriate to investigate only a limited set of variables to monitor each type of drought. The purpose of this study is to verify the significance of each variable for monitoring each type of drought and to examine the combination of multi-sensor indices for more accurate and timely drought monitoring. The weights for the blends of multiple indicators were obtained from the importance of variables calculated by non-linear optimization using a machine learning technique called Random Forest. The case study was performed in the Republic of Korea, which has four distinct seasons over the course of the year and contains complex topography with a variety
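
    A hedged sketch of the weighting idea described in the abstract follows: train a Random Forest to predict a reference drought indicator from several remote-sensing indices and reuse the normalized variable importances as blend weights. The variable layout and the synthetic data are placeholders, not the study's datasets or tuning.

    # Illustrative sketch only: Random Forest variable importances as blend weights.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 4))            # columns: e.g. precipitation, LST, ET, soil-moisture indices (assumed)
    y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * rng.normal(size=n)  # synthetic reference drought index

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    weights = rf.feature_importances_ / rf.feature_importances_.sum()
    blended_index = X @ weights             # multi-sensor blend, one value per sample
    print("blend weights:", np.round(weights, 3))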

  3. Development of a process for high capacity arc heater production of silicon for solar arrays

    NASA Technical Reports Server (NTRS)

    Meyer, T. N.

    1980-01-01

    A high temperature silicon production process using existing electric arc heater technology is discussed. Silicon tetrachloride and a reductant, liquid sodium, were injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction occurred, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection of the molten silicon were developed. The desired degree of separation was not achieved. The electrical, control and instrumentation, cooling water, gas, SiCl4, and sodium systems are discussed. The plasma reactor, silicon collection, effluent disposal, the gas burnoff stack, and decontamination and safety are also discussed. Procedure manuals, shakedown testing, data acquisition and analysis, product characterization, disassembly and decontamination, and component evaluation are reviewed.

  4. Procrustes algorithm for multisensor track fusion

    NASA Astrophysics Data System (ADS)

    Fernandez, Manuel F.; Aridgides, Tom; Evans, John S., Jr.

    1990-10-01

    The association or "fusion" of multiple-sensor reports allows the generation of a highly accurate description of the environment by enabling efficient compression and processing of otherwise unwieldy quantities of data. Assuming that the observations from each sensor are aligned in feature space and in time, this association procedure may be executed on the basis of how well each sensor's vectors of observations match previously fused tracks. Unfortunately, distance-based algorithms alone do not suffice in those situations where match assignments are not of an obvious nature (e.g., high target density or high false alarm rate scenarios). Our proposed approach is based on recognizing that, together, the sensors' observations and the fused tracks span a vector subspace whose dimensionality and singularity characteristics can be used to determine the total number of targets appearing across sensors. A properly constrained transformation can then be found which aligns the subspaces spanned individually by the observations and by the fused tracks, yielding the relationship existing between both sets of vectors (the "Procrustes Problem"). The global nature of this approach thus enables fusing closely spaced targets by treating them, in a manner analogous to PDA/JPDA algorithms, as clusters across sensors. Since our particular version of the Procrustes Problem consists basically of a minimization in the Total Least Squares sense, the resulting transformations associate both observations to tracks and tracks to observations. This means that the number of tracks being updated will increase or decrease depending on the number of targets present, automatically initiating or deleting "fused" tracks as required, without the need for ancillary procedures. In addition, it is implicitly assumed that both the tracker filters' target trajectory models and the sensors' observations are "noisy", yielding an algorithm robust even against maneuvering targets. Finally, owing to the fact
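
    For orientation, the sketch below solves the classical orthogonal Procrustes step that underlies this kind of alignment: find the rotation and translation that best map one set of vectors onto another in a least-squares sense. It is the textbook SVD solution, not the paper's constrained Total Least Squares formulation; the toy data are made up.

    # Classical orthogonal Procrustes alignment (illustrative, not the paper's TLS variant).
    import numpy as np

    def procrustes_align(observations, tracks):
        """Return rotation R and translation t minimizing ||R @ obs + t - tracks||_F."""
        mu_o = observations.mean(axis=1, keepdims=True)
        mu_t = tracks.mean(axis=1, keepdims=True)
        A, B = observations - mu_o, tracks - mu_t
        U, _, Vt = np.linalg.svd(B @ A.T)
        R = U @ Vt
        if np.linalg.det(R) < 0:          # enforce a proper rotation (no reflection)
            U[:, -1] *= -1
            R = U @ Vt
        t = mu_t - R @ mu_o
        return R, t

    # Toy check: recover a known 2-D rotation between "observations" and "tracks".
    theta = np.deg2rad(20)
    R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    tracks = np.random.default_rng(1).normal(size=(2, 6))
    obs = R_true.T @ (tracks - 0.5)
    R, t = procrustes_align(obs, tracks)
    print(np.allclose(R @ obs + t, tracks))   # -> True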

  5. Process Research On Polycrystalline Silicon Material (PROPSM). [flat plate solar array project

    NASA Technical Reports Server (NTRS)

    Culik, J. S.

    1983-01-01

    The performance-limiting mechanisms in large-grain (greater than 1 to 2 mm in diameter) polycrystalline silicon solar cells were investigated by fabricating a matrix of 4 sq cm solar cells of various thicknesses from 10 cm x 10 cm polycrystalline silicon wafers of several bulk resistivities. Analysis of the illuminated I-V characteristics of these cells suggests that bulk recombination is the dominant factor limiting the short-circuit current. The average open-circuit voltage of the polycrystalline solar cells is 30 to 70 mV lower than that of co-processed single-crystal cells; the fill-factor is comparable. Both the open-circuit voltage and fill-factor of the polycrystalline cells have substantial scatter that is not related to either thickness or resistivity. This implies that these characteristics are sensitive to an additional mechanism that is probably spatial in nature. A damage-gettering heat-treatment improved the minority-carrier diffusion length in low-lifetime polycrystalline silicon; however, extended high-temperature heat-treatment degraded the lifetime.

  6. Background Subtraction for Automated Multisensor Surveillance: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Cristani, Marco; Farenzena, Michela; Bloisi, Domenico; Murino, Vittorio

    2010-12-01

    Background subtraction is a widely used operation in video surveillance, aimed at separating the expected scene (the background) from the unexpected entities (the foreground). There are several problems related to this task, mainly due to the blurred boundaries between the definitions of background and foreground. Therefore, background subtraction is an open issue worth addressing from different points of view. In this paper, we propose a comprehensive review of background subtraction methods that also considers channels other than the visible optical one (such as the audio and infrared channels). In addition to the definition of novel kinds of background, the perspectives that these approaches open up are very appealing: in particular, the multisensor direction seems well suited to solve or simplify several long-standing background subtraction problems. All the reviewed methods are organized in a novel taxonomy that encapsulates all the brand-new approaches in a seamless way.
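
    As a concrete baseline for the operation described above, the sketch below implements the simplest visible-channel scheme, an exponential running-average background model with thresholding, which the reviewed multisensor approaches improve upon. It is not any specific method from the review; frame sizes, the learning rate, and the threshold are illustrative assumptions.

    # Toy running-average background subtraction (illustrative baseline only).
    import numpy as np

    def background_subtract(frames, alpha=0.05, thresh=25):
        """Yield a boolean foreground mask per frame using an exponential running average."""
        background = frames[0].astype(float)
        for frame in frames:
            diff = np.abs(frame.astype(float) - background)
            mask = diff > thresh                                    # foreground where the frame departs from the model
            background = (1 - alpha) * background + alpha * frame   # slowly absorb the scene into the background
            yield mask

    # Synthetic example: a static scene with a bright "object" appearing in later frames.
    frames = [np.zeros((48, 64), dtype=np.uint8) for _ in range(10)]
    for f in frames[5:]:
        f[20:30, 30:40] = 200
    masks = list(background_subtract(frames))
    print(masks[7][25, 35], masks[2][25, 35])   # True inside the object, False before it appears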

  7. MIST Final Report: Multi-sensor Imaging Science and Technology

    SciTech Connect

    Lind, Michael A.; Medvick, Patricia A.; Foley, Michael G.; Foote, Harlan P.; Heasler, Patrick G.; Thompson, Sandra E.; Nuffer, Lisa L.; Mackey, Patrick S.; Barr, Jonathan L.; Renholds, Andrea S.

    2008-03-15

    The Multi-sensor Imaging Science and Technology (MIST) program was undertaken to advance exploitation tools for Long Wavelength Infra Red (LWIR) hyper-spectral imaging (HSI) analysis as applied to the discovery and quantification of nuclear proliferation signatures. The program focused on mitigating LWIR image background clutter to ease the analyst burden and enable (a) faster, more accurate analysis of large volumes of high-clutter data, (b) greater detection sensitivity to nuclear proliferation signatures (primarily released gases), and (c) quantified confidence estimates for the detected signature materials. To this end the program investigated fundamental limits and logical modifications of the more traditional statistical discovery and analysis tools applied to hyperspectral imaging and other disciplines, developed and tested new software incorporating advanced mathematical tools and physics-based analysis, and demonstrated the strengths and weaknesses of the new codes on relevant hyperspectral data sets from various campaigns. This final report describes the content of the program and outlines the significant results.

  8. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity of in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs, by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, watershed transform, and object classification. The positioning of microelectrodes is obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute quantitative measures useful for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so is the implemented neuron segmentation and classification method (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated study of recorded electrophysiological signals and images, as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
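
    The sketch below illustrates, on synthetic images, the kind of pipeline the abstract names: Otsu thresholding plus a distance-transform watershed to split touching somata, and a circular Hough transform to locate electrode-like disks. The image sizes, radii, and parameter values are assumptions for the demo, not the paper's settings.

    # Illustrative segmentation/Hough sketch on synthetic data (not the paper's pipeline).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.draw import disk
    from skimage.feature import peak_local_max
    from skimage.filters import threshold_otsu
    from skimage.segmentation import watershed
    from skimage.transform import hough_circle, hough_circle_peaks

    # Synthetic fluorescence channel: two overlapping bright "somata".
    img = np.zeros((128, 128))
    for center in [(50, 50), (50, 70)]:
        rr, cc = disk(center, 15, shape=img.shape)
        img[rr, cc] += 1.0

    binary = img > threshold_otsu(img)
    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, min_distance=10, labels=ndi.label(binary)[0])
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary)
    print("segmented somata:", labels.max())

    # Synthetic transmitted-light channel: one electrode-like ring located by the Hough transform.
    ring = np.zeros((128, 128))
    rr, cc = disk((90, 90), 12, shape=ring.shape)
    ring[rr, cc] = 1.0
    edges = ring - ndi.binary_erosion(ring)          # keep only the outline
    accum = hough_circle(edges, np.arange(10, 15))
    _, cx, cy, radii = hough_circle_peaks(accum, np.arange(10, 15), total_num_peaks=1)
    print("electrode centre (row, col):", (int(cy[0]), int(cx[0])), "radius:", int(radii[0]))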

  9. Determination of Rayleigh wave ellipticity across the Earthscope Transportable Array using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli; Lin, Fan-Chi; Koper, Keith D.

    2016-10-01

    We present a single-station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multi-component, multi-station noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 sec the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 sec, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (~2%) and significantly higher (>20%), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e., Love waves, body waves, and tilt noise) in the single-station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single-station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves.
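
    A hedged numpy sketch of the core FDPA measurement follows: build a 3-by-3 spectral covariance matrix from the three-component spectra of one window, take its SVD, and read an H/V ratio off the dominant singular vector when the horizontal and vertical components are roughly 90° out of phase. The singular-value and phase thresholds and the synthetic input are illustrative assumptions, not the study's parameters.

    # Toy FDPA-style H/V measurement on synthetic Rayleigh-like noise.
    import numpy as np

    def hv_from_window(z, n, e):
        """z, n, e: complex spectra (one value per sub-window) of the vertical, north, and
        east components at a single frequency bin within one time window."""
        s = np.vstack([z, n, e])                       # 3 x n_subwindows
        C = s @ s.conj().T / s.shape[1]                # 3x3 spectral covariance matrix
        U, svals, _ = np.linalg.svd(C)
        v = U[:, 0]                                    # dominant polarization vector
        h_idx = 1 if abs(v[1]) >= abs(v[2]) else 2     # major horizontal axis
        phase = np.angle(v[h_idx] / v[0])              # horizontal phase relative to vertical
        hv = np.hypot(abs(v[1]), abs(v[2])) / abs(v[0])
        rayleigh_like = svals[0] / svals.sum() > 0.7 and abs(abs(phase) - np.pi / 2) < np.deg2rad(20)
        return hv, rayleigh_like

    # Synthetic input: horizontal 90 degrees out of phase with vertical, true H/V = 0.8.
    c = np.exp(2j * np.pi * np.random.default_rng(0).random(50))   # random phase per sub-window
    hv, is_rayleigh = hv_from_window(c, 0.8j * c, np.zeros_like(c))
    print(round(hv, 2), is_rayleigh)                   # -> 0.8 True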

  10. Global optimization for multisensor fusion in seismic imaging

    SciTech Connect

    Barhen, J.; Protopopescu, V.; Reister, D.

    1997-06-01

    The accurate imaging of subsurface structures requires the fusion of data collected from large arrays of seismic sensors. The fusion process is formulated as an optimization problem and yields an extremely complex energy surface. Due to the very large number of local minima to be explored and escaped from, the seismic imaging problem has typically been tackled with stochastic optimization methods based on Monte Carlo techniques. Unfortunately, these algorithms are very cumbersome and computationally intensive. Here, the authors present TRUST, a novel deterministic algorithm for global optimization, which they apply to seismic imaging. The excellent results demonstrate that TRUST may provide the necessary breakthrough to address major scientific and technological challenges in fields as diverse as seismic modeling, process optimization, and protein engineering.

  11. Air Enquirer's multi-sensor boxes as a tool for High School Education and Atmospheric Research

    NASA Astrophysics Data System (ADS)

    Morguí, Josep-Anton; Font, Anna; Cañas, Lidia; Vázquez-García, Eusebi; Gini, Andrea; Corominas, Ariadna; Àgueda, Alba; Lobo, Agustin; Ferraz, Carlos; Nofuentes, Manel; Ulldemolins, Delmir; Roca, Alex; Kamnang, Armand; Grossi, Claudia; Curcoll, Roger; Batet, Oscar; Borràs, Silvia; Occhipinti, Paola; Rodó, Xavier

    2016-04-01

    An educational tool was designed with the aim of making the research done on Greenhouse Gases (GHGs) in the ClimaDat Spanish network of atmospheric observation stations (www.climadat.es) more comprehensible. This tool is called the Air Enquirer and it consists of a multi-sensor box. It is envisaged to build more than two hundred boxes and provide them to Spanish high schools through the education department (www.educaixa.com) of the "Obra Social 'La Caixa'", which funds this research. The starting point for the development of the Air Enquirers was the experience at IC3 (www.ic3.cat) in the CarboSchools+ FP7 project (www.carboschools.cat, www.carboschools.eu). The Air Enquirer's multi-sensor box is based on the Arduino architecture and contains sensors for CO2, temperature, relative humidity, pressure, and both infrared and visible luminance. The Air Enquirer is designed for taking continuous measurements. Every Air Enquirer ensemble of measurements is used to convert values to standard units (water content in ppmv, and CO2 in ppmv_dry). These values are referred to a calibration made with Cavity Ring-Down Spectrometry (Picarro®) under different temperature, pressure, humidity, and CO2 concentrations. Multiple sets of Air Enquirers are intercalibrated for use in parallel during the experiments. The different experiments proposed to the students will be outdoor (observational) or indoor (experimental, in the lab), focusing on understanding the biogeochemistry of GHGs in ecosystems (mainly CO2), the exchange (flux) of gases, organic matter production, respiration and decomposition processes, the influence of anthropogenic activities on the exchange of gases (and particles), and their interaction with the structure and composition of the atmosphere (temperature, water content, cooling and warming processes, radiative forcing, vertical gradients and horizontal patterns). In order to ensure Air Enquirers a high-profile research performance the experimental designs
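
    The small sketch below shows the standard dry-air correction behind a ppmv_dry value of the kind mentioned above: the wet CO2 mole fraction is divided by (1 - x_H2O), where x_H2O is the simultaneously measured water vapour mole fraction. The numbers used are illustrative, not Air Enquirer calibration data.

    # Dry-air mole fraction conversion (illustrative values).
    def co2_dry_ppm(co2_wet_ppm, h2o_ppmv):
        """Remove the dilution effect of water vapour: x_dry = x_wet / (1 - x_H2O)."""
        x_h2o = h2o_ppmv * 1e-6            # water vapour mole fraction (dimensionless)
        return co2_wet_ppm / (1.0 - x_h2o)

    # Example: 410 ppm measured in moist air containing 15000 ppmv of water vapour.
    print(round(co2_dry_ppm(410.0, 15000.0), 1))   # ~416.2 ppm in dry air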

  12. Advantages and Challenges in using Multi-Sensor Data for Studying Aerosols from Space

    NASA Astrophysics Data System (ADS)

    Leptoukh, Gregory

    We are now living in the golden era of numerous sensors measuring aerosols from space, e.g., MODIS, MISR, MERIS, OMI, POLDER, etc. Data from multiple sensors provide a more complete coverage of physical phenomena than data from a single sensor. These sensors are rather different from each other: they are sensitive to various parts of the atmosphere, use different aerosol models, and treat the surface differently when retrieving aerosols. However, they complement each other, thus providing more information about the spatial, vertical, and temporal distribution of aerosols. In addition to differences in instrumentation, retrieval algorithms, and calibration, there are quite substantial differences in processing algorithms from Level 0 up to Level 3 and 4. Some of these differences in processing steps, at times not well documented and not widely known by users, can lead to quite significant differences in final products. Without documenting all the steps leading to the final product, data users will not trust the data and/or may use the data incorrectly. Data by themselves, without quality assessment and provenance, are not sufficient to make accurate scientific conclusions. In this paper we provide examples of striking differences between aerosol optical depth data from MODIS, MISR, and MERIS that can be attributed to differences in a certain threshold, aggregation methods, and the dataday definition. We discuss challenges in developing processing provenance. Also, we address issues of harmonization of data, quality, and provenance that are needed to guide multi-sensor data usage and avoid apples-to-oranges comparisons and fusion.

  13. Magnetic arrays

    DOEpatents

    Trumper, David L.; Kim, Won-jong; Williams, Mark E.

    1997-05-20

    Electromagnet arrays which can provide selected field patterns in either two or three dimensions, and in particular, which can provide single-sided field patterns in two or three dimensions. These features are achieved by providing arrays which have current densities that vary in the windings both parallel to the array and in the direction of array thickness.

  14. Magnetic arrays

    DOEpatents

    Trumper, D.L.; Kim, W.; Williams, M.E.

    1997-05-20

    Electromagnet arrays are disclosed which can provide selected field patterns in either two or three dimensions, and in particular, which can provide single-sided field patterns in two or three dimensions. These features are achieved by providing arrays which have current densities that vary in the windings both parallel to the array and in the direction of array thickness. 12 figs.

  15. A Comparison of Multisensor Precipitation Estimation Methods in Complex Terrain for Flash Flood Warning and Mitigation

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Chen, H.; Chandrasekar, C. V.; Willie, D.; Reynolds, D.; Campbell, C.; Zhang, Y.; Sukovich, E.

    2012-12-01

    Investigating the uncertainties and improving the accuracy of quantitative precipitation estimation (QPE) is a critical mission of the National Oceanic and Atmospheric Administration (NOAA). QPE is extremely challenging in regions of complex terrain like the western U.S. because of the sparse coverage of ground-based radar, complex orographic precipitation processes, and the effects of beam blockages (e.g., Westrick et al. 1999). In addition, the rain gauge density in complex terrain is often inadequate to capture spatial variability in the precipitation patterns. The NOAA Hydrometeorology Testbed (HMT) conducts research on precipitation and weather conditions that can lead to flooding, and fosters the transition of scientific advances and new tools into forecasting operations (see hmt.noaa.gov). The HMT program consists of a series of demonstration projects in different geographical regions to enhance understanding of region-specific processes related to precipitation, including QPE. There are a number of QPE systems that are widely used across NOAA for precipitation estimation (e.g., Cifelli et al. 2011; Chandrasekar et al. 2012). Two of these systems have been installed at the NOAA Earth System Research Laboratory: the Multisensor Precipitation Estimator (MPE) and the National Mosaic and Multi-sensor QPE (NMQ), developed by NWS and NSSL, respectively. Both provide gridded QPE products, including radar-only, gauge-only, and gauge-radar merged estimates; however, these systems often produce large differences in QPE (in terms of amounts and spatial patterns) due to differences in Z-R selection, vertical profile of reflectivity correction, and gauge interpolation procedures. Determining the appropriate QPE product and quantifying QPE uncertainty are critical for operational applications, including water management decisions and flood warnings. For example, hourly QPE is used to correct radar based rain rates used by the Flash Flood Monitoring and Prediction (FFMP) package in
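
    One source of the QPE differences noted above, the choice of Z-R relation, is easy to illustrate. The sketch below converts reflectivity (dBZ) to rain rate by inverting Z = a R^b for two common textbook coefficient pairs; these are illustrative values, not the operational MPE or NMQ settings.

    # Z-R relation comparison (illustrative coefficients only).
    import numpy as np

    def rain_rate(dbz, a, b):
        z = 10.0 ** (dbz / 10.0)           # reflectivity factor in mm^6 m^-3
        return (z / a) ** (1.0 / b)        # invert Z = a * R**b, R in mm/h

    dbz = np.array([20.0, 35.0, 50.0])
    print("Marshall-Palmer (a=200, b=1.6):", np.round(rain_rate(dbz, 200, 1.6), 2))
    print("Convective (a=300, b=1.4):     ", np.round(rain_rate(dbz, 300, 1.4), 2))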

  16. The Canadian Forces ILDS: a militarily fielded multisensor vehicle-mounted teleoperated landmine detection system

    NASA Astrophysics Data System (ADS)

    McFee, John E.; Russell, Kevin L.; Chesney, Robert H.; Faust, Anthony A.; Das, Yogadhish

    2006-05-01

    The Improved Landmine Detection System (ILDS) is intended to meet Canadian military mine clearance requirements in rear area combat situations and peacekeeping on roads and tracks. The system consists of two teleoperated vehicles and a command vehicle. The teleoperated protection vehicle precedes, clearing antipersonnel mines and magnetic and tilt rod-fuzed antitank mines. It consists of an armoured personnel carrier with a forward looking infrared imager, a finger plow or roller and a magnetic signature duplicator. The teleoperated detection vehicle follows to detect antitank mines. The purpose-built vehicle carries forward looking infrared and visible imagers, a 3 m wide, down-looking sensitive electromagnetic induction detector array and a 3 m wide down-looking ground probing radar, which scan the ground in front of the vehicle. Sensor information is combined using navigation sensors and custom navigation, registration, spatial correspondence and data fusion algorithms. Suspicious targets are then confirmed by a thermal neutron activation detector. The prototype, designed and built by Defence R&D Canada, was completed in October 1997. General Dynamics Canada delivered four production units, based on the prototype concept and technologies, to the Canadian Forces (CF) in 2002. ILDS was deployed in Afghanistan in 2003, making the system the first militarily fielded, teleoperated, multi-sensor vehicle-mounted mine detector and the first with a fielded confirmation sensor. Performance of the prototype in Canadian and independent US trials is summarized and recent results from the production version of the confirmation sensor are discussed. CF operations with ILDS in Afghanistan are described.

  17. Chemometric analysis of multi-sensor hyperspectral images of coarse mode aerosol particles for the image-based investigation on aerosol particles

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Kamilli, Katharina A.; Eitenberger, Elisabeth; Friedbacher, Gernot; Lendl, Bernhard; Held, Andreas; Lohninger, Hans

    2015-04-01

    Multi-sensor hyperspectral imaging is a novel technique, which allows the determination of composition, chemical structure and pure components of laterally resolved samples by chemometric analysis of different hyperspectral datasets. These hyperspectral datasets are obtained by different imaging methods, analysing the same sample spot and superimposing the hyperspectral data to create a single multi-sensor dataset. Within this study, scanning electron microscopy (SEM), Raman and energy-dispersive X-ray spectroscopy (EDX) images were obtained from size-segregated aerosol particles, sampled above Western Australian salt lakes. The particles were collected on aluminum foils inside a 2350 L Teflon chamber using a Sioutas impactor, sampling aerosol particles of sizes between 250 nm and 10 µm. The complex composition of the coarse-mode particles can be linked to primary emissions of inorganic species as well as to oxidized volatile organic carbon (VOC) emissions. The oxidation products of VOC emissions are supposed to form an ultra-fine nucleation mode, which was observed during several field campaigns between 2006 and 2013. The aluminum foils were analysed using chemical imaging and electron microscopy. A Horiba LabRam 800HR Raman microscope was used for vibrational mapping of an area of about 100 µm x 100 µm of the foils at a resolution of about 1 µm. The same area was analysed using a Quanta FEI 200 electron microscope (about 250 nm resolution). In addition to the high-resolution image, the elemental composition could be investigated using energy-dispersive X-ray spectroscopy. The obtained hyperspectral images were combined into a multi-sensor dataset using the software package Imagelab (Epina Software Labs, www.imagelab.at). After pre-processing of the images, the multi-sensor hyperspectral dataset was analysed using several chemometric methods such as principal component analysis (PCA), hierarchical cluster analysis (HCA) and other multivariate methods. Vertex
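
    As a hedged illustration of the chemometric step described above, the sketch below stacks co-registered hyperspectral cubes from two "sensors" pixel by pixel into one multi-sensor data matrix and runs PCA on it. The cube sizes, channel counts, and random data are placeholders, not the Imagelab workflow or the campaign data.

    # Toy multi-sensor fusion followed by PCA (illustrative only).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    h, w = 40, 40
    raman = rng.normal(size=(h, w, 120))        # e.g. 120 Raman shift channels per pixel (assumed)
    edx = rng.normal(size=(h, w, 30))           # e.g. 30 X-ray energy channels per pixel (assumed)

    # Fuse: one feature vector per pixel, concatenating both sensors' spectra.
    fused = np.concatenate([raman.reshape(h * w, -1), edx.reshape(h * w, -1)], axis=1)

    pca = PCA(n_components=5).fit(fused)
    scores = pca.transform(fused).reshape(h, w, 5)   # score images, one per principal component
    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))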

  18. Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar

    NASA Astrophysics Data System (ADS)

    Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan

    2016-09-01

    A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on In-phase-Quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends highly on the introduced linearization errors, the initialization, and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed, and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
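
    The sketch below gives a minimal EKF in the spirit of the abstract: a 2-D constant-velocity state is updated with radial-velocity (Doppler) readings from several fixed CW radar modules, with the measurement model linearized at each step. The sensor positions, noise levels, process-noise form, and trajectory are illustrative assumptions, not the paper's models.

    # Toy Doppler-only EKF (illustrative, not the published tracker).
    import numpy as np

    sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # assumed radar module positions (m)
    dt, q, r = 0.05, 0.5, 0.05                                  # time step (s), process/measurement noise levels

    F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])             # simple diagonal process noise
    R = r**2 * np.eye(len(sensors))

    def h(x):
        """Doppler (radial velocity) seen by each sensor for state x = [px, py, vx, vy]."""
        d = x[:2] - sensors
        rng_ = np.linalg.norm(d, axis=1)
        return d @ x[2:] / rng_

    def H_jac(x):
        d = x[:2] - sensors
        rng_ = np.linalg.norm(d, axis=1, keepdims=True)
        u = d / rng_                                            # unit line-of-sight vectors
        v = x[2:]
        dpos = (v[None, :] - u * (u @ v)[:, None]) / rng_       # d(doppler)/d(position)
        return np.hstack([dpos, u])                             # Jacobian wrt [px, py, vx, vy]

    # Simulate a target and run the filter (a toy demonstration, not a tuned tracker).
    rng_gen = np.random.default_rng(2)
    x_true = np.array([0.5, 0.8, -0.6, -0.4])
    x_est, P = np.array([0.4, 0.9, 0.0, 0.0]), np.eye(4)
    for _ in range(100):
        x_true = F @ x_true
        z = h(x_true) + r * rng_gen.normal(size=len(sensors))
        x_est, P = F @ x_est, F @ P @ F.T + Q                   # predict
        H = H_jac(x_est)                                        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - h(x_est))
        P = (np.eye(4) - K @ H) @ P
    print("final position error:", np.round(np.linalg.norm(x_est[:2] - x_true[:2]), 3))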

  19. Multi-sensor image interpretation using laser radar and thermal images

    NASA Astrophysics Data System (ADS)

    Chu, Chen-Chau; Aggarwal, J. K.

    1991-03-01

    A knowledge-based system is presented which interprets registered laser radar and thermal images. The objective is to detect and recognize man-made objects at kilometer range in outdoor scenes. The multisensor fusion approach is applied to various sensing modalities (range, intensity, velocity, and thermal) to improve both image segmentation and interpretation. The ability to use multiple sensors greatly helps an intelligent platform to understand and interact with its environment. The knowledge-based interpretation system, AIMS, is constructed using KEE and Lisp. Low-level attributes of image segments (regions) are computed by the segmentation modules and then converted into the KEE format. The interpretation system applies forward chaining in a bottom-up fashion to derive object-level interpretations from databases generated by low-level processing modules. Segments are grouped into objects and then objects are classified into predefined categories. AIMS employs a two-tiered software structure. The efficiency of AIMS is enhanced by transferring nonsymbolic processing tasks to a concurrent service manager (program). Therefore, tasks with different characteristics are executed using different software tools and methodologies.

  20. Density of states of short channel amorphous In-Ga-Zn-O thin-film transistor arrays fabricated using manufacturable processes

    NASA Astrophysics Data System (ADS)

    Kim, Soo Chang; Kim, Young Sun; Kanicki, Jerzy

    2015-05-01

    The effect of temperature on the electrical characteristics of short-channel amorphous In-Ga-Zn-O (a-IGZO) thin-film transistor (TFT) arrays fabricated using manufacturable processes was investigated. This work shows that the fabricated TFT arrays are acceptable and stable enough for the manufacturing of ultra-high-definition (UHD) active matrix liquid crystal displays in sizes larger than 55 in. We observed that the studied a-IGZO TFT arrays obeyed the Meyer-Neldel (MN) rule over a broad range of gate bias voltages. The MN rule and an exponential subgap density of states (DOS) model were combined to extract the DOS distribution for the investigated a-IGZO TFT arrays. The results were consistent with previous works on single a-IGZO TFTs. This study demonstrates that this method of DOS extraction can be applied to both single devices and arrays, and is reproducible from lab to lab. We believe that this approach to DOS extraction is useful for further development of UHD flat panel display technology.

  1. Microdischarge arrays

    NASA Astrophysics Data System (ADS)

    Shi, Wenhui

    Microhollow cathode discharges (MHCDs) are DC or pulsed gas discharges between two electrodes, separated by a dielectric, and containing a concentric hole. The diameter of the hole, in this hollow cathode configuration, is in the hundred-micrometer range. MHCDs satisfy the two conditions necessary for efficient excimer radiation sources: (1) high-energy electrons, which are required to provide a high concentration of excited or ionized rare gas atoms; (2) high-pressure operation, which favors excimer formation (a three-body process). Flat panel excimer sources require parallel operation of MHCDs. Based on the current-voltage characteristics of MHCD discharges, which have positive slopes in the low-current (Townsend) mode and in the abnormal glow mode, stable arrays of MHCD discharges in argon and xenon could be generated in these current ranges without ballasting each MHCD separately. In the Townsend range, these arrays could be operated up to pressures of 400 Torr. In the abnormal glow mode, discharge arrays were found to be stable up to atmospheric pressure. By using semi-insulating silicon as the anode material, the stable operation of MHCD arrays could be extended to the current range with constant voltage (normal glow) and also to that with negative differential conductance (hollow cathode discharge region). Experiments with a cathode geometry without microholes, i.e. excluding the hollow cathode phase, revealed that stable operation of discharges over an extended area was possible. The discharge structure in this configuration reduces to only the cathode fall and negative glow, with the negative glow plasma serving to conduct the discharge current radially to the circular anode. With decreasing current, a transition from homogeneous plasma to self-organized plasma filaments is observed. Array formation was not only studied with discharges in parallel, but also with MHCD discharges in series. By using a sandwich electrode configuration, a tandem discharge was

  2. Sampled Longest Common Prefix Array

    NASA Astrophysics Data System (ADS)

    Sirén, Jouni

    When augmented with the longest common prefix (LCP) array and some other structures, the suffix array can solve many string processing problems in optimal time and space. A compressed representation of the LCP array is also one of the main building blocks in many compressed suffix tree proposals. In this paper, we describe a new compressed LCP representation: the sampled LCP array. We show that when used with a compressed suffix array (CSA), the sampled LCP array often offers better time/space trade-offs than the existing alternatives. We also show how to construct the compressed representations of the LCP array directly from a CSA.
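
    For background, the sketch below builds the plain (uncompressed) LCP array from a suffix array using Kasai's linear-time algorithm; the sampled and compressed representations discussed in the paper build on this structure. The naive suffix-array construction is only for the demo.

    # Plain LCP array via Kasai's algorithm (background illustration).
    def suffix_array(s):
        return sorted(range(len(s)), key=lambda i: s[i:])   # O(n^2 log n), fine for a small demo

    def lcp_array(s, sa):
        """lcp[i] = length of the longest common prefix of the suffixes sa[i-1] and sa[i]."""
        n = len(s)
        rank = [0] * n
        for i, suf in enumerate(sa):
            rank[suf] = i
        lcp, h = [0] * n, 0
        for i in range(n):
            if rank[i] > 0:
                j = sa[rank[i] - 1]
                while i + h < n and j + h < n and s[i + h] == s[j + h]:
                    h += 1
                lcp[rank[i]] = h
                if h:
                    h -= 1
            else:
                h = 0
        return lcp

    s = "banana"
    sa = suffix_array(s)
    print(sa)                  # [5, 3, 1, 0, 4, 2]
    print(lcp_array(s, sa))    # [0, 1, 3, 0, 0, 2]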

  3. COMPONENTS OF LASER SYSTEMS AND PROCESSES OCCURRING IN THEM: Properties of an array of phase-locked CO2 lasers

    NASA Astrophysics Data System (ADS)

    Kachurin, O. R.; Lebedev, F. V.; Napartovich, A. P.

    1988-09-01

    An experimental investigation was made of the emission characteristics of a coherently operating array of waveguide CO2 lasers. The lasers were phase locked by self-reproduction of periodic light fields. Three supermodes of the array were found to exist, an investigation was made of the radiation power distribution over the aperture as a function of the active medium pumping level, and the efficiency in the coherent lasing regime was determined.

  4. A vehicle mounted multi-sensor array for waste site characterization

    SciTech Connect

    Baumgart, C.W.; Ciarcia, C.A.; Tunnell, T.W.

    1995-02-01

    Personnel at AlliedSignal Aerospace, Kirtland Operations (formerly EG&G Energy Measurements, Kirtland Operations) and EG&G Energy Measurements, Los Alamos Operations, have successfully developed and demonstrated a number of technologies which can be applied to the environmental remediation and waste management problem. These applications have included the development of self-contained and towed remote sensing platforms and advanced signal analysis techniques for the detection and characterization of subsurface features. This presentation will provide a brief overview of applications that have been and are currently being fielded by both AlliedSignal and EG&G Energy Measurements personnel and will describe some of the ways that such technologies can and are being used for the detection and characterization of hazardous waste sites.

  5. Simultaneous and automated monitoring of the multimetal biosorption processes by potentiometric sensor array and artificial neural network.

    PubMed

    Wilson, D; del Valle, M; Alegret, S; Valderrama, C; Florido, A

    2013-09-30

    In this communication, a new methodology for the simultaneous and automated monitoring of biosorption processes of multimetal mixtures of polluting heavy metals on vegetable wastes, based on flow-injection potentiometry (FIP) and electronic tongue (ET) detection, is presented. A fixed-bed column filled with grape stalks from wine industry wastes is used as the biosorption setup to remove the metal mixtures from the influent solution. The monitoring system consists of a computer-controlled FIP prototype with the ET based on an array of 9 flow-through ion-selective electrodes and electrodes with generic response to divalent ions placed in series, plus an artificial neural network response model. The cross-response to Cu(2+), Cd(2+), Zn(2+), Pb(2+) and Ca(2+) (as target ions) is used, and only when dynamic treatment of the kinetic components of the transient signal is incorporated is a correct operation of the system achieved. For this purpose, the FIA peaks are transformed via a Fourier treatment, and selected coefficients are used to feed an artificial neural network response model. Real-time monitoring of different binary (Cu(2+)/Pb(2+)), (Cu(2+)/Zn(2+)) and ternary mixtures (Cu(2+)/Pb(2+)/Zn(2+)), (Cu(2+)/Zn(2+)/Cd(2+)), simultaneous with the release of Ca(2+) into the effluent solution, is achieved satisfactorily using the reported system, obtaining the corresponding breakthrough curves and showing the ion-exchange mechanism among the different metals. The analytical performance is verified against conventional spectroscopic techniques, with good concordance of the obtained breakthrough curves and modeled adsorption parameters.
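
    The sketch below illustrates the signal-processing idea named in the abstract: compress each transient FIA peak into a few Fourier coefficients and train a small neural network to map them to concentrations. The peak shape, coefficient count, and network size are illustrative placeholders, not the paper's calibration model.

    # Toy Fourier-compressed FIA peaks fed to a small neural network (illustrative only).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)

    def fia_peak(conc):
        """Synthetic transient whose height and shape depend on concentration."""
        return conc * np.exp(-((t - 0.3) ** 2) / (0.01 + 0.002 * conc)) + 0.01 * rng.normal(size=t.size)

    concs = rng.uniform(0.1, 5.0, size=300)
    signals = np.array([fia_peak(c) for c in concs])
    coeffs = np.abs(np.fft.rfft(signals, axis=1))[:, :10]     # first 10 Fourier magnitudes per peak

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(coeffs, concs)
    test = np.abs(np.fft.rfft(fia_peak(2.5)))[:10]
    print("predicted concentration:", round(float(model.predict([test])[0]), 2))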

  6. Imaging Rupture Process of the 2015 Mw 8.3 Illapel Earthquake Using the US Seismic Array

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ghosh, Abhijit

    2016-07-01

    We study the rupture process of the Mw 8.3 Illapel, Chile earthquake that occurred on 16 September 2015 using the US seismic network as a large aperture array. We apply the back-projection technique using two frequency bands, 0.1-0.5 and 0.25-1 Hz. Both frequency bands reveal that this event is characterized by rupture of three patches. The higher frequency band shows an earlier burst of seismic radiation and illuminates a relatively down-dip patch of energy radiation. On the other hand, the lower frequency band shows a more up-dip rupture and matches well with the slip inversion models of other studies. The Illapel earthquake ruptures about 100 km along-strike and extends about 40 km up-dip and 40 km down-dip along the subduction megathrust fault. The earthquake first ruptures around the epicenter with a relatively low level of seismic radiation. Then, it propagates northeast along the Juan Fernandez Ridge (JFR) to rupture a patch down-dip, accompanied by strong higher-frequency seismic radiation. Finally, it ruptures to the northwest of the epicenter and terminates south of the Challenger fracture zone (CFZ), releasing a burst of strong lower-frequency seismic radiation. Most of the aftershocks are either within or at the edge of the rupture patch, a region characterized by high coupling in central Chile. The rupture is bounded along-strike by two fracture zones to the north and south. The JFR to the south of the rupture zone may have acted as a barrier along-strike, leaving the area south of the mainshock vulnerable to a large damaging earthquake in the near future.
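
    A hedged sketch of the back-projection idea follows: for each candidate source location, the array traces are shifted by the predicted travel-time differences and stacked, and the grid point with the largest stack power marks the radiator. A homogeneous velocity and synthetic pulses stand in for real teleseismic data and travel-time tables.

    # Toy shift-and-stack back-projection (illustrative geometry and velocity).
    import numpy as np

    v, dt = 3.0, 0.05                                   # assumed wave speed (km/s) and sample step (s)
    stations = np.array([[80.0, 0.0], [0.0, 90.0], [-70.0, -60.0], [60.0, -80.0]])
    true_src = np.array([5.0, -3.0])
    t = np.arange(0.0, 50.0, dt)

    def trace(sta):
        """Synthetic record: one pulse arriving at the travel time from the true source."""
        tt = np.linalg.norm(sta - true_src) / v
        return np.exp(-((t - tt) ** 2) / 0.05)

    traces = [trace(s) for s in stations]

    # Scan candidate source positions; align traces by predicted travel times and stack.
    grid = np.arange(-10.0, 10.5, 0.5)
    power = np.zeros((grid.size, grid.size))
    for i, x in enumerate(grid):
        for j, y in enumerate(grid):
            shifts = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
            stack = sum(np.interp(t, t - s, tr) for s, tr in zip(shifts, traces))
            power[i, j] = np.max(stack ** 2)
    best = np.unravel_index(np.argmax(power), power.shape)
    print("back-projected source:", (grid[best[0]], grid[best[1]]))   # expected near (5.0, -3.0)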

  7. Simultaneous and automated monitoring of the multimetal biosorption processes by potentiometric sensor array and artificial neural network.

    PubMed

    Wilson, D; del Valle, M; Alegret, S; Valderrama, C; Florido, A

    2013-09-30

    In this communication, a new methodology for the simultaneous and automated monitoring of biosorption processes of multimetal mixtures of polluting heavy metals on vegetable wastes, based on flow-injection potentiometry (FIP) and electronic tongue (ET) detection, is presented. A fixed-bed column filled with grape stalks from wine industry wastes is used as the biosorption setup to remove the metal mixtures from the influent solution. The monitoring system consists of a computer-controlled FIP prototype with the ET based on an array of 9 flow-through ion-selective electrodes and electrodes with generic response to divalent ions placed in series, plus an artificial neural network response model. The cross-response to Cu(2+), Cd(2+), Zn(2+), Pb(2+) and Ca(2+) (as target ions) is used, and only when dynamic treatment of the kinetic components of the transient signal is incorporated is a correct operation of the system achieved. For this purpose, the FIA peaks are transformed via a Fourier treatment, and selected coefficients are used to feed an artificial neural network response model. Real-time monitoring of different binary (Cu(2+)/Pb(2+)), (Cu(2+)/Zn(2+)) and ternary mixtures (Cu(2+)/Pb(2+)/Zn(2+)), (Cu(2+)/Zn(2+)/Cd(2+)), simultaneous with the release of Ca(2+) into the effluent solution, is achieved satisfactorily using the reported system, obtaining the corresponding breakthrough curves and showing the ion-exchange mechanism among the different metals. The analytical performance is verified against conventional spectroscopic techniques, with good concordance of the obtained breakthrough curves and modeled adsorption parameters. PMID:23953435

  8. Multi-Sensor Observations of Earthquake Related Atmospheric Signals over Major Geohazard Validation Sites

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Davindenko, D.; Hattori, K.; Kafatos, M.; Taylor, P.

    2012-01-01

    We are conducting a scientific validation study involving multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several atmospheric and environmental parameters which we found are associated with earthquakes, namely: thermal infrared radiation, outgoing long-wavelength radiation, ionospheric electron density, and atmospheric temperature and humidity. For the first time, we applied this approach to selected GEOSS sites prone to earthquakes or volcanoes. This provides a new opportunity to cross-validate our results with the dense networks of in-situ and space measurements. We investigated two different seismic aspects: first, sites with recent large earthquakes, viz. Tohoku-oki (M9, 2011, Japan) and the Emilia region (M5.9, 2012, N. Italy). Our retrospective analysis of satellite data has shown the presence of anomalies in the atmosphere. Second, we performed a retrospective analysis to check the re-occurrence of similar anomalous behavior in the atmosphere/ionosphere over three regions with distinct geological settings and high seismicity: Taiwan, Japan and Kamchatka, which include 40 major earthquakes (M>5.9) for the period 2005-2009. We found anomalous behavior before all of these events with no false negatives; false positives were less than 10%. Our initial results suggest that multi-instrument space-borne and ground observations show a systematic appearance of atmospheric anomalies near the epicentral area that could be explained by a coupling between the observed physical parameters and earthquake preparation processes.

  9. Multisensor data fusion implementation within a distributed command and control system

    NASA Astrophysics Data System (ADS)

    Shahbazian, Elisa

    1992-07-01

    Naval forces will encounter threats across the air, surface, underwater, electro-optic/infrared (EO/IR), communications, radar, and electronic warfare domains. Technological advancements of future threats to the navy will place heavy demands (quicker reaction to faster, stealth threats) upon the ability to process and interpret tactical data provided by multiple and often dissimilar sensors. This emphasizes the need for a naval platform employing an automated distributed command and control system (CCS) which includes a multi-sensor data fusion (MSDF) function to increase the probability of mission success against the threats of the future. The main advantage of a distributed CCS is redundancy and reconfigurability, resulting in a high degree of survivability and flexibility while accomplishing the mission. The MSDF function provides the combat system with a capability to analyze sensor data from multiple sensors and derive contact/track solutions which would not be derived by the individual sensors. The command and control (C2) functions, including the MSDF function, operate within a number of general-purpose C2 processors, communicating with each other and the sensor systems via a high-speed data bus. Different sensors are more effective in different environmental conditions and for different geometrical parameters (elevation, distance, bearing, etc.). The MSDF function combines the capabilities of all the sensors, providing the operators and other CCS functions with more accurate solutions faster than each sensor system operating alone. An architecture of a distributed CCS using an MSDF function to increase the probability of mission success of a naval platform is presented.

  10. Multi-sensor observations of earthquake related atmospheric signals over major geohazard validation sites

    NASA Astrophysics Data System (ADS)

    Ouzounov, D. P.; Pulinets, S. A.; Davidenko, D.; Hattori, K.; Kafatos, M.; Taylor, P. T.

    2012-12-01

    We are conducting a scientific validation study involving multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several atmospheric and environmental parameters which we found are associated with earthquakes, namely: thermal infrared radiation, outgoing long-wavelength radiation, density of ionospheric electrons, and atmospheric temperature and humidity. For the first time, we applied this approach to selected GEOSS sites prone to earthquakes or volcanoes. This provides a new opportunity to cross-validate our results with the dense networks of in-situ and space measurements. We investigated two different seismic aspects: first, sites with recent large earthquakes, viz. Tohoku-oki (M9, 2011, Japan) and the Emilia region (M5.9, 2012, N. Italy). Our retrospective analysis of satellite data has shown the presence of anomalies in the atmosphere. Second, we performed a retrospective analysis to check the re-occurrence of similar anomalous behavior in the atmosphere/ionosphere over three regions with distinct geological settings and high seismicity: Taiwan, Japan and Kamchatka, which include 40 major earthquakes (M>5.9) for the period 2005-2009. We found anomalous behavior before all of these events with no false negatives; false positives were less than 10%. Our initial results suggest that multi-instrument space-borne and ground observations show a systematic appearance of atmospheric anomalies near the epicentral area that could be explained by a coupling between the observed physical parameters and earthquake preparation processes.

  11. Multi-sensor fusion techniques for state estimation of micro air vehicles

    NASA Astrophysics Data System (ADS)

    Donavanik, Daniel; Hardt-Stremayr, Alexander; Gremillion, Gregory; Weiss, Stephan; Nothwang, William

    2016-05-01

    Aggressive flight of micro air vehicles (MAVs) in unstructured, GPS-denied environments poses unique challenges for the estimation of vehicle pose and velocity due to the noise, delay, and drift in individual sensor measurements. Maneuvering flight at speeds in excess of 5 m/s poses additional challenges even for active range sensors; in the case of LIDAR, an assembled scan of the vehicle's environment will in most cases be obsolete by the time it is processed. Multi-sensor fusion techniques which combine inertial measurements with passive vision techniques and/or LIDAR have achieved breakthroughs in the ability to maintain accurate state estimates without the use of external positioning sensors. In this paper, we survey algorithmic approaches to exploiting sensors with a wide range of nonlinear dynamics using filter- and bundle-adjustment-based approaches for state estimation and optimal control. From this foundation, we propose a biologically-inspired framework for incorporating the human operator in the loop as a privileged sensor in a combined human/autonomy paradigm.

  12. 3D reconstruction and restoration monitoring of sculptural artworks by a multi-sensor framework.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2012-01-01

    Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high-resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free-form artworks. The structured light scanner provides high-resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referencing metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been demonstrated through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork has been a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079
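
    The sketch below shows the basic multi-view alignment step described above: a rotation-translation (4x4 homogeneous) matrix, as would be measured by the tracking sensor, is applied to bring a range map from its local view frame into the global reference frame. The matrix and points are synthetic placeholders, not data from the framework.

    # Applying a rigid view-to-global transform to a range map (illustrative values).
    import numpy as np

    def to_global(points_local, T_view_to_global):
        """points_local: (N, 3) array in the scanner's view frame; T: 4x4 homogeneous matrix."""
        homogeneous = np.hstack([points_local, np.ones((len(points_local), 1))])
        return (homogeneous @ T_view_to_global.T)[:, :3]

    # Example: a view rotated 90 degrees about Z and shifted by (1, 2, 0.5).
    T = np.eye(4)
    T[:3, :3] = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    T[:3, 3] = [1.0, 2.0, 0.5]
    range_map = np.array([[0.1, 0.0, 1.0], [0.2, 0.0, 1.0]])
    print(to_global(range_map, T))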

  13. A multi-sensor remote sensing approach for measuring primary production from space

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine

    1989-01-01

    It is proposed to develop a multi-sensor remote sensing method for computing marine primary productivity from space, based on the capability to measure the primary ocean variables which regulate photosynthesis. The three variables and the sensors which measure them are: (1) downwelling photosynthetically available irradiance, measured by the VISSR sensor on the GOES satellite, (2) sea-surface temperature from AVHRR on NOAA series satellites, and (3) chlorophyll-like pigment concentration from the Nimbus-7/CZCS sensor. These and other measured variables would be combined within empirical or analytical models to compute primary productivity. With this proposed capability of mapping primary productivity on a regional scale, we could begin realizing a more precise and accurate global assessment of its magnitude and variability. Applications would include supplementation and expansion on the horizontal scale of ship-acquired biological data, which is more accurate and which supplies the vertical components of the field, monitoring oceanic response to increased atmospheric carbon dioxide levels, correlation with observed sedimentation patterns and processes, and fisheries management.

  14. a Meteorological Risk Assessment Method for Power Lines Based on GIS and Multi-Sensor Integration

    NASA Astrophysics Data System (ADS)

    Lin, Zhiyong; Xu, Zhimin

    2016-06-01

    Power lines, exposed to the natural environment, are vulnerable to various kinds of meteorological factors. Traditional research mainly deals with the influence of a single meteorological condition on the power line and lacks a comprehensive evaluation and analysis of the combined effects of multiple meteorological factors. In this paper, we use multiple meteorological monitoring datasets obtained by multi-sensors to implement meteorological risk assessment and early warning for power lines. Firstly, we generate a meteorological raster map from discrete meteorological monitoring data using spatial interpolation. Secondly, an expert-scoring-based analytic hierarchy process is used to compute the power line risk index for all kinds of meteorological conditions and to establish a mathematical model of meteorological risk. By adopting this model in the raster calculator of ArcGIS, we obtain a raster map showing the overall meteorological risk for the power line. Finally, by overlaying the power line buffer layer on that raster map, we obtain the exact risk index around a certain part of the power line, which provides significant guidance for power line risk management. In the experiment, based on five kinds of observation data gathered from meteorological stations in Guizhou Province of China, including wind, lightning, rain, ice, and temperature, we carry out the meteorological risk analysis for real power lines, and the experimental results prove the feasibility and validity of our proposed method.
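
    A hedged sketch of the overlay computation described above follows: interpolated meteorological raster layers are combined into a single risk raster with AHP-style weights, and the risk is then read along a power-line buffer mask. The weights, layer names, and synthetic rasters are placeholders, not the study's expert-derived values or GIS data.

    # Weighted raster overlay for a line-corridor risk index (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    shape = (100, 100)
    layers = {                                   # per-factor risk indices scaled to 0..1 (assumed)
        "wind": rng.random(shape),
        "lightning": rng.random(shape),
        "rain": rng.random(shape),
        "ice": rng.random(shape),
        "temperature": rng.random(shape),
    }
    weights = {"wind": 0.30, "lightning": 0.25, "rain": 0.20, "ice": 0.15, "temperature": 0.10}

    risk = sum(weights[name] * raster for name, raster in layers.items())

    # Power-line buffer: a 5-cell-wide corridor along one row of the raster.
    buffer_mask = np.zeros(shape, dtype=bool)
    buffer_mask[48:53, :] = True
    print("mean risk along the line corridor:", round(float(risk[buffer_mask].mean()), 3))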

  15. Kokkos Array

    SciTech Connect

    Edwards, Harold Carter; Sunderland, Daniel

    2012-09-12

    The Kokkos Array library implements shared-memory array data structures and parallel task dispatch interfaces for data-parallel computational kernels that are performance-portable to multicore-CPU and manycore-accelerator (e.g., GPGPU) devices.

  16. Imaging of Large Earthquake Rupture Processes Using Multiple Teleseismic Arrays: Application to the Sumatra-Andaman Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Ohrnberger, M.; Krüger, F.

    2005-12-01

    The spatial extent of large earthquake ruptures is usually indirectly inferred from aftershock distributions or by waveform inversion techniques. In this work we present a method which allows the direct estimation of the spatio-temporal characteristics of large earthquake rupture processes. The technique exploits the high-quality records from the stations of the global broadband network using a simple, yet efficient, migration technique. In particular, we combine coherency and beam-power measures which are obtained from curved wavefront stacking of the direct P wave at multiple large aperture arrays surrounding the source region at tele-seismic distances. Applying this method to the Mw=9.3 Sumatra earthquake from 26/12/2004 and the subsequent Nias earthquake from 28/03/2005 (Mw=8.7), we show that it is possible to track the focus of the most coherent/largest energy release in space and time. For the Sumatra event, we confirm the overall extent of the rupture length being in the order of 1150 km. The rupture front propagated during a time span of at least 480-500 s following the trench geometry from the northern tip of Sumatra to the Andaman Islands region. A visualization of the coherent energy accumulation over time suggests the existence of slow after-slip in the northern part of the rupture after the main rupture front has passed. However, due to the interference of large later phases it is not possible to determine whether this afterslipping event persists much longer than the overall duration of the rupture. The final areal estimate of cumulative energy release is in full agreement with the aftershock distribution observed in the months following this earthquake. Including a number of additional seismic phases (e.g. pP, sP) into the migration scheme, it seems feasible for this event to constrain the depth extent of the rupture. For the Nias earthquake we observe unilateral propagation of the rupture in south-eastern direction starting from an area south

  17. Systolic arrays

    SciTech Connect

    Moore, W.R.; McCabe, A.P.H.; Urquhart, R.B.

    1987-01-01

    Selected Contents of this book are: Efficient Systolic Arrays for the Solution of Toeplitz Systems, The Derivation and Utilization of Bit Level Systolic Array Architectures, An Efficient Systolic Array for Distance Computation Required in a Video-Codec Based Motion-Detection, On Realizations of Least-Squares Estimation and Kalman Filtering by Systolic Arrays, and Comparison of Systolic and SIMD Architectures for Computer Vision Computations.

  18. Nanocylinder arrays

    DOEpatents

    Tuominen, Mark; Schotter, Joerg; Thurn-Albrecht, Thomas; Russell, Thomas P.

    2009-08-11

    Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores that, in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.

  19. Nanocylinder arrays

    DOEpatents

    Tuominen, Mark; Schotter, Joerg; Thurn-Albrecht, Thomas; Russell, Thomas P.

    2007-03-13

    Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores that, in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.

  20. Daily life event segmentation for lifestyle evaluation based on multi-sensor data recorded by a wearable device.

    PubMed

    Li, Zhen; Wei, Zhiqiang; Jia, Wenyan; Sun, Mingui

    2013-01-01

    In order to evaluate people's lifestyle for health maintenance, this paper presents a segmentation method based on multi-sensor data recorded by a wearable computer called eButton. This device is capable of recording more than ten hours of data continuously each day in multimedia forms. Automatic processing of the recorded data is a significant task. We have developed a two-step summarization method to segment large datasets automatically. At the first step, motion sensor signals are utilized to obtain candidate boundaries between different daily activities in the data. Then, visual features are extracted from images to determine final activity boundaries. It was found that some simple signal measures such as the combination of a standard deviation measure of the gyroscope sensor data at the first step and an image HSV histogram feature at the second step produces satisfactory results in automatic daily life event segmentation. This finding was verified by our experimental results. PMID:24110323
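
    The two-step scheme (motion-based candidate boundaries, then image-based confirmation) can be sketched as follows. The window length, thresholds, and toy data below are assumptions made only for illustration; just the general idea of a gyroscope standard-deviation measure followed by an HSV-histogram comparison follows the abstract.

```python
import numpy as np

def candidate_boundaries(gyro, fs, win_s=10.0, factor=1.5):
    """Step 1: boundaries where the windowed gyroscope standard deviation jumps."""
    win = int(win_s * fs)
    stds = np.array([gyro[i * win:(i + 1) * win].std() for i in range(len(gyro) // win)])
    jumps = np.abs(np.diff(stds))
    return (np.where(jumps > factor * jumps.mean())[0] + 1) * win   # sample indices

def hsv_hist(img_hsv, bins=8):
    """Normalized HSV histogram of one image (H, W, 3) with channel values in [0, 1]."""
    h, _ = np.histogramdd(img_hsv.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 1), (0, 1), (0, 1)))
    return h.ravel() / h.sum()

def confirm_boundary(img_before, img_after, min_dist=0.3):
    """Step 2: keep a candidate only if the visual scene actually changed."""
    dist = 0.5 * np.abs(hsv_hist(img_before) - hsv_hist(img_after)).sum()
    return dist > min_dist

# Toy data: 30 s of quiet motion followed by 30 s of activity, and two dissimilar images.
fs, rng = 50, np.random.default_rng(0)
gyro = np.concatenate([rng.normal(0, 0.1, 30 * fs), rng.normal(0, 1.0, 30 * fs)])
img_a = rng.random((120, 160, 3))
img_b = np.clip(img_a + 0.4, 0, 1)

for b in candidate_boundaries(gyro, fs):
    print("candidate at sample", b, "-> confirmed:", confirm_boundary(img_a, img_b))
```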

  1. Multi-sensor Testing for Automated Rendezvous and Docking

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Carrington, Connie K.

    2008-01-01

    During the past two years, many sensors have been tested in an open-loop fashion in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL) to both determine their suitability for use in Automated Rendezvous and Docking (AR&D) systems and to ensure the test facility is prepared for future multi-sensor testing. The primary focus of this work was in support of the CEV AR&D system, because the AR&D sensor technology area was identified as one of the top risks in the program. In 2006, four different sensors were tested individually or in a pair in the MSFC FRL. In 2007, four sensors, two each of two different types, were tested simultaneously. In each set of tests, the target was moved through a series of pre-planned trajectories while the sensor tracked it. In addition, a laser tracker "truth" sensor also measured the target motion. The tests demonstrated the functionality of testing four sensors simultaneously as well as the capabilities (both good and bad) of all of the different sensors tested. This paper outlines the test setup and conditions, briefly describes the facility, summarizes the earlier results of the individual sensor tests, and describes in some detail the results of the four-sensor testing. Post-test analysis includes data fusion by minimum variance estimation and sequential Kalman filtering. This Sensor Technology Project work was funded by NASA's Exploration Technology Development Program.
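
    For the minimum variance estimation mentioned in the post-test analysis, a common textbook form is inverse-variance weighting of independent sensor readings. The sketch below uses hypothetical range readings and noise levels; it is not the project's fusion code.

```python
import numpy as np

def min_variance_fusion(estimates, variances):
    """Inverse-variance weighted combination of independent sensor estimates."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    return fused, float(1.0 / np.sum(w))

# Hypothetical range readings (m) from four sensors with different noise levels.
ranges = [10.12, 10.05, 9.98, 10.30]
variances = [0.02**2, 0.05**2, 0.03**2, 0.10**2]
fused, var = min_variance_fusion(ranges, variances)
print(f"fused range = {fused:.3f} m, std = {var**0.5:.3f} m")
```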

  2. Irma 5.2 multi-sensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles

    2007-04-01

    The Irma synthetic signature prediction code is being developed by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN) to facilitate the research and development of multi-sensor systems. There are over 130 users within the Department of Defense, NASA, Department of Transportation, academia, and industry. Irma began as a high-resolution, physics-based Infrared (IR) target and background signature model for tactical weapon applications and has grown to include: a laser (or active) channel (1990), improved scene generator to support correlated frame-to-frame imagery (1992), and passive IR/millimeter wave (MMW) channel for a co-registered active/passive IR/MMW model (1994). Irma version 5.0 was released in 2000 and encompassed several upgrades to both the physical models and software; host support was expanded to Windows, Linux, Solaris, and SGI Irix platforms. In 2005, version 5.1 was released after an extensive verification and validation of an upgraded and reengineered active channel. Since 2005, the reengineering effort has focused on the Irma passive channel. Field measurements for the validation effort include the unpolarized data collection. Irma 5.2 is scheduled for release in the summer of 2007. This paper will report the validation test results of the Irma passive models and discuss the new features in Irma 5.2.

  3. Irma 5.2 multi-sensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Pau, John

    2008-04-01

    The Irma synthetic signature prediction code is being developed by the Munitions Directorate of the Air Force Research Laboratory (AFRL/RW) to facilitate the research and development of multi-sensor systems. There are over 130 users within the Department of Defense, NASA, Department of Transportation, academia, and industry. Irma began as a high-resolution, physics-based Infrared (IR) target and background signature model for tactical weapon applications and has grown to include: a laser (or active) channel (1990), improved scene generator to support correlated frame-to-frame imagery (1992), and passive IR/millimeter wave (MMW) channel for a co-registered active/passive IR/MMW model (1994). Irma version 5.0 was released in 2000 and encompassed several upgrades to both the physical models and software; host support was expanded to Windows, Linux, Solaris, and SGI Irix platforms. In 2005, version 5.1 was released after extensive verification and validation of an upgraded and reengineered ladar channel. The reengineering effort then shifted focus to the Irma passive channel. Field measurements for the validation effort include both polarized and unpolarized data collection. Irma 5.2 was released in 2007 with a reengineered passive channel. This paper summarizes the capabilities of Irma and the progress toward Irma 5.3, which includes a reengineered radar channel.

  4. Evaluating fusion techniques for multi-sensor satellite image data

    SciTech Connect

    Martin, Benjamin W; Vatsavai, Raju

    2013-01-01

    Satellite image data fusion is a topic of interest in many areas including environmental monitoring, emergency response, and defense. Typically any single satellite sensor cannot provide all of the benefits offered by a combination of different sensors (e.g., high spatial but low spectral resolution vs. low spatial but high spectral resolution, optical vs. SAR). Given the respective strengths and weaknesses of the different types of image data, it is beneficial to fuse many types of image data to extract as much information as possible from the data. Our work focuses on the fusion of multi-sensor image data into a unified representation that incorporates the potential strengths of a sensor in order to minimize classification error. Of particular interest is the fusion of optical and synthetic aperture radar (SAR) images into a single, multispectral image of the best possible spatial resolution. We explore various methods to optimally fuse these images and evaluate the quality of the image fusion by using K-means clustering to categorize regions in the fused images and comparing the accuracies of the resulting categorization maps.
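
    One plausible way to implement the evaluation step, clustering per-pixel spectra of a fused image with K-means and scoring agreement between categorization maps, is sketched below. The toy image, number of clusters, and the label-matching agreement score are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def categorize(image, k=5, seed=0):
    """Cluster per-pixel spectra of a fused multi-band image into k categories."""
    bands, rows, cols = image.shape
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(
        image.reshape(bands, -1).T)
    return labels.reshape(rows, cols)

def agreement(map_a, map_b):
    """Best-case label agreement between two categorization maps (labels matched
    with the Hungarian algorithm before scoring)."""
    k = int(max(map_a.max(), map_b.max())) + 1
    conf = np.zeros((k, k))
    for a, b in zip(map_a.ravel(), map_b.ravel()):
        conf[a, b] += 1
    r, c = linear_sum_assignment(-conf)
    return conf[r, c].sum() / conf.sum()

# Toy fused image: 4 bands (stand-ins for optical + SAR-derived layers), 64 x 64 pixels.
fused = np.random.default_rng(0).random((4, 64, 64))
reference = categorize(fused, seed=1)          # stand-in for a reference categorization
print("agreement:", round(agreement(categorize(fused), reference), 3))
```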

  5. Reliable sources and uncertain decisions in multisensor systems

    NASA Astrophysics Data System (ADS)

    Minor, Christian; Johnson, Kevin

    2015-05-01

    Conflict among information sources is a feature of fused multisource and multisensor systems. Accordingly, the subject of conflict resolution has a long history in the literature of data fusion algorithms such as that of Dempster-Shafer theory (DS). Most conflict resolution strategies focus on distributing the conflict among the elements of the frame of discernment (the set of hypotheses that describe the possible decisions for which evidence is obtained) through rescaling of the evidence. These "closed-world" strategies imply that conflict is due to the uncertainty in evidence sources stemming from their reliability. An alternative approach is the "open-world" hypothesis, which allows for the presence of "unknown" elements not included in the original frame of discernment. Here, conflict must be considered as a result of uncertainty in the frame of discernment, rather than solely the province of evidence sources. Uncertainty in the operating environment of a fused system is likely to appear as an open-world scenario. Understanding the origin of conflict (source versus frame of discernment uncertainty) is a challenging area for research in fused systems. Determining the ratio of these uncertainties provides useful insights into the operation of fused systems and confidence in their decisions for a variety of operating environments. Results and discussion for the computation of these uncertainties are presented for several combination rules with simulated data sets.
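
    For reference, the closed-world behaviour discussed above can be seen in Dempster's rule itself, where the mass assigned to the empty set (the conflict K) is renormalized away. The sketch below uses two hypothetical sensor mass functions over a two-element frame; it is generic DS machinery, not the simulation setup of the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same frame. Masses are dicts
    mapping frozenset hypotheses to mass; returns (combined masses, conflict K)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    for key in combined:                      # closed-world step: renormalize by 1 - K
        combined[key] /= (1.0 - conflict)
    return combined, conflict

# Two hypothetical sensors reporting on the frame {target, clutter}.
T, C = frozenset({"target"}), frozenset({"clutter"})
m1 = {T: 0.7, C: 0.2, T | C: 0.1}
m2 = {T: 0.1, C: 0.8, T | C: 0.1}
fused, K = dempster_combine(m1, m2)
print("conflict K =", round(K, 3),
      {"/".join(sorted(k)): round(v, 3) for k, v in fused.items()})
```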

  6. The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the multisensor coordinated measurement of atmospheric and oceanographic conditions.

    PubMed

    Bahamon, Nixon; Aguzzi, Jacopo; Bernardello, Raffaele; Ahumada-Sempoal, Miguel-Angel; Puigdefabregas, Joan; Cateura, Jordi; Muñoz, Eduardo; Velásquez, Zoila; Cruzado, Antonio

    2011-01-01

    The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the coordinated multisensor measurement of atmospheric and oceanographic conditions has been recently installed (2009) in the Catalan Sea (41°39'N, 2°54'E; Western Mediterranean) and continuously operated (with minor maintenance gaps) until today. This multiparametric platform is moored at 192 m depth, 9.3 km off Blanes harbour (Girona, Spain). It is composed of a buoy holding atmospheric sensors and a set of oceanographic sensors measuring the water conditions over the upper 100 m depth. The station is located close to the head of the Blanes submarine canyon where an important multispecies pelagic and demersal fishery gives the station ecological and economic relevance. The OOCS provides important records on atmospheric and oceanographic conditions, the latter through the measurement of hydrological and biogeochemical parameters, at depths with a time resolution never attained before for this area of the Mediterranean. Twenty four moored sensors and probes operating in a coordinated fashion provide important data on Essential Ocean Variables (EOVs; UNESCO) such as temperature, salinity, pressure, dissolved oxygen, chlorophyll fluorescence, and turbidity. In comparison with other pelagic observatories presently operating in other world areas, OOCS also measures photosynthetic available radiation (PAR) from above the sea surface and at different depths in the upper 50 m. Data are recorded each 30 min and transmitted in real-time to a ground station via GPRS. This time series is published and automatically updated at the frequency of data collection on the official OOCS website (http://www.ceab.csic.es/~oceans). Under development are embedded automated routines for the in situ data treatment and assimilation into numerical models, in order to provide a reliable local marine processing forecast. In this work, our goal is to detail the OOCS multisensor architecture in relation to the coordinated

  7. The New Pelagic Operational Observatory of the Catalan Sea (OOCS) for the Multisensor Coordinated Measurement of Atmospheric and Oceanographic Conditions

    PubMed Central

    Bahamon, Nixon; Aguzzi, Jacopo; Bernardello, Raffaele; Ahumada-Sempoal, Miguel-Angel; Puigdefabregas, Joan; Cateura, Jordi; Muñoz, Eduardo; Velásquez, Zoila; Cruzado, Antonio

    2011-01-01

    The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the coordinated multisensor measurement of atmospheric and oceanographic conditions has been recently installed (2009) in the Catalan Sea (41°39′N, 2°54′E; Western Mediterranean) and continuously operated (with minor maintenance gaps) until today. This multiparametric platform is moored at 192 m depth, 9.3 km off Blanes harbour (Girona, Spain). It is composed of a buoy holding atmospheric sensors and a set of oceanographic sensors measuring the water conditions over the upper 100 m depth. The station is located close to the head of the Blanes submarine canyon where an important multispecies pelagic and demersal fishery gives the station ecological and economic relevance. The OOCS provides important records on atmospheric and oceanographic conditions, the latter through the measurement of hydrological and biogeochemical parameters, at depths with a time resolution never attained before for this area of the Mediterranean. Twenty four moored sensors and probes operating in a coordinated fashion provide important data on Essential Ocean Variables (EOVs; UNESCO) such as temperature, salinity, pressure, dissolved oxygen, chlorophyll fluorescence, and turbidity. In comparison with other pelagic observatories presently operating in other world areas, OOCS also measures photosynthetic available radiation (PAR) from above the sea surface and at different depths in the upper 50 m. Data are recorded each 30 min and transmitted in real-time to a ground station via GPRS. This time series is published and automatically updated at the frequency of data collection on the official OOCS website (http://www.ceab.csic.es/~oceans). Under development are embedded automated routines for the in situ data treatment and assimilation into numerical models, in order to provide a reliable local marine processing forecast. In this work, our goal is to detail the OOCS multisensor architecture in relation to the

  8. The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the multisensor coordinated measurement of atmospheric and oceanographic conditions.

    PubMed

    Bahamon, Nixon; Aguzzi, Jacopo; Bernardello, Raffaele; Ahumada-Sempoal, Miguel-Angel; Puigdefabregas, Joan; Cateura, Jordi; Muñoz, Eduardo; Velásquez, Zoila; Cruzado, Antonio

    2011-01-01

    The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the coordinated multisensor measurement of atmospheric and oceanographic conditions has been recently installed (2009) in the Catalan Sea (41°39'N, 2°54'E; Western Mediterranean) and continuously operated (with minor maintenance gaps) until today. This multiparametric platform is moored at 192 m depth, 9.3 km off Blanes harbour (Girona, Spain). It is composed of a buoy holding atmospheric sensors and a set of oceanographic sensors measuring the water conditions over the upper 100 m depth. The station is located close to the head of the Blanes submarine canyon where an important multispecies pelagic and demersal fishery gives the station ecological and economic relevance. The OOCS provides important records on atmospheric and oceanographic conditions, the latter through the measurement of hydrological and biogeochemical parameters, at depths with a time resolution never attained before for this area of the Mediterranean. Twenty four moored sensors and probes operating in a coordinated fashion provide important data on Essential Ocean Variables (EOVs; UNESCO) such as temperature, salinity, pressure, dissolved oxygen, chlorophyll fluorescence, and turbidity. In comparison with other pelagic observatories presently operating in other world areas, OOCS also measures photosynthetic available radiation (PAR) from above the sea surface and at different depths in the upper 50 m. Data are recorded each 30 min and transmitted in real-time to a ground station via GPRS. This time series is published and automatically updated at the frequency of data collection on the official OOCS website (http://www.ceab.csic.es/~oceans). Under development are embedded automated routines for the in situ data treatment and assimilation into numerical models, in order to provide a reliable local marine processing forecast. In this work, our goal is to detail the OOCS multisensor architecture in relation to the coordinated

  9. Large-Scale Precise Printing of Ultrathin Sol-Gel Oxide Dielectrics for Directly Patterned Solution-Processed Metal Oxide Transistor Arrays.

    PubMed

    Lee, Won-June; Park, Won-Tae; Park, Sungjun; Sung, Sujin; Noh, Yong-Young; Yoon, Myung-Han

    2015-09-01

    Ultrathin and dense metal oxide gate dielectric layers are reported, prepared by simple printing of AlOx and HfOx sol-gel precursors. Large-area printed indium gallium zinc oxide (IGZO) thin-film transistor arrays, which exhibit mobilities >5 cm(2) V(-1) s(-1) and a gate leakage current of 10(-9) A cm(-2) at a very low operation voltage of 2 V, are demonstrated by a continuous, simple bar-coating process. PMID:26222338

  10. Formation of an array of ordered nanocathodes based on carbon nanotubes by nanoimprint lithography and PECVD processes

    SciTech Connect

    Gromov, D. G.; Shulyat'ev, A. S.; Egorkin, V. I.; Zaitsev, A. A.; Skorik, S. N.; Galperin, V. A.; Pavlov, A. A.; Shamanaev, A. A.

    2014-12-15

    Technology for the production of an array of ordered nanoemitters based on carbon nanotubes is developed. The technological parameters of the fabrication of carbon nanotubes are chosen. It is shown that the structures produced exhibit field electron emission with an emission current of 8 μA and a threshold voltage of 80 V.

  11. From GaN to ZnGa(2)O(4) through a low-temperature process: nanotube and heterostructure arrays.

    PubMed

    Lu, Ming-Yen; Zhou, Xiang; Chiu, Cheng-Yao; Crawford, Samuel; Gradečak, Silvija

    2014-01-22

    We demonstrate a method to synthesize GaN-ZnGa2O4 core-shell nanowire and ZnGa2O4 nanotube arrays by a low-temperature hydrothermal process using GaN nanowires as templates. Transmission electron microscopy and X-ray photoelectron spectroscopy results show that a ZnGa2O4 shell forms on the surface of GaN nanowires and that the shell thickness is controlled by the time of the hydrothermal process and thus the concentration of Zn ions in the solution. Furthermore, ZnGa2O4 nanotube arrays were obtained by depleting the GaN core from GaN-ZnGa2O4 core-shell nanowire arrays during the reaction and subsequent etching with HCl. The GaN-ZnGa2O4 core-shell nanowires exhibit photoluminescence peaks centered at 2.60 and 2.90 eV attributed to the ZnGa2O4 shell, as well as peaks centered at 3.35 and 3.50 eV corresponding to the GaN core. We also demonstrate the synthesis of GaN-ZnGa2O4 heterojunction nanowires by a selective formation process as a simple route toward development of heterojunction nanodevices for optoelectronic applications.

  12. Statistical generation of training sets for measuring NO3(-), NH4(+) and major ions in natural waters using an ion selective electrode array.

    PubMed

    Mueller, Amy V; Hemond, Harold F

    2016-05-18

    Knowledge of ionic concentrations in natural waters is essential to understand watershed processes. Inorganic nitrogen, in the form of nitrate and ammonium ions, is a key nutrient as well as a participant in redox, acid-base, and photochemical processes of natural waters, leading to spatiotemporal patterns of ion concentrations at scales as small as meters or hours. Current options for measurement in situ are costly, relying primarily on instruments adapted from laboratory methods (e.g., colorimetric, UV absorption); free-standing and inexpensive ISE sensors for NO3(-) and NH4(+) could be attractive alternatives if interferences from other constituents were overcome. Multi-sensor arrays, coupled with appropriate non-linear signal processing, offer promise in this capacity but have not yet successfully achieved signal separation for NO3(-) and NH4(+) in situ at naturally occurring levels in unprocessed water samples. A novel signal processor, underpinned by an appropriate sensor array, is proposed that overcomes previous limitations by explicitly integrating basic chemical constraints (e.g., charge balance). This work further presents a rationalized process for the development of such in situ instrumentation for NO3(-) and NH4(+), including a statistical-modeling strategy for instrument design, training/calibration, and validation. Statistical analysis reveals that historical concentrations of major ionic constituents in natural waters across New England strongly covary and are multi-modal. This informs the design of a statistically appropriate training set, suggesting that the strong covariance of constituents across environmental samples can be exploited through appropriate signal processing mechanisms to further improve estimates of minor constituents. Two artificial neural network architectures, one expanded to incorporate knowledge of basic chemical constraints, were tested to process outputs of a multi-sensor array, trained using datasets of varying degrees of
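
    One way to realize a statistically informed training set of the kind described, exploiting covariance among constituents while respecting charge balance, is sketched below. The ion list, lognormal parameters, and the choice of chloride as the balancing anion are illustrative assumptions, not the paper's design.

```python
import numpy as np

def sample_training_targets(n, seed=0):
    """Covarying lognormal draws (meq/L) for NO3-, NH4+, Na+, Ca2+; Cl- closes the
    charge balance. Means, variances and the 0.6 correlation are assumptions."""
    rng = np.random.default_rng(seed)
    mean_log = np.log([0.05, 0.02, 0.5, 0.4])
    cov_log = 0.25 * (np.full((4, 4), 0.6) + 0.4 * np.eye(4))
    no3, nh4, na, ca = np.exp(rng.multivariate_normal(mean_log, cov_log, size=n)).T
    cl = nh4 + na + ca - no3                 # balancing anion (all values in equivalents)
    keep = cl > 0                            # drop the rare physically impossible draws
    return np.column_stack([no3, nh4, na, ca, cl])[keep]

targets = sample_training_targets(500)
signs = np.array([-1.0, 1.0, 1.0, 1.0, -1.0])   # anion/cation signs for meq/L values
print(targets.shape, "max charge imbalance:", float(np.abs(targets @ signs).max()))
```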

  13. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
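
    The core idea, map and reduce steps that exchange bundles of named numeric arrays rather than key/value tuples, can be mimicked in a few lines of plain Python. The sketch below is only a toy stand-in for that programming model (SciReduce itself is not shown); the granule contents and grid sizes are assumptions.

```python
import numpy as np
from functools import reduce

def map_granule(granule):
    """Map step: one granule -> a bundle of named arrays (partial sums for a mean)."""
    temp = granule["temperature"]                       # e.g., a regridded swath
    return {"sum": temp.sum(axis=0),
            "count": np.full(temp.shape[1:], temp.shape[0], dtype=float)}

def reduce_bundles(a, b):
    """Reduce step: combine two bundles by adding their like-named arrays."""
    return {k: a[k] + b[k] for k in a}

# Toy granules standing in for per-day swaths already regridded to a 10 x 10 grid.
rng = np.random.default_rng(0)
granules = [{"temperature": 250 + 30 * rng.random((50, 10, 10))} for _ in range(8)]

total = reduce(reduce_bundles, (map_granule(g) for g in granules))
mean_field = total["sum"] / total["count"]
print(mean_field.shape, round(float(mean_field.mean()), 2))
```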

  14. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Hua, H.

    2012-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci

  15. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    SciTech Connect

    Baker, J.E.

    1993-05-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on the relative information content of the original images compared with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel- rather than global-level features.

  16. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    SciTech Connect

    Baker, J.E.

    1993-05-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on the relative information content of the original images compared with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel- rather than global-level features.

  17. BreedVision--a multi-sensor platform for non-destructive field-based phenotyping in plant breeding.

    PubMed

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-02-27

    Achieving food and energy security for an increasing world population, likely to exceed nine billion by 2050, represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors like light curtain imaging, 3D Time-of-Flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information of the plants. The study specifies: the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example for a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012 employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all different types of sensors, which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits like plant moisture content, lodging, tiller density or biomass yield, and thus, represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies.

  18. BreedVision--a multi-sensor platform for non-destructive field-based phenotyping in plant breeding.

    PubMed

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-01-01

    Achieving food and energy security for an increasing world population, likely to exceed nine billion by 2050, represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors like light curtain imaging, 3D Time-of-Flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information of the plants. The study specifies: the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example for a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012 employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all different types of sensors, which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits like plant moisture content, lodging, tiller density or biomass yield, and thus, represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies. PMID:23447014

  19. Automated calibration methods for robotic multisensor landmine detection

    NASA Astrophysics Data System (ADS)

    Keranen, Joe G.; Miller, Jonathan; Schultz, Gregory; Topolosky, Zeke

    2007-04-01

    Both force protection and humanitarian demining missions require efficient and reliable detection and discrimination of buried anti-tank and anti-personnel landmines. Widely varying surface and subsurface conditions, mine types and placement, as well as environmental regimes challenge the robustness of the automatic target recognition process. In this paper we present applications created for the U.S. Army Nemesis detection platform. Nemesis is an unmanned rubber-tracked vehicle-based system designed to eradicate a wide variety of anti-tank and anti-personnel landmines for humanitarian demining missions. The detection system integrates advanced ground penetrating synthetic aperture radar (GPSAR) and electromagnetic induction (EMI) arrays, highly accurate global and local positioning, and on-board target detection/classification software on the front loader of a semi-autonomous UGV. An automated procedure is developed to estimate the soil's dielectric constant using surface reflections from the ground penetrating radar. The results have implications not only for calibration of system data acquisition parameters, but also for user awareness and tuning of automatic target recognition detection and discrimination algorithms.
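
    A common surface-reflection approach to this kind of dielectric estimate compares the soil-surface reflection amplitude with that of a metal plate (a near-perfect reflector) and inverts the normal-incidence reflection coefficient. The sketch below shows that arithmetic with hypothetical picked amplitudes; it is not necessarily the exact procedure used on the Nemesis platform.

```python
def soil_permittivity(a_soil, a_metal):
    """Relative permittivity from the surface-reflection amplitude ratio:
    |R| = a_soil / a_metal, then eps_r = ((1 + |R|) / (1 - |R|))**2 for a
    normal-incidence air/soil interface."""
    r = abs(a_soil) / abs(a_metal)
    return ((1.0 + r) / (1.0 - r)) ** 2

# Hypothetical picked amplitudes from the GPR traces.
print(round(soil_permittivity(a_soil=0.42, a_metal=1.0), 2))   # about 6.0
```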

  20. Multi-Sensor Analysis of Overshooting Tops in Tornadic Storms

    NASA Astrophysics Data System (ADS)

    Magee, N. B.; Goldberg, R.; Hartline, M.

    2012-12-01

    The disastrous 2011 tornado season focused much attention on the ~75% false alarm rate for NWS-issued tornado warnings. Warnings are correctly issued for ~80% of verified tornadoes, but the false alarm rate has plateaued near 75%. Any additional clues that may signal tornadogenesis would be of great benefit to the public welfare. We have performed statistical analyses of the structure and time evolution of convective overshooting tops for tornadic storms occurring in the continental United States since 2006. An amalgam of case studies and theory has long suggested that overshooting tops may often collapse just prior to the onset of tornado touchdown. Our new results suggest that this view is supported by a broad set of new statistical evidence. Our approach to the analysis makes use of a high-resolution, multi-sensor data set and seeks to gather statistics on a large set of storms. Records of WSR-88D NEXRAD radar Enhanced-Resolution Echo Tops (a product available since 2009) have been analyzed for an hour prior to and following touchdown of all EF1 and stronger storms. In addition, a coincidence search has been performed for the NASA A-Train satellite suite and tornadic events since 2006. Although the paths of the polar-orbiting satellites do not aid in analyses of temporal storm-top evolution, Aqua-MODIS, CALIPSO, and CloudSat have provided a detailed structural picture of overshooting tops in tornadic and non-tornadic supercell thunderstorms. (Figure: 250 m resolution Aqua-MODIS image at 1950Z on 4/27/2011, color-enhanced to emphasize overshooting tops during the tornado outbreak.)

  1. Resolution and signal-to-noise ratio improvement in confocal fluorescence microscopy using array detection and maximum-likelihood processing

    NASA Astrophysics Data System (ADS)

    Kakade, Rohan; Walker, John G.; Phillips, Andrew J.

    2016-08-01

    Confocal fluorescence microscopy (CFM) is widely used in biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. Using this recorded data an attempt is then made to recover the object from the whole set of recorded photon array data; in this paper maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.
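
    A standard maximum-likelihood estimator for Poisson-noise imaging data is the Richardson-Lucy iteration, shown below for a toy two-point object and an assumed Gaussian effective PSF. This is only a generic ML sketch; the paper's estimator operates on the full set of array-detector measurements and may differ in detail.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, n_iter=50):
    """Maximum-likelihood (Poisson-noise) object estimate via the Richardson-Lucy
    update, assuming a shift-invariant effective PSF."""
    est = np.full(data.shape, float(data.mean()))
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = data / np.clip(blurred, 1e-12, None)
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est

# Toy object: two point emitters; the system's effective PSF assumed Gaussian.
x = np.arange(-7, 8)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 4.0)
psf /= psf.sum()
obj = np.zeros((64, 64))
obj[30, 30] = obj[30, 36] = 100.0
lam = np.clip(fftconvolve(obj, psf, mode="same"), 0, None)
data = np.random.default_rng(0).poisson(lam).astype(float)
rec = richardson_lucy(data, psf)
print("brightest columns in row 30:", np.sort(np.argsort(rec[30])[-2:]))   # expect ~[30, 36]
```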

  2. Tantalum (oxy)nitrides nanotube arrays for the degradation of atrazine in vis-Fenton-like process.

    PubMed

    Du, Yingxun; Zhao, Lu; Chang, Yuguang; Su, Yaling

    2012-07-30

    In order to overcome the limitation of the application of nanoparticles, tantalum (oxy)nitride nanotube arrays on a Ta foil were synthesized and introduced into a vis (visible light)-Fenton-like system to enhance the degradation of atrazine. First, the anodization of tantalum foil in a mild electrolyte solution containing ethylene glycol and water (v:v = 2:1) plus 0.5 wt.% NH(4)F produced tantala nanotubes with an average diameter of 30 nm and a length of approximately 1 μm. Then the nitridation of the tantala nanotube arrays resulted in the replacement of O atoms by N atoms to form tantalum (oxy)nitrides (TaON and Ta(3)N(5)), as testified by XRD and XPS analyses. The synthesized tantalum (oxy)nitride nanotubes absorb well in the visible region up to 600 nm. Under visible light, the tantalum (oxy)nitride nanotube arrays were catalytically active for Fe(3+) reduction. With tantalum (oxy)nitride nanotube arrays, the degradation of atrazine and the formation of the intermediates in the vis/Fe(3+)/H(2)O(2) system were significantly accelerated. This was explained by the higher concentration of Fe(2+) and thus the faster decomposition of H(2)O(2) with the tantalum (oxy)nitride nanotubes. In addition, the tantalum (oxy)nitride nanotubes exhibited stable performance during atrazine degradation over three runs. The good performance and stability of the tantalum (oxy)nitride nanotube film, together with its convenient separation, suggest that this film is a promising catalyst for vis-Fenton-like degradation.

  3. Non-invasive continuous glucose monitoring with multi-sensor systems: a Monte Carlo-based methodology for assessing calibration robustness.

    PubMed

    Zanon, Mattia; Sparacino, Giovanni; Facchinetti, Andrea; Talary, Mark S; Mueller, Martin; Caduff, Andreas; Cobelli, Claudio

    2013-06-03

    In diabetes research, non-invasive continuous glucose monitoring (NI-CGM) devices represent a new and appealing frontier. In recent years, several multi-sensor devices for NI-CGM have been proposed, which exploit sensors measuring phenomena of a different nature, capturing not only glucose-related signals but also signals reflecting possible perturbing processes (temperature, blood perfusion). Estimation of glucose levels is then obtained by combining these signals through a mathematical model which requires an initial calibration step exploiting one reference blood glucose (RBG) sample. Even if promising results have been obtained, especially in hospitalized volunteers, at present the temporal accuracy of NI-CGM sensors may suffer because of environmental and physiological interferences. The aim of this work is to develop a general methodology, based on Monte Carlo (MC) simulation, to assess the robustness of the calibration step used by NI-CGM devices against these disturbances. The proposed methodology is illustrated considering two examples: the first concerns the possible detrimental influence of sweat events, while the second deals with calibration scheduling. For implementing both examples, 45 datasets collected by the Solianis Multisensor system are considered. In the first example, the MC methodology suggests that no further calibration adjustments are needed after the occurrence of sweat events, because the "Multisensor+model" system is able to deal with the disturbance. The second case study shows how to identify the best time interval to update the model's calibration for improving the accuracy of the estimated glucose. The methodology proposed in this work is of general applicability and can be helpful in making those incremental steps in NI-CGM device development needed to further improve their performance.
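
    The flavour of such a Monte Carlo assessment can be conveyed with a heavily simplified model: a linear sensor response, a single-point calibration, and an additive disturbance pulse standing in for a sweat event. Every parameter below is an assumption made for illustration, not a property of the Solianis Multisensor or of the published model.

```python
import numpy as np

def mc_calibration_error(n_runs=1000, disturbance_amp=0.3, seed=0):
    """Monte Carlo check of a one-point calibration against a simulated disturbance.
    The multisensor output is modeled as signal = a*glucose + b plus a Gaussian pulse
    standing in for a sweat event; all parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 8, 97)                          # hours, 5-minute grid
    glucose = 100 + 40 * np.sin(2 * np.pi * t / 6)     # "true" profile (mg/dL)
    errs = []
    for _ in range(n_runs):
        a, b = rng.normal(0.02, 0.002), rng.normal(1.0, 0.1)
        disturb = disturbance_amp * np.exp(-0.5 * ((t - rng.uniform(2, 6)) / 0.3) ** 2)
        signal = a * glucose + b + disturb + rng.normal(0, 0.02, t.size)
        a_hat = (signal[0] - b) / glucose[0]           # one-point calibration (b assumed known)
        estimate = (signal - b) / a_hat
        errs.append(np.mean(np.abs(estimate - glucose) / glucose))
    return np.percentile(errs, [50, 95])               # median and 95th percentile error

print("relative error, median / 95th pct:", mc_calibration_error().round(3))
```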

  4. Integration of fiber-optic sensor arrays into a multi-modal tactile sensor processing system for robotic end-effectors.

    PubMed

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  5. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    PubMed Central

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  6. Integration of fiber-optic sensor arrays into a multi-modal tactile sensor processing system for robotic end-effectors.

    PubMed

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach.

  7. Detection of multiple airborne targets from multisensor data

    NASA Astrophysics Data System (ADS)

    Foltz, Mark A.; Srivastava, Anuj; Miller, Michael I.; Grenander, Ulf

    1995-08-01

    Previously we presented a jump-diffusion based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It constitutes both the infinitesimal, local search via a sample-path-continuous diffusion transform and the larger, global steps through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.
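
    As a generic illustration of FFT-based detection statistics over a discretized search space (not the paper's array models), the sketch below correlates a sensor trace with a target template via the FFT and thresholds the normalized statistic to produce candidate track seeds; the template, trace, and threshold are all hypothetical.

```python
import numpy as np

def matched_filter_detect(signal, template, thresh=5.0):
    """FFT-based correlation of a sensor trace with a target template; returns the
    indices where the normalized detection statistic exceeds the threshold."""
    n = len(signal) + len(template) - 1
    corr = np.fft.irfft(np.fft.rfft(signal, n) * np.conj(np.fft.rfft(template, n)), n)
    stat = (corr - corr.mean()) / corr.std()
    return np.where(stat > thresh)[0]

# Toy trace: noise plus one embedded copy of the template starting at sample 400.
rng = np.random.default_rng(0)
template = np.hanning(32)
trace = rng.standard_normal(1024)
trace[400:432] += 3 * template
print("detections near:", matched_filter_detect(trace, template))   # expect ~400
```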

  8. High density pixel array

    NASA Technical Reports Server (NTRS)

    Wiener-Avnear, Eliezer (Inventor); McFall, James Earl (Inventor)

    2004-01-01

    A pixel array device is fabricated by a laser micro-milling method under strict process control conditions. The device has an array of pixels bonded together with an adhesive filling the grooves between adjacent pixels. The array is fabricated by moving a substrate relative to a laser beam of predetermined intensity at a controlled, constant velocity along a predetermined path defining a set of grooves between adjacent pixels so that a predetermined laser flux per unit area is applied to the material, and repeating the movement for a plurality of passes of the laser beam until the grooves are ablated to a desired depth. The substrate is of an ultrasonic transducer material in one example for fabrication of a 2D ultrasonic phase array transducer. A substrate of phosphor material is used to fabricate an X-ray focal plane array detector.

  9. The M3A multi-sensor buoy network of the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Nittis, K.; Tziavos, C.; Bozzano, R.; Cardin, V.; Thanos, Y.; Petihakis, G.; Schiano, M. E.; Zanon, F.

    2007-05-01

    A network of three multi-sensor timeseries stations able to deliver real time physical and biochemical observations of the upper thermocline has been developed for the needs of the Mediterranean Forecasting System during the MFSTEP project. They follow the experience of the prototype M3A system that was developed during the MFSPP project and has been tested during a pilot pre-operational period of 22 months (2000-2001). The systems integrate sensors for physical (temperature, salinity, turbidity, current speed and direction) as well as optical and chemical observations (dissolved oxygen, chlorophyll-a, PAR, nitrate). The south Aegean system (E1-M3A) follows a modular design using independent mooring lines and collects biochemical data in the upper 100 m and physical data in the upper 500 m of the water column. The south Adriatic buoy system (E2-M3A) uses similar instrumentation but on a single mooring line and also tests a new method of pumping water samples from relatively deep layers, performing analysis in the protected "dry" environment of the buoy interior. The Ligurian Sea system (W1-M3A) is an ideal platform for air-sea interaction processes since it hosts a large number of meteorological sensors while its ocean instrumentation, with real time transmission capabilities, is confined in the upper 50 m layer. Despite their different architecture, the three systems have common sampling strategy, quality control and data management procedures. The network has been operating in the Mediterranean Sea since autumn 2004, collecting timeseries data for calibration and validation of the forecasting system as well as for process studies of regional dynamics.

  10. The M3A multi-sensor buoy network of the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Nittis, K.; Tziavos, C.; Bozzano, R.; Cardin, V.; Thanos, Y.; Petihakis, G.; Schiano, M. E.; Zanon, F.

    2006-08-01

    A network of three multi-sensor timeseries stations able to deliver real time physical and biochemical observations of the upper thermocline has been developed for the needs of the Mediterranean Forecasting System during the MFSTEP project. They follow the experience of the prototype M3A system that was developed during the MFSPP project and has been tested during a pilot pre-operational period of 22 months (2000-2001). The systems integrate sensors for physical (temperature, salinity, turbidity, current speed and direction) as well as optical and chemical observations (dissolved oxygen, chlorophyll-a, PAR, nitrate). The south Aegean system (E1-M3A) follows a modular design using independent mooring lines and collects biochemical data in the upper 100 m and physical data in the upper 500 m of the water column. The south Adriatic buoy system (E2-M3A) uses similar instrumentation but on a single mooring line and also tests a new method of pumping water samples from relatively deep layers, performing analysis in the protected "dry" environment of the buoy interior. The Ligurian Sea system (W1-M3A) is an ideal platform for air-sea interaction processes since it hosts a large number of meteorological sensors while its ocean instrumentation, with real time transmission capabilities, is confined in the upper 50 m layer. Despite their different architecture, the three systems have common sampling strategy, quality control and data management procedures. The network has been operating in the Mediterranean Sea since autumn 2004, collecting timeseries data for calibration and validation of the forecasting system as well as for process studies of regional dynamics.

  11. Laboratory evaluation of dual-frequency multisensor capacitance probes to monitor soil water and salinity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Real-time information on salinity levels and transport of fertilizers is generally missing from soil profile knowledge bases. A dual-frequency multisensor capacitance probe (MCP) is now commercially available for sandy soils that simultaneously monitors volumetric soil water content (VWC, θ) and sa...

  12. Synthesis and study of transparent multicomponent metal oxide for use in multisensor system

    NASA Astrophysics Data System (ADS)

    Abrashova, E. V.; Moshnikov, V. A.; Maraeva, E. V.; Kononova, I. E.; Vorob'ev, D. M.

    2016-08-01

    Thin films based on the SnO2-ZnO-SiO2 metal oxide system were prepared by sol-gel techniques and CVD. During the experiments, the optical and morphological characteristics of the materials were investigated. According to the study, this system is promising for use in transparent multisensor devices.

  13. How Well Do Data from Multisensor Capacitance Probes Represent Plot-Scale-Average Soil Water Contents?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Multisensor capacitance probes have shown great promise in irrigation scheduling, evaluating water needs of plants, estimating soil hydraulic properties, estimating groundwater recharge and infiltration losses, and other soil water-related fields. It is often beneficial to know how representative a...

  14. An adaptive Hidden Markov Model for activity recognition based on a wearable multi-sensor device

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based o...

  15. Design of 3D measurement system based on multi-sensor data fusion technique

    NASA Astrophysics Data System (ADS)

    Zhang, Weiguang; Han, Jun; Yu, Xun

    2009-05-01

    With the rapid development of shape measurement techniques, the multi-sensor approach has become a valid way to improve accuracy, extend the measuring range, reduce occlusion, realize multi-resolution measurement, and increase measuring speed simultaneously. Sensors in a multi-sensor system can have different system parameters, and they may have different measuring ranges and different precision. The light sectioning method is a useful technique for 3D profile measurement: it is insensitive to the surface optical properties of the 3D object and places few demands on the surroundings. A multi-sensor system scheme, which uses the light sectioning method and multi-sensor data fusion techniques, is presented for measuring aviation engine blades and spiral bevel gears. A system model is developed to relate measuring range and precision to the system parameters. The system parameters were set according to system error analysis, measuring range, and precision. The results show that the system is more universal than its predecessor, that the accuracy of the system is about 0.05 mm over a 60 × 60 mm2 measuring range, and that the system successfully measures the aerodynamic profile curve of an aviation engine blade and the tooth profile of a spiral bevel gear with 360° multi-resolution measuring capability.

  16. A high speed networked signal processing platform for multi-element radio telescopes

    NASA Astrophysics Data System (ADS)

    Prasad, Peeyush; Subrahmanya, C. R.

    2011-08-01

    A new architecture is presented for a Networked Signal Processing System (NSPS) suitable for handling the real-time signal processing of multi-element radio telescopes. In this system, a multi-element radio telescope is viewed as an application of a multi-sensor data fusion problem which can be decomposed into a general set of computing and network components, for which a practical and scalable architecture is enabled by current technology. The need for such a system arose in the context of an ongoing program for reconfiguring the Ooty Radio Telescope (ORT) as a programmable 264-element array, which will enable several new observing capabilities for large-scale surveys on this mature telescope. For this application, it is necessary to manage, route and combine large volumes of data whose real-time collation requires large I/O bandwidths to be sustained. Since these are general requirements of many multi-sensor fusion applications, we first describe the basic architecture of the NSPS in terms of a Fusion Tree before elaborating on its application for the ORT. The paper addresses issues relating to high-speed distributed data acquisition, Field Programmable Gate Array (FPGA) based peer-to-peer networks supporting significant on-the-fly processing while routing, and providing a last-mile interface to a typical commodity network like Gigabit Ethernet. The system is fundamentally a pair of cooperative networks, one of which is part of a commodity high-performance computer cluster and the other based on Commercial Off-The-Shelf (COTS) technology with support from software/firmware components in the public domain.

  17. The research of auto-focusing method for the image mosaic and fusion system with multi-sensor

    NASA Astrophysics Data System (ADS)

    Pang, Ke; Yao, Suying; Shi, Zaifeng; Xu, Jiangtao; Liu, Jiangming

    2013-09-01

    In modern image processing, thanks to the development of digital image processing, the focus of a sensor can be set automatically by the digital processing system through computation. On the other hand, focusing all sensors synchronously and consistently is one of the most important factors for image mosaic and fusion processing, especially for a system whose multiple sensors are placed in a line to capture wide-angle video information. Images sampled by sensors with different focal length values increase the complexity of the affine matrix used in the subsequent image mosaic and fusion, potentially reducing the efficiency of the system and consuming more power. Here, a new fast evaluation method based on the gray-value variance of image pixels is proposed to find a common focal length value for all sensors that achieves good image sharpness. For the multi-frame pictures sampled from the different, time-synchronized sensors, the gray-value variances of adjacent pixels are computed to generate one curve per sensor. This curve is the focus measure function, which describes the relationship between image sharpness and the focal length value of the sensor. On the basis of all focus measure functions of all sensors in the image processing system, this paper uses the least squares method to fit the discrete curves, gives one objective function for the multi-sensor system, and then finds the optimal solution corresponding to the extreme value of image sharpness by evaluating the objective function. This optimal focal length value is the common parameter for all sensors in the system. By setting the common focal length value, while preserving image sharpness, the computation of the affine matrix, the core processing of the image mosaic and fusion that stitches the pictures into one wide-angle image, will be
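
    The abstract describes the focus measure as the variance of gray values and the common focal length as the extremum of a least-squares fit over all sensors' focus curves; a minimal sketch of that idea (the data, focal values and quadratic objective below are hypothetical) might look like:

        import numpy as np

        def focus_measure(image):
            """Gray-value variance of an image: higher variance ~ sharper focus."""
            return float(np.var(image.astype(np.float64)))

        # Hypothetical sweep: focus measures of three sensors at five focal settings.
        focal_values = np.array([2.0, 2.5, 3.0, 3.5, 4.0])
        focus_curves = np.array([[0.8, 1.4, 1.9, 1.6, 1.0],
                                 [0.7, 1.2, 1.8, 1.7, 1.1],
                                 [0.9, 1.5, 2.0, 1.5, 0.9]])

        # Least-squares fit of one quadratic objective to all sensors' curves;
        # its vertex is taken as the common focal setting.
        f = np.tile(focal_values, focus_curves.shape[0])
        m = focus_curves.ravel()
        a, b, c = np.polyfit(f, m, deg=2)
        f_common = -b / (2.0 * a)
        print(f"common focal setting: {f_common:.2f}")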

  18. Optical study and ruthenizer (II) N3 dye-sensitized solar cell application of ZnO nanorod-arrays synthesized by combine two-step process

    NASA Astrophysics Data System (ADS)

    Parra, Mohammad Ramzan; Haque, Fozia Z.

    2015-10-01

    Highly dense ZnO nanorod arrays with uniform c-axis growth were successfully synthesized by a combined two-step process: sol-gel spin coating followed by the aqueous solution growth method. The structural and optical properties of the ZnO nanorod arrays were investigated. The X-ray diffraction results revealed that the ZnO nanorod arrays exhibit the wurtzite hexagonal crystal structure with a dominant (002) peak and high crystallinity. Nanorods of 3-4 μm length and 500 nm diameter, with a surface roughness of ˜20 nm, were observed. Furthermore, Raman spectroscopy revealed the E2 peak at ˜438 cm-1, which again corroborated the wurtzite crystal structure assigned to ZnO. The optical transmittance spectrum indicated a transmittance of more than 80% in the visible and infrared (IR) regions, with an optical band-gap energy of ˜3.35 eV. The photoluminescence spectrum showed peaks in the ultraviolet (382.0 nm) and green (524.9 nm) regions, indicating good-quality crystallite formation containing a high density of surface defects, zinc interstitials and oxygen vacancies. A Ruthenizer (II) N3 dye-sensitized solar cell test showed that uniform ZnO nanorod arrays used as the working electrode delivered a short-circuit current density of 3.99 mA/cm2, a fill factor of ˜50% and an overall power conversion efficiency (η) of ˜1.36%, suggesting that they might be a promising electrode material for dye-sensitized solar cell applications.

  19. Autonomous Multi-Sensor Coordination: The Science Goal Monitor

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Grosvenor, Sandy; Jung, John; Hess, Melissa; Jones, Jeremy

    2004-01-01

    Many dramatic earth phenomena are dynamic and coupled. In order to fully understand them, we need to obtain timely, coordinated multi-sensor observations from widely dispersed instruments. Such a dynamic observing system must include the ability to: schedule flexibly and react autonomously to science-user-driven events; understand the higher-level goals of a science-user-defined campaign; and coordinate various space-based and ground-based resources/sensors effectively and efficiently to achieve those goals. In order to capture transient events, such a 'sensor web' system must have an automated reactive capability built into its scientific operations. To do this, we must overcome a number of challenges inherent in infusing autonomy. The Science Goal Monitor (SGM) is a prototype software tool being developed to explore the nature of the automation necessary to enable dynamic observing. The tools being developed in SGM improve our ability to autonomously monitor multiple independent sensors and coordinate reactions to better observe dynamic phenomena. The SGM system enables users to specify what to look for and how to react in descriptive rather than technical terms. The system monitors streams of data to identify occurrences of the key events previously specified by the science user. When an event occurs, the system autonomously coordinates the execution of the users' desired reactions between different sensors. The information can be used to rapidly respond to a variety of fast temporal events. Investigators will no longer have to rely on after-the-fact data analysis to determine what happened. Our paper describes a series of prototype demonstrations that we have developed using SGM, NASA's Earth Observing-1 (EO-1) satellite, and the MODIS instruments on the Earth Observing System's Aqua and Terra spacecraft. Our demonstrations show the promise of coordinating data from different sources, analyzing the data for a relevant event, autonomously updating and rapidly obtaining a follow-on relevant image
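
    As a purely illustrative sketch (not the SGM implementation; the rule, field names and threshold below are hypothetical), the monitor-and-react pattern described above reduces to watching a stream of readings and firing a user-defined reaction when a condition is met:

        from typing import Callable, Iterable

        def monitor(stream: Iterable[dict], condition: Callable[[dict], bool],
                    react: Callable[[dict], None]) -> None:
            """Scan a stream of sensor readings and trigger a reaction on key events."""
            for reading in stream:
                if condition(reading):
                    react(reading)

        # Hypothetical rule: a MODIS hotspot above a radiance threshold triggers
        # a request for a follow-on EO-1 image of the same location.
        readings = [{"lat": -2.1, "lon": 120.4, "radiance": 310.0},
                    {"lat": -2.3, "lon": 120.5, "radiance": 455.0}]
        monitor(readings,
                condition=lambda r: r["radiance"] > 400.0,
                react=lambda r: print(f"request follow-on image at {r['lat']}, {r['lon']}"))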

  20. Low-cost Solar Array Project. Feasibility of the Silane Process for Producing Semiconductor-grade Silicon

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The feasibility of Union Carbide's silane process for commercial application was established. Integrated process designs for an experimental process system development unit and a commercial facility were developed. The corresponding commercial plant economic performance was then estimated.

  1. NDT process using Lamb waves generated/detected by ultrasonic phased array probes for the defect detection in metallic and composite plates

    NASA Astrophysics Data System (ADS)

    Leleux, A.; Micheau, P.; Castaings, M.

    2013-01-01

    A gel-coupled multi-element matrix ultrasonic probe is driven using the phased array principle to launch/detect pure Lamb modes in/from different directions along various types of plates, taking into account the modes' frequency and angular dispersion effects. It allows rapid inspection of large structures from one remote, fixed position of the probe. The set-up and principle of the process are presented, as well as its measured performance in terms of modal selectivity and directivity. Finally, examples of defect detection are shown.
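
    The abstract does not spell out the beam-steering law; as a minimal sketch under the usual phased-array assumption of a linear element row and a single phase velocity (element count, pitch, velocity and angle below are hypothetical), the transmit delays could be computed as:

        import numpy as np

        n_elements = 16
        pitch = 1.0e-3            # element spacing [m] (assumed)
        c_phase = 3000.0          # Lamb-mode phase velocity at the working frequency [m/s] (assumed)
        theta = np.radians(30.0)  # steering direction in the plate, measured from the array axis

        # Delay each element so the wavefronts add up along the chosen direction.
        element_x = np.arange(n_elements) * pitch
        delays = element_x * np.cos(theta) / c_phase
        delays -= delays.min()    # shift so all delays are non-negative
        print(np.round(delays * 1e6, 3), "microseconds")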

  2. Large detector array and real-time processing and elemental image projection of X-ray and proton microprobe fluorescence data

    NASA Astrophysics Data System (ADS)

    Ryan, C. G.; Siddons, D. P.; Moorhead, G.; Kirkham, R.; Dunn, P. A.; Dragone, A.; De Geronimo, G.

    2007-07-01

    A detector concept is described that integrates a large solid-angle detector array developed at Brookhaven National Laboratory and a high-speed pipelined parallel processing engine developed at CSIRO for machine vision, with an embedded implementation of the Dynamic Analysis method for fluorescence spectra deconvolution and image projection, to yield a detection system capable of energy-dispersive detection, spectral deconvolution and real-time elemental imaging at ~10^8 events per second for PIXE elemental imaging using the nuclear microprobe and SXRF elemental imaging using the synchrotron X-ray microprobe.

  3. Damage-free top-down processes for fabricating two-dimensional arrays of 7 nm GaAs nanodiscs using bio-templates and neutral beam etching.

    PubMed

    Wang, Xuan-Yu; Huang, Chi-Hsien; Tsukamoto, Rikako; Mortemousque, Pierre-Andre; Itoh, Kohei M; Ohno, Yuzo; Samukawa, Seiji

    2011-09-01

    The first damage-free top-down fabrication process for a two-dimensional array of 7 nm GaAs nanodiscs was developed by using ferritin (a protein which includes a 7 nm diameter iron core) bio-templates and neutral beam etching. The photoluminescence of GaAs etched with a neutral beam clearly revealed that the process achieves defect-free etching of GaAs. In the bio-template process, to remove the ferritin protein shell without thermal damage to the GaAs, we first developed an oxygen-radical treatment method at a low temperature of 280 °C. The neutral beam then etched the defect-free nanodisc structure into the GaAs using the iron core as an etching mask. As a result, a two-dimensional array of GaAs quantum dots with a diameter of ∼7 nm, a height of ∼10 nm, a high taper angle of 88° and a quantum dot density of more than 7 × 10^11 cm^-2 was successfully fabricated without causing any damage to the GaAs.

  4. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1979-01-01

    The energy consumed in manufacturing silicon solar cell modules was calculated for the current process as well as for the 1982 and 1986 projected processes, and energy payback times for these three sequences are shown. The module manufacturing energy was partitioned in two ways. In the first, the silicon reduction, silicon purification, sheet formation, cell fabrication, and encapsulation energies were found. In the second, the facility, equipment, processing material and direct material lost-in-process energies were apportioned among junction formation processes and the full module manufacturing sequences. A brief methodology for accounting for the energy of silicon wafers lost in processing during cell manufacturing is described.

  5. One-Step and Templateless Electropolymerization Process Using Thienothiophene Derivatives To Develop Arrays of Nanotubes and Tree-like Structures with High Water Adhesion.

    PubMed

    Ramos Chagas, Gabriela; Darmanin, Thierry; Guittard, Frédéric

    2016-08-31

    Here, we report for the first time the possibility of obtaining not only arrays of nanotubes but also tree-like structures with high water adhesion using a one-step and templateless electropolymerization process. Using thienothiophene derivatives, particularly thieno[2,3-b]thiophene (Thienothiophene-1) and thieno[3,2-b]thiophene (Thienothiophene-2), we demonstrate this surface fabrication in an organic solvent (dichloromethane) and without any surfactants. The formation of nanotubes is due to the stabilization by the polymer of gas bubbles produced in situ during the electropolymerization process, and we show that the water content plays an important role in the formation of gas bubbles even if it is not the only parameter. Using cyclic voltammetry as the electropolymerization method, the amount of released gas is more significant, but at constant potential it is much easier to control the nanotube formation. It is also possible to obtain arrays of tree-like structures when electropolymerizing with high deposition charges, and the resulting surfaces have a high θw with extremely high water adhesion even though the polymers are intrinsically hydrophilic (θ(Y)w ≈ 70°). This work is extremely important for potential applications in water transportation and harvesting, oil/water separation membranes, energy systems, and biosensing. PMID:27509408

  6. CdS and CdS/CdSe sensitized ZnO nanorod array solar cells prepared by a solution ions exchange process

    SciTech Connect

    Chen, Ling; Gong, Haibo; Zheng, Xiaopeng; Zhu, Min; Zhang, Jun; Yang, Shikuan; Cao, Bingqiang

    2013-10-15

    Highlights: CdS and CdS/CdSe quantum dots are assembled on ZnO nanorods by an ion exchange process; the CdS/CdSe sensitization of ZnO effectively extends the absorption spectrum; the performance of the ZnO/CdS/CdSe cell is improved by the extended absorption spectrum. Abstract: In this paper, cadmium sulfide (CdS) and cadmium sulfide/cadmium selenide (CdS/CdSe) quantum dots (QDs) are assembled onto ZnO nanorod arrays by a solution ion exchange process for quantum dot-sensitized solar cell applications. The morphology, composition and absorption properties of the different photoanodes were characterized in detail by scanning electron microscopy, transmission electron microscopy, energy-dispersive X-ray spectroscopy and Raman spectroscopy. It is shown that conformal and uniform CdS and CdS/CdSe shells can be grown on ZnO nanorod cores. Quantum dot sensitized solar cells based on ZnO/CdS and ZnO/CdS/CdSe nanocable arrays were assembled with a gold counter electrode and a polysulfide electrolyte solution. The CdS/CdSe sensitization of ZnO effectively extends the absorption spectrum up to 650 nm, which has a remarkable impact on the performance of the photovoltaic device. Preliminary results show a one-fourth improvement in solar cell efficiency.

  7. Irma 5.1 multisensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Edwards, Dave; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles

    2006-05-01

    The Irma synthetic signature prediction code is being developed to facilitate the research and development of multi-sensor systems. Irma was one of the first high resolution, physics-based Infrared (IR) target and background signature models to be developed for tactical weapon applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel, and a Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Irma is currently used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. In 2005, Irma version 5.1 was released to the community. In addition to upgrading the Ladar channel code to an object oriented language (C++) and providing a new graphical user interface to construct scenes, this new release significantly improves the modeling of the ladar channel and

  8. Micro-dent arrays fabricated by a novel net mask laser shock processing on the surface of LY2 aluminum alloy

    NASA Astrophysics Data System (ADS)

    Dai, Feng-Ze; Lu, Jin-Zhong; Zhang, Yong-Kang; Luo, Kai-Yu; Zhang, Lei; Wang, Qing-Wei; Ren, Xu-Dong; Li, Pin

    2012-07-01

    A novel technology called net-mask laser shock processing (NMLSP) was introduced to fabricate micro-dent arrays on the surface of LY2 aluminum alloy. Experimental results showed that the as-fabricated micro-dents, whose diameter and depth were about 230-250 μm and 9.3 μm, respectively, were close to circular although the original shape of the net mask was square. The height of the upwarped area around each micro-dent was about 4 μm. Moreover, the interference of neighboring surface shock waves affects the topography of the micro-dents. A dynamic analysis performed with the ABAQUS/Explicit code revealed the dynamic formation process of the micro-dents fabricated by NMLSP, and the simulation results were largely consistent with the experimental results.

  9. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1979-01-01

    Analyses of slicing processes and junction formation processes are presented. A simple method for evaluating the relative economic merits of competing process options with respect to the cost of energy produced by the system is described. An energy consumption analysis was developed and applied to determine the energy consumption in the solar module fabrication process sequence, from the mining of the SiO2 to shipping. The analysis shows that current technology practice involves inordinate energy use in the purification step and large wastage of the invested energy through losses, particularly poor conversion in slicing, as well as inadequate yields throughout. The cell process energy expenditures already show a downward trend based on increased throughput rates. Larger improvements, however, depend on the introduction of a more efficient purification process and of acceptable ribbon growing techniques.

  10. A Vision for an International Multi-Sensor Snow Observing Mission

    NASA Technical Reports Server (NTRS)

    Kim, Edward

    2015-01-01

    Discussions within the international snow remote sensing community over the past two years have led to encouraging consensus regarding the broad outlines of a dedicated snow observing mission. The primary consensus - that since no single sensor type is satisfactory across all snow types and across all confounding factors, a multi-sensor approach is required - naturally leads to questions about the exact mix of sensors, required accuracies, and so on. In short, the natural next step is to collect such multi-sensor snow observations (with detailed ground truth) to enable trade studies of various possible mission concepts. Such trade studies must assess the strengths and limitations of heritage as well as newer measurement techniques, with an eye toward their natural sensitivity to desired parameters such as snow depth and/or snow water equivalent (SWE) in spite of confounding factors like clouds, lack of solar illumination, forest cover, and topography, as well as their measurement accuracy, temporal and spatial coverage, technological maturity, and cost.

  11. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensor data gathered at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented. PMID:27384568
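
    The paper's algorithm is not reproduced here; as a loose sketch of the idea of combining odor-concentration readings with local airflow (geometry, readings and the upwind step length below are all hypothetical), one crude source estimate is a concentration-weighted centroid shifted upwind:

        import numpy as np

        # Each row: sensor x, y position [m], odor concentration, wind vx, vy [m/s].
        samples = np.array([
            [0.0, 0.0, 0.2, 1.0, 0.1],
            [2.0, 1.0, 0.8, 1.1, 0.0],
            [4.0, 1.5, 0.5, 0.9, 0.2],
        ])
        pos, conc, wind = samples[:, :2], samples[:, 2], samples[:, 3:]

        # Concentration-weighted centroid of the sensor positions ...
        centroid = (conc[:, None] * pos).sum(axis=0) / conc.sum()
        # ... shifted upwind (against the mean wind) by an assumed step length.
        mean_wind = wind.mean(axis=0)
        step = 1.5  # metres, tuning parameter
        estimate = centroid - step * mean_wind / np.linalg.norm(mean_wind)
        print("estimated source position:", estimate)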

  12. A supervised multi-sensor matched filter for the detection of extracellular action potentials.

    PubMed

    Szymanska, Agnieszka F; Doty, Michael; Scannell, Kathryn V; Nenadic, Zoran

    2014-01-01

    Multi-sensor extracellular recording takes advantage of several electrode channels to record from multiple neurons at the same time. However, the resulting low signal-to-noise ratio (SNR) combined with biological noise makes signal detection, the first step of any neurophysiological data analysis, difficult. A matched filter was therefore designed to better detect extracellular action potentials (EAPs) from multi-sensor extracellular recordings. The detector was tested on tetrode data from a locust antennal lobe and assessed against three trained analysts. 25 EAPs and noise samples were selected manually from the data and used for training. To reduce complexity, the filter assumed that the underlying noise in the data was spatially white. The detector performed with an average TP and FP rate of 84.62% and 16.63% respectively. This high level of performance indicates the algorithm is suitable for widespread use.
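
    Under the stated spatially white noise assumption, a multi-channel matched filter essentially correlates each channel with its spike template and sums the outputs; a self-contained toy sketch (templates, noise level and threshold rule below are hypothetical, not the published detector) is:

        import numpy as np

        rng = np.random.default_rng(0)
        n_ch, n_samp, w = 4, 2000, 32                    # channels, samples, template width
        template = np.hanning(w) * np.sin(np.linspace(0, 2 * np.pi, w))  # toy EAP shape
        templates = np.vstack([template * a for a in (1.0, 0.8, 0.5, 0.3)])  # per channel

        data = rng.normal(0.0, 0.2, size=(n_ch, n_samp))
        data[:, 700:700 + w] += templates                # embed one spike at sample 700

        # Matched filtering under white noise: correlate each channel with its
        # template and sum the channel outputs into one detection score.
        score = sum(np.correlate(data[c], templates[c], mode="valid") for c in range(n_ch))
        threshold = score.mean() + 5.0 * score.std()
        detections = np.flatnonzero(score > threshold)
        print("detected spike onsets near samples:", detections[:5])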

  13. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    PubMed Central

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensor data gathered at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented. PMID:27384568

  14. Airborne Multisensor Pod System, Arms control and nonproliferation technologies: Second quarter 1995

    SciTech Connect

    Alonzo, G M; Sanford, N M

    1995-01-01

    This issue focuses on the Airborne Multisensor Pod System (AMPS), which is a collaboration among many of the DOE national laboratories to provide a scientific environment for research on multiple sensors and the new information that can be derived from them. The bulk of the research has been directed at nonproliferation applications, but it has also proven useful in environmental monitoring and assessment, and land/water management. The contents of this issue are: using AMPS technology to detect proliferation and monitor resources; combining multisensor data to monitor facilities and natural resources; planning an AMPS mission; SAR pod produces images day or night, rain or shine; MSI pod combines data from multiple sensors; ESI pod will analyze emissions and effluents; and accessing AMPS information on the Internet.

  15. Microlens arrays

    NASA Astrophysics Data System (ADS)

    Hutley, Michael C.; Stevens, Richard F.; Daly, Daniel J.

    1992-04-01

    Microlenses have been with us for a long time, as indeed the very word lens reminds us. Many early lenses, including those made by Hooke and Leeuwenhoek in the 17th century, were small and resembled lentils. Many languages use the same word for both (French "lentille" and German "Linse"), and the connection is only obscure in English because we use the French word for the vegetable and the German for the optic. Many of the applications for arrays of microlenses are also well established. Lippmann's work on integral photography at the turn of the century required lens arrays and stimulated an interest that is very much alive today. At one stage, lens arrays played an important part in high-speed photography, and various schemes have been put forward to take advantage of the compact imaging properties of combinations of lens arrays. The fact that many of these ingenious schemes have not been developed to their full potential has to a large degree been due to the absence of lens arrays of suitable quality and cost.

  16. Analysis and Evaluation of Processes and Equipment in Tasks 2 and 4 of the Low-cost Solar Array Project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1978-01-01

    The significant economic data for the current production multiblade wafering and inner diameter slicing processes were tabulated and compared to data on the experimental and projected multiblade slurry, STC ID diamond coated blade, multiwire slurry and crystal systems fixed abrasive multiwire slicing methods. Cost calculations were performed for current production processes and for 1982 and 1986 projected wafering techniques.

  17. Multisensor System for Isotemporal Measurements to Assess Indoor Climatic Conditions in Poultry Farms

    PubMed Central

    Bustamante, Eliseo; Guijarro, Enrique; García-Diego, Fernando-Juan; Balasch, Sebastián; Hospitaler, Antonio; Torres, Antonio G.

    2012-01-01

    The rearing of poultry for meat production (broilers) is an agricultural food industry with high relevance to the economy and development of some countries. Periodic episodes of extreme climatic conditions during the summer season can cause high mortality among birds, resulting in economic losses. In this context, ventilation systems within poultry houses play a critical role in ensuring appropriate indoor climatic conditions. The objective of this study was to develop a multisensor system to evaluate the design of the ventilation system in broiler houses. A measurement system equipped with three types of sensors (air velocity, temperature and differential pressure) was designed and built. The system consisted of a laptop, a data acquisition card, a multiplexer module and a set of 24 air temperature, 24 air velocity and two differential pressure sensors. The system was able to acquire up to a maximum of 128 signals simultaneously at 5-second intervals. The multisensor system was calibrated under laboratory conditions and then tested in the field. Field tests were conducted in a commercial broiler farm under four different pressure and ventilation scenarios in two sections within the building. The calibration curves obtained under laboratory conditions showed similar regression coefficients among the temperature, air velocity and pressure sensors and a high goodness of fit (R2 = 0.99) with the reference. Under field test conditions, the multisensor system handled a high number of input signals from different locations with minimal internal delay in acquiring signals. The variation among air velocity sensors was not significant. The developed multisensor system was able to integrate calibrated sensors of temperature, air velocity and differential pressure and operated successfully under different conditions in a mechanically ventilated broiler farm. This system can be used to obtain quasi-instantaneous fields of the air velocity and temperature, as well as differential
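
    The calibration itself is a straightforward linear regression of each sensor against a reference; a minimal sketch (the counts and reference velocities below are hypothetical) of fitting one air-velocity sensor and reporting its goodness of fit is:

        import numpy as np

        reference = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # m/s, reference anemometer
        raw_counts = np.array([102, 198, 305, 401, 497, 603])  # sensor output (hypothetical)

        slope, intercept = np.polyfit(raw_counts, reference, deg=1)
        predicted = slope * raw_counts + intercept
        ss_res = np.sum((reference - predicted) ** 2)
        ss_tot = np.sum((reference - reference.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        print(f"v [m/s] = {slope:.4f} * counts + {intercept:.3f}  (R^2 = {r2:.4f})")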

  18. Quality assessment of crude and processed ginger by high-performance liquid chromatography with diode array detection and mass spectrometry combined with chemometrics.

    PubMed

    Deng, Xianmei; Yu, Jiangyong; Zhao, Ming; Zhao, Bin; Xue, Xingyang; Che, ChunTao; Meng, Jiang; Wang, Shumei

    2015-09-01

    A sensitive, simple, and validated high-performance liquid chromatography with diode array detection and mass spectrometry detection method was developed for three ginger-based traditional Chinese herbal drugs, Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata. Chemometrics methods, such as principal component analysis, hierarchical cluster analysis, and analysis of variance, were also employed in the data analysis. The results clearly revealed significant differences among Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata, indicating variations in their chemical compositions during the processing, which may elucidate the relationship of the thermal treatment with the change of the constituents and interpret their different clinical uses. Furthermore, the sample consistency of Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata can also be visualized by high-performance liquid chromatography with diode array detection and mass spectrometry analysis followed by principal component analysis/hierarchical cluster analysis. The comprehensive strategy of liquid chromatography with mass spectrometry analysis coupled with chemometrics should be useful in quality assurance for ginger-based herbal drugs and other herbal medicines. PMID:26174663

  19. Hybrid simulation of the Z-pinch instabilities for profiles generated in the process of wire array implosion in the Saturn pulsed power generator.

    SciTech Connect

    Coverdale, Christine Anne; Travnicek, P.; Hellinger, P.; Fiala, V.; Leboeuf, J. N.; Deeney, Christopher; Sotnikov, Vladimir Isaakovich

    2005-02-01

    Experimental evidence suggests that the energy balance between the processes in play during wire array implosions is not well understood. In fact, the radiative yields can exceed the implosion kinetic energy by several times. A possible explanation is that the coupling from magnetic energy to kinetic energy, as magnetohydrodynamic plasma instabilities develop, provides additional energy. It is thus important to model the instabilities produced in the post-implosion stage of the wire array in order to determine how the stored magnetic energy can be connected with the radiative yields. To this aim, three-dimensional hybrid simulations have been performed. They are initialized with plasma radial density profiles deduced from recent experiments [C. Deeney et al., Phys. Plasmas 6, 3576 (1999)] that exhibited large x-ray yields, together with the corresponding magnetic field profiles. Unlike previous work, these profiles do not satisfy pressure balance and differ substantially from those of a Bennett equilibrium. They result in faster growth with an associated transfer of magnetic energy to plasma motion and hence kinetic energy.

  20. Quality assessment of crude and processed ginger by high-performance liquid chromatography with diode array detection and mass spectrometry combined with chemometrics.

    PubMed

    Deng, Xianmei; Yu, Jiangyong; Zhao, Ming; Zhao, Bin; Xue, Xingyang; Che, ChunTao; Meng, Jiang; Wang, Shumei

    2015-09-01

    A sensitive, simple, and validated high-performance liquid chromatography with diode array detection and mass spectrometry detection method was developed for three ginger-based traditional Chinese herbal drugs, Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata. Chemometrics methods, such as principal component analysis, hierarchical cluster analysis, and analysis of variance, were also employed in the data analysis. The results clearly revealed significant differences among Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata, indicating variations in their chemical compositions during the processing, which may elucidate the relationship of the thermal treatment with the change of the constituents and interpret their different clinical uses. Furthermore, the sample consistency of Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata can also be visualized by high-performance liquid chromatography with diode array detection and mass spectrometry analysis followed by principal component analysis/hierarchical cluster analysis. The comprehensive strategy of liquid chromatography with mass spectrometry analysis coupled with chemometrics should be useful in quality assurance for ginger-based herbal drugs and other herbal medicines.

  1. Analysis and evaluation of processes and equipment in tasks 2 and 4 of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1978-01-01

    Several experimental and projected Czochralski crystal growing process methods were studied and compared to available operations and cost-data of recent production Cz-pulling, in order to elucidate the role of the dominant cost contributing factors. From this analysis, it becomes apparent that the specific add-on costs of the Cz-process can be expected to be reduced by about a factor of three by 1982, and about a factor of five by 1986. A format to guide in the accumulation of the data needed for thorough techno-economic analysis of solar cell production processes was developed.

  2. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Wolf, M.

    1981-01-01

    The effect of solar cell metallization pattern design on solar cell performance and the costs and performance effects of different metallization processes are discussed. Definitive design rules for the front metallization pattern for large area solar cells are presented. Chemical and physical deposition processes for metallization are described and compared. An economic evaluation of the 6 principal metallization options is presented. Instructions for preparing Format A cost data for solar cell manufacturing processes from UPPC forms for input into the SAMIC computer program are presented.

  3. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Wolf, M.; Goldman, H.

    1981-01-01

    The attributes of the various metallization processes were investigated. It is shown that several metallization process sequences will lead to adequate metallization for large-area, high-performance solar cells at a metallization add-on price in the range of $6 to $12/m squared, or $0.04 to $0.08/W(peak), assuming 15% efficiency. Conduction layer formation by thick-film silver or by tin or tin/lead solder leads to metallization add-on prices significantly above the $6 to $12/m squared range. The wet chemical processes of electroless and electrolytic plating for strike/barrier layer and conduction layer formation, respectively, seem to be most cost effective.

  4. NEUSORT2.0: a multiple-channel neural signal processor with systolic array buffer and channel-interleaving processing schedule.

    PubMed

    Chen, Tung-Chien; Yang, Zhi; Liu, Wentai; Chen, Liang-Gee

    2008-01-01

    An emerging class of neuroprosthetic devices aims to provide aggressive performance by integrating more complicated signal processing hardware into neural recording systems with a large number of electrodes. However, the traditional parallel structure, which duplicates one neural signal processor (NSP) multiple times for multiple channels, imposes a heavy burden on chip area. The serial structure, which sequentially switches the processing task between channels, requires a bulky memory to store neural data and may have a long processing delay. In this paper, a memory hierarchy based on a systolic array buffer is proposed to support signal processing interleaved channel by channel on a cycle basis, matching the data flow of the optimized multiple-channel frontend interface circuitry. The NSP can thus be tightly coupled to the analog frontend interface circuitry and perform signal processing for multiple channels in real time without any bulky memory. Based on our previous one-channel NSP, NEUSORT1.0 [1], the proposed memory hierarchy is realized in NEUSORT2.0 for a 16-channel neural recording system. Compared to 16 instances of NEUSORT1.0, NEUSORT2.0 demonstrates an 81.50% saving in terms of the area × power factor.
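
    The channel-interleaved schedule can be pictured in software as one shared processing routine that is time-multiplexed across channels sample by sample, so only a small per-channel state (rather than a bulk buffer) is kept; a toy sketch (the per-channel routine below is a placeholder, not the NEUSORT2.0 spike processing) is:

        N_CH = 16

        def process_sample(state, x):
            """Placeholder per-channel processing: a running mean updated per sample."""
            state["n"] += 1
            state["mean"] += (x - state["mean"]) / state["n"]
            return state["mean"]

        states = [{"n": 0, "mean": 0.0} for _ in range(N_CH)]

        def on_adc_frame(frame):
            """One ADC frame delivers one new sample per channel; visit channels in turn."""
            return [process_sample(states[ch], frame[ch]) for ch in range(N_CH)]

        for frame in ([float(ch) for ch in range(N_CH)],
                      [float(ch) + 1.0 for ch in range(N_CH)]):
            on_adc_frame(frame)
        print(states[0], states[N_CH - 1])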

  5. Multi-sensor data fusion for measurement of complex freeform surfaces

    NASA Astrophysics Data System (ADS)

    Ren, M. J.; Liu, M. Y.; Cheung, C. F.; Yin, Y. H.

    2016-01-01

    Along with the rapid development of science and technology in fields such as space optics, multi-scale enriched freeform surfaces are widely used to enhance the performance of optical systems in both functionality and size reduction. Multi-sensor technology is considered one of the most promising methods to measure and characterize these surfaces at multiple scales. This paper presents a multi-sensor data fusion based measurement method that purposely extracts the geometric information of the components at different scales and uses it to establish a holistic geometry of the surface via data fusion. To address the key problems of multi-sensor data fusion, an intrinsic feature pattern based surface registration method is developed to transform the measured datasets to a common coordinate frame. A Gaussian zero-order regression filter is then used to separate each measured dataset into its different scales, and the datasets are fused based on an edge intensity data fusion algorithm within the same wavelength band. The fused data at different scales are then merged to form a new surface with holistic multi-scale information. An experimental study is presented to verify the effectiveness of the proposed method.
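
    The scale separation step can be imitated with an ordinary Gaussian low-pass filter; a rough sketch (the surfaces, noise levels and cut-off below are hypothetical, and a plain Gaussian filter stands in for the zero-order regression filter) of taking the form from a coarse sensor and the fine-scale residual from a precise sensor is:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 256)
        true_form = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))

        coarse = true_form + rng.normal(0.0, 0.05, true_form.shape)   # wide-range, noisy sensor
        fine = true_form + rng.normal(0.0, 0.005, true_form.shape)    # precise sensor, already registered

        sigma = 8.0                                     # cut-off in pixels (assumed)
        form = gaussian_filter(coarse, sigma)           # large-scale geometry from coarse data
        texture = fine - gaussian_filter(fine, sigma)   # small-scale detail from fine data
        fused = form + texture                          # holistic multi-scale surface
        print("mean deviation from truth:", np.abs(fused - true_form).mean())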

  6. Dempster-Shafer fusion of multisensor signals in nonstationary Markovian context

    NASA Astrophysics Data System (ADS)

    Boudaren, Mohamed El Yazid; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2012-12-01

    The latest developments in Markov models' theory and their corresponding computational techniques have opened new avenues for image and signal modeling. In particular, the use of the Dempster-Shafer theory of evidence within Markov models has provided keys to several challenging difficulties that conventional hidden Markov models cannot handle. These difficulties are concerned mainly with two situations: multisensor data, where the use of the Dempster-Shafer fusion is unworkable; and nonstationary data, due to the mismatch between the estimated stationary model and the actual data. For each of the two situations, the Dempster-Shafer combination rule has been applied, thanks to the triplet Markov models' formalism, to overcome the drawbacks of the standard Bayesian models. However, so far, both situations have not been considered at the same time. In this article, we propose an evidential Markov chain that uses the Dempster-Shafer combination rule to bring the effect of contextual information into the segmentation of multisensor nonstationary data. We also provide the Expectation-Maximization parameter estimation and maximum posterior marginal restoration procedures. To validate the proposed model, experiments are conducted on synthetic multisensor data and noised images. The obtained segmentation results are then compared to those obtained with conventional approaches to bring out the efficiency of the present model.
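
    For reference, Dempster's rule of combination itself is compact; a minimal sketch over a two-class frame (the mass values below are hypothetical) is:

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two mass functions over frozenset focal elements (Dempster's rule)."""
            fused, conflict = {}, 0.0
            for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
                inter = s1 & s2
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2
            # Normalize by the non-conflicting mass.
            return {s: w / (1.0 - conflict) for s, w in fused.items()}

        A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
        sensor1 = {A: 0.6, B: 0.1, AB: 0.3}   # hypothetical evidence from sensor 1
        sensor2 = {A: 0.5, B: 0.3, AB: 0.2}   # hypothetical evidence from sensor 2
        print(dempster_combine(sensor1, sensor2))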

  7. Multisensor systems for security of critical infrastructures: concept, data fusion, and experimental results

    NASA Astrophysics Data System (ADS)

    Kastek, M.; Dulski, R.; Życzkowski, M.; Szustakowski, M.; Ciurapiński, W.; Firmanty, K.; Pałka, N.; Bieszczad, G.

    2011-08-01

    The paper presents the concept of a multisensor system for perimeter protection, suitable for stationary and moving objects. The system consists of an active ground radar and thermal and visible cameras. The radar allows the system to locate potential intruders and controls an observation area for the system cameras. The multi-sensor system concept ensures a significant improvement in the probability of intruder detection and a reduction of false alarms, thus increasing the functionality and performance of the whole system. Effective detection ranges depend on the quality of the applied sensors and the observed scene itself. One of the most important devices used in such systems is the IR camera. The paper discusses the technical possibilities and limitations of using uncooled IR cameras in such a multi-sensor system for perimeter protection. The role of IR cameras in the system is discussed, as well as the technical possibility of detecting a human being. The operational distances for perimeter protection are rather high considering the performance of commercially available thermal cameras. The required spatial resolutions for detection, recognition and identification were calculated, and the detection ranges were then estimated using NVTherm software. The results of the analysis are presented together with a comparison of exemplary IR cameras.

  8. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.

    PubMed

    Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao

    2015-01-01

    Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low. PMID:26561817
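
    The EKF building block used to handle the nonlinear sonar measurement can be sketched in a few lines; the following toy predict/update cycle for a constant-velocity target observed in range and bearing uses hypothetical noise levels and measurements and is not the paper's PMHT:

        import numpy as np

        dt = 1.0
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        Q = 0.01 * np.eye(4)                              # process noise (assumed)
        R = np.diag([5.0**2, np.radians(1.0)**2])         # range/bearing noise (assumed)

        x = np.array([100.0, 50.0, -2.0, 1.0])            # state [px, py, vx, vy]
        P = np.diag([100.0, 100.0, 4.0, 4.0])

        x = F @ x                                         # predict
        P = F @ P @ F.T + Q

        z = np.array([108.0, np.arctan2(52.0, 98.0)])     # measured range and bearing
        px, py = x[0], x[1]
        r = np.hypot(px, py)
        h = np.array([r, np.arctan2(py, px)])             # predicted measurement
        H = np.array([[px / r, py / r, 0, 0],
                      [-py / r**2, px / r**2, 0, 0]])     # measurement Jacobian
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        innov = z - h
        innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing residual
        x = x + K @ innov                                 # update
        P = (np.eye(4) - K @ H) @ P
        print("updated state:", x)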

  9. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.

    PubMed

    Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao

    2015-11-06

    Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low.

  10. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
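
    The select-then-embed pipeline is easy to reproduce with standard tooling; a minimal sketch (using the UCI iris data as a stand-in for the diesel features and a univariate F-score as a stand-in for the paper's feature-subset score) is:

        from sklearn.datasets import load_iris
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.manifold import TSNE

        # Select a feature subset first, then embed the selected features in 2-D.
        X, y = load_iris(return_X_y=True)
        X_sel = SelectKBest(f_classif, k=2).fit_transform(X, y)
        emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_sel)
        print(emb.shape)   # (150, 2) points, ready for a scatter plot coloured by class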

  11. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter

    PubMed Central

    Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao

    2015-01-01

    Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low. PMID:26561817

  12. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  13. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  14. Multi-sensor for measuring erythemally weighted irradiance in various directions simultaneously

    NASA Astrophysics Data System (ADS)

    Appelbaum, J.; Peleg, I.; Peled, A.

    2015-08-01

    Estimating the ultraviolet-B (UV-B) solar irradiance and its angular distribution is a matter of interest to both research and commercial institutes. A static multi-sensor instrument is developed in this paper for simultaneous measurement of the sky and ground-reflected erythemally weighted UV-B irradiance on multiple inclined surfaces. The instrument employs a pre-developed simple solar irradiance model and a minimum mean square error method to estimate the various irradiance parameters. The multi-sensor instrument comprises a spherically shaped apparatus with the UV-B sensors mounted as follows: seven sky-facing sensors to measure the hemispherical sky irradiance and six sensors facing downwards to measure the reflection from the ground. This work aims to devise and outline an elementary, low-cost multi-sensor instrument. The sensor may usefully serve research, commercial, and medical institutes to sample and measure the UV-B irradiance on horizontal as well as inclined surfaces. The various UV-B calculations for inclined surfaces are aided by the sensor's integrated software.
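
    The minimum mean square error estimation mentioned above can be illustrated with an ordinary linear least-squares fit; in the toy sketch below (the two-component irradiance model, view factors and readings are all hypothetical, not the paper's model), a direct and a diffuse component are recovered from several differently oriented sensors:

        import numpy as np

        # Per-sensor cosine of the solar incidence angle and sky view factor (assumed).
        cos_sun = np.array([0.95, 0.80, 0.55, 0.30, 0.10, 0.0, 0.0])
        view = np.array([1.0, 0.95, 0.90, 0.85, 0.80, 0.75, 0.70])
        readings = np.array([215.0, 185.0, 135.0, 86.0, 48.0, 27.0, 25.0])  # mW/m^2 (hypothetical)

        # Model: reading ~ direct * cos_sun + diffuse * view, solved in the least-squares sense.
        A = np.column_stack([np.clip(cos_sun, 0.0, None), view])
        (direct, diffuse), *_ = np.linalg.lstsq(A, readings, rcond=None)
        print(f"direct ~ {direct:.1f} mW/m^2, diffuse ~ {diffuse:.1f} mW/m^2")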

  15. Classification and Modelling of Urban Micro-Climates Using Multisensoral and Multitemporal Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Bechtel, B.; Langkamp, T.; Böhner, J.; Daneke, C.; Oßenbrügge, J.; Schempp, S.

    2012-07-01

    Remote sensing has been widely used in urban climatology since it has the advantage of providing a simultaneous synoptic view of the full urban surface. Methods include the analysis of surface temperature patterns, spatial (biophysical) indicators for urban heat island modelling, and flux measurements. Another approach is the automated classification of urban morphologies or structural types. In this study, it was tested whether Local Climate Zones (a new typology of thermally 'rather' homogeneous urban morphologies) can be automatically classified from multisensor and multitemporal earth observation data. To that end, a large number of parameters were derived from different datasets, including multitemporal Landsat data and morphological profiles as well as windowed multiband signatures from an airborne IFSAR-DHM. The results for Hamburg, Germany, show that different datasets have high potential for the differentiation of urban morphologies. Multitemporal thermal data performed very well, with up to 96.3 % overall classification accuracy using a neural network classifier. The multispectral data reached 95.1 % and the morphological profiles 83.2 %. The multisensor feature sets reached up to 97.4 % with 100 selected features, but small multisensoral feature sets also reached good results. This shows that microclimatically meaningful urban structures can be classified from different remote sensing datasets. Furthermore, the potential of the parameters for spatiotemporal modelling of the mean urban heat island was tested. For this purpose, a comprehensive mobile measurement campaign with GPS loggers and temperature sensors on public buses was conducted in order to obtain in situ data at high spatial and temporal resolution.

  16. Multi-sensor for measuring erythemally weighted irradiance in various directions simultaneously

    NASA Astrophysics Data System (ADS)

    Appelbaum, J.; Peleg, I.; Peled, A.

    2016-10-01

    Estimating the ultraviolet-B (UV-B) solar irradiance and its angular distribution is a matter of interest to both research and commercial institutes. A static multi-sensor instrument is developed in this paper for simultaneous measurement of the sky and ground-reflected erythemally weighted UV-B irradiance on multiple inclined surfaces. The instrument employs a pre-developed simple solar irradiance model and a minimum mean square error method to estimate the various irradiance parameters. The multi-sensor instrument comprises a spherically shaped apparatus with the UV-B sensors mounted as follows: seven sky-facing sensors to measure the hemispherical sky irradiance and six sensors facing downwards to measure the reflection from the ground. This work aims to devise and outline an elementary, low-cost multi-sensor instrument. The sensor may usefully serve research, commercial, and medical institutes to sample and measure the UV-B irradiance on horizontal as well as inclined surfaces. The various UV-B calculations for inclined surfaces are aided by the sensor's integrated software.

  17. Process research of non-cz silicon material. Low cost solar array project, cell and module formation research area

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Liquid diffusion masks and liquid applied dopants to replace the CVD Silox masking and gaseous diffusion operations specified for forming junctions in the Westinghouse baseline process sequence for producing solar cells from dendritic web silicon were investigated.

  18. Analysis and evaluation of process and equipment in tasks 2 and 4 of the Low Cost Solar Array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1978-01-01

    Several experimental and projected Czochralski crystal growing process methods were studied and compared to available operations and cost-data of recent production Cz-pulling, in order to elucidate the role of the dominant cost contributing factors. From this analysis, it becomes apparent that substantial cost reductions can be realized from technical advancements which fall into four categories: an increase in furnace productivity; the reduction of crucible cost through use of the crucible for the equivalent of multiple state-of-the-art crystals; the combined effect of several smaller technical improvements; and a carry over effect of the expected availability of semiconductor grade polysilicon at greatly reduced prices. A format for techno-economic analysis of solar cell production processes was developed, called the University of Pennsylvania Process Characterization (UPPC) format. The accumulated Cz process data are presented.

  19. Analysis and Evaluation of Processes and Equipment in Tasks 2 and 4 of the Low-cost Solar Array Project

    NASA Technical Reports Server (NTRS)

    Wolf, M.

    1979-01-01

    To facilitate the task of objectively comparing competing process options, a methodology was needed for the quantitative evaluation of their relative cost effectiveness. Such a methodology was developed and is described, together with three examples of its application. The criterion for the evaluation is the cost of the energy produced by the system. The method permits the evaluation of competing design options for subsystems, based on the differences in cost and efficiency of the subsystems, assuming comparable reliability and service life, or of competing manufacturing process options for such subsystems, including solar cells and modules. This process option analysis is based on differences in the cost, yield, and conversion efficiency contributions of the process steps considered.
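
    As a toy illustration of the cost-of-energy criterion, the sketch below compares two hypothetical manufacturing options by the energy cost they imply. The formula and all numbers are illustrative assumptions, not values or methods from the report.

    # Minimal sketch: compare process options by approximate cost of delivered energy.
    def energy_cost(module_cost_per_m2, efficiency, bos_cost_per_m2=80.0,
                    insolation_kwh_per_m2_yr=1800.0, lifetime_yr=20.0):
        """Approximate cost per kWh: area-related cost / lifetime energy per m^2."""
        lifetime_energy = efficiency * insolation_kwh_per_m2_yr * lifetime_yr  # kWh/m^2
        return (module_cost_per_m2 + bos_cost_per_m2) / lifetime_energy        # $/kWh

    option_a = energy_cost(module_cost_per_m2=120.0, efficiency=0.14)  # cheaper, less efficient
    option_b = energy_cost(module_cost_per_m2=150.0, efficiency=0.16)  # costlier, more efficient
    print(f"option A: {option_a:.3f} $/kWh, option B: {option_b:.3f} $/kWh")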

  20. Analysis and evaluation in the production process and equipment area of the Low-Cost Solar Array Project

    SciTech Connect

    Wolf, M.

    1980-07-01

    The solar cell metallization processes show a wide range of technical limitations, which influence solar cell performance. These limitations interact with the metallization pattern design, which is particularly critical for large square or round cells. To lay the basis for a process capability-cost-solar cell performance-value evaluation and trade-off study, the theoretical background of the metallization design-solar cell performance relationship was examined. Conclusions are presented. (WHK)

  1. Low cost solar array project. Experimental process system development unit for producing semiconductor-grade silicon using the silane-to-silicon process

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Technical activities are reported in the design of processes, facilities, and equipment for producing silicon at a rate and price commensurate with production goals for low-cost solar cell modules. The silane-to-silicon process has the potential to provide high-purity polysilicon on a commercial scale at a price of fourteen dollars per kilogram (1980 dollars) by 1986. The commercial process, economic analysis, process support research and development, and quality control are discussed.

  2. Qualitative model-based multisensor data fusion and parameter estimation using ∞-norm Dempster-Shafer evidential reasoning

    NASA Astrophysics Data System (ADS)

    Reece, Steven

    1997-07-01

    This paper is concerned with model-based parameter estimation for noisy processes when the process models are incomplete or imprecise. The underlying representation of our models is qualitative in the sense of Interval Arithmetic, Qualitative Reasoning, and Qualitative Physics from the Artificial Intelligence literature. We adopt a specific qualitative representation, namely that advocated by Kuipers, in which a well-defined mathematical description of a qualitative model is given in terms of operations on intervals of the reals. We investigate a weighted opinion pool formalism for multi-sensor data fusion, develop a definition for unbiased estimation on quantity spaces, and derive a consistent mass assignment function for mean estimators for two-state systems. This is extended to representations involving more than two states by utilizing the relationships between coarse (i.e., two-state) and fine (i.e., N-state) representations explored by Shafer. We then generalize the Dempster-Shafer Theory of Evidence to a finite set of theories and show how an extreme theory can be used to develop minimum-mean-square-error mean estimators applicable to situations with correlated noise. We demonstrate our theory using real data from a mobile robot application that fuses sonar and laser time-of-flight and gyroscope information to estimate surface curvature.
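
    For readers unfamiliar with the evidential-reasoning machinery referred to here, the sketch below applies the standard Dempster rule of combination to two sensors' basic mass assignments over a small frame of discernment. It illustrates ordinary Dempster-Shafer fusion only; it is not the paper's generalized formulation, and the example masses are invented.

    # Minimal sketch: Dempster's rule of combination for two mass functions.
    from itertools import product

    def combine(m1, m2):
        """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
        fused, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                   # mass falling on the empty set
        return {s: w / (1.0 - conflict) for s, w in fused.items()}

    # Two sensors' beliefs about a surface type drawn from {'plane', 'corner', 'edge'}
    m_sonar = {frozenset({'plane'}): 0.6, frozenset({'plane', 'corner'}): 0.3,
               frozenset({'plane', 'corner', 'edge'}): 0.1}
    m_laser = {frozenset({'corner'}): 0.5, frozenset({'plane'}): 0.3,
               frozenset({'plane', 'corner', 'edge'}): 0.2}

    print(combine(m_sonar, m_laser))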

  3. Fast contactless vibrating structure characterization using real time field programmable gate array-based digital signal processing: demonstrations with a passive wireless acoustic delay line probe and vision.

    PubMed

    Goavec-Mérou, G; Chrétien, N; Friedt, J-M; Sandoz, P; Martin, G; Lenczner, M; Ballandras, S

    2014-01-01

    Vibrating mechanical structure characterization is demonstrated using contactless techniques best suited for mobile and rotating equipment. Fast measurement rates are achieved using Field Programmable Gate Array (FPGA) devices as real-time digital signal processors. Two kinds of algorithms are implemented on FPGA and experimentally validated on a vibrating tuning fork. A first application concerns in-plane displacement detection by vision with sampling rates above 10 kHz, thus reaching frequency ranges above the audio range. A second demonstration concerns pulsed-RADAR cooperative target phase detection and is applied to radiofrequency acoustic transducers used as passive wireless strain gauges. In this case, the 250 ksamples/s refresh rate achieved is limited only by the acoustic sensor design, not by the detection bandwidth. These realizations illustrate the efficiency, interest, and potential of FPGA-based real-time digital signal processing for the contactless interrogation of passive embedded probes with high refresh rates. PMID:24517814
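
    The phase-detection idea behind the pulsed-RADAR interrogation can be sketched as quadrature demodulation of the returned burst followed by an arctangent. The carrier frequency, sampling rate, and synthetic echo below are assumptions made for illustration; the paper implements this in FPGA logic, not in NumPy.

    # Minimal sketch: recover the phase of an RF burst by I/Q demodulation.
    import numpy as np

    fs = 100e6                         # sample rate (Hz), hypothetical
    fc = 2.45e6                        # carrier frequency (Hz), hypothetical
    t = np.arange(0, 20e-6, 1 / fs)    # 20 microsecond record (integer number of cycles)

    true_phase = 0.7                   # rad, the quantity a wireless delay-line sensor encodes
    echo = np.cos(2 * np.pi * fc * t + true_phase)

    # Quadrature demodulation followed by averaging (a crude low-pass filter)
    i_comp = np.mean(echo * np.cos(2 * np.pi * fc * t))
    q_comp = np.mean(echo * -np.sin(2 * np.pi * fc * t))

    print("recovered phase (rad):", np.arctan2(q_comp, i_comp))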

  4. Fast contactless vibrating structure characterization using real time field programmable gate array-based digital signal processing: Demonstrations with a passive wireless acoustic delay line probe and vision

    NASA Astrophysics Data System (ADS)

    Goavec-Mérou, G.; Chrétien, N.; Friedt, J.-M.; Sandoz, P.; Martin, G.; Lenczner, M.; Ballandras, S.

    2014-01-01

    Vibrating mechanical structure characterization is demonstrated using contactless techniques best suited for mobile and rotating equipment. Fast measurement rates are achieved using Field Programmable Gate Array (FPGA) devices as real-time digital signal processors. Two kinds of algorithms are implemented on FPGA and experimentally validated on a vibrating tuning fork. A first application concerns in-plane displacement detection by vision with sampling rates above 10 kHz, thus reaching frequency ranges above the audio range. A second demonstration concerns pulsed-RADAR cooperative target phase detection and is applied to radiofrequency acoustic transducers used as passive wireless strain gauges. In this case, the 250 ksamples/s refresh rate achieved is limited only by the acoustic sensor design, not by the detection bandwidth. These realizations illustrate the efficiency, interest, and potential of FPGA-based real-time digital signal processing for the contactless interrogation of passive embedded probes with high refresh rates.

  5. Fabrication of long-focal-length plano-convex microlens array by combining the micro-milling and injection molding processes.

    PubMed

    Chen, Lei; Kirchberg, Stefan; Jiang, Bing-Yan; Xie, Lei; Jia, Yun-Long; Sun, Lei-Lei

    2014-11-01

    A uniform plano-convex spherical microlens array with a long focal length was fabricated in this work by combining the micro-milling and injection molding processes. This paper presents a quantitative study of the effect of the injection molding process parameters on the height uniformity of the microlenses. Variation of the injection process parameters, i.e., barrel temperature, mold temperature, injection speed, and packing pressure, was found to have a significant effect on the height uniformity of the microlenses, with barrel temperature being the most influential. The filling-to-packing switchover point is also critical to the height uniformity. The optimal uniformity was achieved when the polymer melt completely filled, or even slightly overfilled, the mold cavity during the filling stage. In addition, due to the filling resistance, the practical filling-to-packing switchover point can shift with the filling conditions and have a non-negligible effect on the height uniformity of the microlenses. Furthermore, the effect of injection speed on the height uniformity was analyzed in detail. The results indicated that this effect is mainly attributed to two functions of injection speed: shifting the filling-to-packing switchover point and altering the distribution of residual flow stress in the polymer melt. PMID:25402902

  6. Effect of thermal implying during ageing process of nanorods growth on the properties of zinc oxide nanorod arrays

    NASA Astrophysics Data System (ADS)

    Ismail, A. S.; Mamat, M. H.; Malek, M. F.; Abdullah, M. A. R.; Sin, M. D.; Rusop, M.

    2016-07-01

    Undoped and Sn-doped zinc oxide (ZnO) nanostructures were fabricated using a simple sol-gel immersion method at a growth temperature of 95°C. Heat from a hot-plate stirrer was supplied to the solution during the ageing stage of nanorod growth. The results showed a significant decrease in the quality of the layer produced after the immersion process, with the conductivity and porosity of the samples reduced markedly as a result of the applied heat. The structural properties of the samples were characterized using field emission scanning electron microscopy (FESEM), and the electrical properties were characterized using current-voltage (I-V) measurements.

  7. Pacific Array

    NASA Astrophysics Data System (ADS)

    Kawakatsu, H.; Takeo, A.; Isse, T.; Nishida, K.; Shiobara, H.; Suetsugu, D.

    2014-12-01

    Based on our recent results on broadband ocean bottom seismometry, we propose a next generation large-scale array experiment in the ocean. Recent advances in ocean bottom broadband seismometry (e.g., Suetsugu & Shiobara, 2014, Annual Review EPS), together with advances in the seismic analysis methodology, have now enabled us to resolve the regional 1-D structure of the entire lithosphere/asthenosphere system, including seismic anisotropy (both radial and azimuthal), with deployments of ~10-15 broadband ocean bottom seismometers (BBOBSs) (namely "ocean-bottom broadband dispersion survey"; Takeo et al., 2013, JGR; Kawakatsu et al., 2013, AGU; Takeo, 2014, Ph.D. Thesis; Takeo et al., 2014, JpGU). Having ~15 BBOBSs as an array unit for 2-year deployment, and repeating such deployments in a leap-frog way (an array of arrays) for a decade or so would enable us to cover a large portion of the Pacific basin. Such efforts, not only by giving regional constraints on the 1-D structure, but also by sharing waveform data for global scale waveform tomography, would drastically increase our knowledge of how plate tectonics works on this planet, as well as how it worked for the past 150 million years. International collaborations might be sought.

  8. Arrays of stacked metal coordination compounds

    DOEpatents

    Bulkowski, J.E.

    1986-10-21

    A process is disclosed for preparing novel arrays of metal coordination compounds characterized by arrangement of the metal ions, separated by a linking agent, in stacked order one above the other. The process permits great flexibility in the design of the array. For example, layers of different composition can be added to the array at will. 3 figs.

  9. Arrays of stacked metal coordination compounds

    DOEpatents

    Bulkowski, John E.

    1986-01-01

    A process is disclosed for preparing novel arrays of metal coordination compounds characterized by arrangement of the metal ions, separated by a linking agent, in stacked order one above the other. The process permits great flexibility in the design of the array. For example, layers of different composition can be added to the array at will.

  10. Flat-plate solar array project: Experimental process system development unit for producing semiconductor-grade silicon using the silane-to-silicon process

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The process technology for the manufacture of semiconductor-grade silicon in a large commercial plant by 1986, at a price of less than $14 per kilogram of silicon (1975 dollars), is discussed. The engineering design, installation, checkout, and operation of an Experimental Process System Development Unit are described, and quality control of the scaled-up process and an economic analysis of product and production costs are discussed.

  11. Flat-plate solar array project: Experimental process system development unit for producing semiconductor-grade silicon using the silane-to-silicon process

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The engineering design, fabrication, assembly, operation, economic analysis, and process support research and development for an Experimental Process System Development Unit for producing semiconductor-grade silicon using the silane-to-silicon process are reported. The design activity was completed. About 95% of the purchased equipment was received. The draft of the operations manual was about 50% complete, and the design of the free-space system continued. The system for silicon powder transfer, melting, and shotting on a pseudocontinuous basis was demonstrated.

  12. Multi-sensor approach for a satellite detection and characterization of Mediterranean Hurricanes: a case study

    NASA Astrophysics Data System (ADS)

    Laviola, Sante; Valeri, Massimo; Marcello Miglietta, Mario; Levizzani, Vincenzo

    2014-05-01

    The extreme events over the Mediterranean basin are often associated with well-organized mesoscale systems, which usually develop over Northern Africa and intensify in the presence of a warm sea surface and cold air from the north. Although the synoptic conditions are often well known, the physical processes behind the genesis and development of a particular kind of these mesoscale systems, called Medicanes or Tropical-like Cyclones (TLCs), are not well understood. A Medicane is a Mediterranean cyclogenesis with characteristics similar to those of tropical cyclones, such as spiral-like cloud bands and the presence of an "eye". The aim of this study is to improve the current knowledge of the Medicane structure using a satellite multi-sensor approach. Recent studies (Miglietta et al. 2013) based on the numerical model WRF demonstrate that a Medicane structure can be clearly identified by analyzing its thermal symmetry between 600 and 900 hPa: the presence of a warm core uniquely distinguishes Mediterranean TLCs from baroclinic cyclones. The challenge of this study is to describe the physical structure of a Medicane using satellite sensors only; however, in the current version of the algorithm the wind field required to calculate the vorticity parameter is provided by the WRF model. The computational scheme of the algorithm quantifies the external features and the inner properties of a possible TLC: the geometrical symmetry (often, but not always, spiral-shaped), the type and altitude of clouds, and the distribution of precipitation patterns are significant elements for flagging an intense Mediterranean cyclogenesis as a Medicane. The method also takes into account the electrical activity of the storm, in terms of the number of strokes during the last 24 hours, to refine the TLC identification. Keywords: Satellite, Microwave radiometry, Medicane, retrieval methods, Remote sensing. Reference: Miglietta, M. M., S. Laviola, A. Malvaldi, D. Conte, V. Levizzani, and C. Price
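
    The warm-core criterion mentioned above can be illustrated with a simple layer-averaged temperature-anomaly test on gridded data: average the core temperature relative to the surrounding ring over the 600-900 hPa layer and flag a warm core when the anomaly is positive. The grid layout, radii, and synthetic fields below are illustrative assumptions, not the algorithm's actual parameters.

    # Minimal sketch: layer-averaged warm-core test for a candidate Medicane.
    import numpy as np

    def warm_core_anomaly(temp, dist_km, levels_hpa, core_radius=100.0, env_radius=300.0):
        """temp: (n_levels, ny, nx) temperatures; dist_km: (ny, nx) distance from centre."""
        layer = (levels_hpa >= 600) & (levels_hpa <= 900)
        core = dist_km <= core_radius
        ring = (dist_km > core_radius) & (dist_km <= env_radius)
        return float(np.mean(temp[layer][:, core]) - np.mean(temp[layer][:, ring]))

    # Synthetic example: a 3 K warm anomaly inside a 100 km core
    levels = np.array([900, 850, 800, 700, 600, 500])
    y, x = np.mgrid[-20:21, -20:21]
    dist = np.hypot(y, x) * 25.0                       # 25 km grid spacing
    temp = 270.0 + np.zeros((levels.size,) + dist.shape)
    temp[:, dist <= 100.0] += 3.0

    anomaly = warm_core_anomaly(temp, dist, levels)
    print("warm core" if anomaly > 0 else "cold core", f"(anomaly = {anomaly:.2f} K)")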

  13. Autonomous collection of dynamically-cued multi-sensor imagery

    NASA Astrophysics Data System (ADS)

    Daniel, Brian; Wilson, Michael L.; Edelberg, Jason; Jensen, Mark; Johnson, Troy; Anderson, Scott

    2011-05-01

    The availability of imagery simultaneously collected from sensors of disparate modalities enhances an image analyst's situational awareness and expands the overall detection capability to a larger array of target classes. Dynamic cooperation between sensors is increasingly important for the collection of coincident data from multiple sensors either on the same or on different platforms suitable for UAV deployment. Of particular interest is autonomous collaboration between wide area survey detection, high-resolution inspection, and RF sensors that span large segments of the electromagnetic spectrum. The Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) is building sensors with such networked communications capability and is conducting field tests to demonstrate the feasibility of collaborative sensor data collection and exploitation. Example survey / detection sensors include: NuSAR (NRL Unmanned SAR), a UAV compatible synthetic aperture radar system; microHSI, an NRL developed lightweight hyper-spectral imager; RASAR (Real-time Autonomous SAR), a lightweight podded synthetic aperture radar; and N-WAPSS-16 (Nighttime Wide-Area Persistent Surveillance Sensor-16Mpix), a MWIR large array gimbaled system. From these sensors, detected target cues are automatically sent to the NRL/SDL developed EyePod, a high-resolution, narrow FOV EO/IR sensor, for target inspection. In addition to this cooperative data collection, EyePod's real-time, autonomous target tracking capabilities will be demonstrated. Preliminary results and target analysis will be presented.

  14. Integrated residential photovoltaic array development

    NASA Astrophysics Data System (ADS)

    Shepard, N. F., Jr.

    1981-12-01

    An advanced, universally-mountable, integrated residential photovoltaic array concept was defined based upon an in-depth formulation and evaluation of three candidate approaches which were synthesized from existing or proposed residential array concepts. The impact of module circuitry and process sequence is considered and technology gaps and performance drivers associated with residential photovoltaic array concepts are identified. The actual learning experience gained from the comparison of the problem areas of the hexagonal shingle design with the rectangular module design led to what is considered an advanced array concept. Building the laboratory mockup provided actual experience and the opportunity to uncover additional technology gaps.

  15. Integrated residential photovoltaic array development

    NASA Technical Reports Server (NTRS)

    Shepard, N. F., Jr.

    1981-01-01

    An advanced, universally-mountable, integrated residential photovoltaic array concept was defined based upon an in-depth formulation and evaluation of three candidate approaches which were synthesized from existing or proposed residential array concepts. The impact of module circuitry and process sequence is considered and technology gaps and performance drivers associated with residential photovoltaic array concepts are identified. The actual learning experience gained from the comparison of the problem areas of the hexagonal shingle design with the rectangular module design led to what is considered an advanced array concept. Building the laboratory mockup provided actual experience and the opportunity to uncover additional technology gaps.

  16. Low cost solar array project. Cell and module formation research area. Process research of non-CZ silicon material

    NASA Astrophysics Data System (ADS)

    1983-02-01

    Liquid diffusion masks and liquid dopants were investigated to replace the more expensive CVD SiO2 mask and gaseous diffusion processes. Silicon pellets were prepared in the silicon shot tower, and solar cells were fabricated from web grown with the pellets used as replenishment material. Verification runs were made using the boron dopant and liquid diffusion mask materials; the average efficiency of cells produced in these runs was 13%. The relationship between sheet resistivity, temperature, gas flows, and gas composition for the diffusion of the P-8 liquid phosphorus solution was investigated. Solar cells processed from web grown from Si shot material were evaluated, and the results qualified the material produced in the shot tower for use as web furnace feedstock.

  17. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Wolf, M.

    1982-01-01

    It was found that the Solarex metallization design and process selection should be modified to yield substantially higher output from the 10 cm x 10 cm cells, while the Westinghouse design is extremely close to the optimum. In addition, further attention to the Solarex pn junction and base high/low junction formation processes could be beneficial. For future efficiency improvement, it was found that refinement of the various minority carrier lifetime measurement methods is needed, as well as considerably increased sophistication in the interpretation of their results. In addition, it was determined that further experimental investigation of the Auger lifetime is needed to conclusively determine the Auger coefficients for direct Auger recombination at high majority carrier concentrations.

  18. Low cost solar array project. Cell and module formation research area. Process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Liquid diffusion masks and liquid dopants were investigated to replace the more expensive CVD SiO2 mask and gaseous diffusion processes. Silicon pellets were prepared in the silicon shot tower, and solar cells were fabricated from web grown with the pellets used as replenishment material. Verification runs were made using the boron dopant and liquid diffusion mask materials; the average efficiency of cells produced in these runs was 13%. The relationship between sheet resistivity, temperature, gas flows, and gas composition for the diffusion of the P-8 liquid phosphorus solution was investigated. Solar cells processed from web grown from Si shot material were evaluated, and the results qualified the material produced in the shot tower for use as web furnace feedstock.

  19. Silicon materials task of the Low Cost Solar Array Project: Effect of impurities and processing on silicon solar cells

    NASA Technical Reports Server (NTRS)

    Hopkins, R. H.; Davis, J. R.; Rohatgi, A.; Hanes, M. H.; Rai-Choudhury, P.; Mollenkopf, H. C.

    1982-01-01

    The effects of impurities and processing on the characteristics of silicon and terrestrial silicon solar cells were defined in order to develop cost-benefit relationships for the use of cheaper, less pure solar grades of silicon. The concentrations of commonly encountered impurities that can be tolerated in typical p- or n-base solar cells were established, and a preliminary analytical model was then developed from which cell performance could be projected from the kinds and amounts of contaminants in the silicon base material. The impurity database was expanded to include construction materials, and the impurity-performance model was refined to account for additional effects such as base resistivity, grain boundary interactions, thermal processing, synergistic behavior, and nonuniform impurity distributions. A preliminary assessment of the long-term (aging) behavior of impurities was also undertaken.
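
    The kind of projection such an impurity-performance model makes can be sketched with a toy calculation: each metallic impurity shortens the minority-carrier lifetime in proportion to its concentration, lifetimes combine reciprocally, and relative cell performance is taken to scale with the resulting diffusion length. The coefficients and scaling law below are illustrative assumptions, not the report's fitted model.

    # Minimal sketch: toy impurity-to-performance projection for a p-base silicon cell.
    import math

    TAU_0 = 20e-6              # baseline minority-carrier lifetime (s), assumed
    DIFFUSIVITY = 27.0         # electron diffusivity in p-type Si (cm^2/s), typical value
    CAPTURE = {"Ti": 3e-8, "Fe": 5e-10, "Cu": 1e-11}   # lifetime-degradation coefficients
                                                        # (s^-1 per cm^-3), hypothetical

    def relative_performance(impurities_cm3):
        """Relative cell performance for a dict of impurity concentrations (atoms/cm^3)."""
        inv_tau = 1.0 / TAU_0 + sum(CAPTURE[el] * n for el, n in impurities_cm3.items())
        l_diff = math.sqrt(DIFFUSIVITY / inv_tau)              # diffusion length (cm)
        l_ref = math.sqrt(DIFFUSIVITY * TAU_0)                 # uncontaminated reference
        return l_diff / l_ref

    print(relative_performance({"Ti": 1e13, "Fe": 1e14}))      # contaminated feedstock
    print(relative_performance({"Ti": 1e11, "Fe": 1e12}))      # cleaner feedstock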

  20. User-friendly solutions for microarray quality control and pre-processing on ArrayAnalysis.org.

    PubMed

    Eijssen, Lars M T; Jaillard, Magali; Adriaens, Michiel E; Gaj, Stan; de Groot, Philip J; Müller, Michael; Evelo, Chris T

    2013-07-01

    Quality control (QC) is crucial for any scientific method producing data. Applying adequate QC introduces new challenges in the genomics field, where large amounts of data are produced with complex technologies. For DNA microarrays, specific algorithms for QC and pre-processing, including normalization, have been developed by the scientific community, especially for expression chips of the Affymetrix platform. Many of these have been implemented in the statistical scripting language R and are available from the Bioconductor repository. However, their application is hampered by a lack of integrative tools that can be used by users of any experience level. To fill this gap, we developed a freely available tool for QC and pre-processing of Affymetrix gene expression results, extending, integrating, and harmonizing the functionality of Bioconductor packages. The tool can be easily accessed through a wizard-like web portal at http://www.arrayanalysis.org or downloaded for local use in R. The portal provides extensive documentation, including user guides, interpretation help with real output illustrations, and detailed technical documentation. It assists newcomers to the field in performing state-of-the-art QC and pre-processing while offering data analysts an integral open-source package. Providing the scientific community with this easily accessible tool will help improve data quality, data reuse, and the adoption of standards. PMID:23620278