Science.gov

Sample records for multisensor array processing

  1. PROCESSING TECHNIQUES FOR DISCRIMINATION BETWEEN BURIED UXO AND CLUTTER USING MULTISENSOR ARRAY DATA

    EPA Science Inventory

    The overall objective of this project is to develop reliable techniques for discriminating between buried UXO and clutter using multisensor electromagnetic induction sensor array data. The basic idea is to build on existing research which exploits differences in shape between or...

  2. Highly reliable multisensor array (MSA) smart transducers

    NASA Astrophysics Data System (ADS)

    Perotti, José; Lucena, Angel; Mackey, Paul; Mata, Carlos; Immer, Christopher

    2006-05-01

    Many developments in the field of multisensor array (MSA) transducers have taken place in the last few years. Advancements in fabrication technology, such as Micro-Electro-Mechanical Systems (MEMS) and nanotechnology, have made implementation of MSA devices a reality. NASA Kennedy Space Center (KSC) has been developing this type of technology because of the increases in safety, reliability, and performance and the reduction in operational and maintenance costs that can be achieved with these devices. To demonstrate the MSA technology benefits, KSC quantified the relationship between the number of sensors (N) and the associated improvement in sensor life and reliability. A software algorithm was developed to monitor and assess the health of each element and the overall MSA. Furthermore, the software algorithm implemented criteria on how these elements would contribute to the MSA-calculated output to ensure required performance. The hypothesis was that a greater number of statistically independent sensor elements would provide a measurable increase in measurement reliability. A computer simulation was created to answer this question. An array of N sensors underwent random failures in the simulation and a life extension factor (LEF, expressed as a percentage of the life of a single sensor) was calculated by the program. When LEF was plotted as a function of N, a quasiexponential behavior was detected with marginal improvement above N = 30. The hypothesis and follow-on simulation results were then corroborated experimentally. An array composed of eight independent pressure sensors was fabricated. To accelerate sensor life cycle and failure and to simulate degradation over time, the MSA was exposed to an environmental temperature of 125°C. Every 24 hours, the experiment's environmental temperature was returned to ambient temperature (27°C), and the outputs of all the MSA sensor elements were measured. Once per week, the MSA calibration was verified at five different
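
    The abstract above describes a Monte Carlo study relating array size N to a life extension factor (LEF). The sketch below is only an illustration of that idea under stated assumptions that are not from the paper: exponentially distributed single-sensor lifetimes and an array that remains usable until its last element fails.

    ```python
    # Minimal Monte Carlo sketch of a life extension factor (LEF) versus array size N.
    # Assumptions (not from the paper): exponential single-sensor lifetimes; the
    # array stays usable until the last of its N elements fails.
    import numpy as np

    rng = np.random.default_rng(0)

    def life_extension_factor(n_sensors, n_trials=20000, mean_life=1.0):
        """Mean array lifetime expressed as a multiple of one sensor's mean life."""
        lifetimes = rng.exponential(mean_life, size=(n_trials, n_sensors))
        array_life = lifetimes.max(axis=1)    # array "dies" when its last sensor dies
        return array_life.mean() / mean_life  # 1.0 corresponds to a single sensor's life

    for n in (1, 2, 4, 8, 16, 30, 60):
        print(f"N = {n:3d}  LEF = {life_extension_factor(n):.2f}")
    # The curve flattens with increasing N, mirroring the marginal improvement
    # the abstract reports above roughly N = 30.
    ```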

  3. Information coding in artificial olfaction multisensor arrays.

    PubMed

    Albert, Keith J; Walt, David R

    2003-08-15

    High-density sensor arrays were prepared with microbead vapor sensors to explore and compare the information coded in sensor response profiles following odor stimulus. The coded information in the sensor-odor response profiles, which is used for odor discrimination purposes, was extracted from the microsensor arrays via two different approaches. In the first approach, the responses from individual microsensors were separated (decoded array) and independently processed. In the second approach, response profiles from all microsensors within the entire array, i.e., the sensor ensemble, were combined to create one response per odor stimulus (nondecoded array). Although the amount of response data is markedly reduced in the second approach, the system shows comparable odor discrimination rates for the two signal extraction methods. The ensemble approach streamlines system resources without decreasing system performance. These signal compression approaches may simulate or parallel information coding in the mammalian olfactory system. PMID:14632130
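
    The decoded versus nondecoded comparison above can be illustrated with a toy experiment. The sketch below uses synthetic responses and a k-nearest-neighbour classifier, both of which are assumptions for illustration rather than the authors' bead sensors or method; it contrasts per-sensor features with a single summed ensemble response per stimulus.

    ```python
    # Toy contrast of the two read-out schemes: "decoded" (each microsensor kept as a
    # separate feature) vs. "nondecoded" (all sensors summed into one response).
    # Synthetic data only; nothing here reproduces the paper's microbead sensors.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_odors, n_reps, n_sensors, n_frames = 4, 30, 50, 20

    # Each odor gets a characteristic temporal profile per sensor, plus noise per trial.
    profiles = rng.normal(size=(n_odors, n_sensors, n_frames))
    X_decoded, y = [], []
    for odor in range(n_odors):
        for _ in range(n_reps):
            resp = profiles[odor] + 0.8 * rng.normal(size=(n_sensors, n_frames))
            X_decoded.append(resp.ravel())          # decoded: sensors kept separate
            y.append(odor)
    X_decoded = np.array(X_decoded)
    X_ensemble = X_decoded.reshape(len(y), n_sensors, n_frames).sum(axis=1)  # nondecoded

    clf = KNeighborsClassifier(n_neighbors=3)
    for name, X in (("decoded", X_decoded), ("nondecoded", X_ensemble)):
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name:10s} accuracy = {acc:.2f}")   # comparable rates, as the abstract reports
    ```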

  4. Hybrid integration process for the development of multisensor chips

    NASA Astrophysics Data System (ADS)

    Jin, Na; Liu, Weiguo

    A novel hybrid integration process has been developed for integrating a single crystal pyroelectric detector with a readout IC, based on a thinning and anisotropic conduction tape bonding technique. We report our recent progress in applying the hybrid integration process to the fabrication of a multisensor chip with integrated thermal and sound detectors. The sound detector in the multisensor chip is based on thinned single crystal quartz, while the thermal detector makes use of a thinned PLZT ceramic wafer. A membrane transfer process (MTP) was applied for the thinning and integration of the single crystal and ceramic wafers.

  5. Graphene- and graphene oxide- based multisensor arrays for selective gas analysis

    NASA Astrophysics Data System (ADS)

    Lipatov, Alexey; Varezhnikov, Alexey; Sysoev, Victor; Kolmakov, Andrei; Sinitskii, Alexander

    2014-03-01

    Arrays of nearly identical graphene devices on Si/SiO2 exhibit substantial device-to-device variation, even in the case of high-quality chemical vapor deposition (CVD) or mechanically exfoliated graphene. We propose that such device-to-device variation could provide a platform for highly selective multisensor electronic olfactory systems. We fabricated a multielectrode array of CVD graphene devices on a Si/SiO2 substrate and demonstrated that the diversity of these devices is sufficient to reliably discriminate between short-chain alcohols: methanol, ethanol and isopropanol. The diversity of graphene devices on Si/SiO2 could possibly be used to construct multisensor systems trained to recognize other analytes as well. Similar multisensor arrays based on graphene oxide (GO) devices are also capable of discriminating these short-chain alcohols. We will discuss the possibility of chemical modification of GO to further increase the selectivity of GO multisensor arrays.

  6. Breath analysis system for early detection of lung diseases based on multi-sensor array

    NASA Astrophysics Data System (ADS)

    Jeon, Jin-Young; Yu, Joon-Boo; Shin, Jeong-Suk; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Expiratory breath contains various VOCs (volatile organic compounds) produced by the human body. When a certain disease is present, the exhaled breath contains specific VOCs generated by that disease. Many researchers have been actively working to find different types of biomarkers which are characteristic for particular diseases. Research regarding the identification of specific diseases from exhalation is still in progress. The aim of this research is to implement early detection of lung diseases such as lung cancer and COPD (Chronic Obstructive Pulmonary Disease), which ranked sixth among domestic causes of death in 2010, based on a multi-sensor array system. The system has been used to acquire sampled expiratory gas data, and the PCA (Principal Component Analysis) technique was applied to analyze signals from the multi-sensor array. Throughout the experimental trials, a clearly distinguishable difference between lung disease patients and healthy controls was found from the measurement and analysis of their respective expiratory gases.
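
    As a rough illustration of the analysis step named above, the sketch below projects synthetic multi-sensor responses onto their first two principal components and compares group means. The sensor count, group sizes, and data are placeholders, not the study's measurements.

    ```python
    # PCA on simulated multi-sensor array responses: patients get a small per-sensor
    # offset, and the first principal component is checked for group separation.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    n_sensors = 16
    controls = rng.normal(0.0, 1.0, size=(40, n_sensors))
    patients = rng.normal(0.0, 1.0, size=(40, n_sensors)) + rng.normal(1.0, 0.2, n_sensors)

    X = np.vstack([controls, patients])
    labels = np.array(["control"] * 40 + ["patient"] * 40)

    scores = PCA(n_components=2).fit_transform(X)
    for group in ("control", "patient"):
        pc = scores[labels == group]
        print(f"{group:8s} PC1 mean = {pc[:, 0].mean():+.2f}, PC2 mean = {pc[:, 1].mean():+.2f}")
    # A clear offset between group means along PC1 is the kind of separation the
    # abstract reports between lung-disease patients and healthy controls.
    ```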

  7. Could We Apply a NeuroProcessor For Analyzing a Gas Response Of Multisensor Arrays?

    SciTech Connect

    Sysoev, V. V.; Musatov, V. Yu.; Maschenko, A. A.; Varegnikov, A. S.; Chrizostomov, A. A.; Kiselev, I.; Schneider, T.; Bruns, M.; Sommer, M.

    2009-05-23

    We describe an effort to implement a hardware neuroprocessor that carries out pattern recognition of signals generated by a multisensor microarray of the Electronic Nose type. The multisensor microarray is designed with a SnO{sub 2} thin film segmented by co-planar electrodes according to the KAMINA (KArlsruhe Micro NAse) E-nose architecture. The response of this microarray to reducing gases mixed with synthetic air is processed by the principal component analysis technique realized on a PC (Matlab software) and by the neural microprocessor NeuroMatrix NM6403. It is shown that the neuroprocessor is able to successfully carry out a gas-recognition algorithm in real time.

  8. A radiosonde using a humidity sensor array with a platinum resistance heater and multi-sensor data fusion.

    PubMed

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-01-01

    This paper describes the design and implementation of a radiosonde which can measure the meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, microwave module, temperature sensor, pressure sensor and humidity sensor array. In order to effectively solve the humidity sensor condensation problem due to the low temperatures in the high altitude environment, a capacitive humidity sensor array comprising four humidity sensors for collecting meteorological humidity and a platinum resistance heater was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array can improve measurement accuracy and obtain reliable initial meteorological humidity data, while the multi-sensor data fusion technique eliminates the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes. PMID:23857263
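
    The paper does not spell out its fusion rule in this abstract; the sketch below shows one common multi-sensor fusion scheme (inverse-variance weighting) applied to four redundant humidity readings, purely to illustrate how fusing the array can reduce measurement uncertainty. The readings and variances are made up.

    ```python
    # Inverse-variance weighted fusion of four redundant humidity readings.
    # Illustrative only; not stated to be the rule used in the paper.
    import numpy as np

    def fuse(readings, variances):
        """Return the inverse-variance weighted estimate and its fused variance."""
        readings = np.asarray(readings, dtype=float)
        weights = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(weights * readings) / np.sum(weights)
        fused_var = 1.0 / np.sum(weights)
        return fused, fused_var

    # Four sensor elements reporting relative humidity (%) with differing noise levels.
    rh, var = fuse([41.2, 40.7, 41.9, 40.9], [0.25, 0.16, 0.49, 0.16])
    print(f"fused RH = {rh:.2f} %, fused variance = {var:.3f}")
    # The fused variance is smaller than the best individual sensor's variance,
    # which is the uncertainty reduction the abstract attributes to data fusion.
    ```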

  9. A Radiosonde Using a Humidity Sensor Array with a Platinum Resistance Heater and Multi-Sensor Data Fusion

    PubMed Central

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-01-01

    This paper describes the design and implementation of a radiosonde which can measure the meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, microwave module, temperature sensor, pressure sensor and humidity sensor array. In order to effectively solve the humidity sensor condensation problem due to the low temperatures in the high altitude environment, a capacitive humidity sensor array comprising four humidity sensors for collecting meteorological humidity and a platinum resistance heater was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array can improve measurement accuracy and obtain reliable initial meteorological humidity data, while the multi-sensor data fusion technique eliminates the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes. PMID:23857263

  10. Optical sensors and multisensor arrays containing thin film electroluminescent devices

    DOEpatents

    Aylott, Jonathan W.; Chen-Esterlit, Zoe; Friedl, Jon H.; Kopelman, Raoul; Savvateev, Vadim N.; Shinar, Joseph

    2001-12-18

    Optical sensor, probe and array devices for detecting chemical, biological, and physical analytes. The devices include an analyte-sensitive layer optically coupled to a thin film electroluminescent layer which activates the analyte-sensitive layer to provide an optical response. The optical response varies depending upon the presence of an analyte and is detected by a photodetector and analyzed to determine the properties of the analyte.

  11. Multi-sensor Array for High Altitude Balloon Missions to the Stratosphere

    NASA Astrophysics Data System (ADS)

    Davis, Tim; McClurg, Bryce; Sohl, John

    2008-10-01

    We have designed and built a microprocessor controlled and expandable multi-sensor array for data collection on near space missions. Weber State University has started a high altitude research balloon program called HARBOR. This array has been designed to log a base set of measurements on every flight and has room for six guest instruments. The base measurements are absolute pressure, on-board temperature, a 3-axis accelerometer for attitude measurement, and a 2-axis compensated magnetic compass. The system also contains a real time clock and circuitry for logging data directly to a USB memory stick. In typical operation the measurements will be cycled through in sequence and saved to the memory stick along with the clock's time stamp. The microprocessor can be reprogrammed to adapt to guest experiments with either analog or digital interfacing. This system will fly with every mission and will provide backup data collection for other instrumentation whose primary task is measuring atmospheric pressure and temperature. The attitude data will be used to determine the orientation of the onboard camera systems to aid in identifying features in the images. This will make these images easier to use for any future GIS (geographic information system) remote sensing missions.

  12. Concept of data processing in multisensor system for perimeter protection

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Kastek, M.; Trzaskawka, P.; Piątkowski, T.; Szustakowski, M.; Życzkowski, M.

    2011-06-01

    The nature of recent terrorist attacks and military conflicts, as well as the necessity to protect bases, convoys and patrols, has given serious impetus to the development of more effective security systems. The zone-sensor perimeter protection concepts widely used so far will be replaced in the near future by multi-sensor systems. Such systems can utilize day/night cameras, uncooled IR thermal cameras, and millimeter-wave radars detecting radiation reflected from a target. The ranges of detection, recognition and identification for all targets depend on the parameters of the sensors used and on the observed scene itself. Apart from the sensors, the most important elements that influence system effectiveness are intelligent data analysis and a proper data fusion algorithm. A multi-sensor protection system allows a significant improvement in intruder detection probability. The concept of data fusion in a multi-sensor system is introduced. It is based on an image fusion algorithm which allows intruders to be visualized and tracked under any conditions.

  13. MULTISENSOR DATA FUSION FOR HIGH QUALITY DATA ANALYSIS AND PROCESSING IN MEASUREMENT AND INSTRUMENTATION

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper focuses on application of multisensor data fusion for high quality data analysis and processing in measurement and instrumentation. A practical, general data fusion scheme is established on the basis of feature extraction and merging of data from multiple sensors. This scheme integrates ...

  14. Array signal processing

    SciTech Connect

    Haykin, S.; Justice, J.H.; Owsley, N.L.; Yen, J.L.; Kak, A.C.

    1985-01-01

    This is the first book to be devoted completely to array signal processing, a subject that has become increasingly important in recent years. The book consists of six chapters. Chapter 1, which is introductory, reviews some basic concepts in wave propagation. The remaining five chapters deal with the theory and applications of array signal processing in (a) exploration seismology, (b) passive sonar, (c) radar, (d) radio astronomy, and (e) tomographic imaging. The various chapters of the book are self-contained. The book is written by a team of five active researchers, who are specialists in the individual fields covered by the pertinent chapters.

  15. Multisensor Network System for Wildfire Detection Using Infrared Image Processing

    PubMed Central

    Bosch, I.; Serrano, A.; Vergara, L.

    2013-01-01

    This paper presents the next step in the evolution of multi-sensor wireless network systems in the early automatic detection of forest fires. This network allows remote monitoring of each of the locations as well as communication between each of the sensors and with the control stations. The result is an increased coverage area, with quicker and safer responses. To determine the presence of a forest wildfire, the system employs decision fusion in thermal imaging, which can exploit various expected characteristics of a real fire, including short-term persistence and long-term increases over time. Results from testing in the laboratory and in a real environment are presented to authenticate and verify the accuracy of the operation of the proposed system. The system performance is gauged by the number of alarms and the time to the first alarm (corresponding to a real fire), for different probabilities of false alarm (PFA). The necessity of including decision fusion is thereby demonstrated. PMID:23843734
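
    The decision-fusion idea sketched in the abstract, combining a short-term persistence test with a long-term increase test, can be written down compactly. The thresholds, window lengths, and data below are illustrative assumptions, not the authors' parameters.

    ```python
    # Toy per-pixel fire decision: alarm only if both the persistence test and the
    # long-term trend test fire. Placeholder thresholds; not the paper's values.
    import numpy as np

    def fire_decision(temps, hot_thresh=60.0, persist_frames=5, trend_window=30, trend_min=5.0):
        """temps: 1-D array of one pixel's thermal readings over time."""
        temps = np.asarray(temps, dtype=float)
        # Short-term persistence: the last `persist_frames` samples all exceed the threshold.
        persistent = bool(np.all(temps[-persist_frames:] > hot_thresh))
        # Long-term increase: mean of the recent window exceeds the earlier window by `trend_min`.
        recent = temps[-trend_window:].mean()
        earlier = temps[-2 * trend_window:-trend_window].mean()
        increasing = (recent - earlier) > trend_min
        return persistent and increasing

    # 60 frames of ambient readings followed by 60 frames of steadily rising temperature.
    t = np.concatenate([20 + np.random.randn(60), np.linspace(25, 90, 60)])
    print("alarm:", fire_decision(t))
    ```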

  16. Identification and quantification of individual volatile organic compounds in a binary mixture by SAW multisensor array and pattern recognition analysis

    NASA Astrophysics Data System (ADS)

    Penza, M.; Cassano, G.; Tortorella, F.

    2002-06-01

    We have developed a surface acoustic wave (SAW) multisensor array with five acoustic sensing elements configured as two-port resonator 433.92 MHz oscillators and a reference SAW element to recognize different individual components and determine their concentrations in a binary mixture of volatile organic compounds (VOCs) such as methanol and acetone, in the ranges 15-130 and 50-250 ppm, respectively. The SAW sensors have been specifically coated by various sensing thin films such as arachidic acid, carbowax, behenic acid, triethanolamine or acrylated polysiloxane, operating at room temperature. By using the relative frequency change as the output signal of the SAW multisensor array with an artificial neural network (ANN), a recognition system has been realized for the identification and quantification of tested VOCs. The features of the SAW multisensor array exposed to a binary component organic mixture of methanol and acetone have been extracted from the output signals of five SAW sensors by pattern recognition (PARC) techniques, such as principal component analysis (PCA). An organic vapour pattern classifier has been implemented by using a multilayer neural network with a backpropagation learning algorithm. The normalized responses of a reduced set of SAW sensors or selected principal component scores have been used as inputs for a feed-forward multilayer perceptron (MLP), resulting in a 70% correct recognition rate with the normalized responses of the four SAW sensors and in an enhanced 80% correct recognition rate with the first two principal components of the original data consisting of the normalized responses of the four SAW sensors. The prediction of the individual vapour concentrations has been tackled with PCA for feature extraction and by using the first two principal component scores as inputs to a feed-forward MLP consisting of a gating network, which decides which of three specific subnets should be used to determine the output concentration: the
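
    To make the pattern-recognition stage concrete, the sketch below trains a small feed-forward MLP on normalized responses of a few sensors to label the vapour. The response patterns, sensor count, and network size are placeholders, not the SAW measurements or the authors' network.

    ```python
    # Feed-forward MLP on normalized multi-sensor response vectors (synthetic data).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)
    n_sensors = 4
    # Two vapour classes with distinct (made-up) response patterns across the sensors.
    patterns = {"methanol": np.array([1.0, 0.4, 0.7, 0.2]),
                "acetone":  np.array([0.3, 0.9, 0.5, 0.8])}

    X, y = [], []
    for label, p in patterns.items():
        for conc in np.linspace(0.5, 2.0, 40):                 # varying concentration
            X.append(conc * p + 0.05 * rng.normal(size=n_sensors))
            y.append(label)
    X = np.array(X)
    X /= np.linalg.norm(X, axis=1, keepdims=True)              # normalize each response vector

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```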

  17. Metal oxide based multisensor array and portable database for field analysis of antioxidants

    PubMed Central

    Sharpe, Erica; Bradley, Ryan; Frasco, Thalia; Jayathilaka, Dilhani; Marsh, Amanda; Andreescu, Silvana

    2014-01-01

    We report a novel chemical sensing array based on metal oxide nanoparticles as a portable and inexpensive paper-based colorimetric method for polyphenol detection and field characterization of antioxidant containing samples. Multiple metal oxide nanoparticles with various polyphenol binding properties were used as active sensing materials to develop the sensor array and establish a database of polyphenol standards that include epigallocatechin gallate, gallic acid, resveratrol, and Trolox among others. Unique charge-transfer complexes are formed between each polyphenol and each metal oxide on the surface of individual sensors in the array, creating distinct optically detectable signals which have been quantified and logged into a reference database for polyphenol identification. The field-portable Pantone/X-Rite© CapSure® color reader was used to create this database and to facilitate rapid colorimetric analysis. The use of multiple metal-oxide sensors allows for cross-validation of results and increases accuracy of analysis. The database has enabled successful identification and quantification of antioxidant constituents within real botanical extractions including green tea. Formation of charge-transfer complexes is also correlated with antioxidant activity exhibiting electron transfer capabilities of each polyphenol. The antioxidant activity of each sample was calculated and validated against the oxygen radical absorbance capacity (ORAC) assay showing good comparability. The results indicate that this method can be successfully used for a more comprehensive analysis of antioxidant containing samples as compared to conventional methods. This technology can greatly simplify investigations into plant phenolics and make possible the on-site determination of antioxidant composition and activity in remote locations. PMID:24610993

  18. Distributed multisensor processing, decision making, and control under constrained resources for remote health and environmental monitoring

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Sheikh, Tanwir; Chandramouli, Lavanya

    2004-04-01

    Previous field-deployable distributed sensing systems for health/biomedical applications and environmental sensing have been designed for data collection and data transmission at pre-set intervals, rather than for on-board processing. These previous sensing systems lack autonomous capabilities and have limited lifespans. We propose the use of an integrated machine learning architecture, with automated planning-scheduling and resource management capabilities, that can be used for a variety of autonomous sensing applications with very limited computing, power, and bandwidth resources. We lay out general solutions for efficient processing in a multi-tiered (three-tier) machine learning framework that is suited for remote, mobile sensing systems. Novel dimensionality reduction techniques that are designed for classification are used to compress each individual sensor's data and pass only relevant information to the mobile multisensor fusion module (second tier). Statistical classifiers that are capable of handling missing/partial sensory data due to sensor failure or power loss are used to detect critical events and pass the information to the third tier (central server) for manual analysis and/or analysis by advanced pattern recognition techniques. Genetic optimisation algorithms are used to control the system in the presence of dynamic events, and also ensure that system requirements (i.e. minimum life of the system) are met. This tight integration of control optimisation and machine learning algorithms results in a highly efficient sensor network with intelligent decision making capabilities. The applicability of our technology to remote health monitoring and environmental monitoring is shown. Other uses of our solution are also discussed.

  19. Acoustic signal processing toolbox for array processing

    NASA Astrophysics Data System (ADS)

    Pham, Tien; Whipps, Gene T.

    2003-08-01

    The US Army Research Laboratory (ARL) has developed an acoustic signal processing toolbox (ASPT) for acoustic sensor array processing. The intent of this document is to describe the toolbox and its uses. The ASPT is GUI-based software that is developed and runs under MATLAB. The current version, ASPT 3.0, requires MATLAB 6.0 or above. ASPT contains a variety of narrowband (NB) and incoherent and coherent wideband (WB) direction-of-arrival (DOA) estimation and beamforming algorithms that have been researched and developed at ARL. Currently, ASPT contains 16 DOA and beamforming algorithms. It contains several different NB and WB versions of the MVDR, MUSIC and ESPRIT algorithms. In addition, a variety of pre-processing, simulation and analysis tools are available in the toolbox. The user can perform simulation or real data analysis for all algorithms with user-defined signal model parameters and array geometries.
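
    ASPT itself is MATLAB-based; purely as a generic, textbook illustration of one algorithm family it contains, the numpy sketch below computes a narrowband MVDR spatial spectrum for a uniform linear array. The geometry, frequency, and data are made up, and nothing here is taken from ASPT.

    ```python
    # Narrowband MVDR spatial spectrum for a uniform linear array (generic formulation).
    import numpy as np

    c, f = 343.0, 500.0                      # propagation speed (m/s), frequency (Hz)
    d, M = 0.3, 8                            # element spacing (m), number of sensors
    wavelength = c / f

    def steering(theta_deg):
        k = 2 * np.pi / wavelength
        m = np.arange(M)
        return np.exp(-1j * k * d * m * np.sin(np.radians(theta_deg)))

    # Simulate snapshots from a single source at 20 degrees plus sensor noise.
    rng = np.random.default_rng(4)
    s = rng.normal(size=500) + 1j * rng.normal(size=500)
    X = np.outer(steering(20.0), s) + 0.1 * (rng.normal(size=(M, 500)) + 1j * rng.normal(size=(M, 500)))
    R = X @ X.conj().T / X.shape[1]          # sample spatial covariance
    R_inv = np.linalg.inv(R)

    angles = np.arange(-90, 91)
    p_mvdr = np.array([1.0 / np.real(steering(a).conj() @ R_inv @ steering(a)) for a in angles])
    print("MVDR peak at", angles[np.argmax(p_mvdr)], "degrees")
    ```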

  20. Multi-sensor magnetoencephalography with atomic magnetometers

    PubMed Central

    Johnson, Cort N; Schwindt, P D D; Weisend, M

    2014-01-01

    The authors have detected magnetic fields from the human brain with two independent, simultaneously operating rubidium spin-exchange-relaxation-free magnetometers. Evoked responses from auditory stimulation were recorded from multiple subjects with two multi-channel magnetometers located on opposite sides of the head. Signal processing techniques enabled by multi-channel measurements were used to improve signal quality. This is the first demonstration of multi-sensor atomic magnetometer magnetoencephalography and provides a framework for developing a non-cryogenic, whole-head magnetoencephalography array for source localization. PMID:23939051

  1. Towards operational multisensor registration

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.

    1991-01-01

    To use data from a number of different remote sensors in a synergistic manner, a multidimensional analysis of the data is necessary. However, prior to this analysis, processing to correct for the systematic geometric distortion characteristic of each sensor is required. Furthermore, the registration process must be fully automated to handle a large volume of data and high data rates. A conceptual approach towards an operational multisensor registration algorithm is presented. The performance requirements of the algorithm are first formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several registration techniques that fit within the structure of this algorithm are also presented. Their performance was evaluated using a multisensor test data set assembled from LANDSAT TM, SEASAT, SIR-B, Thermal Infrared Multispectral Scanner (TIMS), and SPOT sensors.

  2. Integrating Scientific Array Processing into Standard SQL

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and, in an extension, also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  3. Array algebra estimation in signal processing

    NASA Astrophysics Data System (ADS)

    Rauhala, U. A.

    A general theory of linear estimators called array algebra estimation is interpreted in some terms of multidimensional digital signal processing, mathematical statistics, and numerical analysis. The theory has emerged during the past decade from the new field of a unified vector, matrix and tensor algebra called array algebra. The broad concepts of array algebra and its estimation theory cover several modern computerized sciences and technologies converting their established notations and terminology into one common language. Some concepts of digital signal processing are adopted into this language after a review of the principles of array algebra estimation and its predecessors in mathematical surveying sciences.

  4. The Applicability of Incoherent Array Processing to IMS Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.

    2014-03-01

    The seismic arrays of the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) are highly diverse in size and configuration, with apertures ranging from under 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high-frequency phases lacking coherence between sensors. Pipeline detection algorithms often miss such phases, since they only consider frequencies low enough to allow coherent array processing, and phases that are detected are often assigned qualitatively incorrect backazimuth and slowness estimates. This can result in missed events, due either to a lack of contributing phases or to corruption of event hypotheses by spurious detections. It has been demonstrated previously that continuous spectral estimation can both detect and estimate phases on the largest aperture arrays, with arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity, as is the case for classical f-k analysis, and the ability to estimate slowness vectors requires sufficiently large inter-sensor distances to resolve time-delays between pulses with a period of the order of 4-5 s. Spectrogram beampacking works well on five IMS arrays with apertures over 20 km (NOA, AKASG, YKA, WRA, and KURK) without additional post-processing. Seven arrays with 10-20 km aperture (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 s period signal. Even for medium aperture arrays which can provide high-quality coherent slowness estimates, a complementary spectrogram beampacking procedure could act as a quality control by providing non-aliased estimates when the coherent slowness grids display

  5. Characterizing the Propagation of Uterine Electrophysiological Signals Recorded with a Multi-Sensor Abdominal Array in Term Pregnancies.

    PubMed

    Escalona-Vargas, Diana; Govindan, Rathinaswamy B; Furdea, Adrian; Murphy, Pam; Lowery, Curtis L; Eswaran, Hari

    2015-01-01

    The objective of this study was to quantify the number of segments that have contractile activity and determine the propagation speed from uterine electrophysiological signals recorded over the abdomen. The uterine magnetomyographic (MMG) signals were recorded with a 151 channel SARA (SQUID Array for Reproductive Assessment) system from 36 pregnant women between 37 and 40 weeks of gestational age. The MMG signals were scored and segments were classified based on presence of uterine contractile burst activity. The sensor space was then split into four quadrants and in each quadrant signal strength at each sample was calculated using center-of-gravity (COG). To this end, the cross-correlation analysis of the COG was performed to calculate the delay between pairwise combinations of quadrants. The relationship in propagation across the quadrants was quantified and propagation speeds were calculated from the delays. MMG recordings were successfully processed from 25 subjects and the average values of propagation speeds ranged from 1.3-9.5 cm/s, which was within the physiological range. The propagation was observed between both vertical and horizontal quadrants confirming multidirectional propagation. After the multiple pairwise test (99% CI), significant differences in speeds can be observed between certain vertical or horizontal combinations and the crossed pair combinations. The number of segments containing contractile activity in any given quadrant pair with a detectable delay was significantly higher in the lower abdominal pairwise combination as compared to all others. The quadrant-based approach using MMG signals provided us with high spatial-temporal information of the uterine contractile activity and will help us in the future to optimize abdominal electromyographic (EMG) recordings that are practical in a clinical setting. PMID:26505624
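
    The core computation described above, a cross-correlation delay between quadrant signals converted into a propagation speed over a known distance, can be sketched as follows. The sampling rate, inter-quadrant distance, and signals are invented for illustration and are not the SARA data.

    ```python
    # Delay between two quadrant signals from the peak of their cross-correlation,
    # converted to a propagation speed. Synthetic "contractile burst" data.
    import numpy as np

    fs = 50.0                        # samples per second (assumed)
    distance_cm = 10.0               # assumed separation between quadrant centers
    true_delay_s = 1.5               # one quadrant lags the other by 1.5 s

    t = np.arange(0, 60, 1 / fs)
    burst = np.exp(-((t - 30) ** 2) / 8.0)                          # smooth burst of activity
    q1 = burst + 0.05 * np.random.randn(t.size)
    q2 = np.interp(t - true_delay_s, t, burst) + 0.05 * np.random.randn(t.size)

    xc = np.correlate(q2 - q2.mean(), q1 - q1.mean(), mode="full")
    lags = np.arange(-t.size + 1, t.size)
    delay_s = lags[np.argmax(xc)] / fs
    print(f"estimated delay = {delay_s:.2f} s, speed = {distance_cm / delay_s:.2f} cm/s")
    # ~6.7 cm/s here, within the 1.3-9.5 cm/s physiological range quoted above.
    ```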

  6. Solid-State Multi-Sensor Array System for Real Time Imaging of Magnetic Fields and Ferrous Objects

    NASA Astrophysics Data System (ADS)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2008-02-01

    In this paper the development of a solid-state sensor-based system for real-time imaging of magnetic fields and ferrous objects is described. The system comprises 1089 magneto-inductive solid state sensors arranged in a 2D matrix of 33×33 rows and columns, equally spaced in order to cover an approximate area of 300 by 300 mm. The sensor array is located within a large current-carrying coil. Data is sampled from the sensors by several DSP control units and finally streamed to a host computer via a USB 2.0 interface, and the image is generated and displayed at a rate of 20 frames per minute. The development of the instrumentation has been complemented by extensive numerical modeling of field distribution patterns using boundary element methods. The system was originally intended for deployment in the non-destructive evaluation (NDE) of reinforced concrete. Nevertheless, the system is not only capable of producing real-time, live video images of the metal target embedded within any opaque medium, it also allows the real-time visualization and determination of the magnetic field distribution emitted by either permanent magnets or geometries carrying current. Although this system was initially developed for the NDE arena, it could also have potential applications in many other fields, including medicine, security, manufacturing, quality assurance and design involving magnetic fields.

  7. Sensor array processing for random inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Ringelstein, Joerg; Gershman, Alex B.; Boehme, Johann F.

    1999-11-01

    The performance of high-resolution array processing methods is known to degrade in random inhomogeneous media because the amplitude and phase of each wavefront tend to fluctuate and to lose their coherence between array sensors. As a result, in the presence of such multiplicative noise, the conventional coherent wavefront model becomes inapplicable. This type of degradation may be especially strong for large aperture arrays. Below, we develop new high-resolution covariance matching (CM) techniques with improved robustness against multiplicative noise and the related coherence losses. Using a few unrestrictive physics-based assumptions on the environment, we show that reliable algorithms can be developed which take possible coherence losses into account. Computer simulation results and real sonar data processing results are presented. These results demonstrate drastic improvements achieved by our approach as compared with conventional high-resolution array processing techniques.

  8. Barrow real-time sea ice mass balance data: ingestion, processing, dissemination and archival of multi-sensor data

    NASA Astrophysics Data System (ADS)

    Grimes, J.; Mahoney, A. R.; Heinrichs, T. A.; Eicken, H.

    2012-12-01

    Sensor data can be highly variable in nature and also varies depending on the physical quantity being observed, sensor hardware and sampling parameters. The sea ice mass balance site (MBS) operated in Barrow by the University of Alaska Fairbanks (http://seaice.alaska.edu/gi/observatories/barrow_sealevel) is a multisensor platform consisting of a thermistor string, air and water temperature sensors, acoustic altimeters above and below the ice and a humidity sensor. Each sensor has a unique specification and configuration. The data from multiple sensors are combined to generate sea ice data products. For example, ice thickness is calculated from the positions of the upper and lower ice surfaces, which are determined using data from downward-looking and upward-looking acoustic altimeters above and below the ice, respectively. As a data clearinghouse, the Geographic Information Network of Alaska (GINA) processes real time data from many sources, including the Barrow MBS. Doing so requires a system that is easy to use, yet also offers the flexibility to handle data from multisensor observing platforms. In the case of the Barrow MBS, the metadata system needs to accommodate the addition of new and retirement of old sensors from year to year as well as instrument configuration changes caused by, for example, spring melt or inquisitive polar bears. We also require ease of use for both administrators and end users. Here we present the data and processing steps of a sensor data system powered by the NoSQL storage engine MongoDB. The system has been developed to ingest, process, disseminate and archive data from the Barrow MBS. Storing sensor data in a generalized format, from many different sources, is a challenging task, especially for traditional SQL databases with a set schema. MongoDB is a NoSQL (not only SQL) database that does not require a fixed schema. There are several advantages to using this model over the traditional relational database management system (RDBMS
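
    A schematic pymongo sketch of the schema-free ingestion idea described above is given below: raw readings from heterogeneous sensors go into one collection, and a derived ice-thickness product is computed from the two altimeter readings. The collection names, field names, mounting heights, and thickness formula are illustrative assumptions, not GINA's actual pipeline.

    ```python
    # Schema-free ingestion of multisensor readings plus one derived product (sketch).
    from datetime import datetime, timezone
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["barrow_mbs"]

    reading = {
        "station": "barrow_mbs",
        "time": datetime.now(timezone.utc),
        "sensors": {                          # each sensor keeps its own fields/units
            "air_temp_c": -21.4,
            "water_temp_c": -1.7,
            "altimeter_above_m": 1.42,        # range from above-ice altimeter to upper surface
            "altimeter_below_m": 1.05,        # range from below-ice altimeter to lower surface
        },
    }
    db.readings.insert_one(reading)

    # Derived product: ice thickness from the positions of the two ice surfaces,
    # given assumed fixed mounting distances of the two altimeters.
    MOUNT_ABOVE_M, MOUNT_BELOW_M = 2.00, 2.50
    top = MOUNT_ABOVE_M - reading["sensors"]["altimeter_above_m"]        # upper surface
    bottom = -(MOUNT_BELOW_M - reading["sensors"]["altimeter_below_m"])  # lower surface
    db.products.insert_one({"time": reading["time"], "ice_thickness_m": top - bottom})
    ```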

  9. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, Param H.; Hunt, Arlon J.

    1986-01-01

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after alcogels are formed, of substituting a solvent, such as CO.sub.2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO.sub.2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40.degree. C. instead of at about 270.degree. C. The improved process provides increased yields of large scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs, as made in accordance with the improved process, can replace the air gap within a double glazed window, for example, to provide a substantial reduction in heat transfer. The thus formed transparent aerogel arrays may also be utilized, for example, in windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high energy elementary particles or cosmic rays.

  10. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, P.H.; Hunt, A.J.

    1985-09-04

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after alcogels are formed, of substituting a solvent, such as CO/sub 2/, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO/sub 2/, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40/sup 0/C instead of at about 270/sup 0/C. The improved process provides increased yields of large scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs, as made in accordance with the improved process, can replace the air gap within a double glazed window, for example, to provide a substantial reduction in heat transfer. The thus formed transparent aerogel arrays may also be utilized, for example, in windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high energy elementary particles or cosmic rays.

  11. Semiotic foundation for multisensor-multilook fusion

    NASA Astrophysics Data System (ADS)

    Myler, Harley R.

    1998-07-01

    This paper explores the application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative methods as opposed to quantitative ones. The term semiotic refers to signs, or signatory data that encapsulate information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of the signs into meaningful interpretations of the information content of the source. The multisensor fusion problem, predicated on a semiotic system structure and incorporating semiotic analysis techniques, is explored, as is the design of a multisensor system as an information fusion system. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and how a semiotic system for multisensor fusion could be realized is outlined. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not as yet been constructed.

  12. Image restoration in multisensor missile seeker environments for design of intelligent integrated processing architectures

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Pang, Ho-Yuen; Amphay, Sengvieng A.; Sundstrom, Bryce M.

    1997-10-01

    Two major factors that could limit successful implementations of image restoration and superresolution algorithms in missile seeker applications are, (i) lack of accurate knowledge of sensor point spread function (PSF) parameters, and (ii) noise-induced artifacts in the restoration process. The robustness properties of a recently developed blind iterative Maximum Likelihood (ML) restoration algorithm to inaccuracies in sensor PSF are established in this paper. Two modifications to this algorithm that successfully equip it to suppress artifacts resulting from the presence of high frequency noise components are outlined. Performance evaluation studies with 1D and 2D signals are included to demonstrate that these algorithms have superresolution capabilities while possessing also attractive robustness and artifact suppression properties. The algorithms developed here hence contribute to efficient designs of intelligent integrated processing architectures for smart weapon applications.

  13. Intensification of hydrological process in permafrost regions and correlation with ecological processes from multi-sensor satellite observations and in-situ measurements

    NASA Astrophysics Data System (ADS)

    Tong, J.; Velicogna, I.; Zhang, T.; Kimball, J. S.; Rawlins, M. A.; McDonald, K. C.

    2010-12-01

    Global warming is driving dramatic changes in arctic terrestrial hydrological and ecological processes. We apply synergistic multi-sensor satellite observations, in-situ measurements and hydro-ecological process modeling to quantify recent changes in terrestrial water storage, plant available moisture and biophysical controls on vegetation productivity and ET for boreal-Arctic biomes in Eurasia. Using GRACE measurements of time-variable gravity for the period between 2002 and 2010, in conjunction with precipitation (P) and evapotranspiration (ET) from a range of GCM and hydrological model outputs, runoff measurements and snow water equivalent (SWE) measurements, we analyze the changes in the water cycle in Eurasia. During this period the water cycle appears to have intensified. We observe a water storage increase in the Lena and Yenisei basins, in areas characterized by permafrost coverage. Using observations of active layer changes in these regions in combination with the GRACE data, we interpret this water storage increase as a combination of deepening of the soil active layer and formation of closed taliks, with an associated impact on the permafrost regime in those regions. We present results from spatial and temporal correlation analyses of GRACE-derived water storage changes, Net Primary Production (NPP) and NDVI from MODIS, precipitation and evapotranspiration, length of the growing season, fractional open water cover, snow cover, permafrost, and land cover. We interpret these correlations in terms of mutual feedbacks between water cycle variability, ecosystem change and permafrost regime.

  14. Plenoptic processing methods for distributed camera arrays

    NASA Astrophysics Data System (ADS)

    Boyle, Frank A.; Yancey, Jerry W.; Maleh, Ray; Deignan, Paul

    2011-05-01

    Recent advances in digital photography have enabled the development and demonstration of plenoptic cameras with impressive capabilities. They function by recording sub-aperture images that can be combined to re-focus images or to generate stereoscopic pairs. Plenoptic methods are being explored for fusing images from distributed arrays of cameras, with a view toward applications in which hardware resources are limited (e.g. size, weight, power constraints). Through computer simulation and experimental studies, the influences of non-idealities such as camera position uncertainty are being considered. Component image rescaling and balancing methods are being explored to compensate. Of interest is the impact on precision passive ranging and super-resolution. In a preliminary experiment, a set of images from a camera array was recorded and merged to form a 3D representation of a scene. Conventional plenoptic refocusing was demonstrated and techniques were explored for balancing the images. Nonlinear methods for combining the images were explored to limit the ghosting caused by sub-sampling. Plenoptic processing was also explored as a means for determining 3D information from airborne video. Successive frames were processed as camera array elements to extract the heights of structures. Practical means were considered for rendering the 3D information in color.

  15. Gallium arsenide processing for gate array logic

    NASA Technical Reports Server (NTRS)

    Cole, Eric D.

    1989-01-01

    The development of a reliable and reproducible GaAs process was initiated for applications in gate array logic. Gallium Arsenide is an extremely important material for high speed electronic applications in both digital and analog circuits, since its electron mobility is 3 to 5 times that of silicon; this allows for faster switching times in devices fabricated with it. Unfortunately, GaAs is an extremely difficult material to process compared with silicon, and since it includes the arsenic component, GaAs can be quite dangerous (toxic), especially during some heating steps. The first stage of the research was directed at developing a simple process to produce GaAs MESFETs. The MESFET (MEtal Semiconductor Field Effect Transistor) is the most useful, practical and simple active device which can be fabricated in GaAs. It utilizes an ohmic source and drain contact separated by a Schottky gate. The gate width is typically a few microns. Several process steps were required to produce a good working device, including ion implantation, photolithography, thermal annealing, and metal deposition. A process was designed to reduce the total number of steps to a minimum so as to reduce possible errors. The first run produced no good devices. The problem occurred during an aluminum etch step while defining the gate contacts. It was found that the chemical etchant attacked the GaAs, causing trenching and subsequent severing of the active gate region from the rest of the device. Thus all devices appeared as open circuits. This problem is being corrected, and since it was the last step in the process, the correction should be successful. The second planned stage involves the circuit assembly of the discrete MESFETs into logic gates for test and analysis. Finally, the third stage is to incorporate the designed process with the tested circuit in a layout that would produce the gate array as a GaAs integrated circuit.

  16. Optical implementation of systolic array processing

    NASA Technical Reports Server (NTRS)

    Caulfield, H. J.; Rhodes, W. T.; Foster, M. J.; Horvitz, S.

    1981-01-01

    Algorithms for matrix vector multiplication are implemented using acousto-optic cells for multiplication and input data transfer, and using charge coupled device detector arrays for accumulation and output of the results. No two dimensional matrix mask is required; matrix changes are implemented electronically. A system for multiplying a 50 component nonnegative real vector by a 50 by 50 nonnegative real matrix is described. Modifications for bipolar real and complex valued processing are possible, as are extensions to matrix-matrix multiplication and multiplication of a vector by multiple matrices.

  17. Research on a Defects Detection Method in the Ferrite Phase Shifter Cementing Process Based on a Multi-Sensor Prognostic and Health Management (PHM) System.

    PubMed

    Wan, Bo; Fu, Guicui; Li, Yanruoyue; Zhao, Youhu

    2016-01-01

    The cementing manufacturing process for ferrite phase shifters has the defect that cementing strength is insufficient and fractures always appear. A method for detecting these defects was studied using multi-sensor Prognostic and Health Management (PHM) theory. The reasons that lead to the defects are analyzed in this paper. The key process parameters were determined, and Differential Scanning Calorimetry (DSC) tests during the cure process of the resin cement were carried out. At the same time, in order to obtain data on changing cementing strength, multiple groups of cementing process tests with different key process parameters were designed and conducted. A relational model of cementing strength versus cure temperature, time and pressure was established by combining the DSC and process test data on the basis of the Avrami formula. Through sensitivity analysis of the three process parameters, the on-line detection decision criterion and the process parameters which have an obvious impact on cementing strength were determined. A PHM system with multiple temperature and pressure sensors was established on this basis, and then on-line detection, diagnosis and control of ferrite phase shifter cementing process defects were realized. Subsequent production verified that the on-line detection system improved the reliability of the ferrite phase shifter cementing process and reduced the incidence of insufficient cementing strength defects. PMID:27517935
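
    The abstract bases its strength model on the Avrami formula. Purely as a hedged illustration of how such a relation can link cure conditions to strength, the sketch below combines an Avrami degree-of-cure expression with an Arrhenius temperature dependence; all constants and the assumed proportionality between strength and degree of cure are placeholders, not the paper's fitted model.

    ```python
    # Avrami-type degree of cure X(t) = 1 - exp(-k * t**n) with an Arrhenius rate k(T),
    # and strength assumed proportional to X. Constants are illustrative only.
    import math

    def degree_of_cure(t_min, temp_c, n=2.0, k0=5.0e3, ea_j_mol=5.0e4):
        """Degree of cure after t_min minutes at temp_c degrees Celsius."""
        r = 8.314
        k = k0 * math.exp(-ea_j_mol / (r * (temp_c + 273.15)))   # rate constant, 1/min**n
        return 1.0 - math.exp(-k * t_min ** n)

    STRENGTH_MAX_MPA = 12.0   # assumed fully-cured cementing strength
    for temp, t in ((60, 30), (60, 120), (80, 30), (80, 120)):
        x = degree_of_cure(t, temp)
        print(f"{temp} °C, {t:4d} min -> X = {x:.2f}, strength ~ {STRENGTH_MAX_MPA * x:.1f} MPa")
    ```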

  18. Hierarchical Robot Control In A Multisensor Environment

    NASA Astrophysics Data System (ADS)

    Bhanu, Bir; Thune, Nils; Lee, Jih Kun; Thune, Mari

    1987-03-01

    Automatic recognition, inspection, manipulation and assembly of objects will be a common denominator in most of tomorrow's highly automated factories. These tasks will be handled by intelligent computer controlled robots with multisensor capabilities which contribute to desired flexibility and adaptability. The control of a robot in such a multisensor environment becomes of crucial importance as the complexity of the problem grows exponentially with the number of sensors, tasks, commands and objects. In this paper we present an approach which uses CAD (Computer-Aided Design) based geometric and functional models of objects together with action oriented neuroschemas to recognize and manipulate objects by a robot in a multisensor environment. The hierarchical robot control system is being implemented on a BBN Butterfly multi processor. Index terms: CAD, Hierarchical Control, Hypothesis Generation and Verification, Parallel Processing, Schemas

  19. Array signal processing in the NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we will describe the benefits of arraying and past as well as expected future use of this application. The signal processing aspects of array system are described. Field measurements via actual tracking spacecraft are also presented.

  20. Implementation and use of systolic array processes

    SciTech Connect

    Kung, H.T.

    1983-01-01

    Major efforts are now underway to use systolic array processors in large, real-life applications. The author examines various implementation issues and alternatives, the latter from the viewpoints of flexibility and interconnection topologies. He then identifies some work that is essential to the eventual wide use of systolic array processors, such as the development of building blocks, system support and suitable algorithms. 24 references.

  1. Array enhanced stochastic resonance: Implications for signal processing

    SciTech Connect

    Inchiosa, M.E.; Bulsara, A.R.; Lindner, J.F.; Meadows, B.K.; Ditto, W.L.

    1996-06-01

    In computer simulations, we enhance the response of a "stochastic resonator" by coupling it into an array of identical resonators. We relate this array enhanced stochastic resonance (AESR) to the global spatiotemporal dynamics of the array and show how noise and coupling cooperate to organize spatial order, temporal periodicity, and peak output signal-to-noise ratio. We consider the application of AESR to signal processing. © 1996 American Institute of Physics.

  2. Inertial/multisensor navigation

    NASA Technical Reports Server (NTRS)

    Alikiotis, Dimitri

    1987-01-01

    A Multisensor Navigation System as proposed by the Ohio University Avionics Engineering Center is illustrated. The proposed system incorporates radio (Loran-C), satellite (Global Positioning System) and an inertial navigation system (INS). The inertial part of the system will be of a low grade, since the INS will be used primarily for filtering the GPS data and for short term stability. Loran-C and GPS will be used for long term stability.

  3. Speech intelligibility enhancement using hearing-aid array processing.

    PubMed

    Saunders, G H; Kates, J M

    1997-09-01

    Microphone arrays can improve speech recognition in noise for hearing-impaired listeners by suppressing interference coming from directions other than the desired signal direction. In a previous paper [J. M. Kates and M. R. Weiss, J. Acoust. Soc. Am. 99, 3138-3148 (1996)], several array-processing techniques were evaluated in two rooms using the AI-weighted array gain as the performance metric. The array consisted of five omnidirectional microphones having uniform 2.5-cm spacing, oriented in the endfire direction. In this paper, the speech intelligibility for two of the array processing techniques, delay-and-sum beamforming and superdirective processing, is evaluated for a group of hearing-impaired subjects. Speech intelligibility was measured using the speech reception threshold (SRT) for spondees and the speech intelligibility rating (SIR) for sentence materials. The array performance is compared with that for a single omnidirectional microphone and a single directional microphone having a cardioid response pattern. The SRT and SIR results show that the superdirective array processing was the most effective, followed by the cardioid microphone, the array using delay-and-sum beamforming, and the single omnidirectional microphone. The relative processing ratings do not appear to be strongly affected by the size of the room, and the SRT values determined using isolated spondees are similar to the SIR values produced from continuous discourse. PMID:9301060
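
    The simpler of the two techniques named above, delay-and-sum beamforming, is easy to sketch for the 5-element, 2.5 cm-spaced endfire geometry described in the abstract. The signals below are synthetic and the code is the generic beamformer, not the paper's superdirective processor.

    ```python
    # Delay-and-sum beamforming for a 5-mic endfire array: advance each channel by its
    # propagation delay toward the look direction, then average. Synthetic signals.
    import numpy as np

    c, d, M, fs = 343.0, 0.025, 5, 16000          # speed of sound, spacing, mics, sample rate
    delays = np.arange(M) * d / c                 # endfire look direction: equally spaced delays

    rng = np.random.default_rng(5)
    t = np.arange(0, 0.2, 1 / fs)
    target = np.sin(2 * np.pi * 440 * t)          # desired signal from the endfire direction

    # Each microphone hears the target delayed by its position along the array, plus noise.
    channels = [np.interp(t - delays[m], t, target) + 0.5 * rng.normal(size=t.size)
                for m in range(M)]

    # Time-advance each channel back into alignment, then average.
    aligned = [np.interp(t + delays[m], t, channels[m]) for m in range(M)]
    output = np.mean(aligned, axis=0)

    def snr_db(x):
        return 10 * np.log10(np.mean(target ** 2) / np.mean((x - target) ** 2))

    print(f"single mic SNR = {snr_db(channels[0]):.1f} dB, beamformer SNR = {snr_db(output):.1f} dB")
    ```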

  4. The application of systolic arrays to radar signal processing

    NASA Astrophysics Data System (ADS)

    Spearman, R.; Spracklen, C. T.; Miles, J. H.

    The design of a systolic array processor radar system is examined, and its performance is compared to that of a conventional radar processor. It is shown how systolic arrays can be used to replace the boards of high speed logic normally associated with a high performance radar and to implement all of the normal processing functions associated with such a system. Multifunctional systolic arrays are presented that have the flexibility associated with a general purpose digital processor but the speed associated with fixed function logic arrays.

  5. Fabrication of Nanohole Array via Nanodot Array Using Simple Self-Assembly Process of Diblock Copolymer

    NASA Astrophysics Data System (ADS)

    Matsuyama, Tsuyoshi; Kawata, Yoshimasa

    2007-06-01

    We present a simple self-assembly process for fabricating a nanohole array via a nanodot array on a glass substrate by dripping ethanol onto the nanodot array. It is found that well-aligned arrays of nanoholes as well as nanodots are formed on the whole surface of the glass. A dot is transformed into a hole, and the alignment of the nanodots strongly reflects that of the nanoholes. We find that the change in the depth of holes agrees well with the change in the surface energy with the ethanol concentration in the aqueous solution. We believe that the interfacial energy between the nanodots and the dripped ethanol causes the transformation from nanodots into nanoholes. The nanohole arrays are directly applicable to molds for nanopatterned media used in high-density near-field optical data storage. The bit data can be stored and read out using probes with small apertures.

  6. Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T; Gibbons, S J; Ringdal, F; Harris, D B

    2007-02-09

    The principal objective of this two-year study is to develop and test a new advanced, automatic approach to seismic detection/location using array processing. We address a strategy to obtain significantly improved precision in the location of low-magnitude events compared with current fully-automatic approaches, combined with a low false alarm rate. We have developed and evaluated a prototype automatic system which uses as a basis regional array processing with fixed, carefully calibrated, site-specific parameters in conjunction with improved automatic phase onset time estimation. We have in parallel developed tools for Matched Field Processing for optimized detection and source-region identification of seismic signals. This narrow-band procedure aims to mitigate some of the causes of difficulty encountered using the standard array processing system, specifically complicated source-time histories of seismic events and shortcomings in the plane-wave approximation for seismic phase arrivals at regional arrays.

  7. Ghost artifact cancellation using phased array processing.

    PubMed

    Kellman, P; McVeigh, E R

    2001-08-01

    In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples. PMID:11477638
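
    The constrained combining described above can be sketched as a linearly constrained (LCMV/SENSE-like) weight solution: unit gain on the true pixel's coil-sensitivity vector and nulls on the ghost replicates at known locations. The function below is a generic illustration under assumed inputs, not the authors' implementation.

```python
import numpy as np

def ghost_nulling_weights(c_true, c_ghosts, noise_cov):
    """c_true: (n_coils,) sensitivity at the true pixel;
    c_ghosts: (n_coils, n_ghosts) sensitivities at ghost locations;
    noise_cov: (n_coils, n_coils) coil noise covariance."""
    C = np.column_stack([c_true, c_ghosts])      # constraint matrix
    f = np.zeros(C.shape[1]); f[0] = 1.0         # unit gain on signal, zeros on ghosts
    Rinv_C = np.linalg.solve(noise_cov, C)
    w = Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)
    return w                                     # combined pixel = w.conj() @ coil_pixels
```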

  8. Motion compensation for adaptive horizontal line array processing

    NASA Astrophysics Data System (ADS)

    Yang, T. C.

    2003-01-01

    Large aperture horizontal line arrays have small resolution cells and can be used to separate a target signal from an interference signal by array beamforming. High-resolution adaptive array processing can be used to place a null at the interference signal so that the array gain can be much higher than that of conventional beamforming. But these nice features are significantly degraded by the source motion, which reduces the time period under which the environment can be considered stationary from the array processing point of view. For adaptive array processing, a large number of data samples are generally required to minimize the variance of the cross-spectral density, or the covariance matrix, between the array elements. For a moving source and interference, the penalty of integrating over a large number of samples is the spread of signal and interference energy to more than one or two eigenvalues. The signal and interference are no longer clearly identified by the eigenvectors and, consequently, the ability to suppress the interference suffers. We show in this paper that the effect of source motion can be compensated for the (signal) beam covariance matrix, thus allowing integration over a large number of data samples without loss in the signal beam power. We employ an equivalent of a rotating coordinate frame to track the signal bearing change and use the waveguide invariant theory to compensate the signal range change by frequency shifting.
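
    For context, the adaptive processing step whose snapshot averaging the paper's motion compensation protects can be sketched as a standard MVDR power estimate from a sample covariance matrix; with a moving source, the number of snapshots K cannot be made large without the compensation described above. The diagonal loading value below is an assumption.

```python
import numpy as np

def mvdr_power(snapshots, steering):
    """snapshots: (n_sensors, K) complex frequency-domain data; steering: (n_sensors,) unit-norm."""
    n, k = snapshots.shape
    R = snapshots @ snapshots.conj().T / k            # sample covariance from K snapshots
    R += 1e-3 * np.trace(R).real / n * np.eye(n)      # diagonal loading for stability
    Rinv_s = np.linalg.solve(R, steering)
    return 1.0 / np.real(steering.conj() @ Rinv_s)    # MVDR output power at this steering
```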

  9. Study Of Adaptive-Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Satorius, Edgar H.; Griffiths, Lloyd

    1990-01-01

    Report describes a study of adaptive signal-processing techniques for suppression of mutual satellite interference in a mobile (on-ground)/satellite communication system. Presents analyses and numerical simulations of the performances of two approaches to signal processing for suppression of interference: one known as "adaptive sidelobe canceling," the other called "adaptive temporal processing."

  10. Multi-Sensor Distributive On-Line Processing, Visualization, and Analysis Infrastructure for an Agricultural Information System at the NASA Goddard Earth Sciences DAAC

    NASA Technical Reports Server (NTRS)

    Teng, William; Berrick, Steve; Leptuokh, Gregory; Liu, Zhong; Rui, Hualan; Pham, Long; Shen, Suhung; Zhu, Tong

    2004-01-01

    The Goddard Space Flight Center Earth Sciences Data and Information Services Center (GES DISC) Distributed Active Archive Center (DAAC) is developing an Agricultural Information System (AIS), evolved from an existing TRMM On-line Visualization and Analysis System, to provide precipitation and other satellite data products and services. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as that of the U.N. World Food Program. The ability to use the raw data stored in the GES DAAC archives is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding is a time-consuming process and not a productive investment of the user's time. This is an especially difficult challenge when users need to deal with multi-sensor data that usually are of different structures and resolutions. The AIS has taken a major step towards meeting this challenge by incorporating an underlying infrastructure, called the GES-DISC Interactive Online Visualization and Analysis Infrastructure or "Giovanni," that integrates various components to support web interfaces that allow users to perform interactive analysis on-line without downloading any data. Several instances of the Giovanni-based interface have been or are being created to serve users of TRMM precipitation, MODIS aerosol, and SeaWiFS ocean color data, as well as agricultural applications users. Giovanni-based interfaces are simple to use but powerful. The user selects geophysical parameters, area of interest, and time period; and the system generates an output on screen in a matter of seconds.

  11. A smart multisensor approach to assist blind people in specific urban navigation tasks.

    PubMed

    Ando, B

    2008-12-01

    Visually impaired people are often discouraged in using electronic aids due to complexity of operation, large amount of training, nonoptimized degree of information provided to the user, and high cost. In this paper, a new multisensor architecture is discussed, which would help blind people to perform urban mobility tasks. The device is based on a multisensor strategy and adopts smart signal processing. PMID:19144591

  12. Real time speech recognition on a distributed digital processing array

    NASA Astrophysics Data System (ADS)

    Simpson, P.; Roberts, J. B. G.

    1983-08-01

    A compact digital signal processor based on the architecture of the ICL Distributed Array Processor (DAP) is under development for MOD applications in radar, ESM, image processing, etc. This Memorandum examines its applicability to speech recognition. In such a distributed processor, optimum mapping of the problem onto the array of processors is vital for efficiency. Three mappings of a dynamic time warping algorithm for isolated word recognition are examined, leading to a feasible real-time capability for continuous speech processing. The compatibility found between dynamic programming methods and this class of machine enlarges the scope of signal processing algorithms foreseen as amenable to parallel processing.

  13. Sonar array processing borrows from geophysics

    SciTech Connect

    Chen, K.

    1989-09-01

    The author reports a recent advance in sonar signal processing that has potential military application. It improves signal extraction by modifying a technique devised by a geophysicist. Sonar signal processing is used to track submarine and surface targets, such as aircraft carriers, oil tankers, and, in commercial applications, schools of fish or sunken treasure. Similar signal-processing techniques help radio astronomers track galaxies, physicians see images of the body interior, and geophysicists map the ocean floor or find oil. This hybrid technique, applied in an experimental system, can help resolve strong signals as well as weak ones in the same step.

  14. Experimental investigation of the ribbon-array ablation process

    SciTech Connect

    Li Zhenghong; Xu Rongkun; Chu Yanyun; Yang Jianlun; Xu Zeping; Ye Fan; Chen Faxin; Xue Feibiao; Ning Jiamin; Qin Yi; Meng Shijian; Hu Qingyuan; Si Fenni; Feng Jinghua; Zhang Faqiang; Chen Jinchuan; Li Linbo; Chen Dingyang; Ding Ning; Zhou Xiuwen

    2013-03-15

    Ablation processes of ribbon-array loads, as well as wire-array loads for comparison, were investigated on the Qiangguang-1 accelerator. The ultraviolet framing images indicate that the ribbon-array loads have stable passages of currents, which produce axially uniform ablated plasma. The end-on x-ray framing camera observed the azimuthally modulated distribution of the early ablated ribbon-array plasma and the shrink process of the x-ray radiation region. Magnetic probes measured the total and precursor currents of ribbon-array and wire-array loads, and there exists no evident difference between the precursor currents of the two types of loads. The proportion of the precursor current to the total current is 15% to 20%, and the start time of the precursor current is about 25 ns later than that of the total current. The melting time of the load material is about 16 ns, when the inward drift velocity of the ablated plasma is taken to be 1.5 × 10^7 cm/s.

  15. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  16. Removing Background Noise with Phased Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
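
    The cross spectral matrix (CSM) subtraction mentioned above can be illustrated generically: the CSM of a background-only run is subtracted from the CSM with the source present, and the difference is beamformed over a grid of focus points. This is a plain sketch under assumed inputs, not the Functional Beamforming implementation used in the test.

```python
import numpy as np

def csm(snapshots):
    """snapshots: (n_mics, K) complex FFT bins at one frequency -> (n_mics, n_mics) CSM."""
    return snapshots @ snapshots.conj().T / snapshots.shape[1]

def beamform_power(csm_total, csm_background, steering_vectors):
    """steering_vectors: (n_points, n_mics); returns beamformed power per focus point."""
    csm_diff = csm_total - csm_background            # remove background contribution
    return np.real(np.einsum('pm,mn,pn->p',
                             steering_vectors.conj(), csm_diff, steering_vectors))
```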

  17. Dimpled ball grid array process development for space flight applications

    NASA Technical Reports Server (NTRS)

    Barr, S. L.; Mehta, A.

    2000-01-01

    A 472 dimpled ball grid array (D-BGA) package had not been used in past space flight environments; therefore, it was necessary to develop a process that would yield robust and reliable solder joints. The process, covering assembly, inspection, and rework techniques, was verified by conducting environmental tests. Since the 472 D-BGA packages passed the above environmental tests within the specifications, the process was successfully developed for space flight electronics.

  18. Parallel Processing of Large Scale Microphone Arrays for Sound Capture

    NASA Astrophysics Data System (ADS)

    Jan, Ea-Ee.

    1995-01-01

    Performance of microphone sound pick up is degraded by deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise. The degradation becomes more prominent in a teleconferencing environment in which the microphone is positioned far away from the speaker. Besides, the ideal teleconference should feel as easy and natural as face-to-face communication with another person. This suggests hands-free sound capture with no tether or encumbrance by hand-held or body-worn sound equipment. Microphone arrays for this application represent an appropriate approach. This research develops new microphone array and signal processing techniques for high quality hands-free sound capture in noisy, reverberant enclosures. The new techniques combine matched-filtering of individual sensors and parallel processing to provide acute spatial volume selectivity which is capable of mitigating the deleterious effects of noise interference and multipath distortion. The new method outperforms traditional delay-and-sum beamformers which provide only directional spatial selectivity. The research additionally explores truncated matched-filtering and random distribution of transducers to reduce complexity and improve sound capture quality. All designs are first established by computer simulation of array performance in reverberant enclosures. The simulation is achieved by a room model which can efficiently calculate the acoustic multipath in a rectangular enclosure up to a prescribed order of images. It also calculates the incident angle of the arriving signal. Experimental arrays were constructed and their performance was measured in real rooms. Real room data were collected in a hard-walled laboratory and a controllable variable acoustics enclosure of similar size, approximately 6 x 6 x 3 m. An extensive speech database was also collected in these two enclosures for future research on microphone arrays. The simulation results are shown to be

  19. Multisensor configurations for early sniper detection

    NASA Astrophysics Data System (ADS)

    Lindgren, D.; Bank, D.; Carlsson, L.; Dulski, R.; Duval, Y.; Fournier, G.; Grasser, R.; Habberstad, H.; Jacquelard, C.; Kastek, M.; Otterlei, R.; Piau, G.-P.; Pierre, F.; Renhorn, I.; Sjöqvist, L.; Steinvall, O.; Trzaskawka, P.

    2011-11-01

    This contribution reports some of the fusion results from the EDA SNIPOD project, in which different multisensor configurations for sniper detection and localization have been studied. A project aim has been to cover the whole timeline from sniper transport and establishment to shot. To do so, different optical sensors with and without laser illumination have been tested, as well as acoustic arrays and solid-state projectile radar. A sensor fusion node collects detections and background statistics from all sensors and employs hypothesis testing and multisensor estimation programs to produce unified and reliable sniper alarms and accurate sniper localizations. Operator interfaces that connect to the fusion node should be able to support both sniper countermeasures and the guidance of personnel to safety. Although the integrated platform has not actually been built, the sensors have been evaluated at common field trials with military ammunition in the caliber range 5.56 to 12.7 mm and at sniper distances up to 900 m. It is concluded that integrating complementary sensors for pre- and post-shot sniper detection in a common system with automatic detection and fusion will give superior performance compared to stand-alone sensors. A practical system is most likely designed with a cost-effective subset of available complementary sensors.

  20. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  1. IN-SITU IONIC CHEMICAL ANALYSIS OF FRESH WATER VIA A NOVEL COMBINED MULTI-SENSOR / SIGNAL PROCESSING ARCHITECTURE

    NASA Astrophysics Data System (ADS)

    Mueller, A. V.; Hemond, H.

    2009-12-01

    The capability for comprehensive, real-time, in-situ characterization of the chemical constituents of natural waters is a powerful tool for the advancement of the ecological and geochemical sciences, e.g. by facilitating rapid high-resolution adaptive sampling campaigns and avoiding the potential errors and high costs related to traditional grab sample collection, transportation and analysis. Portable field-ready instrumentation also promotes the goals of large-scale monitoring networks, such as CUASHI and WATERS, without the financial and human resources overhead required for traditional sampling at this scale. Problems of environmental remediation and monitoring of industrial waste waters would additionally benefit from such instrumental capacity. In-situ measurement of all major ions contributing to the charge makeup of natural fresh water is thus pursued via a combined multi-sensor/multivariate signal processing architecture. The instrument is based primarily on commercial electrochemical sensors, e.g. ion selective electrodes (ISEs) and ion selective field-effect transistors (ISFETs), to promote low cost as well as easy maintenance and reproduction. The system employs a novel architecture of multivariate signal processing to extract accurate information from in-situ data streams via an "unmixing" process that accounts for sensor non-linearities at low concentrations, as well as sensor cross-reactivities. Conductivity, charge neutrality and temperature are applied as additional mathematical constraints on the chemical state of the system. Including such non-ionic information assists in obtaining accurate and useful calibrations even in the non-linear portion of the sensor response curves, and measurements can be made without the traditionally-required standard additions or ionic strength adjustment. Initial work demonstrates the effectiveness of this methodology at predicting inorganic cations (Na+, NH4+, H+, Ca2+, and K+) in a simplified system containing

  2. Frequency-wavenumber processing for infrasound distributed arrays.

    PubMed

    Costley, R Daniel; Frazier, W Garth; Dillion, Kevin; Picucci, Jennifer R; Williams, Jay E; McKenna, Mihan H

    2013-10-01

    The work described herein discusses the application of a frequency-wavenumber signal processing technique to signals from rectangular infrasound arrays for detection and estimation of the direction of travel of infrasound. Arrays of 100 sensors were arranged in square configurations with sensor spacing of 2 m. Wind noise data were collected at one site. Synthetic infrasound signals were superposed on top of the wind noise to determine the accuracy and sensitivity of the technique with respect to signal-to-noise ratio. The technique was then applied to an impulsive event recorded at a different site. Preliminary results demonstrated the feasibility of this approach. PMID:24116535
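
    A toy version of the frequency-wavenumber idea for a square sensor grid is sketched below: a 3-D FFT over (time, x, y) maps energy to (frequency, kx, ky), and the peak wavenumber at the dominant frequency yields the apparent propagation direction and slowness. The 2 m spacing follows the abstract; the sampling rate, grid size, and FFT sign convention are assumptions for illustration only.

```python
import numpy as np

def fk_peak(data, fs, dx=2.0):
    """data: (n_t, n_x, n_y) pressure samples on a square grid with spacing dx (m)."""
    power = np.abs(np.fft.fftn(data)) ** 2
    f = np.fft.fftfreq(data.shape[0], d=1.0 / fs)    # Hz
    kx = np.fft.fftfreq(data.shape[1], d=dx)         # cycles/m
    ky = np.fft.fftfreq(data.shape[2], d=dx)
    i, j, k = np.unravel_index(np.argmax(power), power.shape)
    # direction and apparent slowness of the dominant plane-wave component
    azimuth_deg = np.degrees(np.arctan2(ky[k], kx[j]))
    slowness = np.hypot(kx[j], ky[k]) / abs(f[i]) if f[i] != 0 else np.inf
    return abs(f[i]), azimuth_deg, slowness
```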

  3. Utilizing Multi-Sensor Data Products and high-resolution flood model in Analyzing North African Hydrological Processes

    NASA Astrophysics Data System (ADS)

    Thengumthara, K.; Policelli, F.; Habib, S.; David, J. L.; Melocik, K. A.; Huffman, G. J.; Anderson, M. C.; Ali, A. B.; Bacha, S.

    2013-12-01

    North Africa is an arid region characterized by isolated extreme events such as floods and droughts. Our present understanding of hydrological processes over North Africa is limited due to low rainfall, the mixed response of evaporation to temperature and soil moisture gradients, and a lack of high-resolution ground measurements. Remote sensing is an excellent way to obtain near real-time data of high spatial and temporal resolution. Satellite estimates of rainfall and evapotranspiration (ET) have uncertainties due to topography, land-sea contrast, complex weather, and climate variability for high-elevation regions. Generally for arid regions, satellite precipitation instruments are sensitive to soil moisture and land surface geometry. This study analyzes different components of hydrological processes over North Africa based on remote sensing data such as precipitation (NASA-TMPA, CMORPH and PERSIANN), evaporation (ALEXI and MODIS), and elevation (SRTM), along with ground measurements and model simulations. Here we use the Coupled Routing and Excess STorage (CREST) hydrological model, version 2.0, which was originally developed by NASA-GSFC and the University of Oklahoma [Wang J et al., 2011]. The model is driven by real-time TMPA and climatological PET, interpolated to model grids. The flexible simulation and calibration enable the model to provide high-resolution runoff and water depth at each time step. Our study focuses mainly on two major basins, the Medjerda in Tunisia and the Sebou in Morocco. Case studies of flood events over North Africa were analyzed based on CREST model simulations with respect to ground measurements. The floods are mainly modulated by rainfall associated with synoptic frontal and tropical plumes and orographic mesoscale systems. Occurrences of peak floods simulated by CREST are comparable with diagnostics such as vertically integrated moisture convergence and stratiform and convective precipitation from ECMWF reanalysis. These were

  4. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea

  5. Design and programming of systolic array cells for signal processing

    SciTech Connect

    Smith, R.A.W.

    1989-01-01

    This thesis presents a new methodology for the design, simulation, and programming of systolic arrays in which the algorithms and architecture are simultaneously optimized. The algorithms determine the initial architecture, and simulation is used to optimize the architecture. The simulator provides a register-transfer level model of a complete systolic array computation. To establish the validity of this design methodology two novel programmable systolic array cells were designed and programmed. The cells were targeted for applications in high-speed signal processing and associated matrix computations. A two-chip programmable systolic array cell using a 16-bit multiplier-accumulator chip and a semi-custom VLSI controller chip was designed and fabricated. A low chip count allows large arrays to be constructed, but the cell is flexible enough to be a building-block for either one- or two-dimensional systolic arrays. Another more flexible and powerful cell using a 32-bit floating-point processor and a second VLSI controller chip was also designed. It contains several architectural features that are unique in a systolic array cell: (1) each instruction is 32 bits, yet all resources can be updated every cycle, (2) two on-chip interchangeable memories are used, and (3) one input port can be used as either a global or local port. The key issues involved in programming the cells are analyzed in detail. A set of modules is developed which can be used to construct large programs in an effective manner. The utility of this programming approach is demonstrated with several important examples.
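
    To make the cell-level, register-transfer style of simulation concrete, here is a toy software model of a 1-D systolic array of multiply-accumulate cells with broadcast inputs and moving partial sums, computing y[i] = sum_j w[j]*x[i+j]. It only illustrates the general simulation idea; it is not a model of the thesis's programmable cells.

```python
import numpy as np

def systolic_fir(x, w):
    """Simulate K MAC cells: inputs are broadcast, partial sums shift one cell per cycle."""
    K = len(w)
    acc = [0.0] * K                   # partial sum resident in each cell at cycle start
    outputs = []
    for t, xt in enumerate(x):
        result = [acc[j] + w[j] * xt for j in range(K)]   # all cells fire in parallel
        if t >= K - 1:
            outputs.append(result[K - 1])                 # a completed y leaves the last cell
        acc = [0.0] + result[:-1]                         # partial sums shift right; a new one enters
    return np.array(outputs)

x = np.arange(8.0)
w = np.array([1.0, 2.0, 3.0])
assert np.allclose(systolic_fir(x, w), np.correlate(x, w, mode='valid'))
```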

  6. Large-Array Signal Processing for Deep-Space Applications

    NASA Astrophysics Data System (ADS)

    Lee, C. H.; Vilnrotter, V.; Satorius, E.; Ye, Z.; Fort, D.; Cheung, K.-M.

    2002-04-01

    This article develops the mathematical models needed to describe the key issues in using an array of antennas for receiving spacecraft signals for DSN applications. The detrimental effects on signal-to-noise ratio (SNR) of nearby interfering sources, such as other spacecraft transmissions or natural radio sources within the array's field of view, are determined; atmospheric effects relevant to the arraying problem are developed; and two classes of algorithms (multiple signal classification (MUSIC) plus beamforming, and an eigen-based solution) capable of phasing up the array with maximized SNR in the presence of realistic disturbances are evaluated. It is shown that, when convolutionally encoded binary phase-shift keying (BPSK) data modulation is employed on the spacecraft signal, previously developed data pre-processing techniques that partially reconstruct the carrier can be of great benefit to array performance, particularly when strong interfering sources are present. Since this article is concerned mainly with demonstrating the required capabilities for operation under realistic conditions, no attempt has been made to reduce algorithm complexity; the design and evaluation of less complex algorithms with similar capabilities will be addressed in a future article. The performances of the candidate algorithms discussed in this article have been evaluated in terms of the number of symbols needed to achieve a given level of combining loss for different numbers of array elements, and compared on this common basis. It is shown that even the best algorithm requires approximately 25,000 symbols to achieve a combining loss of less than 0.5 dB when 128 antenna elements are employed, but generally 50,000 or more symbols are needed. This is not a serious impediment to successful arraying with high data-rate transmission, but may be of some concern with missions exploring near the edge of our solar system or beyond, where lower data rates may be required.

  7. Application of Seismic Array Processing to Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Meng, L.; Allen, R. M.; Ampuero, J. P.

    2013-12-01

    Earthquake early warning (EEW) systems that can issue warnings prior to the arrival of strong ground shaking during an earthquake are essential in mitigating seismic hazard. Many of the currently operating EEW systems work on the basis of empirical magnitude-amplitude/frequency scaling relations for a point source. This approach is of limited effectiveness for large events, such as the 2011 Tohoku-Oki earthquake, for which ignoring finite-source effects may result in underestimation of the magnitude. Here, we explore the concept of characterizing rupture dimensions in real time for EEW using clusters of dense low-cost accelerometers located near active faults. Back tracing the waveforms recorded by such arrays allows the estimation of the earthquake rupture size, duration and directivity in real time, which enables EEW for M > 7 earthquakes. The concept is demonstrated with the 2004 Parkfield earthquake, one of the few big events (M > 6) that have been recorded by a local small-scale seismic array (UPSAR array, Fletcher et al., 2006). We first test the approach against synthetic rupture scenarios constructed by superposition of empirical Green's functions. We find it important to correct for the bias in back azimuth induced by dipping structures beneath the array. We applied the proposed methodology to the mainshock in a simulated real-time environment. After calibrating the dipping-layer effect with data from smaller events, we obtained an estimated rupture length of 9 km, consistent with the distance between the two main high-frequency subevents identified by back-projection using all local stations (Allman and Shearer, 2007). We propose to deploy small-scale arrays every 30 km along the San Andreas Fault. The array processing is performed in local processing centers at each array. The output is compared with finite fault solutions based on a real-time GPS system and then incorporated into the standard ElarmS system. The optimal aperture and array geometry is

  8. Distributed multisensor fusion for machine condition monitoring fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Xue; Zhao, Guohua; Xie, Xin

    2001-09-01

    This paper presents a new general framework for multisensor fusion based on distributed detection. Parallel processing and distributed multisensor fusion, as rapidly emerging and promising technologies, provide powerful tools for solving this difficult problem. The distribution and parallelism of proposing and confirming hypotheses in condition monitoring and diagnosis are proposed. A combined serial and parallel reconfiguration of n sensors for decision fusion is analyzed. Results are shown for a real-time parallel distributed complex machine condition monitoring and fault diagnosis system.

  9. Multisensor tracking by cooperative processors

    NASA Astrophysics Data System (ADS)

    Mallaina, Eduardo F.; Cernuschi Frias, Bruno

    2004-02-01

    Exploiting a new distributed cooperative processing scheme in which multiple processors cooperate in finding a global minimum, we have developed a new efficient maximum likelihood (ML) based calculation method for multitarget motion analysis in a fixed networked multisensor environment. Track estimation of targets from sensor data is a crucial issue in active dynamic scene understanding. In multitarget motion analysis, where there are multiple moving targets and multiple fixed sensors that measure only bearings of the targets, the task is to associate targets and sensor data and to estimate target tracks based on that association. Obtaining the optimal solution is an NP-hard problem, and methods easily get trapped in local optima. We applied the decentralized cooperative search technique to this problem and proved our method effective. The method uses more than one processor, each of which has its own partial search space, searching multiple possibilities in parallel. This paper shows the current status of our research and presents two prototypes of cooperative multi-agent systems for extended multi-target motion analysis.

  10. Processing difficulties and instability of carbohydrate microneedle arrays

    PubMed Central

    Donnelly, Ryan F.; Morrow, Desmond I.J.; Singh, Thakur R.R.; Migalska, Katarzyna; McCarron, Paul A.; O’Mahony, Conor; Woolfson, A. David

    2010-01-01

    Background A number of reports have suggested that many of the problems currently associated with the use of microneedle (MN) arrays for transdermal drug delivery could be addressed by using drug-loaded MN arrays prepared by moulding hot melts of carbohydrate materials. Methods In this study, we explored the processing, handling, and storage of MN arrays prepared from galactose with a view to clinical application. Results Galactose required a high processing temperature (160°C), and molten galactose was difficult to work with. Substantial losses of the model drugs 5-aminolevulinic acid (ALA) and bovine serum albumin were incurred during processing. While relatively small forces caused significant reductions in MN height when applied to an aluminium block, this was not observed during their relatively facile insertion into heat-stripped epidermis. Drug release experiments using ALA-loaded MN arrays revealed that less than 0.05% of the total drug loading was released across a model silicone membrane. Similarly, only low amounts of ALA (approximately 0.13%) and undetectable amounts of bovine serum albumin were delivered when galactose arrays were combined with aqueous vehicles. Microscopic inspection of the membrane following release studies revealed that no holes could be observed in the membrane, indicating that the partially dissolved galactose sealed the MN-induced holes, thus limiting drug delivery. Indeed, depth penetration studies into excised porcine skin revealed that there was no significant increase in ALA delivery using galactose MN arrays, compared to control (P value < 0.05). Galactose MNs were unstable at ambient relative humidities and became adhesive. Conclusion The processing difficulties and instability encountered in this study are likely to preclude successful clinical application of carbohydrate MNs. The findings of this study are of particular importance to those in the pharmaceutical industry involved in the design and formulation of

  11. Multisensor Fire Observations

    NASA Technical Reports Server (NTRS)

    Boquist, C.

    2004-01-01

    This DVD includes animations of multisensor fire observations from the following satellite sources: Landsat, GOES, TOMS, Terra, QuikSCAT, and TRMM. Some of the animations are included in multiple versions of a short video presentation on the DVD which focuses on the Hayman, Rodeo-Chediski, and Biscuit fires during the 2002 North American fire season. In one version of the presentation, MODIS, TRMM, GOES, and QuikSCAT data are incorporated into the animations of these wildfires. These data products provided rain, wind, cloud, and aerosol data on the fires, and monitored the smoke and destruction created by them. Another presentation on the DVD consists of a panel discussion, in which experts from academia, NASA, and the U.S. Forest Service answer questions on the role of NASA in fighting forest fires, the role of the Terra satellite and its instruments, including the Moderate Resolution Imaging Spectroradiometer (MODIS), in fire fighting decision making, and the role of fire in the Earth's climate. The third section of the DVD features several animations of fires over the years 2001-2003, including animations of global and North American fires, and specific fires from 2003 in California, Washington, Montana, and Arizona.

  12. Multisensor image cueing (MUSIC)

    NASA Astrophysics Data System (ADS)

    Rodvold, David; Patterson, Tim J.

    2002-07-01

    There have been many years of research and development in the Automatic Target Recognition (ATR) community. This development has resulted in numerous algorithms to perform target detection automatically. The morphing of the ATR acronym to Aided Target Recognition provides a succinct commentary regarding the success of the automatic target recognition research. Now that the goal is aided recognition, many of the algorithms which were not able to provide autonomous recognition may now provide valuable assistance in cueing a human analyst where to look in the images under consideration. This paper describes the MUSIC system being developed for the US Air Force to provide multisensor image cueing. The tool works across multiple image phenomenologies and fuses the evidence across the set of available imagery. MUSIC is designed to work with a wide variety of sensors and platforms, and provide cueing to an image analyst in an information-rich environment. The paper concentrates on the current integration of algorithms into an extensible infrastructure to allow cueing in multiple image types.

  13. Multi-sensor analysis of urban ecosystems

    USGS Publications Warehouse

    Gallo, K.; Ji, L.

    2004-01-01

    This study examines the synthesis of multiple space-based sensors to characterize the urban environment. Single-scene data (e.g., ASTER visible and near-IR surface reflectance, and land surface temperature data), multi-temporal data (e.g., one year of 16-day MODIS and AVHRR vegetation index data), and DMSP-OLS nighttime light data acquired in the early 1990s and 2000 were evaluated for urban ecosystem analysis. The advantages of a multi-sensor approach for the analysis of urban ecosystem processes are discussed.

  14. Fusion of multisensor, multispectral, and defocused images

    NASA Astrophysics Data System (ADS)

    Shahid, Mohd.; Gupta, Sumana

    2005-10-01

    Fusion is basically the extraction of the best of the inputs and its conveyance to the output. In this paper, we present an image fusion technique using the concept of perceptual information across the bands. The algorithm is relevant to visual sensitivity and is tested by merging multisensor, multispectral, and defocused images. Fusion is achieved through the formation of one fused pyramid using the DWT coefficients from the decomposed pyramids of the source images. The fused image is obtained through a conventional discrete wavelet transform (DWT) reconstruction process. Results obtained using the proposed method show a significant reduction of distortion artifacts and a large preservation of spectral information.
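
    A minimal sketch of generic DWT-based fusion of two registered source images is given below: decompose both images, average the approximation coefficients, keep the detail coefficient of larger magnitude, then reconstruct. This max-abs rule stands in for the paper's perceptual weighting purely to show the pipeline; the pywt package and the 'db2' wavelet are assumptions.

```python
import numpy as np
import pywt

def fuse_dwt(img_a, img_b, wavelet='db2'):
    """Single-level 2-D DWT fusion of two same-size grayscale images."""
    ca, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
    cb, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)  # keep stronger detail
    fused = ((ca + cb) / 2.0,
             (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
    return pywt.idwt2(fused, wavelet)
```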

  15. Application of Multi-Sensor Information Fusion Method Based on Rough Sets and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Xue, Jinxue; Wang, Guohu; Wang, Xiaoqiang; Cui, Fengkui

    In order to improve the precision and data processing speed of multi-sensor information fusion, a multi-sensor data fusion algorithm is studied in this paper. First, rough set theory (RS) is used for attribute reduction of the parameter set, exploiting its strengths in handling large amounts of data to eliminate redundant information. The data are then trained and classified by a Support Vector Machine (SVM). Experimental results showed that this method can improve the speed and accuracy of a multi-sensor fusion system.
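
    The two-stage flow described above (reduce attributes, then classify with an SVM) can be sketched as a simple pipeline. The rough-set attribute reduction itself is not reproduced here; a generic scikit-learn feature selector stands in for it purely to show the pipeline shape, and all parameters are illustrative assumptions.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_fusion_classifier(k_features=10):
    return make_pipeline(
        SelectKBest(f_classif, k=k_features),   # stand-in for RS attribute reduction
        SVC(kernel='rbf', C=1.0, gamma='scale') # SVM classification of reduced features
    )

# Usage (hypothetical data): clf = build_fusion_classifier().fit(sensor_features, labels)
```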

  16. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable. PMID:26737215
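
    Spike detection of the kind mentioned above is embarrassingly parallel across electrodes, which is what makes it a natural target for multi-core or GPU acceleration. The sketch below runs a simple threshold detector per channel with a process pool; the threshold rule (median-absolute-deviation based) and channel layout are assumptions, not the cited tool's algorithm.

```python
import numpy as np
from multiprocessing import Pool

def detect_spikes(trace, fs=20000.0):
    """Return spike times (s) where the trace crosses a robust negative threshold."""
    thr = 5.0 * np.median(np.abs(trace)) / 0.6745     # MAD-based noise estimate
    below = trace < -thr
    onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1   # downward threshold crossings
    return onsets / fs

def detect_all(channels):
    """channels: (n_electrodes, n_samples) array of raw MEA traces, processed in parallel."""
    with Pool() as pool:
        return pool.map(detect_spikes, list(channels))
```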

  17. Enhancement of data analysis through multisensor data fusion technology

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper focuses on application of multisensor data fusion for high quality data analysis and processing in measurement and instrumentation. A practical, general data fusion scheme is established on the basis of feature extraction and merging of data from multiple sensors. This scheme integrates...

  18. The Multi-sensor Airborne Radiation Survey (MARS) Instrument

    SciTech Connect

    Fast, James E.; Aalseth, Craig E.; Asner, David M.; Bonebrake, Christopher A.; Day, Anthony R.; Dorow, Kevin E.; Fuller, Erin S.; Glasgow, Brian D.; Hossbach, Todd W.; Hyronimus, Brian J.; Jensen, Jeffrey L.; Johnson, Kenneth I.; Jordan, David V.; Morgen, Gerald P.; Morris, Scott J.; Mullen, O Dennis; Myers, Allan W.; Pitts, W. Karl; Rohrer, John S.; Runkle, Robert C.; Seifert, Allen; Shergur, Jason M.; Stave, Sean C.; Tatishvili, Gocha; Thompson, Robert C.; Todd, Lindsay C.; Warren, Glen A.; Willett, Jesse A.; Wood, Lynn S.

    2013-01-11

    The Multi-sensor Airborne Radiation Survey (MARS) project has developed a new single-cryostat detector array design for high purity germanium (HPGe) gamma-ray spectrometers that achieves the high detection efficiency required for stand-off detection and actionable characterization of radiological threats. This approach, we found, is necessary since a high-efficiency HPGe detector can only be built as an array due to limitations in growing large germanium crystals. Moreover, the system is ruggedized and shock mounted for use in a variety of field applications, including airborne and maritime operations.

  19. Signal Processing for a Lunar Array: Minimizing Power Consumption

    NASA Technical Reports Server (NTRS)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study: (1) a Lunar Radio Array for low-frequency, high-redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high-precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the intergalactic medium and the evolution of large-scale structures; and (4) testing whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array is a radio interferometer based on the far side of the Moon, which is necessary for precision measurements, provides shielding from Earth-based and solar RFI, and has no permanent ionosphere. It requires a minimum collecting area of approximately 1 square km and a brightness sensitivity of 10 mK, and several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  20. Performance of redundant disk array organizations in transaction processing environments

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    A performance evaluation is conducted for two redundant disk-array organizations in a transaction-processing environment, relative to the performance of both mirrored disk organizations and organizations using neither striping nor redundancy. The proposed parity-striping alternative to striping with rotated parity is shown to furnish rapid recovery from failure at the same low storage cost without interleaving the data over multiple disks. Both noncached systems and systems using a nonvolatile cache as the controller are considered.

  1. Physics-based signal processing algorithms for micromachined cantilever arrays

    DOEpatents

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
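
    The model-compare step described in the patent abstract can be illustrated generically: an assumed deflection model is evaluated, the measured cantilever deflection is compared against it, and the residual decides whether the measurement is consistent with the model. The damped-oscillator model form and the threshold below are illustrative assumptions only, not the patented algorithm.

```python
import numpy as np

def deflection_model(t, amplitude, decay, freq):
    """Assumed cantilever deflection model: exponentially damped oscillation."""
    return amplitude * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t)

def consistent_with_model(t, measured, amplitude, decay, freq, rel_threshold=0.1):
    predicted = deflection_model(t, amplitude, decay, freq)
    residual = measured - predicted
    # small RMS residual relative to the predicted amplitude => signal matches the model
    return np.sqrt(np.mean(residual ** 2)) < rel_threshold * np.max(np.abs(predicted))
```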

  2. TRIGA: Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Pingree, Paula J.; Torgerson, J. Leigh

    2006-01-01

    We present the Telecommunications protocol processing subsystem using Reconfigurable Interoperable Gate Arrays (TRIGA), a novel approach that unifies fault tolerance, error correction coding and interplanetary communication protocol off-loading to implement CCSDS File Delivery Protocol and Datalink layers. The new reconfigurable architecture offers more than one order of magnitude throughput increase while reducing footprint requirements in memory, command and data handling processor utilization, communication system interconnects and power consumption.

  3. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    SciTech Connect

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2% accuracy of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.
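
    The empirical matched field idea can be sketched as follows: narrow-band "replica" vectors are calibrated from repeating events of known origin, and a new event is attributed to the source whose replicas capture the most power, Bartlett-style. Shapes, the single-band simplification, and the eigenvector-based calibration are assumptions for illustration, not the authors' full multi-band procedure.

```python
import numpy as np

def calibrate_replicas(training_snapshots):
    """training_snapshots: dict source -> (n_sensors, K) complex narrow-band snapshots."""
    replicas = {}
    for source, snaps in training_snapshots.items():
        cov = snaps @ snaps.conj().T / snaps.shape[1]
        w, v = np.linalg.eigh(cov)
        replicas[source] = v[:, -1]          # dominant eigenvector as the empirical replica
    return replicas

def classify(event_snapshot, replicas):
    """event_snapshot: (n_sensors,) complex snapshot of the new event."""
    x = event_snapshot / np.linalg.norm(event_snapshot)
    power = {src: np.abs(np.vdot(r, x)) ** 2 for src, r in replicas.items()}  # Bartlett power
    return max(power, key=power.get)
```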

  4. Principles of data-fusion in multi-sensor systems for non-destructive testing

    NASA Astrophysics Data System (ADS)

    Chioclea, Shmuel; Dickstein, Phineas

    2000-05-01

    In recent years, there has been progress in the application of measurement and control systems that employ multi-sensor arrays. Several algorithms and techniques have been developed for the integration of the information obtained from the sensors. The fusion of the data may be complicated by the fact that each sensor has its own performance characteristics and that different sensors may detect different physical phenomena. As a result, data fusion turns out to be a multidisciplinary field, which applies principles adopted from other fields such as signal processing, artificial intelligence, statistics, and information theory. The data fusion machine tries to imitate the human brain in combining data from numerous sensors and making optimal inferences about the environment. The present paper provides a critical review of data fusion algorithms and techniques and a trenchant summary of the experience gained to date from the several preliminary NDT studies that have applied multi-sensor data fusion systems. Consequently, this paper provides a list of rules and criteria to be followed in future applications of data fusion to nondestructive testing.

  5. Multisensor classification of sedimentary rocks

    NASA Technical Reports Server (NTRS)

    Evans, Diane

    1988-01-01

    A comparison is made between linear discriminant analysis and supervised classification results based on signatures from the Landsat TM, the Thermal Infrared Multispectral Scanner (TIMS), and airborne SAR, alone and combined into extended spectral signatures for seven sedimentary rock units exposed on the margin of the Wind River Basin, Wyoming. Results from a linear discriminant analysis showed that training-area classification accuracies based on the multisensor data were improved an average of 15 percent over TM alone, 24 percent over TIMS alone, and 46 percent over SAR alone, with similar improvement resulting when supervised multisensor classification maps were compared to supervised, individual sensor classification maps. When training area signatures were used to map spectrally similar materials in an adjacent area, the average classification accuracy improved 19 percent using the multisensor data over TM alone, 2 percent over TIMS alone, and 11 percent over SAR alone. It is concluded that certain sedimentary lithologies may be accurately mapped using a single sensor, but classification of a variety of rock types can be improved using multisensor data sets that are sensitive to different characteristics such as mineralogy and surface roughness.

  6. Flat-plate solar array project. Volume 5: Process development

    NASA Astrophysics Data System (ADS)

    Gallagher, B.; Alexander, P.; Burger, D.

    1986-10-01

    The goal of the Process Development Area, as part of the Flat-Plate Solar Array (FSA) Project, was to develop and demonstrate solar cell fabrication and module assembly process technologies required to meet the cost, lifetime, production capacity, and performance goals of the FSA Project. R&D efforts expended by Government, Industry, and Universities in developing processes capable of meeting the project's goals under volume production conditions are summarized. The cost goals allocated for processing were demonstrated by small-volume quantities that were extrapolated by cost analysis to large-volume production. To provide proper focus and coverage of the process development effort, four separate technology sections are discussed: surface preparation, junction formation, metallization, and module assembly.

  7. Flat-plate solar array project. Volume 5: Process development

    NASA Technical Reports Server (NTRS)

    Gallagher, B.; Alexander, P.; Burger, D.

    1986-01-01

    The goal of the Process Development Area, as part of the Flat-Plate Solar Array (FSA) Project, was to develop and demonstrate solar cell fabrication and module assembly process technologies required to meet the cost, lifetime, production capacity, and performance goals of the FSA Project. R&D efforts expended by Government, Industry, and Universities in developing processes capable of meeting the project's goals under volume production conditions are summarized. The cost goals allocated for processing were demonstrated by small-volume quantities that were extrapolated by cost analysis to large-volume production. To provide proper focus and coverage of the process development effort, four separate technology sections are discussed: surface preparation, junction formation, metallization, and module assembly.

  8. Irma multisensor predictive signature model

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Flynn, David S.; Wellfare, Michael R.; Richards, Mike; Prestwood, Lee

    1995-06-01

    The Irma synthetic signature model was one of the first high resolution synthetic infrared (IR) target and background signature models to be developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory (WL/MN), the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser channel. This two channel version, Irma 3.0, was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model which supported correlated frame-to-frame imagery. This and other improvements were released in Irma 2.2. Recently, Irma 3.2, a passive IR/millimeter wave (MMW) code, was completed. Currently, upgrades are underway to include an active MMW channel. Designated Irma 4.0, this code will serve as a cornerstone of sensor fusion research in the laboratory from 6.1 concept development to 6.3 technology demonstration programs for precision guided munitions. Several significant milestones have been reached in this development process and are demonstrated. The Irma 4.0 software design has been developed and interim results are available. Irma is being developed to facilitate multi-sensor smart weapons research and development. It is currently in distribution to over 80 agencies within the U.S. Air Force, U.S. Army, U.S. Navy, ARPA, NASA, Department of Transportation, academia, and industry.

  9. Electronic Processing And Advantages Of CMT Focal Plane Arrays

    NASA Astrophysics Data System (ADS)

    Murphy, Kevin S.; Dennis, Peter N.; Bradley, Derek J.

    1990-04-01

    There have been many advances in thermal imaging systems and components in recent years such that an infrared capability is now readily available and accepted in a variety of military and civilian applications. Conventional thermal imagers such as the UK common module imager use a mechanical scanning system to sweep a small array of detectors across the thermal scene to generate a high definition TV compatible output. Although excellent imagery can be obtained from this type of system, there are some inherent disadvantages, amongst which are the need for a high speed line scan mechanism and the fundamental limit in thermal resolution due to the low stare efficiency of the system. With the advent of two dimensional focal plane array detectors, staring array imagers can now be designed and constructed in which the scanning mechanism is removed. Excellent thermal resolution can be obtained from such imagers due to the relatively long stare times. The recent progress in this technology will be discussed in this paper together with a description of the signal processing requirements of this type of imaging system.

  10. ArrayPipe: a flexible processing pipeline for microarray data.

    PubMed

    Hokamp, Karsten; Roche, Fiona M; Acab, Michael; Rousseau, Marc-Etienne; Kuo, Byron; Goode, David; Aeschliman, Dana; Bryan, Jenny; Babiuk, Lorne A; Hancock, Robert E W; Brinkman, Fiona S L

    2004-07-01

    A number of microarray analysis software packages exist already; however, none combines the user-friendly features of a web-based interface with the ability to analyse multiple arrays at once using flexible analysis steps. The ArrayPipe web server (freely available at www.pathogenomics.ca/arraypipe) allows the automated application of complex analyses to microarray data which can range from single slides to large data sets including replicates and dye-swaps. It handles output from most commonly used quantification software packages for dual-labelled arrays. Application features range from quality assessment of slides through various data visualizations to multi-step analyses including normalization, detection of differentially expressed genes, and comparison and highlighting of gene lists. A highly customizable action set-up facilitates unrestricted arrangement of functions, which can be stored as action profiles. A unique combination of web-based and command-line functionality enables comfortable configuration of processes that can be repeatedly applied to large data sets in high throughput. The output consists of reports formatted as standard web pages and tab-delimited lists of calculated values that can be inserted into other analysis programs. Additional features, such as web-based spreadsheet functionality, auto-parallelization and password protection make this a powerful tool in microarray research for individuals and large groups alike. PMID:15215429

  11. Multi-sensor electrometer

    NASA Technical Reports Server (NTRS)

    Gompf, Raymond (Inventor); Buehler, Martin C. (Inventor)

    2003-01-01

    An array of triboelectric sensors is used for testing the electrostatic properties of a remote environment. The sensors may be mounted in the heel of a robot arm scoop. To determine the triboelectric properties of a planet surface, the robot arm scoop may be rubbed on the soil of the planet and the triboelectrically developed charge measured. By having an array of sensors, different insulating materials may be measured simultaneously. The insulating materials may be selected so their triboelectric properties cover a desired range. By mounting the sensor on a robot arm scoop, the measurements can be obtained during an unmanned mission.

  12. Welding Process Feedback and Inspection Optimization Using Ultrasonic Phased Arrays

    NASA Astrophysics Data System (ADS)

    Hopkins, D. L.; Neau, G. N.; Davis, W. B.

    2009-03-01

    Measurements performed on friction-stir butt welds in aluminum and resistance spot welds in galvanized steel are used to illustrate how ultrasonic phased arrays can be used to provide high-resolution images of welds. Examples are presented that demonstrate how information extracted from the ultrasonic signals can be used to provide reliable feedback to welding processes. Modeling results are used to demonstrate how weld inspections can be optimized using beam-forming strategies that help overcome the influence of surface conditions and part distortion.

  13. Superconducting infrared detector arrays with integrated processing circuitry

    SciTech Connect

    Osterman, D.P.; Marr, P.; Dang, H.; Yao, C.T.; Radparvar, M.

    1991-03-01

    This paper reports on thin film Josephson junctions used as infrared detectors, which function by a thermal sensing mechanism. In addition to the potential for high sensitivity over a broad range of optical wavelengths, they are ideally suited for integration with superconducting electronics on a single wafer. A project at HYPRES to develop these arrays is directed along two avenues: maximizing the sensitivity of individual Josephson junction detector/SQUID amplifier units and developing superconducting on-chip processing circuitry - multiplexers and A-to-D converters.

  14. Microbubble array for on-chip worm processing

    NASA Astrophysics Data System (ADS)

    Xu, Yuhao; Hashmi, Ali; Yu, Gan; Lu, Xiaonan; Kwon, Hyuck-Jin; Chen, Xiaolin; Xu, Jie

    2013-01-01

    We present an acoustic non-contact technique for achieving trapping, enrichment, and manipulation of Caenorhabditis elegans using an array of oscillating microbubbles. We characterize the trapping efficiency and enrichment ratio under various flow conditions, and demonstrate a single-worm manipulation mechanism through temporal actuation of bubbles. Oscillating bubbles are versatile tools for processing worms in a microfluidic environment because of the complex interactions among the acoustic field, microbubbles, fluid flow, and live animals. We explain the operating mechanisms used in our device by the interplay among the secondary acoustic radiation force, the drag force, and the propulsive force of C. elegans.

  15. SAR processing with stepped chirps and phased array antennas.

    SciTech Connect

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas, but they can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with its own steering phase operation. This overcomes the dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.
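
    The assembly of an equivalent wideband chirp from stepped narrow-band segments can be illustrated numerically. The sketch below is only an illustration of the general idea, with made-up parameters rather than anything from the report: it splits a wideband LFM chirp into contiguous lesser-bandwidth chirp segments, each at its own stepped center frequency with a pulse-to-pulse phase term, and verifies that concatenating the segments reproduces the full-bandwidth chirp.

```python
import numpy as np

# Illustrative parameters (not from the report): a wideband LFM chirp is
# reassembled from N contiguous narrow-band chirp segments, each of
# bandwidth B/N and centred at a stepped frequency.
fs = 200e6                 # sample rate (Hz)
N = 4                      # number of narrow-band segments
nseg = 1000                # samples per segment
n_total = N * nseg
T = n_total / fs           # total chirp duration (s)
Tseg = nseg / fs           # segment duration (s)
B = 80e6                   # total bandwidth (Hz)
k = B / T                  # chirp rate (Hz/s)

t_full = np.arange(n_total) / fs
full_chirp = np.exp(1j * np.pi * k * (t_full - T / 2) ** 2)   # reference wideband LFM

segments = []
for m in range(N):
    tau = np.arange(nseg) / fs                      # time within segment m
    a = m * Tseg - T / 2                            # segment start, relative to chirp centre
    seg = np.exp(1j * (np.pi * k * a ** 2           # pulse-to-pulse phase term
                       + 2 * np.pi * (k * a) * tau  # stepped start frequency k*a
                       + np.pi * k * tau ** 2))     # narrow-band chirp of bandwidth B/N
    segments.append(seg)

assembled = np.concatenate(segments)
print(np.max(np.abs(assembled - full_chirp)))   # close to zero: segments reassemble the wideband chirp
```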

  16. Solution processed semiconductor alloy nanowire arrays for optoelectronic applications

    NASA Astrophysics Data System (ADS)

    Shimpi, Paresh R.

    In this dissertation, we use ZnO nanowires as a model system to investigate the potential of solution routes for bandgap engineering in semiconductor nanowires. Successful Mg alloying into ZnO nanowire arrays has been achieved using a two-step sequential hydrothermal method at low temperature (<155°C) without a post-annealing process. Both room-temperature and 40 K photoluminescence (PL) spectroscopy revealed enhanced and blue-shifted near-band-edge ultraviolet (NBE UV) emission in the Mg-alloyed ZnO (ZnMgO) nanowire arrays compared with ZnO nanowires. The specific template of densely packed ZnO nanowires is found to be instrumental in achieving Mg alloying in the low-temperature solution process. By optimizing the density of ZnO nanowires and the precursor concentration, 8-10 at.% Mg content has been achieved in ZnMgO nanowires. Post-annealing treatments were conducted in oxygen-rich and oxygen-deficient environments at different temperatures and durations on silicon and quartz substrates in order to study the evolution of the structural and optical properties of the ZnMgO nanowire arrays. Vacuum-annealed ZnMgO nanowires on both substrates retained their hexagonal structure, and PL results showed enhanced but red-shifted NBE UV emission compared to ZnO nanowires, with the visible emission nearly suppressed, suggesting a reduced defect concentration and improved crystallinity of the nanowires. In contrast, for ambient-annealed ZnMgO nanowires on silicon substrates, as the annealing temperature increased from 400°C to 900°C, the intensity of the visible emission across the blue-green-yellow-red band (˜400-660 nm) increased whereas the intensity of the NBE UV peak decreased and was eventually quenched completely. This might be due to interface diffusion of oxidized Si (SiOx) and the formation of (Zn,Mg)1.7SiO4 epitaxially overcoated around individual ZnMgO nanowires. On the other hand, ambient-annealed ZnMgO nanowires grown on quartz showed a ˜6-10 nm blue-shift in

  17. Array Processing in the Cloud: the rasdaman Approach

    NASA Astrophysics Data System (ADS)

    Merticariu, Vlad; Dumitru, Alex

    2015-04-01

    The multi-dimensional array data model is gaining more and more attention when dealing with Big Data challenges in a variety of domains such as climate simulations, geographic information systems, medical imaging or astronomical observations. Solutions provided by classical Big Data tools such as Key-Value Stores and MapReduce, as well as traditional relational databases, have proved to be limited in domains associated with multi-dimensional data. This problem has been addressed by the field of array databases, in which systems provide database services for raster data without imposing limitations on the number of dimensions that a dataset can have. Examples of datasets commonly handled by array databases include 1-dimensional sensor data, 2-D satellite imagery, 3-D x/y/t image time series as well as x/y/z geophysical voxel data, and 4-D x/y/z/t weather data; and this can grow as large as simulations of the whole universe when it comes to astrophysics. rasdaman is a well-established array database which implements many optimizations for dealing with large data volumes and operation complexity. Among these, the latest is intra-query parallelization support: a network of machines collaborates in answering a single array database query by dividing it into independent sub-queries sent to different servers. This enables massive processing speed-ups, which promise solutions to research challenges on multi-Petabyte data cubes. Several correlated factors influence the speedup that intra-query parallelisation brings: the number of servers, the capabilities of each server, the quality of the network, the availability of the data to the server that needs it in order to compute the result, and many more. In the effort of adapting the engine to cloud processing patterns, two main components have been identified: one that handles communication and gathers information about the arrays sitting on every server, and a processing unit responsible for dividing work

  18. Signal processing of microbolometer infrared focal-plane arrays

    NASA Astrophysics Data System (ADS)

    Zhang, Junju; Qian, Yunsheng; Chang, Benkang; Xing, Suxia; Sun, Lianjun

    2005-01-01

    A signal-processing circuit for a 320×240 uncooled-microbolometer infrared focal-plane array is presented, and the software design of the circuit system is also discussed in detail. The circuit comprises an FPGA, D/A and A/D converters, SRAM, Flash, and a DSP, among which the FPGA is the crucial part: it generates the drive signals for the focal-plane array and performs nonuniformity correction, image enhancement, and video composition. The DSP mainly offers auxiliary functions, carrying out communication with the PC and loading data at power-up. Phase-locked loops (PLLs) are used to generate high-quality clocks with low phase jitter, and multiple clocks are used to satisfy the demands of the focal-plane array, A/D, D/A, and FPGA. An alternating access structure is used to read and write the SRAM in order to avoid conflicts between modules, and FIFOs embedded in the FPGA not only make full use of FPGA resources but also act as channels between modules running on different clocks. In addition, the working conditions, operating process, physical design, and management of the circuit are discussed. On the software side, all the function modules realized in the FPGA and DSP mentioned above are described explicitly; for the nonuniformity correction module in particular, a pipeline structure is designed to raise the working frequency and support more complex algorithms.
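
    As a concrete illustration of the nonuniformity-correction step (realized in the FPGA in the paper), the sketch below shows a standard two-point gain/offset correction in NumPy. The array size matches the 320×240 format, but the blackbody levels, responsivity spread, and scene are invented for the demonstration and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of two-point non-uniformity correction (NUC) for a 320x240
# focal-plane array; calibration levels and pixel spreads are hypothetical.
rng = np.random.default_rng(0)
H, W = 240, 320
gain_true = 1.0 + 0.05 * rng.standard_normal((H, W))    # per-pixel responsivity spread
offset_true = 50.0 * rng.standard_normal((H, W))         # per-pixel offset spread

def sensor(scene):
    """Simulate the raw, non-uniform FPA output for a given scene radiance."""
    return gain_true * scene + offset_true

# Calibration: two uniform blackbody frames at low and high radiance levels.
L_low, L_high = 1000.0, 3000.0
raw_low = sensor(L_low * np.ones((H, W)))
raw_high = sensor(L_high * np.ones((H, W)))

gain_corr = (L_high - L_low) / (raw_high - raw_low)       # per-pixel gain coefficient
offset_corr = L_low - gain_corr * raw_low                 # per-pixel offset coefficient

# Apply the correction to an arbitrary scene frame.
scene = rng.uniform(1200.0, 2800.0, size=(H, W))
corrected = gain_corr * sensor(scene) + offset_corr
print(np.max(np.abs(corrected - scene)))   # close to zero: fixed-pattern noise removed
```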

  19. Room geometry inference based on spherical microphone array eigenbeam processing.

    PubMed

    Mabande, Edwin; Kowalczyk, Konrad; Sun, Haohai; Kellermann, Walter

    2013-10-01

    The knowledge of parameters characterizing an acoustic environment, such as the geometric information about a room, can be used to enhance the performance of several audio applications. In this paper, a novel method for three-dimensional room geometry inference based on robust and high-resolution beamforming techniques for spherical microphone arrays is presented. Unlike other approaches that are based on the measurement and processing of multiple room impulse responses, here, microphone array signal processing techniques for uncontrolled broadband acoustic signals are applied. First, the directions of arrival (DOAs) and time differences of arrival (TDOAs) of the direct signal and room reflections are estimated using high-resolution robust broadband beamforming techniques and cross-correlation analysis. In this context, the main challenges include the low reflected-signal to background-noise power ratio, the low energy of reflected signals relative to the direct signal, and their strong correlation with the direct signal and among each other. Second, the DOA and TDOA information is combined to infer the room geometry using geometric relations. The high accuracy of the proposed room geometry inference technique is confirmed by experimental evaluations based on both simulated and measured data for moderately reverberant rooms. PMID:24116416
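
    A minimal numerical sketch of the final geometric step may help fix ideas: under the image-source model, a first-order reflection's DOA and its TDOA relative to the direct arrival place the image source, and the reflecting wall is the plane bisecting the source and its image. The positions, wall, and "measurements" below are invented, and the source range is assumed known here, whereas the full method estimates it from the data.

```python
import numpy as np

# Illustrative image-source geometry: infer one wall from one reflection.
c = 343.0                                  # speed of sound (m/s)
source = np.array([2.0, 1.0, 0.5])         # source position rel. to array (assumed known, m)
wall_x = 4.0                               # a wall at x = 4 m (ground truth for the demo)
image = np.array([2 * wall_x - source[0], source[1], source[2]])   # image source

# "Measurements" the beamforming / cross-correlation stage would deliver:
doa_refl = image / np.linalg.norm(image)                    # DOA of the reflection
tdoa = (np.linalg.norm(image) - np.linalg.norm(source)) / c # delay rel. to direct path

# Geometry inference: place the image source, bisect to recover the wall.
r_image = np.linalg.norm(source) + c * tdoa
image_est = r_image * doa_refl
wall_point = 0.5 * (source + image_est)                     # a point on the wall
wall_normal = (image_est - source) / np.linalg.norm(image_est - source)
print(wall_point, wall_normal)   # wall passes through x = 4 with normal along +x
```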

  20. Dependence of magnetization process on thickness of Permalloy antidot arrays

    SciTech Connect

    Merazzo, K. J.; Real, R. P. del; Asenjo, A.; Vazquez, M.

    2011-04-01

    Nanohole films or antidot arrays of Permalloy have been prepared by sputtering Ni80Fe20 onto anodic alumina membrane templates. The film thickness varies from 5 to 47 nm and the antidot diameters range from 42 to 61 nm, for a hexagonal lattice parameter of 105 nm. For the thinner antidot films (5 and 10 nm thick), the magnetic moments locally distribute in a complex manner to reduce the magnetostatic energy, and their mostly reversible magnetization process is ascribed to spin rotations. In the case of the thicker (20 and 47 nm) antidot films, pseudodomain walls appear and the magnetization process is mostly irreversible, with hysteresis reflecting the pinning of wall motion by the nanoholes.

  1. Multisensor Fusion for Change Detection

    NASA Astrophysics Data System (ADS)

    Schenk, T.; Csatho, B.

    2005-12-01

    Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor-invariant features. Such features are caused by the same object-space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting and grouping the features into more abstract entities, we discuss how to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable for dealing with linear and areal features. In contrast to traditional, point-based registration methods, linear and areal features lend themselves to a more robust and more accurate registration. More importantly, the chances of automating the registration process increase significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning with extracted information more versatile; reasoning can be performed in sensor space or in 3-D space, where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed. We demonstrate the feasibility of the proposed multisensor fusion approach

  2. Automated multisensor registration - Requirements and techniques

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S.

    1991-01-01

    The synergistic utilization of data from a suite of remote sensors requires multi-dimensional analysis of the data. Prior to this analysis, processing is required to correct for the systematic geometric distortions characteristic of each sensor, followed by a registration operation to remove any residual offsets. Furthermore, to handle a large volume of data and high data rates, the registration process must be fully automated. A conceptual approach is presented that integrates a variety of registration techniques and selects the candidate algorithm based on certain performance criteria. The performance requirements for an operational algorithm are formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several computational techniques are tested and their performance evaluated using a multisensor test data set assembled from the Landsat TM, Seasat, SIR-B, TIMS, and SPOT sensors. The results are discussed and recommendations for future studies are given.
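
    The paper evaluates several computational registration techniques rather than prescribing one; as a hedged illustration only, one common candidate for removing a residual integer-pixel offset is phase correlation, sketched below on synthetic imagery (both the images and the shift are invented).

```python
import numpy as np

# Phase-correlation sketch: estimate a residual shift between two co-registered
# images from the peak of the normalised cross-power spectrum.
rng = np.random.default_rng(5)
ref = rng.random((256, 256))
shift = (7, -12)                                   # true residual offset (rows, cols)
moving = np.roll(ref, shift, axis=(0, 1))          # "sensed" image with that offset

F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
cross = F1 * np.conj(F2)
corr = np.fft.ifft2(cross / np.abs(cross))         # normalised cross-power spectrum
peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)

# Convert the peak location to a signed shift estimate.
est = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
print(est)   # recovers (-7, 12): the shift that maps 'moving' back onto 'ref'
```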

  3. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric

    2014-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these problems are Data Intensive computing so the data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in a hybrid Cloud (private eucalyptus & public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the

  4. Analysis of Wide-Band Signals Using Wavelet Array Processing

    NASA Astrophysics Data System (ADS)

    Nisii, V.; Saccorotti, G.

    2005-12-01

    Wavelet transforms allow for precise time-frequency localization in the analysis of non-stationary signals. In wavelet analysis, the trade-off between frequency bandwidth and time duration, also known as the Heisenberg inequality, is bypassed using a fully scalable modulated window which solves the signal-cutting problem of the Windowed Fourier Transform. We propose a new seismic array data processing procedure capable of displaying the localized spatial coherence of the signal in both the time and frequency domains, in turn deriving the propagation parameters of the most coherent signals crossing the array. The procedure consists of: a) Wavelet coherence analysis for each station pair of the array, aimed at retrieving the frequency and time localisation of coherent signals. To this purpose, we use the normalised wavelet cross-power spectrum, smoothed along the time and scale domains. We calculate different coherence spectra adopting smoothing windows of increasing lengths; a final, robust estimate of the time-frequency localisation of spatially coherent signals is eventually retrieved from the stack of the individual coherence distributions. This step allows for quick and reliable signal discrimination: wave groups propagating across the network will manifest as high-coherence patches spanning the corresponding time-scale region. b) Once the signals have been localised in the time and frequency domains, their propagation parameters are estimated using a modified MUSIC (MUltiple SIgnal Classification) algorithm. We select the MUSIC approach as it has demonstrated superior performance in the case of low-SNR signals, multiple plane waves simultaneously impinging on the array, and closely separated sources. The narrow-band Coherent Signal Subspace technique is applied to the complex Continuous Wavelet Transform of multichannel data to improve the conditioning of the estimated cross-covariance matrix and the accuracy of the estimated signal eigenvectors. Using
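
    Step (a) can be sketched in a few lines, assuming the PyWavelets package for the continuous wavelet transform; the wavelet, scales, smoothing length, and the synthetic two-channel data below are illustrative choices, not those of the study (in particular, only time-domain smoothing is shown).

```python
import numpy as np
import pywt

# Two synthetic channels sharing a coherent 1 Hz wave group between 20 s and
# 30 s (with a 0.5 s moveout), embedded in independent noise -- a crude
# stand-in for a pair of array stations.
fs = 100.0
t = np.arange(0, 60.0, 1.0 / fs)
rng = np.random.default_rng(1)
burst = np.where((t > 20) & (t < 30), np.sin(2 * np.pi * 1.0 * t), 0.0)
x = burst + 0.5 * rng.standard_normal(t.size)
y = np.roll(burst, 50) + 0.5 * rng.standard_normal(t.size)

# Continuous wavelet transforms (complex Morlet), same scales for both channels.
scales = np.geomspace(8, 128, 40)
Wx, freqs = pywt.cwt(x, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)
Wy, _ = pywt.cwt(y, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)

def smooth(z, n=400):
    """Boxcar smoothing along the time axis (scale-domain smoothing omitted)."""
    kern = np.ones(n) / n
    return np.apply_along_axis(lambda v: np.convolve(v, kern, mode='same'), -1, z)

# Normalised, smoothed wavelet cross-power spectrum ("wavelet coherence").
coh = np.abs(smooth(Wx * np.conj(Wy))) ** 2 / (smooth(np.abs(Wx) ** 2) * smooth(np.abs(Wy) ** 2))
i, j = np.unravel_index(np.argmax(coh), coh.shape)
print(f"peak coherence {coh[i, j]:.2f} at {freqs[i]:.2f} Hz, t = {t[j]:.1f} s")  # expected near 1 Hz, 20-30 s
```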

  5. Geophysical Inversion with Adaptive Array Processing of Ambient Noise

    NASA Astrophysics Data System (ADS)

    Traer, James

    2011-12-01

    Land-based seismic observations of microseisms generated during Tropical Storms Ernesto and Florence are dominated by signals in the 0.15-0.5 Hz band. Data from seafloor hydrophones in shallow water (70 m depth, 130 km off the New Jersey coast) show dominant signals in the gravity-wave frequency band, 0.02-0.18 Hz, and low amplitudes from 0.18-0.3 Hz, suggesting that the significant opposing wave components necessary for double-frequency (DF) microseism generation were negligible at the site. Both storms produced similar spectra, despite their differing sizes, suggesting near-coastal shallow water as the dominant region for observed microseism generation. A mathematical explanation is presented for a sign inversion induced in the passive fathometer response by minimum variance distortionless response (MVDR) beamforming. It shows that, in the region containing the bottom reflection, the MVDR fathometer response is identical to that obtained with conventional processing multiplied by a negative factor. A model is presented for the complete passive fathometer response to ocean surface noise, interfering discrete noise sources, and locally uncorrelated noise in an ideal waveguide. The leading-order term of the ocean surface noise produces the cross-correlation of vertical multipaths and yields the depth of sub-bottom reflectors. Discrete noise incident on the array via multipaths gives multiple peaks in the fathometer response. These peaks may obscure the sub-bottom reflections but can be attenuated with the use of MVDR steering vectors. A theory is presented for the signal-to-noise ratio (SNR) of the seabed reflection peak in the passive fathometer response as a function of seabed depth, seabed reflection coefficient, averaging time, bandwidth, and spatial directivity of the noise field. The passive fathometer algorithm was applied to data from two drifting array experiments in the Mediterranean, Boundary 2003 and 2004, with 0.34 s of averaging time. In the 2004

  6. Joint multisensor exploitation for mine detection

    NASA Astrophysics Data System (ADS)

    Beaven, Scott G.; Stocker, Alan D.; Winter, Edwin M.

    2004-09-01

    Robust, timely, and remote detection of mines and minefields is central to both tactical and humanitarian demining efforts, yet remains elusive for single-sensor systems. Here we present an approach to jointly exploit multisensor data for detection of mines from remotely sensed imagery. LWIR, MWIR, laser, multispectral, and radar sensors have been applied individually to mine detection, and each has shown promise for supporting automated detection. However, none of these sources individually provides a full solution for automated mine detection under all expected mine, background, and environmental conditions. Under support from the Night Vision and Electronic Sensors Directorate (NVESD), we have developed an approach that, through joint exploitation of multiple sensors, improves detection performance over that achieved from a single sensor. In this paper we describe the joint exploitation method, which is based on fundamental detection-theoretic principles, demonstrate the strength of the approach on imagery from minefields, and discuss extensions of the method to additional sensing modalities. The approach uses pre-threshold anomaly detector outputs to formulate accurate models for marginal and joint statistics across multiple detection or sensor features. This joint decision space is modeled and decision boundaries are computed from measured statistics. Since the approach adapts the decision criteria based on the measured statistics and no prior target training information is used, it provides a robust multi-algorithm or multisensor detection statistic. Results from the joint exploitation processing of two different imaging sensors over surface mines, acquired by NVESD, are presented to illustrate the process. The potential of the approach to incorporate additional sensor sources, such as radar, multispectral, and hyperspectral imagery, is also illustrated.

  7. Lithography process of micropore array pattern in Si microchannel plates

    NASA Astrophysics Data System (ADS)

    Fan, Linlin; Han, Jun; Liu, Huan; Wang, Yawei

    2015-02-01

    Microchannel plates (MCPs) are the key component of the image intensifier. Compared with traditional MCPs, Si MCPs fabricated by micro/nanofabrication technologies offer high gain, low noise, and high resolution. In this paper, the lithography process is studied for fabricating a periodic micropore array with 10 µm pores and 5 µm pitch on Si. The effects of exposure time, reversal bake temperature, and development time on the lithography quality are examined. A series of experiments yielded the following optimized parameters: the photoresist film is spin-coated at a low speed of 500/15 (rpm/s) and a high speed of 4500/50 (rpm/s); the soft bake is 10 min at 100°; the exposure time is 10 s; the reversal bake is 80 s at 115°; and the development time is 55 s. Microscope observation and measurement show that the pattern is complete and its dimensions are accurate, meeting the requirements of the lithography process for fabricating Si MCPs.

  8. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and a multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected with their neighboring neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip with active dimensions of 1380 micron x 746 micron in a 2-micron CMOS technology.

  9. Multisensor Retrieval of Atmospheric Properties.

    NASA Astrophysics Data System (ADS)

    Boba Stankov, B.

    1998-09-01

    A new method, Multisensor Retrieval of Atmospheric Properties (MRAP), is presented for deriving vertical profiles of atmospheric parameters throughout the troposphere. MRAP integrates measurements from multiple, diverse, remote sensing and in situ instruments, the combination of which provides better capabilities than any instrument alone. Since remote sensors can deliver measurements automatically and continuously with high time resolution, MRAP provides better coverage than traditional rawinsondes. MRAP's design is flexible, being capable of incorporating measurements from different instruments in order to take advantage of new or developing advanced sensor technology. Furthermore, new or alternative atmospheric parameters for a variety of applications may be easily added as products of MRAP. A combination of passive radiometric, active radar, and in situ observations provides the best temperature and humidity profile measurements. Therefore, MRAP starts with a traditional, radiometer-based, physical retrieval algorithm provided by the International TOVS (TIROS-N Operational Vertical Sounder) Processing Package (ITPP) that constrains the retrieved profiles to agree with brightness temperature measurements. The first-guess profiles required by the ITPP's iterative retrieval algorithm are obtained by using a statistical inversion technique and ground-based remote sensing measurements. Because the individual ground-based remote sensing measurements are usually of sufficiently high quality, the first-guess profiles by themselves provide a satisfactory solution for establishing the atmospheric water vapor and temperature state; the TOVS data are included to provide profiles with better accuracy at higher levels. MRAP provides a physically consistent mechanism for combining the ground- and space-based humidity and temperature profiles. Data that have been used successfully to retrieve humidity and temperature profiles with MRAP are the following: temperature profiles in

  10. Damage Detection in Composite Structures with Wavenumber Array Data Processing

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Guided ultrasonic waves (GUW) have the potential to be an efficient and cost-effective method for rapid damage detection and quantification in large structures. Attractive features include sensitivity to a variety of damage types and the capability of traveling relatively long distances. They have proven to be an efficient approach for crack detection and localization in isotropic materials. However, techniques must be pushed beyond isotropic materials in order to be valid for composite aircraft components. This paper presents our study of GUW propagation and interaction with delamination damage in composite structures using wavenumber array data processing, together with advanced wave propagation simulations. The parallel elastodynamic finite integration technique (EFIT) is used for the example simulations. A multi-dimensional Fourier transform is used to convert time-space wavefield data into the frequency-wavenumber domain. Wave propagation in the wavenumber-frequency domain shows clear distinction among the guided wave modes that are present, which allows a guided wave mode to be extracted through filtering and reconstruction techniques. The presence of delamination causes a corresponding spectral change. Results from 3D CFRP guided wave simulations with delamination damage in flat-plate specimens are used to study wave interaction with the structural defect.
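
    The core wavenumber-processing step, a multi-dimensional Fourier transform of the time-space wavefield followed by filtering in the frequency-wavenumber domain, can be sketched as follows. The two synthetic wave packets merely stand in for guided-wave modes, and all parameters are illustrative rather than taken from the simulations in the paper.

```python
import numpy as np

# Synthetic time-space wavefield u(t, x) with two propagating wave packets of
# different phase velocities, a crude stand-in for two guided-wave modes.
fs, dx = 1.0e6, 1.0e-3               # temporal (Hz) and spatial (m) sampling
nt, nx = 1024, 256
t = np.arange(nt) / fs
x = np.arange(nx) * dx
T, X = np.meshgrid(t, x, indexing='ij')

f0 = 100e3                            # centre frequency (Hz)
c1, c2 = 3000.0, 1500.0               # phase velocities (m/s)
env1 = np.exp(-((T - X / c1 - 200e-6) ** 2) / (2 * (40e-6) ** 2))
env2 = np.exp(-((T - X / c2 - 200e-6) ** 2) / (2 * (40e-6) ** 2))
u = env1 * np.sin(2 * np.pi * f0 * (T - X / c1)) \
    + 0.5 * env2 * np.sin(2 * np.pi * f0 * (T - X / c2))

# 2-D FFT: time-space -> frequency-wavenumber, where the modes separate
# along distinct lines f = c * k.
U = np.fft.fftshift(np.fft.fft2(u))
freqs = np.fft.fftshift(np.fft.fftfreq(nt, d=1 / fs))
ks = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))        # cycles per metre

# Wavenumber filter keeping only the faster (lower-wavenumber) mode,
# then inverse transform to reconstruct that single mode in time-space.
c_mid = 2000.0
mask = (np.abs(ks) < f0 / c_mid)[None, :]
u_mode1 = np.real(np.fft.ifft2(np.fft.ifftshift(U * mask)))
print(u.shape, U.shape, u_mode1.shape)
```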

  11. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack), or at Amazon in the public Cloud or GovCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  12. Model-based Processing of Micro-cantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Lee, C L; Rudd, R E; Burnham, A K

    2004-11-17

    We develop a model-based processor (MBP) for a micro-cantilever array sensor to detect target species in solution. After discussing the generalized framework for this problem, we develop the specific model used in this study. We perform a proof-of-concept experiment, fit the model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: (1) averaged deflection data, and (2) multi-channel data. In both cases the evaluation proceeds by first performing a model-based parameter estimation to extract the model parameters, next performing a Gauss-Markov simulation, designing the optimal MBP and finally applying it to measured experimental data. The simulation is used to evaluate the performance of the MBP in the multi-channel case and compare it to a "smoother" ("averager") typically used in this application. It was shown that the MBP not only provides a significant gain (approximately 80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, though it includes a correctable systematic bias error. The project's primary accomplishment was the successful application of model-based processing to signals from micro-cantilever arrays: a 40-60 dB improvement vs. the smoother algorithm was demonstrated. This result was achieved through the development of appropriate mathematical descriptions for the chemical and mechanical phenomena, and incorporation of these descriptions directly into the model-based signal processor. A significant challenge was the development of the framework which would maximize the usefulness of the signal processing algorithms while ensuring the accuracy of the mathematical description of the chemical-mechanical signal. Experimentally, the difficulty was to identify and characterize the non
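
    The flavour of the comparison (though not the authors' chemical-mechanical model) can be conveyed with a toy Gauss-Markov simulation: a slowly varying deflection observed on several noisy channels, processed by a plain channel average and by a simple Kalman filter built on the same state model. All parameters below are invented for illustration.

```python
import numpy as np

# Toy Gauss-Markov simulation and a minimal model-based (Kalman) processor
# versus a plain channel average; illustrative only.
rng = np.random.default_rng(6)
n, n_ch = 2000, 8
a, q, r = 0.999, 1e-4, 0.25        # state transition, process var., measurement var.

x = np.zeros(n)                    # true deflection (Gauss-Markov process)
for k in range(1, n):
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
z = x[:, None] + np.sqrt(r) * rng.standard_normal((n, n_ch))   # multi-channel data

# (a) "smoother": per-sample average across channels
x_avg = z.mean(axis=1)

# (b) scalar Kalman filter operating on the channel-averaged measurement
x_kf = np.zeros(n)
p = 1.0
r_eff = r / n_ch                   # variance of the averaged measurement
for k in range(1, n):
    x_pred, p_pred = a * x_kf[k - 1], a * a * p + q
    g = p_pred / (p_pred + r_eff)                 # Kalman gain
    x_kf[k] = x_pred + g * (x_avg[k] - x_pred)
    p = (1 - g) * p_pred

print("avg MSE:", np.mean((x_avg - x) ** 2), " KF MSE:", np.mean((x_kf - x) ** 2))
```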

  13. A bit-serial VLSI array processing chip for image processing

    NASA Technical Reports Server (NTRS)

    Heaton, Robert; Blevins, Donald; Davis, Edward

    1990-01-01

    An array processing chip integrating 128 bit-serial processing elements (PEs) on a single die is discussed. Each PE has a 16-function logic unit, a single-bit adder, a 32-b variable-length shift register, and 1 kb of local RAM. Logic in each PE provides the capability to mask PEs individually. A modified grid interconnection scheme allows each PE to communicate with each of its eight nearest neighbors. A 32-b bus is used to transfer data to and from the array in a single cycle. Instruction execution is pipelined, enabling all instructions to be executed in a single cycle. The 1-micron CMOS design contains over 1.1 x 10(exp 6) transistors on an 11.0 x 11.7-mm die.

  14. Expanding Coherent Array Processing to Larger Apertures Using Empirical Matched Field Processing

    SciTech Connect

    Ringdal, F; Harris, D B; Kvaerna, T; Gibbons, S J

    2009-07-23

    We have adapted matched field processing, a method developed in underwater acoustics to detect and locate targets, to classify transient seismic signals arising from mining explosions. Matched field processing, as we apply it, is an empirical technique, using observations of historic events to calibrate the amplitude and phase structure of wavefields incident upon an array aperture for particular repeating sources. The objective of this project is to determine how broadly applicable the method is and to understand the phenomena that control its performance. We obtained our original results in distinguishing events from ten mines in the Khibiny and Olenegorsk mining districts of the Kola Peninsula, for which we had exceptional ground truth information. In a cross-validation test, some 98.2% of 549 explosions were correctly classified by originating mine using just the Pn observations (2.5-12.5 Hz) on the ARCES array at ranges from 350-410 kilometers. These results were achieved despite the fact that the mines are as closely spaced as 3 kilometers. Such classification performance is significantly better than predicted by the Rayleigh limit. Scattering phenomena account for the increased resolution, as we make clear in an analysis of the information carrying capacity of Pn under two alternative propagation scenarios: free-space propagation and propagation with realistic (actually measured) spatial covariance structure. The increase in information capacity over a wide band is captured by the matched field calibrations and used to separate explosions from very closely-spaced sources. In part, the improvement occurs because the calibrations enable coherent processing at frequencies above those normally considered coherent. We are investigating whether similar results can be expected in different regions, with apertures of increasing scale and for diffuse seismicity. We verified similar performance with the closely-spaced Zapolyarni mines, though discovered that it may be
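
    The calibrate-then-correlate idea can be sketched as follows. The replica estimate (an average of normalised narrow-band snapshots of historic events) and the synthetic "mine" wavefields are illustrative stand-ins, not the authors' actual calibration procedure or data.

```python
import numpy as np

# Toy empirical matched field classification: learn one array replica per
# source from historic events, then classify a new event by correlation.
rng = np.random.default_rng(2)
n_ch, n_events = 9, 40                       # array channels, historic events per source

def snapshot(wavefield, snr=25.0):
    """Narrow-band complex array snapshot = source wavefield structure + noise."""
    noise = (rng.standard_normal(n_ch) + 1j * rng.standard_normal(n_ch)) / np.sqrt(2)
    v = wavefield + noise / np.sqrt(snr)
    return v / np.linalg.norm(v)

# Two closely spaced sources, each with its own fixed amplitude/phase structure
# across the aperture -- the quantity the empirical calibration captures.
wf = {m: np.exp(2j * np.pi * rng.random(n_ch)) * (0.5 + rng.random(n_ch)) for m in ("A", "B")}
wf = {m: v / np.linalg.norm(v) for m, v in wf.items()}

# Calibration: average normalised snapshots of historic events from each source.
replicas = {m: np.mean([snapshot(wf[m]) for _ in range(n_events)], axis=0) for m in wf}
replicas = {m: v / np.linalg.norm(v) for m, v in replicas.items()}

# Classification: matched field statistic |replica^H x|^2 for a new event.
x = snapshot(wf["B"])
scores = {m: np.abs(np.vdot(v, x)) ** 2 for m, v in replicas.items()}
print(max(scores, key=scores.get), scores)    # expect "B" to score highest
```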

  15. MITAS: multisensor imaging technology for airborne surveillance

    NASA Astrophysics Data System (ADS)

    Thomas, John D.

    1991-08-01

    MITAS, a unique and low-cost solution to the problem of collecting and processing multisensor imaging data for airborne surveillance operations, has been developed. MITAS results from integrating TAU's established and proven real-time video processing, target tracking, and sensor management software with commercially available image exploitation and map processing software. The MITAS image analysis station (IAS) supports airborne day/night reconnaissance and surveillance missions involving low-altitude collection platforms employing a suite of sensors to perform reconnaissance functions against a variety of ground and sea targets. The system will detect, locate, and recognize threats likely to be encountered in support of counternarcotic operations and in low-intensity conflict areas. The IAS is capable of autonomous, near real-time target exploitation and has the appropriate communication links to remotely located IAS systems for more extended analysis of sensor data. The IAS supports the collection, fusion, and processing of three main imaging sensors: daylight imagery (DIS), forward-looking infrared (FLIR), and infrared line scan (IRLS). The MITAS IAS provides support to all aspects of the airborne surveillance mission, including sensor control, real-time image enhancement, automatic target tracking, sensor fusion, freeze-frame capture, image exploitation, target database management, map processing, remote image transmission, and report generation.

  16. Two subroutines used in processing of arrayed data files

    NASA Astrophysics Data System (ADS)

    Wu, Guang-Jie

    Arrayed data files are commonly used in astronomy. Such a file may be a plain text file edited with common software such as "EDIT", a table compiled in Microsoft Word or Excel, a file in FITS format, etc. The database of the CDS (Centre de Données astronomiques de Strasbourg) holds thousands of star catalogues. Sometimes you may receive a star catalogue from a colleague or friend that was produced with any of various programs and has peculiarities of its own, especially if the catalogue was compiled several years ago. You may often need to deal with such tabulated multidimensional data files, or to build new ones yourself. Reducing or adding dimensions is easy with software such as "EDIT" when it amounts to a row-wise operation; a column-wise operation, however, can cause trouble. In some cases, Tab characters may be present in the file, and different software, or even different printers made by the same company, may treat the Tab character differently: the problem is that a Tab can stand for anything from a single space up to eight spaces. A ready-made program may not be at hand. If the data file can be opened with "EDIT", the two programs presented in this paper help you see what has happened in the file and solve the problem conveniently and easily: they convert all Tab characters into the corresponding spaces, and can pick out, delete, or add blank columns, or link two data files side by side as columns of a single file.
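
    In a modern scripting language the same two operations are short. The sketch below spells out the one-to-eight-spaces Tab logic explicitly and shows a simple character-column extraction; the file names and column range are hypothetical, and Python's built-in str.expandtabs performs the first job as well.

```python
# Sketch of the two operations described above, assuming fixed tab stops of
# eight characters: (1) expand every Tab to the spaces it stands for, so the
# columns line up identically everywhere; (2) extract character columns from
# the space-aligned line. File names and column range are hypothetical.

def expand_tabs(line, tabstop=8):
    """Replace each Tab with 1..tabstop spaces, up to the next tab stop."""
    out, col = [], 0
    for ch in line.rstrip("\n"):
        if ch == "\t":
            n = tabstop - (col % tabstop)
            out.append(" " * n)
            col += n
        else:
            out.append(ch)
            col += 1
    return "".join(out)

def extract_columns(line, start, stop):
    """Pick out character columns [start, stop) from a space-aligned line."""
    return line[start:stop]

if __name__ == "__main__":
    with open("catalogue.txt") as fin, open("catalogue_fixed.txt", "w") as fout:
        for raw in fin:
            fixed = expand_tabs(raw)
            fout.write(extract_columns(fixed, 0, 40) + "\n")   # keep the first 40 columns
```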

  17. Adventures in Creating an Historical Multi-sensor Precipitation Dataset

    NASA Astrophysics Data System (ADS)

    Fuelberg, H. E.

    2008-05-01

    Florida State University has created a ten-year historical multi-sensor precipitation dataset using the National Weather Service's (NWS) Multi-sensor Precipitation Estimator (MPE) software. MPE combines the high spatial resolution of radar-derived estimates with the assumed "ground truth" of the gauges. Input for the procedure included radar-derived hourly digital precipitation arrays on the 4 x 4 km HRAP grid, together with hourly rain gauge data from the National Climatic Data Center and five Florida Water Management Districts. This combination of gauge information provides comparatively high spatial resolution. The MPE output consists of hourly rainfall estimates on the 4 x 4 km grid. This paper will describe the many challenges that we faced in creating the multi-sensor dataset. Some of the topics to be discussed are: 1) Rain gauge data, even if said to have been quality controlled, still need a careful additional check. Objective procedures are needed due to the vast amount of hourly data, and it is challenging to develop a scheme that will catch most errors without deleting valid information. 2) The radar data also require careful scrutiny. Many types of false or erroneous returns lurk within the files and can lead to erroneous multi-sensor results. 3) The MPE procedure contains many adaptable parameters that need to be tuned to account for the density of the available data and the character of the precipitation. These parameters generally will need to be changed based on the geographical area of study. Finally, examples of the MPE dataset will be shown along with brief comparisons with gauge data alone.

  18. Multisensor robot navigation system

    NASA Astrophysics Data System (ADS)

    Persa, Stelian; Jonker, Pieter P.

    2002-02-01

    Almost all robot navigation systems work indoors. Outdoor robot navigation systems offer the potential for new application areas. The biggest single obstacle to building effective robot navigation systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. The hybrid tracker combines rate gyros and accelerometers with a compass and tilt orientation sensor and a DGPS system. Sensor distortions, delays, and drift required compensation to achieve good results. The measurements from the sensors are fused together to compensate for each other's limitations. Analysis and experimental results demonstrate the system's effectiveness. The paper presents a field experiment for a low-cost strapdown-IMU (Inertial Measurement Unit)/DGPS combination, with data processing for the determination of 2-D components of position (trajectory), velocity, and heading. In the present approach we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost ISA (Inertial Sensor Assembly) and because of the relatively small area of the trajectory. The scope of this experiment was to test the feasibility of an integrated DGPS/IMU system of this type and to develop a field evaluation procedure for such a combination.

  19. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.

  20. Structure and Process of Infrared Hot Electron Transistor Arrays

    PubMed Central

    Fu, Richard

    2012-01-01

    An infrared hot-electron transistor (IHET) 5 × 8 array with a common-base configuration that allows two-terminal readout integration was investigated and fabricated for the first time. The IHET structure provides an improvement of up to a factor of six in the photocurrent-to-dark-current ratio compared to the basic quantum well infrared photodetector (QWIP), and hence it improved the array S/N ratio by the same factor. The study also showed for the first time that there is no electrical cross-talk among individual detectors, even though they share the same emitter and base contacts. Thus, the IHET structure is compatible with existing electronic readout circuits for photoconductors in producing sensitive focal plane arrays. PMID:22778655

  1. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    PubMed

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. PMID:26302858

  2. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays

    PubMed Central

    Lutton, Rebecca E.M.; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A.David; Donnelly, Ryan F.

    2015-01-01

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14 × 14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. PMID:26302858

  3. Micromachined Thermoelectric Sensors and Arrays and Process for Producing

    NASA Technical Reports Server (NTRS)

    Foote, Marc C. (Inventor); Jones, Eric W. (Inventor); Caillat, Thierry (Inventor)

    2000-01-01

    Linear arrays with up to 63 micromachined thermopile infrared detectors on silicon substrates have been constructed and tested. Each detector consists of a suspended silicon nitride membrane with 11 thermocouples of sputtered Bi-Te and Bi-Sb-Te thermoelectric films. At room temperature and under vacuum these detectors exhibit response times of 99 ms, zero-frequency D* values of 1.4 x 10(exp 9) cm Hz(exp 1/2)/W, and responsivity values of 1100 V/W when viewing a 1000 K blackbody source. The only measured source of noise above 20 mHz is Johnson noise from the detector resistance. These results represent the best performance reported to date for an array of thermopile detectors. The arrays are well suited for uncooled dispersive point spectrometers. In another embodiment, also with Bi-Te and Bi-Sb-Te thermoelectric materials on micromachined silicon nitride membranes, detector arrays have been produced with D* values as high as 2.2 x 10(exp 9) cm Hz(exp 1/2)/W for 83 ms response times.

  4. Dimpled Ball Grid Array process development for space flight applications

    NASA Technical Reports Server (NTRS)

    Barr, S. L.; Mehta, A.

    2000-01-01

    The 472 Dimpled Ball Grid Array (D-BGA) package has not been used in past space-flight environments; it is therefore necessary to determine the robustness and reliability of its solder joints. The 472 D-BGA packages passed the above environmental tests within the specifications and are now qualified for use on space flight electronics.

  5. Orbital Processing of Eutectic Rod-Like Arrays

    NASA Technical Reports Server (NTRS)

    Larson, David J., Jr.

    1998-01-01

    The eutectic is one of only three solidification classes that exist; the others are the isostructural and peritectic-class reactions. Simplistically, in a binary eutectic phase diagram, a single liquid phase isothermally decomposes into two solid phases in a cooperative manner. The melting-point minimum at the eutectic composition, isothermal solidification temperature, near-isocompositional solidification, and refined solidification microstructure lend themselves naturally to such applications as brazing and soldering, industries that eutectic alloys dominate. Interest in direct process control of microstructures has led, more recently, to in-situ eutectic directional solidification with applications in electro-magnetics and electro-optics. In these cases, controlled structural refinement and the high aspect ratio and regularity of the distributed eutectic phases are highly significant to the fabrication and application of these in-situ natural composites. The natural pattern formation and scaling of the dispersed phase on a sub-micron scale has enormous potential application, since fabricating bulk materials on this scale mechanically has proven to be particularly difficult. It is thus of obvious importance to understand the solidification of eutectic materials, since they are of great commercial significance. The dominant theory that describes eutectic solidification was derived for diffusion-controlled growth of alloys where both solid eutectic phases solidify metallically, i.e., without faceting at the solidification interface. Both high-volume-fraction (lamellar) and low-volume-fraction (rod-like) regular metallic arrays are treated by this theory. Many of the useful solders and brazements, however, and most of the regular in-situ composites are characterized by solidification reactions that are faceted/non-faceted in nature, rather than doubly non-faceted (metallic). Further, diffusion-controlled growth conditions are atypical terrestrially since

  6. Body-Attachable and Stretchable Multisensors Integrated with Wirelessly Rechargeable Energy Storage Devices.

    PubMed

    Kim, Daeil; Kim, Doyeon; Lee, Hyunkyu; Jeong, Yu Ra; Lee, Seung-Jung; Yang, Gwangseok; Kim, Hyoungjun; Lee, Geumbee; Jeon, Sanggeun; Zi, Goangseup; Kim, Jihyun; Ha, Jeong Sook

    2016-01-27

    A stretchable multisensor system is successfully demonstrated with an integrated energy-storage device, an array of microsupercapacitors that can be repeatedly charged via a wireless radio-frequency power receiver on the same stretchable polymer substrate. The integrated devices are interconnected by liquid-metal interconnections and operate stably, without noticeable performance degradation, under the strain caused by skin attachment and under uniaxial strains of up to 50%. PMID:26641239

  7. Acoustic analysis by spherical microphone array processing of room impulse responses.

    PubMed

    Khaykin, Dima; Rafaely, Boaz

    2012-07-01

    Spherical microphone arrays have been recently used for room acoustics analysis, to detect the direction-of-arrival of early room reflections, and compute directional room impulse responses and other spatial room acoustics parameters. Previous works presented methods for room acoustics analysis using spherical arrays that are based on beamforming, e.g., delay-and-sum, regular beamforming, and Dolph-Chebyshev beamforming. Although beamforming methods provide useful directional selectivity, optimal array processing methods can provide enhanced performance. However, these algorithms require an array cross-spectrum matrix with full rank, while array data based on room impulse responses may not satisfy this condition because only a single data frame is available. This paper presents a smoothing technique for the cross-spectrum matrix in the frequency domain, designed for spherical microphone arrays, that can solve the problem of low rank when using room impulse response data, thereby facilitating the use of optimal array processing methods. Frequency smoothing is shown to be performed effectively using spherical arrays, due to the decoupling of frequency and angular components in the spherical harmonics domain. An experimental study with data measured in a real auditorium illustrates the performance of optimal array processing methods such as MUSIC and MVDR compared to beamforming. PMID:22779475
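
    As an illustration of the frequency-smoothing idea described above, the following sketch averages per-bin cross-spectrum matrices over neighbouring frequency bins to restore rank before computing an MVDR (Capon) spatial spectrum. It is a minimal NumPy sketch, not the paper's implementation: the steering vectors, bin counts, and variable names are assumptions for illustration, and the decoupling of frequency and angle specific to the spherical harmonics domain is not modeled.

    ```python
    import numpy as np

    def smoothed_cross_spectrum(X, n_smooth):
        """Average single-frame cross-spectrum matrices over n_smooth neighbouring
        frequency bins to raise the rank of the estimate.
        X: (n_bins, n_channels) complex spectra, e.g. the FFT of impulse responses."""
        n_bins, n_ch = X.shape
        R = np.zeros((n_bins, n_ch, n_ch), dtype=complex)
        for k in range(n_bins):
            lo, hi = max(0, k - n_smooth), min(n_bins, k + n_smooth + 1)
            outer = np.einsum('bi,bj->bij', X[lo:hi], X[lo:hi].conj())
            R[k] = outer.mean(axis=0)
        return R

    def mvdr_spectrum(R, steering):
        """MVDR (Capon) spatial spectrum for one frequency bin.
        steering: (n_directions, n_channels) steering vectors for that bin."""
        loading = 1e-6 * np.trace(R).real / R.shape[0]
        R_inv = np.linalg.pinv(R + loading * np.eye(R.shape[0]))
        denom = np.einsum('di,ij,dj->d', steering.conj(), R_inv, steering).real
        return 1.0 / np.maximum(denom, 1e-12)
    ```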

  8. On the design of systolic-array architectures with applications to signal processing

    SciTech Connect

    Niamat, M.Y.

    1989-01-01

    Systolic arrays are networks of processors that rhythmically compute and pass data through systems. These arrays feature the important properties of modularity, regularity, local interconnections, and a high degree of pipelining and multiprocessing. In this dissertation, several systolic arrays are proposed with applications to real-time signal processing. Specifically, these arrays are designed for the rapid computation of positions, velocities, accelerations, and jerks associated with motion. Real-time computations of these parameters arise in many applications, notably in the areas of robotics, image-processing, remote signal processing, and computer-controlled machines. The systolic arrays proposed in this dissertation can be classified into the linear, the triangular, and the mesh connected types. In the linear category, six different systolic designs are presented. The relative merits of these designs are discussed in detail. It is found from the analysis of these designs that each of these arrays achieves a proportional increase in time. Also, by interleaving the input data items in some of these designs, the throughput rate is further doubled. This also increases the processor utilization rate to 100%. The triangular type systolic array is found to be useful when all three parameters are to be computed simultaneously, and the mesh type, when the number of signals to be processed is extremely large. The effect of direct broadcasting of data to the processing cells is also investigated. Finally, the utility of the proposed systolic arrays is illustrated by a practical design example.

  9. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and 10-m SPOT panchromatic imagery illustrate the reduction in artifacts due to the SIDWT-based fusion.
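
    The following sketch illustrates the point-wise maximum selection rule mentioned above in the simplest possible setting: a single-level 2-D DWT (PyWavelets) rather than the shift-invariant SIDWT used in the study. The band names, the choice of wavelet, and the decision to keep the multispectral approximation coefficients are assumptions made only for illustration.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def fuse_band(ms_band, pan, wavelet='db2'):
        """Fuse one co-registered, same-size multispectral band with a higher-resolution
        panchromatic image using a single-level 2-D DWT and a point-wise
        maximum-magnitude rule on the detail coefficients. (The paper uses a
        shift-invariant transform; the plain DWT here is a simplification.)"""
        cA_m, (cH_m, cV_m, cD_m) = pywt.dwt2(ms_band, wavelet)
        cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pan, wavelet)

        def pick(a, b):
            # keep whichever coefficient has the larger magnitude
            return np.where(np.abs(a) >= np.abs(b), a, b)

        fused = (cA_m,  # keep the multispectral approximation to preserve spectral content
                 (pick(cH_m, cH_p), pick(cV_m, cV_p), pick(cD_m, cD_p)))
        return pywt.idwt2(fused, wavelet)
    ```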

  10. Adaptive passive fathometer processing using ambient noise received by vertical nested array

    NASA Astrophysics Data System (ADS)

    Kim, Junghun; Cho, Sungho; Choi, Jee Woong

    2015-07-01

    A passive fathometer technique utilizes surface-generated ambient noise received by a vertical line array as a sound source to estimate the depths of the water-sediment interface and sub-bottom layers. Ambient noise was measured using a 24-channel vertical nested line array consisting of four sub-arrays, in shallow water off the eastern coast of Korea. In this paper, nested array processing is applied to the passive fathometer technique to improve its performance. Passive fathometer processing is performed for each sub-array, and the results are then combined to form a passive fathometer output for broadband ambient noise. Three types of beamforming technique, including the conventional method and two adaptive methods, are used in passive fathometer processing. The results are compared to the depths of the water-sediment interface measured by an echo sounder. It is found that the adaptive methods perform better than the conventional method.
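
    The sketch below gives a heavily simplified, conventional (delay-and-sum) version of the passive fathometer idea: steer one beam up and one beam down along the vertical array, then cross-correlate the two beams so that peaks appear at the two-way travel times of seabed and sub-bottom reflections. The channel ordering, sign conventions, and parameter names are assumptions; the adaptive beamformers discussed in the paper are not reproduced.

    ```python
    import numpy as np

    def passive_fathometer(noise, dt, spacing, c=1500.0):
        """Minimal conventional passive-fathometer sketch.
        noise:   (n_channels, n_samples) ambient-noise time series
        dt:      sample interval [s], spacing: element spacing [m], c: sound speed [m/s]
        Returns a correlation trace whose index k corresponds to a two-way delay of k*dt."""
        n_ch, n_samp = noise.shape
        delay = spacing / c / dt                     # per-element steering delay in samples
        freqs = np.fft.rfftfreq(n_samp, d=1.0)       # cycles per sample
        spec = np.fft.rfft(noise, axis=1)
        idx = np.arange(n_ch)[:, None]
        up = np.sum(spec * np.exp(+2j * np.pi * freqs * delay * idx), axis=0)    # endfire up
        down = np.sum(spec * np.exp(-2j * np.pi * freqs * delay * idx), axis=0)  # endfire down
        return np.fft.irfft(up * np.conj(down), n=n_samp)
    ```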

  11. Signal processing and compensation electronics for junction field-effect transistor /JFET/ focal plane arrays

    NASA Astrophysics Data System (ADS)

    Wittig, K. R.

    1982-06-01

    A signal processing system has been designed and constructed for a pyroelectric infrared area detector which uses a matrix-addressable JFET array for readout and for on-focal plane preamplification. The system compensates for all offset and gain nonuniformities in and after the array. Both compensations are performed in real time at standard television rates, so that changes in the response characteristics of the array are automatically corrected for. Two-point compensation is achieved without the need for two separate temperature references. The focal plane circuitry used to read out the array, the offset and gain compensation algorithms, the architecture of the signal processor, and the system hardware are described.
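
    For readers unfamiliar with two-point compensation, the snippet below shows the generic offset-and-gain correction it refers to, computed from a dark (offset) reference and a uniform (gain) reference. It is a minimal sketch only; the system described above performs the equivalent operation per pixel in real time and, notably, without two separate temperature references, which this toy version does assume.

    ```python
    import numpy as np

    def two_point_correction(raw, dark_frame, flat_frame, flat_target=1.0):
        """Classic two-point non-uniformity correction for a detector array.
        dark_frame: per-pixel response with no signal (offset reference)
        flat_frame: per-pixel response to a uniform stimulus (gain reference)
        Returns the offset- and gain-corrected frame."""
        gain = flat_target / np.maximum(flat_frame - dark_frame, 1e-9)
        return (raw - dark_frame) * gain
    ```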

  12. Application of Seismic Array Processing to Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, the Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800

  13. Batch-processed close-track array heads (abstract)

    NASA Astrophysics Data System (ADS)

    Tang, D. D.; Santini, H.; Lee, R. E.; Ju, K.; Krounbi, M.

    1997-04-01

    This article describes novel array heads for close packed track recording. The heads are batch fabricated on wafers in a linear fashion (Fig. 1). These 60-turn thin-film inductive heads are designed with 6 μm pitch helical coils and planar side-by-side P1/G/P2 yoke structures. The linear head array is placed along the upstream-to-downstream direction of the track. By skewing the array slightly off the track direction, each head of the array aligns to an individual track (Fig. 2). In this case, the track pitch is about 5 μm, which is the yoke height. With this head arrangement, even though thermal expansion causes the head-to-head distance to increase along the upstream-downstream direction, it does not cause a thermally induced track misregistration problem. The increased head-to-head distance only affects the timing of signals between tracks, which can be compensated by the channel electronics. Thus, the thermally induced track misregistration problem is eliminated using this design. The guardbands between tracks are not necessary and a close-packed track recording is possible. A state of the art head impedance of the 60-turn head is obtained: 11 Ω and 0.40 μH. The gap-to-gap pitch is 100 μm. The overall head-to-head isolation is greater than 50 dB at 10 MHz. Such a large isolation is realized by suppressing the capacitive coupling between lead wires using a ground plane and grounded wall structures. The tight winding of the helical coils reduces the magnetic coupling between the heads.

  14. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

    The architecture of an autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is described in detail in this paper. In order to achieve greater intelligence and robustness, accurate real-time collection and processing of information are realized by using this technology. The method used to achieve robot vision and multi-sensor fusion is discussed in detail. The results simulated in several operating modes show that this intelligent vehicle performs better in barrier identification and avoidance and in path planning, which can provide higher reliability during vehicle operation.

  15. Visual programming environment for multisensor data fusion

    NASA Astrophysics Data System (ADS)

    Hall, David L.; Kasmala, Gerald

    1996-06-01

    In recent years, numerous multisensor data fusion systems have been developed for a wide variety of applications. Defense related applications include: automatic target recognition systems, identification-friend-foe-neutral, automated situation assessment and threat assessment systems, and systems for smart weapons. Non-defense applications include: robotics, condition-based maintenance, environmental monitoring, and medical diagnostics. For each of these applications, multiple sensor data are combined to achieve inferences which are not generally possible using only a single sensor. Implementation of these data fusion systems often involves a significant amount of effort. In particular, software must be developed for components such as data base access, human computer interfaces and displays, communication software, and data fusion algorithms. While commercial software packages exist to assist development of data bases, communications, and human computer interfaces, there are no general purpose packages available to support the implementation of the data fusion algorithms. This paper describes a visual programming tool developed to assist in rapid prototyping of data fusion systems. This toolkit is modeled after the popular tool, Khoros, used by the image processing community. The tool described here is written in visual C, and provides the capability to rapidly implement and apply data fusion algorithms. An application to condition based maintenance is described.

  16. SAIM: a mobile multisensor image exploitation system

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2000-11-01

    The control of information is an essential part of operations. Technology today allows a near-real-time surveillance capacity over wide areas, thanks to sensor performance and communication networks. The system presented herein has been developed by Thomson-Csf, under contract with the French MOD, to give decision makers the right information with very short delay and to prepare supporting information to aid decisions. The SAIM, the Mobile Multisensor Image Exploitation Ground System, uses near-real-time acquisition units, very large database management, data processing (including fusion and decision-aiding tools), and communication networks. It supports all the steps of exploiting data coming from image sensors, from preparation of the reconnaissance mission to the dissemination of intelligence. The SAIM system is in operation in the French Air Force, and soon in the French Navy and the French Army. Initially defined for the specific use of French Recce sensors, the SAIM is now intended to be widely used for the exploitation of UAV and battlefield MTI and SAR surveillance systems.

  17. Kalman filter-based microphone array signal processing using the equivalent source model

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Chen, Ching-Cheng

    2012-10-01

    This paper demonstrates that microphone array signal processing can be implemented by using adaptive model-based filtering approaches. Nearfield and farfield sound propagation models are formulated into state-space forms in light of the Equivalent Source Method (ESM). In the model, the unknown source amplitudes of the virtual sources are adaptively estimated by using Kalman filters (KFs). The nearfield array aimed at noise source identification is based on a Multiple-Input-Multiple-Output (MIMO) state-space model with minimal realization, whereas the farfield array technique aimed at speech quality enhancement is based on a Single-Input-Multiple-Output (SIMO) state-space model. Performance of the nearfield array is evaluated in terms of relative error of the velocity reconstructed on the actual source surface. Numerical simulations for the nearfield array were conducted with a baffled planar piston source. From the error metric, the proposed KF algorithm proved effective in identifying noise sources. Objective simulations and subjective experiments are undertaken to validate the proposed farfield arrays in comparison with two conventional methods. The results of objective tests indicated that the farfield arrays significantly enhanced the speech quality and word recognition rate. The results of subjective tests post-processed with the analysis of variance (ANOVA) and a post-hoc Fisher's least significant difference (LSD) test have shown great promise in the KF-based microphone array signal processing techniques.
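
    A minimal sketch of the kind of Kalman-filter update involved is given below: a linear filter estimating an equivalent-source amplitude vector q from one microphone snapshot y = G q + noise, under a random-walk state model. The propagation matrix G, the covariances Q and R, and the random-walk assumption are placeholders, not the MIMO/SIMO state-space models formulated in the paper.

    ```python
    import numpy as np

    def kalman_step(q_est, P, y, G, Q, R):
        """One predict/update cycle of a linear Kalman filter estimating the
        equivalent-source amplitude vector q from a microphone snapshot y = G q + noise.
        Random-walk state model (a simplifying assumption): q_k = q_{k-1} + w_k.
        G: (n_mics, n_sources) ESM propagation matrix; Q, R: process/measurement covariances."""
        # predict: state unchanged under the random-walk model, covariance grows
        P_pred = P + Q
        # update with the new snapshot
        S = G @ P_pred @ G.conj().T + R
        K = P_pred @ G.conj().T @ np.linalg.inv(S)
        q_new = q_est + K @ (y - G @ q_est)
        P_new = (np.eye(P.shape[0]) - K @ G) @ P_pred
        return q_new, P_new
    ```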

  18. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at a European longitude, each with 400 12-m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitor and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.

  19. A solar array module fabrication process for HALE solar electric UAVs

    SciTech Connect

    Carey, P.G.; Aceves, R.C.; Colella, N.J.; Thompson, J.B.; Williams, K.A.

    1993-12-01

    We describe a fabrication process to manufacture flexible solar array modules with a high power-to-weight ratio for use on high altitude long endurance (HALE) solar electric unmanned air vehicles (UAVs). A span-loaded flying wing vehicle, known as the RAPTOR Pathfinder, is being employed as a flying test bed to expand the envelope of solar powered flight to high altitudes. It requires multiple lightweight, flexible solar array modules able to endure adverse environmental conditions. At high altitudes the solar UV flux is significantly enhanced relative to sea level, and extreme thermal variations occur. Our process involves first electrically interconnecting solar cells into an array and then laminating them between top and bottom layers to form a solar array module. After careful evaluation of candidate polymers, fluoropolymer materials have been selected as the array laminate layers because of their inherent ability to withstand the hostile conditions imposed by the environment.

  20. Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine Nagib

    1994-01-01

    We address various issues dealing with the use of disk arrays in transaction processing environments. We look at the problem of transaction undo recovery and propose a scheme for using the redundancy in disk arrays to support undo recovery. The scheme uses twin page storage for the parity information in the array. It speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters as well as temporary site failures and disk crashes is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.
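
    As background for the parity-based redundancy the thesis builds on, the sketch below shows ordinary RAID-style parity maintenance: stripe parity is the bytewise XOR of the data blocks, and a small write updates parity from the old and new data. The twin-page aspect is only hinted at in a comment; block contents and function names are illustrative assumptions, not the thesis's scheme.

    ```python
    from functools import reduce

    def parity_block(data_blocks):
        """RAID-style parity: bytewise XOR of the equal-length data blocks in a stripe."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_blocks)

    def update_parity(old_parity, old_block, new_block):
        """Small-write parity update: new parity = old parity XOR old data XOR new data.
        Under the twin-page idea described above, the new parity would be written to the
        alternate of two parity pages, so the old version remains available for undo."""
        return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_block, new_block))
    ```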

  1. Design, processing and testing of LSI arrays, hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.; Rothrock, C. W.

    1979-01-01

    Mathematical cost models previously developed for hybrid microelectronic subsystems were refined and expanded. Rework terms related to substrate fabrication, nonrecurring developmental and manufacturing operations, and prototype production are included. Sample computer programs were written to demonstrate hybrid microelectronic applications of these cost models. Computer programs were generated to calculate and analyze values for the total microelectronics costs. Large-scale integrated (LSI) chips utilizing tape chip carrier technology were studied. The feasibility of interconnecting arrays of LSI chips utilizing tape chip carrier and semiautomatic wire bonding technology was demonstrated.

  2. Integrated multisensor perimeter detection systems

    NASA Astrophysics Data System (ADS)

    Kent, P. J.; Fretwell, P.; Barrett, D. J.; Faulkner, D. A.

    2007-10-01

    The report describes the results of a multi-year programme of research aimed at the development of an integrated multi-sensor perimeter detection system capable of being deployed at an operational site. The research was driven by end user requirements in protective security, particularly in threat detection and assessment, where effective capability was either not available or prohibitively expensive. Novel video analytics have been designed to provide robust detection of pedestrians in clutter while new radar detection and tracking algorithms provide wide area day/night surveillance. A modular integrated architecture based on commercially available components has been developed. A graphical user interface allows intuitive interaction and visualisation with the sensors. The fusion of video, radar and other sensor data provides the basis of a threat detection capability for real life conditions. The system was designed to be modular and extendable in order to accommodate future and legacy surveillance sensors. The current sensor mix includes stereoscopic video cameras, mmWave ground movement radar, CCTV and a commercially available perimeter detection cable. The paper outlines the development of the system and describes the lessons learnt after deployment in a pilot trial.

  3. Radar imaging and high-resolution array processing applied to a classical VHF-ST profiler

    NASA Astrophysics Data System (ADS)

    Hélal, D.; Crochet, M.; Luce, H.; Spano, E.

    2001-01-01

    Among the spaced antenna methods used in the field of atmospheric studies, radar interferometry has been of great interest to many authors. A first approach is to use the phase information contained in the cross-spectra between antenna output signals and to retrieve the direction of arrival (DOA) of discrete scatterers. The second one introduces a phase shift between the antenna signals in order to steer the main beam of the antenna towards a desired direction. This paper deals with the latter technique and presents a variant of postset beam steering (PBS) which does not require a multi-receiver system. Indeed, the data samples are taken alternately on each antenna by means of high-commutation-rate switches inserted before a unique receiver. This low-cost technique is called "sequential PBS" (SPBS) and has been implemented on two classical VHF-ST radars. The present paper shows that the high flexibility of SPBS in angular scanning makes it possible to perform radar imaging. Despite a limited maximum range due to the antennas' scanning, the collected data give a view of the boundary layer and the lower troposphere over a wide horizontal extent, with characteristic horizontally stratified structures in the lower troposphere. These structures are also detected by application of high-resolution imaging processing such as Capon's beamforming or the Multiple Signal Classification algorithm. The proposed method can be a simple way to enhance the versatility of classical DBS radars in order to extend them for multi-sensor applications and local meteorology.

  4. ARC - A source of multisensor satellite data for polar science

    NASA Technical Reports Server (NTRS)

    Van Woert, Michael L.; Whritner, Robert H.; Waliser, Duane E.; Bromwich, David H.; Comiso, J. C.

    1992-01-01

    The NSF's Antarctic Research Center (ARC) has been established to furnish real-time polar-orbiting satellite data in support of Antarctic field studies, as well as to maintain a multisensor satellite data (MSD) archive for retrospective data analysis. An account is presently given of the ways in which the complementary nature of an MSD set can deepen understanding of Antarctic physical processes. An active microwave SAR with 30-m resolution and a radar altimeter will be added to the ARC resources later in this decade, as will the Earth Observing System.

  5. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  6. Adaptive learning of Multi-Sensor Integration techniques with genetic algorithms

    SciTech Connect

    Baker, J.E.

    1994-06-01

    This research focuses on automating the time-consuming process of developing and optimizing multi-sensor integration techniques. Our approach is currently based on adaptively learning how to exploit low-level image detail. Although this system is specifically designed to be both sensor and application domain independent, an empirical validation with actual multi-modal sensor data is presented.

  7. View and sensor planning for multi-sensor surface inspection

    NASA Astrophysics Data System (ADS)

    Gronle, Marc; Osten, Wolfgang

    2016-06-01

    Modern manufacturing processes enable the fabrication of high-value parts with high precision and performance. At the same time, the demand for flexible on-demand production of individual objects is continuously increasing. These requirements can only be met if inspection systems provide appropriate answers. One solution is the use of flexible, multi-sensor setups where multiple optical sensors with different fields of application are combined in one system. However, the challenge is then to assist the user in planning the inspection for individual parts. Manual planning requires expert knowledge of the performance and functionality of every sensor. Therefore, software assistant systems help the user to objectively select the right sensors for a given inspection task. The planning step becomes still more difficult if the manufactured part has a complex form. The implication is that a sensor's position must also be part of the planning process since it significantly influences the quality of the inspection. This paper describes a view and sensor planning approach for a multi-sensor surface inspection system in the context of optical topography measurements in the micro- and meso-scale range. In order to realize online processing of the assistant system, a significant part of the calculations are done on the graphics processing unit (GPU).

  8. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly to 1/3 of that of the conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  9. Airborne Multisensor Pod System (AMPS) data management overview

    SciTech Connect

    Wiberg, J.D.; Blough, D.K.; Daugherty, W.R.; Hucks, J.A.; Gerhardstein, L.H.; Meitzler, W.D.; Melton, R.B.; Shoemaker, S.V.

    1994-09-01

    An overview of the Data Management Plan for the Airborne Multisensor Pod System (AMPS) program is provided in this document. The Pacific Northwest Laboratory (PNL) has been assigned the responsibility of data management for the program, which includes defining procedures for data management and data quality assessment. Data management is defined as the process of planning, acquiring, organizing, qualifying and disseminating data. The AMPS program was established by the U.S. Department of Energy (DOE), Office of Arms Control and Non-Proliferation (DOE/AN) and is integrated into the overall DOE AN-10.1 technology development program. Sensors used for collecting the data were developed under the on-site inspection, effluence analysis, and standoff sensor programs; the AMPS program interacts with other technology programs of DOE/NN-20. This research will be conducted by both government and private industry. AMPS is a research and development program, and it is not intended for operational deployment, although the sensors and techniques developed could be used in follow-on operational systems. For a complete description of the AMPS program, see "Airborne Multisensor Pod System (AMPS) Program Plan". The primary purpose of the AMPS is to collect high-quality multisensor data to be used in data fusion research, to reduce interpretation problems associated with data overload, and to derive better information than can be derived from any single sensor. To collect the data for the program, three wing-mounted pods containing instruments with sensors for collecting data will be flight certified on a U.S. Navy RP-3A aircraft. Secondary objectives of the AMPS program are sensor development and technology demonstration. Pod system integrators and instrument developers will be interested in the performance of their deployed sensors and their supporting data acquisition equipment.

  10. Signal and array processing techniques for RFID readers

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

    Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement directions of tagged RFID items through a portal are of interest and must be determined. Doppler estimation may prove complicated or impractical to perform by RFID readers. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. In particular, we examine the use of a pair of RFID antennas to track moving RFID tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way of identifying the direction of tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments were conducted in the Radar Imaging and RFID Labs at Villanova University to validate the proposed technique for target movement direction estimation.
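
    The following toy sketch illustrates the two-antenna idea described above: the spatial cross-correlation of the complex baseband samples at the two reader antennas yields a phase difference, which under the far-field approximation maps to a single arrival angle whose evolution over time indicates the direction of tag movement. The signal model, geometry, and names are invented for illustration and ignore the antenna radiation-pattern effects the paper highlights.

    ```python
    import numpy as np

    def movement_angle(x1, x2, wavelength, spacing):
        """Estimate the (far-field-approximated) arrival angle of a tag's backscatter
        from the spatial cross-correlation of the complex baseband samples at two antennas.
        x1, x2: equal-length complex sample vectors from the two reader antennas.
        Returns the angle in degrees; tracking it over time gives the movement direction."""
        r12 = np.vdot(x1, x2) / len(x1)              # spatial cross-correlation
        phase = np.angle(r12)                        # electrical phase difference
        sin_theta = phase * wavelength / (2 * np.pi * spacing)
        return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))
    ```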

  11. Programmable hyperspectral image mapper with on-array processing

    NASA Technical Reports Server (NTRS)

    Cutts, James A. (Inventor)

    1995-01-01

    A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction, an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time, an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time and an electronic controller connected to the focal plane, to the electronic shutter and to the electronic spectral filter for controlling (1) the controllable rate at which the recording pixels are transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band so as to record a selected portion of the scene through M spectral bands with a respective exposure time t(sub q) for each respective spectral band q.

  12. Model-based Processing of Microcantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Sinensky, A K; Lee, C L; Rudd, R E; Burnham, A K

    2005-04-27

    We have developed a model-based processor (MBP) for a microcantilever-array sensor to detect target species in solution. We perform a proof-of-concept experiment, fit model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest, averaged deflection data and multi-channel data. For this evaluation we extract model parameters via a model-based estimation, perform a Gauss-Markov simulation, design the optimal MBP and apply it to measured experimental data. The performance of the MBP in the multi-channel case is evaluated by comparison to a "smoother" (averager) typically used for microcantilever signal analysis. It is shown that the MBP not only provides a significant gain (~80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, apart from a correctable systematic bias error.
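
    To make the Gauss-Markov terminology concrete, the snippet below simulates a scalar Gauss-Markov deflection state observed in noise, the kind of synthetic data such a model-based processor is designed against. All parameters are illustrative assumptions; the actual microcantilever model, the optimal MBP design, and the smoother comparison are not reproduced here.

    ```python
    import numpy as np

    def gauss_markov_sim(n_steps, a=0.98, q=1e-4, r=1e-2, rng=None):
        """Simulate a scalar Gauss-Markov state x_k = a x_{k-1} + w_k with noisy
        measurements y_k = x_k + v_k, as a stand-in for a microcantilever deflection
        model (the coefficients a, q, r are illustrative, not fitted values)."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.zeros(n_steps)
        y = np.zeros(n_steps)
        for k in range(1, n_steps):
            x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))   # process noise w_k
            y[k] = x[k] + rng.normal(0.0, np.sqrt(r))           # measurement noise v_k
        return x, y
    ```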

  13. A basic experimental study of ultrasonic assisted hot embossing process for rapid fabrication of microlens arrays

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Yuan; Yu, Che-Hao

    2015-02-01

    This paper reports a highly effective technique for rapid fabrication of microlens arrays based on an ultrasonic assisted hot embossing process. In this method, a thin stainless steel mold with micro-holes array is fabricated by a photolithography and wet etching process. Then, the thin stainless steel mold with micro-holes array is placed on top of a plastic substrate (PMMA plate) and the stack is placed in an ultrasonic vibration embossing machine. During ultrasonic assisted hot embossing operation, the surface of the stainless steel mold with micro-holes array presses against the thermoplastic PMMA substrate. Under proper ultrasonic vibration time, embossing pressure and hold time, the softened polymer will just partially fill the circular holes and due to surface tension, form a convex lens surface. After the stainless steel mold is removed, the microlens array patterns on the surface of plastic substrate can be obtained. The total cycle time is less than 10 s. Finally, geometrical and optical properties of the fabricated plastic microlens arrays were measured and proved satisfactory. This technique shows great potential for fabricating microlens array on plastic substrates with high productivity and low cost.

  14. Site Specific Evaluation of Multisensor Capacitance Probes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Multisensor capacitance probes (MCPs) are widely used for measuring soil water content (SWC) at the field scale. Although manufacturers supply a generic MCP calibration, many researchers recognize that MCPs should be calibrated for specific field conditions. MCP measurements are typically associa...

  15. Hybridization process for back-illuminated silicon Geiger-mode avalanche photodiode arrays

    NASA Astrophysics Data System (ADS)

    Schuette, Daniel R.; Westhoff, Richard C.; Loomis, Andrew H.; Young, Douglas J.; Ciampi, Joseph S.; Aull, Brian F.; Reich, Robert K.

    2010-04-01

    We present a unique hybridization process that permits high-performance back-illuminated silicon Geiger-mode avalanche photodiodes (GM-APDs) to be bonded to custom CMOS readout integrated circuits (ROICs) - a hybridization approach that enables independent optimization of the GM-APD arrays and the ROICs. The process includes oxide bonding of silicon GM-APD arrays to a transparent support substrate followed by indium bump bonding of this layer to a signal-processing ROIC. This hybrid detector approach can be used to fabricate imagers with high-fill-factor pixels and enhanced quantum efficiency in the near infrared as well as large-pixel-count, small-pixel-pitch arrays with pixel-level signal processing. In addition, the oxide bonding is compatible with high-temperature processing steps that can be used to lower dark current and improve optical response in the ultraviolet.

  16. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2013-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems involve data-intensive computing, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will
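
    The shard-by-time map/reduce pattern described above can be illustrated with a toy example using Python's standard multiprocessing pool: each "map" task reduces one time shard of granule arrays to partial sums, and a final "reduce" combines them into a multi-year mean. This is not SciReduce; the file names, the use of .npy files in place of netCDF4/OPeNDAP reads, and the statistic computed are all assumptions made for illustration.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def map_year(year_files):
        """'Map' step: reduce one shard (a year of granule arrays) to partial sums."""
        total, count = 0.0, 0
        for f in year_files:
            arr = np.load(f)                    # stand-in for an OPeNDAP / netCDF4 read
            total += np.nansum(arr)
            count += np.isfinite(arr).sum()
        return total, count

    def reduce_all(partials):
        """'Reduce' step: combine per-shard partial sums into a multi-year mean."""
        totals, counts = zip(*partials)
        return sum(totals) / max(sum(counts), 1)

    if __name__ == "__main__":
        # hypothetical shards, one list of granule files per year
        shards = [["y2003_01.npy", "y2003_02.npy"], ["y2004_01.npy"]]
        with Pool() as pool:
            print(reduce_all(pool.map(map_year, shards)))
    ```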

  17. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems involve data-intensive computing, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  18. Multisensor fusion for 3-D defect characterization using wavelet basis function neural networks

    NASA Astrophysics Data System (ADS)

    Lim, Jaein; Udpa, Satish S.; Udpa, Lalita; Afzal, Muhammad

    2001-04-01

    The primary advantage of multi-sensor data fusion, which offers both quantitative and qualitative benefits, is the ability to draw inferences that may not be feasible with data from a single sensor alone. In this paper, data from two sets of sensors are fused to estimate the defect profile from magnetic flux leakage (MFL) inspection data. The two sensors measure the axial and circumferential components of the MFL. Data is fused at the signal level. If the flux is oriented axially, the samples of the axial signal are measured along a direction parallel to the flaw, while the circumferential signal is measured in a direction that is perpendicular to the flaw. The two signals are combined as the real and imaginary components of a complex valued signal. Signals from an array of sensors are arranged in contiguous rows to obtain a complex valued image. A boundary extraction algorithm is used to extract the defect areas in the image. Signals from the defect regions are then processed to minimize noise and the effects of lift-off. Finally, a wavelet basis function (WBF) neural network is employed to map the complex valued image appropriately to obtain the geometrical profile of the defect. The feasibility of the approach was evaluated using the data obtained from the MFL inspection of natural gas transmission pipelines. Results show the effectiveness of the approach.
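
    The signal-level fusion step described above can be sketched in a few lines: the axial component becomes the real part and the circumferential component the imaginary part of a complex-valued image, from which a defect region is cropped before further processing. The magnitude-threshold boundary extraction below is a crude stand-in for the paper's algorithm, and the wavelet basis function network itself is not shown; array names and the threshold fraction are illustrative.

    ```python
    import numpy as np

    def build_complex_mfl_image(axial, circumferential):
        """Fuse the two MFL components at the signal level: rows are the array sensors,
        columns are the scan samples; the result is a complex-valued image."""
        assert axial.shape == circumferential.shape
        return axial + 1j * circumferential

    def extract_defect_region(img, frac=0.5):
        """Very crude boundary extraction: keep the bounding box of pixels whose
        magnitude exceeds a fraction of the peak (a stand-in for the paper's
        boundary-extraction algorithm)."""
        mask = np.abs(img) > frac * np.abs(img).max()
        rows, cols = np.where(mask)
        return img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    ```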

  19. An array microscope for ultrarapid virtual slide processing and telepathology. Design, fabrication, and validation study.

    PubMed

    Weinstein, Ronald S; Descour, Michael R; Liang, Chen; Barker, Gail; Scott, Katherine M; Richter, Lynne; Krupinski, Elizabeth A; Bhattacharyya, Achyut K; Davis, John R; Graham, Anna R; Rennels, Margaret; Russum, William C; Goodall, James F; Zhou, Pixuan; Olszak, Artur G; Williams, Bruce H; Wyant, James C; Bartels, Peter H

    2004-11-01

    This paper describes the design and fabrication of a novel array microscope for the first ultrarapid virtual slide processor (DMetrix DX-40 digital slide scanner). The array microscope optics consists of a stack of three 80-element 10 x 8-lenslet arrays, constituting a "lenslet array ensemble." The lenslet array ensemble is positioned over a glass slide. Uniquely shaped lenses in each of the lenslet arrays, arranged perpendicular to the glass slide constitute a single "miniaturized microscope." A high-pixel-density image sensor is attached to the top of the lenslet array ensemble. In operation, the lenslet array ensemble is transported by a motorized mechanism relative to the long axis of a glass slide. Each of the 80 miniaturized microscopes has a lateral field of view of 250 microns. The microscopes of each row of the array are offset from the microscopes in other rows. Scanning a glass slide with the array microscope produces seamless two-dimensional image data of the entire slide, that is, a virtual slide. The optical system has a numerical aperture of N.A.= 0.65, scans slides at a rate of 3 mm per second, and accrues up to 3,000 images per second from each of the 80 miniaturized microscopes. In the ultrarapid virtual slide processing cycle, the time for image acquisition takes 58 seconds for a 2.25 cm2 tissue section. An automatic slide loader enables the scanner to process up to 40 slides per hour without operator intervention. Slide scanning and image processing are done concurrently so that post-scan processing is eliminated. A virtual slide can be viewed over the Internet immediately after the scanning is complete. A validation study compared the diagnostic accuracy of pathologist case readers using array microscopy (with images viewed as virtual slides) and conventional light microscopy. Four senior pathologists diagnosed 30 breast surgical pathology cases each using both imaging modes, but on separate occasions. Of 120 case reads by array microscopy

  20. Electro-optical processing of phased array data

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1973-01-01

    An on-line spatial light modulator for application as the input transducer for a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with radar data, with the input data returns recorded on the input crystal, processed by the optical system, and the output plane pattern digitized, thresholded, and output to a display and to storage in the computer memory. The correlation of theoretical and experimental results is discussed.

  1. Mathematical Modeling of a Solar Arrays Deploying Process at Ground Tests

    NASA Astrophysics Data System (ADS)

    Tomilin, A.; Shpyakin, I.

    2016-04-01

    This paper focuses on the creation of a mathematical model of the solar array deploying process during ground tests. The Lagrange equation was used to obtain the model. The distinctive feature of this mathematical model is that it takes into account the influence of the gravity compensation system on the structure during deployment, as well as the aerodynamic resistance during ground tests.

  2. Asymmetric magnetization reversal process in Co nanohill arrays

    SciTech Connect

    Rosa, W. O.; Martinez, L.; Jaafar, M.; Asenjo, A.; Vazquez, M.

    2009-11-15

    Co thin films deposited by sputtering onto nanostructured polymer [poly(methyl methacrylate)] were prepared following replica-antireplica process based on porous alumina membrane. In addition, different capping layers were deposited onto Co nanohills. Morphological and compositional analysis was performed by atomic force microscopy and x-ray photoemission spectroscopy techniques to obtain information about the surface characteristics. The observed asymmetry in the magnetization reversal process at low temperatures is ascribed to the exchange bias generated by the ferromagnetic-antiferromagnetic interface promoted by the presence of Co oxide detected in all the samples. Especially relevant is the case of the Cr capping, where an enhanced magnetic anisotropy in the Co/Cr interface is deduced.

  3. Calibrating a novel multi-sensor physical activity measurement system

    PubMed Central

    John, D; Liu, S; Sasaki, J E; Howe, C A; Staudenmayer, J; Gao, R X; Freedson, P S

    2011-01-01

    Advancing the field of physical activity (PA) monitoring requires the development of innovative multi-sensor measurement systems that are feasible in the free-living environment. The use of novel analytical techniques to combine and process these multiple sensor signals is equally important. This paper describes a novel multi-sensor ‘Integrated PA Measurement System’ (IMS), the lab-based methodology used to calibrate the IMS, techniques used to predict multiple variables from the sensor signals, and proposes design changes to improve the feasibility of deploying the IMS in the free-living environment. The IMS consists of hip and wrist acceleration sensors, two piezoelectric respiration sensors on the torso, and an ultraviolet radiation sensor to obtain contextual information (indoors vs. outdoors) of PA. During lab-based calibration of the IMS, data were collected on participants performing a PA routine consisting of seven different ambulatory and free-living activities while wearing a portable metabolic unit (criterion measure) and the IMS. Data analyses on the first 50 adult participants are presented. These analyses were used to determine if the IMS can be used to predict the variables of interest. Finally, physical modifications for the IMS that could enhance feasibility of free-living use are proposed and refinement of the prediction techniques is discussed. PMID:21813941

  4. Design, processing and testing of LSI arrays: Hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.

    1979-01-01

    Mathematical cost factors were generated for both hybrid microcircuit and printed wiring board packaging methods. A mathematical cost model was created for analysis of microcircuit fabrication costs. The costing factors were refined and reduced to formulae for computerization. Efficient methods were investigated for low cost packaging of LSI devices as a function of density and reliability. Technical problem areas such as wafer bumping, inner/outer lead bonding, testing on tape, and tape processing were investigated.

  5. Array Processing for Radar Clutter Reduction and Imaging of Ice-Bed Interface

    NASA Astrophysics Data System (ADS)

    Gogineni, P.; Leuschen, C.; Li, J.; Hoch, A.; Rodriguez-Morales, F.; Ledford, J.; Jezek, K.

    2007-12-01

    A major challenge in sounding of fast-flowing glaciers in Greenland and Antarctica is surface clutter, which masks weak returns from the ice-bed interface. The surface clutter is also a major problem in sounding and imaging sub-surface interfaces on Mars and other planets. We successfully applied array-processing techniques to reduce clutter and image ice-bed interfaces of polar ice sheets. These techniques and tools have potential applications to planetary observations. We developed a radar with array-processing capability to measure thickness of fast-flowing outlet glaciers and image the ice-bed interface. The radar operates over the frequency range from 140 to 160 MHz with about an 800- Watt peak transmit power with transmit and receive antenna arrays. The radar is designed such that pulse width and duration are programmable. The transmit-antenna array is fed with a beamshaping network to obtain low sidelobes. We designed the receiver such that it can process and digitize signals for each element of an eight- channel array. We collected data over several fast-flowing glaciers using a five-element antenna array, limited by available hardpoints to mount antennas, on a Twin Otter aircraft during the 2006 field season and a four-element array on a NASA P-3 aircraft during the 2007 field season. We used both adaptive and non-adaptive signal-processing algorithms to reduce clutter. We collected data over the Jacobshavn Isbrae and other fast-flowing outlet glaciers, and successfully measured the ice thickness and imaged the ice-bed interface. In this paper, we will provide a brief description of the radar, discuss clutter-reduction algorithms, present sample results, and discuss the application of these techniques to planetary observations.

  6. Design, processing, and testing of LSI arrays for space station

    NASA Technical Reports Server (NTRS)

    Lile, W. R.; Hollingsworth, R. J.

    1972-01-01

    The design of a MOS 256-bit Random Access Memory (RAM) is discussed. Technological achievements comprise computer simulations that accurately predict performance; aluminum-gate COS/MOS devices including a 256-bit RAM with current sensing; and a silicon-gate process that is being used in the construction of a 256-bit RAM with voltage sensing. The Si-gate process increases speed by reducing the overlap capacitance between gate and source-drain, thus reducing the crossover capacitance and allowing shorter interconnections. The design of a Si-gate RAM, which is pin-for-pin compatible with an RCA bulk silicon COS/MOS memory (type TA 5974), is discussed in full. The Integrated Circuit Tester (ICT) is limited to dc evaluation, but the diagnostics and data collecting are under computer control. The Silicon-on-Sapphire Memory Evaluator (SOS-ME, previously called SOS Memory Exerciser) measures power supply drain and performs a minimum number of tests to establish operation of the memory devices. The Macrodata MD-100 is a microprogrammable tester which has capabilities of extensive testing at speeds up to 5 MHz. Beam-lead technology was successfully integrated with SOS technology to make a simple device with beam leads. This device and the scribing are discussed.

  7. Two step process for the fabrication of diffraction limited concave microlens arrays.

    PubMed

    Ruffieux, Patrick; Scharf, Toralf; Philipoussis, Irène; Herzig, Hans Peter; Voelkel, Reinhard; Weible, Kenneth J

    2008-11-24

    A two step process has been developed for the fabrication of diffraction limited concave microlens arrays. The process is based on the photoresist filling of melted holes obtained by a preliminary photolithography step. The quality of these microlenses has been tested in a Mach-Zehnder interferometer. The method allows the fabrication of concave microlens arrays with diffraction limited optical performance. Concave microlenses with diameters ranging between 30 microm to 230 microm and numerical apertures up to 0.25 have been demonstrated. As an example, we present the realization of diffusers obtained with random sizes and locations of concave shapes. PMID:19030040

  8. Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge

    1999-01-01

    A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.

  9. Adaptive multisensor fusion for planetary exploration rovers

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri

    1992-01-01

    The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the needs of perception for space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the designed perception system should automatically select the best subset of sensors and their sensing modalities that will allow the perception and interpretation of the environment. Then, based on reflectance and emittance theoretical models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design, and first results of the multisensor perception system are presented.

  10. DAMAS Processing for a Phased Array Study in the NASA Langley Jet Noise Laboratory

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.; Plassman, Gerald E.

    2010-01-01

    A jet noise measurement study was conducted using a phased microphone array system for a range of jet nozzle configurations and flow conditions. The test effort included convergent and convergent/divergent single flow nozzles, as well as conventional and chevron dual-flow core and fan configurations. Cold jets were tested with and without wind tunnel co-flow, whereas hot jets were tested only with co-flow. The intent of the measurement effort was to allow evaluation of new phased array technologies for their ability to separate and quantify distributions of jet noise sources. In the present paper, the array post-processing method focused upon is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) for the quantitative determination of spatial distributions of noise sources. Jet noise is highly complex with stationary and convecting noise sources, convecting flows that are the sources themselves, and shock-related and screech noise for supersonic flow. The analysis presented in this paper addresses some processing details with DAMAS, for the array positioned at 90° (normal) to the jet. The paper demonstrates the applicability of DAMAS and how it indicates when strong coherence is present. Also, a new approach to calibrating the array focus and position is introduced and demonstrated.
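
    As a rough illustration of the DAMAS step described above, the sketch below runs a non-negative Gauss-Seidel-style deconvolution of a conventional beamform map against a point-spread-function matrix; the function name, grid size, and iteration count are illustrative assumptions, not the NASA processing code.

        import numpy as np

        def damas_deconvolve(Y, A, n_iter=100):
            """Iteratively solve Y ~= A @ X for non-negative source strengths X.

            Y : (N,) conventional beamforming map over N grid points
            A : (N, N) point-spread-function matrix (beamformer response of a
                unit source at grid point m evaluated at grid point n)
            """
            N = Y.size
            X = np.zeros(N)
            for _ in range(n_iter):
                for n in range(N):          # Gauss-Seidel style sweep over grid points
                    r = Y[n] - A[n, :].dot(X) + A[n, n] * X[n]
                    X[n] = max(0.0, r / A[n, n])
            return X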

  11. Effects of process parameters on the molding quality of the micro-needle array

    NASA Astrophysics Data System (ADS)

    Qiu, Z. J.; Ma, Z.; Gao, S.

    2016-07-01

    The micro-needle array, which is used in medical applications, is a typical injection-molded product with microstructures. Due to its tiny micro-feature size and high aspect ratios, it is prone to short-shot defects, leading to poor molding quality. The injection molding process of the micro-needle array was studied in this paper to find the effects of the process parameters on the molding quality of the micro-needle array and to provide theoretical guidance for the practical production of high-quality products. With the shrinkage ratio and warpage of micro needles as the evaluation indices of molding quality, an orthogonal experiment was conducted and an analysis of variance was carried out. From the results, the contribution rates were calculated to determine the influence of the various process parameters on molding quality. The single-parameter method was used to analyse the main process parameter. It was found that the contribution rates of the holding pressure to shrinkage ratio and warpage reached 83.55% and 94.71% respectively, far higher than those of the other parameters. The study revealed that the holding pressure is the main factor affecting the molding quality of the micro-needle array and should therefore be the focus of control in order to obtain high-quality plastic parts in practical production.
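
    The contribution-rate calculation mentioned above is essentially the ratio of each factor's sum of squares to the total sum of squares from the orthogonal experiment. A minimal sketch, assuming integer-coded factor levels and a single quality index per run (all names and shapes are illustrative):

        import numpy as np

        def contribution_rates(levels, response):
            """levels: (runs, factors) integer level codes from the orthogonal table;
            response: (runs,) quality index (e.g. shrinkage ratio or warpage)."""
            runs, factors = levels.shape
            grand_mean = response.mean()
            ss_total = ((response - grand_mean) ** 2).sum()
            rates = []
            for f in range(factors):
                ss_f = 0.0
                for lv in np.unique(levels[:, f]):
                    sel = response[levels[:, f] == lv]
                    ss_f += sel.size * (sel.mean() - grand_mean) ** 2
                rates.append(ss_f / ss_total)
            # a dominant factor (e.g. holding pressure) shows up as the largest rate
            return np.array(rates)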

  12. An Undergraduate Course and Laboratory in Digital Signal Processing with Field Programmable Gate Arrays

    ERIC Educational Resources Information Center

    Meyer-Base, U.; Vera, A.; Meyer-Base, A.; Pattichis, M. S.; Perry, R. J.

    2010-01-01

    In this paper, an innovative educational approach to introducing undergraduates to both digital signal processing (DSP) and field programmable gate array (FPGA)-based design in a one-semester course and laboratory is described. While both DSP and FPGA-based courses are currently present in different curricula, this integrated approach reduces the…

  13. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  14. Systolic arrays for binary image processing by using Boolean differential operators

    NASA Astrophysics Data System (ADS)

    Shmerko, V. P.; Yanushkevich, S. N.; Kochergov, E. G.

    1993-11-01

    A matrix form of the Boolean differential temporal (parametric) operators is proposed. Procedures for preliminary binary image processing (logic filtering, contour detection) are constructed on this basis. This representation of the operators allows algorithms to be synthesized that map directly onto a systolic array architecture.
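
    A generic illustration of contour detection with Boolean (XOR-based) difference operators on a binary image is sketched below; it shows the flavour of the operations only, not the authors' matrix formulation or its systolic mapping.

        import numpy as np

        def boolean_contour(img):
            """img: 2-D boolean array. A pixel is marked as contour if it is set and
            differs from a neighbour along x or y (Boolean difference via XOR).
            Note: np.roll wraps at the borders, which is acceptable for a sketch."""
            dx = img ^ np.roll(img, 1, axis=1)   # Boolean difference along x
            dy = img ^ np.roll(img, 1, axis=0)   # Boolean difference along y
            return img & (dx | dy | np.roll(dx, -1, axis=1) | np.roll(dy, -1, axis=0))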

  15. Assembly, integration, and verification (AIV) in ALMA: series processing of array elements

    NASA Astrophysics Data System (ADS)

    Lopez, Bernhard; Jager, Rieks; Whyborn, Nicholas D.; Knee, Lewis B. G.; McMullin, Joseph P.

    2012-09-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and East Asia, in collaboration with the Republic of Chile. ALMA will consist of at least 54 twelve-meter antennas and 12 seven-meter antennas operating as an aperture synthesis array in the (sub)millimeter wavelength range. It is the responsibility of ALMA AIV to deliver the fully assembled, integrated, and verified antennas (array elements) to the telescope array. After an initial phase of infrastructure setup, AIV activities began when the first ALMA antenna and subsystems became available in mid-2008. During the second semester of 2009, a project-wide effort was made to put into operation a first 3-antenna interferometer at the Array Operations Site (AOS). In 2010 the AIV focus was the transition from event-driven activities towards routine series production. Also, due to the ramp-up of operations activities, AIV underwent an organizational change from an autonomous department into a project within a strong matrix management structure. When the subsystem deliveries stabilized in early 2011, steady-state series processing could be achieved in an efficient and reliable manner. The challenge today is to maintain this production pace until completion towards the end of 2013. This paper describes the way ALMA AIV evolved successfully from the initial phase to the present steady state of array element series processing. It elaborates on the different project phases and their relationships, presents processing statistics, illustrates the lessons learned and relevant best practices, and concludes with an outlook of the path towards completion.

  16. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first-class citizens. SciQL lowers the barrier to adopting a relational DBMS (RDBMS) in scientific domains, because it includes functionality often found only in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope. In particular, we show how the LOFAR light-curve database of all detected sources can be constructed by correlating sources across the spatial, frequency, time and polarisation domains.

  17. A novel method using adaptive hidden semi-Markov model for multi-sensor monitoring equipment health prognosis

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Dong, Ming; Lv, Wenyuan; Geng, Xiuli; Li, Yupeng

    2015-12-01

    Health prognosis for equipment is considered a key process of the condition-based maintenance strategy. This paper presents an integrated framework for multi-sensor equipment diagnosis and prognosis based on an adaptive hidden semi-Markov model (AHSMM). Unlike a standard hidden semi-Markov model (HSMM), the basic algorithms of the AHSMM are first modified to reduce computation and space complexity. Then, the maximum likelihood linear regression transformation method is used to train the output and duration distributions and re-estimate all unknown parameters. The AHSMM is used to identify the hidden degradation state and to obtain the transition probabilities among health states and durations. Finally, through the proposed hazard rate equations, the remaining useful life of equipment can be predicted from multi-sensor information. The main results are verified in a real-world application: monitoring hydraulic pumps from Caterpillar Inc. The results show that the proposed methods are more effective for multi-sensor monitoring equipment health prognosis.

  18. Adaptive and mobile ground sensor array.

    SciTech Connect

    Holzrichter, Michael Warren; O'Rourke, William T.; Zenner, Jennifer; Maish, Alexander B.

    2003-12-01

    The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.

  19. High Density Crossbar Arrays with Sub-15 nm Single Cells via Liftoff Process Only

    PubMed Central

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing fabrication boundaries to their limits, leveraging an even higher density of nano-devices towards reaching a 4F²/cell footprint in 3D arrays. Here, we study the liftoff process limits to achieve extremely dense nanowires while ensuring preservation of thin-film quality. The proposed method is optimized for attaining multiple-layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells with sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer. PMID:27585643

  20. High Density Crossbar Arrays with Sub-15 nm Single Cells via Liftoff Process Only.

    PubMed

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing fabrication boundaries to their limits, leveraging an even higher density of nano-devices towards reaching a 4F²/cell footprint in 3D arrays. Here, we study the liftoff process limits to achieve extremely dense nanowires while ensuring preservation of thin-film quality. The proposed method is optimized for attaining multiple-layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells with sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer. PMID:27585643

  1. Enhancement of Data Analysis Through Multisensor Data Fusion Technology

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Multi-sensor data fusion is an emerging technology that fuses data from multiple sensors in order to make a more accurate estimation of the environment through measurement and detection. Applications of multi-sensor data fusion cross a wide spectrum in military and civilian areas. With the rapid e...

  2. Applying Convolution-Based Processing Methods To A Dual-Channel, Large Array Artificial Olfactory Mucosa

    NASA Astrophysics Data System (ADS)

    Taylor, J. E.; Che Harun, F. K.; Covington, J. A.; Gardner, J. W.

    2009-05-01

    Our understanding of the human olfactory system, particularly with respect to the phenomenon of nasal chromatography, has led us to develop a new generation of odour-sensitive instruments (or electronic noses). This instrument requires new approaches to data processing so that its information-rich signals can be fully exploited; here, we apply a novel time-series-based technique for processing such data. The dual-channel, large-array artificial olfactory mucosa consists of 3 arrays of 300 sensors each. The sensors are divided into 24 groups, with each group made from a particular type of polymer. The first array is connected to the other two arrays by a pair of retentive columns. One channel is coated with Carbowax 20 M, and the other with OV-1. This configuration partly mimics the nasal chromatography effect, and partly augments it by utilizing not only polar (mucus layer) but also non-polar (artificial) coatings. Such a device presents several challenges to multi-variate data processing: a large, redundant dataset, spatio-temporal output, and a small sample space. By applying a novel convolution approach to this problem, it has been demonstrated that these problems can be overcome. The artificial mucosa signals have been classified using a probabilistic neural network, giving an accuracy of 85%. Even better results should be possible through the selection of other sensors with lower correlation.
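
    The classification stage mentioned above can be illustrated with a minimal Gaussian-kernel probabilistic neural network; the feature vectors are assumed to come out of the convolution step, and the smoothing parameter and array shapes are illustrative assumptions.

        import numpy as np

        def pnn_predict(X_train, y_train, X_test, sigma=0.1):
            """Classify each test vector by the class whose training kernels sum highest.
            X_train: (n_train, n_features); y_train: (n_train,) labels; X_test: (n_test, n_features)."""
            classes = np.unique(y_train)
            preds = []
            for x in X_test:
                d2 = ((X_train - x) ** 2).sum(axis=1)          # squared distances to all exemplars
                k = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel activations
                scores = [k[y_train == c].mean() for c in classes]
                preds.append(classes[int(np.argmax(scores))])
            return np.array(preds)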

  3. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    NASA Astrophysics Data System (ADS)

    Warburton, W. K.; Harris, J. T.; Friedrich, S.

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100-2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays - currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I-V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.
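
    The operating-point selection described at the end of the abstract reduces to scanning the offset voltage, recording the noise at each bias, and picking the minimum. A minimal sketch, in which measure_noise stands in for the instrument call and is purely hypothetical:

        import numpy as np

        def select_operating_point(offsets_mV, measure_noise):
            """offsets_mV: candidate bias offsets along the I-V curve;
            measure_noise: callable returning an RMS noise figure at a given offset."""
            noise = np.array([measure_noise(v) for v in offsets_mV])
            best = int(np.argmin(noise))          # lowest-noise bias point
            return offsets_mV[best], noise[best]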

  4. Fabrication of microlens arrays by a rolling process with soft polydimethylsiloxane molds

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Nying; Hsieh, Hsin-Ta; Su, Guo-Dung John

    2011-06-01

    In this paper, we present a new roll-to-roll method to fabricate visible light transparent microlens arrays on a glass substrate by using soft and cost-effective polydimethylsiloxane (PDMS) molds. First, we fabricated microlens array master molds by photoresist thermal reflow processes on silicon substrates. We then transferred the pattern to PDMS molds by a spin coater. After making the PDMS molds, we used a two-wheel roll-to-roll printing machine to replicate ultraviolet resin microlens arrays on glass substrates. The PDMS molds can be made easily at a low cost compared with traditional electroplating metal molds. We studied the quality of microlens arrays that were replicated by different rolling pressures of 20, 200 and 500 N cm⁻². We also identified the relation between the pressure and the shape of the microlens arrays. The results showed that the best yield rate and replication performance were achieved with a pressure of approximately 200 N cm⁻² and 4 min of ultraviolet light exposure.

  5. Multiplexed optical operation of nanoelectromechanical systems (NEMS) arrays for sensing and signal-processing applications

    NASA Astrophysics Data System (ADS)

    Sampathkumar, Ashwin

    2014-06-01

    NEMS are rapidly being developed for a variety of sensing applications as well as for exploring interesting regimes in fundamental physics. In most of these endeavors, operation of a NEMS device involves actuating the device harmonically around its fundamental resonance and detecting the subsequent motion while the device interacts with its environment. Even though a single NEMS resonator is exceptionally sensitive, a typical application, such as sensing or signal processing, requires the detection of signals from many resonators distributed over the surface of a chip. Therefore, one of the key technological challenges in the field of NEMS is the development of multiplexed measurement techniques to detect the motion of a large number of NEMS resonators simultaneously. In this work, we address the important and difficult problem of interfacing with a large number of NEMS devices and facilitating the use of such arrays in, for example, sensing and signal processing applications. We report a versatile, all-optical technique to excite and read out a distributed NEMS array. The NEMS array is driven by a distributed, intensity-modulated optical pump through the photothermal effect. The ensuing vibrational response of the array is multiplexed onto a single probe beam as a high-frequency phase modulation. The phase modulation is optically down-converted to a low-frequency intensity modulation using an adaptive full-field interferometer, and subsequently is detected using a charge-coupled device (CCD) array. Rapid and single-step mechanical characterization of approximately 60 nominally identical, high-frequency resonators is demonstrated. The technique may enable sensitivity improvements over single NEMS resonators by averaging signals coming from a multitude of devices in the array. In addition, the diffraction-limited spatial resolution may allow for position-dependent read-out of NEMS sensor chips for sensing multiple analytes or spatially inhomogeneous forces.

  6. High resolution beamforming on large aperture vertical line arrays: Processing synthetic data

    NASA Astrophysics Data System (ADS)

    Tran, Jean-Marie Q.; Hodgkiss, William S.

    1990-09-01

    This technical memorandum studies the beamforming of large aperture line arrays deployed vertically in the water column. The work concentrates on the use of high resolution techniques. Two processing strategies are envisioned: (1) full aperture coherent processing, which in theory offers the best processing gain; and (2) subaperture processing, which consists of extracting subapertures from the array and recombining the angular spectra estimated from these subarrays. The conventional beamformer, the minimum variance distortionless response (MVDR) processor, the multiple signal classification (MUSIC) algorithm and the minimum norm method are used in this study. To validate the various processing techniques, the ATLAS normal mode program is used to generate synthetic data constituting a realistic signal environment. A deep-water, range-independent sound velocity profile environment, characteristic of the North-East Pacific, is studied for two different 128-sensor arrays: a very long one cut for 30 Hz and operating at 20 Hz, and a shorter one cut for 107 Hz and operating at 100 Hz. The simulated sound source is 5 m deep. The full aperture and subaperture processing are implemented with curved and plane wavefront replica vectors. The beamforming results are examined and compared to the ray-theory results produced by the generic sonar model.
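
    Of the estimators listed above, the MVDR (Capon) processor with plane-wavefront replica vectors is straightforward to sketch for a uniform vertical line array; the element spacing, wavelength, and diagonal-loading level below are illustrative assumptions, not values from the memorandum.

        import numpy as np

        def mvdr_spectrum(X, d, wavelength, angles_deg, loading=1e-3):
            """X: (sensors, snapshots) complex array data; d: element spacing in metres.
            Returns the MVDR power estimate at each candidate arrival angle."""
            n = X.shape[0]
            R = X @ X.conj().T / X.shape[1]                      # sample covariance
            R += loading * np.trace(R).real / n * np.eye(n)      # diagonal loading for stability
            Rinv = np.linalg.inv(R)
            k = 2 * np.pi / wavelength
            power = []
            for th in np.deg2rad(angles_deg):
                a = np.exp(1j * k * d * np.arange(n) * np.sin(th))   # plane-wave replica vector
                power.append(1.0 / np.real(a.conj() @ Rinv @ a))
            return np.array(power)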

  7. MULTI-SENSOR TOWED ARRAY DETECTION SYSTEM (MTADS)

    EPA Science Inventory

    UXO is a serious and prevalent environmental problem currently facing DoD facility managers. Mitigation and remediation activities are often hindered by the fact that UXO is colocated with other environmental threats including ordnance explosives wastes (OEW), chemical wastes, an...

  8. Hybrid Arrays for Chemical Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  9. Multi-sensor data fusion framework for CNC machining monitoring

    NASA Astrophysics Data System (ADS)

    Duro, João A.; Padget, Julian A.; Bowen, Chris R.; Kim, H. Alicia; Nassehi, Aydin

    2016-01-01

    Reliable machining monitoring systems are essential for lowering production time and manufacturing costs. Existing expensive monitoring systems focus on prevention/detection of tool malfunctions and provide information for process optimisation through force measurement. An alternative and cost-effective approach is to monitor acoustic emissions (AEs) from machining operations, which act as a robust proxy. The limitations of AEs include high sensitivity to sensor position and cutting parameters. In this paper, a novel multi-sensor data fusion framework is proposed to enable identification of the best sensor locations for monitoring cutting operations, identification of the sensors that provide the best signal, and derivation of signals with an enhanced periodic component. Our experimental results reveal that by utilising the framework, and using only three sensors, signal interpretation improves substantially and the monitoring system reliability is enhanced for a wide range of machining parameters. The framework provides a route to overcoming the major limitations of AE-based monitoring.
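
    One plausible reading of the sensor-selection and periodic-component ideas above is to rank AE channels by the strength of their periodicity at the expected spindle or tooth-passing period and then fuse the best ones; the sketch below is only an illustration of that idea under stated assumptions, not the framework published in the paper.

        import numpy as np

        def periodicity_score(x, lag):
            """Normalised autocorrelation of one AE channel at the expected lag (in samples)."""
            x = x - x.mean()
            return (x[:-lag] * x[lag:]).sum() / ((x ** 2).sum() + 1e-12)

        def fuse_best_sensors(signals, lag, keep=3):
            """signals: (sensors, samples); keep the channels with the strongest periodic
            component and average them to derive a signal with an enhanced periodicity."""
            scores = np.array([periodicity_score(s, lag) for s in signals])
            best = np.argsort(scores)[::-1][:keep]
            return signals[best].mean(axis=0), best, scores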

  10. Reliability measurement during software development. [for a multisensor tracking system

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Sturm, W. A.; Trattner, S.

    1977-01-01

    During the development of data base software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.

  11. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel.

    PubMed

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by an all-solution process are desired for low-cost and flexible electronics. One of the greatest challenges is how to achieve an ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing of a conducting polymer as source/drain and gate electrodes without any complicated substrate pre-patterning process. By modifying the substrate's wettability, the conducting polymer's contact line is pinned during the drying process, which keeps the channel length well controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on a flexible substrate through an all-solution process. The simple and scalable process to fabricate high-resolution organic transistor arrays offers a low-cost approach to the development of flexible and wearable electronics. PMID:27378163

  12. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    PubMed Central

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by an all-solution process are desired for low-cost and flexible electronics. One of the greatest challenges is how to achieve an ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing of a conducting polymer as source/drain and gate electrodes without any complicated substrate pre-patterning process. By modifying the substrate's wettability, the conducting polymer's contact line is pinned during the drying process, which keeps the channel length well controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on a flexible substrate through an all-solution process. The simple and scalable process to fabricate high-resolution organic transistor arrays offers a low-cost approach to the development of flexible and wearable electronics. PMID:27378163

  13. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-07-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by an all-solution process are desired for low-cost and flexible electronics. One of the greatest challenges is how to achieve an ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing of a conducting polymer as source/drain and gate electrodes without any complicated substrate pre-patterning process. By modifying the substrate's wettability, the conducting polymer's contact line is pinned during the drying process, which keeps the channel length well controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on a flexible substrate through an all-solution process. The simple and scalable process to fabricate high-resolution organic transistor arrays offers a low-cost approach to the development of flexible and wearable electronics.

  14. Comprehensive exon array data processing method for quantitative analysis of alternative spliced variants

    PubMed Central

    Chen, Ping; Lepikhova, Tatiana; Hu, Yizhou; Monni, Outi; Hautaniemi, Sampsa

    2011-01-01

    Alternative splicing of pre-mRNA generates protein diversity. Dysfunction of splicing machinery and expression of specific transcripts has been linked to cancer progression and drug response. Exon microarray technology enables genome-wide quantification of expression levels of the majority of exons and facilitates the discovery of alternative splicing events. Analysis of exon array data is more challenging than the analysis of gene expression data and there is a need for reliable quantification of exons and alternatively spliced variants. We introduce a novel, computationally efficient methodology, Multiple Exon Array Preprocessing (MEAP), for exon array data pre-processing, analysis and visualization. We compared MEAP with existing pre-processing methods, and validation of six exons and two alternatively spliced variants with qPCR corroborated MEAP expression estimates. Analysis of exon array data from head and neck squamous cell carcinoma (HNSCC) cell lines revealed several transcripts associated with 11q13 amplification, which is related with decreased survival and metastasis in HNSCC patients. Our results demonstrate that MEAP produces reliable expression values at exon, alternatively spliced variant and gene levels, which allows generating novel experimentally testable predictions. PMID:21745820

  15. Extension of DAMAS Phased Array Processing for Spatial Coherence Determination (DAMAS-C)

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M., Jr.

    2006-01-01

    The present study reports a new development of the DAMAS microphone phased array processing methodology that allows the determination and separation of coherent and incoherent noise source distributions. In 2004, a Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) was developed which decoupled the array design and processing influence from the noise being measured, using a simple and robust algorithm. In 2005, three-dimensional applications of DAMAS were examined. DAMAS has been shown to render an unambiguous quantitative determination of acoustic source position and strength. However, an underlying premise of DAMAS, as well as that of classical array beamforming methodology, is that the noise regions under study are distributions of statistically independent sources. The present development, called DAMAS-C, extends the basic approach to include coherence definition between noise sources. The solutions incorporate cross-beamforming array measurements over the survey region. While the resulting inverse problem can be large and the iteration solution computationally demanding, it solves problems no other technique can approach. DAMAS-C is validated using noise source simulations and is applied to airframe flap noise test results.

  16. A Passive Wireless Multi-Sensor SAW Technology Device and System Perspectives

    PubMed Central

    Malocha, Donald C.; Gallagher, Mark; Fisher, Brian; Humphries, James; Gallagher, Daniel; Kozlovski, Nikolai

    2013-01-01

    This paper discusses a SAW passive, wireless multi-sensor system under development by our group for the past several years. The device focus is on orthogonal frequency coded (OFC) SAW sensors, which use both frequency diversity and pulse-position reflectors to encode the device ID; these are briefly contrasted with other embodiments. A synchronous correlator transceiver is used for the hardware, and the post-processing and correlation techniques applied to the received signal to extract the sensor information are presented. Critical device and system parameters addressed include encoding, operational range, SAW device parameters, post-processing, and antenna-SAW device integration. A fully developed 915 MHz OFC SAW multi-sensor system is used to show experimental results. The system is based on a software radio approach that provides great flexibility for future enhancements and diverse sensor applications. Several different sensor types using the OFC SAW platform are shown. PMID:23666124

  17. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    PubMed Central

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-01-01

    Printing fully solution-processed organic electronic devices may potentially revolutionize production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm² V⁻¹ s⁻¹ in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm² V⁻¹ s⁻¹ and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays. PMID:24492785

  18. Fabrication of hybrid nanostructured arrays using a PDMS/PDMS replication process.

    PubMed

    Hassanin, H; Mohammadkhani, A; Jiang, K

    2012-10-21

    In this study, a novel and low-cost nanofabrication process is proposed for producing hybrid polydimethylsiloxane (PDMS) nanostructured arrays. The proposed process involves monolayer self-assembly of polystyrene (PS) spheres, PDMS nanoreplication, thin-film coating, and PDMS-to-PDMS (PDMS/PDMS) replication. A self-assembled monolayer of PS spheres is used as the first template. Second, a PDMS template is achieved by replica moulding. Third, the PDMS template is coated with a platinum or gold layer. Finally, a PDMS nanostructured array is developed by casting PDMS slurry on top of the coated PDMS. The cured PDMS is peeled off and used as a replica surface. In this study, the influences of the coating on the PDMS topography, the contact angle of the PDMS slurry and the ease of peel-off are discussed in detail. From experimental evaluation, a thickness of at least a 20 nm gold layer or a 40 nm platinum layer on the surface of the PDMS template improves the contact angle and eases peel-off. The coated PDMS surface is successfully used as a template to achieve a replica with a uniform array via the PDMS/PDMS replication process. Both the PDMS template and the replica are free of defects and undistorted after demoulding, with a highly ordered hexagonal arrangement. In addition, the geometry of the nanostructured PDMS can be controlled by changing the thickness of the deposited layer. The simplicity and controllability of the process show great promise as a robust nanoreplication method for functional applications. PMID:22868401

  19. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including dual-color affordable focal planes, on-focal-plane-array biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave IR/longwave IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very-large-scale-integration (VLSI) integrated circuit developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  20. A flexible implementation for Doppler radar to verify various base-band array signal processing algorithms

    NASA Astrophysics Data System (ADS)

    Yang, Eunjung; Lee, Jonghyun; Jung, Byungwook; Chun, Joohwan

    2005-09-01

    We describe a flexible hardware system for a Doppler radar, designed to verify various baseband array signal processing algorithms. In this work we design a Doppler radar system simulator for baseband signal processing at the laboratory level. Based on this baseband signal processor, a PN-code pulse Doppler radar simulator is developed. More specifically, this simulator consists of an echo signal generation part and a signal processing part. For the echo signal generation part, we use an active array structure with 4 elements and adopt a Barker-coded PCM signal in transmission and reception for digital pulse compression. In the signal processing part, we first convert the RF radar pulse to a baseband signal, because the baseband algorithms operate on IF-sampled data. Various digital beamforming algorithms can be adopted as baseband algorithms in our simulator. We mainly use the Multiple Sidelobe Canceller (MSC), with main array antenna elements and auxiliary antenna elements, as the beamforming and sidelobe cancellation algorithm. For Doppler filtering, we use the FFT. A control unit is necessary to control the overall system and to manage the timing schedule for its operation.
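
    Two of the baseband steps named above, digital pulse compression with a Barker code and FFT Doppler filtering across pulses, can be sketched as follows; the code length, data shapes, and function names are illustrative assumptions rather than the simulator's implementation.

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

        def pulse_compress(rx, code=barker13):
            """Matched-filter each received pulse (rows of rx) with the transmit code."""
            mf = code[::-1].conj()
            return np.array([np.convolve(p, mf, mode='same') for p in rx])

        def doppler_filter(cpi):
            """cpi: (pulses, range_bins) of compressed data; the FFT across the pulse
            axis separates targets by Doppler frequency within the coherent interval."""
            return np.fft.fftshift(np.fft.fft(cpi, axis=0), axes=0)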

  1. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    NASA Astrophysics Data System (ADS)

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-02-01

    Printing fully solution-processed organic electronic devices may potentially revolutionize production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm2 V-1 s-1 in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm2 V-1 s-1 and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays.

  2. Process development for automated solar cell and module production. Task 4: Automated array assembly

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use was developed. The process sequence was then critically analyzed from a technical and economic standpoint to determine the technological readiness of certain process steps for implementation. The steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect, both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development.

  3. Implementation of a Digital Signal Processing Subsystem for a Long Wavelength Array Station

    NASA Technical Reports Server (NTRS)

    Soriano, Melissa; Navarro, Robert; D'Addario, Larry; Sigman, Elliott; Wang, Douglas

    2011-01-01

    This paper describes the implementation of a Digital Signal Processing (DSP) subsystem for a single Long Wavelength Array (LWA) station. The LWA is a radio telescope that will consist of many phased array stations. Each LWA station consists of 256 pairs of dipole-like antennas operating over the 10-88 MHz frequency range. The Digital Signal Processing subsystem digitizes up to 260 dual-polarization signals at 196 MHz from the LWA Analog Receiver, adjusts the delay and amplitude of each signal, and forms four independent beams. Coarse delay is implemented using a first-in-first-out buffer and fine delay is implemented using a finite impulse response filter. Amplitude adjustment and polarization corrections are implemented using a 2x2 matrix multiplication.
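
    The per-signal operations described above (integer-sample delay via a FIFO-style buffer, fractional delay via a short FIR filter, and a 2x2 matrix for amplitude/polarization correction) can be illustrated as below; the filter length, window, and function names are assumptions for the sketch, not the LWA firmware.

        import numpy as np

        def coarse_delay(x, n):
            """Integer-sample delay, equivalent to reading out of a FIFO n samples late."""
            return np.concatenate([np.zeros(n, dtype=x.dtype), x[:len(x) - n]])

        def fine_delay(x, frac, ntaps=16):
            """Sub-sample delay of frac (0..1) samples using a windowed-sinc FIR filter."""
            k = np.arange(ntaps) - (ntaps - 1) / 2.0
            h = np.sinc(k - frac) * np.hamming(ntaps)
            return np.convolve(x, h, mode='same')

        def polarization_correct(xy, J):
            """xy: (2, samples) dual-polarization stream; J: 2x2 amplitude/polarization
            correction matrix applied sample by sample."""
            return J @ xy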

  4. Optimization of lithography process for the fabrication of Micro-Faraday cup array

    NASA Astrophysics Data System (ADS)

    Arab, J. M.; Brahmankar, P. K.; Pawade, R. S.; Srivastava, A. K.

    2016-05-01

    The micro-Faraday cup array detector (MFCAD) is used for the detection of the charge of incoming ions in mass spectrometry. The optimization of the complete lithography process for the fabrication of a micro-Faraday cup array detector structure in photoresist (AZ4903) on a silicon substrate is reported in this work. A UV-LED-based exposure system was designed for the transfer of the micro-Faraday cup structure onto the photoresist. The assembly consists of the exposure system, a collimating lens and a mask/substrate holder. The fabrication process consists of coating of photoresist on the silicon substrate, designing and printing the photomask and finally the UV lithography. The fabricated structures are characterized using an optical microscope. The dimensions achieved are found to be similar to those of the photomask.

  5. Microlens array production in a microtechnological dry etch and reflow process for display applications

    NASA Astrophysics Data System (ADS)

    Knieling, T.; Shafi, M.; Lang, W.; Benecke, W.

    2012-03-01

    The fabrication of arrays consisting of densely ordered circular convex microlenses with diameters of 126 μm, made of quartz glass in a photoresist reflow and dry-etch structure transition process, is demonstrated. The rectangular lens arrays, with dimensions of 6 mm x 9 mm, were designed for focusing collimated light on the pixel center regions of a translucent interference display, which was also produced in microtechnological process steps. The lenses focus light on the pixel centers and thus serve to increase display brightness and contrast, since incoming collimated light is partially blocked by opaque metallic ring contacts at the display pixel edges. The focal lengths of the lenses lie between 0.46 mm and 2.53 mm and were adjusted by varying the ratio of the selective dry etch rates of photoresist and quartz glass. Due to volume shrinkage and edge-line pinning of the photoresist structures, the lens curvatures emerge hyperbolic, leading to improved focusing performance.

  6. Microcavity array plasma system for remote chemical processing at atmospheric pressure

    NASA Astrophysics Data System (ADS)

    Lee, Dae-Sung; Hamaguchi, Satoshi; Sakai, Osamu; Park, Sung-Jin; Eden, J. Gary

    2012-06-01

    A microplasma system designed for chemical processing at atmospheric pressure is fabricated and characterized with flowing He/O2 gas mixtures. At the heart of this microcavity dielectric barrier discharge (MDBD) system are two arrays of half-ellipsoidal microcavities engraved by micropowder blasting into dielectric surfaces facing a flowing, low-temperature plasma. Experiments demonstrate that the ignition voltage is reduced, and the spatially averaged optical emission is doubled, for an MDBD flowing plasma array relative to an equivalent system having no microcavities. As an example of the potential of flowing atmospheric microplasma systems for chemical processing, the decomposition of methylene blue (as evidenced by decoloration at 650.2 nm) is shown to proceed at a rate as much as a factor of two greater than that for a non-microcavity equivalent.

  7. Portable nuclear material detector and process

    DOEpatents

    Hofstetter, Kenneth J; Fulghum, Charles K; Harpring, Lawrence J; Huffman, Russell K; Varble, Donald L

    2008-04-01

    A portable, hand held, multi-sensor radiation detector is disclosed. The detection apparatus has a plurality of spaced sensor locations which are contained within a flexible housing. The detection apparatus, when suspended from an elevation, will readily assume a substantially straight, vertical orientation and may be used to monitor radiation levels from shipping containers. The flexible detection array can also assume a variety of other orientations to facilitate any unique container shapes or to conform to various physical requirements with respect to deployment of the detection array. The output of each sensor within the array is processed by at least one CPU which provides information in a usable form to a user interface. The user interface is used to provide the power requirements and operating instructions to the operational components within the detection array.

  8. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    NASA Astrophysics Data System (ADS)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals, but instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
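
    A minimal sketch of the subspace idea above under stated assumptions: take a block of array data, estimate the common low-dimensional temporal subspace, flag channels that the subspace explains poorly, and track the effective dimension itself; the component counts and thresholds are illustrative, not values from the study.

        import numpy as np

        def channel_anomaly_scores(window, n_comp=3):
            """window: (channels, samples) block of array data. Project each channel onto
            the subspace spanned by the leading principal components of the whole array;
            channels with a large residual fraction look inconsistent with the array and
            are candidates for exclusion from beamforming."""
            w = window - window.mean(axis=1, keepdims=True)
            U, S, Vt = np.linalg.svd(w, full_matrices=False)
            basis = Vt[:n_comp]                     # common temporal subspace
            recon = (w @ basis.T) @ basis           # per-channel reconstruction
            resid = ((w - recon) ** 2).sum(axis=1)
            total = (w ** 2).sum(axis=1) + 1e-12
            return resid / total                    # values near 1.0 flag suspect channels

        def effective_dimension(window, energy=0.95):
            """Number of principal components needed to capture the requested energy;
            a sudden change in this number over time can itself signal array problems."""
            w = window - window.mean(axis=1, keepdims=True)
            s = np.linalg.svd(w, compute_uv=False)
            frac = np.cumsum(s ** 2) / (s ** 2).sum()
            return int(np.searchsorted(frac, energy) + 1)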

  9. Development of subminiature multi-sensor hot-wire probes

    NASA Technical Reports Server (NTRS)

    Westphal, Russell V.; Ligrani, Phillip M.; Lemos, Fred R.

    1988-01-01

    Limitations on the spatial resolution of multisensor hot wire probes have precluded accurate measurements of Reynolds stresses very near solid surfaces in wind tunnels and in many practical aerodynamic flows. The fabrication, calibration and qualification testing of very small single horizontal and X-array hot-wire probes which are intended to be used near solid boundaries in turbulent flows where length scales are particularly small, is described. Details of the sensor fabrication procedure are reported, along with information needed to successfully operate the probes. As compared with conventional probes, manufacture of the subminiature probes is more complex, requiring special equipment and careful handling. The subminiature probes tested were more fragile and shorter lived than conventional probes; they obeyed the same calibration laws but with slightly larger experimental uncertainty. In spite of these disadvantages, measurements of mean statistical quantities and spectra demonstrate the ability of the subminiature sensors to provide the measurements in the near wall region of turbulent boundary layers that are more accurate than conventional sized probes.

  10. Lightweight solar array blanket tooling, laser welding and cover process technology

    NASA Technical Reports Server (NTRS)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  11. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    PubMed Central

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of the Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained for either pure or contaminated samples with controlled concentrations of standard ethanol solutions of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system can discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument. PMID:25184490
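
    The PCA step mentioned above amounts to projecting the daily headspace responses onto the leading principal components and looking for separation between normal and spoiled batches in the score space; the sketch below assumes a samples-by-sensors response matrix and is only an illustration, not the study's processing chain.

        import numpy as np

        def pca_scores(X, n_comp=2):
            """X: (samples, sensors) array responses; returns (samples, n_comp) PCA scores."""
            Xc = X - X.mean(axis=0)                      # mean-centre each sensor channel
            U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Xc @ Vt[:n_comp].T                    # project onto leading components

        # Scores from a normal batch and from a batch spiked with acetic acid would be
        # plotted together; a growing separation between the two clusters over the
        # fermentation days indicates a spoiled process.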

  12. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input.

    PubMed

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-01-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter. PMID:27598156
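
    Two generic ingredients named above, a least-squares estimate of the common unknown input from stacked per-node models and consensus averaging of neighbouring state estimates, are sketched below; the matrices and the simple averaging iteration are illustrative assumptions, not the filter derived in the paper.

        import numpy as np

        def ls_unknown_input(H_list, r_list):
            """Stack per-node models r_i ~= H_i @ d and solve for the common input d."""
            H = np.vstack(H_list)
            r = np.concatenate(r_list)
            d, *_ = np.linalg.lstsq(H, r, rcond=None)
            return d

        def consensus_average(estimates, adjacency, iters=20, step=0.2):
            """estimates: (nodes, state_dim) local state estimates;
            adjacency: (nodes, nodes) 0/1 neighbour matrix. Each node repeatedly moves
            its estimate toward the mean of its neighbours' estimates."""
            x = np.array(estimates, dtype=float)
            for _ in range(iters):
                x_new = x.copy()
                for i in range(len(x)):
                    nbrs = np.flatnonzero(adjacency[i])
                    if nbrs.size:
                        x_new[i] = x[i] + step * (x[nbrs].mean(axis=0) - x[i])
                x = x_new
            return x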

  13. An Asynchronous Multi-Sensor Micro Control Unit for Wireless Body Sensor Networks (WBSNs)

    PubMed Central

    Chen, Chiung-An; Chen, Shih-Lun; Huang, Hong-Yi; Luo, Ching-Hsing

    2011-01-01

    In this work, an asynchronous multi-sensor micro control unit (MCU) core is proposed for wireless body sensor networks (WBSNs). It consists of asynchronous interfaces, a power management unit, a multi-sensor controller, a data encoder (DE), and an error correction coder (ECC). To improve the system performance and expansion abilities, the asynchronous interface is created for handshaking across the different clock domains between the ADC and RF blocks and the MCU. To extend the operating time of the WBSN system, a power management technique is developed to reduce power consumption. In addition, the multi-sensor controller is designed for detecting various biomedical signals. To prevent loss errors from wireless transmission, the use of an error correction coding technique is important in biomedical applications. The data encoder is added for lossless compression of various biomedical signals, with a compression ratio of almost three. This design is successfully tested on an FPGA board. The VLSI architecture of this work contains 2.68 K gates and consumes 496 μW at a 133 MHz processing rate using a TSMC 0.13-μm CMOS process. Compared with previous techniques, this work offers higher performance, more functions, and lower hardware cost than other micro controller designs. PMID:22164000

  14. Improving risk understanding across ability levels: Encouraging active processing with dynamic icon arrays.

    PubMed

    Okan, Yasmina; Garcia-Retamero, Rocio; Cokely, Edward T; Maldonado, Antonio

    2015-06-01

    Icon arrays have been found to improve risk understanding and reduce judgment biases across a wide range of studies. Unfortunately, individuals with low graph literacy experience only limited benefits from such displays. To enhance the efficacy and reach of these decision aids, the authors developed and tested 3 types of dynamic design features, that is, computerized display features that unfold over time. Specifically, the authors manipulated the sequential presentation of the different elements of icon arrays, the presence of explanatory labels indicating what was depicted in the different regions of the arrays, and the use of a reflective question followed by accuracy feedback. The first 2 features were designed to promote specific cognitive processes involved in graph comprehension, whereas the 3rd feature was designed to promote more active, elaborative processing of risk information. Explanatory labels were effective in improving risk understanding among less graph-literate participants, whereas reflective questions resulted in large and robust performance benefits among participants with both low and high graph literacy. Theoretical and prescriptive implications are discussed. PMID:25938975

  15. Phased Arrays Techniques and Split Spectrum Processing for Inspection of Thick Titanium Casting Components

    NASA Astrophysics Data System (ADS)

    Banchet, J.; Sicard, R.; Zellouf, D. E.; Chahbaz, A.

    2003-03-01

    In aircraft structures, titanium parts and engine members are critical structural components, and their inspection is crucial. However, these structures are very difficult to inspect ultrasonically because their large grain structure increases noise drastically. In this work, phased array inspection setups were developed to detect small defects, such as simulated inclusions and porosity, contained in thick titanium casting blocks, which are frequently used in the aerospace industry. A Split Spectrum Processing (SSP)-based algorithm was then applied to the acquired data, employing a set of parallel bandpass filters with different center frequencies. This process led to a substantial improvement in the signal-to-noise ratio and thus in detectability.
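    The abstract describes the standard split-spectrum idea of filtering each A-scan through parallel bandpass filters with shifted center frequencies. One common recombination rule, sketched below on synthetic data with assumed filter settings, keeps the per-sample minimum across bands so that grain noise, which decorrelates between bands, is suppressed; this is a generic illustration rather than the authors' exact implementation.

```python
# Split-spectrum processing with the "minimization" recombination rule:
# band-pass the A-scan at several shifted center frequencies, then keep the
# per-sample minimum magnitude across bands. All settings are illustrative.
import numpy as np
from scipy import signal

def split_spectrum_min(a_scan, fs, centers_hz, bandwidth_hz, order=4):
    bands = []
    for fc in centers_hz:
        low, high = fc - bandwidth_hz / 2, fc + bandwidth_hz / 2
        sos = signal.butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
        bands.append(signal.sosfiltfilt(sos, a_scan))
    bands = np.abs(np.array(bands))
    # grain noise decorrelates across bands while flaw echoes persist, so the
    # per-sample minimum suppresses the noise and keeps the echo
    return bands.min(axis=0)

# usage with a synthetic 5 MHz A-scan sampled at 100 MHz
fs = 100e6
t = np.arange(0, 20e-6, 1 / fs)
a_scan = 0.2 * np.random.randn(t.size)                   # grain-like noise
a_scan[1000:1060] += np.sin(2 * np.pi * 5e6 * t[:60])    # simulated flaw echo
out = split_spectrum_min(a_scan, fs, centers_hz=[4e6, 5e6, 6e6], bandwidth_hz=2e6)
```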

  16. Media processing with field-programmable gate arrays on a microprocessor's local bus

    NASA Astrophysics Data System (ADS)

    Bove, V. Michael, Jr.; Lee, Mark; Liu, Yuan-Min; McEniry, Christopher; Nwodoh, Thomas A.; Watlington, John A.

    1998-12-01

    The Chidi system is a PCI-bus media processor card which performs its processing tasks on a large field-programmable gate array (Altera 10K100) in conjunction with a general purpose CPU (PowerPC 604e). Special address-generation and buffering logic (also implemented on FPGAs) allows the reconfigurable processor to share a local bus with the CPU, turning burst accesses to memory into continuous streams and converting between the memory's 64-bit words and the media data types. In this paper we present the design requirements for the Chidi system, describe the hardware architecture, and discuss the software model for its use in media processing.

  17. Avoiding sensor blindness in Geiger mode avalanche photodiode arrays fabricated in a conventional CMOS process

    NASA Astrophysics Data System (ADS)

    Vilella, E.; Diéguez, A.

    2011-12-01

    The need to move forward in the knowledge of the subatomic world has stimulated the development of new particle colliders. However, the objectives of the next generation of colliders set unprecedented challenges for detector performance. The purpose of this contribution is to present a bidimensional array based on avalanche photodiodes operated in Geiger mode to track high-energy particles in future linear colliders. The bidimensional array can operate in a gated mode to reduce the probability of detecting noise counts that interfere with real events. Low reverse overvoltages are used to lessen the dark count rate. Experimental results demonstrate that the prototype, fabricated with a standard HV-CMOS process, presents increased efficiency and avoids sensor blindness when the proposed techniques are applied.

  18. Process development for automated solar cell and module production. Task 4: automated array assembly

    SciTech Connect

    Hagerty, J.J.

    1980-06-30

    The scope of work under this contract involves specifying a process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use. This process sequence is then critically analyzed from a technical and economic standpoint to determine the technological readiness of each process step for implementation. The process steps are ranked according to the degree of development effort required and according to their significance to the overall process. Under this contract the steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development. Economic analysis using the SAMICS system has been performed during these studies to assure that development efforts have been directed towards the ultimate goal of price reduction. Details are given. (WHK)

  19. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    NASA Astrophysics Data System (ADS)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach the PLGA. However, direct CO2 laser ablation of PDMS can generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined polymethyl methacrylate (PMMA) ablation and a two-step PDMS casting process to form a PDMS female microneedle mold and eliminate the problems of direct ablation. A self-assembled polyethylene glycol monolayer was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. The PLGA microneedle array was then successfully released by bending the second-cast PDMS mold, exploiting its flexibility and hydrophobicity. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  20. Distinctive Order Based Self-Similarity descriptor for multi-sensor remote sensing image matching

    NASA Astrophysics Data System (ADS)

    Sedaghat, Amin; Ebadi, Hamid

    2015-10-01

    Robust, well-distributed and accurate feature matching in multi-sensor remote sensing images is a difficult task due to significant geometric and illumination differences. In this paper, a robust and effective image matching approach is presented for multi-sensor remote sensing images. The proposed approach consists of three main steps. In the first step, the UR-SIFT (uniform robust scale-invariant feature transform) algorithm is applied for uniform and dense local feature extraction. In the second step, a novel descriptor, the Distinctive Order Based Self-Similarity (DOBSS) descriptor, is computed for each extracted feature. Finally, a cross-matching process followed by a consistency check under a projective transformation model is performed for feature correspondence and mismatch elimination. The proposed method was successfully applied to matching various multi-sensor satellite images, including ETM+, SPOT 4, SPOT 5, ASTER, IRS, SPOT 6, QuickBird, GeoEye and WorldView imagery, and the results demonstrate its robustness and capability compared to common image matching techniques such as SIFT, PIIFD, GLOH, LIOP and LSS.
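    The cross-matching step can be illustrated by a mutual nearest-neighbor check on two descriptor sets; the sketch below uses random arrays as placeholders for DOBSS descriptors, and the subsequent projective (e.g., RANSAC-based) consistency check is only indicated in a comment.

```python
# Cross matching on arbitrary local descriptors (stand-ins for DOBSS): keep
# only mutual nearest-neighbor pairs between the two descriptor sets. A
# projective (homography) consistency check, e.g. with RANSAC, would follow
# to remove the remaining mismatches.
import numpy as np

def cross_match(desc1, desc2):
    """desc1: (n1, d), desc2: (n2, d) descriptor arrays (one row per feature)."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn_12 = d.argmin(axis=1)   # best match in image 2 for each feature of image 1
    nn_21 = d.argmin(axis=0)   # best match in image 1 for each feature of image 2
    # a pair (i, j) survives only if the choice is mutual
    return [(i, j) for i, j in enumerate(nn_12) if nn_21[j] == i]

rng = np.random.default_rng(0)
desc_a = rng.normal(size=(100, 64))
desc_b = desc_a[rng.permutation(100)] + 0.01 * rng.normal(size=(100, 64))
matches = cross_match(desc_a, desc_b)
```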

  1. Classification of multisensor remote-sensing images by structured neural networks

    SciTech Connect

    Serpico, S.B.; Roli, F.

    1995-05-01

    This paper proposes the application of structured neural networks to the classification of multisensor remote-sensing images. The purpose of the authors' approach is to allow the interpretation of the "network behavior," so that it can be utilized by photointerpreters for the validation of the neural classifier. In addition, their approach gives a criterion for defining the network architecture, thus avoiding the classical trial-and-error process. First of all, the architecture of structured multilayer feedforward networks is tailored to a multisensor classification problem. Then, such networks are trained to solve the problem by the error backpropagation algorithm. Finally, they are transformed into equivalent networks to obtain a simplified representation. The resulting equivalent networks may be interpreted as a hierarchical arrangement of "committees" that accomplish the classification task by checking a set of explicit constraints on the input data. Experimental results on a multisensor (optical and SAR) data set are described in terms of both classification accuracy and network interpretation. Comparisons with fully connected neural networks and with the k-nearest neighbor classifier are also made.

  2. Distributed multi-sensor particle filter for bearings-only tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Jungen; Ji, Hongbing

    2012-02-01

    In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial-distance observability of the target is poor, algorithms based on sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new, stable distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components from the unobservable components of the target. In the fusion centre, the target state can be estimated by utilising the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramer-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method provides better performance than the traditional PF method.
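    For orientation, the sketch below is a generic single-sensor bootstrap particle filter for bearings-only tracking in Cartesian coordinates with a simulated target and assumed noise levels; it is not the authors' hierarchical log-polar, multi-sensor fusion scheme.

```python
# Generic bootstrap particle filter for single-sensor bearings-only tracking.
# Motion model, noise levels and the simulated target are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
N, T, dt = 2000, 50, 1.0
q, r = 0.05, 0.02                          # process / bearing-noise std (assumed)
truth = np.array([50.0, 50.0, -1.0, 0.5])  # true [x, y, vx, vy]; sensor at origin

particles = rng.normal(truth, [10, 10, 0.5, 0.5], size=(N, 4))
weights = np.full(N, 1.0 / N)

for t in range(T):
    # advance the true target and generate a noisy bearing measurement
    truth[:2] += truth[2:] * dt
    z = np.arctan2(truth[1], truth[0]) + rng.normal(0, r)

    # propagate particles through a nearly constant-velocity model
    particles[:, :2] += particles[:, 2:] * dt
    particles[:, 2:] += rng.normal(0, q, size=(N, 2))

    # weight update with a wrapped Gaussian bearing likelihood
    pred = np.arctan2(particles[:, 1], particles[:, 0])
    err = np.angle(np.exp(1j * (z - pred)))
    weights *= np.exp(-0.5 * (err / r) ** 2)
    weights /= weights.sum()

    # multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx].copy(), np.full(N, 1.0 / N)

    estimate = weights @ particles             # posterior-mean state estimate
```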

  3. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has been a long-standing problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  4. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has been a long-standing problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  5. Process Development for Automated Solar Cell and Module Production. Task 4: Automated Array Assembly

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A baseline sequence for the manufacture of solar cell modules was specified. Starting with silicon wafers, the process goes through damage etching, texture etching, junction formation, plasma edge etch, aluminum back surface field formation, and screen printed metallization to produce finished solar cells. The cells were then series connected on a ribbon and bonded into a finished glass tedlar module. A number of steps required additional developmental effort to verify technical and economic feasibility. These steps include texture etching, plasma edge etch, aluminum back surface field formation, array layup and interconnect, and module edge sealing and framing.

  6. Evaluation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Array

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Liddicoat, Albert; Ralston, Jesse; Pingree, Paula

    2006-01-01

    The current implementation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays (TRIGA) is equipped with CFDP protocol and CCSDS Telemetry and Telecommand framing schemes to replace the CPU intensive software counterpart implementation for reliable deep space communication. We present the hardware/software co-design methodology used to accomplish high data rate throughput. The hardware CFDP protocol stack implementation is then compared against the two recent flight implementations. The results from our experiments show that TRIGA offers more than 3 orders of magnitude throughput improvement with less than one-tenth of the power consumption.

  7. The Role of Water Vapor and Dissociative Recombination Processes in Solar Array Arc Initiation

    NASA Technical Reports Server (NTRS)

    Galofar, J.; Vayner, B.; Degroot, W.; Ferguson, D.

    2002-01-01

    Experimental plasma arc investigations involving the onset of arc initiation for a negatively biased solar array immersed in low-density plasma have been performed. Previous studies of the arc initiation process have shown that the most probable arcing sites tend to occur at the triple junction involving the conductor, dielectric and plasma. More recently our own experiments have led us to believe that water vapor is the main causal factor behind the arc initiation process. Assuming the main component of the expelled plasma cloud by weight is water, the fastest process available is dissociative recombination (H2O⁺ + e⁻ → H* + OH*). A model that agrees with the observed dependency of arc current pulse width on the square root of capacitance is presented. A 400 MHz digital storage scope and current probe were used to detect arcs at the triple junction of a solar array. Simultaneous measurements of the arc trigger pulse, the gate pulse, the arc current and the arc voltage were then obtained. Finally, a large number of measurements of individual arc spectra were obtained in very short time intervals, ranging from 10 to 30 microseconds, using a 1/4 m spectrometer coupled with a gated intensified CCD. The spectrometer was systematically tuned to obtain optical arc spectra over the entire wavelength range of 260 to 680 nanometers. All relevant atomic lines and molecular bands were then identified.

  8. Automatic defect detection for TFT-LCD array process using quasiconformal kernel support vector data description.

    PubMed

    Liu, Yi-Hung; Chen, Yan-Jen

    2011-01-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves generalization performance of SVDD. The improvement is shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms. PMID:22016625
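    scikit-learn provides neither SVDD nor the paper's quasiconformal kernel, so the sketch below uses OneClassSVM with an RBF kernel as a closely related one-class stand-in; the feature vectors are random placeholders for features that would be extracted from LCD panel images.

```python
# One-class defect detection in the spirit of SVDD, using OneClassSVM with an
# RBF kernel as a stand-in (no quasiconformal kernel). Data are placeholders.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_train = rng.normal(0.0, 1.0, size=(500, 16))   # defect-free training features
test_normal = rng.normal(0.0, 1.0, size=(50, 16))
test_defect = rng.normal(4.0, 1.0, size=(50, 16))     # shifted cluster acts as "defects"

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_train)
# +1 = inside the learned description (normal), -1 = outside (defect candidate)
print((clf.predict(test_normal) == 1).mean(), (clf.predict(test_defect) == -1).mean())
```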

  9. Optoelectronic signal processing for phased-array antennas; Proceedings of the Meeting, Los Angeles, CA, Jan. 12, 13, 1988

    NASA Astrophysics Data System (ADS)

    Bhasin, Kul B.; Hendrickson, Brian M.

    1988-01-01

    Papers are presented on fiber optic links for airborne satellite applications, optoelectronic techniques for broadband switching, and GaAs circuits for a monolithic optical controller. Other topics include the optical processing of covariance matrices for adaptive processors, an optical linear heterodyne matrix-vector processor, and an EHF fiber optic-based array. An adaptive optical signal processing architecture using a signed-digit number system is considered along with microwave fiber optic links for phased arrays.

  10. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    NASA Astrophysics Data System (ADS)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  11. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    NASA Astrophysics Data System (ADS)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-03-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  12. Fabrication of microlens arrays on a glass substrate by roll-to-roll process with PDMS mold

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Nying; Su, Guo-Dung J.

    2009-08-01

    This paper presents a roll-to-roll method for fabricating microlens arrays on a glass substrate by using a cost-effective PDMS (polydimethylsiloxane) mold. A microlens array mold made of photoresist (AZ4620) was fabricated on a silicon substrate by a thermal reflow process, and the pattern was transferred to a PDMS film. The roll-to-roll system is a standard printing process whose roller is an acrylic cylinder wrapped with the PDMS mold. UV resin was chosen as the material for forming the microlenses in the rolling process with UV curing. We investigated the quality of the microlens arrays by changing parameters such as embossing pressure and rolling speed to ensure good lens quality.

  13. Sub-threshold signal processing in arrays of non-identical nanostructures.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafé, Salvador

    2011-10-28

    Weak input signals are routinely processed by molecular-scaled biological networks composed of non-identical units that operate correctly in a noisy environment. In order to show that artificial nanostructures can mimic this behavior, we explore theoretically noise-assisted signal processing in arrays of metallic nanoparticles functionalized with organic ligands that act as tunneling junctions connecting the nanoparticle to the external electrodes. The electronic transfer through the nanostructure is based on the Coulomb blockade and tunneling effects. Because of the fabrication uncertainties, these nanostructures are expected to show a high variability in their physical characteristics and a diversity-induced static noise should be considered together with the dynamic noise caused by thermal fluctuations. This static noise originates from the hardware variability and produces fluctuations in the threshold potential of the individual nanoparticles arranged in a parallel array. The correlation between different input (potential) and output (current) signals in the array is analyzed as a function of temperature, applied voltage, and the variability in the electrical properties of the nanostructures. Extensive kinetic Monte Carlo simulations with nanostructures whose basic properties have been demonstrated experimentally show that variability can enhance the correlation, even for the case of weak signals and high variability, provided that the signal is processed by a sufficiently high number of nanostructures. Moderate redundancy permits us not only to minimize the adverse effects of the hardware variability but also to take advantage of the nanoparticles' threshold fluctuations to increase the detection range at low temperatures. This conclusion holds for the average behavior of a moderately large statistical ensemble of non-identical nanostructures processing different types of input signals and suggests that variability could be beneficial for signal processing

  14. Post-Processing of the Full Matrix of Ultrasonic Transmit-Receive Array Data for Guided Wave Pipe Inspection

    NASA Astrophysics Data System (ADS)

    Velichko, A.; Wilcox, P. D.

    2009-03-01

    The paper describes a method for processing data from a guided wave transducer array on a pipe. The raw data set from such an array contains the full matrix of time-domain signals from each transmitter-receiver combination. It is shown that, for certain array configurations, the total focusing method can be applied, which allows the array to be focused at every point on the pipe surface in both transmission and reception. The effect of array configuration parameters on the sensitivity of the proposed method to random and coherent noise is discussed. Experimental results are presented using electromagnetic acoustic transducers (EMAT) for exciting and detecting the S0 Lamb wave mode in a 12 inch steel pipe at 200 kHz excitation frequency. The results show that, using the imaging algorithm, a 2-mm-diameter (0.08 wavelength) half-thickness hole can be detected.
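    A generic total focusing method over the full matrix of time traces can be sketched as below under a straight-ray, constant-velocity bulk-wave assumption, which is simpler than the guided-wave pipe geometry treated in the paper; the array geometry, wave speed and data are placeholders.

```python
# Total focusing method (TFM) on full matrix capture data: for every image
# pixel, sum the trace of every transmit-receive pair at the time of flight
# through that pixel. Straight rays and constant velocity are assumed.
import numpy as np

def tfm(fmc, element_x, fs, c, grid_x, grid_z):
    """fmc: (n_tx, n_rx, n_samples) full matrix of time traces.
    element_x: element positions along the probe (m); fs in Hz, c in m/s."""
    n_el, _, n_samples = fmc.shape
    tx, rx = np.meshgrid(np.arange(n_el), np.arange(n_el), indexing="ij")
    image = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            dist = np.hypot(element_x - x, z)            # element-to-pixel distances
            tof = (dist[:, None] + dist[None, :]) / c    # transmit plus receive path
            idx = np.clip(np.round(tof * fs).astype(int), 0, n_samples - 1)
            image[iz, ix] = abs(fmc[tx, rx, idx].sum())
    return image

# usage with random traces standing in for a measured full matrix
n_el, fs, c = 16, 25e6, 5900.0
fmc = np.random.randn(n_el, n_el, 2000)
elem_x = (np.arange(n_el) - (n_el - 1) / 2) * 0.6e-3
img = tfm(fmc, elem_x, fs, c,
          grid_x=np.linspace(-10e-3, 10e-3, 41),
          grid_z=np.linspace(5e-3, 40e-3, 71))
```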

  15. Phase velocity tomography of surface waves using ambient noise cross correlation and array processing

    NASA Astrophysics Data System (ADS)

    Boué, Pierre; Roux, Philippe; Campillo, Michel; Briand, Xavier

    2014-01-01

    Continuous recordings of ambient seismic noise across large seismic arrays allow a new type of processing using the cross-correlation technique on broadband data. We propose to apply double beamforming (DBF) to cross correlations to extract a particular wave component of the reconstructed signals. We focus here on the extraction of the surface waves to measure phase velocity variations with great accuracy. DBF acts as a spatial filter between two distant subarrays after cross correlation of the wavefield between each single receiver pair. During the DBF process, horizontal slowness and azimuth are used to select the wavefront on both subarray sides. DBF increases the signal-to-noise ratio, which improves the extraction of the dispersive wave packets. This combination of cross correlation and DBF is used on the Transportable Array (USArray), for the central U.S. region. A standard model of surface wave propagation is constructed from a combination of the DBF and cross correlations at different offsets and for different frequency bands. The perturbation (phase shift) between each beam and the standard model is inverted. High-resolution maps of the phase velocity of Rayleigh and Love waves are then constructed. Finally, the addition of azimuthal information provided by DBF is discussed, to construct curved rays that replace the classical great-circle path assumption.
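    The first stage, cross-correlating ambient noise between a station pair, can be sketched as below in plain NumPy; the double-beamforming stage, which stacks delayed correlations over two sub-arrays for a chosen slowness and azimuth, is not reproduced. The sampling rate and the synthetic 2 s delay are illustrative.

```python
# Frequency-domain cross-correlation of ambient noise between two stations.
import numpy as np

def noise_cross_correlation(u1, u2, fs, max_lag_s):
    n = len(u1)
    nfft = 2 * n                                        # zero-pad to avoid wrap-around
    spec = np.fft.rfft(u1, nfft) * np.conj(np.fft.rfft(u2, nfft))
    cc = np.fft.irfft(spec, nfft)
    cc = np.concatenate((cc[-n + 1:], cc[:n]))          # lags -(n-1) .. +(n-1)
    lags = np.arange(-(n - 1), n) / fs
    keep = np.abs(lags) <= max_lag_s
    return lags[keep], cc[keep]

# usage: the same random "noise", shifted by 2 s between the two records
fs = 20.0
noise = np.random.randn(12000)
u1, u2 = noise[:10000], noise[40:10040]
lags, cc = noise_cross_correlation(u1, u2, fs, max_lag_s=10)
print(lags[np.argmax(cc)])                              # peak near the +2 s shift
```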

  16. Sensor evaluation study for use with towed arrays for UXO site characterization

    SciTech Connect

    McDonald, J.R.; Robertson, R.

    1996-11-01

    The Naval Research Laboratory is developing a Multi-sensor Towed Array Detection System (MTADS) with support from the DOD Environmental Security Technology Certification Program (ESTCP). In this effort we seek to extend and refine ordnance detection technology to more efficiently characterize OEW sites, identifying nonferrous and smaller items, distinguishing ordnance from clutter and analyzing clustered targets to identify and locate individual targets within complex target fields. Our evaluation shows that these goals are best met by combining magnetic and electromagnetic sensors. We report on field studies at a prepared test range of commercial sensors in arrays in various configurations, including cesium-vapor magnetometers in single-sensor and gradiometric configurations, fluxgate gradiometers, proton precession magnetometers, and electromagnetic pulsed induction sensors. The advantages and disadvantages of each technology and their applicability based upon survey requirements are discussed. We also discuss recommended data densities, including horizontal sensor spacings, survey speeds, and sensor heights, and make recommendations about the appropriate use of gradiometers and active sensors.

  17. Correlation of lattice defects and thermal processing in the crystallization of titania nanotube arrays

    NASA Astrophysics Data System (ADS)

    Hosseinpour, Pegah M.; Yung, Daniel; Panaitescu, Eugen; Heiman, Don; Menon, Latika; Budil, David; Lewis, Laura H.

    2014-12-01

    Titania nanotubes have the potential to be employed in a wide range of energy-related applications such as solar energy-harvesting devices and hydrogen production. As the functionality of titania nanostructures is critically affected by their morphology and crystallinity, it is necessary to understand and control these factors in order to engineer useful materials for green applications. In this study, electrochemically-synthesized titania nanotube arrays were thermally processed in inert and reducing environments to isolate the role of post-synthesis processing conditions on the crystallization behavior, electronic structure and morphology development in titania nanotubes, correlated with the nanotube functionality. Structural and calorimetric studies revealed that as-synthesized amorphous nanotubes crystallize to form the anatase structure in a three-stage process that is facilitated by the creation of structural defects. It is concluded that processing in a reducing gas atmosphere versus in an inert environment provides a larger unit cell volume and a higher concentration of Ti3+ associated with oxygen vacancies, thereby reducing the activation energy of crystallization. Further, post-synthesis annealing in either reducing or inert atmospheres produces pronounced morphological changes, confirming that the nanotube arrays thermally transform into a porous morphology consisting of a fragmented tubular architecture surrounded by a network of connected nanoparticles. This study links explicit data concerning morphology, crystallization and defects, and shows that the annealing gas environment determines the details of the crystal structure, the electronic structure and the morphology of titania nanotubes. These factors, in turn, impact the charge transport and consequently the functionality of these nanotubes as photocatalysts.

  18. Integrated multi-sensor package (IMSP) for unmanned vehicle operations

    NASA Astrophysics Data System (ADS)

    Crow, Eddie C.; Reichard, Karl; Rogan, Chris; Callen, Jeff; Seifert, Elwood

    2007-10-01

    This paper describes recent efforts to develop integrated multi-sensor payloads for small robotic platforms for improved operator situational awareness and ultimately for greater robot autonomy. The focus is on enhancements to perception through integration of electro-optic, acoustic, and other sensors for navigation and inspection. The goals are to provide easier control and operation of the robot through fusion of multiple sensor outputs, to improve interoperability of the sensor payload package across multiple platforms through the use of open standards and architectures, and to reduce integration costs by embedded sensor data processing and fusion within the sensor payload package. The solutions investigated in this project to be discussed include: improved capture, processing and display of sensor data from multiple, non-commensurate sensors; an extensible architecture to support plug and play of integrated sensor packages; built-in health, power and system status monitoring using embedded diagnostics/prognostics; sensor payload integration into standard product forms for optimized size, weight and power; and the use of the open Joint Architecture for Unmanned Systems (JAUS)/ Society of Automotive Engineers (SAE) AS-4 interoperability standard. This project is in its first of three years. This paper will discuss the applicability of each of the solutions in terms of its projected impact to reducing operational time for the robot and teleoperator.

  19. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-12-31

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  20. A hierarchical data structure representation for fusing multisensor information

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-12-31

    A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  1. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-01-01

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  2. A hierarchical data structure representation for fusing multisensor information

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-01-01

    A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  3. Integration of Multi-sensor Data for Desertification Monitoring

    NASA Astrophysics Data System (ADS)

    Lin, S.; Kim, J.

    2010-12-01

    dune activities can be clearly revealed. For very detailed measurement, a terrestrial system applying close-range photogrammetry will be set up at the test sites to acquire sequential images and will be used to generate a 4D model of the dunes in the future. Finally, all the outputs from the multi-sensor data will be cross-verified and compiled to model the desertification process and its consequences. A desertification-combating activity performed by a Korea-China NGO alliance has been conducted in the Qubuqi desert in Nei Mongol, China. The method and system proposed above will be established and applied to monitor the dune mobility occurring in this area. The results are expected to be of great value in demonstrating the first case of remote sensing monitoring of desertification-combating activities.

  4. Ultrasound nondestructive evaluation (NDE) imaging with transducer arrays and adaptive processing.

    PubMed

    Li, Minghui; Hayward, Gordon

    2012-01-01

    This paper addresses the challenging problem of ultrasonic non-destructive evaluation (NDE) imaging with adaptive transducer arrays. In NDE applications, most materials used extensively in industry and civil engineering, such as concrete, stainless steel and carbon-reinforced composites, exhibit heterogeneous internal structure. When these are inspected using ultrasound, the signals from defects are significantly corrupted by the echoes from randomly distributed scatterers; even defects that are much larger than these random reflectors are difficult to detect with the conventional delay-and-sum operation. We propose to apply adaptive beamforming to the received data samples to reduce the interference and clutter noise. Beamforming manipulates the array beam pattern by appropriately weighting the per-element delayed data samples prior to summing them. The adaptive weights are computed from a statistical analysis of the data samples. This delay-weight-and-sum process can be viewed as applying a lateral spatial filter to the signals across the probe aperture. Simulations show that the clutter noise is reduced by more than 30 dB and the lateral resolution is enhanced simultaneously when adaptive beamforming is applied. In experiments inspecting a steel block with side-drilled holes, good quantitative agreement with the simulation results is demonstrated. PMID:22368457
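    One concrete way to realize the adaptive delay-weight-and-sum described above is minimum-variance (Capon/MVDR) weighting computed from the sample covariance of the per-element data, sketched below under a narrowband far-field assumption with placeholder data; it is an illustration rather than the authors' exact processor.

```python
# Minimum-variance (Capon/MVDR) weights from the sample covariance matrix.
# Narrowband far-field model; data and look direction are placeholders.
import numpy as np

def mvdr_weights(snapshots, steering, loading=1e-3):
    """snapshots: (n_elements, n_snapshots) complex data; steering: (n_elements,)."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R += loading * np.trace(R).real / n * np.eye(n)     # diagonal loading for robustness
    Ri_a = np.linalg.solve(R, steering)
    return Ri_a / (steering.conj() @ Ri_a)              # unit gain in the look direction

# usage: 16-element half-wavelength array, broadside look direction
n_el, n_snap = 16, 200
steer = np.exp(1j * np.pi * np.arange(n_el) * np.sin(np.deg2rad(0.0)))
data = np.random.randn(n_el, n_snap) + 1j * np.random.randn(n_el, n_snap)
w = mvdr_weights(data, steer)
output = w.conj() @ data                                # beamformed output samples
```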

  5. Real-time processing for Fourier domain optical coherence tomography using a field programmable gate array

    PubMed Central

    Ustun, Teoman E.; Iftimia, Nicusor V.; Ferguson, R. Daniel; Hammer, Daniel X.

    2008-01-01

    Real-time display of processed Fourier domain optical coherence tomography (FDOCT) images is important for applications that require instant feedback of image information, for example, systems developed for rapid screening or image-guided surgery. However, the computational requirements for high-speed FDOCT image processing usually exceed the capabilities of most computers, and therefore display rates rarely match acquisition rates for most devices. We have designed and developed an image processing system, including hardware based upon a field-programmable gate array, firmware, and software, that enables real-time display of processed images at rapid line rates. The system was designed to be extremely flexible and inserted in-line between any FDOCT detector and any Camera Link frame grabber. Two versions were developed for spectrometer-based and swept source-based FDOCT systems, the latter having an additional custom high-speed digitizer on the front end but using all the capabilities and features of the former. The system was tested in humans and monkeys using an adaptive optics retinal imager, in zebrafish using a dual-beam Doppler instrument, and in human tissue using a swept source microscope. A display frame rate of 27 fps for fully processed FDOCT images (1024 axial pixels×512 lateral A-scans) was achieved in the spectrometer-based systems. PMID:19045902

  6. Improving GPR Surveys Productivity by Array Technology and Fully Automated Processing

    NASA Astrophysics Data System (ADS)

    Morello, Marco; Ercoli, Emanuele; Mazzucchelli, Paolo; Cottino, Edoardo

    2016-04-01

    The realization of network infrastructures with lower environmental impact and the tendency to use digging technologies less invasive in terms of time and space of road occupation and restoration play a key role in the development of communication networks. However, pre-existing buried utilities must be detected and located in the subsurface to exploit the high productivity of modern digging apparatus. According to SUE quality level B+, both the position and the depth of subsurface utilities must be accurately estimated, demanding 3D GPR surveys. In fact, the advantages of 3D GPR acquisitions (obtained either by multiple 2D recordings or by an antenna array) versus 2D acquisitions are well known. Nonetheless, the amount of acquired data for such 3D acquisitions does not usually allow processing and interpretation to be completed directly in the field and in real time, thus limiting the overall efficiency of the GPR acquisition. As an example, the "low-impact mini-trench" technique (addressed in the ITU - International Telecommunication Union - L.83 recommendation) requires that non-destructive mapping of buried services enhance its productivity to match the improvements of new digging equipment. Nowadays multi-antenna and multi-pass GPR acquisitions demand new processing techniques that can obtain high-quality subsurface images, taking full advantage of 3D data: the development of a fully automated and real-time 3D GPR processing system plays a key role in overall optical network deployment profitability. Furthermore, currently available computing power suggests the feasibility of processing schemes that incorporate better focusing algorithms. A novel processing scheme, whose goal is the automated processing and detection of buried targets that can be applied in real-time to 3D GPR array systems, has been developed and fruitfully tested with two different GPR arrays (16 antennas, 900 MHz central frequency, and 34 antennas, 600 MHz central frequency). The proposed processing

  7. A laser-assisted process to produce patterned growth of vertically aligned nanowire arrays for monolithic microwave integrated devices

    NASA Astrophysics Data System (ADS)

    Van Kerckhoven, Vivien; Piraux, Luc; Huynen, Isabelle

    2016-06-01

    An experimental process for the fabrication of microwave devices made of nanowire arrays embedded in a dielectric template is presented. A pulse laser process is used to produce a patterned surface mask on alumina templates, defining precisely the wire growing areas during electroplating. This technique makes it possible to finely position multiple nanowire arrays in the template, as well as produce large areas and complex structures, combining transmission line sections with various nanowire heights. The efficiency of this process is demonstrated through the realisation of a microstrip electromagnetic band-gap filter and a substrate-integrated waveguide.

  8. A laser-assisted process to produce patterned growth of vertically aligned nanowire arrays for monolithic microwave integrated devices.

    PubMed

    Kerckhoven, Vivien Van; Piraux, Luc; Huynen, Isabelle

    2016-06-10

    An experimental process for the fabrication of microwave devices made of nanowire arrays embedded in a dielectric template is presented. A pulse laser process is used to produce a patterned surface mask on alumina templates, defining precisely the wire growing areas during electroplating. This technique makes it possible to finely position multiple nanowire arrays in the template, as well as produce large areas and complex structures, combining transmission line sections with various nanowire heights. The efficiency of this process is demonstrated through the realisation of a microstrip electromagnetic band-gap filter and a substrate-integrated waveguide. PMID:27138863

  9. A Field-Programmable Analog Array Development Platform for Vestibular Prosthesis Signal Processing

    PubMed Central

    Töreyin, Hakan; Bhatti, Pamela

    2015-01-01

    We report on a vestibular prosthesis signal processor realized using an experimental field programmable analog array (FPAA). Completing signal processing functions in the analog domain, the processor is designed to help replace a malfunctioning inner ear sensory organ, a semicircular canal. Relying on angular head motion detected by an inertial sensor, the signal processor maps angular velocity into meaningful control signals to drive a current stimulator. To demonstrate biphasic pulse control a 1 kΩ resistive load was placed across an H-bridge circuit. When connected to a 2.4 V supply, a biphasic current of 100 μA was maintained at stimulation frequencies from 50–350 Hz, pulsewidths from 25–400 μsec, and interphase gaps ranging from 25–250 μsec. PMID:23853331

  10. Study of the ICP etching process on InGaAs/InP array devices

    NASA Astrophysics Data System (ADS)

    Niu, Xiaochen; Deng, Jun; Shi, Yanli; Tian, Ying; Zou, Deshu

    2014-11-01

    The etching rates of large patterns and narrow grooves on InGaAs/InP materials differ greatly in inductively coupled plasma (ICP) etching. With the aim of a high etching rate, good morphology, smooth interfaces and fewer defects, the etching mechanisms of ICP have been analyzed by varying the gas flow rate, chamber pressure and RF power. Recipes have been found that achieve narrow, deep grooves with good uniformity, interfaces and morphology at a high etching rate and with good selectivity. The different phenomena observed when etching large patterns and narrow grooves are explained, and the sets of parameters adapted to array devices on InGaAs/InP materials in the ICP process are summarized.

  11. Statistical Analysis of the Performance of MDL Enumeration for Multiple-Missed Detection in Array Processing

    PubMed Central

    Du, Fei; Li, Yibo; Jin, Shijiu

    2015-01-01

    An accurate performance analysis on the MDL criterion for source enumeration in array processing is presented in this paper. The enumeration results of MDL can be predicted precisely by the proposed procedure via the statistical analysis of the sample eigenvalues, whose distributive properties are investigated with the consideration of their interactions. A novel approach is also developed for the performance evaluation when the source number is underestimated by a number greater than one, which is denoted as “multiple-missed detection”, and the probability of a specific underestimated source number can be estimated by ratio distribution analysis. Simulation results are included to demonstrate the superiority of the presented method over available results and confirm the ability of the proposed approach to perform multiple-missed detection analysis. PMID:26295232
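    The MDL enumeration rule itself (in its standard Wax-Kailath form), evaluated on the sample eigenvalues of the array covariance matrix, is reproduced in the sketch below; the paper's statistical performance analysis of this estimator is not. The simulated two-source scenario is illustrative.

```python
# Standard MDL source-enumeration rule evaluated on sample eigenvalues.
import numpy as np

def mdl_enumerate(eigvals, n_snapshots):
    """eigvals: sample eigenvalues of the array covariance (any order), length p."""
    lam = np.sort(np.asarray(eigvals))[::-1]
    p, N = len(lam), n_snapshots
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                        # the p-k smallest eigenvalues
        geo = np.exp(np.mean(np.log(tail)))   # geometric mean
        ari = np.mean(tail)                   # arithmetic mean
        mdl[k] = -N * (p - k) * np.log(geo / ari) + 0.5 * k * (2 * p - k) * np.log(N)
    return int(np.argmin(mdl))                # estimated number of sources

# usage: two sources impinging on an 8-element array, 200 snapshots
rng = np.random.default_rng(0)
p, N = 8, 200
a1 = np.exp(1j * np.pi * np.arange(p) * np.sin(np.deg2rad(10)))
a2 = np.exp(1j * np.pi * np.arange(p) * np.sin(np.deg2rad(-20)))
X = (np.outer(a1, rng.normal(size=N)) + np.outer(a2, rng.normal(size=N))
     + 0.1 * (rng.normal(size=(p, N)) + 1j * rng.normal(size=(p, N))))
R = X @ X.conj().T / N
print(mdl_enumerate(np.linalg.eigvalsh(R), N))   # should report 2 sources
```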

  12. Enhanced Processing for a Towed Array Using an Optimal Noise Canceling Approach

    SciTech Connect

    Sullivan, E J; Candy, J V

    2005-07-21

    Noise self-generated by a surface ship towing an array in search of a weak target presents a major problem for the signal processing, especially if broadband techniques are being employed. In this paper we discuss the development and application of an adaptive noise canceling processor capable of extracting a weak far-field acoustic target in a noisy ocean acoustic environment. The fundamental idea for this processor is to use a model-based approach incorporating both target and ship noise. Here we briefly describe the underlying theory and then demonstrate through simulation how effectively the canceller and target enhancer perform. The adaptivity of the processor not only enables the "tracking" of the canceller coefficients, but also the estimation of target parameters for localization. This approach, termed "joint" cancellation and enhancement, produces the optimal estimate of both in a minimum (error) variance sense.
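    For comparison only, a generic LMS adaptive noise canceller is sketched below; the paper's processor is model-based rather than LMS, but the cancellation idea, predicting the self-noise from a reference channel and subtracting it from the hydrophone channel, is the same. The coupling path, noise levels and target tone are assumed.

```python
# Generic LMS adaptive noise canceller: estimate the ship self-noise seen on
# the primary channel from a reference channel and subtract it, leaving the
# weak target component. All signals and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n, taps, mu = 20000, 32, 1e-3
reference = rng.normal(size=n)                           # ship self-noise reference
h = [0.8, 0.3, -0.2]                                     # assumed coupling path
coupled_noise = np.convolve(reference, h)[:n]
target = 0.05 * np.sin(2 * np.pi * 0.01 * np.arange(n))  # weak target tone
primary = coupled_noise + target                         # hydrophone channel

w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    x = reference[k - taps + 1:k + 1][::-1]              # most recent reference samples
    e = primary[k] - w @ x                               # canceller output (target + residual)
    w += mu * e * x                                      # LMS weight update
    out[k] = e

print(np.var(primary[-5000:]), np.var(out[-5000:]))      # residual power drops sharply
```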

  13. Alternative Post-Processing on a CMOS Chip to Fabricate a Planar Microelectrode Array

    PubMed Central

    López-Huerta, Francisco; Herrera-May, Agustín L.; Estrada-López, Johan J.; Zuñiga-Islas, Carlos; Cervantes-Sanchez, Blanca; Soto, Enrique; Soto-Cruz, Blanca S.

    2011-01-01

    We present an alternative post-processing on a CMOS chip to release a planar microelectrode array (pMEA) integrated with its signal readout circuit, which can be used for monitoring the neuronal activity of vestibular ganglion neurons in newborn Wistar strain rats. The chip is fabricated through a 0.6 μm standard CMOS process and includes a 12-electrode pMEA arranged in a 4 × 3 matrix. The alternative CMOS post-process includes the development of masks to protect the readout circuit and the power supply pads. A wet etching process eliminates the aluminum located on the surface of the p+-type silicon. This silicon is used as the transducer for recording the neuronal activity and as the interface between the readout circuit and the neurons. The readout circuit is composed of an amplifier and a tunable bandpass filter, which is placed on a 0.015 mm2 silicon area. The tunable bandpass filter has a bandwidth of 98 kHz and a common mode rejection ratio (CMRR) of 87 dB. These characteristics of the readout circuit are appropriate for neuronal recording applications. PMID:22346681

  14. Remote online process measurements by a fiber optic diode array spectrometer

    SciTech Connect

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1986-01-01

    The development of remote online monitors for radioactive process streams is an active research area at the Savannah River Laboratory (SRL). A remote offline spectrophotometric measurement system has been developed and used at the Savannah River Plant (SRP) for the past year to determine the plutonium concentration of process solution samples. The system consists of a commercial diode array spectrophotometer modified with fiber optic cables that allow the instrument to be located remotely from the measurement cell. Recently, a fiber optic multiplexer has been developed for this instrument, which allows online monitoring of five locations sequentially. The multiplexer uses a motorized micrometer to drive one of five sets of optical fibers into the optical path of the instrument. A sixth optical fiber is used as an external reference and eliminates the need to flush out process lines to re-reference the spectrophotometer. The fiber optic multiplexer has been installed in a process prototype facility to monitor uranium loading and breakthrough of ion exchange columns. The design of the fiber optic multiplexer is discussed and data from the prototype facility are presented to demonstrate the capabilities of the measurement system.

  15. Multisensor optimal information fusion input white noise deconvolution estimators.

    PubMed

    Sun, Shuli

    2004-08-01

    The unified multisensor optimal information fusion criterion weighted by matrices is rederived in the linear minimum variance sense, where the assumption of normal distribution is avoided. Based on this fusion criterion, the optimal information fusion input white noise deconvolution estimators are presented for discrete time-varying linear stochastic control system with multiple sensors and correlated noises, which can be applied to seismic data processing in oil exploration. A three-layer fusion structure with fault tolerant property and reliability is given. The first fusion layer and the second fusion layer both have netted parallel structures to determine the first-step prediction error cross-covariance for the state and the estimation error cross-covariance for the input white noise between any two sensors at each time step, respectively. The third fusion layer is the fusion center to determine the optimal matrix weights and obtain the optimal fusion input white noise estimators. The simulation results for Bernoulli-Gaussian input white noise deconvolution estimators show the effectiveness. PMID:15462453
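    A worked two-sensor instance of matrix-weighted fusion in the linear minimum-variance sense (Bar-Shalom-Campo form) is sketched below as an example of the kind of optimal matrix-weight criterion the abstract refers to; the covariances and cross-covariance are illustrative numbers, not values from the paper.

```python
# Matrix-weighted fusion of two correlated, unbiased estimates of the same
# state (two-sensor special case). Numbers are illustrative only.
import numpy as np

def fuse_two(x1, P1, x2, P2, P12):
    """x1, x2: estimates; P1, P2: their covariances; P12: cross-covariance."""
    P21 = P12.T
    K = (P1 - P12) @ np.linalg.inv(P1 + P2 - P12 - P21)  # optimal matrix weight
    x = x1 + K @ (x2 - x1)                                # fused estimate
    P = P1 - K @ (P1 - P21)                               # fused covariance
    return x, P

x1, P1 = np.array([1.0, 0.0]), np.diag([2.0, 1.0])
x2, P2 = np.array([1.4, -0.2]), np.diag([1.0, 3.0])
P12 = 0.2 * np.eye(2)                                     # correlated estimation errors
x_f, P_f = fuse_two(x1, P1, x2, P2, P12)
print(x_f, np.trace(P_f))      # fused trace is below both individual traces (3 and 4)
```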

  16. Multisensor based robotic manipulation in an uncalibrated manufacturing workcell

    SciTech Connect

    Ghosh, B.K.; Xiao, Di; Xi, Ning; Tarn, Tzyh-Jong

    1997-12-31

    The main problem that we address in this paper is how a robot manipulator can track and grasp a part placed arbitrarily on a moving disc conveyor, aided by a single CCD camera and fusing information from encoders placed on the conveyor and on the robot manipulator. The important assumption that distinguishes our work from what has been previously reported in the literature is that the position and orientation of the camera relative to the base frame of the robot are a priori assumed to be unknown and are 'visually calibrated' during the operation of the manipulator. Moreover, the part placed on the conveyor is assumed to be non-planar, i.e., the feature points observed on the part are assumed to be located arbitrarily in ℝ³. The novelties of the proposed approach include (i) a multisensor fusion scheme based on complementary data for the purpose of part localization, and (ii) self-calibration between the turntable and the robot manipulator using visual data and feature points on the end-effector. The principal advantages of the proposed scheme are the following. (i) It makes it possible to reconfigure a manufacturing workcell without recalibrating the relation between the turntable and the robot. This significantly shortens the setup time of the workcell. (ii) It greatly weakens the requirement on the image processing speed.

  17. Airborne multisensor system for the autonomous detection of land mines

    NASA Astrophysics Data System (ADS)

    Scheerer, Klaus

    1997-07-01

    A concept for a modular multisensor system for use on an airborne platform is presented. The sensor system comprises two high-resolution IR sensors working in the mid- and far-IR spectral regions, an RGB video camera with its sensitivity extended to the near IR in connection with a laser illuminator, and a radar with a spatial resolution adapted to the expected mine sizes. The sensor concept emerged from the evaluation of comprehensive static and airborne measurements on numerous buried and unburied mines. The measurements were performed on single mines and on minefields laid out according to military requirements. The system has an on-board real-time image processing capability and is intended to operate autonomously with a data link to a mobile ground station. Data from a navigation unit serve to transform the locations of identified mines into a geodetic coordinate system. The system will be integrated into a cylindrical structure of about 40 cm diameter. This may be a drone or simply a tube that can be mounted on any carrier whatsoever. The realization of a simplified demonstrator for captive flight tests is planned by 1998.

  18. Adaptive multi-sensor integration for mine detection

    SciTech Connect

    Baker, J.E.

    1997-05-01

    State-of-the-art multi-sensor integration (MSI) application involves extensive research and development time to understand and characterize the application domain; to determine and define the appropriate sensor suite; to analyze, characterize, and calibrate the individual sensor systems; to recognize and accommodate the various sensor interactions; and to develop and optimize robust merging code. Much of this process can benefit from adaptive learning, i.e., an outcome-based system can take raw sensor data and desired merged results as input and adaptively develop an effective method of interpretation and merging. This approach significantly reduces the time required to apply MSI to a given application, increases the quality of the final result, and provides a quantitative measure for comparing competing MSI techniques and sensor suites. The ability to automatically develop and optimize MSI techniques for new sensor suites and operating environments makes this approach well suited to the detection of mines and mine-like targets. Perhaps more than any other, this application domain is characterized by diverse, innovative, and dynamic sensor suites, whose nature and interactions are not yet well established. This paper presents such an outcome-based multi-image analysis system, together with an empirical evaluation of its performance and of its application, sensor, and domain robustness.
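
    The outcome-based idea can be illustrated, in a deliberately simplified form, by learning linear per-sensor fusion weights from co-registered sensor data and a desired merged result. The function names below are ours, and the least-squares merger is only a stand-in for the adaptive learning the paper actually uses.

        # Toy illustration: learn linear fusion weights from examples instead of
        # hand-tuning merging code (a stand-in for the adaptive approach).
        import numpy as np

        def learn_fusion_weights(sensor_stack, desired):
            """sensor_stack: (n_sensors, n_pixels) co-registered sensor values.
            desired: (n_pixels,) target merged map supplied by the analyst."""
            A = np.vstack([sensor_stack, np.ones(sensor_stack.shape[1])]).T
            w, *_ = np.linalg.lstsq(A, desired, rcond=None)
            return w                       # last entry is a bias term

        def apply_fusion(sensor_stack, w):
            A = np.vstack([sensor_stack, np.ones(sensor_stack.shape[1])]).T
            return A @ w                   # merged output for new scenes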

  19. Information Space Receding Horizon Control for Multisensor Tasking Problems.

    PubMed

    Sunberg, Zachary; Chakravorty, Suman; Erwin, Richard Scott

    2016-06-01

    In this paper, we present a receding horizon solution to the problem of optimal scheduling for multiple sensors monitoring a group of dynamical targets. The term target is used here in the classic sense of being the object that is being sensed or observed by the sensors. This problem is motivated by the space situational awareness (SSA) problem. The multisensor optimal scheduling problem can be posed as a multiagent Markov decision process on the information space which has a dynamic programming (DP) solution. We present a simulation-based stochastic optimization technique that exploits the structure inherent in the problem to obtain variance reduction along with a distributed solution. This stochastic optimization technique is combined with a receding horizon approach which uses online solution of the control problems to obviate the need to solve the computationally intractable multiagent information space DP problem and hence, makes the technique computationally tractable. The technique is tested on a moderate scale SSA example which is nonetheless computationally intractable for existing solution techniques. PMID:26259208

  20. Information fusion and uncertainty management for biological multisensor systems

    NASA Astrophysics Data System (ADS)

    Braun, Jerome J.; Glina, Yan; Stein, David W.; Skomoroch, Peter N.; Fox, Emily B.

    2005-03-01

    This paper investigates methods of decision-making from uncertain and disparate data. The need for such methods arises in those sensing application areas in which multiple and diverse sensing modalities are available, but the information provided can be imprecise or only indirectly related to the effects to be discerned. Biological sensing for biodefense is an important instance of such applications. Information fusion in that context is the focus of a research program now underway at MIT Lincoln Laboratory. The paper outlines a multi-level, multi-classifier recognition architecture developed within this program, and discusses its components. Information source uncertainty is quantified and exploited for improving the quality of data that constitute the input to the classification processes. Several methods of sensor uncertainty exploitation at the feature-level are proposed and their efficacy is investigated. Other aspects of the program are discussed as well. While the primary focus of the paper is on biodefense, the applicability of concepts and techniques presented here extends to other multisensor fusion application domains.

  1. Multi-Sensor Aerosol Products Sampling System

    NASA Technical Reports Server (NTRS)

    Petrenko, M.; Ichoku, C.; Leptoukh, G.

    2011-01-01

    Global and local properties of atmospheric aerosols have been extensively observed and measured using both spaceborne and ground-based instruments, especially during the last decade. Unique properties retrieved by the different instruments contribute to an unprecedented availability of the most complete set of complementary aerosol measurements ever acquired. However, some of these measurements remain underutilized, largely due to the complexities involved in analyzing them synergistically. To characterize the inconsistencies and bridge the gap that exists between the sensors, we have established a Multi-sensor Aerosol Products Sampling System (MAPSS), which consistently samples and generates the spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of aerosol products from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS. Samples of satellite aerosol products are extracted over Aerosol Robotic Network (AERONET) locations as well as over other locations of interest such as those with available ground-based aerosol observations. In this way, MAPSS enables a direct cross-characterization and data integration between Level-2 aerosol observations from multiple sensors. In addition, the available well-characterized co-located ground-based data provide the basis for the integrated validation of these products. This paper explains the sampling methodology and concepts used in MAPSS, and demonstrates specific examples of using MAPSS for an integrated analysis of multiple aerosol products.
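
    A minimal sketch of the kind of spatial sampling MAPSS performs is given below (our own simplification, not the MAPSS code): retrievals falling within a fixed radius of a ground site are collected and summarized by their mean, standard deviation, and a simple distance correlation.

        # Illustrative spatial sampling of a Level-2 aerosol product around a
        # ground site; the radius and statistics are simplified for the example.
        import numpy as np

        def sample_statistics(lat, lon, aod, site_lat, site_lon, radius_km=27.5):
            km_per_deg = 111.0
            dx = (lon - site_lon) * km_per_deg * np.cos(np.radians(site_lat))
            dy = (lat - site_lat) * km_per_deg
            dist = np.hypot(dx, dy)
            sel = (dist <= radius_km) & np.isfinite(aod)
            if sel.sum() < 2:
                return None                      # not enough retrievals to sample
            vals = aod[sel]
            return {"n": int(sel.sum()),
                    "mean": float(vals.mean()),
                    "std": float(vals.std(ddof=1)),
                    # crude proxy for the rate of spatial variation
                    "dist_corr": float(np.corrcoef(dist[sel], vals)[0, 1])}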

  2. Weighted measurement fusion Kalman estimator for multisensor descriptor system

    NASA Astrophysics Data System (ADS)

    Dou, Yinfeng; Ran, Chenjian; Gao, Yuan

    2016-08-01

    For the multisensor linear stochastic descriptor system with correlated measurement noises, the fused measurement can be obtained based on the weighted least squares (WLS) method, and the reduced-order state components are obtained by applying the singular value decomposition method. The multisensor descriptor system is thus transformed into a fused reduced-order non-descriptor system with correlated noise, and the weighted measurement fusion (WMF) Kalman estimator of this reduced-order subsystem is presented. From the relationship between the presented non-descriptor system and the original descriptor system, the WMF Kalman estimator and its estimation error variance matrix for the original multisensor descriptor system are obtained. The presented WMF Kalman estimator has global optimality and, compared with the state fusion method, avoids computing the cross-covariances of the local Kalman estimators. A simulation example of a three-sensor stochastic dynamic input-output system in economics verifies its effectiveness.
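
    The WLS measurement-fusion step can be sketched as follows (our own illustration, assuming each sensor observes the state through a known matrix H_i and the stacked, possibly correlated noise covariance R is available); the fused pseudo-measurement then drives a single reduced-order Kalman filter.

        # Weighted-least-squares fusion of multisensor measurements into one
        # equivalent measurement (illustrative; notation is ours).
        import numpy as np

        def wls_fused_measurement(z_list, H_list, R):
            """z_list: list of (m_i,) measurements; H_list: list of (m_i, n)
            measurement matrices; R: joint covariance of the stacked noise."""
            z = np.concatenate(z_list)
            H = np.vstack(H_list)
            W = np.linalg.inv(R)
            R_f = np.linalg.inv(H.T @ W @ H)   # covariance of the fused measurement
            z_f = R_f @ H.T @ W @ z            # fused pseudo-measurement of the state
            return z_f, R_f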

  3. Investigation on fabrication process of dissolving microneedle arrays to improve effective needle drug distribution.

    PubMed

    Wang, Qingqing; Yao, Gangtao; Dong, Pin; Gong, Zihua; Li, Ge; Zhang, Kejian; Wu, Chuanbin

    2015-01-23

    The dissolving microneedle array (DMNA) offers a novel potential approach for transdermal delivery of biological macromolecular drugs and vaccines, because it can be as efficient as hypodermic injection and as safe and patient-compliant as conventional transdermal delivery. However, effective needle drug distribution is the main challenge for clinical application of DMNA. This study focused on the mechanism and control of drug diffusion inside DMNA during the fabrication process in order to improve the drug delivery efficiency. The needle drug loading proportion (NDP) in DMNAs was measured to determine the influences of drug concentration gradient, needle drying step, excipients, and solvent of the base solution on drug diffusion and distribution. The results showed that the evaporation of the base solvent was the key factor determining NDP. Slow evaporation of water from the base led to a gradual increase in viscosity, and an approximate drug concentration equilibrium was established between the needle and base portions, resulting in an NDP as low as about 6%. When highly volatile ethanol was used as the base solvent, the viscosity in the base rose quickly, resulting in an NDP of more than 90%. Ethanol as the base solvent did not impact the insertion capability of DMNAs, but greatly increased the in vitro drug release and transdermal delivery from DMNAs. Furthermore, the drug diffusion process during DMNA fabrication was thoroughly investigated for the first time, and the outcomes can be applied to most two-step molding processes and to optimization of DMNA fabrication. PMID:25446513

  4. Optical characteristics of a PbS detector array spectrograph for online process monitoring

    NASA Astrophysics Data System (ADS)

    Kansakoski, Markku; Malinen, Jouko

    1999-02-01

    The use of optical spectroscopic methods for quantitative composition measurements in the field of process control is increasing rapidly. Various optical configurations are already in use or are being developed, with the aim of accomplishing the wavelength selectivity needed in spectroscopic measurement. The development of compact and rugged spectrometers for process monitoring applications has been one of the major tasks for the optical measurements research team at VTT Electronics. A new PbS detector array-based spectrometer unit has now been developed for use in process analyzers, providing 24 wavelengths ranging from 1350 to 2400 nm. Extensive testing has been carried out to examine the performance of the developed units in normal operating conditions, their characteristics vs. temperature, unit-to-unit variation, and preliminary environmental behaviour. The main performance characteristics of the developed spectrometer unit include stable output, band center wavelength (CW) unit-to-unit tracking better than +/- 1 nm, band CW drift vs. operating temperature of less than 1.8 nm over the temperature range +10 °C to +50 °C, and optical stray light below 0.1 percent. The combination of technical performance, small size, rugged construction, and potential for a medium manufacturing cost ($4000-5000 in quantities) makes the developed unit a promising alternative for developing competitive high-performance analyzers for various NIR applications.

  5. Design and analysis of a multi-sensor deformation detection system

    NASA Astrophysics Data System (ADS)

    Szostak-Chrzanowski, Anna; Chrzanowski, Adam; Deng, Nianwu; Bazanowski, Maciej

    2008-12-01

    Development of new technologies for monitoring structural and ground deformations puts new demands on the design and analysis of the multi-sensor systems. Design and analysis of monitoring schemes require a good understanding of the physical process that leads to deformation. Deterministic modelling of the load-deformation relationship provides information on the magnitude and location of expected critical deformations as well as delineates the deformation zone. By combining results of deterministic modelling with geometrical analysis, one can find the deformation mechanism and explain the cause of deformation in case of irregular behaviour of the investigated object. The concept is illustrated by four practical examples.

  6. Planetary rover navigation: improving visual odometry via additional images and multisensor fusion

    NASA Astrophysics Data System (ADS)

    Casalino, G.; Zereik, E.; Simetti, E.; Turetta, A.; Torelli, S.; Sperindé, A.

    2013-12-01

    Visual odometry (VO) is very important for a mobile robot, above all in a planetary scenario, to accurately estimate the motion undergone by the rover. The present work deals with the possibility of improving a previously developed VO technique by means of additional image processing, together with suitable mechanisms such as classical Extended/Iterated Kalman Filtering and Sequence Estimators. The possible employment of both techniques is addressed and, consequently, a better-behaved integration scheme is proposed. Moreover, the possibility of exploiting other localization sensors is also investigated, leading to a final multisensor scheme.

  7. Reduction of mine suspected areas by multisensor airborne measurements: first results

    NASA Astrophysics Data System (ADS)

    Keller, Martin; Milisavljevic, Nada; Suess, Helmut; Acheroy, Marc P. J.

    2002-08-01

    Humanitarian demining is very dangerous, cost- and time-intensive work, where a lot of effort is usually wasted in inspecting suspected areas that turn out to be mine-free. The main goal of the project SMART (Space and airborne Mined Area Reduction Tools) is to apply a multisensor approach to the collection of corresponding signature data and to develop adapted data understanding and data processing tools for improving the efficiency and reliability of level 1 minefield surveys by reducing suspected mined areas. As a result, the time for releasing mine-free areas for civilian use should be shortened. In this paper, multisensor signature data collected at four mine suspected areas in different parts of Croatia are presented, their information content is discussed, and first results are described. The multisensor system consists of a multifrequency multipolarization SAR system (DLR Experimental Synthetic Aperture Radar E-SAR), an optical scanner (Daedalus) and a camera (RMK) for color infrared aerial views. E-SAR data were acquired in X-, C-, L- and P-bands, the latter two being fully polarimetric interferometric. This provides pieces of independent information, ranging from high spatial resolution (X-band) to very good penetration abilities (P-band), together with possibilities for polarimetric and interferometric analysis. The Daedalus scanner, with 12 channels between visible and thermal infrared, has a very high spatial resolution. For each of the sensors, the applied processing, geocoding and registration is described. The information content is analyzed in terms of the capability and reliability in describing conditions inside suspected mined areas, as a first step towards identifying their mine-free parts, with special emphasis set on polarimetric and interferometric information.

  8. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    NASA Astrophysics Data System (ADS)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of a GaNxP1-x/GaNyP1-y core-shell nanowire solar cell (NWSC) is expected to reach as high as 44% with 1% N in the core and 9% N in the shell. By developing such high-efficiency NWSCs on silicon substrates, the cost of solar photovoltaics could be further reduced to $61/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. Therefore, a suitable NWSC structure and fabrication process need to be developed to achieve this promising NWSC. This thesis is devoted to the development of the fabrication process of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of gallium nitride phosphide nanowire solar cells. The designs for the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer and metal contacts are discussed and fabricated. The properties of these NWSCs are also characterized to point out directions for the future development of gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective area growth of nanowires to improve the structure of the core-shell NWSC. The fabrication process of the nano-hole templates and the results are presented. To obtain consistent features of the nano-hole template, the Taguchi Method is used to optimize the template fabrication process.

  9. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  10. Investigation of Proposed Process Sequence for the Array Automated Assembly Task, Phase 2. [low cost silicon solar array fabrication

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Bunyan, S.; Pepe, A.

    1979-01-01

    The technological readiness of the proposed process sequence was reviewed. Process steps evaluated include: (1) plasma etching to establish a standard surface; (2) forming junctions by diffusion from an N-type polymeric spray-on source; (3) forming a p+ back contact by firing a screen printed aluminum paste; (4) forming screen printed front contacts after cleaning the back aluminum and removing the diffusion oxide; (5) cleaning the junction by a laser scribe operation; (6) forming an antireflection coating by baking a polymeric spray-on film; (7) ultrasonically tin padding the cells; and (8) assembling cell strings into solar circuits using ethylene vinyl acetate as an encapsulant and laminating medium.

  11. Biologically inspired large scale chemical sensor arrays and embedded data processing

    NASA Astrophysics Data System (ADS)

    Marco, S.; Gutiérrez-Gálvez, A.; Lansner, A.; Martinez, D.; Rospars, J. P.; Beccherelli, R.; Perera, A.; Pearce, T.; Vershure, P.; Persaud, K.

    2013-05-01

    Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy and an efficient combinatorial coding with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. The EU-funded project NEUROCHEM (Bio-ICT-FET-216916) has developed novel computing paradigms and biologically motivated artefacts for chemical sensing, taking inspiration from the biological olfactory pathway. To demonstrate this approach, a biomimetic demonstrator has been built featuring a large-scale sensor array (65K elements) in conducting polymer technology mimicking the olfactory receptor neuron layer, and abstracted biomimetic algorithms have been implemented in an embedded system that interfaces the chemical sensors. The embedded system integrates computational models of the main anatomic building blocks in the olfactory pathway: the olfactory bulb and olfactory cortex in vertebrates (alternatively, the antennal lobe and mushroom bodies in insects). For implementation in the embedded processor, an abstraction phase has been carried out in which their processing capabilities are captured by algorithmic solutions. Finally, the algorithmic models are tested with an odour robot with navigation capabilities in mixed chemical plumes.

  12. Advancements in fabrication process of microelectrode array for a retinal prosthesis using Liquid Crystal Polymer (LCP).

    PubMed

    Jeong, Joonsoo; Shin, Soowon; Lee, Geun Jae; Gwon, Tae Mok; Park, Jeong Hoan; Kim, Sung June

    2013-01-01

    Liquid Crystal Polymer (LCP) has been considered as an alternative biomaterial for implantable biomedical devices, primarily for its low moisture absorption rate compared with conventional polymers such as polyimide, parylene and silicone elastomers. A novel retinal prosthetic device based on monolithic encapsulation in LCP is being developed, in which the entire neural stimulation circuitry is integrated into a thin and eye-conformable structure. Micromachining techniques for fabrication of an LCP retinal electrode array have been previously reported. In this research, however, for use as part of the LCP-based retinal implant, we developed an advanced fabrication process for the LCP retinal electrode through new approaches such as electroplating and laser machining, in order to achieve higher mechanical robustness, long-term reliability and flexibility. Thickened metal tracks can contribute to higher mechanical strength as well as higher long-term reliability when combined with the laser-ablation process by allowing high-pressure lamination. The laser-thinning technique can improve the flexibility of the LCP electrode. PMID:24110931

  13. Subspace array processing using spatial time-frequency distributions: applications for denoising structural echoes of elastic targets.

    PubMed

    Sabra, Karim G; Anderson, Shaun D

    2014-05-01

    Structural echoes of underwater elastic targets, used for detection and classification purposes, can be highly localized in the time-frequency domain and can be aspect-dependent. Hence such structural echoes recorded along a distributed (synthetic) aperture, e.g., using a moving receiver platform, would not meet the stationarity and multiple-snapshot requirements of common subspace array processing methods used for denoising array data based on their estimated covariance matrix. To address this issue, this article introduces a subspace array processing method based on the space-time-frequency distribution (STFD) of single snapshots of non-stationary signals. This STFD is obtained by computing Cohen's class time-frequency distributions between all pairwise combinations of the recorded signals along an arbitrary-aperture array. The STFD is interpreted as a generalized array covariance matrix which automatically accounts for the inherent coherence across the time-frequency plane of the received non-stationary echoes emanating from the same target. Hence, identifying the signal subspace from the eigenstructure of this STFD provides a means for denoising these non-stationary structural echoes by spreading the clutter and noise power in the time-frequency domain, as demonstrated here numerically and experimentally using the structural echoes of a thin steel spherical shell measured along a synthetic aperture. PMID:24815264
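
    The construction of the generalized covariance can be sketched as below. This is our own simplification: the spectrogram (a member of Cohen's class) stands in for the cross time-frequency distributions, high-energy time-frequency points approximate the auto-term regions, and the dominant eigenvectors of the averaged STFD give the signal subspace used for denoising.

        # Hedged sketch of an STFD-based signal subspace for single-snapshot,
        # non-stationary array data (function and variable names are ours).
        import numpy as np
        from scipy.signal import stft

        def stfd_signal_subspace(X, fs, n_sources, energy_quantile=0.95):
            """X: (n_channels, n_samples) recordings along the (synthetic) aperture."""
            _, _, Z = stft(X, fs=fs, nperseg=128)        # (n_ch, n_freq, n_time)
            power = np.sum(np.abs(Z) ** 2, axis=0)
            mask = power >= np.quantile(power, energy_quantile)
            Zs = Z[:, mask]                              # high-energy TF points only
            D = (Zs @ Zs.conj().T) / Zs.shape[1]         # generalized array covariance
            w, V = np.linalg.eigh(D)
            Us = V[:, np.argsort(w)[::-1][:n_sources]]   # signal subspace
            X_denoised = np.real(Us @ (Us.conj().T @ X)) # project channels onto it
            return X_denoised, Us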

  14. Near real-time, on-the-move multisensor integration and computing framework

    NASA Astrophysics Data System (ADS)

    Burnette, Chris; Schneider, Matt; Agarwal, Sanjeev; Deterline, Diane; Geyer, Chris; Phan, Chung D.; Lydic, Richard M.; Green, Kevin; Swett, Bruce

    2015-05-01

    Implanted mines and improvised devices are a persistent threat to Warfighters. Current Army countermine missions for route clearance need on-the-move standoff detection to improve the rate of advance. Vehicle-based forward looking sensors such as electro-optical and infrared (EO/IR) devices can be used to identify potential threats in near real-time (NRT) at safe standoff distance to support route clearance missions. The MOVERS (Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System) is a vehicle-based multi-sensor integration and exploitation system that ingests and processes video and imagery data captured from forward-looking EO/IR and thermal sensors, and also generates target/feature alerts, using the Video Processing and Exploitation Framework (VPEF) "plug and play" video processing toolset. The MOVERS Framework provides an extensible, flexible, and scalable computing and multi-sensor integration GOTS framework that enables the capability to add more vehicles, sensors, processors or displays, and a service architecture that provides low-latency raw video and metadata streams as well as a command and control interface. Functionality in the framework is exposed through the MOVERS SDK which decouples the implementation of the service and client from the specific communication protocols.

  15. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming and conventional beamforming methods in order to better understand their advantages and features for studying earthquake rupture processes. We use the Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space but with waveforms completely overlapping in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding-window scheme. Based on our tests, we find that CS, which is developed from the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake
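
    For reference, two of the classical spectra being compared can be computed as below on synthetic array data (our own illustration with a uniform linear array); the narrower MVDR peak relative to conventional beamforming reflects the resolution differences discussed above.

        # Conventional vs. MVDR beamforming power spectra on synthetic data.
        import numpy as np

        def steering(n_sensors, spacing_wavelengths, theta_deg):
            phase = 2 * np.pi * spacing_wavelengths * np.arange(n_sensors)
            return np.exp(1j * phase * np.sin(np.radians(theta_deg)))

        rng = np.random.default_rng(0)
        N, d, T = 12, 0.5, 200
        A = np.column_stack([steering(N, d, a) for a in (-5.0, 8.0)])
        S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
        noise = 0.3 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
        X = A @ S + noise
        R = X @ X.conj().T / T
        R_inv = np.linalg.inv(R)

        scan = np.linspace(-30, 30, 601)
        p_conv = [np.real(steering(N, d, t).conj() @ R @ steering(N, d, t)) / N**2
                  for t in scan]
        p_mvdr = [1.0 / np.real(steering(N, d, t).conj() @ R_inv @ steering(N, d, t))
                  for t in scan]
        # The MVDR spectrum shows two sharp peaks near -5 and 8 degrees.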

  16. Planarized process for resonant leaky-wave coupled phase-locked arrays of mid-IR quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Chang, C.-C.; Kirch, J. D.; Boyle, C.; Sigler, C.; Mawst, L. J.; Botez, D.; Zutter, B.; Buelow, P.; Schulte, K.; Kuech, T.; Earles, T.

    2015-03-01

    On-chip resonant leaky-wave coupling of quantum cascade lasers (QCLs) emitting at 8.36 μm has been realized by selective regrowth of interelement layers in curved trenches, defined by dry and wet etching. The fabricated structure provides large index steps (Δn = 0.10) between antiguided-array element and interelement regions. In-phase-mode operation to 5.5 W front-facet emitted power in a near-diffraction-limited far-field beam pattern, with 4.5 W in the main lobe, is demonstrated. A refined fabrication process has been developed to produce phase-locked antiguided arrays of QCLs with planar geometry. The main fabrication steps in this process include non-selective regrowth of Fe:InP in interelement trenches defined by inductively coupled plasma (ICP) etching, a chemical polishing (CP) step to planarize the surface, non-selective regrowth of interelement layers, ICP selective etching of interelement layers, and non-selective regrowth of an InP cladding layer followed by another CP step to form the element regions. This new process results in planar InGaAs/InP interelement regions, which allows for significantly improved control over the array geometry and the dimensions of the element and interelement regions. Such a planar process is highly desirable for realizing shorter-wavelength (4.6 μm) arrays, where fabrication tolerances for single-mode operation are tighter than for 8 μm-emitting devices.

  17. CMOS array of photodiodes with electronic processing for 3D optical reconstruction

    NASA Astrophysics Data System (ADS)

    Hornero, Gemma; Montane, Enric; Chapinal, Genis; Moreno, Mauricio; Herms, Atila

    2001-04-01

    It is well known that laser time-of-flight (TOF) and optical triangulation are the most useful optical techniques for distance measurements. The first is more suitable for large distances, since for short ranges high modulation frequencies of the laser diodes (about 200-500 MHz) are needed. For these ranges, optical triangulation is simpler, as it is only necessary to read the projection of the laser point over a linear optical sensor without any laser modulation. Laser triangulation is based on the rotation of the object. This motion shifts the projected point over the linear sensor, yielding 3D information by means of a full readout of the linear sensor at each angular position. On the other hand, a hybrid method of triangulation and TOF can be implemented. In this case, a synchronized scanning of a laser beam over the object results in different arrival times of light at each pixel. The 3D information is carried by these delays, and only a single readout of the linear sensor is needed. In this work we present the design of two different linear arrays of photodiodes in CMOS technology, the first one based on optical triangulation and the second one based on this hybrid method (TFO). In contrast to PSDs (Position Sensitive Devices) and CCDs, CMOS technology can include, on the same chip, photodiodes and control and processing electronics that in the other cases would have to be implemented with external microcontrollers.
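
    The triangulation geometry the sensor relies on reduces to a simple mapping from spot position to range; the sketch below uses made-up optical parameters purely to illustrate it.

        # Laser triangulation: spot displacement x on the linear sensor maps to
        # object distance z = f * b / x (parameter values are illustrative).
        f_mm = 16.0            # lens focal length
        b_mm = 50.0            # laser-to-lens baseline
        pixel_pitch_mm = 0.01  # linear-sensor pixel pitch

        def distance_from_pixel(pixel_index, center_index=0):
            x_mm = (pixel_index - center_index) * pixel_pitch_mm
            if x_mm <= 0:
                raise ValueError("spot must fall on the measuring side of the axis")
            return f_mm * b_mm / x_mm      # range along the optical axis, in mm

        # A spot 40 pixels off axis (0.4 mm) corresponds to a 2000 mm range.
        assert abs(distance_from_pixel(40) - 2000.0) < 1e-9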

  18. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
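
    A software analogue of the work-farm pattern described above is sketched below in Python (purely illustrative; the MPPA itself programs such farms as concurrently running objects connected by self-synchronizing channels): a pool of identical workers consumes one input stream and produces one ordered output stream.

        # Work-farm sketch: parallel workers, one input stream, one output stream.
        from concurrent.futures import ProcessPoolExecutor

        def worker(block):
            # stand-in for a per-block kernel (e.g., a filter or transform stage)
            return sum(x * x for x in block)

        def work_farm(blocks, n_workers=4):
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                # map() preserves input order, mirroring the single output stream
                return list(pool.map(worker, blocks))

        if __name__ == "__main__":
            data = [list(range(i, i + 64)) for i in range(0, 1024, 64)]
            print(work_farm(data)[:3])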

  19. Array processing for RFID tag localization exploiting multi-frequency signals

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Li, Xin; Amin, Moeness G.

    2009-05-01

    RFID is an increasingly valuable business and technology tool for electronically identifying, locating, and tracking products, assets, and personnel. As a result, precise positioning and tracking of RFID tags and readers have received considerable attention from both academic and industrial communities. Finding the position of RFID tags is considered an important task in various real-time locating systems (RTLS), and numerous RFID localization products have been developed for various applications. The majority of RFID positioning systems are based on the fusion of pieces of relevant information, such as the range and the direction-of-arrival (DOA). For example, trilateration can determine the tag position by using the range information of the tag estimated from three or more spatially separated reader antennas. Triangulation is another method to locate RFID tags, using DOA information estimated at multiple spatially separated locations. The RFID tag positions can also be determined through hybrid techniques that combine the range and DOA information. The focus of this paper is the design and performance of the localization of passive RFID tags using array processing techniques in a multipath environment, exploiting multi-frequency CW signals. The latter are used to decorrelate the coherent multipath signals for effective DOA estimation and for the purpose of accurate range estimation. Accordingly, the spatial and frequency dimensionalities are fully utilized for robust and accurate positioning of RFID tags.
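
    As a worked illustration of the trilateration step mentioned above (our own sketch, with made-up antenna positions), three range estimates from spatially separated reader antennas determine a planar tag position by linearizing the circle equations:

        # Trilateration of a tag from three ranges to known antenna positions.
        import numpy as np

        def trilaterate(anchors, ranges):
            """anchors: (3, 2) antenna positions; ranges: (3,) measured distances."""
            p1, p2, p3 = anchors
            r1, r2, r3 = ranges
            # Subtracting the first circle equation from the others gives 2 linear eqs
            A = 2.0 * np.array([p2 - p1, p3 - p1])
            b = np.array([r1**2 - r2**2 + p2 @ p2 - p1 @ p1,
                          r1**2 - r3**2 + p3 @ p3 - p1 @ p1])
            return np.linalg.solve(A, b)

        anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
        true_tag = np.array([1.0, 1.0])
        ranges = np.linalg.norm(anchors - true_tag, axis=1)
        print(trilaterate(anchors, ranges))      # recovers approximately [1.0, 1.0]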

  20. A multi-sensor system for robotics proximity operations

    NASA Technical Reports Server (NTRS)

    Cheatham, J. B.; Wu, C. K.; Weiland, P. L.; Cleghorn, T. F.

    1988-01-01

    Robots without sensors can perform only simple repetitive tasks and cannot cope with unplanned events. A multi-sensor system is needed for a robot to locate a target, move into its neighborhood and perform operations in contact with the object. Systems that can be used for such tasks are described.

  1. CRITICAL OVERVIEW OF THE PERFORMANCE OF A MULTISENSOR CAPACITANCE SYSTEM

    Technology Transfer Automated Retrieval System (TEKTRAN)

    During the last decade major advances have been made in capacitance-based sensor technology that enhanced our ability to measure soil water content in the soil-plant-atmosphere system. Multisensor capacitance systems (MCS) took the lead in this regard. The objectives of the current work are to c...

  2. The multisensor PHD filter: II. Erroneous solution via Poisson magic

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald

    2009-05-01

    The theoretical foundation for the probability hypothesis density (PHD) filter is the FISST multitarget differential and integral calculus. The "core" PHD filter presumes a single sensor. Theoretically rigorous formulas for the multisensor PHD filter can be derived using the FISST calculus, but are computationally intractable. A less theoretically desirable solution, the iterated-corrector approximation, must be used instead. Recently, it has been argued that an "elementary" methodology, the "Poisson-intensity approach," renders FISST obsolete. It has further been claimed that the iterated-corrector approximation is suspect, and in its place an allegedly superior "general multisensor intensity filter" has been proposed. In this and a companion paper I demonstrate that it is these claims which are erroneous. The companion paper introduces formulas for the actual "general multisensor intensity filter." In this paper I demonstrate that (1) the "general multisensor intensity filter" fails in important special cases; (2) it will perform badly in even the easiest multitarget tracking problems; and (3) these rather serious missteps suggest that the "Poisson-intensity approach" is inherently faulty.

  3. Enhanced research in ground-penetrating radar and multisensor fusion with application to the detection and visualization of buried waste. Final report

    SciTech Connect

    Devney, A.J.; DiMarzio, C.; Kokar, M.; Miller, E.L.; Rappaport, C.M.; Weedon, W.H.

    1996-05-14

    Recognizing the difficulty and importance of the landfill remediation problems faced by DOE, and the fact that no one sensor alone can provide complete environmental site characterization, a multidisciplinary team approach was chosen for this project. The authors have developed a multisensor fusion approach that is suitable for the wide variety of sensors available to DOE and that allows separate detection algorithms to be developed and custom-tailored to each sensor. This approach is currently being applied to the Geonics EM-61 and Coleman step-frequency radar data. High-resolution array processing techniques were developed for detecting and localizing buried waste containers. A soil characterization laboratory facility was developed using an HP-8510 network analyzer and a near-field coaxial probe. Both internal and external calibration procedures were developed for de-embedding the frequency-dependent soil electrical parameters from the measurements. Dispersive soil propagation modeling algorithms were also developed for simulating wave propagation in dispersive soil media. A study was performed on the application of infrared sensors to the landfill remediation problem, particularly for providing information on volatile organic compounds (VOCs) in the atmosphere. A dust-emission lidar system is proposed for landfill remediation monitoring. Design specifications are outlined for a system which could be used to monitor dust emissions in a landfill remediation effort. The detailed results of the investigations are contained herein.

  4. Solution-Processed Organic Thin-Film Transistor Array for Active-Matrix Organic Light-Emitting Diode

    NASA Astrophysics Data System (ADS)

    Harada, Chihiro; Hata, Takuya; Chuman, Takashi; Ishizuka, Shinichi; Yoshizawa, Atsushi

    2013-05-01

    We developed a 3-in. organic thin-film transistor (OTFT) array with an ink-jetted organic semiconductor. All layers except the electrodes were fabricated by solution processes. The OTFT performed well without hysteresis: the field-effect mobility in the saturation region was 0.45 cm^2 V^-1 s^-1, the threshold voltage was 3.3 V, and the on/off current ratio was more than 10^6. We demonstrated a 3-in. active-matrix organic light-emitting diode (AMOLED) display driven by the OTFT array. The display could provide clear moving images. The peak luminance of the display was 170 cd/m^2.

  5. Digital pixel CMOS focal plane array with on-chip multiply accumulate units for low-latency image processing

    NASA Astrophysics Data System (ADS)

    Little, Jeffrey W.; Tyrrell, Brian M.; D'Onofrio, Richard; Berger, Paul J.; Fernandez-Cull, Christy

    2014-06-01

    A digital pixel CMOS focal plane array has been developed to enable low-latency implementations of image processing systems such as centroid trackers, Shack-Hartmann wavefront sensors, and Fitts correlation trackers through the use of in-pixel digital signal processing (DSP) and generic parallel pipelined multiply-accumulate (MAC) units. Light intensity digitization occurs at the pixel level, enabling in-pixel DSP and noiseless data transfer from the pixel array to the peripheral processing units. The pipelined processing of row and column image data prior to off-chip readout reduces the required output bandwidth of the image sensor, thus reducing the latency of the computations necessary to implement various image processing systems. Data volume reductions of over 80% lead to sub-10 μs latency for completing various tracking and sensor algorithms. This paper details the architecture of the pixel-processing imager (PPI) and presents initial results from a prototype device fabricated in a standard 65 nm CMOS process hybridized to a commercial off-the-shelf short-wave infrared (SWIR) detector array.
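
    The kind of computation the in-pixel MAC units accelerate can be illustrated with a simple intensity-weighted centroid (our own sketch): row and column multiply-accumulate sums yield the target centroid without shipping the full frame off chip.

        # Centroid from row/column multiply-accumulate sums of pixel intensities.
        import numpy as np

        def centroid(frame):
            rows = frame.sum(axis=1)          # row-wise accumulate results
            cols = frame.sum(axis=0)          # column-wise accumulate results
            total = frame.sum()
            y = (np.arange(frame.shape[0]) * rows).sum() / total
            x = (np.arange(frame.shape[1]) * cols).sum() / total
            return y, x

        frame = np.zeros((64, 64))
        frame[20:24, 40:44] = 1.0             # a small bright target
        print(centroid(frame))                # approximately (21.5, 41.5)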

  6. Introducing multisensor satellite radiance-based evaluation for regional Earth System modeling

    NASA Astrophysics Data System (ADS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.; Aires, F.

    2014-07-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  7. Urban structure analysis of mega city Mexico City using multisensoral remote sensing data

    NASA Astrophysics Data System (ADS)

    Taubenböck, H.; Esch, T.; Wurm, M.; Thiel, M.; Ullmann, T.; Roth, A.; Schmidt, M.; Mehl, H.; Dech, S.

    2008-10-01

    The mega city Mexico City is ranked the third-largest urban agglomeration around the globe to date. Its large extent as well as dynamic urban transformation and sprawl processes lead to a lack of up-to-date and area-wide data and information to measure, monitor, and understand the urban situation. This paper focuses on the capabilities of multisensoral remotely sensed data to provide a broad range of products derived from one scientific field - remote sensing - to support urban management and planning. Optical data sets from the Landsat and Quickbird sensors as well as radar data from the Shuttle Radar Topography Mission (SRTM) and the TerraSAR-X sensor are utilised. Using the multisensoral data sets, the analyses are scale-dependent. On the one hand, change detection at the city level utilising the derived urban footprints makes it possible to monitor and assess spatiotemporal urban transformation, the areal dimension of urban sprawl, its direction, and the built-up density distribution over time. On the other hand, structural characteristics of an urban landscape - the alignment and types of buildings, streets and open spaces - provide insight into the very detailed physical pattern of urban morphology at a finer scale. The results show high accuracies for the derived multi-scale products. The multi-scale analysis allows urban processes to be quantified, leading to an assessment and interpretation of urban trends.

  8. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.; Aires, F.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  9. Sub-band processing for grating lobe disambiguation in sparse arrays

    NASA Astrophysics Data System (ADS)

    Hersey, Ryan K.; Culpepper, Edwin

    2014-06-01

    Combined synthetic aperture radar (SAR) and ground moving target indication (GMTI) radar modes simultaneously generate SAR and GMTI products from the same radar data. This hybrid mode provides the benefit of combined imaging and moving-target displays as well as improved target recognition. However, the differing system, antenna, and waveform requirements of the SAR and GMTI modes make implementing the hybrid mode challenging. The Air Force Research Laboratory (AFRL) Gotcha radar has collected wide-bandwidth, multi-channel data that can be used for both SAR and GMTI applications. The spatial channels on the Gotcha array are sparsely separated, which causes spatial grating lobes during the digital beamforming process. Grating lobes have little impact on SAR, which typically uses a single spatial channel. However, grating lobes have a large impact on GMTI, where spatial channels are used to mitigate clutter and estimate the target angle of arrival (AOA). The AOA ambiguity has a significant impact on the Gotcha data, where detections from the sidelobes and the skirts of the mainlobe wrap back into the main scene, causing a significant number of false alarms. This paper presents a sub-banding method to disambiguate grating lobes in GMTI processing. The method divides the wideband SAR data into multiple frequency sub-bands. Since each sub-band has a different center frequency, the grating lobes of each sub-band appear at different angles. The method uses this variation to disambiguate target returns and place them at the correct AOA. Results are presented using AFRL Gotcha radar data.
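
    The disambiguation principle can be seen directly from the grating-lobe geometry (our own illustration, with simple uniform-spacing assumptions and made-up numbers): the true arrival angle is the same in every sub-band, while grating-lobe angles shift with the sub-band center frequency.

        # Grating-lobe angles move with sub-band center frequency; the true
        # angle of arrival does not (spacing and frequencies are illustrative).
        import numpy as np

        c = 3.0e8
        d = 0.12            # channel spacing in metres (sparse relative to wavelength)
        theta_true = 10.0   # true angle of arrival in degrees

        def lobe_angles(fc, theta0_deg, orders=(-2, -1, 0, 1, 2)):
            lam = c / fc
            s = np.sin(np.radians(theta0_deg)) + np.array(orders) * lam / d
            return np.degrees(np.arcsin(s[np.abs(s) <= 1.0]))

        for fc in (9.0e9, 9.3e9, 9.6e9):          # three sub-band center frequencies
            print(fc / 1e9, np.round(lobe_angles(fc, theta_true), 2))
        # Only the response near 10 degrees is common to all sub-bands.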

  10. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-30

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst-reviewed contexts: • one or more fixed frequency bands should be chosen for each phase tested for. • the frequency band providing the most stable parameter estimates varies from site to site, and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site. • slowness corrections applied depend strongly on the frequency band chosen. • the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain. For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  11. Developing Smart Seismic Arrays: A Simulation Environment, Observational Database, and Advanced Signal Processing

    SciTech Connect

    Harben, P E; Harris, D; Myers, S; Larsen, S; Wagoner, J; Trebes, J; Nelson, K

    2003-09-15

    Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization and in full 3D finite difference modeling as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic

  12. Mapping acoustic emissions from hydraulic fracture treatments using coherent array processing: Concept

    SciTech Connect

    Harris, D.B.; Sherwood, R.J.; Jarpe, S.P.; Harben, P.E.

    1991-09-01

    Hydraulic fracturing is a widely-used well completion technique for enhancing the recovery of gas and oil in low-permeability formations. Hydraulic fracturing consists of pumping fluids into a well under high pressure (1000--5000 psi) to wedge-open and extend a fracture into the producing formation. The fracture acts as a conduit for gas and oil to flow back to the well, significantly increasing communication with larger volumes of the producing formation. A considerable amount of research has been conducted on the use of acoustic (microseismic) emission to delineate fracture growth. The use of transient signals to map the location of discrete sites of emission along fractures has been the focus of most research on methods for delineating fractures. These methods depend upon timing the arrival of compressional (P) or shear (S) waves from discrete fracturing events at one or more clamped geophones in the treatment well or in adjacent monitoring wells. Using a propagation model, the arrival times are used to estimate the distance from each sensor to the fracturing event. Coherent processing methods appear to have sufficient resolution in the 75 to 200 Hz band to delineate the extent of fractures induced by hydraulic fracturing. The medium velocity structure must be known with a 10% accuracy or better and no major discontinuities should be undetected. For best results, the receiving array must be positioned directly opposite the perforations (same depths) at a horizontal range of 200 to 400 feet from the region to be imaged. Sources of acoustic emission may be detectable down to a single-sensor SNR of 0.25 or somewhat less. These conclusions are limited by the assumptions of this study: good coupling to the formation, acoustic propagation, and accurate knowledge of the velocity structure.
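
    The single-sensor timing relationship behind such arrival-time methods can be written out directly (a worked example with illustrative formation velocities, not values from the study): the S-minus-P time difference fixes the sensor-to-event distance, and several sensors then triangulate the emission site.

        # Distance to a microseismic event from the S-P arrival-time difference.
        vp, vs = 4200.0, 2500.0      # m/s, illustrative P- and S-wave velocities
        dt_sp = 0.012                # s, measured S-minus-P time difference
        distance = dt_sp * vp * vs / (vp - vs)
        print(round(distance, 1), "m")   # about 74.1 m from this sensor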

  13. RheoStim: Development of an Adaptive Multi-Sensor to Prevent Venous Stasis.

    PubMed

    Weyer, Sören; Weishaupt, Fabio; Kleeberg, Christian; Leonhardt, Steffen; Teichmann, Daniel

    2016-01-01

    Chronic venous insufficiency of the lower limbs is often underestimated and, in the absence of therapy, results in increasingly severe complications, including therapy-resistant tissue defects. Therefore, early diagnosis and adequate therapy is of particular importance. External counter pulsation (ECP) therapy is a method used to assist the venous system. The main principle of ECP is to squeeze the inner leg vessels by muscle contractions, which are evoked by functional electrical stimulation. A new adaptive trigger method is proposed, which improves and supplements the current therapeutic options by means of pulse synchronous electro-stimulation of the muscle pump. For this purpose, blood flow is determined by multi-sensor plethysmography. The hardware design and signal processing of this novel multi-sensor plethysmography device are introduced. The merged signal is used to determine the phase of the cardiac cycle, to ensure stimulation of the muscle pump during the filling phase of the heart. The pulse detection of the system is validated against a gold standard and provides a sensitivity of 98% and a false-negative rate of 2% after physical exertion. Furthermore, flow enhancement of the system has been validated by duplex ultrasonography. The results show a highly increased blood flow in the popliteal vein at the knee. PMID:27023544

  14. RheoStim: Development of an Adaptive Multi-Sensor to Prevent Venous Stasis

    PubMed Central

    Weyer, Sören; Weishaupt, Fabio; Kleeberg, Christian; Leonhardt, Steffen; Teichmann, Daniel

    2016-01-01

    Chronic venous insufficiency of the lower limbs is often underestimated and, in the absence of therapy, results in increasingly severe complications, including therapy-resistant tissue defects. Therefore, early diagnosis and adequate therapy is of particular importance. External counter pulsation (ECP) therapy is a method used to assist the venous system. The main principle of ECP is to squeeze the inner leg vessels by muscle contractions, which are evoked by functional electrical stimulation. A new adaptive trigger method is proposed, which improves and supplements the current therapeutic options by means of pulse synchronous electro-stimulation of the muscle pump. For this purpose, blood flow is determined by multi-sensor plethysmography. The hardware design and signal processing of this novel multi-sensor plethysmography device are introduced. The merged signal is used to determine the phase of the cardiac cycle, to ensure stimulation of the muscle pump during the filling phase of the heart. The pulse detection of the system is validated against a gold standard and provides a sensitivity of 98% and a false-negative rate of 2% after physical exertion. Furthermore, flow enhancement of the system has been validated by duplex ultrasonography. The results show a highly increased blood flow in the popliteal vein at the knee. PMID:27023544

  15. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety.

    PubMed

    Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang

    2016-01-01

    Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking is under control of the proposed vehicle reversing control strategy prior to any potential collisions, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. PMID:27294931
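
    As a minimal sketch of the fusion step only (not the authors' adaptive filter), the code below runs a 1-D constant-velocity Kalman filter that fuses simultaneous range readings from an ultrasonic finder and a stereo camera; the motion model and all noise levels are assumed.

```python
import numpy as np

dt = 0.05                                  # s, update interval (assumed)
F = np.array([[1, dt], [0, 1]])            # constant-velocity motion model
Q = np.diag([1e-4, 1e-3])                  # process noise (assumed)
H = np.array([[1.0, 0.0], [1.0, 0.0]])     # both sensors observe range only
R = np.diag([0.05**2, 0.15**2])            # ultrasonic vs. camera variance (assumed)

x = np.array([5.0, 0.0])                   # initial range (m) and range rate (m/s)
P = np.eye(2)

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the stacked ultrasonic/camera measurement z = [z_us, z_cam].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, np.array([4.93, 5.10]))
print(np.round(x, 3))                      # fused range and range-rate estimate
```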

  16. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety

    PubMed Central

    Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang

    2016-01-01

    Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking is under control of the proposed vehicle reversing control strategy prior to any potential collisions, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. PMID:27294931

  17. Processing of chemical sensor arrays with a biologically inspired model of olfactory coding.

    PubMed

    Raman, Baranidharan; Sun, Ping A; Gutierrez-Galvez, Agustin; Gutierrez-Osuna, Ricardo

    2006-07-01

    This paper presents a computational model for chemical sensor arrays inspired by the first two stages in the olfactory pathway: distributed coding with olfactory receptor neurons and chemotopic convergence onto glomerular units. We propose a monotonic concentration-response model that maps conventional sensor-array inputs into a distributed activation pattern across a large population of neuroreceptors. Projection onto glomerular units in the olfactory bulb is then simulated with a self-organizing model of chemotopic convergence. The pattern recognition performance of the model is characterized using a database of odor patterns from an array of temperature modulated chemical sensors. The chemotopic code achieved by the proposed model is shown to improve the signal-to-noise ratio available at the sensor inputs while being consistent with results from neurobiology. PMID:16856663
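
    A minimal sketch of the distributed-coding idea, assuming a made-up affinity matrix and a Hill-type monotonic concentration-response curve rather than the authors' calibrated model: a small sensor-array reading is expanded onto a large population of model receptors and then pooled into a few glomerulus-like units (the paper uses a self-organizing map for this convergence; fixed grouping is used here only for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_receptors, n_glomeruli = 8, 200, 10

# Random sensitivities of each model receptor to each sensor channel (assumed).
affinity = rng.uniform(0.1, 1.0, size=(n_receptors, n_sensors))

def receptor_population(sensor_response, n=2.0, k=0.5):
    """Monotonic (Hill-type) concentration-response mapping onto many receptors."""
    drive = affinity @ sensor_response           # effective stimulus per receptor
    return drive**n / (drive**n + k**n)          # saturating response in [0, 1)

# Chemotopic convergence, crudely approximated by pooling fixed receptor groups.
groups = np.array_split(np.arange(n_receptors), n_glomeruli)

sensor_response = rng.uniform(0, 1, n_sensors)   # one odor presentation (assumed)
population = receptor_population(sensor_response)
glomerular_code = np.array([population[g].mean() for g in groups])
print(np.round(glomerular_code, 3))
```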

  18. Facile and flexible fabrication of gapless microlens arrays using a femtosecond laser microfabrication and replication process

    NASA Astrophysics Data System (ADS)

    Liu, Hewei; Chen, Feng; Yang, Qing; Hu, Yang; Shan, Chao; He, Shengguan; Si, Jinhai; Hou, Xun

    2012-03-01

    We demonstrate a facile and flexible method to fabricate close-packed microlens arrays (MLAs). Glass molding templates with concave structures are produced by a femtosecond (fs)-laser point-by-point exposures followed by a chemical treatment, and convex MLAs are subsequently replicated on Poly(methyl methacrylate) [PMMA] using a hot embossing system. As an example, a microlens array (MLA) with 60-μm rectangular-shaped spherical microlenses is fabricated. Optical performances of the MLAs, such as focusing and imaging properties are tested, and the results demonstrate the uniformity and smooth surfaces of the MLA. We also demonstrated that the shape and alignment of the arrays could be controlled by different parameters.

  19. Multisensor monitoring of sea surface state of the coastal zone

    NASA Astrophysics Data System (ADS)

    Lavrova, Olga; Mityagina, Marina; Bocharova, Tatina

    Results of many-year monitoring of the state of the coastal zone based on a multisensor approach are presented. The monitoring is aimed at solving the following tasks: operational mapping of parameters characterizing the state and pollution (coastal, ship and biogenic) of water; analysis of the meteorological state and its effect on the drift and spread of pollutants; study of coastal circulation patterns and their impact on the drift and spread of pollutants; and deriving typical pollution distribution patterns in the coastal zone. Processing and analysis are performed using data in the visual, infrared and microwave ranges from the ERS-2 SAR, Envisat ASAR/MERIS, Terra and Aqua MODIS and NOAA AVHRR instruments. These are complemented with ground data from meteorological stations on the shore and results of satellite data processing from previous periods. The main regions of interest are the Russian sectors of the Black and Azov Seas, the southeastern part of the Baltic Sea, and the northern and central regions of the Caspian Sea. The adjacent coasts are densely populated and have well-developed industry, agriculture and rapidly growing tourism sectors, so constant monitoring of the sea state there is essential. The monitoring activities allow us to accumulate extensive material for the study of hydrodynamic processes in the regions, in particular water circulation. Detailing the occurrence, evolution and drift of small- and meso-scale vortex structures is crucial for understanding the mechanisms that determine mixing and circulation processes in the coastal zone. These mechanisms play an important role in the ecological, hydrodynamic and meteorological status of a coastal zone. Special attention is paid to the sea surface state in the Kerch Strait, where a tanker catastrophe took place on November 11, 2007, causing a spillage of over 1.5 thousand tons of heavy oil. The Kerch Strait is characterized by a complex current system with current directions changing to their opposites depending on

  20. Processing And Display Of Medical Three Dimensional Arrays Of Numerical Data Using Octree Encoding

    NASA Astrophysics Data System (ADS)

    Amans, Jean-Louis; Darier, Pierre

    1986-05-01

    Imaging modalities such as X-ray computerized tomography (CT), nuclear medicine and nuclear magnetic resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging.

  1. Recognition Time for Letters and Nonletters: Effects of Serial Position, Array Size, and Processing Order.

    ERIC Educational Resources Information Center

    Mason, Mildred

    1982-01-01

    Three experiments report additional evidence that it is a mistake to account for all interletter effects solely in terms of sensory variables. These experiments attest to the importance of structural variables such as retinal location, array size, and ordinal position. (Author/PN)

  2. Overseas testing of a multisensor landmine detection system: results and lessons learned

    NASA Astrophysics Data System (ADS)

    Keranen, Joe G.; Topolosky, Zeke

    2009-05-01

    The Nemesis detection system has been developed to provide an efficient and reliable unmanned, multi-sensor, ground-based platform to detect and mark landmines. The detection system consists of two detection sensor arrays: a Ground Penetrating Synthetic Aperture Radar (GPSAR) developed by Planning Systems, Inc. (PSI) and an electromagnetic induction (EMI) sensor array developed by Minelab Electronics, PTY. Limited. Under the direction of the Night Vision and Electronic Sensors Directorate (NVESD), overseas testing was performed at Kampong Chhnang Test Center (KCTC), Cambodia, from May 12-30, 2008. Test objectives included: evaluation of detection performance, demonstration of real-time visualization and alarm generation, and evaluation of system operational efficiency. Testing was performed on five sensor test lanes, each consisting of a unique soil mixture, and three off-road lanes which include curves, overgrowth, potholes, and non-uniform lane geometry. In this paper, we outline the test objectives, procedures, results, and lessons learned from overseas testing. We also describe the current state of the system, and plans for future enhancements and modifications including clutter rejection and feature-level fusion.

  3. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large UAVs are typically state owned, whereas small UAVs (SUAVs) may be in the form of widely available remote-controlled aircraft. The potential threat of these SUAVs to both the military and the civilian populace has led to research efforts to counter these assets via track, ID, and attack. Difficulties arise from the small size and low radar cross section of these targets when attempting to detect and track them with a single sensor such as a radar or video camera. In addition, clutter objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and the future inclusion of a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.

  4. Dynamic templating: a large area processing route for the assembly of periodic arrays of sub-micrometer and nanoscale structures

    NASA Astrophysics Data System (ADS)

    Farzinpour, Pouyan; Sundar, Aarthi; Gilroy, Kyle D.; Eskin, Zachary E.; Hughes, Robert A.; Neretina, Svetlana

    2013-02-01

    A substrate-based templated assembly route has been devised which offers large-area, high-throughput capabilities for the fabrication of periodic arrays of sub-micrometer and nanometer-scale structures. The approach overcomes a significant technological barrier to the widespread use of substrate-based templated assembly by eliminating the need for periodic templates having nanoscale features. Instead, it relies upon the use of a dynamic template with dimensions that evolve in time from easily fabricated micrometer dimensions to those on the nanoscale as the assembly process proceeds. The dynamic template consists of a pedestal of a sacrificial material, typically antimony, upon which an ultrathin layer of a second material is deposited. When heated, antimony sublimation results in a continuous reduction in template size where the motion of the sublimation fronts direct the diffusion of atoms of the second material to a predetermined location. The route has broad applicability, having already produced periodic arrays of gold, silver, copper, platinum, nickel, cobalt, germanium and Au-Ag alloys on substrates as diverse as silicon, sapphire, silicon-carbide, graphene and glass. Requiring only modest levels of instrumentation, the process provides an enabling route for any reasonably equipped researcher to fabricate periodic arrays that would otherwise require advanced fabrication facilities.

  5. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWIFS (1000m).
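
    As an illustration of one of the feature-matching criteria mentioned above, the sketch below scores the alignment of two images by their mutual information computed from a joint gray-level histogram; the synthetic images, shift range and bin count are assumed, and this is not the authors' implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized gray-level images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
ref = rng.random((128, 128))
shifted = np.roll(ref, 3, axis=1) + 0.05 * rng.random((128, 128))

# Scan a 1-D horizontal offset; the true 3-pixel shift maximizes the score.
scores = {dx: mutual_information(ref, np.roll(shifted, -dx, axis=1)) for dx in range(-5, 6)}
print(max(scores, key=scores.get))
```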

  6. Robust optical and SAR multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    Wu, Yingdan; Ming, Yang

    2015-10-01

    This paper proposes a robust matching method for the multi-sensor imagery. Firstly, the SIFT feature matching and relaxation matching method are integrated in the highest pyramid to derive the approximate relationship between the reference and slave image. Then, the normalized Mutual Information and multi-grid multi-level RANSAC algorithm are adopted to find the correct conjugate points. Iteratively perform above steps until the original image level, the facet- based transformation model is used to carry out the image registration. Experiments have been made, and the results show that the method in this paper can deliver large number of evenly distributed conjugate points and realize the accurate registration of optical and SAR multi-sensor imagery.

  7. Advances in Multi-Sensor Data Fusion: Algorithms and Applications

    PubMed Central

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of “algorithm fusion” methods; and (3) establishment of an automatic quality assessment scheme. PMID:22408479

  8. Emerging standards for testing of multisensor mine detectors

    NASA Astrophysics Data System (ADS)

    Dibsdall, Ian M.

    2005-06-01

    The standards relating to testing of metal detectors for demining operations are developing well, including (but not limited to) CEN Working Agreement CWA14747:2003, UNMAS Mine Action Standards and others. However, for developing multisensor mine detectors there is no agreed standard method of testing. ITEP, the International Test and Evaluation Program for Humanitarian Demining, is currently drawing together several nations' experience of testing multisensor mine detectors into a "best practice" document that could be used as the basis for standardised testing. This paper outlines the test methodology used during recent multisensor mine detector tests and discusses the issues that have arisen and lessons learned. In particular, the requirements for suitable targets, careful site preparation, measurement of appropriate environmental factors and methods of maximising useful results with limited resources are highlighted. Most recent tests have used a combined Metal Detector (MD) and Ground Penetrating Radar (GPR), but other sensor systems will be considered. An emerging test methodology is presented, along with an invitation for feedback from other researchers for inclusion into the "best practice" document.

  9. Sparse Downscaling and Adaptive Fusion of Multi-sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2011-12-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multi-sensor precipitation data including data from global spaceborne active and passive sensors, regional ground based weather surveillance radars and local rain-gauges. Resolution enhancement of remotely sensed rainfall and optimal integration of multi-sensor data promise a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydro-meteorological applications. In this context, new frameworks are proposed for resolution enhancement and multiscale multi-sensor precipitation data fusion, which capitalize on two main observations: (1) sparseness of remotely sensed precipitation fields in appropriately chosen transformed domains (e.g., in wavelet space), which promotes the use of the newly emerged theory of sparse representation and compressive sensing for resolution enhancement; (2) a conditionally Gaussian Scale Mixture (GSM) parameterization in the wavelet domain which allows exploiting the efficient linear estimation methodologies, while capturing the non-Gaussian data structure of rainfall. The proposed methodologies are demonstrated using a data set of coincidental observations of precipitation reflectivity images by the spaceborne precipitation radar (PR) aboard the Tropical Rainfall Measurement Mission (TRMM) satellite and ground-based NEXRAD weather surveillance Doppler radars. Uniqueness and stability of the solution, capturing the non-Gaussian singular structure of rainfall, reduced uncertainty of estimation and efficiency of computation are the main advantages of the proposed methodologies over the commonly used standard Gaussian techniques.
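
    A minimal sketch of the sparse-recovery ingredient (iterative soft thresholding for an underdetermined linear system), not the authors' downscaling or fusion algorithm; the sensing matrix, sparsity level and regularization weight below are all assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 60, 200, 6                       # measurements, unknowns, nonzeros (assumed)

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrinkage
    return x

x_hat = ista(A, y)
print("true support:     ", np.flatnonzero(x_true))
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 0.05))
```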

  10. Multi-sensor management for data fusion in target tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiaokun; Chen, Genshe; Blasch, Erik; Patrick, Jim; Yang, Chun; Kadar, Ivan

    2009-05-01

    Multi-sensor management for data fusion in target tracking concerns issues of sensor assignment and scheduling by managing or coordinating the use of multiple sensor resources. Since a centralized sensor management technique has a crucial limitation in that the failure of the central node would cause whole system failure, a decentralized sensor management (DSM) scheme is increasingly important in modern multi-sensor systems. DSM is afforded in modern systems through increased bandwidth, wireless communication, and enhanced power. However, protocols for system control are needed to manage device access. As game theory offers learning models for distributed allocations of surveillance resources and provides mechanisms to handle the uncertainty of the surveillance area, we propose an agent-based negotiable game theoretic approach for decentralized sensor management (ANGADS). With the decentralized sensor management scheme, sensor assignment occurs locally, and there is no central node, which reduces the risk of whole-system failure. Simulation results for a multi-sensor target-tracking scenario demonstrate the applicability of the proposed approach.

  11. A multi-sensor scenario for coastal surveillance

    NASA Astrophysics Data System (ADS)

    van den Broek, A. C.; van den Broek, S. P.; van den Heuvel, J. C.; Schwering, P. B. W.; van Heijningen, A. W. P.

    2007-10-01

    Maritime borders and coastal zones are susceptible to threats such as drug trafficking, piracy, and the undermining of economic activities. At TNO Defence, Security and Safety, various studies aim at improving situational awareness in a coastal zone. In this study we focus on multi-sensor surveillance of the coastal environment. We present a study on improving classification results for small sea surface targets using an advanced sensor suite and a scenario in which a small boat is approaching the coast. A next generation sensor suite mounted on a tower has been defined consisting of a maritime surveillance and tracking radar system, capable of producing range profiles and ISAR imagery of ships, an advanced infrared camera and a laser range profiler. For this suite we have developed a multi-sensor classification procedure, which is used to evaluate the capabilities for recognizing and identifying non-cooperative ships in coastal waters. We have found that the different sensors give complementary information. Each sensor has its own specific distance range in which it contributes most. A multi-sensor approach reduces the number of misclassifications, and reliable classification results are obtained earlier compared to a single sensor approach.

  12. Multi-sensor Mapping of Volcanic Plumes and Clouds

    NASA Astrophysics Data System (ADS)

    Realmuto, V. J.

    2006-12-01

    The instruments aboard the NASA series of Earth Observing System satellites provide a rich suite of measurements for the mapping of volcanic plumes and clouds. In this presentation we focus on analyses of data acquired with the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Atmospheric Infrared Sounder (AIRS), Moderate Resolution Imaging Spectroradiometer (MODIS), and Multiangle Imaging SpectroRadiometer (MISR). ASTER, MODIS, AIRS, and MISR provide complementary information on the quantity and distribution of sulfur dioxide, silicate ash, and sulfate aerosols within plumes. In addition, MISR data are used to derive estimates of cloud-top altitude, wind direction, and wind speed. The key to multi-sensor mapping is the availability of a standard set of tools for the processing of data from different instruments. To date we have used the MAP_SO2 toolkit to analyze the thermal infrared (TIR) data from MODIS, ASTER, and AIRS. MAP_SO2, a graphic user interface to the MODTRAN radiative transfer model, provides tools for the estimation of emissivity spectra, water vapor and ozone correction factors, surface temperature, and concentrations of SO2. We use the MISR_Shift toolkit to estimate plume-top altitudes and local wind vectors. Our continuous refinement of MAP_SO2 has resulted in lower detection limits for SO2 and lower sensitivity to the presence of sulfate aerosols and ash. Our plans for future refinements of MAP_SO2 include the incorporation of AIRS-based profiles of atmospheric temperature, water vapor and ozone, and MISR-based maps of plume-top altitude into the plume mapping procedures. The centerpiece of our study is a time-series of data acquired during the 2002-2003 and 2006 eruptions of Mount Etna. Time-series measurements are the only means of recording dynamic phenomena and characterizing the processes that generate such phenomena. We have also analyzed data acquired over Klychevskoy, Bezymianny, and Sheveluch (Kamchatka), Augustine

  13. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    NASA Astrophysics Data System (ADS)

    Due-Hansen, J.; Midtbø, K.; Poppe, E.; Summanwar, A.; Jensen, G. U.; Breivik, L.; Wang, D. T.; Schjølberg-Henriksen, K.

    2012-07-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing that resulted in a film with a sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms has been developed. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key steps that enable the fabrication of CMUT arrays suitable for applications in, for instance, intravascular cardiology and gastrointestinal imaging.

  14. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    SciTech Connect

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-06-15

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on the Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. A lumped-element circuit model of wire arrays was employed to analyze the electromagnetic energy conversion. The inductance as well as the resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that the electromagnetic energy is mainly converted to magnetic and kinetic energy, while ohmic heating can be neglected before the final stagnation. The kinetic energy can be responsible for the x-ray radiation before the peak power. After the stagnation, the electromagnetic energy coupled by the load continues increasing and the resistance of the load achieves its maximum of 0.6–1.0 Ω in about 10–20 ns.

  15. Dynamic templating: a large area processing route for the assembly of periodic arrays of sub-micrometer and nanoscale structures.

    PubMed

    Farzinpour, Pouyan; Sundar, Aarthi; Gilroy, Kyle D; Eskin, Zachary E; Hughes, Robert A; Neretina, Svetlana

    2013-03-01

    A substrate-based templated assembly route has been devised which offers large-area, high-throughput capabilities for the fabrication of periodic arrays of sub-micrometer and nanometer-scale structures. The approach overcomes a significant technological barrier to the widespread use of substrate-based templated assembly by eliminating the need for periodic templates having nanoscale features. Instead, it relies upon the use of a dynamic template with dimensions that evolve in time from easily fabricated micrometer dimensions to those on the nanoscale as the assembly process proceeds. The dynamic template consists of a pedestal of a sacrificial material, typically antimony, upon which an ultrathin layer of a second material is deposited. When heated, antimony sublimation results in a continuous reduction in template size where the motion of the sublimation fronts direct the diffusion of atoms of the second material to a predetermined location. The route has broad applicability, having already produced periodic arrays of gold, silver, copper, platinum, nickel, cobalt, germanium and Au-Ag alloys on substrates as diverse as silicon, sapphire, silicon-carbide, graphene and glass. Requiring only modest levels of instrumentation, the process provides an enabling route for any reasonably equipped researcher to fabricate periodic arrays that would otherwise require advanced fabrication facilities. PMID:23354129

  16. CMOS Geiger photodiode array with integrated signal processing for imaging of 2D objects using quantum dots

    NASA Astrophysics Data System (ADS)

    Stapels, Christopher J.; Lawrence, William G.; Gurjar, Rajan S.; Johnson, Erik B.; Christian, James F.

    2008-08-01

    Geiger-mode photodiodes (GPD) act as binary photon detectors that convert analog light intensity into digital pulses. Fabrication of arrays of GPD in a CMOS environment simplifies the integration of signal-processing electronics to enhance the performance and provide a low-cost detector-on-a-chip platform. Such an instrument facilitates imaging applications in extremely low light and confined volumes. High-sensitivity reading of small samples enables two-dimensional imaging of DNA arrays, tracking of single molecules, and observation of their dynamic behavior. In this work, we describe the performance of a prototype imaging detector of GPD pixels, with integrated active quenching, for use in imaging of 2D objects using fluorescent labels. We demonstrate the integration of on-chip memory and a parallel readout interface for an array of CMOS GPD pixels as progress toward an all-digital detector on a chip. We also describe advances in pixel-level signal processing and solid-state photomultiplier developments.

  17. Simultaneous processing of photographic and accelerator array data from sled impact experiment

    NASA Astrophysics Data System (ADS)

    Ash, M. E.

    1982-12-01

    A Quaternion-Kalman filter model is derived to simultaneously analyze accelerometer array and photographic data from sled impact experiments. Formulas are given for the quaternion representation of rotations, the propagation of dynamical states and their partial derivatives, the observables and their partial derivatives, and the Kalman filter update of the state given the observables. The observables are accelerometer and tachometer velocity data of the sled relative to the track, linear accelerometer array and photographic data of the subject relative to the sled, and ideal angular accelerometer data. The quaternion constraints enter through perfect constraint observations and normalization after a state update. Lateral and fore-aft impact tests are analyzed with FORTRAN IV software written using the formulas of this report.
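
    A minimal sketch of the quaternion bookkeeping described above (multiplication, propagation from a body angular rate, and re-normalization after an update), using an assumed constant rate rather than data from the sled experiments.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions q ⊗ r, stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def propagate(q, omega, dt):
    """Advance orientation by body angular rate omega (rad/s) over dt seconds."""
    dq = quat_mult(q, np.concatenate(([0.0], 0.5 * omega)))  # q_dot = 0.5 * q ⊗ [0, ω]
    q = q + dq * dt
    return q / np.linalg.norm(q)           # normalization, as after a filter update

q = np.array([1.0, 0.0, 0.0, 0.0])         # identity orientation
omega = np.array([0.0, 0.0, np.deg2rad(90.0)])   # 90 deg/s yaw rate (assumed)
for _ in range(100):                       # integrate 1 s in 10 ms steps
    q = propagate(q, omega, 0.01)
print(np.round(q, 3))                      # ~ [cos 45°, 0, 0, sin 45°]
```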

  18. A miniature electronic nose system based on an MWNT-polymer microsensor array and a low-power signal-processing chip.

    PubMed

    Chiu, Shih-Wen; Wu, Hsiang-Chiu; Chou, Ting-I; Chen, Hsin; Tang, Kea-Tiong

    2014-06-01

    This article introduces a power-efficient, miniature electronic nose (e-nose) system. The e-nose system primarily comprises two self-developed chips, a multiple-walled carbon nanotube (MWNT)-polymer based microsensor array, and a low-power signal-processing chip. The microsensor array was fabricated on a silicon wafer by using standard photolithography technology. The microsensor array comprised eight interdigitated electrodes surrounded by SU-8 "walls," which restrained the material-solvent liquid in a defined area of 650 × 760 μm². To achieve a reliable sensor-manufacturing process, we used a two-layer deposition method, coating the MWNTs and polymer film as the first and second layers, respectively. The low-power signal-processing chip included array data acquisition circuits and a signal-processing core. The MWNT-polymer microsensor array can directly connect with array data acquisition circuits, which comprise sensor interface circuitry and an analog-to-digital converter; the signal-processing core consists of memory and a microprocessor. The core executes the program, classifying the odor data received from the array data acquisition circuits. The low-power signal-processing chip was designed and fabricated using the Taiwan Semiconductor Manufacturing Company 0.18-μm 1P6M standard complementary metal oxide semiconductor process. The chip consumes only 1.05 mW of power at supply voltages of 1 and 1.8 V for the array data acquisition circuits and the signal-processing core, respectively. The miniature e-nose system, which used a microsensor array, a low-power signal-processing chip, and an embedded k-nearest-neighbor-based pattern recognition algorithm, was developed as a prototype that successfully recognized the complex odors of tincture, sorghum wine, sake, whisky, and vodka. PMID:24385138
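
    A minimal sketch of the embedded pattern-recognition step, using a plain k-nearest-neighbor vote on 8-channel response vectors; the training data are random stand-ins, not measurements from the MWNT-polymer array.

```python
import numpy as np

rng = np.random.default_rng(3)
classes = ["tincture", "sorghum wine", "sake", "whisky", "vodka"]

# Stand-in training set: 20 noisy 8-channel response vectors per odor class.
prototypes = rng.uniform(0, 1, size=(len(classes), 8))
X = np.vstack([p + 0.05 * rng.standard_normal((20, 8)) for p in prototypes])
y = np.repeat(np.arange(len(classes)), 20)

def knn_classify(x, X, y, k=5):
    """Majority vote among the k training vectors closest to x (Euclidean)."""
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return np.bincount(y[idx]).argmax()

test = prototypes[3] + 0.05 * rng.standard_normal(8)   # an unseen "whisky" sample
print(classes[knn_classify(test, X, y)])
```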

  19. Solution-Processed Large-Area Nanocrystal Arrays of Metal-Organic Frameworks as Wearable, Ultrasensitive, Electronic Skin for Health Monitoring.

    PubMed

    Fu, Xiaolong; Dong, Huanli; Zhen, Yonggang; Hu, Wenping

    2015-07-15

    Pressure sensors based on solution-processed metal-organic framework nanowire arrays are fabricated at very low cost and offer flexibility, high sensitivity, and ease of integration into sensor arrays. Furthermore, the pressure sensors are suitable for monitoring and diagnosing biomedical signals such as radial artery pressure waveforms in real time. PMID:25760306

  20. Novel human-robot interface integrating real-time visual tracking and microphone-array signal processing

    NASA Astrophysics Data System (ADS)

    Mizoguchi, Hiroshi; Shigehara, Takaomi; Goto, Yoshiyasu; Hidai, Ken-ichi; Mishima, Taketoshi

    1998-10-01

    This paper proposes a novel human-robot interface that integrates real-time visual tracking and microphone-array signal processing. The proposed interface is intended to be used as a speech signal input method for a human-collaborative robot. Using it, the robot can clearly hear the human master's voice remotely, as if a wireless microphone were placed just in front of the master. A novel technique to form an 'acoustic focus' at the human face is developed. To track and locate the face dynamically, real-time face tracking and stereo vision are utilized. To form the acoustic focus at the face, a microphone array is utilized. Setting the gain and delay of each microphone properly makes it possible to form an acoustic focus at a desired location. The gain and delay are determined based upon the location of the face. Results of preliminary experiments and simulations demonstrate the feasibility of the proposed idea.
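
    A minimal sketch of the delay-and-sum idea behind the 'acoustic focus': given the (visually tracked) face position, each microphone signal is delayed by its relative propagation time from that point and the channels are summed. The array geometry, sampling rate and source position are assumed, and integer-sample delays are used for simplicity.

```python
import numpy as np

fs, c = 16000, 343.0                       # sampling rate (Hz) and speed of sound (m/s)
mics = np.array([[m * 0.05, 0.0] for m in range(8)])   # 8-mic linear array, 5 cm pitch
face = np.array([1.0, 0.5])                # tracked face position in metres (assumed)

# Synthetic test signal arriving from the face position.
t = np.arange(0, 0.1, 1 / fs)
src = np.sin(2 * np.pi * 440 * t)
dists = np.linalg.norm(mics - face, axis=1)
delays = np.round((dists - dists.min()) / c * fs).astype(int)   # relative delays (samples)
x = np.stack([np.roll(src, d) for d in delays])                 # per-microphone signals

def delay_and_sum(x, delays):
    """Advance each channel by its steering delay and average the channels."""
    aligned = np.stack([np.roll(ch, -d) for ch, d in zip(x, delays)])
    return aligned.mean(axis=0)

focused = delay_and_sum(x, delays)
n = src.size - delays.max()
print(np.corrcoef(focused[:n], src[:n])[0, 1])   # close to 1 when focused correctly
```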

  1. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
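
    For a single narrowband source in white noise, the ML bearing estimate reduces to scanning the conventional beamformer spectrum over candidate angles; the sketch below does this for an assumed uniform linear array and is not the time-frequency formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_snapshots = 8, 200
d = 0.5                                     # element spacing in wavelengths (assumed)
true_doa = np.deg2rad(20.0)

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))

# Simulated array snapshots: one narrowband source plus white noise.
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.3 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = np.outer(steering(true_doa), s) + noise
R = X @ X.conj().T / n_snapshots            # sample covariance matrix

# Grid search over bearing; the spectrum peaks near the true 20 degrees.
grid = np.deg2rad(np.arange(-90, 90.5, 0.5))
spectrum = np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in grid])
print(f"estimated DOA: {np.degrees(grid[spectrum.argmax()]):.1f} deg")
```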

  2. Array tomography: imaging stained arrays.

    PubMed

    Micheva, Kristina D; O'Rourke, Nancy; Busse, Brad; Smith, Stephen J

    2010-11-01

    Array tomography is a volumetric microscopy method based on physical serial sectioning. Ultrathin sections of a plastic-embedded tissue are cut using an ultramicrotome, bonded in an ordered array to a glass coverslip, stained as desired, and imaged. The resulting two-dimensional image tiles can then be reconstructed computationally into three-dimensional volume images for visualization and quantitative analysis. The minimal thickness of individual sections permits high-quality rapid staining and imaging, whereas the array format allows reliable and convenient section handling, staining, and automated imaging. Also, the physical stability of the arrays permits images to be acquired and registered from repeated cycles of staining, imaging, and stain elution, as well as from imaging using multiple modalities (e.g., fluorescence and electron microscopy). Array tomography makes it possible to visualize and quantify previously inaccessible features of tissue structure and molecular architecture. However, careful preparation of the tissue is essential for successful array tomography; these steps can be time-consuming and require some practice to perfect. In this protocol, tissue arrays are imaged using conventional wide-field fluorescence microscopy. Images can be captured manually or, with the appropriate software and hardware, the process can be automated. PMID:21041399

  3. A data fusion algorithm for multi-sensor microburst hazard assessment

    NASA Technical Reports Server (NTRS)

    Wanke, Craig R.; Hansman, R. John

    1994-01-01

    A recursive model-based data fusion algorithm for multi-sensor microburst hazard assessment is described. An analytical microburst model is used to approximate the actual windfield, and a set of 'best' model parameters are estimated from measured winds. The winds corresponding to the best parameter set can then be used to compute alerting factors such as microburst position, extent, and intensity. The estimation algorithm is based on an iterated extended Kalman filter which uses the microburst model parameters as state variables. Microburst state dynamic and process noise parameters are chosen based on measured microburst statistics. The estimation method is applied to data from a time-varying computational simulation of a historical microburst event to demonstrate its capabilities and limitations. Selection of filter parameters and initial conditions is discussed. Computational requirements and datalink bandwidth considerations are also addressed.

  4. Light absorption processes and optimization of ZnO/CdTe core-shell nanowire arrays for nanostructured solar cells

    NASA Astrophysics Data System (ADS)

    Michallon, Jérôme; Bucci, Davide; Morand, Alain; Zanuccoli, Mauro; Consonni, Vincent; Kaminski-Cachopo, Anne

    2015-02-01

    The absorption processes of extremely thin absorber solar cells based on ZnO/CdTe core-shell nanowire (NW) arrays with square, hexagonal or triangular arrangements are investigated through systematic computations of the ideal short-circuit current density using three-dimensional rigorous coupled wave analysis. The geometrical dimensions are optimized for optically designing these solar cells: the optimal NW diameter, height and array period are 200 ± 10 nm, 1-3 μm and 350-400 nm, respectively, for the square arrangement with a CdTe shell thickness of 40-60 nm. The effects of the CdTe shell thickness on the absorption of ZnO/CdTe NW arrays are revealed through the study of two key optical modes: the first confines light within individual NWs, while the second interacts strongly with the NW arrangement. It is also shown that the reflectivity of the substrate can improve Fabry-Perot resonances within the NWs: the ideal short-circuit current density is increased by 10% for the ZnO/fluorine-doped tin oxide (FTO)/ideal reflector as compared to the ZnO/FTO/glass substrate. Furthermore, the optimized square arrangement absorbs light more efficiently than both optimized hexagonal and triangular arrangements. Eventually, the enhancement factor of the ideal short-circuit current density is calculated as high as 1.72 with respect to planar layers, showing the high optical potentiality of ZnO/CdTe core-shell NW arrays.

  5. Results of an analysis of pre-collapse NTS seismic data using split array cross-correlator processing

    SciTech Connect

    Doll, W.E. (Dept. of Geology)

    1990-07-31

    In this study, the authors applied the split array cross-correlation method to a set of pre-collapse data from the Nevada Test Site. The motivation for the study came from preliminary tests of the method on data from an Imperial Valley flow test, in which the location of the event fell close to the location of the injection well, implying that the method might be effective for noisy or emergent signal detection. This study, using NTS data, is the first detailed analysis of the SACC technique for location of seismic events. This study demonstrates that cross-correlation must be used very carefully, if it can be used at all, for locating primary seismic events. Radiation patterns and local structure which cause significant variations in the waveform can make cross-correlation techniques unreliable. Further study is required to determine whether such methods can be used effectively on enveloped traces. At a minimum, a large array or a set of dense arrays would be needed to locate events. When it is reasonable to assume similar waveforms at all stations in an array, the evidence in this report indicates that the SACC method is robust over a wide range of values of the control parameters. Because it provides an estimate of the likelihood for each point in a grid, the SACC method would be useful in noisy data where the approximate location of the epicenter is known. The images formed by SACC processing could be treated as a type of probability contour map for such data. 5 refs., 12 figs.
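
    As a small illustration of the cross-correlation ingredient of split-array processing, the sketch below estimates the relative arrival-time lag between two synthetic traces from the peak of their cross-correlation; the waveform, noise level and sampling rate are assumed, and this is not the SACC location algorithm itself.

```python
import numpy as np

fs = 500.0                                   # Hz, sampling rate (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(6)

# Synthetic emergent arrivals: a damped wavelet starting at 0.60 s and 0.66 s.
def trace(onset):
    w = np.zeros_like(t)
    m = t >= onset
    w[m] = np.exp(-(t[m] - onset) * 8) * np.sin(2 * np.pi * 12 * (t[m] - onset))
    return w + 0.05 * rng.standard_normal(t.size)

x1, x2 = trace(0.60), trace(0.66)

# Full cross-correlation; the lag of its maximum estimates the arrival-time shift.
cc = np.correlate(x2 - x2.mean(), x1 - x1.mean(), mode="full")
lags = np.arange(-x1.size + 1, x1.size)
print(f"estimated lag: {lags[cc.argmax()] / fs * 1000:.0f} ms")   # ~60 ms
```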

  6. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Lab

    NASA Technical Reports Server (NTRS)

    Brewster, Linda L.; Howard, Richard T.; Johnston, A. S.; Carrington, Connie; Mitchell, Jennifer D.; Cryan, Scott P.

    2008-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL

  7. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Brewster, L.; Johnston, A.; Howard, R.; Mitchell, J.; Cryan, S.

    2007-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL

  8. Application Of The Time-Frequency Polarization Analysis Of The Wavefield For Seismic Noise Array Processing

    NASA Astrophysics Data System (ADS)

    Galiana-Merino, J. J.; Rosa-Cintas, S.; Rosa-Herranz, J. L.; Molina-Palacios, S.; Martinez-Espla, J. J.

    2011-12-01

    Microzonation studies using ambient noise measurements constitute an extended and useful procedure for determining local soil characteristics and their response to an earthquake. Several methods exist for analyzing the noise measurements, the most popular being the horizontal-to-vertical spectral ratio (H/V) and array techniques such as the frequency-wavenumber (F-K) transform. Many works exist on this topic, and there is still an ongoing debate about ambient noise composition, i.e. whether body or surface waves constitute most of it, showing the importance of identifying the different kinds of waves present in a seismic record. In this work we use a new method of time-frequency polarization analysis, based on the stationary wavelet packet transform, to investigate how the polarization characteristics of the wavefield influence the application of ambient noise techniques. The signals are divided into different bands according to their reciprocal ellipticity values, and the H/V method and the F-K array analysis are then computed for each band. The qualitative and quantitative comparison between the original curves and those obtained for the analyzed intervals provides information about the signal composition, showing that the major components of the seismic noise present reciprocal ellipticity values lower than 0.5. The efficient application of the studied techniques using just the main part of the entire signal (reciprocal ellipticity in [0, 0.5]) is also evaluated, showing favorable results.
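
    A minimal sketch of the H/V ratio itself (smoothed spectral ratio of the combined horizontal components to the vertical one), applied to a synthetic three-component noise record; the window, smoothing and test signal are assumed and have nothing to do with the wavelet-packet analysis of the paper.

```python
import numpy as np

fs = 100.0                                     # Hz, sampling rate (assumed)
n = 2**14
rng = np.random.default_rng(5)
t = np.arange(n) / fs
# Synthetic three-component ambient noise with a horizontal resonance near 2 Hz.
ns = rng.standard_normal(n) + 3 * np.sin(2 * np.pi * 2.0 * t)
ew = rng.standard_normal(n) + 3 * np.cos(2 * np.pi * 2.0 * t)
z = rng.standard_normal(n)

def smoothed_spectrum(x, width=20):
    amp = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    return np.convolve(amp, np.ones(width) / width, mode="same")

freqs = np.fft.rfftfreq(n, 1 / fs)
h = np.sqrt(0.5 * (smoothed_spectrum(ns)**2 + smoothed_spectrum(ew)**2))
hv = h / smoothed_spectrum(z)
band = (freqs > 0.5) & (freqs < 20.0)
print(f"H/V peak at ~{freqs[band][hv[band].argmax()]:.2f} Hz")
```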

  9. A new process for fabricating nanodot arrays on selective regions with diblock copolymer thin film

    NASA Astrophysics Data System (ADS)

    Park, Dae-Ho

    2007-09-01

    A procedure for micropatterning a single layer of nanodot arrays in selective regions is demonstrated by using thin films of polystyrene-b-poly(t-butyl acrylate) (PS-b-PtBA) diblock copolymer. The thin-film self-assembled into hexagonally arranged PtBA nanodomains in a PS matrix on a substrate by solvent annealing with 1,4-dioxane. The PtBA nanodomains were converted into poly(acrylic acid) (PAA) having carboxylic-acid-functionalized nanodomains by exposure to hydrochloric acid vapor, or were removed by ultraviolet (UV) irradiation to generate vacant sites without any functional groups due to the elimination of PtBA domains. By sequential treatment with aqueous sodium bicarbonate and aqueous zinc acetate solution, zinc cations were selectively loaded only on the carboxylic-acid-functionalized nanodomains prepared via hydrolysis. Macroscopic patterning through a photomask via UV irradiation, hydrolysis, sequential zinc cation loading and calcination left a nanodot array of zinc oxide on a selectively UV-shaded region.

  10. NASA 1990 Multisensor Airborne Campaigns (MACs) for ecosystem and watershed studies

    NASA Technical Reports Server (NTRS)

    Wickland, Diane E.; Asrar, Ghassem; Murphy, Robert E.

    1991-01-01

    The Multisensor Airborne Campaign (MAC) focus within NASA's former Land Processes research program was conceived to achieve the following objectives: to acquire relatively complete, multisensor data sets for well-studied field sites, to add a strong remote sensing science component to ecology-, hydrology-, and geology-oriented field projects, to create a research environment that promotes strong interactions among scientists within the program, and to more efficiently utilize and compete for the NASA fleet of remote sensing aircraft. Four new MACs were conducted in 1990: the Oregon Transect Ecosystem Research (OTTER) project along an east-west transect through central Oregon, the Forest Ecosystem Dynamics (FED) project at the Northern Experimental Forest in Howland, Maine, the MACHYDRO project in the Mahantango Creek watershed in central Pennsylvania, and the Walnut Gulch project near Tombstone, Arizona. The OTTER project is testing a model that estimates the major fluxes of carbon, nitrogen, and water through temperate coniferous forest ecosystems. The focus in the project is on short time-scale (days to a year) variations in ecosystem function. The FED project is concerned with modeling vegetation changes of forest ecosystems using remotely sensed observations to extract biophysical properties of forest canopies. The focus in this project is on long time-scale (decades to millennia) changes in ecosystem structure. The MACHYDRO project is studying the role of soil moisture and its regulating effects on hydrologic processes. The focus of the study is to delineate soil moisture differences within a basin and their changes with respect to evapotranspiration, rainfall, and streamflow. The Walnut Gulch project is focused on the effects of soil moisture in the energy and water balance of arid and semiarid ecosystems and their feedbacks to the atmosphere via thermal forcing.

  11. Phonon processes in vertically aligned silicon nanowire arrays produced by low-cost all-solution galvanic displacement method

    NASA Astrophysics Data System (ADS)

    Banerjee, Debika; Trudeau, Charles; Gerlein, Luis Felipe; Cloutier, Sylvain G.

    2016-03-01

    The nanoscale engineering of silicon can significantly change its bulk optoelectronic properties to make it more favorable for device integration. Phonon process engineering is one way to enhance inter-band transitions in silicon's indirect band structure. This paper demonstrates phonon localization at the tip of silicon nanowires fabricated by galvanic displacement using wet electroless chemical etching of a bulk silicon wafer. High-resolution Raman micro-spectroscopy reveals that such arrayed structures of silicon nanowires display phonon localization behaviors, which could help their integration into future generations of nano-engineered silicon nanowire-based devices such as photodetectors and solar cells.

  12. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Choi, Yong; Hong, Key Jo; Kang, Jihoon; Jung, Jin Ho; Huh, Youn Suk; Lim, Hyun Keong; Kim, Sang Su; Kim, Byung-Tae; Chung, Yonghyun

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time to digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. As compared to PMT the Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMT and anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4×4 array, pixel size: 3 mm×3 mm) coupled 1:1 with LYSO scintillators (4×4 array, pixel size: 3 mm×3 mm×20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce GAPD channel number and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information all together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and then transferred to a host computer for coincidence sorting and image reconstruction. In order to

  13. Falling Person Detection Using Multi-Sensor Signal Processing

    NASA Astrophysics Data System (ADS)

    Toreyin, B. Ugur; Soyer, A. Birey; Onaran, Ibrahim; Cetin, E. Enis

    2007-12-01

    Falls are one of the most important problems for frail and elderly people living independently. Early detection of falls is vital to providing a safe and active lifestyle for the elderly. Sound, passive infrared (PIR) and vibration sensors can be placed in a supportive home environment to provide information about the daily activities of an elderly person. In this paper, signals produced by sound, PIR and vibration sensors are simultaneously analyzed to detect falls. Hidden Markov Models (HMMs) are trained for regular and unusual activities of an elderly person and a pet for each sensor signal. Decisions of the HMMs are fused together to reach a final decision.
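
    As a rough illustration of the per-sensor HMM classification and decision fusion described above, the following Python sketch uses the hmmlearn package (an assumption; the paper does not name an implementation). One HMM is trained per activity class for each sensor stream, a segment is labeled by the highest-scoring model, and the per-sensor decisions are fused by a simple majority vote.

        import numpy as np
        from hmmlearn import hmm  # assumed library, not specified by the paper

        def train_activity_models(train_data, n_states=3):
            """train_data: {activity_label: (n_samples, n_features) feature array} for one sensor."""
            models = {}
            for label, X in train_data.items():
                m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
                m.fit(X)
                models[label] = m
            return models

        def classify_segment(models, X):
            """Label a feature segment with the activity whose HMM gives the highest log-likelihood."""
            return max(models, key=lambda label: models[label].score(X))

        def fuse_decisions(per_sensor_labels):
            """Majority vote across the sound, PIR and vibration sensor decisions."""
            labels, counts = np.unique(per_sensor_labels, return_counts=True)
            return labels[np.argmax(counts)]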

  14. Multisensor airborne imagery collection and processing onboard small unmanned systems

    NASA Astrophysics Data System (ADS)

    Linne von Berg, Dale; Anderson, Scott A.; Bird, Alan; Holt, Niel; Kruer, Melvin; Walls, Thomas J.; Wilson, Michael L.

    2010-04-01

    FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) is an ONR funded effort to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). This program is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL). FEATHAR has developed and integrated EyePod, a combined long-wave infrared (LWIR) and visible to near infrared (VNIR) optical survey & inspection system, with NuSAR, a combined dual band synthetic aperture radar (SAR) system. These sensors are being tested in conjunction with other ground and airborne sensor systems to demonstrate intelligent real-time cross-sensor cueing and in-air data fusion. Results from test flights of the EyePod and NuSAR sensors will be presented.

  15. Primary Dendrite Array: Observations from Ground-Based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra N.; Grugel, Richard N.; Erdman, Robert G.; Poirier, David R.

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST). Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K per centimeter (MICAST6) and 28 K per centimeter (MICAST7). Directional solidification involved a growth speed step increase (MICAST6: from 5 to 50 micrometers per second) and a speed decrease (MICAST7: from 20 to 10 micrometers per second). Distribution and morphology of primary dendrites are currently being characterized in these samples, and also in samples solidified on Earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  16. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micron/s) and a speed decrease (MICAST7-from 20 to 10 micron/s). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  17. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications

    SciTech Connect

    Wlodarczyk, Krystian L.; Maier, Robert R. J.; Hand, Duncan P.; Bryce, Emma; Hutson, David; Kirk, Katherine; Schwartz, Noah; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil; Strachan, Mel

    2014-02-15

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  18. Using seismic array-processing to enhance observations of PcP waves to constrain lowermost mantle structure

    NASA Astrophysics Data System (ADS)

    Ventosa, S.; Romanowicz, B. A.

    2014-12-01

    The topography of the core-mantle boundary (CMB) and the structure and composition of the D" region are essential to understanding the interaction between the Earth's mantle and core. A variety of seismic data-processing techniques have been used to detect and measure travel-times and amplitudes of weak short-period teleseismic body-wave phases that interact with the CMB and D", which is crucial to constrain properties of the lowermost mantle at short wavelengths. Major challenges in enhancing these observations are: (1) increasing the signal-to-noise ratio of the target phases and (2) isolating them from unwanted neighboring phases. Seismic array-processing can address these problems by combining signals from groups of seismometers and exploiting information that allows the coherent signals to be separated from the noise. Here, we focus on the study of the Pacific large low-shear-velocity province (LLSVP) and surrounding areas using differential travel-times and amplitude ratios of the P and PcP phases, and their depth phases. We particularly design scale-dependent slowness filters that do not compromise time-space resolution. This is a local delay-and-sum (i.e., slant-stack) approach implemented in the time-scale domain using the wavelet transform to enhance time-space resolution (i.e., reduce array aperture). We group stations from USArray and other nearby networks, and from Hi-Net and F-net in Japan, to define many overlapping local arrays. The aperture of each array varies mainly according to (1) the spatial resolution target and (2) the slowness resolution required to isolate the target phases at each period. Once the target phases are well separated, we measure their differential travel-times and amplitude ratios, and we project these to the CMB. In this process, we carefully analyze and, when possible and significant, correct for the main sources of bias, i.e., mantle heterogeneities, earthquake mislocation and intrinsic attenuation. We illustrate our approach in a series of
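
    The local delay-and-sum (slant-stack) step can be sketched in Python as below. This is a plain time-domain version over one local sub-array; the paper's method additionally applies the stack in the time-scale (wavelet) domain with scale-dependent slowness filters, which is not reproduced here.

        import numpy as np

        def slant_stack(traces, dt, offsets, slowness):
            """Delay-and-sum a local sub-array along one trial slowness.

            traces:   (n_stations, n_samples) seismograms.
            dt:       sample interval (s).
            offsets:  station distances from the sub-array reference point (km).
            slowness: trial horizontal slowness (s/km).
            Returns the beam trace; scanning slowness values separates PcP from
            neighboring arrivals by their different moveouts.
            """
            n_sta, _ = traces.shape
            beam = np.zeros(traces.shape[1])
            for trace, x in zip(traces, offsets):
                shift = int(round(slowness * x / dt))  # samples needed to undo the moveout
                beam += np.roll(trace, -shift)
            return beam / n_sta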

  19. From spin noise to systematics: stochastic processes in the first International Pulsar Timing Array data release

    NASA Astrophysics Data System (ADS)

    Lentati, L.; Shannon, R. M.; Coles, W. A.; Verbiest, J. P. W.; van Haasteren, R.; Ellis, J. A.; Caballero, R. N.; Manchester, R. N.; Arzoumanian, Z.; Babak, S.; Bassa, C. G.; Bhat, N. D. R.; Brem, P.; Burgay, M.; Burke-Spolaor, S.; Champion, D.; Chatterjee, S.; Cognard, I.; Cordes, J. M.; Dai, S.; Demorest, P.; Desvignes, G.; Dolch, T.; Ferdman, R. D.; Fonseca, E.; Gair, J. R.; Gonzalez, M. E.; Graikou, E.; Guillemot, L.; Hessels, J. W. T.; Hobbs, G.; Janssen, G. H.; Jones, G.; Karuppusamy, R.; Keith, M.; Kerr, M.; Kramer, M.; Lam, M. T.; Lasky, P. D.; Lassus, A.; Lazarus, P.; Lazio, T. J. W.; Lee, K. J.; Levin, L.; Liu, K.; Lynch, R. S.; Madison, D. R.; McKee, J.; McLaughlin, M.; McWilliams, S. T.; Mingarelli, C. M. F.; Nice, D. J.; Osłowski, S.; Pennucci, T. T.; Perera, B. B. P.; Perrodin, D.; Petiteau, A.; Possenti, A.; Ransom, S. M.; Reardon, D.; Rosado, P. A.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Siemens, X.; Smits, R.; Stairs, I.; Stappers, B.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Taylor, S. R.; Theureau, G.; Tiburzi, C.; Toomey, L.; Vallisneri, M.; van Straten, W.; Vecchio, A.; Wang, J.-B.; Wang, Y.; You, X. P.; Zhu, W. W.; Zhu, X.-J.

    2016-05-01

    We analyse the stochastic properties of the 49 pulsars that comprise the first International Pulsar Timing Array (IPTA) data release. We use Bayesian methodology, performing model selection to determine the optimal description of the stochastic signals present in each pulsar. In addition to spin-noise and dispersion-measure (DM) variations, these models can include timing noise unique to a single observing system, or frequency band. We show that the improved radio-frequency coverage and the presence of overlapping data from different observing systems in the IPTA data set enable us to separate both system- and band-dependent effects with much greater efficacy than in the individual pulsar timing array (PTA) data sets. For example, we show that PSR J1643-1224 has, in addition to DM variations, significant band-dependent noise that is coherent between PTAs, which we interpret as coming from time-variable scattering or refraction in the ionized interstellar medium. Failing to model these different contributions appropriately can dramatically alter the astrophysical interpretation of the stochastic signals observed in the residuals. In some cases, the spectral exponent of the spin-noise signal can vary from 1.6 to 4 depending upon the model, which has direct implications for the long-term sensitivity of the pulsar to a stochastic gravitational-wave (GW) background. By using a more appropriate model, however, we can greatly improve a pulsar's sensitivity to GWs. For example, including system- and band-dependent signals in the PSR J0437-4715 data set improves the upper limit on a fiducial GW background by ˜60 per cent compared to a model that includes DM variations and spin-noise only.

  20. Satellite Data Simulator Unit: A Multisensor, Multispectral Satellite Simulator Package

    NASA Technical Reports Server (NTRS)

    Masunaga, Hirohiko; Matsui, Toshihisa; Tao, Wei-Kuo; Hou, Arthur Y.; Kummerow, Christian D.; Nakajima, Teruyuki; Bauer, Peter; Olson, William S.; Sekiguchi, Miho; Nakajima, Teruyuki

    2010-01-01

    Several multisensor simulator packages are being developed by different research groups across the world. Such simulator packages [e.g., COSP, CRTM, ECSIM, RTTOV, ISSARS (under development), and SDSU (this article), among others] share overall aims, although some are targeted more on particular satellite programs or specific applications (for research purposes or for operational use) than others. The SDSU, or Satellite Data Simulator Unit, is a general-purpose simulator composed of Fortran 90 codes and applicable to spaceborne microwave radiometers, radars, and visible/infrared imagers including, but not limited to, the sensors listed in a table of satellite programs particularly suitable for multisensor data analysis: some are single satellite missions carrying two or more instruments, while others are constellations of satellites flying in formation. The TRMM and A-Train are ongoing satellite missions carrying diverse sensors that observe clouds and precipitation, and will be continued or augmented within the decade to come by future multisensor missions such as GPM and EarthCARE. The ultimate goals of these present and proposed satellite programs are not restricted to clouds and precipitation but are to better understand their interactions with atmospheric dynamics/chemistry and feedback to climate. The SDSU's applicability is not technically limited to hydrometeor measurements either, but may be extended to air temperature and humidity observations by tuning the SDSU to sounding channels. As such, the SDSU and other multisensor simulators would potentially contribute to a broad area of climate and atmospheric sciences. The SDSU is not optimized to any particular orbital geometry of satellites. It is applicable not only to low-Earth orbiting platforms as listed in Table 1, but also to geostationary meteorological satellites. Although no geosynchronous satellite carries microwave instruments at present or in the near future, the SDSU would be

  1. Fault detection and isolation for multisensor navigation systems

    NASA Technical Reports Server (NTRS)

    Kline, Paul A.; Vangraas, Frank

    1991-01-01

    Increasing attention is being given to the problem of erroneous measurement data for multisensor navigation systems. A recursive estimator can be used in conjunction with a 'snapshot' batch estimator to provide fault detection and isolation (FDI) for these systems. A recursive estimator uses past system states to form a new state estimate and compares it to the calculated state based on a new set of measurements. A 'snapshot' batch estimator uses a set of measurements collected simultaneously and compares solutions based on subsets of measurements. The 'snapshot' approach requires redundant measurements in order to detect and isolate faults. FDI is also referred to as Receiver Autonomous Integrity Monitoring (RAIM).
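
    A minimal sketch of the 'snapshot' idea, assuming a linearized measurement model and a simple residual-norm test (the threshold and noise model are illustrative, not those of the paper):

        import numpy as np

        def snapshot_fdi(H, z, sigma, threshold):
            """Residual test on one batch of redundant measurements.

            H: (m, n) measurement geometry matrix, with m > n for redundancy.
            z: (m,) measurement vector; sigma: measurement noise standard deviation.
            Returns (fault_detected, test_statistic). Isolation would repeat the
            test on measurement subsets, as outlined in the abstract.
            """
            x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
            residual = z - H @ x_hat
            statistic = np.linalg.norm(residual) / sigma
            return statistic > threshold, statistic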

  2. A Bayesian Approach To Multi-Sensor Track Correlation

    NASA Astrophysics Data System (ADS)

    Horsley, M.

    2010-09-01

    One of the primary goals of Space Situational Awareness is to locate objects in space and characterize their orbital parameters. This is typically performed using a single sensor. The use of multiple sensors offers the potential to improve system performance over what could be achieved by the use of a single sensor through increased visibility, increased accuracy, etc… However, to realize these improvements in practice, the association of data collected by the different sensors has to be performed in a reliable manner, with a quantifiable confidence level reported for each data association. Furthermore, sources of error such as track uncertainty and sensor bias have to be taken into account in order for the derived confidence levels to be valid. This paper will describe an approach to compute probabilities of association to support the integration of data collected by multiple sensors on a group of objects. Multi-sensor data association is a fundamental problem in distributed multi-target multi-sensor tracking systems and involves finding the most probable association between object tracks. This is a challenging problem for a number of reasons. Each sensor may only observe a portion of the total number of objects, the object spacing may be small compared to a sensor’s reported track accuracy, and each sensor may be biased. In addition, the problem space grows exponentially with the number of objects and sensors, making direct enumeration of the possible associations impractical for even modestly sized problems. In this paper, the multi-sensor, multi-target likelihood function will be defined, with sensor bias included in the likelihood function. Sensor bias priors will be introduced and used to marginalize out the sensor bias. This marginalized likelihood will be incorporated into a Markov chain Monte Carlo data association framework and used to compute probabilities of association. In addition, the number of objects is treated as an unknown and probability

  3. Multi-sensor QPE in the National Mosaic and QPE (NMQ) System

    NASA Astrophysics Data System (ADS)

    Howard, K.; Zhang, J.

    2007-05-01

    Accurate quantitative precipitation estimation (QPE) and forecasts are critical for flood and flash flood warnings and for water resource management. Significant advancements in recent years in computational resources, networking, and remote sensing technologies have provided great opportunities to develop more accurate QPE than was previously possible. Given the complex spatial and temporal characteristics of precipitation processes, no single observing system can provide complete and accurate measurements of surface precipitation for the wide spectrum of hydrological applications. For instance, rain gauges make direct measurements of surface precipitation, but the gauge stations are often sparsely distributed and the measurements are subject to errors due to temporary blockage of the collecting orifice by frozen hydrometeors, wind effects on the tipping buckets, telemetry errors, etc. Weather radars make semi-direct measurements of precipitation, but there are uncertainties associated with the reflectivity (Z)-rain rate (R) conversion in addition to errors associated with beam blockage and non-uniform vertical profiles of reflectivity. Satellite data are free of such obstructions and provide the best coverage of precipitation systems among all observational networks, yet they only observe cloud tops and provide indirect measurements of precipitation. The multi-sensor QPE in the NMQ system makes use of the advantages of each observing system and produces integrated precipitation products. The radar data in the optimal sampling area are used to dynamically calibrate the satellite infrared field and to produce a satellite QPE product. The satellite QPE is combined with radar-based QPE to fill in regions with poor radar coverage, and a blended radar-satellite QPE is generated. The rain gauge data are then used to adjust the magnitude of the radar-satellite blended QPE field and to remove bias. This paper presents an overview of the NMQ multi-sensor QPE schemes
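
    A highly simplified Python sketch of the blending and gauge-adjustment steps follows; the quality weighting and mean-field bias correction used here are illustrative stand-ins for the more elaborate NMQ schemes.

        import numpy as np

        def blend_and_adjust(radar_qpe, sat_qpe, radar_quality, gauge_values, gauge_rows, gauge_cols):
            """Blend radar and satellite QPE grids and remove bias using rain gauges.

            radar_qpe, sat_qpe: 2-D precipitation grids (mm/h) on the same grid.
            radar_quality: 0..1 grid, near 1 where radar coverage is good.
            gauge_values, gauge_rows, gauge_cols: gauge observations and grid indices.
            """
            # Fill poor-radar-coverage regions with the satellite estimate.
            blended = radar_quality * radar_qpe + (1.0 - radar_quality) * sat_qpe
            # Mean-field bias factor from the gauge/analysis ratio at gauge sites.
            analysis_at_gauges = blended[gauge_rows, gauge_cols]
            bias = gauge_values.sum() / max(analysis_at_gauges.sum(), 1e-6)
            return bias * blended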

  4. ARTMAP neural network for land cover classification and multisensor fusion in remote sensing

    NASA Astrophysics Data System (ADS)

    Liu, Weiguo

    Land cover maps are one of the primary kinds of geospatial information provided by remote sensing. Land cover classification is essential for terrestrial ecosystem modeling and monitoring, as well as climate modeling and prediction. During the last decade, artificial neural network (ANN) classifiers have been used increasingly in land cover classification and detection of land cover change using remote sensing data. However, understanding of the behavior and characteristics of ANNs lags behind that of conventional techniques such as the maximum likelihood classifier. The focus of this dissertation is improved use of ANNs, particularly ARTMAP, for land cover mapping using remote sensing. Four topics are pursued. First, an ARTMAP classifier increases the accessibility of ARTMAP to users in remote sensing, and a new set of visualization tools aids interpretation of the internal dynamics of ARTMAP. Second, a hybrid classification approach uses two nonparametric classifiers, ARTMAP and decision trees, to produce a spatially explicit uncertainty metric which can be provided with thematic maps. Validation of this approach using two data sets demonstrates that classification accuracy is strongly related to confidence levels. Third, a new pruning technique combines prediction accuracy and instance counting. This pruning algorithm leads to 2 to 5% improvement in classification accuracy in tests using three Landsat TM data sets. The pruning technique also reduces the category proliferation problem in ARTMAP. Fourth, a new ARTMAP model for multisensor image fusion estimates subpixel land cover proportions from coarser resolution images (MODIS) based on training from finer resolution images (Landsat TM). This approach builds multiscale representations of land cover such that diverse processes can be examined at appropriate spatial scales. ARTMAP consistently performs better than conventional linear mixture models for estimating subpixel fractions. The overall benefits of the

  5. Source Depth Estimation Using a Horizontal Array by Matched-Mode Processing in the Frequency-Wavenumber Domain

    NASA Astrophysics Data System (ADS)

    Nicolas, Barbara; Mars, Jérôme I.; Lacoume, Jean-Louis

    2006-12-01

    In shallow water environments, matched-field processing (MFP) and matched-mode processing (MMP) are proven techniques for source localization. In these environments, the acoustic field propagates at long range as depth-dependent modes. Given knowledge of the modes, it is possible to estimate source depth. In MMP, the pressure field is typically sampled over depth with a vertical line array (VLA) in order to extract the mode amplitudes. In this paper, we focus on horizontal line arrays (HLAs) as they are generally more practical for at-sea applications. Considering an impulsive low-frequency source (1-100 Hz) in a shallow water environment (100-400 m), we propose an efficient method to estimate source depth by modal decomposition of the pressure field recorded on an HLA of sensors. Mode amplitudes are estimated using the frequency-wavenumber transform, which is the 2D Fourier transform of a time-distance section. We first study the robustness of the presented method against noise and against environmental mismatches on simulated data. Then, the method is applied both to at-sea and laboratory data. We also show that the source depth estimation is drastically improved by incorporating the sign of the mode amplitudes.
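
    The frequency-wavenumber transform at the heart of the method is simply a 2D FFT of the time-distance section; a minimal Python sketch (uniform sensor spacing assumed) is given below. Extracting individual mode amplitudes and inverting for source depth involve further steps not shown here.

        import numpy as np

        def fk_transform(section, dt, dx):
            """Frequency-wavenumber transform of a time-distance section from an HLA.

            section: (n_sensors, n_samples) pressure field, sensors equally spaced.
            dt, dx:  temporal and spatial sampling intervals.
            Returns (freqs, wavenumbers, spectrum); modal energy appears as ridges
            in the f-k plane, from which mode amplitudes can be picked.
            """
            spectrum = np.fft.fftshift(np.fft.fft2(section))
            n_x, n_t = section.shape
            freqs = np.fft.fftshift(np.fft.fftfreq(n_t, d=dt))
            wavenumbers = np.fft.fftshift(np.fft.fftfreq(n_x, d=dx))
            return freqs, wavenumbers, spectrum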

  6. VLSI processor with a configurable processing element array for balanced feature extraction in high-resolution images

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbo; Shibata, Tadashi

    2014-01-01

    A VLSI processor employing a configurable processing element array (PEA) is developed for a newly proposed balanced feature extraction algorithm. In the algorithm, the input image is divided into square regions and the number of features is determined by noise effect analysis in each region. Regions of different sizes are used according to the resolutions and contents of input images. Therefore, inside the PEA, processing elements are hierarchically grouped for feature extraction in regions of different sizes. A proof-of-concept chip is fabricated using a 0.18 µm CMOS technology with a 32 × 32 PEA. From measurement results, a speed of 7.5 kfps is achieved for feature extraction in 128 × 128 pixel regions when operating the chip at 45 MHz, and a speed of 55 fps is also achieved for feature extraction in 1920 × 1080 pixel images.

  7. Apparatus for measuring local stress of metallic films, using an array of parallel laser beams during rapid thermal processing

    NASA Astrophysics Data System (ADS)

    Huang, R.; Taylor, C. A.; Himmelsbach, S.; Ceric, H.; Detzel, T.

    2010-05-01

    The novel apparatus described here was developed to investigate the thermo-mechanical behavior of metallic films on a substrate by measuring the wafer curvature. It comprises an optical module producing and measuring an array of parallel laser beams, a high-resolution scanning stage, a rapid thermal processing (RTP) chamber and several accessory gas control modules. Unlike most traditional systems, which only calculate the average wafer curvature, this system has the capability to measure the curvature locally in 30 ms. Consequently, the real-time development of biaxial stress in thin films can be fully captured during any thermal treatment such as temperature cycling or annealing. In addition, the multiple parallel laser beam technique cancels electrical, vibrational and other random noise sources that would otherwise make an in situ measurement very difficult. Furthermore, other advanced features such as in situ acid treatment and active cooling extend the experimental conditions to provide new insights into thin film properties and material behavior.

  8. Modeling change from large-scale high-dimensional spatio-temporal array data

    NASA Astrophysics Data System (ADS)

    Lu, Meng; Pebesma, Edzer

    2014-05-01

    The massive data that come from Earth observation satellites and other sensors provide significant information for modeling global change. At the same time, the high dimensionality of the data has brought challenges in data acquisition, management, effective querying and processing. In addition, the output of Earth system modeling tends to be data intensive and needs methodologies for storage, validation, analysis and visualization, e.g. as maps. An important proportion of Earth system observations and simulated data can be represented as multi-dimensional array data, which has received increasing attention in big data management and spatial-temporal analysis. Study cases will be developed in natural sciences such as climate change, hydrological modeling and sediment dynamics, for which addressing big data problems is necessary. Multi-dimensional array-based database management and analytics systems such as Rasdaman, SciDB, and R will be applied to these cases. From these studies we hope to learn the strengths and weaknesses of these systems, how they might work together and how the semantics of array operations differ, through addressing the problems associated with big data. Research questions include:
    • How can we reduce dimensions spatially and temporally, or thematically? (a minimal sketch follows this list)
    • How can we extend existing GIS functions to work on multidimensional arrays?
    • How can we combine data sets of different dimensionality or different resolutions?
    • Can map algebra be extended to an intelligible array algebra?
    • What are effective semantics for array programming of dynamic data driven applications?
    • In which sense are space and time special, as dimensions, compared to other properties?
    • How can we make the analysis of multi-spectral, multi-temporal and multi-sensor earth observation data easy?
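
    As a toy example of the first question (spatial and temporal dimension reduction), the following Python/NumPy sketch reduces a synthetic (time, lat, lon) cube; the shapes and reduction choices are purely illustrative.

        import numpy as np

        # Synthetic daily global grids for one year: (time, lat, lon).
        cube = np.random.rand(360, 180, 360)

        # Temporal reduction: 12 monthly means of 30 days each.
        monthly_mean = cube.reshape(12, 30, 180, 360).mean(axis=1)

        # Spatial reduction: zonal mean over longitude.
        zonal_mean = cube.mean(axis=2)

        # Spatial aggregation: 2x2 coarsening of the grid cells.
        coarse = cube.reshape(360, 90, 2, 180, 2).mean(axis=(2, 4))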

  9. Focal plane array with modular pixel array components for scalability

    SciTech Connect

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  10. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  11. Matched Bearing Processing for Airborne Source Localization by an Underwater Horizontal Line Array

    NASA Astrophysics Data System (ADS)

    Peng, Zhao-Hui; Li, Zheng-Lin; Wang, Guang-Xu

    2010-11-01

    The location of an airborne source is estimated from signals measured by a horizontal line array (HLA), based on the fact that a signal transmitted by an airborne source will reach an underwater hydrophone in different ways: via a direct refracted path, via one or more bottom and surface reflections, and via the so-called lateral wave. As a result, when an HLA near the airborne source is used for beamforming, several peaks at different bearing angles will appear. By matching the experimental beamforming outputs with the predicted outputs for all source locations, the most likely location is the one which gives the minimum difference. An experiment was conducted for airborne source localization in the Yellow Sea in October 2008. An HLA was laid on the sea bottom at a depth of 30 m. A high-power loudspeaker was hung from a research ship floating near the HLA and sent out LFM pulses. The estimated location of the loudspeaker agrees well with the GPS measurements.

  12. Elastomeric inverse moulding and vacuum casting process characterization for the fabrication of arrays of concave refractive microlenses

    NASA Astrophysics Data System (ADS)

    Desmet, L.; Van Overmeire, S.; Van Erps, J.; Ottevaere, H.; Debaes, C.; Thienpont, H.

    2007-01-01

    We present a complete and precise quantitative characterization of the different process steps used in an elastomeric inverse moulding and vacuum casting technique. We use the latter replication technique to fabricate concave replicas from an array of convex thermal reflow microlenses. During the inverse elastomeric moulding we obtain a secondary silicone mould of the original silicone mould in which the master component is embedded. Using vacuum casting, we are then able to cast out of the second mould several optical transparent poly-urethane arrays of concave refractive microlenses. We select ten particular representative microlenses on the original, the silicone moulds and replica sample and quantitatively characterize and statistically compare them during the various fabrication steps. For this purpose, we use several state-of-the-art and ultra-precise characterization tools such as a stereo microscope, a stylus surface profilometer, a non-contact optical profilometer, a Mach-Zehnder interferometer, a Twyman-Green interferometer and an atomic force microscope to compare various microlens parameters such as the lens height, the diameter, the paraxial focal length, the radius of curvature, the Strehl ratio, the peak-to-valley and the root-mean-square wave aberrations and the surface roughness. When appropriate, the microlens parameter under test is measured with several different measuring tools to check for consistency in the measurement data. Although none of the lens samples shows diffraction-limited performance, we prove that the obtained replicated arrays of concave microlenses exhibit sufficiently low surface roughness and sufficiently high lens quality for various imaging applications.

  13. Investigation of proposed process sequence for the array automated assembly task, phases 1 and 2

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Eskenas, K.

    1980-01-01

    Progress was made on the process sequence for module fabrication. A shift from bonding with a conformal coating to laminating with ethylene vinyl acetate and a glass superstrate is recommended for further module fabrication. The processes that were retained for the selected process sequence, spin-on diffusion, print and fire aluminum p+ back, clean, print and fire silver front contact and apply tin pad to aluminum back, were evaluated for their cost contribution.

  14. Multi-Sensor Characterization of the Boreal Forest: Initial Findings

    NASA Technical Reports Server (NTRS)

    Reith, Ernest; Roberts, Dar A.; Prentiss, Dylan

    2001-01-01

    Results are presented from an initial a priori knowledge approach toward using complementary multi-sensor, multi-temporal imagery to characterize vegetated landscapes over a site in the Boreal Ecosystem-Atmosphere Study (BOREAS). Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data were segmented using multiple endmember spectral mixture analysis and binary decision tree approaches. Individual date/sensor land cover maps had overall accuracies between 55.0% and 69.8%. The best eight land cover layers from all dates and sensors correctly characterized 79.3% of the cover types. An overlay approach was used to create a final land cover map. An overall accuracy of 71.3% was achieved in this multi-sensor approach, a 1.5% improvement over our most accurate single-scene technique, but 8% less than the original input. Black spruce was evaluated to be particularly undermapped in the final map, possibly because it was also contained within the jack pine and muskeg land coverages.

  15. Case-Based Multi-Sensor Intrusion Detection

    NASA Astrophysics Data System (ADS)

    Schwartz, Daniel G.; Long, Jidong

    2009-08-01

    Multi-sensor intrusion detection systems (IDSs) combine the alerts raised by individual IDSs and possibly other kinds of devices such as firewalls and antivirus software. A critical issue in building a multi-sensor IDS is alert correlation, i.e., determining which alerts are caused by the same attack. This paper explores a novel approach to alert correlation using case-based reasoning (CBR). Each case in the CBR system's library contains a pattern of alerts raised by some known attack type, together with the identity of the attack. During run time, the alert streams gleaned from the sensors are compared with the patterns in the cases, and a match indicates that the attack described by that case has occurred. For this purpose the design of a fast and accurate matching algorithm is imperative. Two such algorithms were explored: (i) the well-known Hungarian algorithm, and (ii) an order-preserving matching of our own devising. Tests were conducted using the DARPA Grand Challenge Problem attack simulator. These showed that both matching algorithms are effective in detecting attacks, but the Hungarian algorithm is inefficient, whereas the order-preserving one is very efficient and in fact runs in linear time.
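
    The Hungarian-algorithm variant of the matcher can be sketched with SciPy's assignment solver as below; the alert dissimilarity function is an assumed user-supplied metric, and the order-preserving matcher described in the paper (which additionally respects alert ordering) is not reproduced here.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def match_case(observed_alerts, case_pattern, cost_fn):
            """Score how well an observed alert stream matches one stored case.

            observed_alerts, case_pattern: lists of alert feature vectors.
            cost_fn(a, b): dissimilarity between two alerts (user-supplied).
            Returns the total assignment cost; lower means a closer match.
            """
            cost = np.array([[cost_fn(obs, pat) for pat in case_pattern]
                             for obs in observed_alerts])
            rows, cols = linear_sum_assignment(cost)
            return cost[rows, cols].sum()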

  16. Multisensor fusion using the sensor algorithm research expert system

    NASA Astrophysics Data System (ADS)

    Bullock, Michael E.; Miltonberger, Thomas W.; Reinholdsten, Paul A.; Wilson, Kathleen

    1991-08-01

    A method for object recognition using a multisensor model-based approach has been developed. The sensor algorithm research expert system (SARES) is a Sun workstation-based environment for model-based object recognition algorithm development. SARES is a means to perform research into multiple levels of geometric and scattering models, image and signal feature extraction, hypothesis management, and matching strategies. SARES multisensor fusion allows for multiple geometric representations and decompositions, and sensor location transformations, as well as feature prediction, matching, and evidence accrual. It is shown that the fusion algorithm can exploit the synergistic information contained in IR and synthetic aperture radar (SAR) imagery, yielding increased object recognition accuracy and confidence over single-sensor exploitation alone. The fusion algorithm has the added benefit of reducing the number of computations by virtue of simplified object model combinatorics. That is, the additional sensor information eliminates a large number of the incorrect object hypotheses early in the algorithm. This provides a focus of attention on those object hypotheses which are closest to the correct hypothesis.

  17. An objective multi-sensor fusion metric for target detection

    NASA Astrophysics Data System (ADS)

    Sweetnich, S. R.; Fernandes, S. P.; Clark, J. D.; Sakla, W. A.

    2014-06-01

    Target detection is limited by a specific sensor's capability; however, the combination of multiple sensors will improve the confidence of target detection. Confidence in detecting, tracking and identifying a target in a multi-sensor environment depends on intrinsic and extrinsic sensor qualities, e.g. target geo-location registration, and environmental conditions [1]. Determination of the optimal sensors and classification algorithms required to assist in specific target detection has largely been accomplished with empirical experimentation. Formulation of a multi-sensor effectiveness metric (MuSEM) for sensor combinations is presented in this paper. Leveraging one or a combination of sensors should provide a higher confidence of target classification. This metric incorporates Dempster-Shafer theory for decision analysis. MuSEM is defined for weakly labeled multimodal data and is modeled and trained with empirically fused sensor detections; this metric is compared to Boolean algebra algorithms from decision fusion research. Multiple sensor-specific classifiers are compared and fused to characterize sensor detection models and the likelihood functions of the models. For area under the curve (AUC), MuSEM attained values as high as 0.97 with an average difference of 5.33% between Boolean fusion rules. Data were collected from the Air Force Research Lab's Minor Area Motion Imagery (MAMI) project. This metric is efficient and effective, providing a confidence of target classification based on sensor combinations.
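
    The Dempster-Shafer combination step mentioned above can be illustrated with a small Python sketch of Dempster's rule for two sensors' basic probability assignments; the frame of discernment and mass values are invented for illustration and are not MuSEM's.

        def dempster_combine(m1, m2):
            """Combine two basic probability assignments with Dempster's rule.

            m1, m2: dicts mapping frozenset hypotheses to mass values that sum to 1.
            """
            combined, conflict = {}, 0.0
            for a, mass_a in m1.items():
                for b, mass_b in m2.items():
                    intersection = a & b
                    if intersection:
                        combined[intersection] = combined.get(intersection, 0.0) + mass_a * mass_b
                    else:
                        conflict += mass_a * mass_b
            return {h: m / (1.0 - conflict) for h, m in combined.items()}

        sensor1 = {frozenset({"target"}): 0.6, frozenset({"target", "clutter"}): 0.4}
        sensor2 = {frozenset({"target"}): 0.7, frozenset({"clutter"}): 0.1,
                   frozenset({"target", "clutter"}): 0.2}
        print(dempster_combine(sensor1, sensor2))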

  18. An Approach to Optimize the Fusion Coefficients for Land Cover Information Enhancement with Multisensor Data

    NASA Astrophysics Data System (ADS)

    Garg, Akanksha; Brodu, Nicolas; Yahia, Hussein; Singh, Dharmendra

    2016-04-01

    This paper explores a novel data fusion method that applies a machine learning approach to the optimal weighted fusion of multisensor data, in order to extract the maximum information about any land cover. A considerable amount of research work has been carried out on multisensor data fusion, but obtaining an optimal fusion for the enhancement of land cover information using arbitrary weights remains ambiguous. There is therefore a need for a land cover monitoring system that can provide the maximum information about the land cover, which is generally not possible with single-sensor data, and for techniques by which the information in multisensor data can be utilized optimally. Machine learning is one of the best ways to optimize this type of information. In this paper, the weights of each sensor's data required for the fusion have been critically analyzed, and it is observed that the fusion is quite sensitive to the weights. Therefore, different combinations of weights have been tested exhaustively in order to develop a relationship between the weights and the classification accuracy of the fused data. This relationship can be optimized through machine learning techniques such as the SVM (Support Vector Machine). In the present study, this experiment has been carried out for PALSAR (Phased Array L-Band Synthetic Aperture RADAR) and MODIS (Moderate Resolution Imaging Spectroradiometer) data. PALSAR provides fully polarimetric data with HH, HV and VV polarizations at good spatial resolution (25 m), and NDVI (Normalized Difference Vegetation Index) is a good indicator of vegetation, utilizing different bands (red and NIR) of freely available MODIS data at 250 m resolution. First, the resolution of the NDVI has been enhanced from 250 m to 25 m (10 times) using a modified DWT (Discrete Wavelet Transform) to bring it to the same scale as the PALSAR data. Then, the different polarized PALSAR data (HH, HV, VV) have been fused with the resolution-enhanced NDVI
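
    A minimal Python sketch of the weight-versus-accuracy experiment is given below, assuming scikit-learn and a simple grid search with a cross-validated SVM; the paper instead learns the weight-accuracy relationship with an SVM, so this is only a simplified stand-in.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def best_fusion_weight(palsar_band, ndvi, labels, weights=np.linspace(0.0, 1.0, 21)):
            """Search the weight w in the fused feature w*PALSAR + (1 - w)*NDVI.

            palsar_band, ndvi: 1-D arrays of co-registered per-pixel values.
            labels: land cover class for each pixel.
            Returns the weight giving the best cross-validated classification accuracy.
            """
            best_w, best_acc = None, -np.inf
            for w in weights:
                fused = (w * palsar_band + (1.0 - w) * ndvi).reshape(-1, 1)
                acc = cross_val_score(SVC(), fused, labels, cv=5).mean()
                if acc > best_acc:
                    best_w, best_acc = w, acc
            return best_w, best_acc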

  19. Global Arrays

    SciTech Connect

    2006-02-23

    The Global Arrays (GA) toolkit provides an efficient and portable “shared-memory” programming interface for distributed-memory computers. Each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed dense multi-dimensional arrays, without need for explicit cooperation by other processes. Unlike other shared-memory environments, the GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of the high performance computers and acknowledges that access to a remote portion of the shared data is slower than to the local portion. The locality information for the shared data is available, and a direct access to the local portions of shared data is provided. Global Arrays have been designed to complement rather than substitute for the message-passing programming model. The programmer is free to use both the shared-memory and message-passing paradigms in the same program, and to take advantage of existing message-passing software libraries. Global Arrays are compatible with the Message Passing Interface (MPI).

  20. Global Arrays

    Energy Science and Technology Software Center (ESTSC)

    2006-02-23

    The Global Arrays (GA) toolkit provides an efficient and portable “shared-memory” programming interface for distributed-memory computers. Each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed dense multi-dimensional arrays, without need for explicit cooperation by other processes. Unlike other shared-memory environments, the GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of the high performance computers and acknowledges that access to a remote portion of the shared data is slower than to the local portion. The locality information for the shared data is available, and a direct access to the local portions of shared data is provided. Global Arrays have been designed to complement rather than substitute for the message-passing programming model. The programmer is free to use both the shared-memory and message-passing paradigms in the same program, and to take advantage of existing message-passing software libraries. Global Arrays are compatible with the Message Passing Interface (MPI).

  1. Fabrication and evaluation of a microspring contact array using a reel-to-reel continuous fiber process

    NASA Astrophysics Data System (ADS)

    Khumpuang, S.; Ohtomo, A.; Miyake, K.; Itoh, T.

    2011-10-01

    In this work a novel patterning technique for the fabrication of a conductive microspring array as an electrical contact structure directly on a fiber substrate is introduced. Using low-temperature compression from the nanoimprinting technique to generate a gradient depth in the desired pattern of PEDOT:PSS film, the hair-like structures are released as bimorph microspring cantilevers. The microspring is in the form of a stress-engineered cantilever arranged in rows. The microspring contact array is employed in composing the electrical circuit through a large area of woven textile, and functions as the electrical contact between weft ribbon and warp ribbon. The spring itself has a contact resistance of 480 Ω to the plain PEDOT:PSS-coated ribbon, which shows a promising electrical transfer ability within the limitations of materials employed for reel-to-reel continuous processes. The microspring contact structures enhance the durability, flexibility and stability of the electrical contact in the woven textile better than ribbons without the microspring. The contact experiment was repeated over 500 times, with a change of only 20 Ω in resistance. Furthermore, to realize the spring structure, CYTOP is used as the releasing layer due to its low adhesive force to the fiber substrate. Moreover, the first result of patterning CYTOP using nanoimprint lithography is included.

  2. A Module Experimental Process System Development Unit (MEPSDU). [flat plate solar arrays

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The development of a cost effective process sequence that has the potential for the production of flat plate photovoltaic modules which meet the price goal in 1986 of 70 cents or less per Watt peak is described. The major accomplishments include (1) an improved AR coating technique; (2) the use of sand blast back clean-up to reduce clean up costs and to allow much of the Al paste to serve as a back conductor; and (3) the development of wave soldering for use with solar cells. Cells were processed to evaluate different process steps, a cell and minimodule test plan was prepared and data were collected for preliminary Samics cost analysis.

  3. A Module Experimental Process System Development Unit (MEPSDU). [development of low cost solar arrays

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The technical readiness of a cost-effective process sequence that has the potential for the production of flat-plate photovoltaic modules meeting the 1986 price goal of $0.70 or less per Watt peak was demonstrated. The proposed process sequence was reviewed and laboratory verification experiments were conducted. The preliminary process includes the following features: semicrystalline silicon (10 cm by 10 cm) as the silicon input material; spray-on dopant diffusion source; Al paste BSF formation; spray-on AR coating; electroless Ni plate solder-dip metallization; laser-scribed edges; K & S tabbing and stringing machine; and laminated EVA modules.

  4. Image processing system architecture using parallel arrays of digital signal processors

    NASA Astrophysics Data System (ADS)

    Kshirsagar, Shirish P.; Hobson, Clifford A.; Hartley, David A.; Harvey, David M.

    1993-10-01

    The paper describes the requirements of a high definition, high speed image processing system. Different types of parallel architectures were considered for the system. Advantages and limitations of SIMD and MIMD architectures are briefly discussed for image processing applications. A parallel image processing system based on MIMD architecture has been developed using multiple digital signal processors which can communicate with each other through an interconnection network. Texas Instruments TMS320C40 digital signal processors have been selected because they have a powerful floating point CPU supported by fast parallel communication ports, a DMA coprocessor and two memory interfaces. A five processor system is described in the paper. The EISA bus is used as the host interface and VISION bus is used to transfer images between the processors. The system is being used for automated non-contact inspection in which electro-optic signals are processed to identify manufacturing problems.

  5. Coal liquefaction process streams characterization and evaluation: High performance liquid chromatography (HPLC) of coal liquefaction process streams using normal-phase separation with uv diode array detection

    SciTech Connect

    Clifford, D.J.; McKinney, D.E.; Hou, Lei; Hatcher, P.G.

    1994-01-01

    This study demonstrated the considerable potential of using two-dimensional high performance liquid chromatography (HPLC) with normal-phase separation and ultraviolet (UV) diode array detection for the examination of filtered process liquids and the 850°F⁻ distillate materials derived from direct coal liquefaction process streams. A commercially available HPLC column (Hypersil Green PAH-2) provided excellent separation of the complex mixture of polynuclear aromatic hydrocarbons (PAHs) found in coal-derived process streams. Some characteristics of the samples delineated by the separation could be attributed to processing parameters. Mass recovery of the process-derived samples was low (5-50 wt %). Penn State believes, however, that improved recovery can be achieved. High resolution mass spectrometry and gas chromatography/mass spectrometry (GC/MS) were also used in this study to characterize the samples and the HPLC fractions. The GC/MS technique was used to preliminarily examine the GC-elutable portion of the samples. The GC/MS data were compared with the data from the HPLC technique. The use of an ultraviolet detector in the HPLC work precludes detecting the aliphatic portion of the sample. The GC/MS allowed for identification and quantification of that portion of the samples. Further development of the 2-D HPLC analytical method as a process development tool appears justified based on the results of this project.

  6. Multisensor image fusion guidelines in remote sensing

    NASA Astrophysics Data System (ADS)

    Pohl, C.

    2016-04-01

    Remote sensing delivers multimodal and -temporal data from the Earth's surface. In order to cope with these multidimensional data sources and to make the most of them, image fusion is a valuable tool. It has developed over the past few decades into a usable image processing technique for extracting information of higher quality and reliability. As more sensors and advanced image fusion techniques have become available, researchers have conducted a vast amount of successful studies using image fusion. However, the definition of an appropriate workflow prior to processing the imagery requires knowledge in all related fields - i.e. remote sensing, image fusion and the desired image exploitation processing. From the findings of this research it can be seen that the choice of the appropriate technique, as well as the fine-tuning of the individual parameters of this technique, is crucial. There is still a lack of strategic guidelines due to the complexity and variability of data selection, processing techniques and applications. This paper gives an overview on the state-of-the-art in remote sensing image fusion including sensors and applications. Putting research results in image fusion from the past 15 years into a context provides a new view on the subject and helps other researchers to build their innovation on these findings. Recommendations of experts help to understand further needs to achieve feasible strategies in remote sensing image fusion.

  7. Flat-plate solar array project process development area process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Three sets of samples were laser processed and then cell processed. The laser processing was carried out on P-type and N-type web at laser power levels from 0.5 joule/sq cm to 2.5 joule/sq cm. Six different liquid dopants were tested (3 phosphorus dopants, 2 boron dopants, 1 aluminum dopant). The laser-processed web strips were fabricated into solar cells immediately after laser processing and after various annealing cycles. Spreading resistance measurements made on a number of these samples indicate that the N(+)P (phosphorus-doped) junction is approximately 0.2 micrometers deep and suitable for solar cells. However, the P(+)N (or P(+)P) junction is very shallow (less than 0.1 micrometers) with a low surface concentration and a resulting high resistance. Due to this effect, the fabricated cells are of low efficiency. The maximum efficiency attained was 9.6% on P-type web after a 700°C anneal. The main reason for the low efficiency was a high series resistance in the cell due to a high-resistance back contact.

  8. Multi-sensor Evolution Analysis system: how WCS/WCPS technology supports real time exploitation of geospatial data

    NASA Astrophysics Data System (ADS)

    Natali, Stefano; Mantovani, Simone; Folegani, Marco; Barboni, Damiano

    2014-05-01

    EarthServer is a European Framework Program project that aims at developing and demonstrating the usability of open standards (OGC and W3C) in the management of multi-source, any-size, multi-dimensional spatio-temporal data - in short: "Big Earth Data Analytics". In the third and last year of the EarthServer project, the Climate Data Service lighthouse application has been released in its full, consolidated mode. The Multi-sensor Evolution Analysis (MEA) system, the geospatial data analysis tool empowered with OGC standards, has been adopted to handle data manipulation and visualization; the Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) are used to access and process ESA, NASA and third-party products. Tens of terabytes of full-mission, multi-sensor, multi-resolution, multi-projection and cross-domain coverages are already available to user interest groups for Land, Ocean and Atmosphere products. The MEA system is available at https://mea.eo.esa.int. During the live demo, typical test cases implemented by User Interest Groups within the EarthServer and ESA Image Information Mining projects will be shown, with special emphasis on the comparison of MACC Reanalysis and ESA CCI products.
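
    By way of illustration, a WCPS request can be issued from Python roughly as follows; the endpoint URL, coverage name and axis labels are invented placeholders, not the actual MEA holdings.

        import requests  # assumed HTTP client

        WCPS_ENDPOINT = "https://example.org/rasdaman/ows"  # hypothetical service URL

        # Average a temperature coverage over one year at a single location (WCPS syntax).
        query = """
        for $c in (AvgLandTemp)
        return avg($c[Lat(53.08), Long(8.80), ansi("2014-01-01":"2014-12-31")])
        """

        response = requests.post(WCPS_ENDPOINT, data={"service": "WCS",
                                                      "version": "2.0.1",
                                                      "request": "ProcessCoverages",
                                                      "query": query})
        print(response.text)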

  9. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    NASA Technical Reports Server (NTRS)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuations of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ~1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≲1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  10. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory shows that the technology has high thermal detection sensitivity. It therefore has very broad application prospects in the field of high-performance infrared detection. The paper focuses on an image capturing and processing system for this new type of MEMS-based optical-readout uncooled infrared imaging technology. The image capturing and processing system consists of software and hardware. We build the core image processing hardware platform around TI's high-performance TMS320DM642 DSP, and design the image capture board around the MT9P031, Micron's high-frame-rate, low-power-consumption CMOS sensor. Finally, we design the network output board around Intel's LXT971A network transceiver. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver based on TI's class mini-driver and the network output program based on the NDK kit, for image capture, processing, and transmission. Experiments show that the system has the advantages of high capture resolution and fast processing speed. The network transmission speed is up to 100 Mbps.

  11. Liquid Chromatography-diode Array Detector-electrospray Mass Spectrometry and Principal Components Analyses of Raw and Processed Moutan Cortex

    PubMed Central

    Deng, Xian-Mei; Yu, Jiang-Yong; Ding, Meng-Jin; Zhao, Ming; Xue, Xing-Yang; Che, Chun-Tao; Wang, Shu-Mei; Zhao, Bin; Meng, Jiang

    2016-01-01

    Background: Raw Moutan Cortex (RMC) is derived from the root bark of Paeonia suffruticosa, and Processed Moutan Cortex (PMC) is obtained from RMC by a stir-frying process. The two are indicated for different pharmacodynamic actions in traditional Chinese medicine, and they have been used in China and other Asian countries for thousands of years. Objective: To establish a method to study RMC and PMC, revealing their different chemical compositions through fingerprint, qualitative, and quantitative analyses. Materials and Methods: High-performance liquid chromatography coupled with diode array detection and electrospray mass spectrometry (HPLC-DAD-ESIMS) was used for the analysis. The analytes were separated on an Ultimate TM XB-C18 analytical column (250 mm × 4.6 mm, 5.0 μm) with a gradient elution program and a mobile phase consisting of acetonitrile and 0.1% (v/v) formic acid in water. The flow rate, injection volume, detection wavelength, and column temperature were set at 1.0 mL/min, 10 μL, 254 nm, and 30°C, respectively. In addition, principal components analysis and tests of significance were applied in the data analysis. Results: The results clearly showed a significant difference between RMC and PMC, indicating significant changes in their chemical compositions before and after the stir-frying process. Conclusion: HPLC-DAD-ESIMS coupled with chemometric analysis can be used for comprehensive quality evaluation of raw and processed Moutan Cortex. SUMMARY The study examined RMC and PMC by HPLC-DAD-ESIMS coupled with chemometric analysis; the fingerprint, qualitative, and quantitative results all clearly showed significant changes in chemical composition before and after stir-frying. Abbreviations used: HPLC-DAD-ESIMS: High-performance Liquid Chromatography-Diode Array Detector-Electrospray Mass Spectrometry, RMC: Raw Moutan Cortex, PMC: Processed Moutan Cortex, TCM: Traditional Chinese Medicine
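    The chemometric step mentioned above (principal components analysis of the chromatographic fingerprints) can be illustrated with a minimal sketch. The fingerprint matrix, group sizes, and preprocessing below are hypothetical placeholders, not the study's data or code.

```python
# Minimal sketch (not the authors' code): PCA of HPLC-DAD fingerprints to
# separate raw (RMC) and processed (PMC) Moutan Cortex samples.
# `fingerprints` is a hypothetical (n_samples x n_retention_time_points) matrix
# of peak-aligned chromatogram intensities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
fingerprints = rng.random((12, 200))          # placeholder for real chromatograms
labels = ["RMC"] * 6 + ["PMC"] * 6            # sample groups

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(fingerprints))
for label, (pc1, pc2) in zip(labels, scores):
    print(f"{label}: PC1={pc1:+.2f}  PC2={pc2:+.2f}")
```

    With real chromatograms, samples from the two groups would be expected to separate along the first principal components.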

  12. Numerical Microstructural Analysis of Automotive-Grade Steels when Joined with an Array of Welding Processes

    NASA Astrophysics Data System (ADS)

    Gould, J. E.; Khurana, S. P.; Li, T.

    2004-06-01

    Weld strength, formability, and impact resistance of joints on automotive steels are dependent on the underlying microstructure. A martensitic weld area is often a precursor to reduced mechanical performance. In this paper, efforts are made to predict underlying joint microstructures for a range of processing approaches, steel types, and gauges. This was done first by calculating cooling rates for some typical automotive processes [resistance spot welding (RSW), resistance mash seam welding (RMSEW), laser beam welding (LBW), and gas metal arc welding (GMAW)]. Then, critical cooling rates for martensite formation were calculated for a range of automotive steels using an available thermodynamically based phase transformation model. These were then used to define combinations of process type, steel type, and gauge for which welds could be formed while avoiding martensite in the weld area microstructure.
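    The screening logic described, comparing a computed weld cooling rate against a steel's critical cooling rate for martensite formation, can be sketched as follows. The process names follow the abstract, but all numerical rates are invented placeholders rather than values from the paper.

```python
# Illustrative sketch only: flag process/steel/gauge combinations whose computed
# cooling rate exceeds the steel's critical cooling rate for martensite formation.
# The cooling rates and critical rates below are placeholders, not paper values.
cooling_rate_K_per_s = {              # hypothetical computed weld cooling rates
    ("RSW", "DP600", 1.0): 3000.0,
    ("GMAW", "DP600", 1.0): 80.0,
}
critical_rate_K_per_s = {"DP600": 400.0}   # hypothetical critical cooling rate

for (process, steel, gauge_mm), rate in cooling_rate_K_per_s.items():
    martensitic = rate > critical_rate_K_per_s[steel]
    print(f"{process} on {gauge_mm} mm {steel}: "
          f"{'martensite expected' if martensitic else 'martensite avoided'}")
```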

  13. Reaction efficiency of diffusion-controlled processes on finite aperiodic planar arrays. II. Potential effects

    NASA Astrophysics Data System (ADS)

    Garza-López, Roberto A.; Brzezinski, Jack; Low, Daniel; Gomez, Ulysses; Raju, Swaroop; Ramirez, Craig; Kozak, John J.

    2009-08-01

    We continue our study of diffusion-reaction processes on finite aperiodic lattices, viz., the Penrose lattice and a Girih tiling. Focusing on bimolecular reactions, we mobilize the theory of finite Markov processes to document the effect of attractive forces on the reaction efficiency. Considering both a short-range square-well potential and a longer-range 1/r^S (S = 4, 6) potential, we find that irreversible reactive encounters between reactants on a Girih platelet are kinetically advantaged relative to processes on a Penrose platelet. This result generalizes the conclusion reached in our earlier study [Roberto A. Garza-López, Aaron Kaufman, Reena Patel, Joseph Chang, Jack Brzezinski, John J. Kozak, Chem. Phys. Lett. 459 (2008) 137], where entropic factors (only) were assessed.

  14. Optimizing laser beam profiles using micro-lens arrays for efficient material processing: applications to solar cells

    NASA Astrophysics Data System (ADS)

    Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij

    2009-02-01

    High power laser sources are used in various production tools for microelectronic products and solar cells, in applications including annealing, lithography, edge isolation, dicing, and patterning. Besides the right choice of laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of high importance for the right processing speed, quality, and yield. Equally important for industrial applications is an adequate understanding of the physics of the light-matter interaction behind the process. Simulating tool performance in advance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own software, founded on the Maxwell equations, that takes into account all important physical aspects of the laser-based process: the light source, the beam-shaping optical system, and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optics beam-shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles, and performance results are presented, with special emphasis on resilience, cost reduction, and process reliability.

  15. Fault-tolerant solar array control using digital signal processing for peak power tracking

    SciTech Connect

    Griesbach, C.R.

    1996-12-31

    The described power system significantly improves energy conversion efficiency under Low Intensity, Low Temperature (LILT) conditions. Elements of the described DSP-based system apply directly to terrestrial solar power processing needs. Use of this system will enable increased efficiency of solar power processing in many applications that demand low power under adverse insolation conditions. Examples are portable solar-recharged communications systems, solar-powered remote telemetry stations, autonomous geological and seismological monitoring stations, portable remote field equipment, remote site irrigation, and area lighting. The feasibility of this system was evaluated by extensive computer simulation, and an engineering demonstration model was designed and fabricated to verify the concept.
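    The abstract does not state which tracking algorithm the DSP runs; a common choice for peak power tracking is the perturb-and-observe scheme, sketched below purely as an illustration. The array model, starting voltage, and step size are made up.

```python
# Generic perturb-and-observe peak power tracker (a common MPPT scheme, not
# necessarily the algorithm used in the described system).
def perturb_and_observe(measure_vi, v_ref=17.0, step=0.1, iterations=200):
    """measure_vi(v_ref) -> (voltage, current) at the commanded operating point."""
    last_power = 0.0
    direction = +1
    for _ in range(iterations):
        v, i = measure_vi(v_ref)
        power = v * i
        if power < last_power:          # power dropped, so reverse the perturbation
            direction = -direction
        last_power = power
        v_ref += direction * step       # perturb toward higher power
    return v_ref

# Hypothetical array model for demonstration (output power peaks near 18.4 V).
demo = lambda v: (v, max(0.0, 3.0 - 0.08 * (v - 17.5) ** 2))
print(f"Tracked operating voltage: {perturb_and_observe(demo):.2f} V")
```

    In practice the operating point oscillates around the maximum power point, which is the well-known trade-off of this simple scheme.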

  16. Parallel field programmable gate array particle filtering architecture for real-time neural signal processing.

    PubMed

    Mountney, John; Silage, Dennis; Obeid, Iyad

    2010-01-01

    Both linear and nonlinear estimation algorithms have been successfully applied as neural decoding techniques in brain machine interfaces. Nonlinear approaches such as Bayesian auxiliary particle filters offer improved estimates over other methodologies seemingly at the expense of computational complexity. Real-time implementation of particle filtering algorithms for neural signal processing may become prohibitive when the number of neurons in the observed ensemble becomes large. By implementing a parallel hardware architecture, filter performance can be improved in terms of throughput over conventional sequential processing. Such an architecture is presented here and its FPGA resource utilization is reported. PMID:21096196
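    As a rough illustration of the per-particle operations (propagate, weight, resample) that such a parallel architecture distributes across hardware units, here is a minimal bootstrap particle filter for a scalar state. It is a simplification of the Bayesian auxiliary particle filter named above, with invented model parameters.

```python
# Minimal bootstrap particle filter for a scalar random-walk state observed in
# Gaussian noise, a simplified stand-in for the Bayesian auxiliary particle
# filter discussed in the paper; all model parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps = 1000, 50
process_std, obs_std = 0.1, 0.5

true_state = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
for _ in range(n_steps):
    true_state += rng.normal(0.0, process_std)
    observation = true_state + rng.normal(0.0, obs_std)

    particles += rng.normal(0.0, process_std, n_particles)          # propagate
    weights = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()                                        # normalize
    estimate = np.dot(weights, particles)                           # posterior mean
    particles = rng.choice(particles, n_particles, p=weights)       # resample

print(f"true={true_state:+.3f}  estimate={estimate:+.3f}")
```

    The weighting and resampling steps are independent per particle, which is what makes the algorithm amenable to the parallel FPGA architecture described.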

  17. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor, and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multi-sensor system, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was studied experimentally. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
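    A toy version of the Kalman-filter fusion idea is sketched below: sequential scalar measurement updates combining readings of one pose component from two sensors of different accuracy. The variances and readings are made up, and the sketch is not the paper's MCMS implementation.

```python
# Toy illustration (not the paper's MCMS code): sequential scalar Kalman
# measurement updates fusing two noisy sensor readings of the same pose
# component; the readings and variances below are invented.
def kalman_update(x, p, z, r):
    """Update state estimate x (variance p) with measurement z (variance r)."""
    k = p / (p + r)                  # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

x, p = 0.0, 1e6                      # vague prior on an orientation angle (deg)
x, p = kalman_update(x, p, 30.2, 0.5 ** 2)   # visual-sensor-derived reading
x, p = kalman_update(x, p, 29.8, 0.2 ** 2)   # angle-sensor reading (more accurate)
print(f"fused angle = {x:.2f} deg, std = {p ** 0.5:.2f} deg")
```

    The fused estimate is pulled toward the more accurate sensor, and its variance is smaller than that of either measurement alone.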

  18. Processing of translational and rotational motions of surface waves: performance analysis and applications to single sensor and to array measurements

    NASA Astrophysics Data System (ADS)

    Maranò, Stefano; Fäh, Donat

    2014-01-01

    The analysis of rotational seismic motions has received considerable attention in recent years. Recent advances in sensor technology allow us to measure the rotational components of the seismic wavefield directly, with improved accuracy and at an affordable cost. The analysis and study of rotational motions are, to a certain extent, less developed than other aspects of seismology owing to the historical lack of instrumental observations, which stems both from the technical challenges involved in measuring rotational motions and from the widespread belief that rotational motions are insignificant. This paper addresses the joint processing of translational and rotational motions from both the theoretical and the practical perspectives. Our attention focuses on the analysis of motions of both Rayleigh waves and Love waves from recordings of single sensors and from an array of sensors. From the theoretical standpoint, analysis of Fisher information (FI) allows us to understand how the different measurement types contribute to the estimation of quantities of geophysical interest. In addition, we show how rotational measurements resolve ambiguity in parameter estimation in the single-sensor setting. We quantify the achievable estimation accuracy by means of the Cramér-Rao bound (CRB). From the practical standpoint, a method for the joint processing of rotational and translational recordings to perform maximum likelihood (ML) estimation is presented. The proposed technique estimates parameters of Love waves and Rayleigh waves from single-sensor or array recordings. We support and illustrate our findings with a comprehensive collection of numerical examples. Applications to real recordings are also shown.

  19. Multisensor cargo bay fire detection system

    NASA Astrophysics Data System (ADS)

    Snyder, Brian L.; Anderson, Kaare J.; Renken, Christopher H.; Socha, David M.; Miller, Mark S.

    2004-08-01

    Current aircraft cargo bay fire detection systems are generally based on smoke detection. Smoke detectors in modern aircraft are predominantly photoelectric particle detectors that reliably detect smoke, but also detect dust, fog, and most other small particles. False alarms caused by these contaminants can be very costly to the airlines because they can cause flights to be diverted needlessly. To minimize these expenses, a new approach to cargo bay fire detection is needed. This paper describes a novel fire detection system developed by the Goodrich Advanced Sensors Technical Center. The system uses multiple sensors of different technologies to provide a way of discriminating between real fire events and false triggers. The system uses infrared imaging along with multiple, distributed chemical sensors and smoke detectors, all feeding data to a digital signal processor. The processor merges data from the chemical sensors, smoke detectors, and processed images to determine if a fire (or potential fire) is present. Decision algorithms look at all this data in real time and make the final decision about whether a fire is present. In the paper, we present a short background of the problem we are solving, the reasons for choosing the technologies used, the design of the system, the signal processing methods, and results from extensive system testing. We will also show that multiple sensing technologies are crucial to reducing false alarms in such systems.
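    The final decision stage can be caricatured as a voting rule across sensing modalities. The sketch below is only meant to show why multi-sensor agreement suppresses single-sensor false triggers; it is not the Goodrich decision algorithm.

```python
# Toy decision-fusion rule in the spirit of the system described: declare a fire
# only when at least two independent sensing modalities agree, which suppresses
# false triggers from any single contaminated or fooled sensor.
def fuse(smoke_alarm: bool, chemical_alarm: bool, ir_hotspot: bool, min_votes: int = 2) -> bool:
    return (smoke_alarm + chemical_alarm + ir_hotspot) >= min_votes

print(fuse(smoke_alarm=True, chemical_alarm=False, ir_hotspot=False))   # dust/fog only -> False
print(fuse(smoke_alarm=True, chemical_alarm=True, ir_hotspot=False))    # likely fire -> True
```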

  20. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded-cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined, and equipment design and specification work was completed. SAMICS cost analysis work accelerated, Format A's were prepared, and computer simulations were completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.

  1. ALLFlight: multisensor data fusion for helicopter operations

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Lueken, T.

    2010-04-01

    The objective of the project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites) is to demonstrate and evaluate the characteristics of different sensors for helicopter operations within degraded visual environments, such as brownout or whiteout. The sensor suite, which is mounted onto DLR's research helicopter EC135, consists of standard color or black-and-white TV cameras, an uncooled thermal infrared camera (EVS-1000, Max-Viz, USA), an optical radar scanner (HELLAS-W, EADS, Germany), and a millimeter wave radar system (AI-130, ICx Radar Systems, Canada). Data processing is designed and realized by a sophisticated, high performance sensor co-computer (SCC) cluster architecture, which is installed in the helicopter's experimental electronic cargo bay. This paper describes the applied methods and the software architecture in terms of real-time data acquisition, recording, time stamping, and sensor data fusion. First concepts for a pilot HMI are presented as well.

  2. Regional Drought Monitoring Based on Multi-Sensor Remote Sensing

    NASA Astrophysics Data System (ADS)

    Rhee, Jinyoung; Im, Jungho; Park, Seonyoung

    2014-05-01

    Drought originates from a deficit of precipitation and, as it persists, impacts the environment, including agriculture and hydrological resources. The assessment and monitoring of drought has traditionally been performed using a variety of drought indices based on meteorological data, and recently the use of remote sensing data has gained much attention due to its vast spatial coverage and cost-effectiveness. Drought information has been successfully derived from remotely sensed data related to biophysical and meteorological variables, and drought monitoring is advancing with the development of remote sensing-based indices such as the Vegetation Condition Index (VCI), Vegetation Health Index (VHI), and Normalized Difference Water Index (NDWI), to name a few. The Scaled Drought Condition Index (SDCI) has also been proposed for use in humid regions, demonstrating the performance of multi-sensor data for agricultural drought monitoring. In this study, remote sensing-based hydro-meteorological variables related to drought, including precipitation, temperature, evapotranspiration, and soil moisture, were examined, and the SDCI was improved by providing multiple blends of the multi-sensor indices for different types of drought. Multiple indices were examined together since the coupling and feedback between variables are intertwined and it is not appropriate to investigate only a limited set of variables to monitor each type of drought. The purpose of this study is to verify the significance of each variable for monitoring each type of drought and to examine the combination of multi-sensor indices for more accurate and timely drought monitoring. The weights for the blends of multiple indicators were obtained from the importance of variables calculated by non-linear optimization using a machine learning technique called Random Forest. The case study was performed in the Republic of Korea, which has four distinct seasons over the course of the year and contains complex topography with a variety
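    A minimal sketch of the blending step described above, weighting normalized indicators by Random Forest variable importances, is given below. The indicator names follow the abstract, but the data, reference drought index, and hyperparameters are synthetic placeholders.

```python
# Sketch of the blending idea described above (not the study's code): weight
# normalized remote-sensing drought indicators by Random Forest variable
# importances learned against a reference drought index. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
indicators = {                       # hypothetical scaled indicators in [0, 1]
    "precipitation": rng.random(n),
    "temperature": rng.random(n),
    "evapotranspiration": rng.random(n),
    "soil_moisture": rng.random(n),
}
X = np.column_stack(list(indicators.values()))
reference_drought_index = 0.5 * X[:, 0] + 0.3 * X[:, 3] + 0.2 * X[:, 1]  # synthetic target

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, reference_drought_index)
weights = rf.feature_importances_    # use importances as blend weights
blended_index = X @ (weights / weights.sum())
for name, w in zip(indicators, weights):
    print(f"{name:20s} weight {w:.2f}")
```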

  3. A Dry-Etch Process for Low Temperature Superconducting Transition Edge Sensors for Far Infrared Bolometer Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Christine A.; Chervenak, James A.; Hsieh, Wen-Ting; McClanahan, Richard A.; Miller, Timothy M.; Mitchell, Robert; Moseley, S. Harvey; Staguhn, Johannes; Stevenson, Thomas R.

    2003-01-01

    The next generation of ultra-low-power bolometer arrays, with applications in far infrared imaging, spectroscopy, and polarimetry, utilizes a superconducting bilayer as the sensing element to enable SQUID multiplexed readout. Superconducting transition edge sensors (TESs) are being produced with dual metal systems of superconducting/normal bilayers. The transition temperature (Tc) is tuned by altering the relative thickness of the superconductor with respect to the normal layer. We are currently investigating MoAu and MoCu bilayers. We have developed a dry-etching process for MoAu TESs with integrated molybdenum leads, and are working on adapting the process to MoCu. Dry etching has the advantage over wet etching in the MoAu system that one can achieve a high degree of selectivity, greater than 10, using argon RIE or argon ion milling for patterning gold on molybdenum. Molybdenum leads are subsequently patterned using a fluorine plasma. The dry-etch technique results in a smooth, featureless TES with sharp sidewalls, no undercutting of the Mo beneath the normal metal, and Mo leads with high critical current. The effects of individual processing parameters on the characteristics of the transition will be reported.

  4. Development of a Process for a High Capacity Arc Heater Production of Silicon for Solar Arrays

    NASA Technical Reports Server (NTRS)

    Reed, W. H.

    1979-01-01

    A program was established to develop a high temperature silicon production process using existing electric arc heater technology. Silicon tetrachloride and a reductant (sodium) are injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction is expected to occur and proceed essentially to completion, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection were developed. Included in this report are: test system preparation; testing; injection techniques; kinetics; reaction demonstration; conclusions; and the project status.

  5. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  6. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The use of an excimer laser and high temperature short time diffusion experiment were both more successful than the diffusion at standard temperature and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  7. Design of a multisensor optical surface scanner

    NASA Astrophysics Data System (ADS)

    Bhatia, Gulab H.; Smith, Kirk E.; Commean, Paul K.; Whitestone, Jennifer J.; Vannier, Michael W.

    1994-10-01

    A reconfigurable, optical, 3D scanning system with sub-second acquisition of human body surface data was designed and simulated. Sensor elements (digital cameras/light beam projectors) that meet resolution, accuracy, and speed requirements are included in the system design. The sensors are interfaced to video frame grabber(s) under computer control, resulting in a modular, low-cost system. System operation and data processing are performed using a desktop graphics workstation. Surface data collected with this system can be oversampled (viewed by overlapping camera/projector pairs) to improve resolution and accuracy. Multi-resolution data can be collected for different surfaces simultaneously or separately. Modeling and calibration of this reconfigurable system are achieved via a robust optimal estimation technique. Reconstruction software that allows seamless merging of range data from multiple sensors has been implemented. Laser scanners that acquire body surface range data using one or two sensors require several seconds for data collection. Surface digitization of inanimate objects is feasible with such devices, but their use in human surface metrology is limited by motion artifacts and occluded surfaces. Use of multiple, independent active sensors providing rapid collection and multi-resolution data enables sampling of complex human surface morphology not otherwise practical. 3D facial surface data have provided accurate measurements used in facial/craniofacial plastic surgery and modern personal protective equipment systems. Whole-body data obtained with this new system are applicable to human factors research, medical diagnosis/treatment, and industrial design.

  8. Flat-plate solar array project process development area, process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1984-01-01

    The program is designed to investigate the fabrication of solar cells on N-type base material by a simultaneous diffusion of N-type and P-type dopants to form a P(+)NN(+) structure. The results of simultaneous diffusion experiments are being compared to cells fabricated using sequential diffusion of dopants into N-base material in the same resistivity range. The process used for the fabrication of the simultaneously diffused P(+)NN(+) cells follows the standard Westinghouse baseline sequence for P-base material except that the two diffusion processes (boron and phosphorus) are replaced by a single diffusion step. All experiments are carried out on N-type dendritic web grown in the Westinghouse pre-pilot facility. The resistivities vary from 0.5 Ω·cm to 5 Ω·cm. The dopant sources used for both the simultaneous and sequential diffusion experiments are commercial metallorganic solutions with phosphorus or boron components. After these liquids are applied to the web surface, they are baked to form a hard glass which acts as a diffusion source at elevated temperatures. In experiments performed thus far, cells produced in sequential diffusion tests have properties essentially equal to the baseline N(+)PP(+) cells. However, the simultaneous diffusions have produced cells with much lower I-V characteristics, mainly due to cross-doping of the sources at the diffusion temperature. This cross-doping arises because the high-vapor-pressure phosphorus (applied as a metallorganic to the back surface) diffuses through the SiO2 mask and then acts as a diffusant source for the front surface.

  9. Procrustes algorithm for multisensor track fusion

    NASA Astrophysics Data System (ADS)

    Fernandez, Manuel F.; Aridgides, Tom; Evans, John S., Jr.

    1990-10-01

    The association or "fusion" of multiple-sensor reports allows the generation of a highly accurate description of the environment by enabling efficient compression and processing of otherwise unwieldy quantities of data. Assuming that the observations from each sensor are aligned in feature space and in time, this association procedure may be executed on the basis of how well each sensor's vectors of observations match previously fused tracks. Unfortunately, distance-based algorithms alone do not suffice in those situations where match-assignments are not of an obvious nature (e.g., high target density or high false alarm rate scenarios). Our proposed approach is based on recognizing that, together, the sensors' observations and the fused tracks span a vector subspace whose dimensionality and singularity characteristics can be used to determine the total number of targets appearing across sensors. A properly constrained transformation can then be found which aligns the subspaces spanned individually by the observations and by the fused tracks, yielding the relationship existing between both sets of vectors ("Procrustes Problem"). The global nature of this approach thus enables fusing closely spaced targets by treating them, in a manner analogous to PDA/JPDA algorithms, as clusters across sensors. Since our particular version of the Procrustes Problem consists basically of a minimization in the Total Least Squares sense, the resulting transformations associate both observations-to-tracks and tracks-to-observations. This means that the number of tracks being updated will increase or decrease depending on the number of targets present, automatically initiating or deleting "fused" tracks as required, without the need for ancillary procedures. In addition, it is implicitly assumed that both the tracker filters' target trajectory models and the sensors' observations are "noisy", yielding an algorithm robust even against maneuvering targets. Finally, owing to the fact
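    The core subspace-alignment step can be illustrated with the textbook orthogonal Procrustes solution via the singular value decomposition. The paper's own formulation is a constrained total-least-squares variant, so the sketch below (with synthetic track and observation matrices) only conveys the basic idea.

```python
# Textbook orthogonal Procrustes alignment via the SVD (the paper uses a
# constrained total-least-squares variant; this sketch only shows the basic
# subspace alignment step). Rows are hypothetical target-state vectors.
import numpy as np

rng = np.random.default_rng(0)
tracks = rng.normal(size=(5, 3))                  # fused track vectors
true_rotation, _ = np.linalg.qr(rng.normal(size=(3, 3)))
observations = tracks @ true_rotation + 0.01 * rng.normal(size=tracks.shape)

# Find the orthogonal R minimizing ||tracks @ R - observations||_F.
u, _, vt = np.linalg.svd(tracks.T @ observations)
rotation = u @ vt
residual = np.linalg.norm(tracks @ rotation - observations)
print(f"alignment residual: {residual:.4f}")
```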

  10. Direct growth of comet-like superstructures of Au-ZnO submicron rod arrays by solvothermal soft chemistry process

    SciTech Connect

    Shen Liming; Bao, Ningzhong Yanagisawa, Kazumichi; Zheng, Yanqing; Domen, Kazunari; Gupta, Arunava; Grimes, Craig A.

    2007-01-15

    The synthesis, characterization, and proposed growth process of a new kind of comet-like Au-ZnO superstructure are described here. This Au-ZnO superstructure was created directly by a simple and mild solvothermal reaction, dissolving the reactants zinc acetate dihydrate and hydrogen tetrachloroaurate tetrahydrate (HAuCl4·4H2O) in ethylenediamine and taking advantage of the lattice-matched growth between a definite ZnO plane and Au plane and the natural growth habit of ZnO rods along the [001] direction in solution. For a typical comet-like Au-ZnO superstructure, the comet head consists of one hemispherical end of a central thick ZnO rod and an outer Au-ZnO thin layer, and the comet tail consists of radially standing ZnO submicron rod arrays growing on the Au-ZnO thin layer. These ZnO rods have diameters in the range of 0.2-0.5 μm, an average aspect ratio of about 10, and lengths of up to about 4 μm. The morphology, size, and structure of the ZnO superstructures depend on the concentration of reactants and the reaction time. The HAuCl4·4H2O plays a key role in the solvothermal growth of the comet-like superstructure; only ZnO fibers are obtained in the absence of HAuCl4·4H2O. The UV-vis absorption spectrum shows two absorptions at 365-390 nm and 480-600 nm, attributed respectively to the characteristic absorption of the ZnO wide-band semiconductor material and the surface plasmon resonance of the Au particles. - Graphical abstract: One-step solvothermal synthesis of novel comet-like superstructures of radially standing ZnO submicron rod arrays.

  11. Development of a process for high capacity arc heater production of silicon for solar arrays

    NASA Technical Reports Server (NTRS)

    Meyer, T. N.

    1980-01-01

    A high temperature silicon production process using existing electric arc heater technology is discussed. Silicon tetrachloride and a reductant, liquid sodium, were injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction occurred, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection of the molten silicon were developed. The desired degree of separation was not achieved. The electrical, control and instrumentation, cooling water, gas, SiCl4, and sodium systems are discussed. The plasma reactor, silicon collection, effluent disposal, the gas burnoff stack, and decontamination and safety are also discussed. Procedure manuals, shakedown testing, data acquisition and analysis, product characterization, disassembly and decontamination, and component evaluation are reviewed.

  12. Background Subtraction for Automated Multisensor Surveillance: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Cristani, Marco; Farenzena, Michela; Bloisi, Domenico; Murino, Vittorio

    2010-12-01

    Background subtraction is a widely used operation in video surveillance, aimed at separating the expected scene (the background) from the unexpected entities (the foreground). There are several problems related to this task, mainly due to the blurred boundary between the definitions of background and foreground. Background subtraction is therefore an open issue worth addressing from different points of view. In this paper, we propose a comprehensive review of background subtraction methods that also considers channels other than the visible optical one (such as the audio and infrared channels). In addition to the definition of novel kinds of background, the perspectives that these approaches open up are very appealing: in particular, the multisensor direction seems well suited to solving or simplifying several hoary background subtraction problems. All the reviewed methods are organized in a novel taxonomy that encapsulates the brand-new approaches in a seamless way.
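    The simplest member of the reviewed family, a running-average background model on the visible channel, can be sketched in a few lines. The frame generator, learning rate, and threshold below are synthetic and stand in for real surveillance video.

```python
# Minimal running-average background subtraction on the visible channel, the
# simplest member of the family of methods reviewed; frames here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
height, width, alpha, threshold = 48, 64, 0.05, 25.0

def make_frame(t):
    frame = rng.normal(100.0, 2.0, (height, width))        # static scene + sensor noise
    if t > 60:
        frame[20:30, 30:40] += 80.0                         # a bright "foreground" object appears
    return frame

background = make_frame(0)                                  # initialize model from first frame
for t in range(1, 100):
    frame = make_frame(t)
    foreground_mask = np.abs(frame - background) > threshold
    # Blend the new frame into the background only where no foreground was detected.
    background[~foreground_mask] = ((1 - alpha) * background + alpha * frame)[~foreground_mask]

print("foreground pixels in final frame:", int(foreground_mask.sum()))
```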

  13. Specification for soil multisensor and soil sampling cone penetrometer probes

    SciTech Connect

    Iwatate, D.F.

    1998-02-12

    Specification requirements are presented for the engineering, fabrication, and performance of cone penetrometer (CP) soil multisensor and sampling probes (CP-probes), which are required to support contract procurement for services. The specification provides a documented technical basis of quality assurance that is required to use the probes in an operating Hanford tank farm. The documentation cited in this specification will be referenced as part of a readiness review and engineering task plan for a planned FY-1998 in-tank-farm CP-probe fielding task (demonstration). The probes discussed in this specification support the Hanford Tanks Initiative AX-104 Tank Plume Characterization Sub-task. The probes will be used to interrogate soils and the vadose zone surrounding tank AX-104.

  14. Specification for soil multisensor and soil sampling cone penetrometer probes

    SciTech Connect

    Iwatate, D.F.

    1997-05-02

    Specification requirements for engineering, fabrication, and performance of cone penetrometer (CP) soil multisensor and sampling probes (CP-probes) which are required to support contract procurement for services are presented. The specification provides a documented technical basis of quality assurance that is required to use the probes in an operating Hanford tank farm. The documentation cited in this specification will be incorporated into an operational fielding plan that will address all activities associated with the use of the CP-probes. The probes discussed in this specification support the Hanford Tanks Initiative AX-104 Tank Plume Characterization Sub-task. The probes will be used to interrogate soils and vadose zone surrounding tank AX-104.

  15. Variance estimation for radiation analysis and multi-sensor fusion.

    SciTech Connect

    Mitchell, Dean James

    2010-09-01

    Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating components of the variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies in measurements are also reflected in the uncertainties for the estimated parameter. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.
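    As a simplified stand-in for the fusion described in the report, the sketch below combines parameter estimates from two sensor types by inverse-variance weighting, with separate statistical and computational variance components. All numbers are invented.

```python
# Generic inverse-variance fusion of parameter estimates from different sensor
# types, a simplified stand-in for the report's method. The estimates and the
# statistical/computational variance components below are made up.
estimates = {                       # sensor: (estimate, statistical var, computational var)
    "gross neutron": (4.2, 0.30, 0.10),
    "gamma spectrum": (3.8, 0.05, 0.08),
}

weights = {s: 1.0 / (v_stat + v_comp) for s, (_, v_stat, v_comp) in estimates.items()}
total = sum(weights.values())
fused = sum(w * estimates[s][0] for s, w in weights.items()) / total
fused_var = 1.0 / total
print(f"fused parameter = {fused:.2f} +/- {fused_var ** 0.5:.2f}")
```

    Including the computational variance component alongside the statistical one is what keeps the fused uncertainty honest when detector response functions or source models are approximate.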

  16. MIST Final Report: Multi-sensor Imaging Science and Technology

    SciTech Connect

    Lind, Michael A.; Medvick, Patricia A.; Foley, Michael G.; Foote, Harlan P.; Heasler, Patrick G.; Thompson, Sandra E.; Nuffer, Lisa L.; Mackey, Patrick S.; Barr, Jonathan L.; Renholds, Andrea S.

    2008-03-15

    The Multi-sensor Imaging Science and Technology (MIST) program was undertaken to advance exploitation tools for Long Wavelength Infra Red (LWIR) hyper-spectral imaging (HSI) analysis as applied to the discovery and quantification of nuclear proliferation signatures. The program focused on mitigating LWIR image background clutter to ease the analyst burden and enable (a) faster, more accurate analysis of large volumes of high-clutter data, (b) greater detection sensitivity for nuclear proliferation signatures (primarily released gases), and (c) quantified confidence estimates for the signature materials detected. To this end the program investigated fundamental limits and logical modifications of the more traditional statistical discovery and analysis tools applied to hyperspectral imaging and other disciplines, developed and tested new software incorporating advanced mathematical tools and physics-based analysis, and demonstrated the strengths and weaknesses of the new codes on relevant hyperspectral data sets from various campaigns. This final report describes the content of the program and outlines the significant results.

  17. A dual-directional light-control film with a high-sag and high-asymmetrical-shape microlens array fabricated by a UV imprinting process

    NASA Astrophysics Data System (ADS)

    Lin, Ta-Wei; Chen, Chi-Feng; Yang, Jauh-Jung; Liao, Yunn-Shiuan

    2008-09-01

    A dual-directional light-control film with a high-sag and highly asymmetric long gapless hexagonal microlens array fabricated by an ultraviolet (UV) imprinting process is presented. The lens array is designed by ray-tracing simulation and fabricated by a micro-replication process including gray-scale lithography, electroplating, and UV curing. The shape of the designed lens array is similar to that of a near half-cylindrical lens array with a periodic ripple. Measurements of a prototype show that incident light from a collimated LED with a dispersion-angle FWHM of 12° is spread differently along the short and long axes. The numerical and experimental results show that the FWHMs of the view angle for angular brightness in the long and short axis directions through the long hexagonal lens are about 34.3° and 18.1°, and 31° and 13°, respectively. Compared with the simulation results, the errors in the long and short axes are about 5% and 16%, respectively. The asymmetric gapless microlens array can thus realize the aim of controlled asymmetric angular brightness. Such a light-control film can be used as a power-saving screen, compared with a conventional diffusing film, in rear projection display applications.

  18. Air Enquirer's multi-sensor boxes as a tool for High School Education and Atmospheric Research

    NASA Astrophysics Data System (ADS)

    Morguí, Josep-Anton; Font, Anna; Cañas, Lidia; Vázquez-García, Eusebi; Gini, Andrea; Corominas, Ariadna; Àgueda, Alba; Lobo, Agustin; Ferraz, Carlos; Nofuentes, Manel; Ulldemolins, Delmir; Roca, Alex; Kamnang, Armand; Grossi, Claudia; Curcoll, Roger; Batet, Oscar; Borràs, Silvia; Occhipinti, Paola; Rodó, Xavier

    2016-04-01

    An educational tool was designed with the aim of making the research done on greenhouse gases (GHGs) in the ClimaDat Spanish network of atmospheric observation stations (www.climadat.es) more comprehensible. This tool, called the Air Enquirer, consists of a multi-sensor box. It is envisaged to build more than two hundred boxes and distribute them to Spanish high schools through the education department (www.educaixa.com) of the "Obra Social 'La Caixa'", which funds this research. The starting point for the development of the Air Enquirers was the experience at IC3 (www.ic3.cat) in the CarboSchools+ FP7 project (www.carboschools.cat, www.carboschools.eu). The Air Enquirer's multi-sensor box is based on the Arduino architecture and contains sensors for CO2, temperature, relative humidity, pressure, and both infrared and visible luminance. The Air Enquirer is designed for taking continuous measurements. Every Air Enquirer ensemble of measurements is used to convert values to standard units (water content in ppmv, and CO2 in ppmv_dry). These values are referred to a calibration made with cavity ring-down spectrometry (Picarro®) under different temperature, pressure, humidity, and CO2 concentrations. Multiple sets of Air Enquirers are intercalibrated for use in parallel during the experiments. The different experiments proposed to the students will be outdoor (observational) or indoor (experimental, in the lab), focusing on understanding the biogeochemistry of GHGs in ecosystems (mainly CO2), the exchange (flux) of gases, organic matter production, respiration and decomposition processes, the influence of anthropogenic activities on gas (and particle) exchanges, and their interaction with the structure and composition of the atmosphere (temperature, water content, cooling and warming processes, radiative forcing, vertical gradients and horizontal patterns). In order to ensure Air Enquirers a high-profile research performance the experimental designs
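    One common way to perform the unit conversions mentioned above (water content in ppmv and CO2 as a dry-air mole fraction) is sketched below. The Magnus saturation-vapour-pressure approximation and the sample readings are assumptions, not the Air Enquirer's documented procedure.

```python
# One common way (not necessarily the Air Enquirer's exact procedure) to convert
# raw readings to water vapour mole fraction and dry-air CO2 mole fraction.
# The Magnus coefficients are a standard approximation; sample values are made up.
import math

def h2o_ppmv(temp_c, rh_percent, pressure_hpa):
    """Water vapour mole fraction (ppmv) from T, RH and pressure (Magnus formula)."""
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))   # hPa
    e = rh_percent / 100.0 * e_sat
    return 1e6 * e / pressure_hpa

def co2_ppmv_dry(co2_ppmv_wet, h2o_ppmv_value):
    """Convert CO2 measured in moist air to a dry-air mole fraction."""
    return co2_ppmv_wet / (1.0 - h2o_ppmv_value / 1e6)

h2o = h2o_ppmv(temp_c=22.0, rh_percent=55.0, pressure_hpa=1005.0)
print(f"H2O = {h2o:.0f} ppmv, CO2_dry = {co2_ppmv_dry(430.0, h2o):.1f} ppmv")
```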

  19. Process Research On Polycrystalline Silicon Material (PROPSM). [flat plate solar array project

    NASA Technical Reports Server (NTRS)

    Culik, J. S.

    1983-01-01

    The performance-limiting mechanisms in large-grain (greater than 1 to 2 mm in diameter) polycrystalline silicon solar cells were investigated by fabricating a matrix of 4 sq cm solar cells of various thicknesses from 10 cm x 10 cm polycrystalline silicon wafers of several bulk resistivities. Analysis of the illuminated I-V characteristics of these cells suggests that bulk recombination is the dominant factor limiting the short-circuit current. The average open-circuit voltage of the polycrystalline solar cells is 30 to 70 mV lower than that of co-processed single-crystal cells; the fill factor is comparable. Both the open-circuit voltage and fill factor of the polycrystalline cells show substantial scatter that is not related to either thickness or resistivity. This implies that these characteristics are sensitive to an additional mechanism that is probably spatial in nature. A damage-gettering heat treatment improved the minority-carrier diffusion length in low-lifetime polycrystalline silicon; however, extended high-temperature heat treatment degraded the lifetime.

  20. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
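    The repeatedly executed kernel in iris matching is typically a fractional Hamming distance between binary iris codes, with each comparison independent of the others, which is what makes the workload easy to spread across cores or FPGA logic. The sketch below uses random codes and does not reproduce the paper's algorithm or data layout.

```python
# Sketch of the kind of kernel being parallelized: fractional Hamming distance
# between a probe iris code and a gallery of codes (details of the actual
# recognition algorithm and code layout are not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
probe = rng.integers(0, 2, 2048, dtype=np.uint8)            # hypothetical 2048-bit iris code
gallery = rng.integers(0, 2, (10000, 2048), dtype=np.uint8)

# One distance per gallery entry; each row is independent, so the comparison
# parallelizes trivially across CPU cores or FPGA compute units.
distances = np.count_nonzero(gallery != probe, axis=1) / probe.size
print("best match:", distances.argmin(), "distance:", round(float(distances.min()), 3))
```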

  1. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEAs) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute quantitative, useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant for assisting the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments to the image processing parameter estimation. PMID:26510475
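    The two image-processing steps named above (threshold/watershed segmentation of the fluorescence channel and circular Hough localization of the microelectrodes) can be sketched with scikit-image. The sample image, radii, and distance parameters below are placeholders rather than the study's settings, and the classification stage is omitted.

```python
# Sketch of the two image-processing steps described above, using scikit-image
# stand-ins (the study's parameters and classifier are not reproduced):
# Otsu threshold + watershed for blob segmentation, circular Hough transform
# for electrode-like circle localization. `fluo` and `bright` are placeholders.
import numpy as np
from scipy import ndimage as ndi
from skimage import data, feature, filters, measure, segmentation, transform

fluo = data.coins()                      # placeholder for the fluorescence channel
bright = data.coins()                    # placeholder for the transmitted-light channel

# --- neuron (blob) segmentation: threshold, distance transform, watershed ---
mask = fluo > filters.threshold_otsu(fluo)
distance = ndi.distance_transform_edt(mask)
peaks = feature.peak_local_max(distance, min_distance=15, labels=measure.label(mask))
markers = np.zeros_like(fluo, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = segmentation.watershed(-distance, markers, mask=mask)
print("segmented objects:", labels.max())

# --- microelectrode localization: circular Hough transform ---
edges = feature.canny(bright, sigma=2)
radii = np.arange(20, 30)
hough = transform.hough_circle(edges, radii)
_, cx, cy, r = transform.hough_circle_peaks(hough, radii, total_num_peaks=5)
print("candidate electrode centres:", list(zip(cx, cy)))
```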

  2. Optimization of processing parameters on the controlled growth of ZnO nanorod arrays for the performance improvement of solid-state dye-sensitized solar cells

    SciTech Connect

    Lee, Yi-Mu; Yang, Hsi-Wen

    2011-03-15

    High-transparency, high-quality ZnO nanorod arrays were grown on ITO substrates by a two-step chemical bath deposition (CBD) method. The effects of processing parameters, including reaction temperature (25-95 °C) and solution concentration (0.01-0.1 M), on the crystal growth, alignment, and optical and electrical properties were systematically investigated. It has been found that these process parameters are critical for the growth, orientation, and aspect ratio of the nanorod arrays, yielding different structural and optical properties. Experimental results reveal that the hexagonal ZnO nanorod arrays prepared at a reaction temperature of 95 °C and a solution concentration of 0.03 M possess the highest aspect ratio of ~21, and show well-aligned orientation and optimum optical properties. Moreover, ZnO nanorod array based heterojunction electrodes and solid-state dye-sensitized solar cells (SS-DSSCs) were fabricated with improved optoelectrical performance. Graphical abstract: The ZnO nanorod arrays demonstrate good alignment, high aspect ratio (L/D ~ 21), and excellent optical transmittance by low-temperature chemical bath deposition (CBD). Research highlights: Investigation of the CBD processing parameters governing the growth of ZnO nanorod arrays. Optimized CBD process parameters: 0.03 M solution concentration and a reaction temperature of 95 °C. The prepared ZnO samples possess good alignment and a high aspect ratio (L/D ~ 21). An n-ZnO/p-NiO heterojunction shows good rectifying behavior and low leakage current. The SS-DSSC has a J_SC of 0.31 mA/cm², a V_OC of 590 mV, and an improved η of 0.059%.

  3. A Comparison of Multisensor Precipitation Estimation Methods in Complex Terrain for Flash Flood Warning and Mitigation

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Chen, H.; Chandrasekar, C. V.; Willie, D.; Reynolds, D.; Campbell, C.; Zhang, Y.; Sukovich, E.

    2012-12-01

    Investigating the uncertainties and improving the accuracy of quantitative precipitation estimation (QPE) is a critical mission of the National Oceanic and Atmospheric Administration (NOAA). QPE is extremely challenging in regions of complex terrain like the western U.S. because of the sparse coverage of ground-based radar, complex orographic precipitation processes, and the effects of beam blockages (e.g., Westrick et al. 1999). In addition, the rain gauge density in complex terrain is often inadequate to capture spatial variability in the precipitation patterns. The NOAA Hydrometeorology Testbed (HMT) conducts research on precipitation and weather conditions that can lead to flooding, and fosters transition of scientific advances and new tools into forecasting operations (see hmt.noaa.gov). The HMT program consists of a series of demonstration projects in different geographical regions to enhance understanding of region specific processes related to precipitation, including QPE. There are a number of QPE systems that are widely used across NOAA for precipitation estimation (e.g., Cifelli et al. 2011; Chandrasekar et al. 2012). Two of these systems have been installed at the NOAA Earth System Research Laboratory: Multisensor Precipitation Estimator (MPE) and National Mosaic and Multi-sensor QPE (NMQ) developed by NWS and NSSL, respectively. Both provide gridded QPE products that include radar-only, gauge-only and gauge-radar-merged, etc; however, these systems often provide large differences in QPE (in terms of amounts and spatial patterns) due to differences in Z-R selection, vertical profile of reflectivity correction, and gauge interpolation procedures. Determining the appropriate QPE product and quantification of QPE uncertainty is critical for operational applications, including water management decisions and flood warnings. For example, hourly QPE is used to correct radar based rain rates used by the Flash Flood Monitoring and Prediction (FFMP) package in
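    One of the cited sources of disagreement between QPE systems is Z-R selection. The sketch below converts a single reflectivity value to rain rate with two widely used Z = aR^b relations to show how large the resulting spread can be; the relations are standard textbook choices, not those configured in MPE or NMQ.

```python
# Illustration of why Z-R selection matters for radar QPE (not code from either
# system): convert one reflectivity measurement to rain rate with two widely
# used Z = a * R**b relations and compare.
def rain_rate_mm_per_h(dbz, a, b):
    z_linear = 10.0 ** (dbz / 10.0)          # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

dbz = 45.0
print("Marshall-Palmer (a=200, b=1.6):", round(rain_rate_mm_per_h(dbz, 200, 1.6), 1), "mm/h")
print("Convective (a=300, b=1.4):     ", round(rain_rate_mm_per_h(dbz, 300, 1.4), 1), "mm/h")
```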

  4. Chemometric analysis of multi-sensor hyperspectral images of coarse mode aerosol particles for the image-based investigation on aerosol particles

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Kamilli, Katharina A.; Eitenberger, Elisabeth; Friedbacher, Gernot; Lendl, Bernhard; Held, Andreas; Lohninger, Hans

    2015-04-01

    Multi-sensor hyperspectral imaging is a novel technique, which allows the determination of composition, chemical structure and pure components of laterally resolved samples by chemometric analysis of different hyperspectral datasets. These hyperspectral datasets are obtained by different imaging methods, analysing the same sample spot and superimposing the hyperspectral data to create a single multi-sensor dataset. Within this study, scanning electron microscopy (SEM), Raman and energy-dispersive X-ray spectroscopy (EDX) images were obtained from size-segregated aerosol particles, sampled above Western Australian salt lakes. The particles were collected on aluminum foils inside a 2350 L Teflon chamber using a Sioutas impactor, sampling aerosol particles of sizes between 250 nm and 10 µm. The complex composition of the coarse-mode particles can be linked to primary emissions of inorganic species as well as to oxidized volatile organic carbon (VOC) emissions. The oxidation products of VOC emissions are supposed to form an ultra-fine nucleation mode, which was observed during several field campaigns between 2006 and 2013. The aluminum foils were analysed using chemical imaging and electron microscopy. A Horiba LabRam 800HR Raman microscope was used for vibrational mapping of an area of about 100 µm x 100 µm of the foils at a resolution of about 1 µm. The same area was analysed using a Quanta FEI 200 electron microscope (about 250 nm resolution). In addition to the high-resolution image, the elemental composition could be investigated using energy-dispersive X-ray spectroscopy. The obtained hyperspectral images were combined into a multi-sensor dataset using the software package Imagelab (Epina Software Labs, www.imagelab.at). After pre-processing of the images, the multi-sensor hyperspectral dataset was analysed using several chemometric methods such as principal component analysis (PCA), hierarchical cluster analysis (HCA) and other multivariate methods. Vertex

  5. The Canadian Forces ILDS: a militarily fielded multisensor vehicle-mounted teleoperated landmine detection system

    NASA Astrophysics Data System (ADS)

    McFee, John E.; Russell, Kevin L.; Chesney, Robert H.; Faust, Anthony A.; Das, Yogadhish

    2006-05-01

    The Improved Landmine Detection System (ILDS) is intended to meet Canadian military mine clearance requirements in rear area combat situations and peacekeeping on roads and tracks. The system consists of two teleoperated vehicles and a command vehicle. The teleoperated protection vehicle precedes, clearing antipersonnel mines and magnetic and tilt rod-fuzed antitank mines. It consists of an armoured personnel carrier with a forward looking infrared imager, a finger plow or roller and a magnetic signature duplicator. The teleoperated detection vehicle follows to detect antitank mines. The purpose-built vehicle carries forward looking infrared and visible imagers, a 3 m wide, down-looking sensitive electromagnetic induction detector array and a 3 m wide down-looking ground probing radar, which scan the ground in front of the vehicle. Sensor information is combined using navigation sensors and custom navigation, registration, spatial correspondence and data fusion algorithms. Suspicious targets are then confirmed by a thermal neutron activation detector. The prototype, designed and built by Defence R&D Canada, was completed in October 1997. General Dynamics Canada delivered four production units, based on the prototype concept and technologies, to the Canadian Forces (CF) in 2002. ILDS was deployed in Afghanistan in 2003, making the system the first militarily fielded, teleoperated, multi-sensor vehicle-mounted mine detector and the first with a fielded confirmation sensor. Performance of the prototype in Canadian and independent US trials is summarized and recent results from the production version of the confirmation sensor are discussed. CF operations with ILDS in Afghanistan are described.

  6. Multi-sensor image interpretation using laser radar and thermal images

    NASA Astrophysics Data System (ADS)

    Chu, Chen-Chau; Aggarwal, J. K.

    1991-03-01

    A knowledge-based system is presented which interprets registered laser radar and thermal images. The objective is to detect and recognize man-made objects at kilometer range in outdoor scenes. The multisensor fusion approach is applied to various sensing modalities (range, intensity, velocity, and thermal) to improve both image segmentation and interpretation. The ability to use multiple sensors greatly helps an intelligent platform understand and interact with its environment. The knowledge-based interpretation system, AIMS, is constructed using KEE and Lisp. Low-level attributes of image segments (regions) are computed by the segmentation modules and then converted into the KEE format. The interpretation system applies forward chaining in a bottom-up fashion to derive object-level interpretations from databases generated by the low-level processing modules. Segments are grouped into objects, and then objects are classified into predefined categories. AIMS employs a two-tiered software structure. The efficiency of AIMS is enhanced by transferring nonsymbolic processing tasks to a concurrent service manager (program). Therefore, tasks with different characteristics are executed using different software tools and methodologies.

  7. Interactive multisensor image exploitation: an approach to recognition starting from target objects

    NASA Astrophysics Data System (ADS)

    Geisler, Juergen; Hardt, Marianne; Littfass, Michael; Schumacher, Wilfried

    1998-10-01

    Most common approaches to interactive object recognition in multisensor/multispectral imagery are sensor-data driven. They address the problem of displaying images from multiple sensor sources in a manner adequate to the characteristics of the sensors; fusion of sensed data is the topic of those concepts. This paper discusses a supplementary approach from the opposite end: the domain of target objects. Knowledge about the appearance of objects under various spectral conditions guides the image analyst through the interpretation process. Therefore, the basic concept of an "interactive recognition assistant" is proposed. Starting from a set of candidate objects, the image analyst is guided through a step-by-step interpretation process by being shown the respectively most significant features for efficient reduction of the candidate set. In the context of this approach we discuss the question of modeling and storing the multisensorial appearances of target objects, as well as the problem of an adequate dynamic human-machine interface that takes into account the mental model of human image interpretation.

  8. Development of a parallel detection and processing system using a multidetector array for wave field restoration in scanning transmission electron microscopy

    SciTech Connect

    Taya, Masaki; Matsutani, Takaomi; Ikuta, Takashi; Saito, Hidekazu; Ogai, Keiko; Harada, Yoshihito; Tanaka, Takeo; Takai, Yoshizo

    2007-08-15

    A parallel image detection and image processing system for scanning transmission electron microscopy was developed using a multidetector array consisting of a multianode photomultiplier tube arranged in an 8x8 square array. The system enables 64 images to be taken simultaneously from different scattering directions with a scanning time of 2.6 s. Using the 64 images, phase and amplitude contrast images of gold particles on an amorphous carbon thin film could be separately reconstructed by applying respective 8-shaped bandpass Fourier filters for each image and multiplying the phase and amplitude reconstructing factors.

  9. Development of a parallel detection and processing system using a multidetector array for wave field restoration in scanning transmission electron microscopy.

    PubMed

    Taya, Masaki; Matsutani, Takaomi; Ikuta, Takashi; Saito, Hidekazu; Ogai, Keiko; Harada, Yoshihito; Tanaka, Takeo; Takai, Yoshizo

    2007-08-01

    A parallel image detection and image processing system for scanning transmission electron microscopy was developed using a multidetector array consisting of a multianode photomultiplier tube arranged in an 8 x 8 square array. The system enables 64 images to be taken simultaneously from different scattering directions with a scanning time of 2.6 s. Using the 64 images, phase and amplitude contrast images of gold particles on an amorphous carbon thin film could be separately reconstructed by applying respective 8-shaped bandpass Fourier filters for each image and multiplying the phase and amplitude reconstructing factors. PMID:17764327
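
    For orientation only, the sketch below applies a generic radial bandpass filter to an image in the Fourier domain; it is not the paper's specific 8-shaped filters or reconstruction factors, just an illustration of Fourier-domain bandpass filtering of a detector image.

```python
# Generic illustration (not the paper's specific 8-shaped filters) of bandpass
# filtering an image in the Fourier domain, as used in the reconstruction above.
import numpy as np

def bandpass_filter(image, r_low, r_high):
    """Keep only spatial frequencies with radius in [r_low, r_high] (cycles/image)."""
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    radius = np.hypot(x, y)
    mask = (radius >= r_low) & (radius <= r_high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

img = np.random.rand(256, 256)            # stand-in for one of the 64 detector images
filtered = bandpass_filter(img, 10, 60)
print(filtered.shape)                      # (256, 256)
```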

  10. Aerosol Intercomparison Scenarios for the Giovanni Multi-sensor Data Synergy “Advisor”

    NASA Astrophysics Data System (ADS)

    Lloyd, S. A.; Leptoukh, G. G.; Prados, A. I.; Shen, S.; Pan, J.; Rui, H.; Lynnes, C.; Fox, P. A.; West, P.; Zednik, S.

    2009-12-01

    The combination of remotely sensed aerosols datasets can result in synergistic products that are more useful than the sum of the individual datasets. Multi-sensor composite datasets can be constructed by data merging (taking very closely related parameters to create a single merged dataset to increase spatial and/or temporal coverage), cross-calibration (creating long-term climate data records from two very similar parameters), validation (using a parameter from one dataset to validate a closely related parameter in another), cross-comparison (comparing two datasets with different parameters), and data fusion (using two or more parameters to estimate a third parameter). However, care must be taken to note the differences in data provenance and quality when combining heterogeneous datasets. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is currently in its first year of funding for our project Multi-sensor Data Synergy Advisor (MDSA or Giovanni Advisor) under the NASA Earth Science Technology Office (ESTO) Advanced Information Systems and Technology (AIST) program. The Giovanni Advisor will allow researchers to combine and compare aerosol data from multiple sensors using Giovanni, such that scientifically and statistically valid conclusions can be drawn. The Giovanni Advisor will assist the user in determining how to match up two (or more) sets of data that are related, yet significantly different in some way: in the exact phenomenon being measured, the measurement technique, or the location in space-time and/or the quality of the measurements. Failing to account for these differences in merging, validation, cross calibration, comparison or fusion is likely to yield scientifically dubious results. The Giovanni Advisor captures details of each parameter’s attributes, metadata, retrieval heritage, provenance and data quality and flags relevant differences so that the user can make appropriate “apples to apples” comparisons of

  11. ELECTROMAGNETISM, OPTICS, ACOUSTICS, HEAT TRANSFER, CLASSICAL MECHANICS, AND FLUID DYNAMICS: Quantum Information Processing in An Array of Fiber Coupled Cavities

    NASA Astrophysics Data System (ADS)

    Li, Jian; Zou, Jian; Shao, Bin

    2010-04-01

    We consider a fiber-coupled cavity array in which each cavity is doped with a single two-level atom. By treating the atom-cavity systems as combined polaritonic qubits, we can transform the system into a polaritonic qubit-qubit array in the dispersive regime. We show that both the open-chain and ring configurations of four fiber-coupled cavities can generate the four-qubit W state and cluster state, and can transfer arbitrary one- and two-qubit states. We also discuss the dynamical behavior of the four fiber-coupled cavity array with unequal couplings.
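
    For reference, the four-qubit W state mentioned above has the standard form below (a general definition, not a result specific to the paper):

```latex
% Standard definition of the four-qubit W state referred to above.
\begin{equation}
  \lvert W_4 \rangle \;=\; \tfrac{1}{2}\left(
      \lvert 1000 \rangle + \lvert 0100 \rangle +
      \lvert 0010 \rangle + \lvert 0001 \rangle \right)
\end{equation}
```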

  12. Magnetic arrays

    DOEpatents

    Trumper, D.L.; Kim, W.; Williams, M.E.

    1997-05-20

    Electromagnet arrays are disclosed which can provide selected field patterns in either two or three dimensions, and in particular, which can provide single-sided field patterns in two or three dimensions. These features are achieved by providing arrays which have current densities that vary in the windings both parallel to the array and in the direction of array thickness. 12 figs.

  13. Magnetic arrays

    SciTech Connect

    Trumper, David L.; Kim, Won-jong; Williams, Mark E.

    1997-05-20

    Electromagnet arrays which can provide selected field patterns in either two or three dimensions, and in particular, which can provide single-sided field patterns in two or three dimensions. These features are achieved by providing arrays which have current densities that vary in the windings both parallel to the array and in the direction of array thickness.

  14. A vehicle mounted multi-sensor array for waste site characterization

    SciTech Connect

    Baumgart, C.W.; Ciarcia, C.A.; Tunnell, T.W.

    1995-02-01

    Personnel at AlliedSignal Aerospace, Kirtland Operations (formerly EG&G Energy Measurements, Kirtland Operations) and EG&G Energy Measurements, Los Alamos Operations, have successfully developed and demonstrated a number of technologies which can be applied to the environmental remediation and waste management problem. These applications have included the development of self-contained and towed remote sensing platforms and advanced signal analysis techniques for the detection and characterization of subsurface features. This presentation will provide a brief overview of applications that have been and are currently being fielded by both AlliedSignal and EG&G Energy Measurements personnel and will describe some of the ways that such technologies can and are being used for the detection and characterization of hazardous waste sites.

  15. Sampled Longest Common Prefix Array

    NASA Astrophysics Data System (ADS)

    Sirén, Jouni

    When augmented with the longest common prefix (LCP) array and some other structures, the suffix array can solve many string processing problems in optimal time and space. A compressed representation of the LCP array is also one of the main building blocks in many compressed suffix tree proposals. In this paper, we describe a new compressed LCP representation: the sampled LCP array. We show that when used with a compressed suffix array (CSA), the sampled LCP array often offers better time/space trade-offs than the existing alternatives. We also show how to construct the compressed representations of the LCP array directly from a CSA.
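
    For orientation, the sketch below builds a plain, uncompressed LCP array from a suffix array with Kasai's linear-time algorithm; the paper's sampled LCP representation is a compressed alternative constructed from a CSA rather than this direct method.

```python
# Plain (uncompressed) LCP array construction from a suffix array using Kasai's
# linear-time algorithm (shown for orientation only; the paper's sampled LCP
# representation is a compressed alternative built on top of a CSA).
def lcp_from_suffix_array(text, sa):
    n = len(text)
    rank = [0] * n
    for i, suffix_start in enumerate(sa):
        rank[suffix_start] = i
    lcp = [0] * n                 # lcp[i] = LCP(text[sa[i-1]:], text[sa[i]:]), lcp[0] = 0
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]   # suffix preceding suffix i in sorted order
            while i + h < n and j + h < n and text[i + h] == text[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h > 0:
                h -= 1
        else:
            h = 0
    return lcp

text = "banana"
sa = sorted(range(len(text)), key=lambda i: text[i:])    # toy suffix array construction
print(sa)                                # [5, 3, 1, 0, 4, 2]
print(lcp_from_suffix_array(text, sa))   # [0, 1, 3, 0, 0, 2]
```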

  16. 3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2012-01-01

    Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free form artworks. The structured light scanner provides high resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referring metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been experienced through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork has been a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079
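
    A minimal sketch of the coordinate transposition step described above, assuming an invented 4x4 rotation-translation matrix: points of a range map in the scanner's local frame are mapped into the global frame defined by the stereo-photogrammetric sensor.

```python
# Sketch of mapping a range map from the scanner's local frame into the global
# frame defined by the stereo-photogrammetric sensor, via a homogeneous 4x4
# rotation-translation matrix. The pose values here are invented placeholders.
import numpy as np

def local_to_global(points_local, T_global_from_local):
    """points_local: (N, 3) array; T: 4x4 homogeneous transform."""
    n = points_local.shape[0]
    homog = np.hstack([points_local, np.ones((n, 1))])      # (N, 4)
    return (T_global_from_local @ homog.T).T[:, :3]

theta = np.deg2rad(30.0)                                     # placeholder view rotation
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.25, -0.10, 1.50])                            # placeholder translation (m)

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

range_map_points = np.random.rand(1000, 3)                   # stand-in for one scan view
print(local_to_global(range_map_points, T).shape)            # (1000, 3)
```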

  17. 3D reconstruction and restoration monitoring of sculptural artworks by a multi-sensor framework.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2012-01-01

    Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free form artworks. The structured light scanner provides high resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referring metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been experienced through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork has been a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079

  18. a Meteorological Risk Assessment Method for Power Lines Based on GIS and Multi-Sensor Integration

    NASA Astrophysics Data System (ADS)

    Lin, Zhiyong; Xu, Zhimin

    2016-06-01

    Power lines, exposed to the natural environment, are vulnerable to various kinds of meteorological factors. Traditional research mainly addresses the influence of a single meteorological condition on a power line and lacks a comprehensive evaluation and analysis of the combined effects of multiple meteorological factors. In this paper, we use meteorological monitoring data obtained from multiple sensors to implement meteorological risk assessment and early warning for power lines. First, we generate meteorological raster maps from discrete meteorological monitoring data using spatial interpolation. Second, an expert-scoring-based analytic hierarchy process is used to compute the power line risk index for each kind of meteorological condition and to establish a mathematical model of meteorological risk. By applying this model in the ArcGIS raster calculator, we obtain a raster map of the overall meteorological risk to power lines. Finally, by overlaying the power line buffer layer on that raster map, we obtain the risk index around any given section of a power line, providing significant guidance for power line risk management. In the experiment, based on five kinds of observation data gathered from meteorological stations in Guizhou Province, China (wind, lightning, rain, ice, and temperature), we carried out the meteorological risk analysis for real power lines, and the experimental results demonstrate the feasibility and validity of the proposed method.
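
    A minimal sketch of the weighted-overlay step, assuming invented AHP weights and placeholder factor rasters; the paper performs this step in the ArcGIS raster calculator rather than in code.

```python
# Sketch of the weighted-overlay step described above: interpolated factor
# rasters are combined with AHP-derived weights into one risk raster. Both the
# weights and the rasters here are invented placeholders, not the paper's values.
import numpy as np

factors = {                                   # normalized 0-1 factor rasters
    "wind":      np.random.rand(200, 300),
    "lightning": np.random.rand(200, 300),
    "rain":      np.random.rand(200, 300),
    "ice":       np.random.rand(200, 300),
    "temp":      np.random.rand(200, 300),
}
ahp_weights = {"wind": 0.30, "lightning": 0.25, "rain": 0.20, "ice": 0.15, "temp": 0.10}

risk = sum(ahp_weights[name] * raster for name, raster in factors.items())

# Risk along the line: sample the raster inside a buffer mask around the power line.
buffer_mask = np.zeros((200, 300), dtype=bool)
buffer_mask[95:105, :] = True                 # placeholder corridor for one line
print(float(risk[buffer_mask].mean()))        # mean risk index within the buffer
```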

  19. Multi-Sensor Observations of Earthquake Related Atmospheric Signals over Major Geohazard Validation Sites

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Davindenko, D.; Hattori, K.; Kafatos, M.; Taylor, P.

    2012-01-01

    We are conducting a scientific validation study involving multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several atmospheric and environmental parameters, which we found are associated with earthquakes, namely: thermal infrared radiation, outgoing long-wavelength radiation, ionospheric electron density, and atmospheric temperature and humidity. For the first time, we applied this approach to selected GEOSS sites prone to earthquakes or volcanoes, providing a new opportunity to cross-validate our results against dense networks of in-situ and space measurements. We investigated two different seismic aspects: first, sites with recent large earthquakes, viz. Tohoku-oki (M9, 2011, Japan) and the Emilia region (M5.9, 2012, N. Italy); our retrospective analysis of satellite data has shown the presence of anomalies in the atmosphere. Second, we performed a retrospective analysis to check the recurrence of similar anomalous behavior in the atmosphere/ionosphere over three regions with distinct geological settings and high seismicity: Taiwan, Japan, and Kamchatka, covering 40 major earthquakes (M>5.9) for the period 2005-2009. We found anomalous behavior before all of these events with no false negatives; false positives were less than 10%. Our initial results suggest that multi-instrument space-borne and ground observations show a systematic appearance of atmospheric anomalies near the epicentral area that could be explained by a coupling between the observed physical parameters and earthquake preparation processes.

  20. Assessment of environmental quality of Bucharest urban area by multisensor satellite data

    NASA Astrophysics Data System (ADS)

    Zoran, Maria A.; Zoran, Liviu Florin V.

    2004-10-01

    Urban environmental quality is an important part of efficient urban environmental planning and management. A scientific management system for protection, conservation, and restoration must be based on reliable information on bio-geophysical, geomorphological, and dynamic processes, as well as climatic change effects. Synergetic use of quasi-simultaneously acquired multi-sensor data may therefore allow a better approach to change detection and to environmental impact classification and assessment in urban areas. Because it is difficult to quantify the environmental impacts of human and industrial activities in urban areas, many different indicators can conflict with each other. The spatial and temporal distribution of land cover is a fundamental dataset for urban ecological research. Based on Landsat TM, ETM, SPOT, and SAR data for the Bucharest metropolitan area in Romania, a land cover classification was performed using spectral signatures of different terrain features to separate surface units of the urban and suburban area. A complete set of criteria to evaluate and examine urban environmental quality, including air pollution indicators, water pollution indicators, treated solid waste indicators, noise pollution indicators, and urban green space, has been used.

  1. A multi-sensor remote sensing approach for measuring primary production from space

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine

    1989-01-01

    It is proposed to develop a multi-sensor remote sensing method for computing marine primary productivity from space, based on the capability to measure the primary ocean variables which regulate photosynthesis. The three variables and the sensors which measure them are: (1) downwelling photosynthetically available irradiance, measured by the VISSR sensor on the GOES satellite, (2) sea-surface temperature from AVHRR on NOAA series satellites, and (3) chlorophyll-like pigment concentration from the Nimbus-7/CZCS sensor. These and other measured variables would be combined within empirical or analytical models to compute primary productivity. With this proposed capability of mapping primary productivity on a regional scale, we could begin realizing a more precise and accurate global assessment of its magnitude and variability. Applications would include supplementation and expansion on the horizontal scale of ship-acquired biological data, which is more accurate and which supplies the vertical components of the field, monitoring oceanic response to increased atmospheric carbon dioxide levels, correlation with observed sedimentation patterns and processes, and fisheries management.

  2. Imaging Rupture Process of the 2015 Mw 8.3 Illapel Earthquake Using the US Seismic Array

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ghosh, Abhijit

    2016-07-01

    We study the rupture process of the Mw 8.3 Illapel, Chile earthquake that occurred on 16 September 2015 using the US seismic network as a large aperture array. We apply the back-projection technique using two frequency bands, 0.1-0.5 and 0.25-1 Hz. Both frequency bands reveal that this event is characterized by rupture of three patches. The higher frequency band shows an earlier burst of seismic radiation and illuminates a relatively down-dip patch of energy radiation. On the other hand, the lower frequency band shows a more up-dip rupture and matches well with the slip inversion model in other studies. The Illapel earthquake ruptures about 100 km along-strike, and shows 40 km of up-dip and 40 km of down-dip movement along the subduction megathrust fault. The earthquake first ruptures around the epicenter with a relatively low level of seismic radiation. Then, it propagates northeast along the Juan Fernandez Ridge (JFR) to rupture a patch down-dip accompanied by strong higher frequency seismic radiation. Finally, it ruptures to the northwest of the epicenter and terminates south of the Challenger fracture zone (CFZ), releasing a burst of strong lower frequency seismic radiation. Most of the aftershocks are either within or at the edge of the rupture patch, a region characterized by high coupling in central Chile. The rupture is bounded along-strike by two fracture zones to the north and south. The JFR to the south of the rupture zone may have acted as a barrier along-strike, leaving the area south of the mainshock vulnerable to a large damaging earthquake in the near future.
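
    A minimal sketch of the shift-and-stack core of back-projection, under strongly simplifying assumptions (constant-velocity travel times instead of a 1-D Earth model, synthetic waveform envelopes); it illustrates the technique only, not the authors' processing chain.

```python
# Minimal shift-and-stack core of the back-projection method referred to above.
# Travel times use a crude constant-velocity placeholder instead of a real
# 1-D Earth model, and the waveforms are synthetic; illustration only.
import numpy as np

def backproject(envelopes, station_xy, grid_xy, dt, velocity=8.0):
    """envelopes: (n_sta, n_samples) waveform envelopes; coordinates in km.
    Returns the peak stack amplitude for each candidate grid node."""
    n_sta, n_samp = envelopes.shape
    power = np.zeros(len(grid_xy))
    for g, (gx, gy) in enumerate(grid_xy):
        dist = np.hypot(station_xy[:, 0] - gx, station_xy[:, 1] - gy)
        shifts = np.minimum(np.round(dist / velocity / dt).astype(int), n_samp)
        stack = np.zeros(n_samp)
        for s in range(n_sta):
            stack[: n_samp - shifts[s]] += envelopes[s, shifts[s]:]
        power[g] = stack.max() / n_sta
    return power

rng = np.random.default_rng(0)
station_xy = rng.uniform(-2000, 2000, size=(50, 2))          # stand-in continental array (km)
grid_xy = [(x, y) for x in range(-100, 101, 50) for y in range(-100, 101, 50)]
envelopes = np.abs(rng.normal(size=(50, 3000)))              # stand-in filtered envelopes
print(backproject(envelopes, station_xy, grid_xy, dt=0.1).shape)   # (25,)
```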

  3. Imaging Rupture Process of the 2015 Mw 8.3 Illapel Earthquake Using the US Seismic Array

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ghosh, Abhijit

    2016-06-01

    We study the rupture process of the Mw 8.3 Illapel, Chile earthquake that occurred on 16 September 2015 using the US seismic network as a large aperture array. We apply the back-projection technique using two frequency bands, 0.1-0.5 and 0.25-1 Hz. Both frequency bands reveal that this event is characterized by rupture of three patches. The higher frequency band shows an earlier burst of seismic radiation and illuminates a relatively down-dip patch of energy radiation. On the other hand, the lower frequency band shows a more up-dip rupture and matches well with the slip inversion model in other studies. The Illapel earthquake ruptures about 100 km along-strike, and shows 40 km of up-dip and 40 km of down-dip movement along the subduction megathrust fault. The earthquake first ruptures around the epicenter with a relatively low level of seismic radiation. Then, it propagates northeast along the Juan Fernandez Ridge (JFR) to rupture a patch down-dip accompanied by strong higher frequency seismic radiation. Finally, it ruptures to the northwest of the epicenter and terminates south of the Challenger fracture zone (CFZ), releasing a burst of strong lower frequency seismic radiation. Most of the aftershocks are either within or at the edge of the rupture patch, a region characterized by high coupling in central Chile. The rupture is bounded along-strike by two fracture zones to the north and south. The JFR to the south of the rupture zone may have acted as a barrier along-strike, leaving the area south of the mainshock vulnerable to a large damaging earthquake in the near future.

  4. The New Pelagic Operational Observatory of the Catalan Sea (OOCS) for the Multisensor Coordinated Measurement of Atmospheric and Oceanographic Conditions

    PubMed Central

    Bahamon, Nixon; Aguzzi, Jacopo; Bernardello, Raffaele; Ahumada-Sempoal, Miguel-Angel; Puigdefabregas, Joan; Cateura, Jordi; Muñoz, Eduardo; Velásquez, Zoila; Cruzado, Antonio

    2011-01-01

    The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the coordinated multisensor measurement of atmospheric and oceanographic conditions has been recently installed (2009) in the Catalan Sea (41°39′N, 2°54′E; Western Mediterranean) and continuously operated (with minor maintenance gaps) until today. This multiparametric platform is moored at 192 m depth, 9.3 km off Blanes harbour (Girona, Spain). It is composed of a buoy holding atmospheric sensors and a set of oceanographic sensors measuring the water conditions over the upper 100 m depth. The station is located close to the head of the Blanes submarine canyon where an important multispecies pelagic and demersal fishery gives the station ecological and economic relevance. The OOCS provides important records on atmospheric and oceanographic conditions, the latter through the measurement of hydrological and biogeochemical parameters, at depths with a time resolution never attained before for this area of the Mediterranean. Twenty four moored sensors and probes operating in a coordinated fashion provide important data on Essential Ocean Variables (EOVs; UNESCO) such as temperature, salinity, pressure, dissolved oxygen, chlorophyll fluorescence, and turbidity. In comparison with other pelagic observatories presently operating in other world areas, OOCS also measures photosynthetic available radiation (PAR) from above the sea surface and at different depths in the upper 50 m. Data are recorded each 30 min and transmitted in real-time to a ground station via GPRS. This time series is published and automatically updated at the frequency of data collection on the official OOCS website (http://www.ceab.csic.es/~oceans). Under development are embedded automated routines for the in situ data treatment and assimilation into numerical models, in order to provide a reliable local marine processing forecast. In this work, our goal is to detail the OOCS multisensor architecture in relation to the

  5. The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the multisensor coordinated measurement of atmospheric and oceanographic conditions.

    PubMed

    Bahamon, Nixon; Aguzzi, Jacopo; Bernardello, Raffaele; Ahumada-Sempoal, Miguel-Angel; Puigdefabregas, Joan; Cateura, Jordi; Muñoz, Eduardo; Velásquez, Zoila; Cruzado, Antonio

    2011-01-01

    The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the coordinated multisensor measurement of atmospheric and oceanographic conditions has been recently installed (2009) in the Catalan Sea (41°39'N, 2°54'E; Western Mediterranean) and continuously operated (with minor maintenance gaps) until today. This multiparametric platform is moored at 192 m depth, 9.3 km off Blanes harbour (Girona, Spain). It is composed of a buoy holding atmospheric sensors and a set of oceanographic sensors measuring the water conditions over the upper 100 m depth. The station is located close to the head of the Blanes submarine canyon where an important multispecies pelagic and demersal fishery gives the station ecological and economic relevance. The OOCS provides important records on atmospheric and oceanographic conditions, the latter through the measurement of hydrological and biogeochemical parameters, at depths with a time resolution never attained before for this area of the Mediterranean. Twenty four moored sensors and probes operating in a coordinated fashion provide important data on Essential Ocean Variables (EOVs; UNESCO) such as temperature, salinity, pressure, dissolved oxygen, chlorophyll fluorescence, and turbidity. In comparison with other pelagic observatories presently operating in other world areas, OOCS also measures photosynthetic available radiation (PAR) from above the sea surface and at different depths in the upper 50 m. Data are recorded each 30 min and transmitted in real-time to a ground station via GPRS. This time series is published and automatically updated at the frequency of data collection on the official OOCS website (http://www.ceab.csic.es/~oceans). Under development are embedded automated routines for the in situ data treatment and assimilation into numerical models, in order to provide a reliable local marine processing forecast. In this work, our goal is to detail the OOCS multisensor architecture in relation to the coordinated

  6. A contour-based approach to multisensor image registration.

    PubMed

    Li, H; Manjunath, B S; Mitra, S K

    1995-01-01

    Image registration is concerned with the establishment of correspondence between images of the same scene. One challenging problem in this area is the registration of multispectral/multisensor images. In general, such images have different gray level characteristics, and simple techniques such as those based on area correlations cannot be applied directly. On the other hand, contours representing region boundaries are preserved in most cases. The authors present two contour-based methods which use region boundaries and other strong edges as matching primitives. The first contour matching algorithm is based on the chain-code correlation and other shape similarity criteria such as invariant moments. Closed contours and the salient segments along the open contours are matched separately. This method works well for image pairs in which the contour information is well preserved, such as the optical images from Landsat and Spot satellites. For the registration of the optical images with synthetic aperture radar (SAR) images, the authors propose an elastic contour matching scheme based on the active contour model. Using the contours from the optical image as the initial condition, accurate contour locations in the SAR image are obtained by applying the active contour model. Both contour matching methods are automatic and computationally quite efficient. Experimental results with various kinds of image data have verified the robustness of the algorithms, which have outperformed manual registration in terms of root mean square error at the control points. PMID:18289982
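
    A minimal sketch of the invariant-moments shape criterion mentioned above: normalized central moments and the first two Hu invariants of a binary region. The region is a synthetic placeholder; the authors combine this criterion with chain-code correlation and other measures.

```python
# Sketch of the "invariant moments" shape criterion mentioned above: normalized
# central moments and the first two Hu invariants of a binary region. The region
# here is a synthetic placeholder.
import numpy as np

def hu_invariants_12(region):
    """region: 2-D boolean/0-1 array. Returns (phi1, phi2)."""
    ys, xs = np.nonzero(region)
    x_bar, y_bar = xs.mean(), ys.mean()

    def mu(p, q):                                  # central moment mu_pq
        return np.sum((xs - x_bar) ** p * (ys - y_bar) ** q)

    def eta(p, q):                                 # normalized central moment
        return mu(p, q) / mu(0, 0) ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

region = np.zeros((64, 64), dtype=bool)
region[20:40, 10:50] = True                        # placeholder blob
print(hu_invariants_12(region))                    # translation/scale invariant values
```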

  7. Multi-sensor integration for unmanned terrain modeling

    NASA Astrophysics Data System (ADS)

    Sukumar, Sreenivas R.; Yu, Sijie; Page, David L.; Koschan, Andreas F.; Abidi, Mongi A.

    2006-05-01

    State-of-the-art unmanned ground vehicles are capable of understanding and adapting to arbitrary road terrain for navigation. The robotic mobility platforms mounted with sensors detect and report security concerns for subsequent action. Often, the information based on the localization of the unmanned vehicle is not sufficient for deploying army resources. In such a scenario, a three dimensional (3D) map of the area that the ground vehicle has surveyed in its trajectory would provide a priori spatial knowledge for directing resources in an efficient manner. To that end, we propose a mobile, modular imaging system that incorporates multi-modal sensors for mapping unstructured arbitrary terrain. Our proposed system leverages 3D laser-range sensors, video cameras, global positioning systems (GPS) and inertial measurement units (IMU) towards the generation of photo-realistic, geometrically accurate, geo-referenced 3D terrain models. Based on the summary of the state-of-the-art systems, we address the need and hence several challenges in the real-time deployment, integration and visualization of data from multiple sensors. We document design issues concerning each of these sensors and present a simple temporal alignment method to integrate multi-sensor data into textured 3D models. These 3D models, in addition to serving as a priori for path planning, can also be used in simulators that study vehicle-terrain interaction. Furthermore, we show our 3D models possessing the required accuracy even for crack detection towards road surface inspection in airfields and highways.
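
    A minimal sketch of a simple temporal alignment, assuming invented timestamps and poses: GPS/IMU pose samples are interpolated to the laser scan times so each scan can be placed along the vehicle trajectory. The paper's own alignment method may differ in detail.

```python
# Sketch of a simple temporal alignment between pose (GPS/IMU) samples and laser
# scan timestamps via linear interpolation. All timestamps and poses below are
# invented placeholders.
import numpy as np

pose_t = np.arange(0.0, 10.0, 0.1)                    # 10 Hz GPS/IMU timestamps (s)
pose_xyz = np.cumsum(np.random.randn(len(pose_t), 3) * 0.01, axis=0)   # drifting path

scan_t = np.arange(0.05, 9.95, 0.025)                 # 40 Hz laser scan timestamps (s)

# Interpolate each pose coordinate to the scan times (per-axis linear interpolation).
scan_pose = np.column_stack(
    [np.interp(scan_t, pose_t, pose_xyz[:, k]) for k in range(3)]
)
print(scan_pose.shape)                                # (n_scans, 3): one pose per scan
```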

  8. Multi-sensor Testing for Automated Rendezvous and Docking

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Carrington, Connie K.

    2008-01-01

    During the past two years, many sensors have been tested in an open-loop fashion in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL) to both determine their suitability for use in Automated Rendezvous and Docking (AR&D) systems and to ensure the test facility is prepared for future multi-sensor testing. The primary focus of this work was in support of the CEV AR&D system, because the AR&D sensor technology area was identified as one of the top risks in the program. In 2006, four different sensors were tested individually or in a pair in the MSFC FRL. In 2007, four sensors, two each of two different types, were tested simultaneously. In each set of tests, the target was moved through a series of pre-planned trajectories while the sensor tracked it. In addition, a laser tracker "truth" sensor also measured the target motion. The tests demonstrated the functionality of testing four sensors simultaneously as well as the capabilities (both good and bad) of all of the different sensors tested. This paper outlines the test setup and conditions, briefly describes the facility, summarizes the earlier results of the individual sensor tests, and describes in some detail the results of the four-sensor testing. Post-test analysis includes data fusion by minimum variance estimation and sequential Kalman filtering. This Sensor Technology Project work was funded by NASA's Exploration Technology Development Program.
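
    A minimal sketch of minimum-variance (inverse-variance weighted) fusion of simultaneous measurements from several sensors, with invented numbers; the actual post-test analysis also used sequential Kalman filtering.

```python
# Sketch of minimum-variance (inverse-variance weighted) fusion of simultaneous
# measurements from several rendezvous sensors. The values are invented.
import numpy as np

ranges = np.array([10.42, 10.55, 10.47, 10.61])        # same target range, 4 sensors (m)
sigmas = np.array([0.05, 0.20, 0.10, 0.30])            # per-sensor 1-sigma noise (m)

weights = 1.0 / sigmas**2
fused = np.sum(weights * ranges) / np.sum(weights)      # minimum-variance estimate
fused_sigma = np.sqrt(1.0 / np.sum(weights))            # its standard deviation

print(f"fused range = {fused:.3f} m +/- {fused_sigma:.3f} m")
```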

  9. Evaluating fusion techniques for multi-sensor satellite image data

    SciTech Connect

    Martin, Benjamin W; Vatsavai, Raju

    2013-01-01

    Satellite image data fusion is a topic of interest in many areas including environmental monitoring, emergency response, and defense. Typically any single satellite sensor cannot provide all of the benefits offered by a combination of different sensors (e.g., high-spatial but low spectral resolution vs. low-spatial but high spectral, optical vs. SAR). Given the respective strengths and weaknesses of the different types of image data, it is beneficial to fuse many types of image data to extract as much information as possible from the data. Our work focuses on the fusion of multi-sensor image data into a unified representation that incorporates the potential strengths of a sensor in order to minimize classification error. Of particular interest is the fusion of optical and synthetic aperture radar (SAR) images into a single, multispectral image of the best possible spatial resolution. We explore various methods to optimally fuse these images and evaluate the quality of the image fusion by using K-means clustering to categorize regions in the fused images and comparing the accuracies of the resulting categorization maps.
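
    A minimal sketch of the evaluation step, assuming random placeholder imagery and a naive layer-stack "fusion": the fused bands are clustered with K-means and the cluster map is compared against a reference map. The fusion methods actually evaluated in the paper are not reproduced here.

```python
# Sketch of the evaluation step described above: K-means clustering of a fused
# multi-band image and comparison of the cluster map against a reference map.
# The images and reference labels here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

h, w = 100, 100
optical = np.random.rand(h, w, 4)              # stand-in 4-band optical image
sar = np.random.rand(h, w, 1)                  # stand-in SAR backscatter band

fused = np.concatenate([optical, sar], axis=2)          # naive layer-stack "fusion"
pixels = fused.reshape(-1, fused.shape[2])

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
label_map = labels.reshape(h, w)

reference = np.random.randint(0, 5, size=(h, w))        # stand-in reference categories
# Crude per-pixel agreement; cluster labels would normally be matched to
# reference classes first (e.g., by majority vote) before scoring.
agreement = np.mean(label_map == reference)
print(f"raw agreement with reference map: {agreement:.2%}")
```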

  10. Reliable sources and uncertain decisions in multisensor systems

    NASA Astrophysics Data System (ADS)

    Minor, Christian; Johnson, Kevin

    2015-05-01

    Conflict among information sources is a feature of fused multisource and multisensor systems. Accordingly, the subject of conflict resolution has a long history in the literature of data fusion algorithms such as that of Dempster-Shafer theory (DS). Most conflict resolution strategies focus on distributing the conflict among the elements of the frame of discernment (the set of hypotheses that describe the possible decisions for which evidence is obtained) through rescaling of the evidence. These "closed-world" strategies imply that conflict is due to the uncertainty in evidence sources stemming from their reliability. An alternative approach is the "open-world" hypothesis, which allows for the presence of "unknown" elements not included in the original frame of discernment. Here, conflict must be considered as a result of uncertainty in the frame of the discernment, rather than solely the province of evidence sources. Uncertainty in the operating environment of a fused system is likely to appear as an open-world scenario. Understanding the origin of conflict (source versus frame of discernment uncertainty) is a challenging area for research in fused systems. Determining the ratio of these uncertainties provides useful insights into the operation of fused systems and confidence in their decisions for a variety of operating environments. Results and discussion for the computation of these uncertainties are presented for several combination rules with simulated data sets.
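
    A minimal sketch of Dempster's rule of combination for two sources over a small frame of discernment, with invented mass assignments; it shows the conflict mass K that the closed-world rule normalizes away.

```python
# Sketch of Dempster's rule of combination for two sources over a small frame of
# discernment, showing the conflict mass K that the closed-world rule normalizes
# away. The mass assignments are invented for illustration.
from itertools import product

frame = frozenset({"A", "B", "C"})
m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.3, frame: 0.1}
m2 = {frozenset({"B"}): 0.5, frozenset({"B", "C"}): 0.3, frame: 0.2}

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (set1, w1), (set2, w2) in product(m1.items(), m2.items()):
        inter = set1 & set2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                     # mass assigned to the empty set
    return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

fused, K = dempster_combine(m1, m2)
print(f"conflict K = {K:.2f}")
for s, w in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(s), round(w, 3))
```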

  11. Fast virtual shadow projection system as part of a virtual multisensor assistance system

    NASA Astrophysics Data System (ADS)

    Haskamp, Klaus; Kästner, Markus; Reithmeier, Eduard

    2011-05-01

    Quality testing is one of the main components of a production process. Its main task is the inspection of the relevant geometry features against a predefined tolerance range. To verify that the relevant geometry features can be detected with a measurement uncertainty less than the predefined tolerance range, multiple measurements of an appropriate reference specimen have to be made. The associated expenditure of time and money is high and can be reduced using a numerical simulation of the whole measurement process. The measurement uncertainty can be estimated using a virtual measurement process and Monte-Carlo methods. By combining the simulation with Monte-Carlo methods it is possible, for example, to calculate the optimal alignment of the workpiece within the measurement volume, where the optimality criterion can be defined as a minimum of the measurement uncertainty or as a holistic measurement. In addition to the estimation of uncertainties, the virtual measurement system can be used as part of an assistance system that provides measurement strategies with respect to different criteria, such as the minimisation of the measurement uncertainty. This is the field of research of subproject B5 of the collaborative research centre 489 (CRC 489), funded by the German Research Foundation (DFG). The main task is the setup of a virtual multisensor assistance system for the calculation of optimised, workpiece-adapted measurement strategies. In this paper, the virtual measurement system and process of a shadow projection system are explained in detail. Besides the mathematical model, a verification of the simulation and a concept for the estimation of measurement uncertainties are given.
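
    A minimal sketch of the Monte-Carlo idea described above, assuming an invented toy measurement model and noise magnitudes: a virtual measurement of a reference diameter is repeated with randomly perturbed influence quantities, and the spread of the results estimates the measurement uncertainty.

```python
# Sketch of the Monte-Carlo uncertainty estimation idea described above: a
# virtual measurement of a reference diameter is repeated with randomly perturbed
# influence quantities. The measurement model and noise levels are invented.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000
true_diameter = 20.000                                   # mm, reference specimen

def virtual_measurement(alignment_err, pixel_noise, scale_err):
    """Toy model: diameter as seen by a virtual shadow-projection measurement."""
    return (true_diameter + alignment_err + pixel_noise) * (1.0 + scale_err)

samples = virtual_measurement(
    alignment_err=rng.normal(0.0, 0.002, n_runs),        # mm, workpiece alignment
    pixel_noise=rng.normal(0.0, 0.001, n_runs),          # mm, edge detection noise
    scale_err=rng.normal(0.0, 5e-5, n_runs),             # relative magnification error
)

print(f"mean = {samples.mean():.4f} mm, u = {samples.std(ddof=1):.4f} mm")
```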

  12. Reconfigurable mosaic annular arrays.

    PubMed

    Thomenius, Kai E; Wodnicki, Robert; Cogan, Scott D; Fisher, Rayette A; Burdick, Bill; Smith, L Scott; Khuri-Yakub, Pierre; Lin, Der-Song; Zhuang, Xuefeng; Bonitz, Barry; Davies, Todd; Thomas, Glen; Woychik, Charles

    2014-07-01

    Mosaic annular arrays (MAA) based on reconfigurable array (RA) transducer electronics assemblies are presented as a potential solution for future highly integrated ultrasonic transducer subsystems. Advantages of MAAs include excellent beam quality and depth of field resulting from superior elevational focus compared with 1-D electronically scanned arrays, as well as potentially reduced cost, size, and power consumption resulting from the use of a limited number of beamforming channels for processing a large number of subelements. Specific design tradeoffs for these highly integrated arrays are discussed in terms of array specifications for center frequency, element pitch, and electronic switch-on resistance. Large-area RAs essentially function as RC delay lines. Efficient architectures which take into account RC delay effects are presented. Architectures for integration of the transducer and electronics layers of large-area array implementations are reviewed. PMID:24960699

  13. Kokkos Array

    Energy Science and Technology Software Center (ESTSC)

    2012-09-12

    The Kokkos Array library implements shared-memory array data structures and parallel task dispatch interfaces for data-parallel computational kernels that are performance-portable to multicore-CPU and manycore-accelerator (e.g., GPGPU) devices.

  14. Systolic arrays

    SciTech Connect

    Moore, W.R.; McCabe, A.P.H.; Vrquhart, R.B.

    1987-01-01

    Selected Contents of this book are: Efficient Systolic Arrays for the Solution of Toeplitz Systems, The Derivation and Utilization of Bit Level Systolic Array Architectures, an Efficient Systolic Array for Distance Computation Required in a Video-Codec Based Motion-Detection, On Realizations of Least-Squares Estimation and Kalman Filtering by Systolic Arrays, and Comparison of Systolic and SIMD Architectures for Computer Vision Computations.

  15. Nanocylinder arrays

    DOEpatents

    Tuominen, Mark; Schotter, Joerg; Thurn-Albrecht, Thomas; Russell, Thomas P.

    2007-03-13

    Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores, that in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.

  16. Nanocylinder arrays

    DOEpatents

    Tuominen, Mark; Schotter, Joerg; Thurn-Albrecht, Thomas; Russell, Thomas P.

    2009-08-11

    Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores, that in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.

  17. Large-Scale Precise Printing of Ultrathin Sol-Gel Oxide Dielectrics for Directly Patterned Solution-Processed Metal Oxide Transistor Arrays.

    PubMed

    Lee, Won-June; Park, Won-Tae; Park, Sungjun; Sung, Sujin; Noh, Yong-Young; Yoon, Myung-Han

    2015-09-01

    Ultrathin and dense metal oxide gate dielectric layers are reported, obtained by simple printing of AlOx and HfOx sol-gel precursors. Large-area printed indium gallium zinc oxide (IGZO) thin-film transistor arrays, which exhibit mobilities >5 cm² V⁻¹ s⁻¹ and a gate leakage current of 10⁻⁹ A cm⁻² at a very low operation voltage of 2 V, are demonstrated by a continuous, simple bar-coating process. PMID:26222338

  18. Statistical generation of training sets for measuring NO3(-), NH4(+) and major ions in natural waters using an ion selective electrode array.

    PubMed

    Mueller, Amy V; Hemond, Harold F

    2016-05-18

    Knowledge of ionic concentrations in natural waters is essential to understand watershed processes. Inorganic nitrogen, in the form of nitrate and ammonium ions, is a key nutrient as well as a participant in redox, acid-base, and photochemical processes of natural waters, leading to spatiotemporal patterns of ion concentrations at scales as small as meters or hours. Current options for measurement in situ are costly, relying primarily on instruments adapted from laboratory methods (e.g., colorimetric, UV absorption); free-standing and inexpensive ISE sensors for NO3(-) and NH4(+) could be attractive alternatives if interferences from other constituents were overcome. Multi-sensor arrays, coupled with appropriate non-linear signal processing, offer promise in this capacity but have not yet successfully achieved signal separation for NO3(-) and NH4(+) in situ at naturally occurring levels in unprocessed water samples. A novel signal processor, underpinned by an appropriate sensor array, is proposed that overcomes previous limitations by explicitly integrating basic chemical constraints (e.g., charge balance). This work further presents a rationalized process for the development of such in situ instrumentation for NO3(-) and NH4(+), including a statistical-modeling strategy for instrument design, training/calibration, and validation. Statistical analysis reveals that historical concentrations of major ionic constituents in natural waters across New England strongly covary and are multi-modal. This informs the design of a statistically appropriate training set, suggesting that the strong covariance of constituents across environmental samples can be exploited through appropriate signal processing mechanisms to further improve estimates of minor constituents. Two artificial neural network architectures, one expanded to incorporate knowledge of basic chemical constraints, were tested to process outputs of a multi-sensor array, trained using datasets of varying degrees of
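
    A minimal sketch of one way a charge-balance constraint can be folded into concentration estimation from an ISE array, here as a quadratic penalty in a linear least-squares fit with an invented calibration matrix; the paper's actual signal processor is a neural-network architecture, not this linear model.

```python
# Sketch of folding a charge-balance constraint into concentration estimation
# from an ISE array, here as a quadratic penalty in a linear least-squares fit.
# The calibration matrix, charges, and data are invented; the paper's actual
# signal processor is a neural-network architecture, not this linear model.
import numpy as np

ions = ["NO3-", "NH4+", "Cl-", "Na+"]
z = np.array([-1.0, +1.0, -1.0, +1.0])             # ionic charges

S = np.array([                                     # hypothetical sensor sensitivities
    [0.9, 0.1, 0.2, 0.0],
    [0.1, 0.8, 0.0, 0.2],
    [0.2, 0.0, 1.0, 0.1],
    [0.0, 0.2, 0.1, 0.9],
    [0.3, 0.3, 0.3, 0.3],
])
c_true = np.array([0.10, 0.05, 0.20, 0.25])        # meq/L, charge-balanced test sample
v = S @ c_true + np.random.normal(0, 0.01, S.shape[0])   # noisy array response

lam = 10.0                                         # weight of the charge-balance penalty
# minimize ||S c - v||^2 + lam * (z . c)^2  ->  (S^T S + lam z z^T) c = S^T v
A = S.T @ S + lam * np.outer(z, z)
c_est = np.linalg.solve(A, S.T @ v)

print(dict(zip(ions, np.round(c_est, 3))))
print("charge-balance residual:", round(float(z @ c_est), 4))
```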

  19. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Hua, H.

    2012-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
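
    A toy sketch of the map/reduce pattern over bundles of named numeric arrays described above; this is not SciReduce's actual API, only an in-memory illustration of the dataflow.

```python
# Toy sketch of the map/reduce pattern over "bundles of named numeric arrays"
# described above. Not SciReduce's actual API, just an illustration of the
# dataflow with in-memory Python and NumPy.
import numpy as np
from functools import reduce

def mapper(granule):
    """Map step: extract a per-granule partial sum and count for one variable."""
    temps = granule["temperature"]                 # named array inside the bundle
    return {"sum": np.nansum(temps), "count": np.sum(~np.isnan(temps))}

def reducer(a, b):
    """Reduce step: merge two partial results."""
    return {"sum": a["sum"] + b["sum"], "count": a["count"] + b["count"]}

granules = [
    {"temperature": np.random.uniform(200, 300, size=(90, 180))} for _ in range(10)
]
partials = map(mapper, granules)
total = reduce(reducer, partials)
print("mean temperature:", total["sum"] / total["count"])
```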

  20. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci

  1. BreedVision — A Multi-Sensor Platform for Non-Destructive Field-Based Phenotyping in Plant Breeding

    PubMed Central

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C.; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-01-01

    To achieve the food and energy security of an increasing World population likely to exceed nine billion by 2050 represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors like light curtain imaging, 3D Time-of-Flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information of the plants. The study specifies: the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example for a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012 employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all different types of sensors which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits like plant moisture content, lodging, tiller density or biomass yield, and thus, represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies. PMID:23447014

  2. BreedVision--a multi-sensor platform for non-destructive field-based phenotyping in plant breeding.

    PubMed

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-01-01

    To achieve the food and energy security of an increasing World population likely to exceed nine billion by 2050 represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors like light curtain imaging, 3D Time-of-Flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information of the plants. The study specifies: the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example for a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012 employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all different types of sensors which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits like plant moisture content, lodging, tiller density or biomass yield, and thus, represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies. PMID:23447014

  3. Formation of an array of ordered nanocathodes based on carbon nanotubes by nanoimprint lithography and PECVD processes

    SciTech Connect

    Gromov, D. G.; Shulyat’ev, A. S.; Egorkin, V. I.; Zaitsev, A. A.; Skorik, S. N.; Galperin, V. A.; Pavlov, A. A.; Shamanaev, A. A.

    2014-12-15

    Technology for the production of an array of ordered nanoemitters based on carbon nanotubes is developed. The technological parameters of the fabrication of carbon nanotubes are chosen. It is shown that the structures produced exhibit field electron emission with an emission current of 8 μA and a threshold voltage of 80 V.

  4. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    SciTech Connect

    Baker, J.E.

    1993-05-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on comparing the relative information content of the original images with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large-neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel- rather than global-level features.

  5. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    SciTech Connect

    Baker, J.E.

    1993-05-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on comparing the relative information content of the original images with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large-neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel- rather than global-level features.

  6. Electronic tongue based on an array of metallic potentiometric sensors.

    PubMed

    Lvova, Larisa; Martinelli, Eugenio; Mazzone, Emiliano; Pede, Andrea; Paolesse, Roberto; Di Natale, Corrado; D'Amico, Arnaldo

    2006-11-15

    An electronic tongue system based on an array of six metallic potentiometric sensors (metallic wires) was developed and used for the discrimination of foodstuffs: several types of vinegar and fruit juices. Copper, tin, iron, aluminum, brass, and stainless steel wires were included in the array and supplemented by a pH glass electrode. The response of the potentiometric metallic sensors towards various organic acids was studied and possible sensitivity mechanisms are discussed. The overall potential changes of the metallic sensors were examined as complex mixed signals influenced by the several components present in the analyte, employing a chemometric approach. A multisensor array of this type can be useful for several applications because of its simplicity of handling, the low cost of the sensors, and the easy measurement procedure. PMID:18970847

  7. Automated calibration methods for robotic multisensor landmine detection

    NASA Astrophysics Data System (ADS)

    Keranen, Joe G.; Miller, Jonathan; Schultz, Gregory; Topolosky, Zeke

    2007-04-01

    Both force protection and humanitarian demining missions require efficient and reliable detection and discrimination of buried anti-tank and anti-personnel landmines. Widely varying surface and subsurface conditions, mine types and placement, as well as environmental regimes challenge the robustness of the automatic target recognition process. In this paper we present applications created for the U.S. Army Nemesis detection platform. Nemesis is an unmanned rubber-tracked vehicle-based system designed to eradicate a wide variety of anti-tank and anti-personnel landmines for humanitarian demining missions. The detection system integrates advanced ground penetrating synthetic aperture radar (GPSAR) and electromagnetic induction (EMI) arrays, highly accurate global and local positioning, and on-board target detection/classification software on the front loader of a semi-autonomous UGV. An automated procedure is developed to estimate the soil's dielectric constant using surface reflections from the ground penetrating radar. The results have implications not only for calibration of system data acquisition parameters, but also for user awareness and tuning of automatic target recognition detection and discrimination algorithms.
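
    The paper does not restate its estimation formula here, but a standard surface-reflection method can be sketched under a normal-incidence assumption: the relative permittivity follows from the ratio of the ground-surface echo amplitude to that of a perfect reflector such as a metal plate.

        # Hedged sketch of the standard surface-reflection estimate (assumed,
        # not necessarily the Nemesis calibration procedure).
        def dielectric_from_reflection(a_surface, a_metal_plate):
            r = abs(a_surface / a_metal_plate)   # |reflection coefficient|
            return ((1.0 + r) / (1.0 - r)) ** 2  # relative permittivity

        # Example: a surface echo 45% as strong as the metal-plate reference.
        print(dielectric_from_reflection(0.45, 1.0))  # ~6.9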

  8. Multi-Sensor Analysis of Overshooting Tops in Tornadic Storms

    NASA Astrophysics Data System (ADS)

    Magee, N. B.; Goldberg, R.; Hartline, M.

    2012-12-01

    The disastrous 2011 tornado season focused much attention on the ~75% false alarm rate for NWS-issued tornado warnings. Warnings are correctly issued for ~80% of verified tornadoes, but the false alarm rate has plateaued at near 75%. Any additional clues that may signal tornadogenesis would be of great benefit to the public welfare. We have performed statistical analyses of the structure and time-evolution of convective overshooting tops for tornadic storms occurring in the continental United States since 2006. An amalgam of case studies and theory has long suggested that overshooting tops may often collapse just prior to the onset of tornado touchdown. Our new results suggest that this view is supported by a broad set of new statistical evidence. Our approach to the analysis makes use of a high resolution, multi-sensor data set, and seeks to gather statistics on a large set of storms. Records of WSR-88D NEXRAD Enhanced-Resolution Echo Tops (a product available since 2009) have been analyzed for the hour prior to and following touchdown of all EF1 and stronger storms. In addition, a coincidence search has been performed for the NASA A-Train satellite suite and tornadic events since 2006. Although the paths of the polar-orbiting satellites do not aid in analyses of temporal storm-top evolution, Aqua-MODIS, CALIPSO, and CloudSat have provided a detailed structural picture of overshooting tops in tornadic and non-tornadic supercell thunderstorms. A 250 m resolution Aqua-MODIS image at 1950Z on 4/27/2011, color-enhanced to emphasize overshooting tops during the tornado outbreak, illustrates this structure.

  9. Economical custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.; Smith, A.; Ramondetta, P.; Noto, R.; Lombardi, T.

    1976-01-01

    Automatic design technique uses standard circuit cells for producing large-scale integrated arrays. Computerized fabrication process provides individual cells of high density and efficiency, quick turnaround time, low cost, and ease of corrections for changes and errors.

  10. Detection of multiple airborne targets from multisensor data

    NASA Astrophysics Data System (ADS)

    Foltz, Mark A.; Srivastava, Anuj; Miller, Michael I.; Grenander, Ulf

    1995-08-01

    Previously we presented a jump-diffusion based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It comprises both the infinitesimal, local search via a sample-path-continuous diffusion transform and the larger, global steps through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.
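
    As a rough illustration of the FFT step mentioned above, the sketch below evaluates a correlation-type detection statistic over a discretized lattice by circular cross-correlation; the paper's actual array and sensor models are not reproduced, and the template here is a placeholder.

        # Correlation statistic over all lattice shifts via FFTs (circular
        # correlation for simplicity; template and data are synthetic).
        import numpy as np

        def correlation_statistic(observation, template):
            f_obs = np.fft.fft2(observation)
            f_tpl = np.fft.fft2(template, s=observation.shape)
            return np.real(np.fft.ifft2(f_obs * np.conj(f_tpl)))

        obs = np.random.default_rng(5).normal(size=(64, 64))
        tpl = np.ones((4, 4)) / 16.0
        stat = correlation_statistic(obs, tpl)
        peak = np.unravel_index(np.argmax(stat), stat.shape)  # candidate cell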

  11. Multi-sensor approach to retrieving water cloud physical properties and drizzle fraction

    NASA Astrophysics Data System (ADS)

    Prianto Rusli, Stephanie; Donovan, David; Russchenberg, Herman

    2015-04-01

    Accurately representing clouds and their interaction with the surrounding matter and radiation is one of the most important factors in climate modeling. In particular, feedback processes involving low-level water clouds play a significant role in determining the net effect of cloud climate forcing. An accurate description of cloud physical properties is therefore necessary to quantify these processes and their implications. To this end, measurements combined from a variety of remote sensing instruments at different wavelengths provide crucial information about the clouds. To exploit this, building upon previous work in this field, we have developed a ground-based multi-sensor retrieval algorithm within an optimal estimation framework. The inverse problem of 'translating' the radar, lidar, and microwave radiometer measurements into retrieval products is formulated in a physically consistent manner, without relying on approximate empirical proxies (such as explicit liquid water content vs. radar reflectivity factor relationships). We apply the algorithm to synthetic signals based on the output of large eddy simulation model runs and present here the preliminary results. Given temperature and humidity profiles, information from the measurements, and a priori constraints, we derive the liquid water content profile. Assuming a monomodal gamma droplet size distribution, the number concentration, effective size of the cloud droplets, and the extinction coefficient are computed. The retrieved profiles provide a good fit to the true ones. The algorithm is being improved to take into account the presence of drizzle, an important aspect that affects cloud lifetime. Quantifying the amount of drizzle would enable the proper use of the radar reflectivity. Further development to allow retrieval of temperature and humidity profiles as well is anticipated.
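
    The paper's radar, lidar, and radiometer forward models are not reproduced here, but the optimal-estimation machinery it refers to can be sketched as a Rodgers-style Gauss-Newton step on a toy linear forward model; the matrices and values below are assumptions for illustration only.

        # Minimal optimal-estimation (Gauss-Newton) step on a toy problem.
        import numpy as np

        def oe_step(x, x_a, y, forward, jacobian, S_a, S_e):
            K = jacobian(x)
            A = np.linalg.inv(S_a) + K.T @ np.linalg.inv(S_e) @ K
            b = K.T @ np.linalg.inv(S_e) @ (y - forward(x) + K @ (x - x_a))
            return x_a + np.linalg.solve(A, b)

        # Toy example: a 3-element LWC-like profile seen through a linear model.
        H = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 0.5], [0.2, 0.0, 1.0]])
        forward, jacobian = (lambda x: H @ x), (lambda x: H)
        y = forward(np.array([0.1, 0.3, 0.2]))          # synthetic "measurement"
        x_a = np.zeros(3)                               # a priori state
        x_hat = oe_step(x_a, x_a, y, forward, jacobian,
                        np.eye(3), 0.01**2 * np.eye(3))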

  12. Adaptive Multi-sensor Data Fusion Model for In-situ Exploration of Mars

    NASA Astrophysics Data System (ADS)

    Schneiderman, T.; Sobron, P.

    2014-12-01

    Laser Raman spectroscopy (LRS) and laser-induced breakdown spectroscopy (LIBS) can be used synergistically to characterize the geochemistry and mineralogy of potential microbial habitats and biosignatures. The value of LRS and LIBS has been recognized by the planetary science community: (i) NASA's Mars2020 mission features a combined LRS-LIBS instrument, SuperCam, and an LRS instrument, SHERLOC; (ii) an LRS instrument, RLS, will fly on ESA's 2018 ExoMars mission. The advantages of combining LRS and LIBS are evident: (1) LRS/LIBS can share hardware components; (2) LIBS reveals the relative concentration of major (and often trace) elements present in a sample; and (3) LRS yields information on the individual mineral species and their chemical/structural nature. Combining data from LRS and LIBS enables definitive mineral phase identification with precise chemical characterization of major, minor, and trace mineral species. New approaches to data processing are needed to analyze large amounts of LRS+LIBS data efficiently and maximize the scientific return of integrated measurements. Multi-sensor data fusion (MSDF) is a method that allows for robust sample identification through automated acquisition, processing, and combination of data. It optimizes information usage, yielding a more robust characterization of a target than could be acquired through single sensor use. We have developed a prototype fuzzy logic adaptive MSDF model aimed towards the unsupervised characterization of Martian habitats and their biosignatures using LRS and LIBS datasets. Our model also incorporates fusion of microimaging (MI) data - critical for placing analyses in geological and spatial context. Here, we discuss the performance of our novel MSDF model and demonstrate that automated quantification of the salt abundance in sulfate/clay/phyllosilicate mixtures is possible through data fusion of collocated LRS, LIBS, and MI data.
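
    The authors' fuzzy-logic model is not reproduced here; the toy sketch below only illustrates the general shape of fuzzy score fusion across collocated LRS, LIBS, and MI evidence, with membership functions and weights that are entirely hypothetical.

        # Toy fuzzy-score fusion (hypothetical memberships and weights).
        import numpy as np

        def triangular(x, a, b, c):
            """Triangular membership function peaking at b."""
            return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

        def fuse(memberships, weights):
            w = np.asarray(weights, dtype=float)
            return float(np.dot(memberships, w) / w.sum())

        # Hypothetical evidence for one candidate phase from three sensors.
        m_raman = triangular(1008.0, 1000.0, 1008.0, 1016.0)  # band position (cm^-1)
        m_libs = triangular(1.1, 0.8, 1.0, 1.4)               # element ratio
        m_image = 0.7                                          # texture score
        print(fuse([m_raman, m_libs, m_image], weights=[0.5, 0.3, 0.2]))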

  13. A high speed networked signal processing platform for multi-element radio telescopes

    NASA Astrophysics Data System (ADS)

    Prasad, Peeyush; Subrahmanya, C. R.

    2011-08-01

    A new architecture is presented for a Networked Signal Processing System (NSPS) suitable for handling the real-time signal processing of multi-element radio telescopes. In this system, a multi-element radio telescope is viewed as an application of a multi-sensor, data fusion problem which can be decomposed into a general set of computing and network components for which a practical and scalable architecture is enabled by current technology. The need for such a system arose in the context of an ongoing program for reconfiguring the Ooty Radio Telescope (ORT) as a programmable 264-element array, which will enable several new observing capabilities for large scale surveys on this mature telescope. For this application, it is necessary to manage, route and combine large volumes of data whose real-time collation requires large I/O bandwidths to be sustained. Since these are general requirements of many multi-sensor fusion applications, we first describe the basic architecture of the NSPS in terms of a Fusion Tree before elaborating on its application for the ORT. The paper addresses issues relating to high speed distributed data acquisition, Field Programmable Gate Array (FPGA) based peer-to-peer networks supporting significant on-the-fly processing while routing, and providing a last-mile interface to a typical commodity network like Gigabit Ethernet. The system is fundamentally a pair of co-operative networks, one of which is part of a commodity high performance computer cluster while the other is based on Commercial Off-The-Shelf (COTS) technology with support from software/firmware components in the public domain.

  14. Enabling more capability within smaller pixels: advanced wafer-level process technologies for integration of focal plane arrays with readout electronics

    NASA Astrophysics Data System (ADS)

    Temple, Dorota S.; Vick, Erik P.; Lueck, Matthew R.; Malta, Dean; Skokan, Mark R.; Masterjohn, Christopher M.; Muzilla, Mark S.

    2014-05-01

    Over the past decade, the development of infrared focal plane arrays (FPAs) has seen two trends: decreasing of the pixel size and increasing of signal-processing capability at the device level. Enabling more capability within smaller pixels can be achieved through the use of advanced wafer-level processes for the integration of FPAs with silicon (Si) readout integrated circuits (ROICs). In this paper, we review the development of these wafer-level integration technologies, highlighting approaches in which the infrared sensor is integrated with three-dimensional ROIC stacks composed of multiple layers of Si circuitry interconnected using metal-filled through-silicon vias.

  15. Resolution and signal-to-noise ratio improvement in confocal fluorescence microscopy using array detection and maximum-likelihood processing

    NASA Astrophysics Data System (ADS)

    Kakade, Rohan; Walker, John G.; Phillips, Andrew J.

    2016-08-01

    Confocal fluorescence microscopy (CFM) is widely used in biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. Using this recorded data an attempt is then made to recover the object from the whole set of recorded photon array data; in this paper maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.
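
    The estimator used in the paper is not restated here; as one concrete example of maximum-likelihood processing of photon-limited data, the sketch below runs a 1-D Richardson-Lucy (Poisson EM) deconvolution with a synthetic point-spread function.

        # Richardson-Lucy iteration as an illustrative ML-EM estimator.
        import numpy as np

        def ml_em_deconvolve(data, psf, n_iter=50):
            psf = psf / psf.sum()
            est = np.full(data.shape, data.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = np.convolve(est, psf, mode="same")
                ratio = data / np.maximum(blurred, 1e-12)
                est *= np.convolve(ratio, psf[::-1], mode="same")
            return est

        counts = np.random.default_rng(6).poisson(5.0, size=128).astype(float)
        psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
        recovered = ml_em_deconvolve(counts, psf)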

  16. Mapping coupled fluxes of carbon and water through multi-sensor data fusion

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In an effort to improve water resource management, drought monitoring, and agriculture assessment capabilities, a multi-sensor and multi-scale framework for assessing land-surface fluxes of energy and water at field to regional scales has been established. The framework employs the ALEXI (Atmosphere...

  17. COMPARISON OF PLOT SCALE AVERAGE GRAVIMETRIC SOIL WATER CONTENTS WITH DATA FROM CALIBRATED MULTISENSOR CAPACITANCE PROBES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Multisensor capacitance probes (MCPs) provide unparalleled spatial and temporal resolution to soil water content measurements. They are utilized in many applications where soil water availability needs monitoring. The objective of this work was to assess errors in plot scale soil volumetric water co...

  18. Multi-sensor Data Fusion for Improved Prediction of Apple Fruit Firmness and Soluble Solids Content

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Several nondestructive technologies have been developed for assessing the firmness and soluble solids content (SSC) of apples. Each of these technologies has its merits and limitations in predicting these quality parameters. With the concept of multi-sensor data fusion, different sensors would work ...

  19. How Well Do Data from Multisensor Capacitance Probes Represent Plot-Scale-Average Soil Water Contents?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Multisensor capacitance probes have shown great promises in irrigation scheduling, evaluating water needs of plants, estimating soil hydraulic properties, estimating groundwater recharge and infiltration losses, and other soil water-related fields. It is often beneficial to know how representative a...

  20. Laboratory evaluation of dual-frequency multisensor capacitance probes to monitor soil water and salinity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Real-time information on salinity levels and transport of fertilizers are generally missing from soil profile knowledge bases. A dual-frequency multisensor capacitance probe (MCP) is now commercially available for sandy soils that simultaneously monitor volumetric soil water content (VWC, ') and sa...

  1. Newtonian Imperialist Competitive Approach to Optimizing Observation of Multiple Target Points in Multisensor Surveillance Systems

    NASA Astrophysics Data System (ADS)

    Afghan-Toloee, A.; Heidari, A. A.; Joibari, Y.

    2013-09-01

    The problem of specifying the minimum number of sensors to deploy in a certain area to observe multiple targets has been widely studied in the literature. In this paper, we address the multi-sensor deployment problem (MDP). The multi-sensor placement problem can be stated as minimizing the cost required to cover the multiple target points in the area. We propose a more feasible method for the multi-sensor placement problem. Our method provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved ICA (Imperialist Competitive Algorithm), is used to decrease the time needed to find an adequate solution compared with other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to specify the multiple target and placement points, providing x, y, and z computations in the observation algorithm. A model structure for the multi-sensor placement problem is proposed: the problem is constructed as an optimization problem with the objective of minimizing the cost while covering all multiple target points at a given probability of observation tolerance.
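
    The NICA optimizer itself is not reproduced here; the sketch below only expresses the cover-all-targets-at-minimum-cost objective as a simple greedy heuristic over candidate grid positions, with an assumed fixed sensing range.

        # Greedy set-cover heuristic for the placement objective (illustrative).
        import numpy as np

        def greedy_placement(candidates, targets, sensing_range):
            targets = np.asarray(targets, dtype=float)
            covered = np.zeros(len(targets), dtype=bool)
            chosen = []
            while not covered.all():
                best, best_gain = None, -1
                for c in candidates:
                    in_range = np.linalg.norm(targets - np.asarray(c), axis=1) <= sensing_range
                    gain = int(np.sum(~covered & in_range))
                    if gain > best_gain:
                        best, best_gain = c, gain
                if best_gain <= 0:
                    raise ValueError("some targets cannot be covered")
                chosen.append(best)
                covered |= np.linalg.norm(targets - np.asarray(best), axis=1) <= sensing_range
            return chosen

        targets = [(1.0, 1.0, 0.0), (4.0, 4.0, 0.0)]
        candidates = [(0.0, 0.0, 0.0), (2.0, 2.0, 0.0), (4.0, 4.0, 0.0)]
        print(greedy_placement(candidates, targets, sensing_range=2.0))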

  2. Flexible 3D reconstruction method based on phase-matching in multi-sensor system.

    PubMed

    Wu, Qingyang; Zhang, Baichun; Huang, Jinhui; Wu, Zejun; Zeng, Zeng

    2016-04-01

    Considering the measuring range limitation of a single sensor system, multi-sensor systems have become essential for obtaining complete image information of an object in the field of 3D image reconstruction. However, because traditional multi-sensor systems operate their sensors independently, each sensor system has to be calibrated separately, and the calibration between all the single sensor systems is complicated and time consuming. In this paper, we present a flexible 3D reconstruction method based on phase-matching in a multi-sensor system. While calibrating each sensor, the method simultaneously registers the data of the multi-sensor system in a unified coordinate system. After all sensors are calibrated, the whole 3D image data directly exist in the unified coordinate system, and there is no need to calibrate the positions between sensors any more. Experimental results prove that the method is simple in operation, accurate in measurement, and fast in 3D image reconstruction. PMID:27137020

  3. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    PubMed Central

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  4. Integration of fiber-optic sensor arrays into a multi-modal tactile sensor processing system for robotic end-effectors.

    PubMed

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  5. Thickness-controlled synthesis of vertically aligned c-axis oriented ZnO nanorod arrays: Effect of growth time via novel dual sonication sol-gel process

    NASA Astrophysics Data System (ADS)

    Firdaus Malek, Mohd; Hafiz Mamat, Mohamad; Soga, Tetsuo; Rahman, Saadah Abdul; Abu Bakar, Suriani; Syakirin Ismail, Ahmad; Mohamed, Ruziana; Alrokayan, Salman A. H.; Khan, Haseeb A.; Rusop Mahmood, Mohamad

    2016-01-01

    Zinc-oxide (ZnO) nanorod arrays were successfully prepared by using a dual sonication sol-gel process. Field emission scanning electron microscopy revealed that the nanorods exhibited a hexagonal structure with a flat-end facet. The nanorods displayed similar surface morphologies and grew uniformly on the seed layer substrate, with the average diameter slightly increasing to the range of 65 to 80 nm after being immersed for varying growth times. Interestingly, thickness measurements indicated that the thicknesses of the samples increased as the growth time was extended. In addition, the X-ray diffraction spectra indicated that the prepared ZnO nanorods with a hexagonal wurtzite structure grew preferentially along the c-axis. Therefore, we can conclude that the diameter, length, and orientation of the ZnO nanorod arrays along the c-axis are controllable by adjusting the growth time, motivating us to further explore the growth mechanisms of ZnO nanorods.

  6. Process of in situ forming well-aligned zinc oxide nanorod arrays on wood substrate using a two-step bottom-up method.

    PubMed

    Liu, Yongzhuang; Fu, Yanchun; Yu, Haipeng; Liu, Yixing

    2013-10-01

    A good nanocrystal covering layer on wood can serve as a protective coating and present some new surface properties. In this study, well-aligned ZnO nanorod (NR) arrays were successfully grown on a wood surface through a two-step bottom-up growth process. The process involved pre-sowing seeds and subsequently growing them into NRs in a hydrothermal environment. The interface incorporation between the wood and the ZnO colloid particles in the precursor solution during the seeding process was analyzed and demonstrated through a schematic. The growth process of forming well-aligned ZnO NRs was analyzed by field-emission scanning electron microscopy and X-ray diffraction, which showed that the NRs elongated with increased reaction time. The effects of ZnO crystal form and capping agent on the growth process were studied from different viewpoints. PMID:23880522

  7. High density pixel array

    NASA Technical Reports Server (NTRS)

    Wiener-Avnear, Eliezer (Inventor); McFall, James Earl (Inventor)

    2004-01-01

    A pixel array device is fabricated by a laser micro-milling method under strict process control conditions. The device has an array of pixels bonded together with an adhesive filling the grooves between adjacent pixels. The array is fabricated by moving a substrate relative to a laser beam of predetermined intensity at a controlled, constant velocity along a predetermined path defining a set of grooves between adjacent pixels so that a predetermined laser flux per unit area is applied to the material, and repeating the movement for a plurality of passes of the laser beam until the grooves are ablated to a desired depth. The substrate is of an ultrasonic transducer material in one example for fabrication of a 2D ultrasonic phased array transducer. A substrate of phosphor material is used to fabricate an X-ray focal plane array detector.

  8. Diode Laser Arrays

    NASA Astrophysics Data System (ADS)

    Botez, Dan; Scifres, Don R.

    1994-08-01

    This book provides a comprehensive overview of the fundamental principles and applications of semiconductor diode laser arrays. All of the major types of arrays are discussed in detail, including coherent, incoherent, edge- and surface-emitting, horizontal- and vertical-cavity, individually addressed, lattice-matched and strained-layer systems. The initial chapters cover such topics as lasers, amplifiers, external-cavity control, theoretical modeling, and operational dynamics. Spatially incoherent arrays are then described in detail, and the uses of vertical-cavity surface emitter and edge-emitting arrays in parallel optical-signal processing and multi-channel optical recording are discussed. Researchers and graduate students in solid state physics and electrical engineering studying the properties and applications of such arrays will find this book invaluable.

  9. Irma 5.1 multisensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Yamaoka, Neil; Kim, Charles

    2005-05-01

    The Irma synthetic signature prediction code is being developed to facilitate the research and development of multisensor systems. Irma was one of the first high resolution Infrared (IR) target and background signature models to be developed for tactical weapon application. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel and the Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Since 2000, additional capabilities and enhancements have been added to the ladar channel including polarization and speckle effects. Work is still ongoing to add a time-jittering model to the ladar channel. A new user interface has been introduced to aid users in the mechanism of scene generation and running the Irma code. The user interface provides a canvas where a user can add and remove objects using mouse clicks to construct a scene. The scene can then be visualized to find the desired sensor position. The synthetic ladar

  10. Autonomous Multi-Sensor Coordination: The Science Goal Monitor

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Grosvenor, Sandy; Jung, John; Hess, Melissa; Jones, Jeremy

    2004-01-01

    Many dramatic earth phenomena are dynamic and coupled. In order to fully understand them, we need to obtain timely coordinated multi-sensor observations from widely dispersed instruments. Such a dynamic observing system must include the ability to: schedule flexibly and react autonomously to science user driven events; understand the higher-level goals of a science user defined campaign; and coordinate various space-based and ground-based resources/sensors effectively and efficiently to achieve those goals. In order to capture transient events, such a 'sensor web' system must have an automated reactive capability built into its scientific operations. To do this, we must overcome a number of challenges inherent in infusing autonomy. The Science Goal Monitor (SGM) is a prototype software tool being developed to explore the nature of automation necessary to enable dynamic observing. The tools being developed in SGM improve our ability to autonomously monitor multiple independent sensors and coordinate reactions to better observe dynamic phenomena. The SGM system enables users to specify what to look for and how to react in descriptive rather than technical terms. The system monitors streams of data to identify occurrences of the key events previously specified by the scientist/user. When an event occurs, the system autonomously coordinates the execution of the users' desired reactions between different sensors. The information can be used to rapidly respond to a variety of fast temporal events. Investigators will no longer have to rely on after-the-fact data analysis to determine what happened. Our paper describes a series of prototype demonstrations that we have developed using SGM and NASA's Earth Observing-1 (EO-1) satellite and the Earth Observing System's Aqua/Terra spacecrafts' MODIS instruments. Our demonstrations show the promise of coordinating data from different sources, analyzing the data for a relevant event, autonomously updating and rapidly obtaining a follow-on relevant image
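
    SGM's actual rule language and interfaces are not described in enough detail here to reproduce; the sketch below is only a plain-Python rendering of the "specify events and reactions, then monitor streams" idea, with a hypothetical observation format.

        # Hypothetical event-monitoring loop (not the SGM API).
        def monitor(stream, rules):
            """stream yields observation dicts; each rule is (predicate, reaction)."""
            for obs in stream:
                for predicate, reaction in rules:
                    if predicate(obs):
                        reaction(obs)

        # Example rule: many hotspots in a scene triggers a follow-up request.
        rules = [(lambda o: o.get("hotspots", 0) > 10,
                  lambda o: print("request follow-up image at", o["location"]))]
        monitor(iter([{"hotspots": 12, "location": (34.1, -118.2)}]), rules)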

  11. Optical study and ruthenizer (II) N3 dye-sensitized solar cell application of ZnO nanorod-arrays synthesized by a combined two-step process

    NASA Astrophysics Data System (ADS)

    Parra, Mohammad Ramzan; Haque, Fozia Z.

    2015-10-01

    Highly dense ZnO nanorod-arrays with uniform c-axis growth were successfully synthesized by using a combined two-step process: sol-gel spin coating followed by the aqueous solution growth method. Structural and optical properties of the ZnO nanorod-arrays were investigated. The X-ray diffraction results revealed that the ZnO nanorod arrays exhibit a wurtzite hexagonal crystal structure with a dominant (002) peak and high crystallinity. Nanorods of 3-4 μm length and 500 nm diameter, with surface roughness of ~20 nm, were observed. Furthermore, Raman spectroscopic results revealed the presence of the E2 peak at ~438 cm-1, which again corroborated the wurtzite crystal structure assigned to ZnO. The optical transmittance spectrum indicated a transmittance of more than 80% in the visible and infrared (IR) regions, with an optical band-gap energy of ~3.35 eV. The photoluminescence spectrum showed peaks in the ultra-violet (382.0 nm) and green (524.9 nm) regions, which indicated good-quality crystallite formation containing a high density of surface defects, zinc interstitials, and oxygen vacancies. A Ruthenizer (II) N3 dye-loaded sensitized solar cell test illustrated that the uniform ZnO nanorod-arrays used as the working electrode, with a short-circuit current density of 3.99 mA/cm2, a fill factor of ~50%, and an overall power conversion efficiency (η) of ~1.36%, might be a promising electrode material for dye-sensitized solar cell applications.

  12. Low-cost Solar Array Project. Feasibility of the Silane Process for Producing Semiconductor-grade Silicon

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The feasibility of Union Carbide's silane process for commercial application was established. An integrated process design for an experimental process system development unit and a commercial facility were developed. The corresponding commercial plant economic performance was then estimated.

  13. Silicon Nanowire Array Solar Cell Prepared by Metal-Induced Electroless Etching with a Novel Processing Technology

    NASA Astrophysics Data System (ADS)

    Um, Han-Don; Jung, Jin-Young; Seo, Hong-Seok; Park, Kwang-Tae; Jee, Sang-Won; Moiz, S. A.; Lee, Jung-Ho

    2010-04-01

    We inexpensively fabricated vertically aligned Si nanowire solar cells using metal-induced electroless etching and a novel doping technique. Co-doping of boron and phosphorus was achieved using a spin-on-doping method for the simultaneous formation of a front-side emitter and a back surface field in a one-step thermal cycle. Nickel electroless deposition was also performed in order to form a continuous metal grid electrode on top of an array of vertically aligned Si nanowires. A highly dense array of Si nanowires with low reflectivity was obtained using Ag nanoparticles of optimal size (60-90 nm). We also obtained an open circuit voltage of 544 mV, a short circuit current of 14.68 mA/cm2, and a cell conversion efficiency of 5.25% under AM 1.5 illumination. The improved photovoltaic performance was believed to be the result of the excellent optical absorption of the Si nanowires and the improved electrical properties of the electroless deposited electrode.

  14. New field programmable gate array-based image-oriented acquisition and real-time processing applied to plasma facing component thermal monitoring

    SciTech Connect

    Martin, V.; Dunand, G.; Moncada, V.; Jouve, M.; Travere, J.-M.

    2010-10-15

    During operation of present fusion devices, the plasma facing components (PFCs) are exposed to high heat fluxes. Understanding and preventing overheating of these components during long pulse discharges is a crucial safety issue for future devices like ITER. Infrared digital cameras interfaced with complex optical systems have become a routine diagnostic to measure surface temperatures in many magnetic fusion devices. Due to the complexity of the observed scenes and the large amount of data produced, the use of high computational performance hardware for real-time image processing is then mandatory to avoid PFC damages. At Tore Supra, we have recently made a major upgrade of our real-time infrared image acquisition and processing board by the use of a new field programmable gate array (FPGA) optimized for image processing. This paper describes the new possibilities offered by this board in terms of image calibration and image interpretation (abnormal thermal events detection) compared to the previous system.
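
    The Tore Supra FPGA pipeline is firmware rather than software; the snippet below is only a software illustration of one plausible abnormal-thermal-event test, flagging pixels that stay above a component temperature threshold for several consecutive frames (threshold and persistence values are made up).

        # Illustrative hot-spot persistence test on a stack of IR frames.
        import numpy as np

        def detect_overheating(frames, threshold_k, persistence=3):
            """frames: (n_frames, h, w) apparent surface temperatures in kelvin."""
            streak = np.zeros(frames.shape[1:], dtype=int)
            alarms = np.zeros(frames.shape[1:], dtype=bool)
            for frame in frames:
                streak = np.where(frame > threshold_k, streak + 1, 0)
                alarms |= streak >= persistence
            return alarms

        frames = np.full((5, 4, 4), 300.0)
        frames[2:, 1, 1] = 1500.0                      # a persistent hot pixel
        print(detect_overheating(frames, threshold_k=1200.0)[1, 1])  # True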

  15. Process development for automated solar cell and module production. Task 4. Automated array assembly. Quarterly report No. 1

    SciTech Connect

    Hagerty, J. J.

    1980-10-15

    Work has been divided into five phases. The first phase is to modify existing hardware and controlling computer software to: (1) improve cell-to-cell placement accuracy, (2) improve the solder joint while reducing the amount of solder and flux smear on the cell's surface, and (3) reduce the system cycle time to 10 seconds. The second phase involves expanding the existing system's capabilities to be able to reject broken cells and make post-solder electrical tests. Phase 3 involves developing new hardware to allow for the automated encapsulation of solar modules. This involves three discrete pieces of hardware: (1) a vacuum platen end effector for the robot which allows it to pick up the 1' x 4' array of 35 inter-connected cells. With this, it can also pick up the cover glass and completed module, (2) a lamination preparation station which cuts the various encapsulation components from roll storage and positions them for encapsulation, and (3) an automated encapsulation chamber which interfaces with the above two and applies the heat and vacuum to cure the encapsulants. Phase 4 involves the final assembly of the encapsulated array into a framed, edge-sealed module completed for installation. For this we are using MBA's Glass Reinforced Concrete (GRC) in panels such as those developed by MBA for JPL under contract No. 955281. The GRC panel plays the multiple role of edge frame, substrate and mounting structure. An automated method of applying the edge seal will also be developed. The final phase (5) is the fabrication of six 1' x 4' electrically active solar modules using the above developed equipment. Progress is reported. (WHK)

  16. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1979-01-01

    The energy consumed in manufacturing silicon solar cell modules was calculated for the current process, as well as for the 1982 and 1986 projected processes. In addition, energy payback times for the above three sequences are shown. The module manufacturing energy was partitioned in two ways. In one, the silicon reduction, silicon purification, sheet formation, cell fabrication, and encapsulation energies were found. In the other, the facility, equipment, processing material, and direct material lost-in-process energies were apportioned for the junction formation processes and the full module manufacturing sequences. A brief methodology accounting for the energy of silicon wafers lost in processing during cell manufacturing is described.

  17. Interagency arraying

    NASA Astrophysics Data System (ADS)

    Cox, Henry G.

    Activities performed to match ground aperture requirements for the Neptune encounter in August 1989 with the expected capabilities of the JPL Deep Space Network (DSN) are discussed. Ground aperture requirements, DSN capabilities, and the capabilities of other agencies are reviewed. The design and configurations of the receiver subsystem, combiner subsystem, monitor and control subsystem, recording subsystem, and supporting subsystems are described. The implementation of the Very Large Array-Goldstone Telemetry Array is discussed, and the differences involved with the Parkes-Canberra Telemetry Array implementation are highlighted. The operational concept is addressed.

  18. Simulation, fabrication and characterization of a 3.3 V flash ZE2PROM array implemented in a 0.8 μm CMOS process

    NASA Astrophysics Data System (ADS)

    Ranaweera, J.; Ng, W. T.; Salama, C. A. T.

    1999-02-01

    This paper describes a Zener-based flash memory cell (ZE2PROM), programmed by hot electrons generated by a heavily doped reverse-biased p+n+ junction attached to the drain. The cell can be implemented in a NOR-type memory array. It uses an orthogonal write technique to achieve fast programming with low power dissipation and reduced drain disturbance. The charge transfer behavior of the flash ZE2PROM cell is also modeled to describe the charging and discharging of the floating gate during programming and erasing. The flash ZE2PROM arrays were implemented in a 0.8 μm lithography CMOS process flow in which the n-LDD step was replaced with a one-sided p+ boron implant with a doping level of ~10^19 cm^-3. This minor change to a standard CMOS process makes the concept highly attractive for embedded memory applications. A programming time of 850 ns at a 3.3 V supply was achieved on fabricated test devices.

  19. One-Step and Templateless Electropolymerization Process Using Thienothiophene Derivatives To Develop Arrays of Nanotubes and Tree-like Structures with High Water Adhesion.

    PubMed

    Ramos Chagas, Gabriela; Darmanin, Thierry; Guittard, Frédéric

    2016-08-31

    Here, we report for the first time the possibility to obtain not only arrays of nanotubes but also tree-like structures with high water adhesion using a one-step and templateless electropolymerization process. Using thienothiophene derivatives, particularly thieno[2,3-b]thiophene (Thienothiophene-1) and thieno[3,2-b]thiophene (Thienothiophene-2), we demonstrate this surface fabrication in organic solvent (dichloromethane) and without any surfactants. The formation of nanotubes is due to the stabilization by the polymer of gas bubbles produced in situ during electropolymerization process, and we show that the water content plays an important role in the formation of gas bubbles even if it is not the unique parameter. Using cyclic voltammetry as an electropolymerization method, the amount of released gas is more significant, but at constant potential it is much easier to control the nanotube formation. It is also possible to obtain arrays of tree-like structures when electropolymerizing with high deposition charges, and the resulting surfaces have high θw with extremely high water adhesion even if the polymers are intrinsically hydrophilic (θ(Y)w ≈ 70°). This work is extremely important for potential applications in water transportation and harvesting, oil/water separation membranes, energy systems, and biosensing. PMID:27509408

  20. CdS and CdS/CdSe sensitized ZnO nanorod array solar cells prepared by a solution ion exchange process

    SciTech Connect

    Chen, Ling; Gong, Haibo; Zheng, Xiaopeng; Zhu, Min; Zhang, Jun; Yang, Shikuan; Cao, Bingqiang

    2013-10-15

    Highlights: CdS and CdS/CdSe quantum dots are assembled on ZnO nanorods by an ion exchange process; the CdS/CdSe sensitization of ZnO effectively extends the absorption spectrum; the performance of the ZnO/CdS/CdSe cell is improved by the extended absorption spectrum. Abstract: In this paper, cadmium sulfide (CdS) and cadmium sulfide/cadmium selenide (CdS/CdSe) quantum dots (QDs) are assembled onto ZnO nanorod arrays by a solution ion exchange process for QD-sensitized solar cell application. The morphology, composition and absorption properties of the different photoanodes were characterized in detail with scanning electron microscopy, transmission electron microscopy, energy-dispersive X-ray spectroscopy and Raman spectroscopy. It is shown that conformal and uniform CdS and CdS/CdSe shells can be grown on ZnO nanorod cores. Quantum dot sensitized solar cells based on ZnO/CdS and ZnO/CdS/CdSe nanocable arrays were assembled with a gold counter electrode and a polysulfide electrolyte solution. The CdS/CdSe sensitization of ZnO can effectively extend the absorption spectrum up to 650 nm, which has a remarkable impact on the performance of the photovoltaic device. Preliminary results show a one-fourth improvement in solar cell efficiency.

  1. Irma 5.1 multisensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Edwards, Dave; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles

    2006-05-01

    The Irma synthetic signature prediction code is being developed to facilitate the research and development of multi-sensor systems. Irma was one of the first high resolution, physics-based Infrared (IR) target and background signature models to be developed for tactical weapon applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel, and a Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Irma is currently used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. In 2005, Irma version 5.1 was released to the community. In addition to upgrading the Ladar channel code to an object oriented language (C++) and providing a new graphical user interface to construct scenes, this new release significantly improves the modeling of the ladar channel and

  2. A Vision for an International Multi-Sensor Snow Observing Mission

    NASA Technical Reports Server (NTRS)

    Kim, Edward

    2015-01-01

    Discussions within the international snow remote sensing community over the past two years have led to encouraging consensus regarding the broad outlines of a dedicated snow observing mission. The primary consensus - that since no single sensor type is satisfactory across all snow types and across all confounding factors, a multi-sensor approach is required - naturally leads to questions about the exact mix of sensors, required accuracies, and so on. In short, the natural next step is to collect such multi-sensor snow observations (with detailed ground truth) to enable trade studies of various possible mission concepts. Such trade studies must assess the strengths and limitations of heritage as well as newer measurement techniques with an eye toward natural sensitivity to desired parameters such as snow depth and/or snow water equivalent (SWE) in spite of confounding factors like clouds, lack of solar illumination, forest cover, and topography, measurement accuracy, temporal and spatial coverage, technological maturity, and cost.

  3. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    PubMed Central

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensors' data at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of odor source location simulation and a real experiment are presented. PMID:27384568

  4. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensors' data at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of odor source location simulation and a real experiment are presented. PMID:27384568

  5. Airborne Multisensor Pod System, Arms control and nonproliferation technologies: Second quarter 1995

    SciTech Connect

    Alonzo, G M; Sanford, N M

    1995-01-01

    This issue focuses on the Airborne Multisensor Pod System (AMPS) which is a collaboration of many of the DOE national laboratories to provide a scientific environment to research multiple sensors and the new information that can be derived from them. The bulk of the research has been directed at nonproliferation applications, but it has also proven useful in environmental monitoring and assessment, and land/water management. The contents of this issue are: using AMPS technology to detect proliferation and monitor resources; combining multisensor data to monitor facilities and natural resources; planning a AMPS mission; SAR pod produces images day or night, rain or shine; MSI pod combines data from multiple sensors; ESI pod will analyze emissions and effluents; and accessing AMPS information on the Internet.

  6. Enthalpy arrays

    NASA Astrophysics Data System (ADS)

    Torres, Francisco E.; Kuhn, Peter; de Bruyker, Dirk; Bell, Alan G.; Wolkin, Michal V.; Peeters, Eric; Williamson, James R.; Anderson, Gregory B.; Schmitz, Gregory P.; Recht, Michael I.; Schweizer, Sandra; Scott, Lincoln G.; Ho, Jackson H.; Elrod, Scott A.; Schultz, Peter G.; Lerner, Richard A.; Bruce, Richard H.

    2004-06-01

    We report the fabrication of enthalpy arrays and their use to detect molecular interactions, including protein-ligand binding, enzymatic turnover, and mitochondrial respiration. Enthalpy arrays provide a universal assay methodology with no need for specific assay development such as fluorescent labeling or immobilization of reagents, which can adversely affect the interaction. Microscale technology enables the fabrication of 96-detector enthalpy arrays on large substrates. The reduction in scale results in large decreases in both the sample quantity and the measurement time compared with conventional microcalorimetry. We demonstrate the utility of the enthalpy arrays by showing measurements for two protein-ligand binding interactions (RNase A + cytidine 2'-monophosphate and streptavidin + biotin), phosphorylation of glucose by hexokinase, and respiration of mitochondria in the presence of 2,4-dinitrophenol uncoupler.

  7. Micro-dent arrays fabricated by a novel net mask laser shock processing on the surface of LY2 aluminum alloy

    NASA Astrophysics Data System (ADS)

    Dai, Feng-Ze; Lu, Jin-Zhong; Zhang, Yong-Kang; Luo, Kai-Yu; Zhang, Lei; Wang, Qing-Wei; Ren, Xu-Dong; Li, Pin

    2012-07-01

    A novel technology called net-mask laser shock processing (NMLSP) was introduced to fabricate micro-dent arrays on the surface of LY2 aluminum alloy. Experimental results showed that the as-fabricated micro-dents, whose diameter and depth were about 230-250 μm and 9.3 μm, respectively, were close to circular although the original shape of the net mask was square. The height of the upwarped area around each micro-dent was about 4 μm. Moreover, the interference of neighboring surface shock waves affects the topography of the micro-dents. A dynamic analysis performed with the ABAQUS/Explicit code reproduced the dynamic formation process of micro-dents fabricated by NMLSP, and the simulation results were mostly consistent with the experimental results.

  8. Biological application of micro-electro mechanical systems microelectrode array sensors for direct measurement of phosphate in the enhanced biological phosphorous removal process.

    PubMed

    Lee, Woo Hyoung; Lee, Jin-Hwan; Bishop, Paul L; Papautsky, Ian

    2009-08-01

    The determination of phosphate has been of great importance in the fields of clinical, environmental, and horticultural analysis for over three decades. New cobalt-based micro-electro mechanical systems (MEMS) microelectrode array (MEA) sensors for direct measurement of phosphate in small environmental samples, such as microbial aggregates, have been introduced and applied here for in situ measurement of phosphate within activated sludge flocs in the enhanced biological phosphorus removal process. The MEMS technologies offer the advantages of accurate fabrication methods, reduced complexity of the fabrication process, mass production, low cost, and increased reliability. Well-defined phosphate profiles across the flocs were observed under anaerobic conditions, during which phosphate was released from the flocs, using the MEMS MEA sensor. The microprofiles were compared with the microprofiles measured using conventional phosphate microelectrodes. The developed MEMS MEA sensors were useful tools for the in situ measurement of phosphate in small aggregates. PMID:19774851

  9. Array tomography: production of arrays.

    PubMed

    Micheva, Kristina D; O'Rourke, Nancy; Busse, Brad; Smith, Stephen J

    2010-11-01

    Array tomography is a volumetric microscopy method based on physical serial sectioning. Ultrathin sections of a plastic-embedded tissue are cut using an ultramicrotome, bonded in an ordered array to a glass coverslip, stained as desired, and imaged. The resulting two-dimensional image tiles can then be reconstructed computationally into three-dimensional volume images for visualization and quantitative analysis. The minimal thickness of individual sections permits high-quality rapid staining and imaging, whereas the array format allows reliable and convenient section handling, staining, and automated imaging. Also, the physical stability of the arrays permits images to be acquired and registered from repeated cycles of staining, imaging, and stain elution, as well as from imaging using multiple modalities (e.g., fluorescence and electron microscopy). Array tomography makes it possible to visualize and quantify previously inaccessible features of tissue structure and molecular architecture. However, careful preparation of the tissue is essential for successful array tomography; these steps can be time consuming and require some practice to perfect. This protocol describes the sectioning of embedded tissues and the mounting of the serial arrays. The procedures require some familiarity with the techniques used for ultramicrotome sectioning for electron microscopy. PMID:21041397

  10. Multisensor System for Isotemporal Measurements to Assess Indoor Climatic Conditions in Poultry Farms

    PubMed Central

    Bustamante, Eliseo; Guijarro, Enrique; García-Diego, Fernando-Juan; Balasch, Sebastián; Hospitaler, Antonio; Torres, Antonio G.

    2012-01-01

    The rearing of poultry for meat production (broilers) is an agricultural food industry with high relevance to the economy and development of some countries. Periodic episodes of extreme climatic conditions during the summer season can cause high mortality among birds, resulting in economic losses. In this context, ventilation systems within poultry houses play a critical role to ensure appropriate indoor climatic conditions. The objective of this study was to develop a multisensor system to evaluate the design of the ventilation system in broiler houses. A measurement system equipped with three types of sensors (air velocity, temperature and differential pressure) was designed and built. The system consisted of a laptop, a data acquisition card, a multiplexor module and a set of 24 air temperature, 24 air velocity and two differential pressure sensors. The system was able to acquire up to a maximum of 128 signals simultaneously at 5 second intervals. The multisensor system was calibrated under laboratory conditions and it was then tested in field tests. Field tests were conducted in a commercial broiler farm under four different pressure and ventilation scenarios in two sections within the building. The calibration curves obtained under laboratory conditions showed similar regression coefficients among temperature, air velocity and pressure sensors and a high goodness of fit (R2 = 0.99) with the reference. Under field test conditions, the multisensor system showed a high number of input signals from different locations with minimum internal delay in acquiring signals. The variation among air velocity sensors was not significant. The developed multisensor system was able to integrate calibrated sensors of temperature, air velocity and differential pressure and operated successfully under different conditions in a mechanically-ventilated broiler farm. This system can be used to obtain quasi-instantaneous fields of the air velocity and temperature, as well as differential

  11. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1979-01-01

    Analyses of slicing processes and junction formation processes are presented. A simple method for evaluating the relative economic merits of competing process options with respect to the cost of energy produced by the system is described. An energy consumption analysis was developed and applied to determine the energy consumption in the solar module fabrication process sequence, from the mining of SiO2 to shipping. The analysis shows that current technology practice involves inordinate energy use in the purification step and large wastage of the invested energy through losses, particularly poor conversion in slicing, as well as inadequate yields throughout. Cell process energy expenditures already show a downward trend based on increased throughput rates. Larger improvements, however, depend on the introduction of a more efficient purification process and of acceptable ribbon growing techniques.
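
    The core of the argument is that energy already invested upstream is wasted whenever a downstream step has a poor yield, so the energy charged to each surviving unit grows multiplicatively. The toy calculation below illustrates that reasoning only; all step energies and yields are hypothetical, not figures from the study.

    ```python
    # Toy cumulative-energy calculation: energy embodied in each surviving unit is
    # the (previous embodied energy + this step's energy) divided by the step yield.
    steps = [
        # (name, energy added per unit processed [kWh], step yield) -- illustrative numbers
        ("purification",    120.0, 0.90),
        ("crystal growth",   60.0, 0.95),
        ("slicing",          10.0, 0.50),   # poor conversion: large fraction lost as kerf
        ("cell process",     15.0, 0.90),
        ("module assembly",   8.0, 0.98),
    ]

    energy_per_good_unit = 0.0
    for name, energy, yld in steps:
        energy_per_good_unit = (energy_per_good_unit + energy) / yld
        print(f"{name:16s}: cumulative {energy_per_good_unit:7.1f} kWh per surviving unit")
    ```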

  12. Multi-sensor for measuring erythemally weighted irradiance in various directions simultaneously

    NASA Astrophysics Data System (ADS)

    Appelbaum, J.; Peleg, I.; Peled, A.

    2015-08-01

    Estimating the ultraviolet-B (UV-B) solar irradiance and its angular distribution is of interest to both research and commercial institutes. A static multi-sensor instrument is developed in this paper for simultaneous measurement of the sky and reflected erythemally weighted UV-B irradiance on multiple inclined surfaces. The instrument employs a previously developed simple solar irradiance model and a minimum mean square error method to estimate the various irradiance parameters. The multi-sensor instrument comprises a spherical apparatus with the UV-B sensors mounted as follows: seven sky-facing sensors to measure the hemispherical sky irradiance and six sensors facing downwards to measure the reflection from the ground. This work aims to devise and outline an elementary, low-cost multi-sensor instrument. The sensor may usefully serve research, commercial, and medical institutes to sample and measure the UV-B irradiance on horizontal as well as on inclined surfaces. The various UV-B calculations for inclined surfaces are aided by the sensor's integrated software.
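
    The abstract does not detail the irradiance model, so the sketch below assumes a hypothetical linear model in which each sensor's reading is a weighted sum of a few irradiance components (direct, sky-diffuse, ground-reflected) determined by its view geometry, and solves for the parameters in the minimum mean-square-error sense with ordinary least squares. All numbers are illustrative.

    ```python
    import numpy as np

    # Hypothetical design matrix: each row maps the irradiance parameters
    # (direct-beam, sky-diffuse, ground-reflected) to one sensor's reading.
    A = np.array([
        [0.95, 0.50, 0.00],   # zenith-facing sensor
        [0.70, 0.45, 0.05],   # tilted sky-facing sensors
        [0.60, 0.40, 0.10],
        [0.40, 0.35, 0.15],
        [0.00, 0.05, 0.80],   # ground-facing sensors
        [0.00, 0.10, 0.70],
    ])
    readings = np.array([0.310, 0.255, 0.232, 0.186, 0.092, 0.097])  # erythemally weighted UV-B (W/m^2)

    # Minimum mean-square-error estimate of the parameters via ordinary least squares
    params, residuals, rank, _ = np.linalg.lstsq(A, readings, rcond=None)
    direct, diffuse, reflected = params
    print(direct, diffuse, reflected)
    ```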

  13. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and should therefore be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in two-dimensional space. Tests on UCI datasets show that FSS-t-SNE can effectively improve classification accuracy. An experiment was performed with a large marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347
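
    The paper's feature subset score criterion is not detailed in the abstract, so the sketch below uses a simple ANOVA F-score (scikit-learn's SelectKBest) as a stand-in for the scoring step, followed by a 2-D t-SNE embedding of the selected features. The dataset, k, and perplexity are illustrative choices.

    ```python
    from sklearn.datasets import load_wine
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.manifold import TSNE

    # A small labelled UCI-style dataset stands in for the multi-sensor feature table
    X, y = load_wine(return_X_y=True)

    # Score features and keep a subset before embedding (stand-in for the FSS criterion)
    selector = SelectKBest(score_func=f_classif, k=8)
    X_subset = selector.fit_transform(X, y)

    # Visualize the selected high-dimensional features in two-dimensional space with t-SNE
    embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_subset)
    print(embedding.shape)   # (n_samples, 2): coordinates for a scatter plot coloured by class
    ```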

  14. Dempster-Shafer fusion of multisensor signals in nonstationary Markovian context

    NASA Astrophysics Data System (ADS)

    Boudaren, Mohamed El Yazid; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2012-12-01

    The latest developments in Markov model theory and the corresponding computational techniques have opened new possibilities for image and signal modeling. In particular, the use of Dempster-Shafer theory of evidence within Markov models has provided keys to several challenging difficulties that conventional hidden Markov models cannot handle. These difficulties mainly concern two situations: multisensor data, for which Dempster-Shafer fusion is unworkable in the conventional framework, and nonstationary data, where the estimated stationary model mismatches the actual data. For each of the two situations, the Dempster-Shafer combination rule has been applied, thanks to the triplet Markov model formalism, to overcome the drawbacks of the standard Bayesian models. So far, however, both situations have not been considered at the same time. In this article, we propose an evidential Markov chain that uses the Dempster-Shafer combination rule to bring the effect of contextual information into the segmentation of multisensor nonstationary data. We also provide the expectation-maximization parameter estimation and maximum posterior marginal restoration procedures. To validate the proposed model, experiments are conducted on synthetic multisensor data and noisy images. The resulting segmentations are then compared with those obtained with conventional approaches to demonstrate the efficiency of the proposed model.
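
    The triplet Markov formalism itself is beyond a short example, but the Dempster-Shafer combination rule at its core can be shown directly. The sketch below is a minimal, generic implementation of Dempster's rule over a two-class frame of discernment; the class names and masses are hypothetical.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dicts mapping frozenset -> mass) with Dempster's rule."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                     # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: sources cannot be combined")
        return {s: m / (1.0 - conflict) for s, m in combined.items()}

    # Two sensors expressing evidence about the class of one pixel (hypothetical masses)
    water, forest = frozenset({"water"}), frozenset({"forest"})
    either = water | forest
    m_sensor1 = {water: 0.6, either: 0.4}               # partially ignorant source
    m_sensor2 = {water: 0.5, forest: 0.3, either: 0.2}
    print(dempster_combine(m_sensor1, m_sensor2))
    ```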

  15. GACEM: Genetic Algorithm Based Classifier Ensemble in a Multi-sensor System

    PubMed Central

    Xu, Rongwu; He, Lin

    2008-01-01

    Multi-sensor systems (MSS) have been increasingly applied in pattern classification, yet the search for the optimal classification framework remains an open problem. The development of classifier ensembles seems to provide a promising solution. A classifier ensemble is a learning paradigm in which many classifiers are jointly used to solve a problem, and it has proven an effective method for enhancing classification ability. In this paper, by introducing the concepts of Meta-feature (MF) and Trans-function (TF) to describe the relationship between the nature and the measurement of the observed phenomenon, classification in a multi-sensor system is unified in the classifier ensemble framework. An approach called Genetic Algorithm based Classifier Ensemble in Multi-sensor system (GACEM) is then presented, in which a genetic algorithm is used to optimize both the feature subset selection and the decision combination simultaneously. GACEM first trains a number of classifiers based on different combinations of feature vectors and then selects the classifiers whose weights exceed a preset threshold to make up the ensemble. An empirical study shows that, compared with conventional feature-level and decision-level voting, GACEM not only achieves better and more robust performance but also simplifies the system markedly.
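
    GACEM's exact chromosome encoding, fitness function, and weighting scheme are not given in the abstract. The sketch below therefore only illustrates the general idea under stated assumptions: a bit-mask chromosome selects a feature subset per classifier, cross-validated accuracy serves as the genetic-algorithm fitness, and members whose normalized weight exceeds a preset threshold are kept for the ensemble. Dataset, GA parameters, and threshold are illustrative.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X, y = load_breast_cancer(return_X_y=True)
    n_features = X.shape[1]

    def fitness(mask):
        """Cross-validated accuracy of a base classifier trained on the selected features."""
        if not mask.any():
            return 0.0
        clf = DecisionTreeClassifier(max_depth=4, random_state=0)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    # Tiny genetic algorithm over feature-subset bit masks (illustrative parameters)
    population = rng.random((12, n_features)) < 0.5
    for generation in range(10):
        scores = np.array([fitness(ind) for ind in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[:6]]                  # truncation selection
        children = []
        for _ in range(len(population) - len(parents)):
            pa, pb = parents[rng.integers(6)], parents[rng.integers(6)]
            cut = rng.integers(1, n_features)            # single-point crossover
            child = np.concatenate([pa[:cut], pb[cut:]])
            child ^= rng.random(n_features) < 0.02       # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])

    # Weight each individual by its fitness and keep those above a preset threshold
    scores = np.array([fitness(ind) for ind in population])
    weights = scores / scores.sum()
    ensemble = [(m, w) for m, w in zip(population, weights) if w > 1.0 / (2 * len(population))]
    print(f"{len(ensemble)} classifiers retained for the ensemble")
    ```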

  16. Multisensor systems for security of critical infrastructures: concept, data fusion, and experimental results

    NASA Astrophysics Data System (ADS)

    Kastek, M.; Dulski, R.; Życzkowski, M.; Szustakowski, M.; Ciurapiński, W.; Firmanty, K.; Pałka, N.; Bieszczad, G.

    2011-08-01

    The paper presents the concept of a multisensor system for perimeter protection, suitable for stationary and moving objects. The system consists of an active ground radar and thermal and visible cameras. The radar locates potential intruders and directs the observation area of the system cameras. The multi-sensor concept significantly improves the probability of intruder detection and reduces false alarms, thus increasing the functionality and performance of the whole system. Effective detection ranges depend on the quality of the applied sensors and on the observed scene itself. IR cameras are among the most important devices used in such systems. The paper discusses the technical possibilities of, and limitations on, using uncooled IR cameras in such a multi-sensor system for perimeter protection. The role of IR cameras in the system is discussed, as well as the technical possibility of detecting a human being. The operational distances required for perimeter protection are rather long given the performance of commercially available thermal cameras. The spatial resolutions required for detection, recognition, and identification were calculated, and the detection ranges were then estimated using NVTherm software. Finally, the results of the analysis are presented together with a comparison of representative IR cameras.
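
    NVTherm also models atmospherics and sensor noise, which a short example cannot reproduce. As a rough illustration of how required spatial resolution translates into range, the sketch below applies the Johnson criteria (on the order of 1, 4, and 8 cycles across a target's critical dimension for detection, recognition, and identification) to a hypothetical uncooled camera; all camera and target parameters are assumptions, not values from the paper.

    ```python
    # Hypothetical uncooled LWIR camera: 17 um detector pitch, 35 mm lens
    pitch_m = 17e-6
    focal_m = 35e-3
    ifov_rad = pitch_m / focal_m            # instantaneous field of view per pixel

    # Approximate Johnson criteria: cycles across the critical dimension (50% probability)
    criteria = {"detection": 1.0, "recognition": 4.0, "identification": 8.0}
    critical_dim_m = 0.75                    # assumed critical dimension of a standing person

    for task, cycles in criteria.items():
        # one cycle spans two pixels, so the target must subtend 2*cycles pixels
        max_range_m = critical_dim_m / (2.0 * cycles * ifov_rad)
        print(f"{task:15s}: ~{max_range_m:6.0f} m (geometric estimate, no atmosphere/noise)")
    ```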

  17. Classification and Modelling of Urban Micro-Climates Using Multisensoral and Multitemporal Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Bechtel, B.; Langkamp, T.; Böhner, J.; Daneke, C.; Oßenbrügge, J.; Schempp, S.

    2012-07-01

    Remote sensing has been widely used in urban climatology because it offers a simultaneous synoptic view of the full urban surface. Methods include the analysis of surface temperature patterns, spatial (biophysical) indicators for urban heat island modelling, and flux measurements. Another approach is the automated classification of urban morphologies or structural types. This study tested whether Local Climate Zones (a new typology of thermally rather homogeneous urban morphologies) can be automatically classified from multisensor and multitemporal earth observation data. To this end, a large number of parameters were derived from different datasets, including multitemporal Landsat data and morphological profiles as well as windowed multiband signatures from an airborne IFSAR-DHM. The results for Hamburg, Germany, show that different datasets have high potential for differentiating urban morphologies. Multitemporal thermal data performed very well, with up to 96.3 % overall classification accuracy using a neural network classifier; the multispectral data reached 95.1 % and the morphological profiles 83.2 %. The multisensor feature sets reached up to 97.4 % with 100 selected features, but even small multisensoral feature sets achieved good results. This shows that microclimatically meaningful urban structures can be classified from different remote sensing datasets. The potential of the parameters for spatiotemporal modelling of the mean urban heat island was also tested; to this end, a comprehensive mobile measurement campaign with GPS loggers and temperature sensors on public buses was conducted to gain in situ data at high spatial and temporal resolution.
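
    The study's own classifier configuration and feature stack are not reproducible from the abstract; the sketch below only shows the general workflow of classifying stacked multisensor feature vectors into Local Climate Zone labels with a neural network classifier, using purely synthetic placeholder data and illustrative network parameters.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder data: rows = samples (pixels/blocks), columns = stacked multisensor
    # features (multitemporal thermal bands, multispectral bands, DHM-derived metrics);
    # labels = Local Climate Zone classes. Purely synthetic for illustration.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 60))
    y = rng.integers(0, 10, size=2000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
    clf.fit(X_train, y_train)
    print(f"overall accuracy: {clf.score(X_test, y_test):.3f}")   # near chance on random labels
    ```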

  18. Multi-sensor data fusion for measurement of complex freeform surfaces

    NASA Astrophysics Data System (ADS)

    Ren, M. J.; Liu, M. Y.; Cheung, C. F.; Yin, Y. H.

    2016-01-01

    With the rapid development of science and technology in fields such as space optics, multi-scale enriched freeform surfaces are widely used to enhance the performance of optical systems in both functionality and size reduction. Multi-sensor technology is considered one of the most promising methods for measuring and characterizing these surfaces at multiple scales. This paper presents a multi-sensor data fusion based measurement method that extracts the geometric information of a component at its different scales and establishes a holistic geometry of the surface via data fusion. To address the key problems of multi-sensor data fusion, an intrinsic feature pattern based surface registration method is developed to transform the measured datasets to a common coordinate frame. A Gaussian zero-order regression filter is then used to separate each measured dataset into its different scales, and the datasets are fused within the same wavelength band using an edge intensity data fusion algorithm. The fused data at the different scales are then merged to form a new surface carrying holistic multiscale information. An experimental study is presented to verify the effectiveness of the proposed method.
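
    The registration and edge-intensity fusion steps are beyond a short example, but the scale-separation step can be sketched. The code below uses a plain Gaussian smoother as a stand-in for the zero-order Gaussian regression filter (which behaves like Gaussian weighting away from edges), with the cutoff-to-sigma conversion taken from the standard ISO Gaussian weighting constant; the surface data are synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Synthetic "measured" surface: smooth freeform component plus fine-scale texture and noise
    x, y = np.meshgrid(np.linspace(0, 10, 256), np.linspace(0, 10, 256))
    surface = 0.5 * np.sin(0.4 * x) * np.cos(0.3 * y)                       # large-scale form
    surface += 0.02 * np.sin(8 * x) + 0.005 * np.random.default_rng(0).normal(size=x.shape)

    # Gaussian smoothing as a stand-in for the zero-order Gaussian regression filter;
    # sigma ~ 0.187 * cutoff follows from the ISO Gaussian weighting function
    cutoff_samples = 40
    sigma = cutoff_samples * np.sqrt(np.log(2) / 2) / np.pi
    large_scale = gaussian_filter(surface, sigma=sigma)                     # form / waviness
    fine_scale = surface - large_scale                                      # finer-scale residual
    print(large_scale.std(), fine_scale.std())
    ```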

  19. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and should therefore be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in two-dimensional space. Tests on UCI datasets show that FSS-t-SNE can effectively improve classification accuracy. An experiment was performed with a large marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  20. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter

    PubMed Central

    Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao

    2015-01-01

    Multi-sensor sonar tracking has many advantages, such as the potential to reduce overall measurement uncertainty and the possibility of hiding the receiver. However, multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and the extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurement-to-target and measurement-to-transmitter data association ambiguities. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. Simulation results show that the proposed algorithm greatly improves target tracking performance in a cluttered environment while keeping the computational load low. PMID:26561817
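
    The PMHT data-association machinery is too long for a short example, but the nonlinear measurement handling it relies on can be illustrated. The sketch below shows a single EKF measurement update for an assumed range/bearing sonar measurement model; the state, covariances, and measurement values are hypothetical and not taken from the paper.

    ```python
    import numpy as np

    def ekf_update(x, P, z, sensor_pos, R):
        """One EKF update for a range/bearing measurement; state x = [px, py, vx, vy]."""
        dx, dy = x[0] - sensor_pos[0], x[1] - sensor_pos[1]
        r = np.hypot(dx, dy)
        h = np.array([r, np.arctan2(dy, dx)])              # predicted measurement
        H = np.array([[dx / r,       dy / r,      0.0, 0.0],   # Jacobian of h(x)
                      [-dy / r**2,   dx / r**2,   0.0, 0.0]])
        y = z - h                                            # innovation
        y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi          # wrap bearing residual
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    # Hypothetical target state, covariance, and one noisy range/bearing measurement
    x = np.array([1000.0, 500.0, -5.0, 2.0])
    P = np.diag([100.0, 100.0, 4.0, 4.0])
    z = np.array([1120.0, 0.45])                             # range (m), bearing (rad)
    R = np.diag([25.0, 1e-4])
    x_upd, P_upd = ekf_update(x, P, z, sensor_pos=(0.0, 0.0), R=R)
    print(x_upd)
    ```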