Science.gov

Sample records for multisensor array processing

  1. Multisensor Arrays for Greater Reliability and Accuracy

    NASA Technical Reports Server (NTRS)

    Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff

    2004-01-01

    Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that
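
    The ratio-based weighting described above is specific enough to sketch in code. The following is a minimal interpretation, not NASA's published algorithm: readings are assumed positive, pairwise closeness ratios (min/max) stand in for the ratio comparison, their row sums serve as the reliability weights, and the optional running-sum "learning" variant is included.

```python
import numpy as np

def msa_output(readings, running_reliability=None):
    """Sketch of the MSA weighted-average step.  The pairwise closeness ratio and
    the assumption of positive readings are an illustrative interpretation of the
    abstract, not the published NASA formula."""
    x = np.asarray(readings, dtype=float)
    # Pairwise ratios: 1.0 when two sensors agree exactly, smaller otherwise.
    r = np.minimum.outer(x, x) / np.maximum.outer(x, x)
    reliability = r.sum(axis=1)             # per-sensor reliability at this instant
    if running_reliability is not None:     # "learning" variant: accumulate over time
        running_reliability = running_reliability + reliability
        weights = running_reliability
    else:
        weights = reliability
    estimate = np.average(x, weights=weights)
    return estimate, running_reliability

# Example: eight nominally identical sensors, one reading wildly off;
# the outlier receives a low weight and barely influences the output.
readings = [10.1, 9.9, 10.0, 10.2, 10.05, 9.95, 10.1, 3.0]
value, _ = msa_output(readings)
print(value)
```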

  2. Highly reliable multisensor array (MSA) smart transducers

    NASA Astrophysics Data System (ADS)

    Perotti, José; Lucena, Angel; Mackey, Paul; Mata, Carlos; Immer, Christopher

    2006-05-01

    Many developments in the field of multisensor array (MSA) transducers have taken place in the last few years. Advancements in fabrication technology, such as Micro-Electro-Mechanical Systems (MEMS) and nanotechnology, have made implementation of MSA devices a reality. NASA Kennedy Space Center (KSC) has been developing this type of technology because of the increases in safety, reliability, and performance and the reduction in operational and maintenance costs that can be achieved with these devices. To demonstrate the MSA technology benefits, KSC quantified the relationship between the number of sensors (N) and the associated improvement in sensor life and reliability. A software algorithm was developed to monitor and assess the health of each element and the overall MSA. Furthermore, the software algorithm implemented criteria on how these elements would contribute to the MSA-calculated output to ensure required performance. The hypothesis was that a greater number of statistically independent sensor elements would provide a measurable increase in measurement reliability. A computer simulation was created to answer this question. An array of N sensors underwent random failures in the simulation and a life extension factor (LEF equals the percentage of the life of a single sensor) was calculated by the program. When LEF was plotted as a function of N, a quasi-exponential behavior was detected with marginal improvement above N = 30. The hypothesis and follow-on simulation results were then corroborated experimentally. An array composed of eight independent pressure sensors was fabricated. To accelerate sensor life cycle and failure and to simulate degradation over time, the MSA was exposed to an environmental temperature of 125°C. Every 24 hours, the experiment's environmental temperature was returned to ambient temperature (27°C), and the outputs of all the MSA sensor elements were measured. Once per week, the MSA calibration was verified at five different
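
    A minimal Monte Carlo sketch of the kind of LEF simulation described above. The lifetime distribution, the rule that the array survives while at least a minimum number of elements still work, and all numerical values are assumptions made for illustration; the abstract does not specify them. The printed values show the diminishing returns at larger N.

```python
import numpy as np

rng = np.random.default_rng(0)

def life_extension_factor(n_sensors, min_working=3, trials=20000):
    """Monte Carlo estimate of the life-extension factor (LEF) for an N-element MSA.
    Assumed model: sensor lifetimes are i.i.d. exponential with mean 1.0, and the
    array is 'alive' while at least `min_working` elements still function.  LEF is
    reported as a percentage of a single sensor's mean life."""
    lifetimes = rng.exponential(scale=1.0, size=(trials, n_sensors))
    # The array dies at the failure that leaves fewer than `min_working` sensors.
    array_life = np.sort(lifetimes, axis=1)[:, n_sensors - min_working]
    return 100.0 * array_life.mean()   # single-sensor mean life is 1.0 by construction

for n in (4, 8, 16, 30, 60):
    print(n, round(life_extension_factor(n), 1))
```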

  3. Hybrid integration process for the development of multisensor chips

    NASA Astrophysics Data System (ADS)

    Jin, Na; Liu, Weiguo

    A novel hybrid integration process has been developed for the integration of a single-crystal pyroelectric detector with a readout IC, based on a thinning and anisotropic conductive tape bonding technique. We report our recent progress in applying the hybrid integration process to the fabrication of a multisensor chip with integrated thermal and sound detectors. The sound detector in the multisensor chip is based on thinned single-crystal quartz, while the thermal detector makes use of a thinned PLZT ceramic wafer. A membrane transfer process (MTP) was applied for the thinning and integration of the single-crystal and ceramic wafers.

  4. Robust site security using smart seismic array technology and multi-sensor data fusion

    NASA Astrophysics Data System (ADS)

    Hellickson, Dean; Richards, Paul; Reynolds, Zane; Keener, Joshua

    2010-04-01

    Traditional site security systems are susceptible to high individual sensor nuisance alarm rates that reduce the overall system effectiveness. Visual assessment of intrusions can be intensive and manually difficult as cameras are slewed by the system to non-intrusion areas or as operators respond to nuisance alarms. Very little system intrusion performance data are available other than discrete sensor alarm indications that provide no real value. This paper discusses the system architecture, integration and display of a multi-sensor data-fused system for wide area surveillance, local site intrusion detection and intrusion classification. The incorporation of a novel seismic array of smart sensors using FK beamforming processing, which greatly enhances the overall detection and classification performance of the system, is discussed. Recent test data demonstrate the performance of the seismic array within several different installations and its ability to classify and track moving targets at significant standoff distances with exceptional immunity to background clutter and noise. Multi-sensor data fusion is applied across a suite of complementary sensors, eliminating almost all nuisance alarms while integrating within a geographical information system to feed a visual-fusion display of the area being secured. Real-time sensor detection and intrusion classification data are presented within a visual-fusion display, providing greatly enhanced situational awareness, system performance information and real-time assessment of intrusions and situations of interest with limited security operator involvement. This approach scales from a small local perimeter to a very large geographical area and can be used across multiple sites controlled at a single command and control station.

  5. Breath analysis system for early detection of lung diseases based on multi-sensor array

    NASA Astrophysics Data System (ADS)

    Jeon, Jin-Young; Yu, Joon-Boo; Shin, Jeong-Suk; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Expiratory breath contains various VOCs (volatile organic compounds) produced by the human body. When a certain disease is present, the exhaled breath contains specific VOCs generated by that disease. Many researchers have been actively working to find different types of biomarkers that are characteristic of particular diseases. Research regarding the identification of specific diseases from exhalation is still in progress. The aim of this research is to implement early detection of lung diseases such as lung cancer and COPD (chronic obstructive pulmonary disease), which ranked sixth among domestic causes of death in 2010, based on a multi-sensor array system. The system has been used to acquire sampled expiratory gas data, and the PCA (principal component analysis) technique was applied to analyze signals from the multi-sensor array. Throughout the experimental trials, a clearly distinguishable difference between lung disease patients and healthy controls was found from the measurement and analysis of their respective expiratory gases.
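
    A compact sketch of the PCA step applied to multi-sensor array responses. The response matrix below is synthetic, with the group separation built in purely to show how the principal-component scores are computed; it is not the paper's data or analysis pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical response matrix: one row per breath sample, one column per
# sensor in the array (values are synthetic, for illustration only).
rng = np.random.default_rng(1)
healthy  = rng.normal(loc=0.2, scale=0.05, size=(20, 8))
patients = rng.normal(loc=0.5, scale=0.05, size=(20, 8))
X = np.vstack([healthy, patients])

# Standardize the sensor channels, then project onto the first two components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# Plotting scores[:20] against scores[20:] would show the two groups separating
# along the first principal component.
print(scores[:3], scores[-3:])
```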

  6. Could We Apply a NeuroProcessor For Analyzing a Gas Response Of Multisensor Arrays?

    SciTech Connect

    Sysoev, V. V.; Musatov, V. Yu.; Maschenko, A. A.; Varegnikov, A. S.; Chrizostomov, A. A.; Kiselev, I.; Schneider, T.; Bruns, M.; Sommer, M.

    2009-05-23

    We describe an effort to implement a hardware neuroprocessor to carry out pattern recognition of signals generated by a multisensor microarray of the electronic nose type. The multisensor microarray is designed with a SnO2 thin film segmented by co-planar electrodes according to the KAMINA (KArlsruhe Micro NAse) E-nose architecture. The response of this microarray to reducing gases mixed with synthetic air is processed by the principal component analysis technique realized on a PC (Matlab software) and by the neural microprocessor NeuroMatrix NM6403. It is shown that the neuroprocessor is able to successfully carry out a gas-recognition algorithm in real time.

  7. AprilTag array-aided extrinsic calibration of camera-laser multi-sensor system.

    PubMed

    Tang, Dengqing; Hu, Tianjiang; Shen, Lincheng; Ma, Zhaowei; Pan, Congyu

    This paper presents a new algorithm for extrinsically calibrating a multi-sensor system comprising multiple cameras and a 2D laser scanner. On the basis of camera pose estimation using AprilTag, we design an AprilTag array as the calibration target and employ a nonlinear optimization to calculate the single-camera extrinsic parameters when multiple tags are in the field of view of the camera. The camera-camera and laser-camera extrinsic parameters are then calibrated, respectively. A global optimization is finally used to refine all the extrinsic parameters by minimizing a re-projection error. The algorithm is suited to the extrinsic calibration of multiple cameras even when their fields of view do not overlap. For validation, we built a micro-aerial vehicle platform with a multi-sensor system to collect real data, and the experimental results confirm that the proposed algorithm performs well.

  8. A radiosonde using a humidity sensor array with a platinum resistance heater and multi-sensor data fusion.

    PubMed

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-07-12

    This paper describes the design and implementation of a radiosonde which can measure the meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, microwave module, temperature sensor, pressure sensor and humidity sensor array. In order to effectively solve the humidity sensor condensation problem due to the low temperatures in the high altitude environment, a capacitive humidity sensor including four humidity sensors to collect meteorological humidity and a platinum resistance heater was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array can improve measurement accuracy and obtain reliable initial meteorological humidity data, while the multi-sensor data fusion technique eliminates the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes.
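
    The abstract does not state which fusion rule is used, so the following sketch applies a common choice, inverse-variance weighting, to the four humidity elements; the readings and variances are illustrative values.

```python
import numpy as np

def fuse_humidity(readings, variances):
    """Minimal multi-sensor fusion sketch: inverse-variance weighting of the
    humidity elements (the paper's exact fusion rule is not given in the abstract)."""
    r = np.asarray(readings, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # more precise sensors weigh more
    fused = np.sum(w * r) / np.sum(w)
    fused_var = 1.0 / np.sum(w)                    # uncertainty of the fused estimate
    return fused, fused_var

# Example: four sensor elements, one noisier than the others.
print(fuse_humidity([42.1, 41.8, 42.3, 44.0], [0.04, 0.04, 0.04, 0.50]))
```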

  9. A Radiosonde Using a Humidity Sensor Array with a Platinum Resistance Heater and Multi-Sensor Data Fusion

    PubMed Central

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-01-01

    This paper describes the design and implementation of a radiosonde which can measure the meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, microwave module, temperature sensor, pressure sensor and humidity sensor array. In order to effectively solve the humidity sensor condensation problem due to the low temperatures in the high altitude environment, a capacitive humidity sensor including four humidity sensors to collect meteorological humidity and a platinum resistance heater was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array can improve measurement accuracy and obtain reliable initial meteorological humidity data, while the multi-sensor data fusion technique eliminates the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes. PMID:23857263

  10. Optical sensors and multisensor arrays containing thin film electroluminescent devices

    DOEpatents

    Aylott, Jonathan W.; Chen-Esterlit, Zoe; Friedl, Jon H.; Kopelman, Raoul; Savvateev, Vadim N.; Shinar, Joseph

    2001-12-18

    Optical sensor, probe and array devices for detecting chemical, biological, and physical analytes. The devices include an analyte-sensitive layer optically coupled to a thin film electroluminescent layer which activates the analyte-sensitive layer to provide an optical response. The optical response varies depending upon the presence of an analyte and is detected by a photodetector and analyzed to determine the properties of the analyte.

  11. Multi-sensor Array for High Altitude Balloon Missions to the Stratosphere

    NASA Astrophysics Data System (ADS)

    Davis, Tim; McClurg, Bryce; Sohl, John

    2008-10-01

    We have designed and built a microprocessor controlled and expandable multi-sensor array for data collection on near space missions. Weber State University has started a high altitude research balloon program called HARBOR. This array has been designed to data log a base set of measurements for every flight and has room for six guest instruments. The base measurements are absolute pressure, on-board temperature, 3-axis accelerometer for attitude measurement, and 2-axis compensated magnetic compass. The system also contains a real time clock and circuitry for logging data directly to a USB memory stick. In typical operation the measurements will be cycled through in sequence and saved to the memory stick along with the clock's time stamp. The microprocessor can be reprogrammed to adapt to guest experiments with either analog or digital interfacing. This system will fly with every mission and will provide backup data collection for other instrumentation for which the primary task is measuring atmospheric pressure and temperature. The attitude data will be used to determine the orientation of the onboard camera systems to aid in identifying features in the images. This will make these images easier to use for any future GIS (geographic information system) remote sensing missions.

  12. Array signal processing

    SciTech Connect

    Haykin, S.; Justice, J.H.; Owsley, N.L.; Yen, J.L.; Kak, A.C.

    1985-01-01

    This is the first book to be devoted completely to array signal processing, a subject that has become increasingly important in recent years. The book consists of six chapters. Chapter 1, which is introductory, reviews some basic concepts in wave propagation. The remaining five chapters deal with the theory and applications of array signal processing in (a) exploration seismology, (b) passive sonar, (c) radar, (d) radio astronomy, and (e) tomographic imaging. The various chapters of the book are self-contained. The book is written by a team of five active researchers, who are specialists in the individual fields covered by the pertinent chapters.

  13. Identification and quantification of individual volatile organic compounds in a binary mixture by SAW multisensor array and pattern recognition analysis

    NASA Astrophysics Data System (ADS)

    Penza, M.; Cassano, G.; Tortorella, F.

    2002-06-01

    We have developed a surface acoustic wave (SAW) multisensor array with five acoustic sensing elements configured as two-port resonator 433.92 MHz oscillators and a reference SAW element to recognize different individual components and determine their concentrations in a binary mixture of volatile organic compounds (VOCs) such as methanol and acetone, in the ranges 15-130 and 50-250 ppm, respectively. The SAW sensors have been specifically coated by various sensing thin films such as arachidic acid, carbowax, behenic acid, triethanolamine or acrylated polysiloxane, operating at room temperature. By using the relative frequency change as the output signal of the SAW multisensor array with an artificial neural network (ANN), a recognition system has been realized for the identification and quantification of tested VOCs. The features of the SAW multisensor array exposed to a binary component organic mixture of methanol and acetone have been extracted from the output signals of five SAW sensors by pattern recognition (PARC) techniques, such as principal component analysis (PCA). An organic vapour pattern classifier has been implemented by using a multilayer neural network with a backpropagation learning algorithm. The normalized responses of a reduced set of SAW sensors or selected principal components scores have been used as inputs for a feed-forward multilayer perceptron (MLP), resulting in a 70% correct recognition rate with the normalized responses of the four SAW sensors and in an enhanced 80% correct recognition rate with the first two principal components of the original data consisting of the normalized responses of the four SAW sensors. The prediction of the individual vapour concentrations has been tackled with PCA for features extraction and by using the first two principal components scores as inputs to a feed-forward MLP consisting of a gating network, which decides which of three specific subnets should be used to determine the output concentration: the
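
    A sketch of the PCA-plus-MLP recognition chain in scikit-learn. The four-channel "SAW responses" below are synthetic placeholders for the normalized frequency shifts, and the network size is arbitrary; only the structure of the pipeline mirrors the approach described above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the normalized responses of four coated SAW sensors to
# methanol (class 0) and acetone (class 1); real features are resonator frequency shifts.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal([0.2, 0.5, 0.3, 0.1], 0.05, size=(40, 4)),
               rng.normal([0.6, 0.2, 0.4, 0.5], 0.05, size=(40, 4))])
y = np.repeat([0, 1], 40)

# Feature extraction by PCA, then a small feed-forward MLP as the vapour classifier.
clf = make_pipeline(PCA(n_components=2),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
clf.fit(X, y)
print(clf.score(X, y))
```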

  14. Cellular Array Processing Simulation

    NASA Astrophysics Data System (ADS)

    Lee, Harry C.; Preston, Earl W.

    1981-11-01

    The Cellular Array Processing Simulation (CAPS) system is a high-level image language that runs on a multiprocessor configuration. CAPS is interpretively decoded on a conventional minicomputer with all image operation instructions executed on an array processor. The synergistic environment that exists between the minicomputer and the array processor gives CAPS its high-speed throughput, while maintaining a convenient conversational user language. CAPS was designed to be both modular and table driven so that it can be easily maintained and modified. CAPS uses the image convolution operator as one of its primitives and performs this cellular operation by decomposing it into parallel image steps that are scheduled to be executed on the array processor. Among its features is the ability to observe the imagery in real time as a user's algorithm is executed. This feature reduces the need for image storage space, since it is feasible to retain only original images and produce resultant images when needed. CAPS also contains a language processor that permits users to develop re-entrant image processing subroutines or algorithms.

  15. Atomic Magnetometer Multisensor Array for rf Interference Mitigation and Unshielded Detection of Nuclear Quadrupole Resonance

    NASA Astrophysics Data System (ADS)

    Cooper, Robert J.; Prescott, David W.; Matz, Peter; Sauer, Karen L.; Dural, Nezih; Romalis, Michael V.; Foley, Elizabeth L.; Kornack, Thomas W.; Monti, Mark; Okamitsu, Jeffrey

    2016-12-01

    An array of four 87Rb vector magnetometers is used to detect nuclear quadrupole resonance signals in an unshielded environment at 1 MHz. With a baseline of 25 cm, the length of the array, radio-frequency interference mitigation is also demonstrated; a radio-station signal is suppressed by a factor of 20 without degradation to the signal of interest. With these compact sensors, in which the probe beam passes through twice, the fundamental limit to detection sensitivity is found to be photon-shot noise. More passes of the probe beam overcome this limitation. With a sensor of similar effective volume, 0.25 cm³, but 25× more passes, the sensitivity is improved by an order of magnitude to 1.7 ± 0.2 fT/√Hz.
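
    The mitigation scheme is not detailed in the abstract; a generic stand-in is least-squares subtraction of a reference channel that sees the interference but not the local signal, sketched below with synthetic data.

```python
import numpy as np

def subtract_reference(signal_ch, reference_ch):
    """Least-squares reference subtraction (a generic stand-in, not the paper's
    method): scale the distant sensor's record so the common-mode radio
    interference cancels, leaving the local signal of interest intact."""
    alpha = np.dot(signal_ch, reference_ch) / np.dot(reference_ch, reference_ch)
    return signal_ch - alpha * reference_ch

# Synthetic illustration: a weak, decaying 1 MHz tone buried under broadcast pickup.
fs = 5e6
t = np.arange(0, 2e-3, 1 / fs)
rfi = 50 * np.sin(2 * np.pi * 0.93e6 * t)                    # interference seen by both sensors
nqr = 1.0 * np.sin(2 * np.pi * 1.00e6 * t) * np.exp(-t / 5e-4)
near = nqr + rfi + np.random.default_rng(2).normal(0, 0.5, t.size)
far  = 0.98 * rfi + np.random.default_rng(3).normal(0, 0.5, t.size)
clean = subtract_reference(near, far)     # RFI largely removed, local signal preserved
```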

  16. Metal oxide based multisensor array and portable database for field analysis of antioxidants

    PubMed Central

    Sharpe, Erica; Bradley, Ryan; Frasco, Thalia; Jayathilaka, Dilhani; Marsh, Amanda; Andreescu, Silvana

    2014-01-01

    We report a novel chemical sensing array based on metal oxide nanoparticles as a portable and inexpensive paper-based colorimetric method for polyphenol detection and field characterization of antioxidant containing samples. Multiple metal oxide nanoparticles with various polyphenol binding properties were used as active sensing materials to develop the sensor array and establish a database of polyphenol standards that include epigallocatechin gallate, gallic acid, resveratrol, and Trolox among others. Unique charge-transfer complexes are formed between each polyphenol and each metal oxide on the surface of individual sensors in the array, creating distinct optically detectable signals which have been quantified and logged into a reference database for polyphenol identification. The field-portable Pantone/X-Rite© CapSure® color reader was used to create this database and to facilitate rapid colorimetric analysis. The use of multiple metal-oxide sensors allows for cross-validation of results and increases accuracy of analysis. The database has enabled successful identification and quantification of antioxidant constituents within real botanical extractions including green tea. Formation of charge-transfer complexes is also correlated with antioxidant activity exhibiting electron transfer capabilities of each polyphenol. The antioxidant activity of each sample was calculated and validated against the oxygen radical absorbance capacity (ORAC) assay showing good comparability. The results indicate that this method can be successfully used for a more comprehensive analysis of antioxidant containing samples as compared to conventional methods. This technology can greatly simplify investigations into plant phenolics and make possible the on-site determination of antioxidant composition and activity in remote locations. PMID:24610993
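
    A toy illustration of matching a field reading against a reference database of polyphenol signatures. The reference values and the nearest-neighbour rule are invented for illustration; the actual database holds calibrated colour readings collected with the CapSure reader.

```python
import numpy as np

# Hypothetical reference database: mean colour-channel readings of each metal-oxide
# spot for known polyphenol standards (values invented for illustration).
reference = {
    "gallic acid":  np.array([0.62, 0.18, 0.35, 0.71]),
    "EGCG":         np.array([0.55, 0.40, 0.22, 0.65]),
    "resveratrol":  np.array([0.30, 0.52, 0.48, 0.20]),
}

def identify(sample_reading):
    """Match a field reading from the sensor array against the reference database
    by nearest Euclidean distance (a plausible, simplified matching rule)."""
    return min(reference, key=lambda name: np.linalg.norm(reference[name] - sample_reading))

print(identify(np.array([0.60, 0.20, 0.33, 0.70])))   # -> "gallic acid"
```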

  17. Integrated multisensor navigation systems

    NASA Technical Reports Server (NTRS)

    Vangraas, Frank

    1988-01-01

    The multisensor navigation systems research evolved from the availability of several stand alone navigation systems and the growing concern for aircraft navigation reliability and safety. The intent is to develop a multisensor navigation system during the next decade that will be capable of providing reliable aircraft position data. These data will then be transmitted directly, or by satellite, to surveillance centers to aid the process of air traffic flow control. In order to satisfy the requirements for such a system, the following issues need to be examined: performance, coverage, reliability, availability, and integrity. The presence of a multisensor navigation system in all aircraft will improve safety for the aviation community and allow for more economical operation.

  18. Multi-sensor fusion development

    NASA Astrophysics Data System (ADS)

    Bish, Sheldon; Rohrer, Matthew; Scheffel, Peter; Bennett, Kelly

    2016-05-01

    The U.S. Army Research Laboratory (ARL) and McQ Inc. are developing a generic sensor fusion architecture that involves several diverse processes working in combination to create a dynamic task-oriented, real-time informational capability. Processes include sensor data collection, persistent and observational data storage, and multimodal and multisensor fusion that includes the flexibility to modify the fusion program rules for each mission. Such a fusion engine lends itself to a diverse set of sensing applications and architectures while using open-source software technologies. In this paper, we describe a fusion engine architecture that combines multimodal and multi-sensor fusion within an Open Standard for Unattended Sensors (OSUS) framework. The modular, plug-and-play architecture of OSUS allows future fusion plugin methodologies to have seamless integration into the fusion architecture at the conceptual and implementation level. Although beyond the scope of this paper, this architecture allows for data and information manipulation and filtering for an array of applications.

  19. Acoustic signal processing toolbox for array processing

    NASA Astrophysics Data System (ADS)

    Pham, Tien; Whipps, Gene T.

    2003-08-01

    The US Army Research Laboratory (ARL) has developed an acoustic signal processing toolbox (ASPT) for acoustic sensor array processing. The intent of this document is to describe the toolbox and its uses. The ASPT is a GUI-based software that is developed and runs under MATLAB. The current version, ASPT 3.0, requires MATLAB 6.0 and above. ASPT contains a variety of narrowband (NB) and incoherent and coherent wideband (WB) direction-of-arrival (DOA) estimation and beamforming algorithms that have been researched and developed at ARL. Currently, ASPT contains 16 DOA and beamforming algorithms. It contains several different NB and WB versions of the MVDR, MUSIC and ESPRIT algorithms. In addition, there are a variety of pre-processing, simulation and analysis tools available in the toolbox. The user can perform simulation or real data analysis for all algorithms with user-defined signal model parameters and array geometries.
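
    As a flavour of the narrowband algorithms listed above, here is a textbook MVDR spatial spectrum for a uniform linear array; it is not ARL's ASPT code, and the array geometry, diagonal loading and test scenario are arbitrary choices.

```python
import numpy as np

def mvdr_spectrum(X, d_over_lambda, angles_deg):
    """Narrowband MVDR spatial spectrum for a uniform linear array.
    X: complex snapshots, shape (n_sensors, n_snapshots)."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                     # sample covariance
    R += 1e-6 * np.trace(R).real / n * np.eye(n)        # light diagonal loading
    Rinv = np.linalg.inv(R)
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n) * np.sin(th))
        p.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(p)

# Example: 8-element half-wavelength ULA, one source at +20 degrees plus noise.
rng = np.random.default_rng(4)
n, snaps = 8, 400
a20 = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(20)))
X = np.outer(a20, rng.normal(size=snaps)) + 0.1 * (rng.normal(size=(n, snaps))
                                                   + 1j * rng.normal(size=(n, snaps)))
angles = np.arange(-90, 91)
print(angles[np.argmax(mvdr_spectrum(X, 0.5, angles))])   # peaks near 20
```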

  20. Multi-sensor magnetoencephalography with atomic magnetometers

    PubMed Central

    Johnson, Cort N; Schwindt, P D D; Weisend, M

    2014-01-01

    The authors have detected magnetic fields from the human brain with two independent, simultaneously operating rubidium spin-exchange-relaxation-free magnetometers. Evoked responses from auditory stimulation were recorded from multiple subjects with two multi-channel magnetometers located on opposite sides of the head. Signal processing techniques enabled by multi-channel measurements were used to improve signal quality. This is the first demonstration of multi-sensor atomic magnetometer magnetoencephalography and provides a framework for developing a non-cryogenic, whole-head magnetoencephalography array for source localization. PMID:23939051

  1. Towards operational multisensor registration

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.

    1991-01-01

    To use data from a number of different remote sensors in a synergistic manner, a multidimensional analysis of the data is necessary. However, prior to this analysis, processing to correct for the systematic geometric distortion characteristic of each sensor is required. Furthermore, the registration process must be fully automated to handle a large volume of data and high data rates. A conceptual approach towards an operational multisensor registration algorithm is presented. The performance requirements of the algorithm are first formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several registration techniques that fit within the structure of this algorithm are also presented. Their performance was evaluated using a multisensor test data set assembled from LANDSAT TM, SEASAT, SIR-B, Thermal Infrared Multispectral Scanner (TIMS), and SPOT sensors.

  2. Airborne multisensor pod system (AMPS) data: Multispectral data integration and processing hints

    SciTech Connect

    Leary, T.J.; Lamb, A.

    1996-11-01

    The Department of Energy's Office of Arms Control and Non-Proliferation (NN-20) has developed a suite of airborne remote sensing systems that simultaneously collect coincident data from a US Navy P-3 aircraft. The primary objective of the Airborne Multisensor Pod System (AMPS) Program is "to collect multisensor data that can be used for data research, both to reduce interpretation problems associated with data overload and to develop information products more complete than can be obtained from any single sensor." The sensors are housed in wing-mounted pods and include: a Ku-Band Synthetic Aperture Radar; a CASI Hyperspectral Imager; a Daedalus 3600 Airborne Multispectral Scanner; a Wild Heerbrugg RC-30 motion compensated large format camera; various high resolution, light intensified and thermal video cameras; and several experimental sensors (e.g. the Portable Hyperspectral Imager for Low-Light Spectroscopy (PHILLS)). Over the past year or so, the Coastal Marine Resource Assessment (CAMRA) group at the Florida Department of Environmental Protection's Marine Research Institute (FMRI) has been working with the Department of Energy through the Naval Research Laboratory to develop applications and products from existing data. Considerable effort has been spent identifying image formats and integration parameters. 2 refs., 3 figs., 2 tabs.

  3. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    NASA Astrophysics Data System (ADS)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2007-07-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory system comprises a 2D array of 33 × 33 solid-state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on knowledge of the sensor response, magnetic field perturbations and the nature of the target with respect to permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  4. Integrating Scientific Array Processing into Standard SQL

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and in an extension also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  5. Array algebra estimation in signal processing

    NASA Astrophysics Data System (ADS)

    Rauhala, U. A.

    A general theory of linear estimators called array algebra estimation is interpreted in some terms of multidimensional digital signal processing, mathematical statistics, and numerical analysis. The theory has emerged during the past decade from the new field of a unified vector, matrix and tensor algebra called array algebra. The broad concepts of array algebra and its estimation theory cover several modern computerized sciences and technologies converting their established notations and terminology into one common language. Some concepts of digital signal processing are adopted into this language after a review of the principles of array algebra estimation and its predecessors in mathematical surveying sciences.

  6. The Applicability of Incoherent Array Processing to IMS Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.

    2014-03-01

    The seismic arrays of the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) are highly diverse in size and configuration, with apertures ranging from under 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high-frequency phases lacking coherence between sensors. Pipeline detection algorithms often miss such phases, since they only consider frequencies low enough to allow coherent array processing, and phases that are detected are often attributed qualitatively incorrect backazimuth and slowness estimates. This can result in missed events, due to either a lack of contributing phases or by corruption of event hypotheses by spurious detections. It has been demonstrated previously that continuous spectral estimation can both detect and estimate phases on the largest aperture arrays, with arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity, as is the case for classical f-k analysis, and the ability to estimate slowness vectors requires sufficiently large inter-sensor distances to resolve time-delays between pulses with a period of the order 4-5 s. Spectrogram beampacking works well on five IMS arrays with apertures over 20 km (NOA, AKASG, YKA, WRA, and KURK) without additional post-processing. Seven arrays with 10-20 km aperture (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 s period signal. Even for medium aperture arrays which can provide high-quality coherent slowness estimates, a complementary spectrogram beampacking procedure could act as a quality control by providing non-aliased estimates when the coherent slowness grids display
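
    The smoothing step described above (convolving a measured slowness grid with the array response for a 4-5 s period signal) can be sketched as follows; the array geometry and the "measured" grid are placeholders, not any IMS station's data.

```python
import numpy as np
from scipy.signal import convolve2d

def array_response(xy, slow_grid, period):
    """|Array response| of a seismic array (sensor coordinates xy in km) evaluated
    on a grid of slowness offsets (s/km) for a signal of the given period (s)."""
    f = 1.0 / period
    sx, sy = np.meshgrid(slow_grid, slow_grid)
    # Phase term per sensor for each trial slowness offset, summed over sensors.
    phase = 2j * np.pi * f * (np.tensordot(sx, xy[:, 0], 0) + np.tensordot(sy, xy[:, 1], 0))
    return np.abs(np.exp(phase).sum(axis=-1)) / len(xy)

# Hypothetical 9-element array (~10 km aperture) and a placeholder measured grid.
xy = np.array([[np.cos(a), np.sin(a)] for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
              + [[0.0, 0.0]]) * 5.0
slow = np.linspace(-0.05, 0.05, 41)                    # slowness offsets, s/km
resp = array_response(xy, slow, period=5.0)
power = np.random.default_rng(5).random((41, 41))      # stands in for the measured grid
smoothed = convolve2d(power, resp / resp.sum(), mode='same', boundary='symm')
```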

  7. Characterizing the Propagation of Uterine Electrophysiological Signals Recorded with a Multi-Sensor Abdominal Array in Term Pregnancies.

    PubMed

    Escalona-Vargas, Diana; Govindan, Rathinaswamy B; Furdea, Adrian; Murphy, Pam; Lowery, Curtis L; Eswaran, Hari

    2015-01-01

    The objective of this study was to quantify the number of segments that have contractile activity and determine the propagation speed from uterine electrophysiological signals recorded over the abdomen. The uterine magnetomyographic (MMG) signals were recorded with a 151 channel SARA (SQUID Array for Reproductive Assessment) system from 36 pregnant women between 37 and 40 weeks of gestational age. The MMG signals were scored and segments were classified based on presence of uterine contractile burst activity. The sensor space was then split into four quadrants and in each quadrant signal strength at each sample was calculated using center-of-gravity (COG). To this end, the cross-correlation analysis of the COG was performed to calculate the delay between pairwise combinations of quadrants. The relationship in propagation across the quadrants was quantified and propagation speeds were calculated from the delays. MMG recordings were successfully processed from 25 subjects and the average values of propagation speeds ranged from 1.3-9.5 cm/s, which was within the physiological range. The propagation was observed between both vertical and horizontal quadrants confirming multidirectional propagation. After the multiple pairwise test (99% CI), significant differences in speeds can be observed between certain vertical or horizontal combinations and the crossed pair combinations. The number of segments containing contractile activity in any given quadrant pair with a detectable delay was significantly higher in the lower abdominal pairwise combination as compared to all others. The quadrant-based approach using MMG signals provided us with high spatial-temporal information of the uterine contractile activity and will help us in the future to optimize abdominal electromyographic (EMG) recordings that are practical in a clinical setting.
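
    The delay estimation between quadrant centre-of-gravity traces can be sketched with a plain cross-correlation; the sampling rate, quadrant separation and synthetic burst below are illustrative values, not the SARA configuration.

```python
import numpy as np

def quadrant_delay_and_speed(cog_a, cog_b, fs, separation_cm):
    """Delay between two quadrant centre-of-gravity traces via cross-correlation,
    converted to a propagation speed.  The separation is an assumed sensor-space
    distance, not SARA's actual geometry."""
    a = cog_a - cog_a.mean()
    b = cog_b - cog_b.mean()
    xc = np.correlate(a, b, mode='full')
    lag = np.argmax(xc) - (len(b) - 1)          # samples; positive => a lags b
    delay_s = lag / fs
    speed = separation_cm / abs(delay_s) if delay_s != 0 else np.inf
    return delay_s, speed

# Example: a slow contractile burst arriving 2 s later in the second quadrant.
fs = 32.0
t = np.arange(0, 60, 1 / fs)
burst = np.exp(-((t - 20) / 5) ** 2) * np.sin(2 * np.pi * 0.25 * t)
burst_late = np.exp(-((t - 22) / 5) ** 2) * np.sin(2 * np.pi * 0.25 * (t - 2))
print(quadrant_delay_and_speed(burst_late, burst, fs, separation_cm=15.0))  # ~2 s, ~7.5 cm/s
```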

  8. Optical signal processing of phased array radar

    NASA Astrophysics Data System (ADS)

    Weverka, Robert T.

    This thesis develops optical processors that scale to very high processing speed. Optical signal processing is often promoted on the basis of smaller size, lower weight and lower power consumption as well as higher signal processing speed. While each of these requirements has applications, it is the ones that require processing speed beyond that available in electronics that are most compelling. Thirty years ago, optical processing was the only method fast enough to process Synthetic Aperture Radar (SAR), one of the more demanding signal processing tasks at that time. Since that time electronic processing speed has improved sufficiently to tackle that problem. We have sought out the problems that require significantly higher processing speed and developed optical processors that tackle these more difficult problems. The components that contribute to high signal processing speed are high input signal bandwidth, a large number of parallel input channels each with this high bandwidth, and a large number of parallel operations required on each input channel. Adaptive signal processing for phased array radar has all of these factors. The processors developed for this task scale well in three dimensions, which allows them to maximize parallelism for high speed. This thesis explores an example of a negative feedback adaptive phased-array processor and an example of a positive feedback phased-array processor. The negative feedback processor uses an array of inputs in up to two dimensions together with the time history of the signal in the third dimension to adapt the array pattern to null out incoming jammer signals. The positive feedback processor uses the incoming signals and assumptions about the radar scene to correct for position errors in a phased array. Discovery and analysis of these new processors are facilitated by an original volume holographic analysis technique developed in the thesis. The thesis includes a new acoustooptic Bragg cell geometry developed with

  9. Solid-State Multi-Sensor Array System for Real Time Imaging of Magnetic Fields and Ferrous Objects

    NASA Astrophysics Data System (ADS)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2008-02-01

    In this paper the development of a solid-state-sensor-based system for real-time imaging of magnetic fields and ferrous objects is described. The system comprises 1089 magneto-inductive solid-state sensors arranged in a 2D matrix of 33×33 rows and columns, equally spaced in order to cover an approximate area of 300 by 300 mm. The sensor array is located within a large current-carrying coil. Data are sampled from the sensors by several DSP controlling units and finally streamed to a host computer via a USB 2.0 interface, and the image is generated and displayed at a rate of 20 frames per minute. The development of the instrumentation has been complemented by extensive numerical modeling of field distribution patterns using boundary element methods. The system was originally intended for deployment in the non-destructive evaluation (NDE) of reinforced concrete. Nevertheless, the system is not only capable of producing real-time, live video images of a metal target embedded within any opaque medium, it also allows the real-time visualization and determination of the magnetic field distribution emitted by either permanent magnets or geometries carrying current. Although this system was initially developed for the NDE arena, it could also have many potential applications in many other fields, including medicine, security, manufacturing, quality assurance and design involving magnetic fields.

  10. Barrow real-time sea ice mass balance data: ingestion, processing, dissemination and archival of multi-sensor data

    NASA Astrophysics Data System (ADS)

    Grimes, J.; Mahoney, A. R.; Heinrichs, T. A.; Eicken, H.

    2012-12-01

    Sensor data can be highly variable in nature and also varied depending on the physical quantity being observed, sensor hardware and sampling parameters. The sea ice mass balance site (MBS) operated in Barrow by the University of Alaska Fairbanks (http://seaice.alaska.edu/gi/observatories/barrow_sealevel) is a multisensor platform consisting of a thermistor string, air and water temperature sensors, acoustic altimeters above and below the ice and a humidity sensor. Each sensor has a unique specification and configuration. The data from multiple sensors are combined to generate sea ice data products. For example, ice thickness is calculated from the positions of the upper and lower ice surfaces, which are determined using data from downward-looking and upward-looking acoustic altimeters above and below the ice, respectively. As a data clearinghouse, the Geographic Information Network of Alaska (GINA) processes real-time data from many sources, including the Barrow MBS. Doing so requires a system that is easy to use, yet also offers the flexibility to handle data from multisensor observing platforms. In the case of the Barrow MBS, the metadata system needs to accommodate the addition of new and retirement of old sensors from year to year as well as instrument configuration changes caused by, for example, spring melt or inquisitive polar bears. We also require ease of use for both administrators and end users. Here we present the data and processing steps of using a sensor data system powered by the NoSQL storage engine, MongoDB. The system has been developed to ingest, process, disseminate and archive data from the Barrow MBS. Storing sensor data in a generalized format, from many different sources, is a challenging task, especially for traditional SQL databases with a set schema. MongoDB is a NoSQL (not only SQL) database that does not require a fixed schema. There are several advantages to using this model over the traditional relational database management system (RDBMS
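
    A minimal sketch of why a schema-less document store suits this platform: each reading is inserted as a self-describing document whose fields can change as sensors are added or retired. The connection string, database, collection and field names below are hypothetical, not GINA's actual schema, and the ice-thickness line is a deliberately simplified derived product.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical connection and document layout; the real system's schema is not
# described in the abstract.
client = MongoClient("mongodb://localhost:27017")
readings = client["barrow_mbs"]["readings"]

readings.insert_one({
    "station": "barrow_mbs_2012",
    "time": datetime.now(timezone.utc),
    "thermistor_string_degC": [-1.6, -1.8, -2.1, -2.4],   # length can change per season
    "air_temp_degC": -12.3,
    "altimeter_up_m": 0.412,        # downward-looking altimeter above the ice
    "altimeter_down_m": 1.237,      # upward-looking altimeter below the ice
})

# Toy derived product: combine the two altimeter surfaces (real processing would
# account for mounting geometry, sound speed, and calibration).
doc = readings.find_one({"station": "barrow_mbs_2012"}, sort=[("time", -1)])
thickness = doc["altimeter_down_m"] + doc["altimeter_up_m"]
```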

  11. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, P.H.; Hunt, A.J.

    1985-09-04

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after alcogels are formed, of substituting a solvent, such as CO2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40°C instead of at about 270°C. The improved process provides increased yields of large scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs, as made in accordance with the improved process, can replace the air gap within a double glazed window, for example, to provide a substantial reduction in heat transfer. The thus formed transparent aerogel arrays may also be utilized, for example, in windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high energy elementary particles or cosmic rays.

  12. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, Param H.; Hunt, Arlon J.

    1986-01-01

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after alcogels are formed, of substituting a solvent, such as CO2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40°C instead of at about 270°C. The improved process provides increased yields of large scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs, as made in accordance with the improved process, can replace the air gap within a double glazed window, for example, to provide a substantial reduction in heat transfer. The thus formed transparent aerogel arrays may also be utilized, for example, in windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high energy elementary particles or cosmic rays.

  13. Semiotic foundation for multisensor-multilook fusion

    NASA Astrophysics Data System (ADS)

    Myler, Harley R.

    1998-07-01

    This paper explores the concept of an application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative methods as opposed to quantitative ones. The term semiotic refers to signs, or signatory data that encapsulate information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of the signs into meaningful interpretations of the information content of the source. The multisensor fusion problem predicated on a semiotic system structure and incorporating semiotic analysis techniques is explored, and the design for a multisensor system as an information fusion system is presented. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and discussion on how a semiotic system for multisensor fusion could be realized is outlined. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not as yet been constructed.

  14. Multichannel/Multisensor Signal Processing In Uncertain Environments With Application To Multitarget Tracking.

    DTIC Science & Technology

    1998-05-22

    ...impulse response estimation for MIMO channels using a Godard cost function," IEEE Trans. Signal Processing, vol. SP-45, pp. 268-271, Jan. 1997. [3] Special Issue, IEEE Transactions on Signal Processing. ...(excluding z = 0), then finite length inverse filters suffice. For an analysis and further elaborations, see [22] and [16] where a Godard cost function is

  15. Photorefractive processing for large adaptive phased arrays

    NASA Astrophysics Data System (ADS)

    Weverka, Robert T.; Wagner, Kelvin; Sarto, Anthony

    1996-03-01

    An adaptive null-steering phased-array optical processor that utilizes a photorefractive crystal to time-integrate the adaptive weights and null out correlated jammers is described. This is a beam-steering processor in which the temporal waveform of the desired signal is known but the look direction is not. The processor computes the angle(s) of arrival of the desired signal and steers the array to look in that direction while rotating the nulls of the antenna pattern toward any narrow-band jammers that may be present. We have experimentally demonstrated a simplified version of this adaptive phased-array-radar processor that nulls out the narrow-band jammers by using feedback-correlation detection. In this processor it is assumed that we know a priori only that the signal is broadband and the jammers are narrow band. These are examples of a class of optical processors that use the angular selectivity of volume holograms to form the nulls and look directions in an adaptive phased-array-radar pattern and thereby to harness the computational abilities of three-dimensional parallelism in the volume of photorefractive crystals. The development of this processing in a volume holographic system has led to a new algorithm for phased-array-radar processing that uses fewer tapped-delay lines than does the classic time-domain beam former. The optical implementation of the new algorithm has the further advantage of utilization of a single photorefractive crystal to implement as many as a million adaptive weights, allowing the radar system to scale to large size with no increase in processing hardware.
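
    A discrete-time, digital analogue of the adaptive weight loop is the complex LMS update, in which the error against the known desired waveform is correlated with the array inputs and integrated into the weights, much as the photorefractive crystal integrates the weights optically. The sketch below is a generic LMS null-steerer, not the optical processor itself; the array size, jammer scenario and step size are arbitrary.

```python
import numpy as np

def lms_null_steer(X, d, mu=1e-4):
    """Complex LMS weight adaptation.  X holds array snapshots (n_sensors x n_samples),
    d the known desired temporal waveform; the error-input correlation is accumulated
    into the weights over time."""
    n_sensors, n_samples = X.shape
    w = np.zeros(n_sensors, dtype=complex)
    for k in range(n_samples):
        y = np.vdot(w, X[:, k])            # array output with current weights
        e = d[k] - y                       # error against the known waveform
        w += mu * np.conj(e) * X[:, k]     # correlate error with inputs (time integration)
    return w

# Example: 8-element array, desired broadband signal from 0 deg, strong narrowband jammer at 40 deg.
rng = np.random.default_rng(6)
n, T = 8, 5000
a0  = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(0)))
a40 = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(40)))
s = rng.normal(size=T)                                 # known broadband waveform
j = 10 * np.exp(2j * np.pi * 0.11 * np.arange(T))      # narrowband jammer
X = np.outer(a0, s) + np.outer(a40, j) + 0.1 * rng.normal(size=(n, T))
w = lms_null_steer(X, s)
print(abs(np.vdot(w, a40)) / abs(np.vdot(w, a0)))      # well below 1: jammer direction nulled
```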

  16. Processing Requirements For Multi-Sensor, Low-Cost Brilliant Munitions

    NASA Astrophysics Data System (ADS)

    Klein, Lawrence A.; Kassis, Suhail Y.

    1989-09-01

    A generic mission for an autonomous fire-and-forget brilliant munition is presented and used to identify the functions that an embedded signal processor must perform. Based on these functions and other operational factors (such as weather, larger search areas, lower false alarm rates, and munition maneuverability), the processing loads in bits/second, messages/second, operations/second, and instructions/second are derived. The paper concludes with an evaluation of general implementation issues, such as the requirements for data fusion, distributed and parallel processing architectures, trusted software, and low-cost hardware.

  17. Processing and Exploitation of Multisensor Optical Data for Coastal Water Applications- The HIGHROC Project

    NASA Astrophysics Data System (ADS)

    Ruddick, Kevin; Brockmann, Carsten; Creach, Veronique; De Keukelaere, Liesbeth; Doxaran, David; Forster, Rodney; Jaccard, Pierre; Knaeps, Els; Leberton, Carole; Ledang, Anna Birgitta; Nechad, Bouchra; Norli, Marit; Nova, Stefani; Ody, Anouck; Pringle, Nicholas; Sorensen, Kai; Stelzer, Kerstin; Van der Zande, Dimitry; Vanhellemont, Quinten

    2016-08-01

    The FP7/HIGHROC ("HIGH spatial and temporal Resolution Ocean Colour") Project is developing the next generation of optical products for coastal water services. These products are based on both mainstream ocean colour sensors (Sentinel-3/OLCI, VIIRS) and other satellite missions such as the meteorological MSG/SEVIRI sensors and the land-oriented Landsat-8 (L8) and Sentinel-2 (S2) missions. The geostationary SEVIRI gives data every 15 minutes, offering much better temporal coverage in partially cloudy periods and the possibility to follow diurnal and tidal processes in cloud-free periods. S2 and L8 offer much better spatial resolution, down to 10m (S2), allowing detection of many human impacts invisible at 300m resolution. HIGHROC R&D includes the development of algorithms, acquisition of in situ measurements and programming of image processing chains. The new products and services will be tested during User Service Trials covering a range of applications including coastal water quality monitoring, Environmental Impact Assessment and sediment transport.

  18. Multisensor Image Analysis System

    DTIC Science & Technology

    1993-04-15

    AD-A263 679. Multisensor Image Analysis System, Final Report. Authors: Dr. G. M. Flachs, Dr. Michael Giles, Dr. Jay Jordan, Dr. Eric ...

  19. Digital processing of array seismic recordings

    USGS Publications Warehouse

    Ryall, Alan; Birtill, John

    1962-01-01

    This technical letter contains a brief review of the operations which are involved in digital processing of array seismic recordings by the methods of velocity filtering, summation, cross-multiplication and integration, and by combinations of these operations (the "UK Method" and multiple correlation). Examples are presented of analyses by the several techniques on array recordings which were obtained by the U.S. Geological Survey during chemical and nuclear explosions in the western United States. Seismograms are synthesized using actual noise and Pn-signal recordings, such that the signal-to-noise ratio, onset time and velocity of the signal are predetermined for the synthetic record. These records are then analyzed by summation, cross-multiplication, multiple correlation and the UK technique, and the results are compared. For all of the examples presented, analysis by the non-linear techniques of multiple correlation and cross-multiplication of the traces on an array recording are preferred to analyses by the linear operations involved in summation and the UK Method.
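
    The linear and non-linear operations compared in the letter can be sketched as follows; the pairwise-product form of the cross-multiplication is a simplified stand-in for the exact procedures applied to the original recordings.

```python
import numpy as np

def align(traces, delays, fs):
    """Shift each array trace by its moveout delay (np.roll wraps around the ends,
    which is acceptable for a short sketch)."""
    return np.array([np.roll(tr, -int(round(d * fs))) for tr, d in zip(traces, delays)])

def beam_sum(traces, delays, fs):
    """Linear operation: delay-and-sum (the 'summation' of the letter)."""
    return align(traces, delays, fs).mean(axis=0)

def cross_multiply(traces, delays, fs):
    """Simplified stand-in for the non-linear cross-multiplication: average the
    pairwise products of the aligned traces, which rewards energy coherent across
    the array and suppresses incoherent noise more strongly than a plain sum."""
    a = align(traces, delays, fs)
    n = len(a)
    prods = [a[i] * a[j] for i in range(n) for j in range(i + 1, n)]
    return np.mean(prods, axis=0)
```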

  20. Design, processing, and testing of LSI arrays for space station

    NASA Technical Reports Server (NTRS)

    Ipri, A. C.

    1976-01-01

    The applicability of a particular process for the fabrication of large scale integrated circuits is described. Test arrays were designed, built, and tested, and then utilized. A set of optimum dimensions for LSI arrays was generated. The arrays were applied to yield improvement through process innovation, and additional applications were suggested in the areas of yield prediction, yield modeling, and process reliability.

  1. Gallium arsenide processing for gate array logic

    NASA Technical Reports Server (NTRS)

    Cole, Eric D.

    1989-01-01

    The development of a reliable and reproducible GaAs process was initiated for applications in gate array logic. Gallium arsenide is an extremely important material for high speed electronic applications in both digital and analog circuits, since its electron mobility is 3 to 5 times that of silicon, which allows for faster switching times in devices fabricated with it. Unfortunately, GaAs is an extremely difficult material to process compared with silicon, and since it contains arsenic it can be quite dangerous (toxic), especially during some heating steps. The first stage of the research was directed at developing a simple process to produce GaAs MESFETs. The MESFET (MEtal Semiconductor Field Effect Transistor) is the most useful, practical and simple active device which can be fabricated in GaAs. It utilizes an ohmic source and drain contact separated by a Schottky gate. The gate width is typically a few microns. Several process steps were required to produce a good working device, including ion implantation, photolithography, thermal annealing, and metal deposition. A process was designed to reduce the total number of steps to a minimum so as to reduce possible errors. The first run produced no good devices. The problem occurred during an aluminum etch step while defining the gate contacts. It was found that the chemical etchant attacked the GaAs, causing trenching and subsequent severing of the active gate region from the rest of the device. Thus all devices appeared as open circuits. This problem is being corrected, and since it was the last step in the process, the correction should be successful. The second planned stage involves the circuit assembly of the discrete MESFETs into logic gates for test and analysis. Finally, the third stage is to incorporate the designed process with the tested circuit in a layout that would produce the gate array as a GaAs integrated circuit.

  2. Gallium arsenide processing for gate array logic

    NASA Astrophysics Data System (ADS)

    Cole, Eric D.

    1989-09-01

    The development of a reliable and reproducible GaAs process was initiated for applications in gate array logic. Gallium arsenide is an extremely important material for high-speed electronic applications in both digital and analog circuits, since its electron mobility is 3 to 5 times that of silicon, which allows faster switching times for devices fabricated with it. Unfortunately, GaAs is an extremely difficult material to process compared with silicon, and because it contains arsenic it can be quite dangerous (toxic), especially during some heating steps. The first stage of the research was directed at developing a simple process to produce GaAs MESFETs. The MESFET (MEtal Semiconductor Field Effect Transistor) is the most useful, practical, and simple active device that can be fabricated in GaAs. It uses ohmic source and drain contacts separated by a Schottky gate; the gate width is typically a few microns. Several process steps were required to produce a good working device, including ion implantation, photolithography, thermal annealing, and metal deposition. A process was designed to reduce the total number of steps to a minimum so as to reduce possible errors. The first run produced no good devices. The problem occurred during an aluminum etch step while defining the gate contacts: the chemical etchant attacked the GaAs, causing trenching and subsequent severing of the active gate region from the rest of the device, so all devices appeared as open circuits. This problem is being corrected, and since it occurred in the last step of the process, the correction should be successful. The second planned stage involves the assembly of the discrete MESFETs into logic-gate circuits for test and analysis. Finally, the third stage is to incorporate the designed process with the tested circuit in a layout that would produce the gate array as a GaAs integrated circuit.

  3. Research on a Defects Detection Method in the Ferrite Phase Shifter Cementing Process Based on a Multi-Sensor Prognostic and Health Management (PHM) System

    PubMed Central

    Wan, Bo; Fu, Guicui; Li, Yanruoyue; Zhao, Youhu

    2016-01-01

    The cementing manufacturing process for ferrite phase shifters suffers from a defect in which cementing strength is insufficient and fractures frequently appear. A method for detecting these defects was studied using multi-sensor Prognostic and Health Management (PHM) theory. The reasons that lead to the defects are analyzed in this paper, the key process parameters were determined, and Differential Scanning Calorimetry (DSC) tests during the cure process of the resin cementing were carried out. In order to obtain data on changing cementing strength, multiple groups of cementing process tests with different key process parameters were designed and conducted. A relational model of cementing strength versus cure temperature, time, and pressure was established by combining the DSC and process-test data on the basis of the Avrami formula. Through a sensitivity analysis of the three process parameters, the on-line detection decision criterion and the process parameters that have an obvious impact on cementing strength were determined. A PHM system with multiple temperature and pressure sensors was established on this basis, and on-line detection, diagnosis, and control of ferrite phase shifter cementing process defects were then realized. Subsequent production verified that the on-line detection system improved the reliability of the ferrite phase shifter cementing process and reduced the incidence of insufficient cementing strength defects. PMID:27517935
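
    The paper's fitted relational model is not given in the abstract, but the Avrami formula it builds on has the familiar form X(t) = 1 - exp(-(kt)^n). The sketch below is a hedged illustration of that kind of model, with an assumed Arrhenius temperature dependence for the rate constant and a simple linear pressure factor (both assumptions, not the authors' form), calibrated against multi-group process-test data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    R = 8.314  # gas constant, J/(mol K)

    def avrami_strength(X, s_max, k0, Ea, n, alpha):
        """Illustrative strength model: Avrami cure kinetics with an Arrhenius
        rate constant and a linear pressure factor.  X = (t [s], T [K], p [MPa]);
        every parameter name here is an assumption, not the paper's fitted form."""
        t, T, p = X
        k = k0 * np.exp(-Ea / (R * T))            # temperature-dependent rate
        cure = 1.0 - np.exp(-(k * t) ** n)        # Avrami degree of cure
        return s_max * cure * (1.0 + alpha * p)

    def fit_strength_model(t_data, T_data, p_data, strength_data):
        """Calibrate the model against multi-group process-test measurements."""
        p0 = [30.0, 1e4, 5.0e4, 2.0, 0.01]        # rough initial guesses
        popt, _ = curve_fit(avrami_strength, (t_data, T_data, p_data),
                            strength_data, p0=p0, maxfev=20000)
        return popt
    ```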

  4. Research on a Defects Detection Method in the Ferrite Phase Shifter Cementing Process Based on a Multi-Sensor Prognostic and Health Management (PHM) System.

    PubMed

    Wan, Bo; Fu, Guicui; Li, Yanruoyue; Zhao, Youhu

    2016-08-10

    The cementing manufacturing process for ferrite phase shifters suffers from a defect in which cementing strength is insufficient and fractures frequently appear. A method for detecting these defects was studied using multi-sensor Prognostic and Health Management (PHM) theory. The reasons that lead to the defects are analyzed in this paper, the key process parameters were determined, and Differential Scanning Calorimetry (DSC) tests during the cure process of the resin cementing were carried out. In order to obtain data on changing cementing strength, multiple groups of cementing process tests with different key process parameters were designed and conducted. A relational model of cementing strength versus cure temperature, time, and pressure was established by combining the DSC and process-test data on the basis of the Avrami formula. Through a sensitivity analysis of the three process parameters, the on-line detection decision criterion and the process parameters that have an obvious impact on cementing strength were determined. A PHM system with multiple temperature and pressure sensors was established on this basis, and on-line detection, diagnosis, and control of ferrite phase shifter cementing process defects were then realized. Subsequent production verified that the on-line detection system improved the reliability of the ferrite phase shifter cementing process and reduced the incidence of insufficient cementing strength defects.

  5. Hierarchical Robot Control In A Multisensor Environment

    NASA Astrophysics Data System (ADS)

    Bhanu, Bir; Thune, Nils; Lee, Jih Kun; Thune, Mari

    1987-03-01

    Automatic recognition, inspection, manipulation and assembly of objects will be a common denominator in most of tomorrow's highly automated factories. These tasks will be handled by intelligent computer controlled robots with multisensor capabilities which contribute to the desired flexibility and adaptability. The control of a robot in such a multisensor environment becomes of crucial importance as the complexity of the problem grows exponentially with the number of sensors, tasks, commands and objects. In this paper we present an approach which uses CAD (Computer-Aided Design) based geometric and functional models of objects together with action oriented neuroschemas to recognize and manipulate objects by a robot in a multisensor environment. The hierarchical robot control system is being implemented on a BBN Butterfly multiprocessor. Index terms: CAD, Hierarchical Control, Hypothesis Generation and Verification, Parallel Processing, Schemas

  6. Array signal processing in the NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we will describe the benefits of arraying and past as well as expected future use of this application. The signal processing aspects of the array system are described. Field measurements from tracking actual spacecraft are also presented.

  7. An algorithm for signal processing in multibeam antenna arrays

    NASA Astrophysics Data System (ADS)

    Danilevskii, L. N.; Domanov, Iu. A.; Korobko, O. V.

    1980-09-01

    A signal processing method for multibeam antenna arrays is presented which can be used to effectively reduce discrete-phasing sidelobes. Calculations of an 11-element array are presented as an example.

  8. The influence of the arrangements of multi-sensor probe arrays on the accuracy of simultaneously measured velocity and velocity gradient-based statistics in turbulent shear flows

    NASA Astrophysics Data System (ADS)

    Vukoslavčević, P. V.; Wallace, J. M.

    2013-06-01

    A highly resolved turbulent channel flow direct numerical simulation (DNS) with Re_τ = 200 has been used to investigate the influence of the arrangements of the arrays (array configurations) within the sensing area of a multi-array hot-wire probe on the measurement accuracy of velocity and velocity gradient-based statistics. To eliminate all effects related to the sensor response and array characteristics (such as sensor dimensions, overheat ratio, thermal cross talk, number and orientations of the sensors and uniqueness range) so that this study could be focused solely on the effects of the array configurations (positions and separations), a concept of a perfect array was introduced, that is, one that can exactly and simultaneously measure all three velocity components at its center. The velocity component values, measured by these perfect arrays, are simply the DNS values computed at these points. Using these velocity components, the velocity and velocity gradient-based statistics were calculated assuming a linear velocity variation over the probes' sensing areas. The calculated values are compared to the DNS values for various array arrangements to study the influence of these arrangements on the measurement accuracy. Typical array configurations that previously have been used for physical probes were tested. It is demonstrated that the array arrangements strongly influence the accuracy of some of the velocity and velocity gradient-based statistics and that no single configuration exists, for a given spatial resolution, which gives the best accuracy for all of the statistics characterizing a turbulent shear flow.
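
    The gradient-estimation step described above, a linear velocity variation assumed over the probe's sensing area, reduces to a small least-squares fit. A minimal sketch follows, with hypothetical inputs standing in for the "perfect array" velocities sampled from the DNS.

    ```python
    import numpy as np

    def velocity_gradient(positions, velocities):
        """Estimate the velocity-gradient tensor dU_i/dx_j at the probe centre by
        least-squares fitting a linear velocity variation over the sensing area.
        positions:  (n_arrays, 3) array positions relative to the probe centre
        velocities: (n_arrays, 3) velocity components sampled at each array"""
        A = np.hstack([np.ones((positions.shape[0], 1)), positions])  # [1, x, y, z]
        coeff, *_ = np.linalg.lstsq(A, velocities, rcond=None)        # shape (4, 3)
        u_centre = coeff[0]        # velocity at the probe centre
        grad = coeff[1:].T         # grad[i, j] = dU_i / dx_j
        return u_centre, grad
    ```

    At least four non-coplanar array positions are needed for the fit to be determined, which is consistent with the multi-array probe geometries discussed above.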

  9. Square Kilometre Array Science Data Processing

    NASA Astrophysics Data System (ADS)

    Nikolic, Bojan; SDP Consortium, SKA

    2014-04-01

    The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further derived data products, archiving and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of the SDP are: identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput; managing efficiently the high internal data flow rates; a conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases; and system management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system. I will also present possible initial architectures for the SDP system that attempt to address these and other challenges.

  10. Synergetic Multisensor Fusion

    DTIC Science & Technology

    1990-11-30

    technology have led to increased interest in using DEMs for navigation and other applications. In particular, DEMs are attractive for use in aircraft...Multisensor Fusion for Computer Vision [67]. 6. POSITIONAL ESTIMATION TECHNIQUES FOR AN OUTDOOR MOBILE ROBOT: The autonomous navigation of mobile robots is

  11. Digital image processing software system using an array processor

    SciTech Connect

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-03-10

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its use in a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table.

  12. Reduced Complexity High Performance Array Processing

    DTIC Science & Technology

    1998-01-01

    constrained beamforming,” IEEE Transactions on Aerospace and Electronic Systems, January 1982. [5] J. Scott Goldstein and Irving S. Reed, “Theory of...partially adaptive radar,” IEEE Transactions on Aerospace and Electronic Systems, October 1997. [6] J. Scott Goldstein, Irving S. Reed, and John A...Computers. [7] Barry D. Van Veen, “Eigenstructure-based partially adaptive array design,” IEEE Transactions on Antennas and Propagation, March 1988. [8] J

  13. Multi-Sensor Inspection Telerobot

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Hayati, S.; Volpe, R.

    1994-01-01

    This paper describes a telerobotic multi-sensor inspection system for space platforms developed at the Jet Propulsion Laboratory. A multi-sensor inspection end-effector incorporates cameras and lighting for visual inspection, as well as temperature and gas leak-detection sensors.

  14. Fabrication of Nanohole Array via Nanodot Array Using Simple Self-Assembly Process of Diblock Copolymer

    NASA Astrophysics Data System (ADS)

    Matsuyama, Tsuyoshi; Kawata, Yoshimasa

    2007-06-01

    We present a simple self-assembly process for fabricating a nanohole array via a nanodot array on a glass substrate by dripping ethanol onto the nanodot array. It is found that well-aligned arrays of nanoholes as well as nanodots are formed on the whole surface of the glass. A dot is transformed into a hole, and the alignment of the nanodots strongly reflects that of the nanoholes. We find that the change in the depth of holes agrees well with the change in the surface energy with the ethanol concentration in the aqueous solution. We believe that the interfacial energy between the nanodots and the dripped ethanol causes the transformation from nanodots into nanoholes. The nanohole arrays are directly applicable to molds for nanopatterned media used in high-density near-field optical data storage. The bit data can be stored and read out using probes with small apertures.

  15. Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T; Gibbons, S J; Ringdal, F; Harris, D B

    2007-02-09

    The principal objective of this two-year study is to develop and test a new advanced, automatic approach to seismic detection/location using array processing. We address a strategy to obtain significantly improved precision in the location of low-magnitude events compared with current fully-automatic approaches, combined with a low false alarm rate. We have developed and evaluated a prototype automatic system which uses as a basis regional array processing with fixed, carefully calibrated, site-specific parameters in conjunction with improved automatic phase onset time estimation. We have in parallel developed tools for Matched Field Processing for optimized detection and source-region identification of seismic signals. This narrow-band procedure aims to mitigate some of the causes of difficulty encountered using the standard array processing system, specifically complicated source-time histories of seismic events and shortcomings in the plane-wave approximation for seismic phase arrivals at regional arrays.

  16. Performance Comparison of Superresolution Array Processing Algorithms. Revised

    DTIC Science & Technology

    2007-11-02

    OF SUPERRESOLUTION ARRAY PROCESSING ALGORITHMS A.J. BARABELL J. CAPON D.F. DeLONG K.D. SENNE Group 44 J.R. JOHNSON Group 96 PROJECT REPORT...adaptive superresolution direction finding and spatial nulling to support signal copy in the presence of strong cochannel interference. The need for such... superresolution array processing have their origin in spectral estimation for time series. Since the sampling of a function in time is analogous to

  17. Multi-Sensor Distributive On-line Processing, Visualization, and Analysis Infrastructure for an Agricultural Information System at the NASA Goddard Earth Sciences DAAC

    NASA Astrophysics Data System (ADS)

    Teng, W.; Berrick, S.; Leptoukh, G.; Liu, Z.; Rui, H.; Pham, L.; Shen, S.; Zhu, T.

    2004-12-01

    The Goddard Space Flight Center Earth Sciences Data and Information Services Center (GES DISC) Distributed Active Archive Center (DAAC) is developing an Agricultural Information System (AIS), evolved from an existing TRMM Online Visualization and Analysis System (TOVAS), which will operationally provide precipitation and other satellite data products and services. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as that of the U.N. World Food Program. The ability to use the raw data stored in the GES DAAC archives is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. To gain this understanding is a time-consuming process and not a productive investment of the user's time. This is an especially difficult challenge when users need to deal with multi-sensor data that usually are of different structures and resolutions. The AIS has taken a major step towards meeting this challenge by incorporating an underlying infrastructure, called the GES-DISC Interactive Online Visualization and Analysis Infrastructure or "Giovanni," that integrates various components to support web interfaces that allow users to perform interactive analysis on-line without downloading any data. Several instances of the Giovanni-based interface have been or are being created to serve users of TRMM precipitation, MODIS aerosol, and SeaWiFS ocean color data, as well as agricultural applications users. Giovanni-based interfaces are simple to use but powerful. The user selects geophysical parameters, area of interest, and time period; and the system generates an output on screen in a matter of seconds. The currently available output options are (1) area plot - averaged or accumulated over any available data period for any rectangular area; (2) time plot - time series averaged over any rectangular area; (3) Hovmoller plots - longitude-time and latitude-time plots; (4) ASCII
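
    As an illustration of the kind of server-side computation behind the "area plot" and "time plot" options, the sketch below averages a gridded parameter over a user-selected rectangular region at each time step. The function and its arguments are hypothetical and are not Giovanni's actual interface.

    ```python
    import numpy as np

    def area_average_time_series(data, lats, lons, lat_bounds, lon_bounds):
        """Average a gridded field over a rectangular region at each time step.
        data: (n_time, n_lat, n_lon); lat_bounds / lon_bounds: (min, max) degrees.
        Grid cells are weighted by cos(latitude) to account for their varying area."""
        lat_mask = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
        lon_mask = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
        sub = data[:, lat_mask][:, :, lon_mask]
        w = np.cos(np.deg2rad(lats[lat_mask]))
        weights = np.broadcast_to(w[:, None], sub.shape[1:])
        return (sub * weights).sum(axis=(1, 2)) / weights.sum()
    ```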

  18. Multi-Sensor Distributive On-Line Processing, Visualization, and Analysis Infrastructure for an Agricultural Information System at the NASA Goddard Earth Sciences DAAC

    NASA Technical Reports Server (NTRS)

    Teng, William; Berrick, Steve; Leptuokh, Gregory; Liu, Zhong; Rui, Hualan; Pham, Long; Shen, Suhung; Zhu, Tong

    2004-01-01

    The Goddard Space Flight Center Earth Sciences Data and Information Services Center (GES DISC) Distributed Active Archive Center (DAAC) is developing an Agricultural Information System (AIS), evolved from an existing TRMM On-line Visualization and Analysis System (TOVAS), which will operationally provide precipitation and other satellite data products and services. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as that of the U.N. World Food Program. The ability to use the raw data stored in the GES DAAC archives is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. To gain this understanding is a time-consuming process and not a productive investment of the user's time. This is an especially difficult challenge when users need to deal with multi-sensor data that usually are of different structures and resolutions. The AIS has taken a major step towards meeting this challenge by incorporating an underlying infrastructure, called the GES-DISC Interactive Online Visualization and Analysis Infrastructure or "Giovanni," that integrates various components to support web interfaces that allow users to perform interactive analysis on-line without downloading any data. Several instances of the Giovanni-based interface have been or are being created to serve users of TRMM precipitation, MODIS aerosol, and SeaWiFS ocean color data, as well as agricultural applications users. Giovanni-based interfaces are simple to use but powerful. The user selects geophysical parameters, area of interest, and time period; and the system generates an output on screen in a matter of seconds.

  19. A smart multisensor approach to assist blind people in specific urban navigation tasks.

    PubMed

    Ando, B

    2008-12-01

    Visually impaired people are often discouraged in using electronic aids due to complexity of operation, large amount of training, nonoptimized degree of information provided to the user, and high cost. In this paper, a new multisensor architecture is discussed, which would help blind people to perform urban mobility tasks. The device is based on a multisensor strategy and adopts smart signal processing.

  20. Sonar array processing borrows from geophysics

    SciTech Connect

    Chen, K.

    1989-09-01

    The author reports a recent advance in sonar signal processing that has potential military application. It improves signal extraction by modifying a technique devised by a geophysicist. Sonar signal processing is used to track submarine and surface targets, such as aircraft carriers, oil tankers, and, in commercial applications, schools of fish or sunken treasure. Similar signal-processing techniques help radio astronomers track galaxies, physicians see images of the body interior, and geophysicists map the ocean floor or find oil. This hybrid technique, applied in an experimental system, can help resolve strong signals as well as weak ones in the same step.

  1. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  2. Removing Background Noise with Phased Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
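
    The combination of beamforming with cross-spectral-matrix subtraction can be illustrated compactly. The sketch below is a plain conventional frequency-domain beamformer, not Optinav's Functional Beamforming, and the steering vectors for the scan grid are assumed to be supplied.

    ```python
    import numpy as np

    def csm(block_ffts):
        """Cross-spectral matrix at one frequency from (n_blocks, n_mics) FFT bins."""
        return block_ffts.conj().T @ block_ffts / block_ffts.shape[0]

    def beamform_power(C_total, C_background, steering):
        """Conventional beamformer map with background CSM subtraction.
        steering: (n_points, n_mics) steering vectors for the scan grid."""
        C = C_total - C_background                          # remove background noise
        g = steering / np.linalg.norm(steering, axis=1, keepdims=True)
        return np.real(np.einsum('pm,mn,pn->p', g.conj(), C, g))
    ```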

  3. Underwater (UW) Unexploded Ordnance (UXO) Multi-Sensor Data Base (MSDB) Collection

    DTIC Science & Technology

    2009-07-01

    Andrews Bay, FL. Phase I tests demonstrated that the acoustic (Buried Object Scanning Sonar) and EM sensors (Realtime Gradiometer and GEM-3 array) could... Acronyms: IMU, Inertial Measurement Unit; LSG, Laser Scalar Gradiometer; mm, Millimeter; MSDB, Multi-Sensor Data Base; MTA, Marine Towed Array; MTADS, Multi... Gradiometer; RTK, Real Time Kinematic; SAIC, Science Applications International Corporation; SAS, Synthetic Aperture Sonar; SERDP, Strategic

  4. The Signal Processing Firmware for the Low Frequency Aperture Array

    NASA Astrophysics Data System (ADS)

    Comoretto, Gianni; Chiello, Riccardo; Roberts, Matt; Halsall, Rob; Adami, Kristian Zarb; Alderighi, Monica; Aminaei, Amin; Baker, Jeremy; Belli, Carolina; Chiarucci, Simone; D’Angelo, Sergio; De Marco, Andrea; Mura, Gabriele Dalle; Magro, Alessio; Mattana, Andrea; Monari, Jader; Naldi, Giovanni; Pastore, Sandro; Perini, Federico; Poloni, Marco; Pupillo, Giuseppe; Rusticelli, Simone; Schiaffino, Marco; Schillirò, Francesco; Zaccaro, Emanuele

    The signal processing firmware that has been developed for the Low Frequency Aperture Array component of the Square Kilometre Array (SKA) is described. The firmware is implemented on a dual FPGA board that is capable of processing the streams from 16 dual-polarization antennas. Data processing includes channelization of the sampled data for each antenna, correction for instrumental response and for geometric delays, and formation of one or more beams by combining the aligned streams. The channelizer uses an oversampling polyphase filterbank architecture, allowing frequency-continuous processing of the input signal without discontinuities between spectral channels. Each board processes the streams from 16 antennas as part of a larger beamforming system, linked by standard Ethernet interconnections. There are envisaged to be 8192 of these signal processing platforms in the first phase of the SKA, so particular attention has been devoted to ensuring that the design is low cost and low power.
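
    For readers unfamiliar with the filterbank structure, the sketch below shows a critically sampled weighted-overlap-add (polyphase) channelizer in plain Python with an ad hoc prototype filter. The firmware described above uses an oversampled variant implemented in FPGA logic, so this is only the basic form of the algorithm.

    ```python
    import numpy as np

    def polyphase_channelizer(x, n_chan, n_taps):
        """Critically sampled polyphase filterbank (WOLA) channelizer.
        Returns (n_frames, n_chan) complex channel samples for input x."""
        h = np.sinc(np.arange(n_chan * n_taps) / n_chan - n_taps / 2)
        h *= np.hanning(h.size)                       # simple prototype low-pass
        n_frames = (x.size - h.size) // n_chan
        out = np.empty((n_frames, n_chan), dtype=complex)
        for i in range(n_frames):
            seg = x[i * n_chan : i * n_chan + h.size] * h
            folded = seg.reshape(n_taps, n_chan).sum(axis=0)   # polyphase fold
            out[i] = np.fft.fft(folded)
        return out
    ```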

  5. Dimpled ball grid array process development for space flight applications

    NASA Technical Reports Server (NTRS)

    Barr, S. L.; Mehta, A.

    2000-01-01

    A 472 dimpled ball grid array (D-BGA) package has not been used in past space flight environments; therefore, it was necessary to develop a process that would yield robust and reliable solder joints. The process, comprising assembly, inspection, and rework techniques, was verified by conducting environmental tests. Since the 472 D-BGA packages passed the above environmental tests within the specifications, the process was successfully developed for space flight electronics.

  6. True-Time-Delay Adaptive Array Processing Using Photorefractive Crystals

    NASA Astrophysics Data System (ADS)

    Kriehn, G. R.; Wagner, K.

    Radio frequency (RF) signal processing has proven to be a fertile application area for photorefractive-based optical processing techniques. This is due to a photorefractive material's capability to record gratings and to diffract optically modulated beams containing a wide RF bandwidth off these gratings; applications include the bias-free time-integrating correlator [1], adaptive signal processing, and jammer excision [2, 3, 4]. Photorefractive processing of signals from RF antenna arrays is especially appropriate because of the massive parallelism that is readily achievable in a photorefractive crystal (in which many resolvable beams can be incident on a single crystal simultaneously, each coming from an optical modulator driven by a separate RF antenna element), and because a number of approaches for adaptive array processing using photorefractive crystals have been successfully investigated [5, 6]. In these types of applications, the adaptive weight coefficients are represented by the amplitude and phase of the holographic gratings, and many millions of such adaptive weights can be multiplexed within the volume of a photorefractive crystal. RF-modulated optical signals from each array element are diffracted from the adaptively recorded photorefractive gratings (which can be multiplexed either angularly or spatially) and are then coherently combined with the appropriate amplitude weights and phase shifts to effectively steer the angular receptivity pattern of the antenna array toward the desired arriving signal. Likewise, the antenna nulls can also be rotated toward unwanted narrowband jammers for extinction, thereby optimizing the signal-to-interference-plus-noise ratio.
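
    The weight computation that the photorefractive hardware realises optically has a standard digital counterpart. As a point of reference only, and not as the optical processor's algorithm, the minimum-variance distortionless-response (MVDR) weights below pass a desired steering vector with unit gain while minimising interference-plus-noise power, which is the kind of adaptive steering and nulling described above.

    ```python
    import numpy as np

    def mvdr_weights(R, a):
        """MVDR weights: unit gain toward steering vector a, minimum output power
        for the interference-plus-noise covariance R."""
        Ri_a = np.linalg.solve(R, a)
        return Ri_a / (a.conj() @ Ri_a)

    def steering_vector(n_elem, spacing_wl, theta_rad):
        """Uniform linear array steering vector; element spacing in wavelengths."""
        n = np.arange(n_elem)
        return np.exp(2j * np.pi * spacing_wl * n * np.sin(theta_rad))
    ```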

  7. Multisensor configurations for early sniper detection

    NASA Astrophysics Data System (ADS)

    Lindgren, D.; Bank, D.; Carlsson, L.; Dulski, R.; Duval, Y.; Fournier, G.; Grasser, R.; Habberstad, H.; Jacquelard, C.; Kastek, M.; Otterlei, R.; Piau, G.-P.; Pierre, F.; Renhorn, I.; Sjöqvist, L.; Steinvall, O.; Trzaskawka, P.

    2011-11-01

    This contribution reports some of the fusion results from the EDA SNIPOD project, where different multisensor configurations for sniper detection and localization have been studied. A project aim has been to cover the whole time line from sniper transport and establishment to shot. To do so, different optical sensors with and without laser illumination have been tested, as well as acoustic arrays and solid state projectile radar. A sensor fusion node collects detections and background statistics from all sensors and employs hypothesis testing and multisensor estimation programs to produce unified and reliable sniper alarms and accurate sniper localizations. Operator interfaces that connect to the fusion node should be able to support both sniper countermeasures and the guidance of personnel to safety. Although the integrated platform has not been actually built, sensors have been evaluated at common field trials with military ammunitions in the caliber range 5.56 to 12.7 mm, and at sniper distances up to 900 m. It is concluded that integrating complementary sensors for pre- and postshot sniper detection in a common system with automatic detection and fusion will give superior performance, compared to stand alone sensors. A practical system is most likely designed with a cost effective subset of available complementary sensors.

  8. IN-SITU IONIC CHEMICAL ANALYSIS OF FRESH WATER VIA A NOVEL COMBINED MULTI-SENSOR / SIGNAL PROCESSING ARCHITECTURE

    NASA Astrophysics Data System (ADS)

    Mueller, A. V.; Hemond, H.

    2009-12-01

    The capability for comprehensive, real-time, in-situ characterization of the chemical constituents of natural waters is a powerful tool for the advancement of the ecological and geochemical sciences, e.g. by facilitating rapid high-resolution adaptive sampling campaigns and avoiding the potential errors and high costs related to traditional grab sample collection, transportation and analysis. Portable field-ready instrumentation also promotes the goals of large-scale monitoring networks, such as CUAHSI and WATERS, without the financial and human resources overhead required for traditional sampling at this scale. Problems of environmental remediation and monitoring of industrial waste waters would additionally benefit from such instrumental capacity. In-situ measurement of all major ions contributing to the charge makeup of natural fresh water is thus pursued via a combined multi-sensor/multivariate signal processing architecture. The instrument is based primarily on commercial electrochemical sensors, e.g. ion selective electrodes (ISEs) and ion selective field-effect transistors (ISFETs), to promote low cost as well as easy maintenance and reproduction. The system employs a novel architecture of multivariate signal processing to extract accurate information from in-situ data streams via an "unmixing" process that accounts for sensor non-linearities at low concentrations, as well as sensor cross-reactivities. Conductivity, charge neutrality and temperature are applied as additional mathematical constraints on the chemical state of the system. Including such non-ionic information assists in obtaining accurate and useful calibrations even in the non-linear portion of the sensor response curves, and measurements can be made without the traditionally-required standard additions or ionic strength adjustment. Initial work demonstrates the effectiveness of this methodology at predicting inorganic cations (Na+, NH4+, H+, Ca2+, and K+) in a simplified system containing
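
    As a heavily simplified illustration of the constrained "unmixing" idea, and not the instrument's calibration, the toy sketch below recovers a few cation concentrations from ISE potentials assuming ideal Nernstian responses, with charge neutrality against a known total anion charge enforced as a penalty. Every model detail and parameter here is an assumption.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def unmix(E_meas, E0, slope, z, anion_charge):
        """Toy constrained unmixing: estimate cation concentrations (mol/L) from
        ISE potentials E = E0 + slope*log10(c), with a charge-neutrality penalty
        against a known total anion charge (equivalents/L)."""
        def cost(log_c):
            c = 10.0 ** log_c
            resid = E0 + slope * log_c - E_meas          # sensor-model residuals
            imbalance = np.dot(z, c) - anion_charge      # charge-balance residual
            return np.sum(resid**2) + 1e4 * imbalance**2
        x0 = np.full(len(E_meas), -4.0)                  # start near 1e-4 mol/L
        res = minimize(cost, x0, method="Nelder-Mead")
        return 10.0 ** res.x
    ```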

  9. Electro-optical processing of phased-array antenna data

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Casasayas, F.

    1975-01-01

    An on-line two-dimensional optical processor has been used to process simulated linear and planar phased-array radar data off-line but at real-time data rates. The input transducer is an electron-beam-addressed KD2PO4 light valve.

  10. Frequency-wavenumber processing for infrasound distributed arrays.

    PubMed

    Costley, R Daniel; Frazier, W Garth; Dillion, Kevin; Picucci, Jennifer R; Williams, Jay E; McKenna, Mihan H

    2013-10-01

    The work described herein discusses the application of a frequency-wavenumber signal processing technique to signals from rectangular infrasound arrays for detection and estimation of the direction of travel of infrasound. Arrays of 100 sensors were arranged in square configurations with sensor spacing of 2 m. Wind noise data were collected at one site. Synthetic infrasound signals were superposed on top of the wind noise to determine the accuracy and sensitivity of the technique with respect to signal-to-noise ratio. The technique was then applied to an impulsive event recorded at a different site. Preliminary results demonstrated the feasibility of this approach.
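
    A minimal version of the frequency-wavenumber step for a planar array is sketched below: the power map at a single frequency is scanned over a wavenumber grid, and the back azimuth and trace velocity are read from its maximum. The sign convention assumes sensor spectra of the form X_n = S·exp(-i k·r_n), with x east and y north; the inputs are hypothetical.

    ```python
    import numpy as np

    def fk_backazimuth(X, coords, freq, c_min=250.0, n_grid=101):
        """Frequency-wavenumber power map at one frequency for a planar array.
        X: (n_sensors,) complex spectra at 'freq'; coords: (n_sensors, 2) x,y in m.
        Scans wavenumbers up to 2*pi*freq/c_min; returns back azimuth (deg from
        north), trace velocity (m/s), and the power map."""
        k_max = 2 * np.pi * freq / c_min
        kx = ky = np.linspace(-k_max, k_max, n_grid)
        KX, KY = np.meshgrid(kx, ky)
        steer = np.exp(1j * (coords[:, 0, None, None] * KX
                             + coords[:, 1, None, None] * KY))
        power = np.abs((X[:, None, None] * steer).sum(axis=0)) ** 2
        iy, ix = np.unravel_index(np.argmax(power), power.shape)
        kx0, ky0 = KX[iy, ix], KY[iy, ix]
        backaz = np.degrees(np.arctan2(-kx0, -ky0)) % 360.0   # arrival direction
        return backaz, 2 * np.pi * freq / np.hypot(kx0, ky0), power
    ```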

  11. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events, both through a lack of contributing phase detections and through corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea

  12. Application of Seismic Array Processing to Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Meng, L.; Allen, R. M.; Ampuero, J. P.

    2013-12-01

    Earthquake early warning (EEW) systems that can issue warnings prior to the arrival of strong ground shaking during an earthquake are essential in mitigating seismic hazard. Many of the currently operating EEW systems work on the basis of empirical magnitude-amplitude/frequency scaling relations for a point source. This approach is of limited effectiveness for large events, such as the 2011 Tohoku-Oki earthquake, for which ignoring finite source effects may result in underestimation of the magnitude. Here, we explore the concept of characterizing rupture dimensions in real time for EEW using clusters of dense low-cost accelerometers located near active faults. Back tracing the waveforms recorded by such arrays allows the estimation of the earthquake rupture size, duration and directivity in real-time, which enables the EEW of M > 7 earthquakes. The concept is demonstrated with the 2004 Parkfield earthquake, one of the few big events (M>6) that have been recorded by a local small-scale seismic array (UPSAR array, Fletcher et al, 2006). We first test the approach against synthetic rupture scenarios constructed by superposition of empirical Green's functions. We find it important to correct for the bias in back azimuth induced by dipping structures beneath the array. We applied the proposed methodology to the mainshock in a simulated real-time environment. After calibrating the dipping-layer effect with data from smaller events, we obtained an estimated rupture length of 9 km, consistent with the distance between the two main high frequency subevents identified by back-projection using all local stations (Allman and Shearer, 2007). We propose to deploy small-scale arrays every 30 km along the San Andreas Fault. The array processing is performed in local processing centers at each array. The output is compared with finite fault solutions based on the real-time GPS system and then incorporated into the standard ElarmS system. The optimal aperture and array geometry is

  13. Vehicle passes detector based on multi-sensor analysis

    NASA Astrophysics Data System (ADS)

    Bocharov, D.; Sidorchuk, D.; Konovalenko, I.; Koptelov, I.

    2015-02-01

    This study deals with a new approach to the problem of detecting vehicle passes in a vision-based automatic vehicle classification system. Essential non-affinity image variations and signals from an induction loop are the events that can be considered as indicators of an object's presence. We propose several vehicle detection techniques based on image processing and induction loop signal analysis. We also suggest a combined method based on multi-sensor analysis to improve vehicle detection performance. Experimental results in complex outdoor environments show that the proposed multi-sensor algorithm is effective for vehicle detection.

  14. Breath alcohol, multisensor arrays, and electronic noses

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils; Winquist, Fredrik

    1997-01-01

    The concept behind a volatile compound mapper, or electronic nose, is to use the combination of multiple gas sensors and pattern recognition techniques to detect and quantify substances in gas mixtures. Several different kinds of sensors have been developed in recent years, of which the base techniques are conducting polymers, piezoelectric crystals, and solid-state devices. In this work we have used a combination of gas sensitive field effect devices and semiconducting metal oxides. The most useful pattern recognition routine was found to be ANNs, which are a mathematical approximation of the human neural network. The aim of this work is to evaluate the possibility of using electronic noses in field instruments to detect drugs, arson residues, explosives etc. As a test application we have chosen breath alcohol measurements. There are several reasons for this. Breath samples are a quite complex mixture containing between 200 and 300 substances at trace levels. The alcohol level is low but still possible to handle. There is a need to replace large and heavy mobile instruments with smaller devices. Current instrumentation is rather sensitive to interfering substances. The work so far has dealt with sampling, how to introduce ethanol and other substances into the breath, correlation measurements between the electronic nose and headspace GC, and how to evaluate the sensor signals.

  15. Multisensor Fire Observations

    NASA Technical Reports Server (NTRS)

    Boquist, C.

    2004-01-01

    This DVD includes animations of multisensor fire observations from the following satellite sources: Landsat, GOES, TOMS, Terra, QuikSCAT, and TRMM. Some of the animations are included in multiple versions of a short video presentation on the DVD which focuses on the Hayman, Rodeo-Chediski, and Biscuit fires during the 2002 North American fire season. In one version of the presentation, MODIS, TRMM, GOES, and QuikSCAT data are incorporated into the animations of these wildfires. These data products provided rain, wind, cloud, and aerosol data on the fires, and monitored the smoke and destruction created by them. Another presentation on the DVD consists of a panel discussion, in which experts from academia, NASA, and the U.S. Forest Service answer questions on the role of NASA in fighting forest fires, the role of the Terra satellite and its instruments, including the Moderate Resolution Imaging Spectroradiometer (MODIS), in fire fighting decision making, and the role of fire in the Earth's climate. The third section of the DVD features several animations of fires over the years 2001-2003, including animations of global and North American fires, and specific fires from 2003 in California, Washington, Montana, and Arizona.

  16. Multisensor Data Integration Techniques

    NASA Technical Reports Server (NTRS)

    Evans, D. L.; Blake, P. L.; Conel, J. E.; Lang, H. R.; Logan, T. L.; Mcguffie, B. A.; Paylor, E. D.; Singer, R. B.; Schenck, L. R.

    1985-01-01

    The availability of data from sensors operating in several different wavelength regions has led to the development of new techniques and strategies for both data management and image analysis. Work is ongoing to develop computer techniques for analysis of integrated data sets. These techniques include coregistration of multisensor images, rectification of radar images in areas of topographic relief to ensure pixel-to-pixel registration with planimetric data sets, calibration of data so that signatures can be applied to remote areas, normalization of data acquired with disparate sensors, and determination of extended spectral signatures of surface units. In addition, software is being developed to analyze coregistered digital terrain and image data so that automated stratigraphic and structural analyses can be performed. These software procedures include: strike and dip determination, terrain profile generation, stratigraphic column generation, stratigraphic thickness measurements, structural cross-section generation, and creation of 3-D block diagrams. These techniques were applied to coregistered LANDSAT 4 Thematic Mapper (TM), Thermal Infrared Multispectral Scanner (TIMS), and multipolarization synthetic aperture radar (SAR) data of the Wind River Basin in Wyoming.

  17. Multi-sensor analysis of urban ecosystems

    USGS Publications Warehouse

    Gallo, K.; Ji, L.

    2004-01-01

    This study examines the synthesis of multiple space-based sensors to characterize the urban environment. Single scene data (e.g., ASTER visible and near-IR surface reflectance, and land surface temperature data), multi-temporal data (e.g., one year of 16-day MODIS and AVHRR vegetation index data), and DMSP-OLS nighttime light data acquired in the early 1990s and 2000 were evaluated for urban ecosystem analysis. The advantages of a multi-sensor approach for the analysis of urban ecosystem processes are discussed.

  18. Navigation in Difficult Environments: Multi-Sensor Fusion Techniques

    DTIC Science & Technology

    2010-03-01

    data are applied to improve the robustness of secondary sensors' signal processing. Applications of the multi-sensor fusion approach are illustrated...algorithms. 1.0 MOTIVATION: Many existing and prospective applications of navigation systems would benefit notably from the ability to navigate...accurately and reliably in difficult environments. Examples of difficult navigation scenarios include urban canyons, indoor applications, radio

  19. Enhancement of data analysis through multisensor data fusion technology

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper focuses on application of multisensor data fusion for high quality data analysis and processing in measurement and instrumentation. A practical, general data fusion scheme is established on the basis of feature extraction and merging of data from multiple sensors. This scheme integrates...

  20. The Multi-sensor Airborne Radiation Survey (MARS) Instrument

    SciTech Connect

    Fast, James E.; Aalseth, Craig E.; Asner, David M.; Bonebrake, Christopher A.; Day, Anthony R.; Dorow, Kevin E.; Fuller, Erin S.; Glasgow, Brian D.; Hossbach, Todd W.; Hyronimus, Brian J.; Jensen, Jeffrey L.; Johnson, Kenneth I.; Jordan, David V.; Morgen, Gerald P.; Morris, Scott J.; Mullen, O Dennis; Myers, Allan W.; Pitts, W. Karl; Rohrer, John S.; Runkle, Robert C.; Seifert, Allen; Shergur, Jason M.; Stave, Sean C.; Tatishvili, Gocha; Thompson, Robert C.; Todd, Lindsay C.; Warren, Glen A.; Willett, Jesse A.; Wood, Lynn S.

    2013-01-11

    The Multi-sensor Airborne Radiation Survey (MARS) project has developed a new single cryostat detector array design for high purity germanium (HPGe) gamma ray spectrometers that achieves the high detection efficiency required for stand-off detection and actionable characterization of radiological threats. This approach, we found, is necessary since a high efficiency HPGe detector can only be built as an array due to limitations in growing large germanium crystals. Moreover, the system is ruggedized and shock mounted for use in a variety of field applications, including airborne and maritime operations.

  1. The Multi-sensor Airborne Radiation Survey (MARS) instrument

    NASA Astrophysics Data System (ADS)

    Fast, J. E.; Aalseth, C. E.; Asner, D. M.; Bonebrake, C. A.; Day, A. R.; Dorow, K. E.; Fuller, E. S.; Glasgow, B. D.; Hossbach, T. W.; Hyronimus, B. J.; Jensen, J. L.; Johnson, K. I.; Jordan, D. V.; Morgen, G. P.; Morris, S. J.; Mullen, O. D.; Myers, A. W.; Pitts, W. K.; Rohrer, J. S.; Runkle, R. C.; Seifert, A.; Shergur, J. M.; Stave, S. C.; Tatishvili, G.; Thompson, R. C.; Todd, L. C.; Warren, G. A.; Willett, J. A.; Wood, L. S.

    2013-01-01

    The Multi-sensor Airborne Radiation Survey (MARS) project has developed a new single cryostat detector array design for high purity germanium (HPGe) gamma ray spectrometers that achieves the high detection efficiency required for stand-off detection and actionable characterization of radiological threats. This approach is necessary since a high efficiency HPGe detector can only be built as an array due to limitations in growing large germanium crystals. The system is ruggedized and shock mounted for use in a variety of field applications, including airborne and maritime operations.

  2. Signal processing on the focal plane array: an overview

    NASA Astrophysics Data System (ADS)

    Graham, Roger W.; Trautfield, Walter C.; Taylor, Scott M.; Murray, Mark P.; Mesh, Frank J.; Horn, Stuart B.; Finch, James A.; Dang, Khoa V.; Caulfield, John T.

    2000-12-01

    Raytheon's Infrared Operations (RIO) has invented and developed a new class of focal plane arrays: the Adaptive IR Sensor (AIRS) and Thinfilm Analog Image Processor (TAIP). The AIRS FPA is based upon biologically inspired on-focal-plane circuitry, which adaptively removes detector and optic temperature drift and 1/f-induced fixed pattern noise. This third-generation multimode IRFPA, also called a Smart FPA, is a 256x256-array format capable of operation in four modes: 1) Direct Injection (DI), 2) Adaptive Non-uniformity Correction (NUC), 3) Motion/Edge Detection, and 4) Subframe Averaging. The 320x240 TAIP has also shown excellent image processing results in the form of spatial and temporal processing.

  3. Signal Processing for a Lunar Array: Minimizing Power Consumption

    NASA Technical Reports Server (NTRS)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study: (1) a Lunar Radio Array for low frequency, high redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the Intergalactic Medium and evolution of large scale structures; (4) does the current cosmological model accurately describe the Universe before reionization? The Lunar Radio Array is (1) a radio interferometer based on the far side of the moon, (1a) necessary for precision measurements, (1b) shielded from earth-based and solar RFI, (1c) with no permanent ionosphere; (2) a minimum collecting area of approximately 1 square km and brightness sensitivity of 10 mK; (3) dependent on several technologies that must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  4. TRIGA: Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Pingree, Paula J.; Torgerson, J. Leigh

    2006-01-01

    We present the Telecommunications protocol processing subsystem using Reconfigurable Interoperable Gate Arrays (TRIGA), a novel approach that unifies fault tolerance, error correction coding and interplanetary communication protocol off-loading to implement CCSDS File Delivery Protocol and Datalink layers. The new reconfigurable architecture offers more than one order of magnitude throughput increase while reducing footprint requirements in memory, command and data handling processor utilization, communication system interconnects and power consumption.

  5. Physics-based signal processing algorithms for micromachined cantilever arrays

    DOEpatents

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
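
    The comparison step in the claimed method (measured deflection versus deflection model) can be illustrated with a simple matched-filter test. The sketch below is a generic model-based detector written for this review, not the patented algorithms.

    ```python
    import numpy as np

    def detect(measured, model, noise_sigma, threshold=5.0):
        """Least-squares amplitude of the modelled deflection in the measured
        record, and its significance relative to the sensor noise level."""
        amp = np.dot(measured, model) / np.dot(model, model)
        amp_sigma = noise_sigma / np.sqrt(np.dot(model, model))
        snr = amp / amp_sigma
        return snr > threshold, snr          # (detected?, detection statistic)
    ```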

  6. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    SciTech Connect

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2% accuracy of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.
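
    The power-estimation step can be written compactly as a Bartlett projection of the observed narrow-band snapshots onto the calibrated wavefield templates, summed over the calibration band (2.5-12.5 Hz in the study above). The sketch below is a generic version with hypothetical array shapes, not the authors' code.

    ```python
    import numpy as np

    def emfp_power(X, calib):
        """Empirical matched field (Bartlett) power per calibrated source.
        X:     (n_bands, n_sensors) observed complex spectra for one event
        calib: (n_sources, n_bands, n_sensors) calibrated wavefield templates"""
        power = np.zeros(calib.shape[0])
        for b in range(X.shape[0]):
            x = X[b] / np.linalg.norm(X[b])
            for s in range(calib.shape[0]):
                w = calib[s, b] / np.linalg.norm(calib[s, b])
                power[s] += np.abs(np.vdot(w, x)) ** 2
        return power / X.shape[0]

    # Classification: attribute the event to np.argmax(emfp_power(X, calib)).
    ```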

  7. Information Measures for Multisensor Systems

    DTIC Science & Technology

    2013-12-11

    permuted to generate spectra that were non-physical but preserved the entropy of the source spectra. Another 1000 spectra were constructed to mimic co...Research Laboratory (NRL) has yielded probabilistic models for spectral data that enable the computation of information measures such as entropy and... Keywords: Chemical sensing; Information theory; Spectral data; Information entropy; Information divergence; Mass spectrometry; Infrared spectroscopy; Multisensor

  8. Multisensor classification of sedimentary rocks

    NASA Technical Reports Server (NTRS)

    Evans, Diane

    1988-01-01

    A comparison is made between linear discriminant analysis and supervised classification results based on signatures from the Landsat TM, the Thermal Infrared Multispectral Scanner (TIMS), and airborne SAR, alone and combined into extended spectral signatures for seven sedimentary rock units exposed on the margin of the Wind River Basin, Wyoming. Results from a linear discriminant analysis showed that training-area classification accuracies based on the multisensor data were improved an average of 15 percent over TM alone, 24 percent over TIMS alone, and 46 percent over SAR alone, with similar improvement resulting when supervised multisensor classification maps were compared to supervised, individual sensor classification maps. When training area signatures were used to map spectrally similar materials in an adjacent area, the average classification accuracy improved 19 percent using the multisensor data over TM alone, 2 percent over TIMS alone, and 11 percent over SAR alone. It is concluded that certain sedimentary lithologies may be accurately mapped using a single sensor, but classification of a variety of rock types can be improved using multisensor data sets that are sensitive to different characteristics such as mineralogy and surface roughness.
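
    The extended-signature classification amounts to stacking the per-sensor signatures into one feature vector per pixel before training the discriminant. A minimal sketch using scikit-learn's LDA (an assumption about tooling, with hypothetical array names) follows.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def classify_extended(tm, tims, sar, labels, tm_new, tims_new, sar_new):
        """Train on stacked TM + TIMS + SAR signatures for labelled training-area
        pixels, then map spectrally similar materials in a new area.
        tm, tims, sar: (n_pixels, n_bands_*) arrays; labels: (n_pixels,) units."""
        X = np.hstack([tm, tims, sar])            # extended spectral signature
        clf = LinearDiscriminantAnalysis().fit(X, labels)
        return clf.predict(np.hstack([tm_new, tims_new, sar_new]))
    ```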

  9. Flat-plate solar array project. Volume 5: Process development

    NASA Technical Reports Server (NTRS)

    Gallagher, B.; Alexander, P.; Burger, D.

    1986-01-01

    The goal of the Process Development Area, as part of the Flat-Plate Solar Array (FSA) Project, was to develop and demonstrate solar cell fabrication and module assembly process technologies required to meet the cost, lifetime, production capacity, and performance goals of the FSA Project. R&D efforts expended by Government, industry, and universities in developing processes capable of meeting the project's goals under volume production conditions are summarized. The cost goals allocated for processing were demonstrated with small volume quantities that were extrapolated by cost analysis to large volume production. To provide proper focus and coverage of the process development effort, four separate technology sections are discussed: surface preparation, junction formation, metallization, and module assembly.

  10. Irma multisensor predictive signature model

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Flynn, David S.; Wellfare, Michael R.; Richards, Mike; Prestwood, Lee

    1995-06-01

    The Irma synthetic signature model was one of the first high resolution synthetic infrared (IR) target and background signature models to be developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory (WL/MN), the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser channel. This two channel version, Irma 3.0, was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model which supported correlated frame-to-frame imagery. This and other improvements were released in Irma 2.2. Recently, Irma 3.2, a passive IR/millimeter wave (MMW) code, was completed. Currently, upgrades are underway to include an active MMW channel. Designated Irma 4.0, this code will serve as a cornerstone of sensor fusion research in the laboratory from 6.1 concept development to 6.3 technology demonstration programs for precision guided munitions. Several significant milestones have been reached in this development process and are demonstrated. The Irma 4.0 software design has been developed and interim results are available. Irma is being developed to facilitate multi-sensor smart weapons research and development. It is currently in distribution to over 80 agencies within the U.S. Air Force, U.S. Army, U.S. Navy, ARPA, NASA, Department of Transportation, academia, and industry.

  11. Investigation of certain characteristics of thinned antenna arrays with digital signal processing

    NASA Astrophysics Data System (ADS)

    Danilevskii, L. V.; Domanov, Iu. A.; Korobko, O. V.; Tauroginskii, B. I.

    1983-11-01

    A thinned array with correlation processing of input signals is examined. It is shown that amplitude quantization does not change the signal at the thinned-array input as compared with the complete antenna array. The discreteness of time delay causes the thinned and complete arrays to become nonequivalent. Computer-simulation results are presented.

  12. Multi-sensor electrometer

    NASA Technical Reports Server (NTRS)

    Gompf, Raymond (Inventor); Buehler, Martin C. (Inventor)

    2003-01-01

    An array of triboelectric sensors is used for testing the electrostatic properties of a remote environment. The sensors may be mounted in the heel of a robot arm scoop. To determine the triboelectric properties of a planet surface, the robot arm scoop may be rubbed on the soil of the planet and the triboelectrically developed charge measured. By having an array of sensors, different insulating materials may be measured simultaneously. The insulating materials may be selected so their triboelectric properties cover a desired range. By mounting the sensor on a robot arm scoop, the measurements can be obtained during an unmanned mission.

  13. SAR processing with stepped chirps and phased array antennas.

    SciTech Connect

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas, but they can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with its own steering phase operation. This overcomes the problem of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.

  14. Array Processing in the Cloud: the rasdaman Approach

    NASA Astrophysics Data System (ADS)

    Merticariu, Vlad; Dumitru, Alex

    2015-04-01

    The multi-dimensional array data model is gaining more and more attention when dealing with Big Data challenges in a variety of domains such as climate simulations, geographic information systems, medical imaging or astronomical observations. Solutions provided by classical Big Data tools such as key-value stores and MapReduce, as well as traditional relational databases, have proved to be limited in domains associated with multi-dimensional data. This problem has been addressed by the field of array databases, in which systems provide database services for raster data without imposing limitations on the number of dimensions a dataset can have. Examples of datasets commonly handled by array databases include 1-dimensional sensor data, 2-D satellite imagery, 3-D x/y/t image time series, x/y/z geophysical voxel data, and 4-D x/y/z/t weather data; in astrophysics, such datasets can grow as large as simulations of the whole universe. rasdaman is a well-established array database that implements many optimizations for dealing with large data volumes and operation complexity. The latest of these is intra-query parallelization support: a network of machines collaborates in answering a single array database query by dividing it into independent sub-queries sent to different servers. This enables massive processing speed-ups, which promise solutions to research challenges on multi-Petabyte data cubes. Several correlated factors influence the speedup that intra-query parallelization brings: the number of servers, the capabilities of each server, the quality of the network, the availability of the data to the server that needs it in order to compute the result, and many more. In the effort of adapting the engine to cloud processing patterns, two main components have been identified: one that handles communication and gathers information about the arrays sitting on every server, and a processing unit responsible for dividing work
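    The intra-query splitting idea can be illustrated with a small self-contained Python sketch, using only numpy and multiprocessing rather than rasdaman's engine or query language: a large array is divided into independent tiles, each tile is reduced by a separate worker, and the partial results are combined.

      # Illustrative sketch of intra-query parallelization over array tiles; the
      # aggregation here is a simple sum, and this is not rasdaman code.
      from multiprocessing import Pool
      import numpy as np

      def tile_sum(tile):
          """Sub-query executed independently by one worker: reduce its tile to a scalar."""
          return float(tile.sum())

      def parallel_sum(array, tiles_per_axis=4, workers=4):
          """Split a 2-D array into tiles, evaluate each sub-query in parallel, combine."""
          rows = np.array_split(array, tiles_per_axis, axis=0)
          tiles = [t for r in rows for t in np.array_split(r, tiles_per_axis, axis=1)]
          with Pool(workers) as pool:
              partials = pool.map(tile_sum, tiles)   # independent sub-queries
          return sum(partials)                       # combine the partial results

      if __name__ == "__main__":
          data = np.random.rand(2000, 2000)
          print(parallel_sum(data), data.sum())      # the two totals agree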

  15. Observer-based beamforming algorithm for acoustic array signal processing.

    PubMed

    Bai, Long; Huang, Xun

    2011-12-01

    In the field of noise identification with microphone arrays, conventional delay-and-sum (DAS) beamforming is the most popular signal processing technique. However, acoustic imaging results generated by DAS beamforming are easily influenced by background noise, particularly for in situ wind tunnel tests. Even when arithmetic averaging is used to statistically remove the interference from the background noise, the results are far from perfect because interference from coherent background noise is still present. In addition, DAS beamforming based on arithmetic averaging fails to deliver real-time computational capability. An observer-based approach is introduced in this paper. This so-called observer-based beamforming method has a recursive form similar to the state observer in classical control theory and thus retains real-time computational capability; in addition, coherent background noise is gradually rejected over the iterations. Theoretical derivations of the observer-based beamforming algorithm are carefully developed in this paper. Two numerical simulations demonstrate the coherent background noise rejection and real-time computational capability of observer-based beamforming, which can therefore be regarded as an attractive algorithm for acoustic array signal processing.
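    For orientation, the conventional baseline that the paper improves on, a frequency-domain delay-and-sum beamformer, can be sketched in a few lines of Python; free-field steering vectors and a single analysis frequency are assumed, and this is not the observer-based algorithm itself.

      # Minimal narrowband delay-and-sum (DAS) beamforming sketch (illustrative only).
      import numpy as np

      def das_map(signals, mic_xyz, grid_xyz, fs, f0, c=343.0):
          """signals: (M, T) microphone time series; returns DAS power per focus point."""
          M, T = signals.shape
          spec = np.fft.rfft(signals, axis=1)
          x = spec[:, int(round(f0 * T / fs))]       # narrowband snapshot at f0
          C = np.outer(x, x.conj())                  # cross-spectral matrix (single snapshot)
          power = np.empty(len(grid_xyz))
          for i, g in enumerate(grid_xyz):
              r = np.linalg.norm(mic_xyz - g, axis=1)     # mic-to-focus distances
              w = np.exp(-2j * np.pi * f0 * r / c) / M    # steering (delay) weights
              power[i] = np.real(w.conj() @ C @ w)        # delay-and-sum output power
          return power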

  16. Phased array ultrasonic processing for enhanced and affordable diagnosis

    NASA Astrophysics Data System (ADS)

    Dominguez, N.; Rougeron, G.; Leberre, S.; Pautel, R.

    2013-01-01

    Phased array ultrasonic reconstruction techniques are often presented as a route to better and enhanced diagnosis. To date, however, few applications of these techniques can be found in industry, partly because of questions about sizing but also because they often require heavy data acquisition. A way forward is to propose techniques requiring less intensive data acquisition so as to make them broadly affordable in practice. Several approaches ranging from full matrix capture to paintbrush acquisitions are presented in this paper, in combination with associated reconstruction processing such as the Total Focusing Method (TFM) and the Time Domain Topological Energy (TDTE) techniques. Emphasis is given to their relative relevance and practical applicability for typical configurations of interest to industry. The paper also presents recent efforts on the acceleration of processing computation times, in particular through the use of GPU architectures.

  17. Array Signal Processing for Source Localization and Digital Communication.

    NASA Astrophysics Data System (ADS)

    Song, Bong-Gee

    Array antennas are used in several areas such as sonar and digital communication. Although array patterns may differ depending on the application, arrays are used with a view to collecting more data and obtaining better results. We first consider a passive sonar system in random environments, where the index of refraction is random. While source localization problems for deterministic environments are well studied, they require accurate propagation models which are not available in random environments. We extend the localization problem to random environments. It has been shown that methods developed for deterministic environments fail in random environments because of the stochastic nature of acoustic propagation. Therefore, we model the observations as random and use a statistical signal processing technique combined with physics. The statistical signal model is provided by physics, either empirically or theoretically, and the performance of the technique relies on the accuracy of the statistical models. We have applied the maximum likelihood method to angle-of-arrival estimation and range estimation problems. The Cramer-Rao lower bounds have also been derived to predict the estimation performance. Next, we use array antennas for diversity-combining equalization in digital communications. Spatial diversity equalization is used in two ways: to improve the bit error rate or to improve the transmission rate. This is feasible by using more antennas at the receiver end. We apply Helstrom's saddle point integration method to multi-input multi-output communication systems and show that a factor of 3-4 of channel reuse is possible. It is also shown that the advantage is due to the diversity itself, not to additional taps. We further improve the equalization performance by joint pre- and postfilter design. Two different methods have been proposed according to the prefilter type. Although the mean square error is not easy to minimize, appropriate methods have been adopted and show

  18. Automated multisensor registration - Requirements and techniques

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S.

    1991-01-01

    The synergistic utilization of data from a suite of remote sensors requires multi-dimensional analysis of the data. Prior to this analysis, processing is required to correct for the systematic geometric distortions characteristic of each sensor, followed by a registration operation to remove any residual offsets. Furthermore, to handle a large volume of data and high data rates, the registration process must be fully automated. A conceptual approach is presented that integrates a variety of registration techniques and selects the candidate algorithm based on certain performance criteria. The performance requirements for an operational algorithm are formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several computational techniques are tested and their performance evaluated using a multisensor test data set assembled from the Landsat TM, Seasat, SIR-B, TIMS, and SPOT sensors. The results are discussed and recommendations for future studies are given.

  19. Multisensor smart system on a chip.

    PubMed

    Sellami, Louiza; Newcomb, Robert W

    2010-01-01

    Sensors are becoming of considerable importance in several areas, particularly in health care. Therefore, the development of inexpensive and miniaturized sensors that are highly selective and sensitive, and for which control and analysis are all present on one chip, is very desirable. These types of sensors can be implemented with microelectromechanical systems (MEMS), and because they are fabricated on a semiconductor substrate, additional signal processing circuitry can easily be integrated into the chip, thereby readily providing additional functions such as multiplexing and analog-to-digital conversion. Here, we present a general framework for the design of a multisensor system on a chip, which includes intelligent signal processing as well as built-in self-test and parameter adjustment units. Specifically, we outline the system architecture and develop a transistorized bridge biosensor for monitoring changes in the dielectric constant of a fluid, which could be used for in-home monitoring of kidney function of patients with renal failure.

  20. The Use of a Microcomputer Based Array Processor for Real Time Laser Velocimeter Data Processing

    NASA Technical Reports Server (NTRS)

    Meyers, James F.

    1990-01-01

    The application of an array processor to laser velocimeter data processing is presented. The hardware is described along with the method of parallel programming required by the array processor. A portion of the data processing program is described in detail. The increase in computational speed of a microcomputer equipped with an array processor is illustrated by comparative testing with a minicomputer.

  1. Inspecting Composite Ceramic Armor Using Advanced Signal Processing Together with Phased Array Ultrasound

    DTIC Science & Technology

    2010-01-08

    processing techniques have been developed to help improve phased array ultrasonic inspection and analysis of multi-layered ceramic armor panels. ... immersion phased array ultrasound system. Some of these specimens had intentional design defects inserted interior to the specimens. Because of the very

  2. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric

    2014-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in a hybrid Cloud (private Eucalyptus & public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEaSUREs grant. We will present the architecture of SciReduce, describe the
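    The on-demand pull of array variables can be illustrated with a small Python sketch that reads an OPeNDAP URL and reduces slices in parallel worker processes; the URL and variable name are placeholders, and this is not SciReduce code.

      # Minimal sketch: pull one variable slice at a time over OPeNDAP and reduce it.
      from multiprocessing import Pool
      import numpy as np
      import netCDF4  # assumes the netCDF4 library is built with OPeNDAP (DAP) support

      OPENDAP_URL = "http://example.org/opendap/airs_l3_h2o_2010.nc"  # hypothetical endpoint
      VARIABLE = "water_vapor"                                        # hypothetical variable

      def reduce_month(month_index):
          """Pull one month of the variable over the network and return its mean."""
          ds = netCDF4.Dataset(OPENDAP_URL)           # opens the remote dataset lazily
          data = ds.variables[VARIABLE][month_index]  # only this slice is transferred
          ds.close()
          return float(np.nanmean(data))

      if __name__ == "__main__":
          with Pool(4) as pool:                       # four worker "nodes" on one host
              print(pool.map(reduce_month, range(12)))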

  3. Multipath Array Processing for Co-Prime and Under-Sampled Sensor Arrays

    DTIC Science & Technology

    2015-09-30

    The resulting optimization problem has a closed-form solution that can be used in conjunction with a subspace method like MUSIC in order to get ... the spatial power spectrum we see that the weak source is undetectable for each type of array. In Fig. 1(b), which compares the MUSIC spectra of the ... synthetic array and the ULA, we are able to see both sources. In addition, the MUSIC spectra for both the synthetic array and the ULA are very similar

  4. An operational approach for infrasound multi-array processing

    NASA Astrophysics Data System (ADS)

    Vergoz, J.; Le Pichon, A.; Herry, P.; Blanc, E.

    2009-04-01

    The infrasound network of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) is not yet fully established. However, it has demonstrated its capability for detecting and locating infrasonic sources such as meteorites and volcanic eruptions on a global scale. Unfortunately, such ground-truth events are rare. Therefore, regions with dense infrasound networks have to be considered in order to test and calibrate detection and location procedures (Le Pichon et al., 2008, J. Geophys. Res., 113, D12115, doi:10.1029/2007JD009509). In Central Europe, several years of continuous infrasound recordings are available for many infrasound arrays, not all of which are part of the IMS. Infrasound waveforms are routinely processed in the 0.1 to 4 Hz frequency band using PMCC as a real-time detector. After applying a categorization procedure to remove detections associated with environmental noise, a blind fusion provides a list of events to be reviewed by the analyst. In order to check the geophysical consistency of the located events, an interactive tool has been developed. All results of the automatic processing are presented along with a realistic estimate of the network detection capability that incorporates near-real-time atmospheric updates. Among the dominant acoustic sources of human origin, peaks in the geographical distribution of infrasound events correspond well with seismically active regions where operational mines have been identified. With the increasing number of IMS and regional cluster infrasound arrays deployed around the globe, conducting consistent analyses on a routine basis provides an extensive database for discriminating between natural and artificial acoustic sources. Continuing such studies may also help quantify relationships between infrasonic observables and atmospheric specification problems, thus opening new fields for investigation into inverse problems.

  5. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and a multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons, each locally connected to its neighboring neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip with active dimensions of 1380 micron x 746 micron in a 2-micron CMOS technology.

  6. Adaptive beamforming for array signal processing in aeroacoustic measurements.

    PubMed

    Huang, Xun; Bai, Long; Vinogradov, Igor; Peers, Edward

    2012-03-01

    Phased microphone arrays have become an important tool for the localization of noise sources in aeroacoustic applications. In most practical aerospace cases the conventional beamforming algorithm of the delay-and-sum type has been adopted. Conventional beamforming cannot take advantage of knowledge of the noise field, and thus has poorer resolution in the presence of noise and interference. Adaptive beamforming has been used for more than three decades to address these issues and has achieved various degrees of success in communications and sonar. In this work an adaptive beamforming algorithm designed specifically for aeroacoustic applications is discussed and applied to practical experimental data. The results show that the adaptive beamforming method can save significant amounts of post-processing time for a deconvolution method; for example, it reduces the DAMAS computation time by at least 60% for the practical case considered in this work. Adaptive beamforming can therefore be considered a promising signal processing method for aeroacoustic measurements.
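    As a point of reference, one common form of adaptive beamforming, the narrowband minimum-variance (Capon/MVDR) beamformer, can be sketched in a few lines of Python with diagonal loading added for robustness; this is a generic illustration, not the algorithm of the paper.

      # Narrowband MVDR (Capon) spectrum sketch in numpy (illustrative only).
      import numpy as np

      def mvdr_power(C, steering, load=1e-3):
          """C: (M, M) cross-spectral matrix; steering: (G, M) steering vectors."""
          M = C.shape[0]
          # Diagonal loading stabilizes the inverse for ill-conditioned C.
          Cinv = np.linalg.inv(C + load * np.trace(C).real / M * np.eye(M))
          # Capon spectrum: P(g) = 1 / (a_g^H C^-1 a_g) for each focus point g.
          denom = np.einsum('gm,mn,gn->g', steering.conj(), Cinv, steering)
          return 1.0 / np.real(denom)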

  7. Analysis of Eigenspace Dynamics with Applications to Array Processing

    DTIC Science & Technology

    2014-09-30

    drifting arrays strongly affected by deformation or array-depth perturbations. The long-term goal of this effort is the development of physically ...

  8. Damage Detection in Composite Structures with Wavenumber Array Data Processing

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Guided ultrasonic waves (GUW) have the potential to be an efficient and cost-effective method for rapid damage detection and quantification in large structures. Attractive features include sensitivity to a variety of damage types and the capability of traveling relatively long distances. They have proven to be an efficient approach for crack detection and localization in isotropic materials. However, techniques must be pushed beyond isotropic materials in order to be valid for composite aircraft components. This paper presents our study of GUW propagation and interaction with delamination damage in composite structures using wavenumber array data processing, together with advanced wave propagation simulations. A parallel elastodynamic finite integration technique (EFIT) is used for the example simulations. A multi-dimensional Fourier transform is used to convert time-space wavefield data into the frequency-wavenumber domain. Wave propagation in the wavenumber-frequency domain shows clear distinctions among the guided wave modes that are present, which allows a guided wave mode to be extracted through filtering and reconstruction techniques. The presence of delamination causes corresponding spectral changes. Results from 3D CFRP guided wave simulations with delamination damage in flat-plate specimens are used to study wave interaction with the structural defect.
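    A bare-bones Python version of the wavefield transform and mode-filtering step might look like the sketch below; the input is assumed to be a synthetic time-space wavefield, and the wavenumber band limits are placeholders rather than values from the paper.

      # Frequency-wavenumber filtering sketch: 3-D FFT of u(t, x, y), band-pass in |k|.
      import numpy as np

      def wavenumber_filter(wavefield, dx, dy, k_lo, k_hi):
          """wavefield: (T, Nx, Ny) array; returns the mode-filtered time-space field."""
          T, Nx, Ny = wavefield.shape
          spectrum = np.fft.fftn(wavefield)                    # 3-D FFT: (f, kx, ky)
          kx = np.fft.fftfreq(Nx, d=dx) * 2 * np.pi
          ky = np.fft.fftfreq(Ny, d=dy) * 2 * np.pi
          kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)  # |k| on the kx-ky grid
          mask = (kmag >= k_lo) & (kmag <= k_hi)               # keep one wavenumber band
          spectrum *= mask[None, :, :]
          return np.real(np.fft.ifftn(spectrum))               # back to time-space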

  9. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack), or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  10. Model-based Processing of Micro-cantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Lee, C L; Rudd, R E; Burnham, A K

    2004-11-17

    We develop a model-based processor (MBP) for a micro-cantilever array sensor to detect target species in solution. After discussing the generalized framework for this problem, we develop the specific model used in this study. We perform a proof-of-concept experiment, fit the model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: (1) averaged deflection data, and (2) multi-channel data. In both cases the evaluation proceeds by first performing a model-based parameter estimation to extract the model parameters, next performing a Gauss-Markov simulation, designing the optimal MBP and finally applying it to measured experimental data. The simulation is used to evaluate the performance of the MBP in the multi-channel case and compare it to a "smoother" ("averager") typically used in this application. It was shown that the MBP not only provides a significant gain (approximately 80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, though it includes a correctable systematic bias error. The project's primary accomplishment was the successful application of model-based processing to signals from micro-cantilever arrays: a 40-60 dB improvement over the smoother algorithm was demonstrated. This result was achieved through the development of appropriate mathematical descriptions for the chemical and mechanical phenomena, and incorporation of these descriptions directly into the model-based signal processor. A significant challenge was the development of a framework that would maximize the usefulness of the signal processing algorithms while ensuring the accuracy of the mathematical description of the chemical-mechanical signal. Experimentally, the difficulty was to identify and characterize the non
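    To make the model-based versus smoother comparison concrete, here is a toy Python sketch in which a scalar Kalman filter stands in for the model-based processor and a moving average stands in for the "smoother"; all parameters and the drift term are invented for illustration and are not taken from the report.

      # Toy Gauss-Markov simulation: compare a scalar Kalman filter to a moving average.
      import numpy as np

      rng = np.random.default_rng(0)
      a, drift, q, r = 0.98, 0.01, 1e-4, 1e-2    # invented state and noise parameters
      T = 500
      x = np.zeros(T)                            # "true" deflection (Gauss-Markov state)
      y = np.zeros(T)                            # noisy measurement
      for t in range(1, T):
          x[t] = a * x[t - 1] + drift + rng.normal(0.0, np.sqrt(q))
          y[t] = x[t] + rng.normal(0.0, np.sqrt(r))

      xhat = np.zeros(T)                         # scalar Kalman filter as the "MBP"
      p = 1.0
      for t in range(1, T):
          xpred, ppred = a * xhat[t - 1] + drift, a * a * p + q
          k = ppred / (ppred + r)                # Kalman gain
          xhat[t] = xpred + k * (y[t] - xpred)
          p = (1.0 - k) * ppred

      smoothed = np.convolve(y, np.ones(20) / 20, mode="same")   # the "averager"
      print("MBP MSE:     ", np.mean((xhat - x) ** 2))
      print("smoother MSE:", np.mean((smoothed - x) ** 2))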

  11. A bit-serial VLSI array processing chip for image processing

    NASA Technical Reports Server (NTRS)

    Heaton, Robert; Blevins, Donald; Davis, Edward

    1990-01-01

    An array processing chip integrating 128 bit-serial processing elements (PEs) on a single die is discussed. Each PE has a 16-function logic unit, a single-bit adder, a 32-b variable-length shift register, and 1 kb of local RAM. Logic in each PE provides the capability to mask PEs individually. A modified grid interconnection scheme allows each PE to communicate with each of its eight nearest neighbors. A 32-b bus is used to transfer data to and from the array in a single cycle. Instruction execution is pipelined, enabling all instructions to be executed in a single cycle. The 1-micron CMOS design contains over 1.1 x 10 to the 6th transistors on an 11.0 x 11.7-mm die.

  12. Hierarchical Multisensor Image Understanding.

    DTIC Science & Technology

    1984-07-01

    paradigm based on graphs with attributed lists as nodes and image processing operators as arcs. This report reviews activities on the project during ... other eight. Four of these proved to require context-based information in order to perform reasonable region discrimination. The remaining four ... order to achieve a novel, real-time implementation of the conflict resolution rules. Each pixel in each labeled image is labeled "on" if it is part of a

  13. Signal processing techniques applied to a small circular seismic array

    NASA Astrophysics Data System (ADS)

    Mosher, C. C.

    1980-03-01

    The travel time method (TTM) for locating earthquakes and the wavefront curvature method (WCM), which determines distance to an event by measuring the curvature of the wavefront, can be combined in a procedure referred to as Apparent Velocity Mapping (AVM). Apparent velocities for mine blasts and local earthquakes computed by the WCM are inverted for a velocity structure, which is then used in the TTM to relocate the events. Model studies indicate that AVM can adequately resolve the velocity structure for the case of a linear velocity-depth gradient. Surface waves from mine blasts recorded by the Central Minnesota Seismic Array were analyzed using a modification of the multiple filter analysis (MFA) technique to determine group arrival times at several stations of the array. The advantages of array MFA are that the source location need not be known, lateral refraction can be detected and removed, and multiple arrivals can be separated. A modeling procedure that can be used with array MFA is described.

  14. Two subroutines used in processing of arrayed data files

    NASA Astrophysics Data System (ADS)

    Wu, Guang-Jie

    Arrayed data files are common in astronomy: a text file produced with an editor such as "EDIT", a table compiled in Microsoft Word or Excel, a FITS table, and so on. The CDS database (Centre de Données astronomiques de Strasbourg) alone holds thousands of star catalogues. A catalogue obtained from a colleague may have been produced with any of a variety of software packages, possibly years ago, and may have peculiarities of its own. Such listed, multidimensional data files frequently need to be reworked, or new ones created. Row-oriented operations that add or remove lines are easy with common editors such as "EDIT"; column-oriented operations are more troublesome. A particular problem is the tab character: different software, and even different printers from the same manufacturer, treat tabs differently, because a single tab may stand for anything from one to eight spaces, and a ready-made tool is not always at hand. For data files that can be opened with "EDIT", the two programs presented in this paper help to diagnose such problems and to solve them conveniently and easily: they convert tab characters to the corresponding number of spaces, and they pick up, delete, or pad columns, or link two data files side by side as columns of one file.
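    The two operations are simple enough to re-sketch in a few lines of Python; the original subroutines are not reproduced here, and the function and file names are illustrative only.

      # Hedged re-sketch of the two column-handling subroutines described above.
      def expand_tabs(in_path, out_path, tabsize=8):
          """Rewrite a text file with every tab replaced by the equivalent spaces."""
          with open(in_path) as src, open(out_path, "w") as dst:
              for line in src:
                  dst.write(line.expandtabs(tabsize))

      def link_columns(path_a, path_b, out_path, sep="  "):
          """Join two listed data files line by line as side-by-side columns."""
          with open(path_a) as fa, open(path_b) as fb, open(out_path, "w") as dst:
              for left, right in zip(fa, fb):
                  dst.write(left.rstrip("\n") + sep + right.rstrip("\n") + "\n")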

  15. Multisensor user authentication

    NASA Astrophysics Data System (ADS)

    Colombi, John M.; Krepp, D.; Rogers, Steven K.; Ruck, Dennis W.; Oxley, Mark E.

    1993-09-01

    User recognition is examined using neural and conventional techniques for processing speech and face images. This article attempts, for the first time, to overcome the significant problem of distortions inherently captured over multiple sessions (days). Speaker recognition uses both Linear Predictive Coding (LPC) cepstral and auditory neural model representations with speaker-dependent codebook designs. For facial imagery, recognition is based on a neural network consisting of a single-hidden-layer multilayer perceptron trained with backpropagation, using either the raw data as inputs or principal components of the raw data computed with the Karhunen-Loeve Transform. The data consist of 10 subjects; each subject recorded utterances and had images collected for 10 days. The utterances collected were 400 rich phonetic sentences (4 sec), 200 subject name recordings (3 sec), and 100 imposter name recordings (3 sec). The face data consist of over 2000 32 X 32 pixel, 8-bit gray-scale images of the 10 subjects. Each subsystem attains over 90% verification accuracy individually using test data gathered on days following the training data.

  16. Multisensor user authentication

    NASA Astrophysics Data System (ADS)

    Colombi, John M.; Krepp, D.; Rogers, Steven K.; Ruck, Dennis W.; Oxley, Mark E.

    1993-08-01

    User recognition is examined using neural and conventional techniques for processing speech and face images. This article attempts, for the first time, to overcome the significant problem of distortions inherently captured over multiple sessions (days). Speaker recognition uses both Linear Predictive Coding (LPC) cepstral and auditory neural model representations with speaker-dependent codebook designs. For facial imagery, recognition is based on a neural network consisting of a single-hidden-layer multilayer perceptron trained with backpropagation, using either the raw data as inputs or principal components of the raw data computed with the Karhunen-Loeve Transform. The data consist of 10 subjects; each subject recorded utterances and had images collected for 10 days. The utterances collected were 400 rich phonetic sentences (4 sec), 200 subject name recordings (3 sec), and 100 imposter name recordings (3 sec). The face data consist of over 2000 32 X 32 pixel, 8-bit gray-scale images of the 10 subjects. Each subsystem attains over 90% verification accuracy individually using test data gathered on days following the training data.

  17. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing those data, and developing fusion algorithms that use them; the report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against the intensity modulation and intensity-hue-saturation image fusion algorithms available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery. The second algorithm is based on a linear-regression technique and was analyzed using the same Landsat and SPOT data.

  18. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays

    PubMed Central

    Lutton, Rebecca E.M.; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A.David; Donnelly, Ryan F.

    2015-01-01

    A novel manufacturing process for fabricating microneedle (MN) arrays has been designed and evaluated. The prototype is able to produce 14 × 14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results showed negligible difference between the two methods, each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted into a skin simulant; in both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. PMID:26302858

  19. Dimpled Ball Grid Array process development for space flight applications

    NASA Technical Reports Server (NTRS)

    Barr, S. L.; Mehta, A.

    2000-01-01

    The 472 Dimpled Ball Grid Array (D-BGA) package has not been used in past space flight environments; therefore, it is necessary to determine the robustness and reliability of its solder joints. The 472 D-BGA packages passed the environmental tests within the specifications and are now qualified for use on space flight electronics.

  20. Micromachined Thermoelectric Sensors and Arrays and Process for Producing

    NASA Technical Reports Server (NTRS)

    Foote, Marc C. (Inventor); Jones, Eric W. (Inventor); Caillat, Thierry (Inventor)

    2000-01-01

    Linear arrays with up to 63 micromachined thermopile infrared detectors on silicon substrates have been constructed and tested. Each detector consists of a suspended silicon nitride membrane with 11 thermocouples of sputtered Bi-Te and Bi-Sb-Te thermoelectric films. At room temperature and under vacuum these detectors exhibit response times of 99 ms, zero-frequency D* values of 1.4 x 10(exp 9) cm Hz(exp 1/2)/W, and responsivity values of 1100 V/W when viewing a 1000 K blackbody source. The only measured source of noise above 20 mHz is Johnson noise from the detector resistance. These results represent the best performance reported to date for an array of thermopile detectors. The arrays are well suited for uncooled dispersive point spectrometers. In another embodiment, also with Bi-Te and Bi-Sb-Te thermoelectric materials on micromachined silicon nitride membranes, detector arrays have been produced with D* values as high as 2.2 x 10(exp 9) cm Hz(exp 1/2)/W for 83 ms response times.

  1. Body-Attachable and Stretchable Multisensors Integrated with Wirelessly Rechargeable Energy Storage Devices.

    PubMed

    Kim, Daeil; Kim, Doyeon; Lee, Hyunkyu; Jeong, Yu Ra; Lee, Seung-Jung; Yang, Gwangseok; Kim, Hyoungjun; Lee, Geumbee; Jeon, Sanggeun; Zi, Goangseup; Kim, Jihyun; Ha, Jeong Sook

    2016-01-27

    A stretchable multisensor system is successfully demonstrated with an integrated energy-storage device, an array of microsupercapacitors that can be repeatedly charged via a wireless radio-frequency power receiver on the same stretchable polymer substrate. The integrated devices are interconnected by liquid-metal interconnections and operate stably, without noticeable performance degradation, under the strain caused by skin attachment and under uniaxial strains of up to 50%.

  2. Monitoring of Reinforced Concrete Corrosion and Deterioration by Periodic Multi-Sensor Non-Destructive Evaluation

    NASA Astrophysics Data System (ADS)

    Arndt, R. W.; Cui, J.; Huston, D. R.

    2011-06-01

    The paper showcases a collaborative benchmark project evaluating NDE methods for deterioration monitoring of laboratory bridge decks. The focus of this effort is to design and build concrete test specimens, artificially induce and monitor corrosion, and periodically perform multi-sensor NDE inspections, followed by 3D imaging and destructive validation. The NDE methods used include ultrasonic echo array, ground penetrating radar (GPR), active infrared thermography with induction heating, and time-resolved thermography with induction heating.

  3. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher-spatial-resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet-transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat Thematic Mapper (TM) multispectral and coregistered higher-resolution SPOT panchromatic images. The objective was to obtain increased-spatial-resolution, false-color composite products to support the interpretation of land cover types, wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used, and the results were compared with those obtained using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a previously reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and the DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher-resolution reference. The simulated imagery was made by blurring higher-resolution color-infrared photography with the TM sensor's point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between the fused images and the full-resolution reference. Image examples with TM and SPOT 10-m panchromatic data illustrate the reduction in artifacts due to SIDWT-based fusion.
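    The shift-invariant transform plus maximum-selection idea can be approximated in Python with PyWavelets' stationary wavelet transform, as in the hedged sketch below; the wavelet choice, decomposition level, and the averaging of approximation coefficients are assumptions, not details taken from the paper.

      # Shift-invariant wavelet fusion sketch with a point-wise maximum-magnitude rule.
      import numpy as np
      import pywt

      def swt_fuse(band, pan, wavelet="db2", level=2):
          """band, pan: 2-D arrays of equal shape, sides divisible by 2**level."""
          ca = pywt.swt2(band, wavelet, level=level)
          cb = pywt.swt2(pan, wavelet, level=level)
          fused = []
          for (aA, (aH, aV, aD)), (bA, (bH, bV, bD)) in zip(ca, cb):
              cA = 0.5 * (aA + bA)                    # average the approximation bands
              details = tuple(                        # keep the larger-magnitude detail
                  np.where(np.abs(u) >= np.abs(v), u, v)
                  for u, v in ((aH, bH), (aV, bV), (aD, bD)))
              fused.append((cA, details))
          return pywt.iswt2(fused, wavelet)           # reconstruct the fused image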

  4. Imaging and data processing with the Low Frequency Space Array

    NASA Technical Reports Server (NTRS)

    Simon, R. S.; Spencer, J. H.; Dennison, B. K.; Weiler, K. W.; Johnston, K. J.; Kaiser, M. L.; Desch, M. D.; Fainberg, J.; Brown, L. W.; Stone, R. G.

    1987-01-01

    The Low Frequency Space Array (LFSA) is being designed to image the entire sky at extremely low radio frequencies with arcmin to subarcmin resolution. To accomplish this goal, data from LFSA will be continuously integrated for many months and then be used with aperture synthesis techniques to produce images. The three dimensional nature of LFSA and the effects of orbital geometry make LFSA a continuously evolving array which has an excellent synthesized point-response function. After transforming the data to produce an initial image, it is possible to remove low-level sidelobe responses remaining in the image and thereby produce a high dynamic-range image. Interference (both man-made and from solar-system objects) is a potential problem for LFSA, but appropriate data handling techniques are available which should eliminate any of its effects.

  5. Coherence analysis using canonical coordinate decomposition with applications to sparse processing and optimal array deployment

    NASA Astrophysics Data System (ADS)

    Azimi-Sadjadi, Mahmood R.; Pezeshki, Ali; Wade, Robert L.

    2004-09-01

    Sparse array processing methods are typically used to improve the spatial resolution of sensor arrays for the estimation of direction of arrival (DOA). The fundamental assumption behind these methods is that the signals received by the sparse sensors (or groups of sensors) are coherent. However, coherence may vary significantly with changes in environmental, terrain, and operating conditions. In this paper canonical correlation analysis is used to study the variations in coherence between pairs of sub-arrays in a sparse array problem. The data set for this study is a subset of an acoustic signature data set acquired from the US Army TACOM-ARDEC, Picatinny Arsenal, NJ. This data set was collected using three wagon-wheel-type arrays with five microphones. The results show that under nominal operating conditions, i.e. no extreme wind noise or masking effects from trees, buildings, etc., the signals collected at different sensor arrays are indeed coherent even at distant node separation.
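    Canonical correlations between two sub-array recordings can be computed with a standard SVD-based construction, sketched below in Python/numpy; this is a generic textbook formulation rather than the processing chain used in the paper.

      # Canonical correlation analysis (CCA) between two real-valued sub-array recordings.
      import numpy as np

      def canonical_correlations(X, Y, eps=1e-9):
          """X: (T, p) and Y: (T, q) snapshot matrices; returns canonical correlations."""
          X = X - X.mean(axis=0)                      # remove the means
          Y = Y - Y.mean(axis=0)
          Cxx = X.T @ X / len(X) + eps * np.eye(X.shape[1])   # regularized covariances
          Cyy = Y.T @ Y / len(Y) + eps * np.eye(Y.shape[1])
          Cxy = X.T @ Y / len(X)
          # Whiten each block; the singular values of the whitened cross-covariance
          # are the canonical correlations between the two sub-arrays.
          Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
          Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
          return np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)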

  6. Application of Seismic Array Processing to Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions by current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas, since the tsunami waves arrive before the data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provide faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the EarthScope USArray Transportable Array. The results yield reasonable estimates of the rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment, and average slip. The slip model is then used as the input to the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of the rupture, and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. The predicted arrival times and wave amplitudes fit the observations reasonably well. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, the Pacific Northwest, and Alaska, where dense seismic networks with real-time data telemetry and open data accessibility are available, such as the Japanese Hi-net (>800
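    The core back-projection step, delaying and stacking array waveforms onto a grid of candidate source cells, can be illustrated with the schematic Python sketch below; the station geometry, travel times, and window length are placeholders, and this is not the authors' implementation.

      # Schematic delay-and-stack back-projection over a grid of candidate source cells.
      import numpy as np

      def back_project(waveforms, dt, travel_times, win=50):
          """waveforms: (S, T) station traces sampled every dt seconds;
          travel_times: (G, S) predicted travel times (s) from each candidate grid cell
          to each station; returns the stacked beam energy for each grid cell."""
          S, T = waveforms.shape
          power = np.zeros(travel_times.shape[0])
          for g, tt in enumerate(travel_times):
              shifts = np.round(tt / dt).astype(int)          # align on predicted arrivals
              segs = [waveforms[s, shifts[s]:shifts[s] + win] for s in range(S)]
              n = min(len(seg) for seg in segs)               # guard against short traces
              if n == 0:
                  continue
              stack = np.sum([seg[:n] for seg in segs], axis=0)   # delay-and-stack
              power[g] = np.sum(stack ** 2)                   # coherent energy for this cell
          return power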

  7. Signal processing for damage detection using two different array transducers.

    PubMed

    El Youbi, F; Grondel, S; Assaad, J

    2004-04-01

    This work describes an investigation into the development of a new health monitoring system for aeronautical applications. The health monitoring system is based on the emission and reception of Lamb waves by multi-element piezoelectric transducers (i.e., arrays) bonded to the structure. The emitter array consists of three different elementary bar transducers; these transducers have the same thickness and length but different widths. The receiver array has 32 identical elements. This system offers the possibility to understand the nature of the generated waves and to determine the sensitivity of each mode to possible damage. It presents two principal advantages: first, by exciting all elements in phase, it is possible to generate several Lamb modes at the same time; second, the two-dimensional Fourier transform (2D-FT) of the received signal can be easily computed. Experimental results for an aluminum plate with different hole sizes are shown. The A0, S0, A1, S1 and S2 modes are generated at the same time. This study shows that the A0 mode is particularly well suited to detecting flaws of this geometrical type.

  8. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

    The architecture of an autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is described in this paper. This technology enables accurate real-time collection and processing of information, giving the vehicle greater intelligence and robustness. The method used to achieve robot vision and multi-sensor fusion is discussed in detail. Simulation results in several operating modes show that this intelligent vehicle performs well in obstacle identification and avoidance and in path planning, which provides higher reliability during vehicle operation.

  9. Multisensor detection and tracking of tactical ballistic missiles using knowledge-based state estimation

    NASA Astrophysics Data System (ADS)

    Woods, Edward; Queeney, Tom

    1994-06-01

    Westinghouse has developed and demonstrated a system that performs multisensor detection and tracking of tactical ballistic missiles (TBM). Under a USAF High Gear Program, we developed knowledge-based techniques to discriminate TBM targets from ground clutter, air-breathing targets, and false alarms. Upon track initiation, the optimal estimate of the target's launch point, impact point, and instantaneous position was computed by fusing returns from multiple noncollocated sensors. The system also distinguishes different missile types during the boost phase and forms multiple hypotheses to account for measurement and knowledge-base uncertainties. This paper outlines the salient features of the knowledge-based processing of the multisensor data.

  10. A solar array module fabrication process for HALE solar electric UAVs

    NASA Astrophysics Data System (ADS)

    Carey, P. G.; Aceves, R. C.; Colella, N. J.; Thompson, J. B.; Williams, K. A.

    1993-12-01

    We describe a fabrication process for manufacturing high power-to-weight-ratio flexible solar array modules for use on high altitude long endurance (HALE) solar electric unmanned air vehicles (UAVs). A span-loaded flying-wing vehicle, known as the RAPTOR Pathfinder, is being employed as a flying test bed to expand the envelope of solar powered flight to high altitudes. It requires multiple lightweight, flexible solar array modules able to endure adverse environmental conditions: at high altitudes the solar UV flux is significantly enhanced relative to sea level, and extreme thermal variations occur. Our process involves first electrically interconnecting solar cells into an array and then laminating the array between top and bottom laminate layers to form a solar array module. After careful evaluation of candidate polymers, fluoropolymer materials were selected as the laminate layers because of their inherent ability to withstand the hostile conditions imposed by the environment.

  11. Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine Nagib

    1994-01-01

    We address various issues dealing with the use of disk arrays in transaction processing environments. We look at the problem of transaction undo recovery and propose a scheme for using the redundancy in disk arrays to support undo recovery. The scheme uses twin-page storage for the parity information in the array and speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters as well as from temporary site failures and disk crashes is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.

  12. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace and augment the existing 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at a European longitude, each with 400 12-m downlink antennas, plus a DSN central facility at JPL that will remotely conduct all real-time monitoring and control for the network. The signal processing objectives are to provide a means of evaluating the performance of the Breadboard Array's antenna subsystem; to design and build prototype hardware; to demonstrate and evaluate proposed signal processing techniques; and to gain experience with various technologies that may be used in the Large Array. Results are summarized.

  13. Neural Source Localization Using Advanced Sensor Array Signal Processing Techniques

    DTIC Science & Technology

    2001-10-25

    Singular Value Decomposition (SVD) of the spatial covariance matrix R_Yj ∈ ℝ^(M x M) of Yj [7]. From a theoretical viewpoint, the information about the neural ... source's spatial amplitude distribution or "footprint" on the array side is contained in this covariance matrix. The SVD allows one to assess the P ... of the SVD stage. The last step consists of selecting certain subbands from the full

  14. Design, processing and testing of LSI arrays, hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.; Rothrock, C. W.

    1979-01-01

    Mathematical cost models previously developed for hybrid microelectronic subsystems were refined and expanded. Rework terms related to substrate fabrication, nonrecurring developmental and manufacturing operations, and prototype production are included. Sample computer programs were written to demonstrate hybrid microelectronic applications of these cost models. Computer programs were generated to calculate and analyze values for the total microelectronics costs. Large-scale integrated (LSI) chips utilizing tape chip carrier technology were studied, and the feasibility of interconnecting arrays of LSI chips utilizing tape chip carrier and semiautomatic wire bonding technology was demonstrated.

  15. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  16. View and sensor planning for multi-sensor surface inspection

    NASA Astrophysics Data System (ADS)

    Gronle, Marc; Osten, Wolfgang

    2016-06-01

    Modern manufacturing processes enable the fabrication of high-value parts with high precision and performance. At the same time, the demand for flexible on-demand production of individual objects is continuously increasing. These requirements can only be met if inspection systems provide appropriate answers. One solution is the use of flexible, multi-sensor setups in which multiple optical sensors with different fields of application are combined in one system. The challenge is then to assist the user in planning the inspection of individual parts, since manual planning requires expert knowledge of the performance and functionality of every sensor. Software assistant systems therefore help the user to objectively select the right sensors for a given inspection task. The planning step becomes still more difficult if the manufactured part has a complex form: a sensor's position must also be part of the planning process, since it significantly influences the quality of the inspection. This paper describes a view and sensor planning approach for a multi-sensor surface inspection system in the context of optical topography measurements in the micro- and meso-scale range. In order to realize online processing of the assistant system, a significant part of the calculations is done on the graphics processing unit (GPU).

  17. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array comprises two-dimensional processing elements with a compact full-custom distributed memory, and it supports real-time reconfiguration between the PE array and a self-organized map (SOM) neural network. The vision processor is fabricated in a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly, to one third of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design functions correctly.

  18. Monolithic optical phased-array transceiver in a standard SOI CMOS process.

    PubMed

    Abediasl, Hooman; Hashemi, Hossein

    2015-03-09

    Monolithic microwave phased arrays are becoming mainstream in automotive radars and high-speed wireless communications, fulfilling Gordon Moore's 1965 prophecy to this effect. Optical phased arrays enable imaging, lidar, display, sensing, and holography. Advancements in fabrication technology have led to monolithic nanophotonic phased arrays, albeit without independent phase and amplitude control of each element, without integration with electronic circuitry, and without combined receive and transmit functions. We report the first monolithic optical phased-array transceiver with independent control of amplitude and phase for each element, using electronic circuitry that is tightly integrated with the nanophotonic components on one substrate in a commercial foundry SOI CMOS process. The 8 × 8 phased-array chip includes thermo-optical tunable phase shifters and attenuators, nanophotonic antennas, and dedicated control electronics realized using CMOS transistors. The chip comprises over 300 distinct optical components and over 74,000 distinct electrical components, achieving the highest level of integration for any electronic-photonic system.

  19. Airborne Multisensor Pod System (AMPS) data management overview

    SciTech Connect

    Wiberg, J.D.; Blough, D.K.; Daugherty, W.R.; Hucks, J.A.; Gerhardstein, L.H.; Meitzler, W.D.; Melton, R.B.; Shoemaker, S.V.

    1994-09-01

    An overview of the Data Management Plan for the Airborne Multisensor Pod System (AMPS) program is provided in this document. The Pacific Northwest Laboratory (PNL) has been assigned responsibility for data management for the program, which includes defining procedures for data management and data quality assessment. Data management is defined as the process of planning, acquiring, organizing, qualifying, and disseminating data. The AMPS program was established by the U.S. Department of Energy (DOE), Office of Arms Control and Non-Proliferation (DOE/AN), and is integrated into the overall DOE AN-10.1 technology development program. Sensors used for collecting the data were developed under the on-site inspection, effluent analysis, and standoff sensor programs, and the AMPS program interacts with other technology programs of DOE/NN-20. This research will be conducted by both government and private industry. AMPS is a research and development program and is not intended for operational deployment, although the sensors and techniques developed could be used in follow-on operational systems. For a complete description of the AMPS program, see "Airborne Multisensor Pod System (AMPS) Program Plan." The primary purpose of the AMPS is to collect high-quality multisensor data to be used in data fusion research, to reduce interpretation problems associated with data overload, and to derive better information than can be obtained from any single sensor. To collect the data for the program, three wing-mounted pods containing instruments with sensors for collecting data will be flight certified on a U.S. Navy RP-3A aircraft. Secondary objectives of the AMPS program are sensor development and technology demonstration. Pod system integrators and instrument developers will be interested in the performance of their deployed sensors and their supporting data acquisition equipment.

  20. Tracing Crop Nitrogen Dynamics on the Field-Scale by Combining Multisensoral EO Data with an Integrated Process Model- A Validation Experiment for Cereals in Southern Germany

    NASA Astrophysics Data System (ADS)

    Hank, Tobias B.; Bach, Heike; Danner, Martin; Hodrius, Martina; Mauser, Wolfram

    2016-08-01

    Nitrogen, as the basic element for the construction of plant proteins and pigments, is one of the most important production factors in agricultural cultivation. High-resolution and near-real-time information on the nitrogen status of the soil is therefore of great interest for economically and ecologically optimized fertilizer planning and application. Unfortunately, nitrogen storage in the soil column cannot be directly observed with Earth Observation (EO) instruments. Advanced EO-supported process modelling approaches must therefore be applied that allow tracing the spatiotemporal dynamics of nitrogen transformation, translocation, and transport in the soil and in the canopy. Before these models can be applied as decision support tools for smart farming, they must be carefully parameterized and validated. This study applies an advanced land surface process model (PROMET) to selected winter cereal fields in Southern Germany and correlates the model outputs with destructively sampled nitrogen data from the growing season of 2015 (17 sampling dates, 8 sample locations). The spatial parametrization of the process model is supported by assimilating eight satellite images (five Landsat 8 OLI and three RapidEye scenes). It was found that the model is capable of realistically tracing the temporal and spatial dynamics of aboveground nitrogen uptake and allocation (R² = 0.84, RMSE = 31.3 kg ha⁻¹).

  1. Multisensor multiresolution data fusion for improvement in classification

    NASA Astrophysics Data System (ADS)

    Rubeena, V.; Tiwari, K. C.

    2016-04-01

    The rapid advancement of technology has made multisensor and multiresolution remote sensing data readily available. Such data contain complementary information, and their fusion may yield application-dependent information that would otherwise remain trapped within the individual datasets. The present work aims at improving classification by fusing features of coarse-resolution (1 m) hyperspectral LWIR and fine-resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil, and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling by interpolation to register the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. For the fine-resolution RGB data, a vegetation index is computed for classifying the vegetation class and a morphological building index is calculated for buildings. To extract textural features, occurrence and co-occurrence statistics are considered and features are extracted from all three bands of the RGB data. After feature extraction, a Support Vector Machine (SVM) is used for training and classification. To increase the classification accuracy, post-processing is applied: spurious noise such as salt-and-pepper noise is removed, followed by a filtering step based on majority voting within objects for better object classification.
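
    The workflow described above (feature-level fusion, SVM classification, majority-vote post-processing) can be sketched as follows. This is a minimal illustration, not the authors' implementation: array names, shapes, and SVM settings are assumptions, and the thresholded object map stands in for whatever segmentation the paper uses.

```python
import numpy as np
from sklearn.svm import SVC  # assumes scikit-learn is available

def classify_fused(lwir_feats, rgb_feats, train_mask, train_labels):
    """lwir_feats, rgb_feats: (H, W, B) feature cubes co-registered to the same
    20 cm grid; train_mask: boolean (H, W); train_labels: (H, W) integer labels."""
    X = np.concatenate([lwir_feats, rgb_feats], axis=-1)   # feature-level fusion
    H, W, B = X.shape
    Xf = X.reshape(-1, B)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")          # illustrative settings
    clf.fit(Xf[train_mask.ravel()], train_labels[train_mask])
    return clf.predict(Xf).reshape(H, W)

def majority_vote(label_map, objects):
    """Replace each object's labels by the most frequent label inside it.
    objects: (H, W) integer segment ids, 0 = background (assumed segmentation)."""
    out = label_map.copy()
    for obj_id in np.unique(objects):
        if obj_id == 0:
            continue
        sel = objects == obj_id
        out[sel] = np.bincount(label_map[sel]).argmax()
    return out
```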

  2. Advanced techniques for array processing. Final report, 1 Mar 89-30 Apr 91

    SciTech Connect

    Friedlander, B.

    1991-05-30

    Array processing technology is expected to be a key element in communication systems designed for the crowded and hostile environment of the future battlefield. While advanced array processing techniques have been under development for some time, their practical use has been very limited. This project addressed some of the issues which need to be resolved for a successful transition of these promising techniques from theory into practice. The main problem which was studied was that of finding the directions of multiple co-channel transmitters from measurements collected by an antenna array. Two key issues related to high-resolution direction finding were addressed: effects of system calibration errors, and effects of correlation between the received signals due to multipath propagation. A number of useful theoretical performance analysis results were derived, and computationally efficient direction estimation algorithms were developed. These results include: self-calibration techniques for antenna arrays, sensitivity analysis for high-resolution direction finding, extensions of the root-MUSIC algorithm to arbitrary arrays and to arrays with polarization diversity, and new techniques for direction finding in the presence of multipath based on array interpolation. (Author)
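
    As background to the high-resolution direction finding discussed above, the sketch below computes a conventional MUSIC pseudospectrum for a uniform linear array. It is a generic textbook illustration, not the report's self-calibration, root-MUSIC, or array-interpolation algorithms; element spacing, source count, and angle grid are assumed.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """X: (n_elements, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    n_elem = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = eigvec[:, : n_elem - n_sources]             # noise subspace
    k = np.arange(n_elem)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(angles)))  # steering matrix
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return angles, 1.0 / denom                       # peaks indicate estimated DOAs
```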

  3. Programmable hyperspectral image mapper with on-array processing

    NASA Technical Reports Server (NTRS)

    Cutts, James A. (Inventor)

    1995-01-01

    A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction; an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time; an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time; and an electronic controller connected to the focal plane, to the electronic shutter, and to the electronic spectral filter for controlling (1) the controllable rate at which the recording pixels are transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band, so as to record a selected portion of the scene through M spectral bands with a respective exposure time t(sub q) for each respective spectral band q.

  4. Model-based Processing of Microcantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Sinensky, A K; Lee, C L; Rudd, R E; Burnham, A K

    2005-04-27

    We have developed a model-based processor (MBP) for a microcantilever-array sensor to detect target species in solution. We perform a proof-of-concept experiment, fit model parameters to the measured data, and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: averaged deflection data and multi-channel data. For this evaluation we extract model parameters via model-based estimation, perform a Gauss-Markov simulation, design the optimal MBP, and apply it to measured experimental data. The performance of the MBP in the multi-channel case is evaluated by comparison to a "smoother" (averager) typically used for microcantilever signal analysis. It is shown that the MBP not only provides a significant gain (approximately 80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, apart from a correctable systematic bias error.
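
    To make the Gauss-Markov / model-based idea concrete, the sketch below simulates a scalar Gauss-Markov deflection model and runs a standard Kalman filter as a generic stand-in for the model-based estimator. The parameters a, q, and r are illustrative, not the values fitted in the paper, and the paper's actual MBP design is not reproduced here.

```python
import numpy as np

def simulate_gauss_markov(a, q, r, x0, n, seed=0):
    """Simulate x[k+1] = a*x[k] + w, y[k] = x[k] + v with w~N(0,q), v~N(0,r)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    y = np.empty(n)
    x[0] = x0
    for k in range(n):
        if k > 0:
            x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))
        y[k] = x[k] + rng.normal(0.0, np.sqrt(r))
    return x, y

def kalman_estimate(y, a, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter acting as a generic model-based processor."""
    xhat, p, out = x0, p0, []
    for yk in y:
        xhat, p = a * xhat, a * a * p + q            # predict
        k_gain = p / (p + r)                         # update
        xhat, p = xhat + k_gain * (yk - xhat), (1 - k_gain) * p
        out.append(xhat)
    return np.array(out)
```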

  5. Signal and array processing techniques for RFID readers

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

    Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement directions of tagged RFID items through a portal are of interest and must be determined. Doppler estimation may prove complicated or impractical for RFID readers to perform. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. In particular, we examine the use of a pair of RFID antennas to track moving RFID-tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way of identifying the direction of the tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments are conducted in the Radar Imaging and RFID Labs at Villanova University to validate the proposed technique for estimating the direction of target movement.
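
    A minimal sketch of the two-antenna idea described above: the phase of the spatial cross-correlation between the two reader antennas yields a single far-field angle estimate, and the drift of that angle over time indicates the tag's direction of travel. Antenna spacing, wavelength, and the far-field plane-wave assumption are illustrative simplifications, not the paper's calibrated setup.

```python
import numpy as np

def tag_angle(x1, x2, spacing, wavelength):
    """x1, x2: complex baseband samples from the two reader antennas (same length).
    Returns the estimated arrival angle in degrees under a far-field model."""
    r12 = np.mean(x1 * np.conj(x2))                  # spatial cross-correlation
    phase = np.angle(r12)
    sin_theta = phase * wavelength / (2 * np.pi * spacing)
    return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))

# A tag crossing the portal shows a monotonic drift of the estimated angle,
# e.g. from negative to positive values for one direction of travel.
```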

  6. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will
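
    The map/reduce-over-arrays pattern described above can be illustrated with a small, self-contained sketch: each granule is mapped to partial sums, and the partials are reduced into a climatology mean. This is not SciReduce's actual API; the file layout, the variable name "water_vapor", and the use of multiprocessing and the netCDF4 package are assumptions for illustration only.

```python
import glob
import numpy as np
from multiprocessing import Pool
from netCDF4 import Dataset  # assumes the netCDF4 package is installed

def map_granule(path):
    """Map step: reduce one granule to (sum, valid-count) for one variable."""
    with Dataset(path) as nc:
        wv = np.ma.filled(nc.variables["water_vapor"][:], np.nan).astype(np.float64)
    return np.nansum(wv, axis=0), np.sum(np.isfinite(wv), axis=0)

def reduce_partials(partials):
    """Reduce step: combine per-granule partial sums into a mean climatology."""
    total = sum(p[0] for p in partials)
    count = sum(p[1] for p in partials)
    return total / np.maximum(count, 1)

if __name__ == "__main__":
    files = sorted(glob.glob("shards/airs_*.nc4"))   # hypothetical shard layout
    with Pool() as pool:
        climatology = reduce_partials(pool.map(map_granule, files))
```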

  7. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2013-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  8. Site Specific Evaluation of Multisensor Capacitance Probes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Multisensor capacitance probes (MCPs) are widely used for measuring soil water content (SWC) at the field scale. Although manufacturers supply a generic MCP calibration, many researchers recognize that MCPs should be calibrated for specific field conditions. MCP measurements are typically associa...

  9. Multisensor fusion for 3-D defect characterization using wavelet basis function neural networks

    NASA Astrophysics Data System (ADS)

    Lim, Jaein; Udpa, Satish S.; Udpa, Lalita; Afzal, Muhammad

    2001-04-01

    The primary objective of multi-sensor data fusion, which offers both quantitative and qualitative benefits, is to draw inferences that may not be feasible with data from a single sensor alone. In this paper, data from two sets of sensors are fused to estimate the defect profile from magnetic flux leakage (MFL) inspection data. The two sensors measure the axial and circumferential components of the MFL. Data is fused at the signal level. If the flux is oriented axially, the samples of the axial signal are measured along a direction parallel to the flaw, while the circumferential signal is measured in a direction that is perpendicular to the flaw. The two signals are combined as the real and imaginary components of a complex valued signal. Signals from an array of sensors are arranged in contiguous rows to obtain a complex valued image. A boundary extraction algorithm is used to extract the defect areas in the image. Signals from the defect regions are then processed to minimize noise and the effects of lift-off. Finally, a wavelet basis function (WBF) neural network is employed to map the complex valued image appropriately to obtain the geometrical profile of the defect. The feasibility of the approach was evaluated using the data obtained from the MFL inspection of natural gas transmission pipelines. Results show the effectiveness of the approach.
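
    The signal-level fusion step described above can be sketched in a few lines: the axial and circumferential MFL components become the real and imaginary parts of one complex-valued image, from which candidate defect regions are extracted. The simple magnitude threshold below is a stand-in assumption for the paper's boundary-extraction algorithm, and the WBF neural network mapping is not reproduced.

```python
import numpy as np

def fuse_mfl(axial, circumferential):
    """axial, circumferential: (n_sensors, n_samples) real-valued MFL scans,
    arranged so sensor rows are contiguous. Returns a complex-valued image."""
    return axial + 1j * circumferential

def defect_mask(fused, k=3.0):
    """Flag pixels whose magnitude exceeds mean + k * std (illustrative threshold)."""
    mag = np.abs(fused)
    return mag > mag.mean() + k * mag.std()
```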

  10. Application of high-precision matching about multisensor in fast stereo imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Huijing; Zhou, Mei; Wu, Haohao; Zhang, Dandan

    2015-10-01

    High-precision matching of linear-array multi-sensors is the key to ensuring fast stereo imaging. This paper presents the general principle of active and passive imaging sensors, designs a high-precision matching calibration system for linear-array multi-sensors based on a large-diameter collimator combined with an assisted laser light source, and puts forward an optical-axis parallelism calibration technology suitable for linear-array active and passive imaging sensors. The technology uses an image acquisition system to locate the spot center in order to match the multi-linear-array laser receive and transmit optical axes. At the same time, linear visible light sources are used to extract the optical axis of the laser, completing the parallelism calibration between the laser receive and transmit optical axes of the multi-linear-array sensors and the active and passive optical axes. The matching relationship between the visible pixels and the laser radar detector elements can be obtained when this technique is used to calibrate the active and passive imaging sensor. This relationship was applied to a fast stereo imaging experiment with the active and passive imaging sensor and yielded good imaging results.

  11. An array microscope for ultrarapid virtual slide processing and telepathology. Design, fabrication, and validation study.

    PubMed

    Weinstein, Ronald S; Descour, Michael R; Liang, Chen; Barker, Gail; Scott, Katherine M; Richter, Lynne; Krupinski, Elizabeth A; Bhattacharyya, Achyut K; Davis, John R; Graham, Anna R; Rennels, Margaret; Russum, William C; Goodall, James F; Zhou, Pixuan; Olszak, Artur G; Williams, Bruce H; Wyant, James C; Bartels, Peter H

    2004-11-01

    This paper describes the design and fabrication of a novel array microscope for the first ultrarapid virtual slide processor (DMetrix DX-40 digital slide scanner). The array microscope optics consists of a stack of three 80-element 10 × 8 lenslet arrays, constituting a "lenslet array ensemble." The lenslet array ensemble is positioned over a glass slide. Uniquely shaped lenses in each of the lenslet arrays, arranged perpendicular to the glass slide, constitute a single "miniaturized microscope." A high-pixel-density image sensor is attached to the top of the lenslet array ensemble. In operation, the lenslet array ensemble is transported by a motorized mechanism relative to the long axis of a glass slide. Each of the 80 miniaturized microscopes has a lateral field of view of 250 microns. The microscopes of each row of the array are offset from the microscopes in other rows. Scanning a glass slide with the array microscope produces seamless two-dimensional image data of the entire slide, that is, a virtual slide. The optical system has a numerical aperture of NA = 0.65, scans slides at a rate of 3 mm per second, and accrues up to 3,000 images per second from each of the 80 miniaturized microscopes. In the ultrarapid virtual slide processing cycle, image acquisition takes 58 seconds for a 2.25 cm² tissue section. An automatic slide loader enables the scanner to process up to 40 slides per hour without operator intervention. Slide scanning and image processing are done concurrently so that post-scan processing is eliminated. A virtual slide can be viewed over the Internet immediately after the scanning is complete. A validation study compared the diagnostic accuracy of pathologist case readers using array microscopy (with images viewed as virtual slides) and conventional light microscopy. Four senior pathologists diagnosed 30 breast surgical pathology cases each using both imaging modes, but on separate occasions. Of 120 case reads by array microscopy

  12. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array

    PubMed Central

    Wang, Qi; Wang, Yingmin; Zhu, Guolei

    2016-01-01

    The receiver hydrophone array is the signal front-end and plays an important role in matched field processing, which usually requires coverage of the whole water column from the sea surface to the bottom. Such a large-aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small-aperture hydrophone array is proposed, which first decomposes the received acoustic fields into a depth-function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small-aperture array, so that the recalculated fields contain more environmental information. Finally, numerous numerical experiments with three small-aperture arrays are carried out in a classical shallow-water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small-aperture array, demonstrating that the proposed algorithm is effective. PMID:28042828
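
    The mode-amplitude estimation step described above reduces to a minimum-norm least-squares solve, sketched below with the pseudoinverse. The mode (depth-function) matrices are assumed inputs from a normal-mode model; this is a generic illustration of the idea, not the paper's full localization processor.

```python
import numpy as np

def recalc_field(p_small, Phi_small, Phi_full):
    """p_small: (n_small,) complex field measured at the small-aperture array;
    Phi_small: (n_small, n_modes) mode depth functions at the physical phones;
    Phi_full: (n_full, n_modes) mode depth functions over the full water column."""
    a_hat = np.linalg.pinv(Phi_small) @ p_small      # minimum-norm LS mode amplitudes
    return Phi_full @ a_hat                           # recalculated full-aperture field
```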

  13. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array.

    PubMed

    Wang, Qi; Wang, Yingmin; Zhu, Guolei

    2016-12-30

    The receiver hydrophone array is the signal front-end and plays an important role in matched field processing, which usually requires coverage of the whole water column from the sea surface to the bottom. Such a large-aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small-aperture hydrophone array is proposed, which first decomposes the received acoustic fields into a depth-function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small-aperture array, so that the recalculated fields contain more environmental information. Finally, numerous numerical experiments with three small-aperture arrays are carried out in a classical shallow-water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small-aperture array, demonstrating that the proposed algorithm is effective.

  14. A large-scale interactive one-dimensional array processing system. [for spectrophotometric data]

    NASA Technical Reports Server (NTRS)

    Clark, R. N.

    1980-01-01

    The work describes a scientist/user-oriented interactive program for processing one-dimensional arrays. It is shown that the program is oriented toward processing spectrophotometric astronomical data but can also be used for general 1-D array processing. Further, the program has fully free-format input with a sophisticated decoding capability that can cope with typographical and other possible mistakes. Finally, a description of the program is given to provide information on implementing a large-scale data-reduction facility.

  15. Electro-optical processing of phased array data

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1973-01-01

    An on-line spatial light modulator for application as the input transducer of a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with live radar data: the input returns were recorded on the input crystal and processed by the optical system, and the output-plane pattern was digitized, thresholded, and output to a display and to storage in the computer memory. The correlation of theoretical and experimental results is discussed.

  16. Assembly and Integration Process of the First High Density Detector Array for the Atacama Cosmology Telescope

    NASA Technical Reports Server (NTRS)

    Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.

    2016-01-01

    The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to much denser detector packing and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation of CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.

  17. Assembly and integration process of the first high density detector array for the Atacama Cosmology Telescope

    NASA Astrophysics Data System (ADS)

    Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Ward, Jonathan; Schmitt, Benjamin L.; Henderson, Shawn; Koopman, Brian J.; Gallardo, Patricio A.; Vavagiakis, Eve M.; Niemack, Michael D.; McMahon, Jeff; Duff, Shannon M.; Schillaci, Alessandro; Hubmayr, Johannes; Hilton, Gene C.; Beall, James A.; Wollack, Edward J.

    2016-07-01

    The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to much denser detector packing and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation of CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.

  18. Quality assessment for multitemporal and multisensor image fusion

    NASA Astrophysics Data System (ADS)

    Ehlers, Manfred; Klonus, Sascha

    2008-10-01

    Generally, image fusion methods are classified into three levels: pixel level (iconic), feature level (symbolic), and knowledge or decision level. In this paper we focus on iconic techniques for image fusion. A number of established fusion techniques exist that can be used to merge high-spatial-resolution panchromatic and lower-spatial-resolution multispectral images recorded simultaneously by one sensor, in order to create high-resolution multispectral image datasets (pansharpening). In most cases, these techniques provide very good results, i.e., they retain the high spatial resolution of the panchromatic image and the spectral information of the multispectral image. When applied to multitemporal and/or multisensor image data, however, these techniques still create spatially enhanced datasets, but usually at the expense of spectral consistency. In this study, a series of nine multitemporal multispectral remote sensing images (seven SPOT scenes and one FORMOSAT scene) is fused with one panchromatic Ikonos image. A number of techniques are employed to analyze the quality of the fusion process. The images are evaluated visually and quantitatively for preservation of spectral characteristics and for improvement of spatial resolution. Overall, the Ehlers fusion, which was developed to preserve spectral characteristics in multi-date and multi-sensor fusion, showed the best results. The Ehlers fusion not only proved superior to all other tested algorithms but was also the only one that guaranteed excellent color preservation for all dates and sensors.
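
    One common quantitative check for the spectral-preservation question studied above is the per-band correlation between the fused image and the original multispectral image resampled to the fused grid. The sketch below is a generic metric, offered only as an illustration of such quantitative evaluation; it is not the specific evaluation protocol or set of metrics used in the paper.

```python
import numpy as np

def band_correlations(fused, reference):
    """fused, reference: (H, W, bands) arrays on the same grid and band order.
    Returns one correlation coefficient per band (1.0 = perfect preservation)."""
    corrs = []
    for b in range(fused.shape[-1]):
        f = fused[..., b].ravel()
        r = reference[..., b].ravel()
        corrs.append(np.corrcoef(f, r)[0, 1])
    return np.array(corrs)
```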

  19. Array Processing for Radar Clutter Reduction and Imaging of Ice-Bed Interface

    NASA Astrophysics Data System (ADS)

    Gogineni, P.; Leuschen, C.; Li, J.; Hoch, A.; Rodriguez-Morales, F.; Ledford, J.; Jezek, K.

    2007-12-01

    A major challenge in sounding of fast-flowing glaciers in Greenland and Antarctica is surface clutter, which masks weak returns from the ice-bed interface. Surface clutter is also a major problem in sounding and imaging sub-surface interfaces on Mars and other planets. We successfully applied array-processing techniques to reduce clutter and image ice-bed interfaces of polar ice sheets. These techniques and tools have potential applications to planetary observations. We developed a radar with array-processing capability to measure the thickness of fast-flowing outlet glaciers and image the ice-bed interface. The radar operates over the frequency range from 140 to 160 MHz with about 800 W peak transmit power and with transmit and receive antenna arrays. The radar is designed such that pulse width and duration are programmable. The transmit-antenna array is fed with a beamshaping network to obtain low sidelobes. We designed the receiver such that it can process and digitize signals for each element of an eight-channel array. We collected data over several fast-flowing glaciers using a five-element antenna array, limited by the available hardpoints to mount antennas, on a Twin Otter aircraft during the 2006 field season and a four-element array on a NASA P-3 aircraft during the 2007 field season. We used both adaptive and non-adaptive signal-processing algorithms to reduce clutter. We collected data over the Jacobshavn Isbrae and other fast-flowing outlet glaciers, and successfully measured the ice thickness and imaged the ice-bed interface. In this paper, we will provide a brief description of the radar, discuss clutter-reduction algorithms, present sample results, and discuss the application of these techniques to planetary observations.
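
    A generic narrowband delay-and-sum (non-adaptive) beamforming sketch of the kind of clutter suppression mentioned above: the receive channels are phase-aligned toward nadir so that off-nadir surface clutter adds incoherently. Element positions, wavelength, and steering angle are illustrative assumptions; the adaptive algorithms used in the paper are not reproduced here.

```python
import numpy as np

def steer_and_sum(channels, elem_x, wavelength, angle_deg=0.0):
    """channels: (n_elements, n_samples) complex channel records;
    elem_x: element positions along the cross-track axis (m)."""
    phase = 2 * np.pi * elem_x * np.sin(np.deg2rad(angle_deg)) / wavelength
    weights = np.exp(-1j * phase) / len(elem_x)       # steering weights toward angle_deg
    return weights @ channels                         # (n_samples,) beamformed trace
```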

  20. Asymmetric magnetization reversal process in Co nanohill arrays

    SciTech Connect

    Rosa, W. O.; Martinez, L.; Jaafar, M.; Asenjo, A.; Vazquez, M.

    2009-11-15

    Co thin films deposited by sputtering onto a nanostructured polymer [poly(methyl methacrylate)] were prepared following a replica-antireplica process based on a porous alumina membrane. In addition, different capping layers were deposited onto the Co nanohills. Morphological and compositional analysis was performed by atomic force microscopy and x-ray photoemission spectroscopy to obtain information about the surface characteristics. The observed asymmetry in the magnetization reversal process at low temperatures is ascribed to the exchange bias generated by the ferromagnetic-antiferromagnetic interface promoted by the presence of Co oxide detected in all the samples. Especially relevant is the case of the Cr capping, where an enhanced magnetic anisotropy at the Co/Cr interface is deduced.

  1. Design, processing and testing of LSI arrays: Hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.

    1979-01-01

    Mathematical cost factors were generated for both hybrid microcircuit and printed wiring board packaging methods. A mathematical cost model was created for analysis of microcircuit fabrication costs. The costing factors were refined and reduced to formulae for computerization. Efficient methods were investigated for low-cost packaging of LSI devices as a function of density and reliability. Technical problem areas such as wafer bumping, inner/outer lead bonding, testing on tape, and tape processing were investigated.

  2. Application of Kalman Filter on Multisensor Fusion Tracking

    DTIC Science & Technology

    1992-12-01

    Only the report documentation page of this Naval Postgraduate School (Monterey, California) thesis was indexed for this record (accession number AD-A257 335); no abstract text was recovered. Recoverable descriptors: title "Application of Kalman Filter on Multisensor Fusion Tracking"; subject terms fusion, Kalman filter, multisensor fusion tracking.

  3. Efficient Data Capture and Post-Processing for Real-Time Imaging Using AN Ultrasonic Array

    NASA Astrophysics Data System (ADS)

    Moreau, L.; Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2010-02-01

    Over the past few years, ultrasonic phased arrays have shown good potential for nondestructive testing (NDT), thanks to high resolution imaging algorithms. Many algorithms are based on the full matrix capture, obtained by firing each element of an ultrasonic array independently, and collecting the data with all elements. Because of the finite sound velocity in the specimen, two consecutive firings must be separated by a minimum time interval. Therefore, more array elements require longer data acquisition times. Moreover, if the array has N elements, then the full matrix contains N² temporal signals to be processed. Because of the limited calculation speed of current computers, a large matrix of data can result in long post-processing times. In an industrial context where real-time imaging is desirable, it is crucial to reduce acquisition and/or post-processing times. This paper investigates methods designed to reduce acquisition and post-processing times for the total focusing method and wavenumber imaging algorithms. Limited transmission cycles are used to reduce data capture and post-processing. Post-processing times are further reduced by demodulating the data to temporal baseband frequencies.
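
    For context, a minimal total focusing method (TFM) sketch over full-matrix-capture data is given below: every transmit-receive pair contributes the sample at its round-trip delay to each image pixel, which is why the N² signals dominate the post-processing cost discussed above. Geometry, sound speed, and sampling rate are illustrative assumptions, and the paper's acquisition-reduction and baseband-demodulation schemes are not included.

```python
import numpy as np

def tfm(fmc, elem_x, grid_x, grid_z, c, fs):
    """fmc: (n_tx, n_rx, n_samples) full matrix capture; elem_x: element x-positions (m);
    grid_x, grid_z: 1-D pixel coordinates (m); c: sound speed (m/s); fs: sample rate (Hz)."""
    X, Z = np.meshgrid(grid_x, grid_z)                # (nz, nx) image grid
    image = np.zeros_like(X)
    n_samples = fmc.shape[-1]
    # distance from every element to every pixel
    dist = np.sqrt((X[None] - elem_x[:, None, None]) ** 2 + Z[None] ** 2)
    for tx in range(len(elem_x)):
        for rx in range(len(elem_x)):
            idx = np.rint((dist[tx] + dist[rx]) / c * fs).astype(int)
            idx = np.clip(idx, 0, n_samples - 1)
            image += fmc[tx, rx, idx]                 # delay-and-sum over all pairs
    return np.abs(image)
```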

  4. Automated Source Depth Estimation Using Array Processing Techniques

    DTIC Science & Technology

    2009-10-14

    Only fragments of the report documentation page were indexed for this record; no abstract text was recovered. Recoverable details: authors W.N. Junek, J. Roman-Nieves, R.C. Kemerait, M.T. Woods, and J.P. Creasey; dated 14 October 2009; approved for public release with unlimited distribution; related presentations include the 2006 American Geophysical Union Conference, San Francisco, CA.

  5. Design, processing, and testing of LSI arrays for space station

    NASA Technical Reports Server (NTRS)

    Lile, W. R.; Hollingsworth, R. J.

    1972-01-01

    The design of a MOS 256-bit Random Access Memory (RAM) is discussed. Technological achievements comprise computer simulations that accurately predict performance; aluminum-gate COS/MOS devices including a 256-bit RAM with current sensing; and a silicon-gate process that is being used in the construction of a 256-bit RAM with voltage sensing. The Si-gate process increases speed by reducing the overlap capacitance between gate and source-drain, thus reducing the crossover capacitance and allowing shorter interconnections. The design of a Si-gate RAM, which is pin-for-pin compatible with an RCA bulk silicon COS/MOS memory (type TA 5974), is discussed in full. The Integrated Circuit Tester (ICT) is limited to dc evaluation, but the diagnostics and data collecting are under computer control. The Silicon-on-Sapphire Memory Evaluator (SOS-ME, previously called SOS Memory Exerciser) measures power supply drain and performs a minimum number of tests to establish operation of the memory devices. The Macrodata MD-100 is a microprogrammable tester which has capabilities of extensive testing at speeds up to 5 MHz. Beam-lead technology was successfully integrated with SOS technology to make a simple device with beam leads. This device and the scribing are discussed.

  6. Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge

    1999-01-01

    A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.

  7. A novel substrate for multisensor hyperspectral imaging.

    PubMed

    Ofner, J; Kirschner, J; Eitenberger, E; Friedbacher, G; Kasper-Giebl, A; Lohninger, H; Eisenmenger-Sittner, C; Lendl, B

    2017-03-01

    The quality of chemical imaging, especially multisensor hyperspectral imaging, strongly depends on sample preparation techniques and instrumental infrastructure but also on the choice of an appropriate imaging substrate. To optimize the combined imaging of Raman microspectroscopy, scanning-electron microscopy and energy-dispersive X-ray spectroscopy, a novel substrate was developed based on sputtering of highly purified aluminium onto classical microscope slides. The novel aluminium substrate overcomes several disadvantages of classical substrates like impurities of the substrate material and contamination of the surface as well as surface roughness and homogeneity. Therefore, it provides excellent conditions for various hyperspectral imaging techniques and enables high-quality multisensor hyperspectral chemical imaging at submicron lateral resolutions.

  8. Adaptive multisensor fusion for planetary exploration rovers

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri

    1992-01-01

    The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the perception needs of space robotics. Based on knowledge of the illumination conditions and of the sensors' capabilities, the designed perception system should automatically select the best subset of sensors and the sensing modalities that will allow perception and interpretation of the environment. Then, based on theoretical reflectance and emittance models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design, and first results of the multisensor perception system are presented.

  9. DAMAS Processing for a Phased Array Study in the NASA Langley Jet Noise Laboratory

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.; Plassman, Gerald E.

    2010-01-01

    A jet noise measurement study was conducted using a phased microphone array system for a range of jet nozzle configurations and flow conditions. The test effort included convergent and convergent/divergent single-flow nozzles, as well as conventional and chevron dual-flow core and fan configurations. Cold jets were tested with and without wind tunnel co-flow, whereas hot jets were tested only with co-flow. The intent of the measurement effort was to allow evaluation of new phased array technologies for their ability to separate and quantify distributions of jet noise sources. In the present paper, the array post-processing method focused upon is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) for the quantitative determination of spatial distributions of noise sources. Jet noise is highly complex, with stationary and convecting noise sources, convecting flows that are themselves sources, and shock-related and screech noise for supersonic flow. The analysis presented in this paper addresses some processing details with DAMAS for the array positioned at 90° (normal) to the jet. The paper demonstrates the applicability of DAMAS and how it indicates when strong coherence is present. Also, a new approach to calibrating the array focus and position is introduced and demonstrated.
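
    A much-simplified sketch of the DAMAS idea referenced above: the conventional beamform map Y is modeled as A @ X, where A holds the array point-spread functions on the scan grid, and the non-negative source distribution X is recovered by Gauss-Seidel iteration. The matrices, grid, and iteration count are assumed inputs; this generic illustration is not the paper's calibrated implementation or its coherence analysis.

```python
import numpy as np

def damas(Y, A, n_iter=100):
    """Y: (n_grid,) beamform output map; A: (n_grid, n_grid) point-spread matrix.
    Iteratively solves A @ X = Y subject to X >= 0 (Gauss-Seidel sweeps)."""
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        for n in range(len(Y)):
            # solve the n-th equation for X[n], holding the other components fixed
            residual = Y[n] - A[n] @ X + A[n, n] * X[n]
            X[n] = max(residual / A[n, n], 0.0)
    return X
```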

  10. Multisensor benchmark data for riot control

    NASA Astrophysics Data System (ADS)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to de-escalation of the situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be proven. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  11. Effects of process parameters on the molding quality of the micro-needle array

    NASA Astrophysics Data System (ADS)

    Qiu, Z. J.; Ma, Z.; Gao, S.

    2016-07-01

    The micro-needle array, used in medical applications, is a typical injection-molded product with microstructures. Due to its tiny micro-feature sizes and high aspect ratios, it is prone to short-shot defects, leading to poor molding quality. The injection molding process of the micro-needle array was studied in this paper to find the effects of the process parameters on molding quality and to provide theoretical guidance for the practical production of high-quality products. With the shrinkage ratio and warpage of the micro-needles as the evaluation indices of molding quality, an orthogonal experiment was conducted and an analysis of variance was carried out. From the results, the contribution rates were calculated to determine the influence of the various process parameters on molding quality. The single-parameter method was used to analyse the main process parameter. It was found that the contribution rates of the holding pressure to shrinkage ratio and warpage reached 83.55% and 94.71%, respectively, far higher than those of the other parameters. The study revealed that the holding pressure is the main factor affecting the molding quality of the micro-needle array and should therefore be the focus in order to obtain plastic parts of high quality in practical production.
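
    A hedged sketch of the contribution-rate calculation mentioned above: for an orthogonal experiment, each factor's sum of squares is computed from its level means and expressed as a percentage of the total sum of squares. The factor/level layout and the omission of an explicit error term are simplifying assumptions for illustration, not the paper's exact ANOVA table.

```python
import numpy as np

def contribution_rates(levels, response):
    """levels: (n_runs, n_factors) integer level codes from the orthogonal array;
    response: (n_runs,) measured quality index (e.g. warpage or shrinkage ratio)."""
    grand_mean = response.mean()
    ss_total = np.sum((response - grand_mean) ** 2)
    rates = []
    for f in range(levels.shape[1]):
        ss_factor = 0.0
        for lev in np.unique(levels[:, f]):
            sel = levels[:, f] == lev
            ss_factor += sel.sum() * (response[sel].mean() - grand_mean) ** 2
        rates.append(100.0 * ss_factor / ss_total)
    return np.array(rates)                            # percent contribution per factor
```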

  12. An Undergraduate Course and Laboratory in Digital Signal Processing with Field Programmable Gate Arrays

    ERIC Educational Resources Information Center

    Meyer-Base, U.; Vera, A.; Meyer-Base, A.; Pattichis, M. S.; Perry, R. J.

    2010-01-01

    In this paper, an innovative educational approach to introducing undergraduates to both digital signal processing (DSP) and field programmable gate array (FPGA)-based design in a one-semester course and laboratory is described. While both DSP and FPGA-based courses are currently present in different curricula, this integrated approach reduces the…

  13. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared in terms of manufacturing costs. An overview of this methodology is presented.

  14. Rapid prototyping process using linear array of high-power laser diodes

    NASA Astrophysics Data System (ADS)

    Zhu, Linquan; Cheng, Jun; Zhou, Hanchang

    2000-02-01

    Because of the weak points of the SLS spot-scanning process, a new rapid prototyping process, SLS line scanning using a linear array of high-power laser diodes as the energy source, is investigated in this paper. A linear array of the requisite length is formed from several high-power laser diodes that can be driven individually. The beams of the linear array are transferred to the workpiece and imaged as short line segments by the corresponding optical collimators; these segments line up into a single linear laser beam without gaps, whose length equals that of the linear diode array. When sintering powdered material, the linear laser beam scans in one direction, along the x axis only. If the maximum line length is less than the y-axis dimension of the workpiece, the linear laser beam must be overlapped several times along the y axis. A scanning mode with simultaneous x-y guideways is used in this new system, which differs entirely from a vibrating-mirror scan, whose scanning trace is an arc that degrades processing quality. The new process offers higher efficiency and better quality than the traditional spot-scanning method.

  15. Array Signal Processing Under Model Errors With Application to Speech Separation

    DTIC Science & Technology

    1992-10-31

    Only fragments of the report's reference list were indexed for this record; no abstract text was recovered. Recoverable citations include E. A. Patrick, Fundamentals of Pattern Recognition, Englewood Cliffs, NJ, 1972, and S. U. Pillai, Array Signal Processing, Springer-Verlag, NY, 1989.

  16. Assembly, integration, and verification (AIV) in ALMA: series processing of array elements

    NASA Astrophysics Data System (ADS)

    Lopez, Bernhard; Jager, Rieks; Whyborn, Nicholas D.; Knee, Lewis B. G.; McMullin, Joseph P.

    2012-09-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and East Asia, in collaboration with the Republic of Chile. ALMA will consist of at least 54 twelve-meter antennas and 12 seven-meter antennas operating as an aperture synthesis array in the (sub)millimeter wavelength range. It is the responsibility of ALMA AIV to deliver the fully assembled, integrated, and verified antennas (array elements) to the telescope array. After an initial phase of infrastructure setup, AIV activities began when the first ALMA antenna and subsystems became available in mid 2008. During the second semester of 2009, a project-wide effort was made to put into operation a first three-antenna interferometer at the Array Operations Site (AOS). In 2010 the AIV focus was the transition from event-driven activities towards routine series production. Also, due to the ramp-up of operations activities, AIV underwent an organizational change from an autonomous department into a project within a strong matrix management structure. When the subsystem deliveries stabilized in early 2011, steady-state series processing could be achieved in an efficient and reliable manner. The challenge today is to maintain this production pace until completion towards the end of 2013. This paper describes the way ALMA AIV evolved successfully from the initial phase to the present steady state of array element series processing. It elaborates on the different project phases and their relationships, presents processing statistics, illustrates the lessons learned and relevant best practices, and concludes with an outlook of the path towards completion.

  17. Adaptive and mobile ground sensor array.

    SciTech Connect

    Holzrichter, Michael Warren; O'Rourke, William T.; Zenner, Jennifer; Maish, Alexander B.

    2003-12-01

    The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.

  18. A novel method using adaptive hidden semi-Markov model for multi-sensor monitoring equipment health prognosis

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Dong, Ming; Lv, Wenyuan; Geng, Xiuli; Li, Yupeng

    2015-12-01

    Health prognosis for equipment is considered a key process of the condition-based maintenance strategy. This paper presents an integrated framework for multi-sensor equipment diagnosis and prognosis based on an adaptive hidden semi-Markov model (AHSMM). Unlike the hidden semi-Markov model (HSMM), the basic algorithms of an AHSMM are first modified in order to decrease computational and space complexity. Then, the maximum likelihood linear regression transformation method is used to train the output and duration distributions and to re-estimate all unknown parameters. The AHSMM is used to identify the hidden degradation state and to obtain the transition probabilities among health states and their durations. Finally, through the proposed hazard rate equations, the remaining useful life of equipment can be predicted using multi-sensor information. The main results are verified in a real-world application: monitoring hydraulic pumps from Caterpillar Inc. The results show that the proposed methods are more effective for health prognosis of multi-sensor monitored equipment.
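
    As a much-simplified illustration of the remaining-useful-life idea above: for a left-to-right degradation chain, the expected RUL from the current hidden state is the expected remaining sojourn in that state plus the mean durations of the states still to be traversed before failure. The AHSMM itself (MLLR re-estimation, hazard-rate equations) is not reproduced; the state ordering and duration inputs below are assumptions.

```python
import numpy as np

def expected_rul(current_state, elapsed_in_state, mean_durations):
    """mean_durations: mean sojourn time of each health state along a left-to-right
    degradation chain, with the failure state last; elapsed_in_state: time already
    spent in the current state."""
    remaining_here = max(mean_durations[current_state] - elapsed_in_state, 0.0)
    # add mean durations of the remaining healthy states (exclude the failure state)
    return remaining_here + float(np.sum(mean_durations[current_state + 1:-1]))
```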

  19. High Density Crossbar Arrays with Sub- 15 nm Single Cells via Liftoff Process Only

    PubMed Central

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing fabrication boundaries to their limits, leveraging an ever higher density of nano-devices towards reaching a 4F²/cell footprint in 3D arrays. Here, we study the limits of the liftoff process for achieving extremely dense nanowires while preserving thin-film quality. The proposed method is optimized for multiple-layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified as the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully processed to fabricate arrays of 1 Ki cells with sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer. PMID:27585643

  20. High-density through-wafer copper via array in insulating glass mold using reflow process

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Woo; Lee, Seung-Ki; Park, Jae-Hyoung

    2015-04-01

    A high-density through-wafer vertical copper via array in insulating glass interposer is demonstrated. The glass reflow and bottom-up filling copper electroplating process enables fabrication of a vertical through-wafer copper via array with high aspect ratio and high density. The minimum diameter of the copper vias and the gaps in between are 20 and 10 µm, respectively. Three failures among one hundred measurement points were detected within a 1 cm2 area of the via array, and the resistance of the 20-µm-diameter copper via was measured to be 153 ± 23 mΩ using the four-probe method. The optical transmittance and RF performance of the reflowed glass substrate were compared with those of bare glass. The reliability of the copper via in harsh environments was evaluated through thermal shock and pressure cooker tests.

  1. Enhancing four-wave-mixing processes by nanowire arrays coupled to a gold film.

    PubMed

    Poutrina, Ekaterina; Ciracì, Cristian; Gauthier, Daniel J; Smith, David R

    2012-05-07

    We consider the process of four-wave mixing in an array of gold nanowires strongly coupled to a gold film. Using full-wave simulations, we perform a quantitative comparison of the four-wave mixing efficiency associated with a bare film and films with nanowire arrays. We find that the strongly localized surface plasmon resonances of the coupled nanowires provide an additional local field enhancement that, along with the delocalized surface plasmon of the film, produces an overall four-wave mixing efficiency enhancement of up to six orders of magnitude over that of the bare film. The enhancement occurs over a wide range of excitation angles. The film-coupled nanowire array is easily amenable to nanofabrication, and could find application as an ultra-compact component for integrated photonic and quantum optic systems.

  2. High Density Crossbar Arrays with Sub-15 nm Single Cells via Liftoff Process Only

    NASA Astrophysics Data System (ADS)

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-09-01

    Emerging nano-scale technologies are pushing fabrication boundaries to their limits in pursuit of ever higher densities of nano-devices, towards reaching a 4F2/cell footprint in 3D arrays. Here, we study the limits of the liftoff process for achieving extremely dense nanowires while preserving thin-film quality. The proposed method is optimized for multiple-layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified as the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully fabricated to produce arrays of 1 Ki cells with sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer.

  3. Enhancement of Data Analysis Through Multisensor Data Fusion Technology

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Multi-sensor data fusion is an emerging technology that fuses data from multiple sensors in order to make a more accurate estimation of the environment through measurement and detection. Applications of multi-sensor data fusion cross a wide spectrum in military and civilian areas. With the rapid e...

  4. Hybrid Arrays for Chemical Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environmental monitoring for chemical detection, as well as other forms of situational awareness, have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is advancements in sensor technology and in the computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Llinas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  5. A survey of multi-sensor data fusion systems

    NASA Astrophysics Data System (ADS)

    Linn, R. J.; Hall, D. L.; Llinas, J.

    1991-08-01

    Multisensor data fusion integrates data from multiple sensors (and types of sensors) to perform inferences which are more accurate and specific than those from processing single-sensor data. Levels of inference range from target detection and identification to higher level situation assessment and threat assessment. This paper provides a survey of more than 50 data fusion systems and summarizes their application, development environment, system status and key techniques. The techniques are mapped to a taxonomy previously developed by Hall and Linn (1990); these include positional fusion techniques, such as association and estimation, and identity fusion methods, including statistical methods, nonparametric methods, and cognitive techniques (e.g. templating, knowledge-based systems, and fuzzy reasoning). An assessment of the state of fusion system development is provided.

  6. Multi-sensor data fusion framework for CNC machining monitoring

    NASA Astrophysics Data System (ADS)

    Duro, João A.; Padget, Julian A.; Bowen, Chris R.; Kim, H. Alicia; Nassehi, Aydin

    2016-01-01

    Reliable machining monitoring systems are essential for lowering production time and manufacturing costs. Existing, expensive monitoring systems focus on prevention/detection of tool malfunctions and provide information for process optimisation through force measurement. An alternative and cost-effective approach is to monitor acoustic emissions (AEs) from machining operations, which act as a robust proxy for such measurements. The limitations of AEs include high sensitivity to sensor position and cutting parameters. In this paper, a novel multi-sensor data fusion framework is proposed to enable identification of the best sensor locations for monitoring cutting operations, selection of the sensors that provide the best signal, and derivation of signals with an enhanced periodic component. Our experimental results reveal that by utilising the framework, and using only three sensors, signal interpretation improves substantially and the reliability of the monitoring system is enhanced for a wide range of machining parameters. The framework provides a route to overcoming the major limitations of AE-based monitoring.

  7. Reliability measurement during software development. [for a multisensor tracking system

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Sturm, W. A.; Trattner, S.

    1977-01-01

    During the development of database software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with individual run submissions rather than with the code proper. Possible applications of these findings for line management, project managers, functional management, and regulatory agencies are discussed. Steps for simplifying the measurement process and for using these data to predict operational software reliability are outlined.

  8. Direct Growth of Crystalline Tungsten Oxide Nanorod Arrays by a Hydrothermal Process and Their Electrochromic Properties

    NASA Astrophysics Data System (ADS)

    Lu, Chih-Hao; Hon, Min Hsiung; Leu, Ing-Chi

    2016-12-01

    Transparent crystalline tungsten oxide nanorod arrays for use as an electrochromic layer have been directly prepared on fluorine-doped tin oxide-coated glass via a facile tungsten film-assisted hydrothermal process using aqueous tungsten hexachloride solution. X-ray diffraction analysis and field-emission scanning electron microscopy were used to characterize the phase and morphology of the grown nanostructures. Arrays of tungsten oxide nanorods with diameter of ˜22 nm and length of ˜240 nm were obtained at 200°C after 8 h of hydrothermal reaction. We propose a growth mechanism for the deposition of the monoclinic tungsten oxide phase in the hydrothermal environment. The tungsten film was first oxidized to tungsten oxide to provide seed sites for crystal growth and address the poor connection between the growing tungsten oxide and substrate. Aligned tungsten oxide nanorod arrays can be grown by a W thin film-assisted heterogeneous nucleation process with NaCl as a structure-directing agent. The fabricated electrochromic device demonstrated optical modulation (coloration/bleaching) at 632.8 nm of ˜41.2% after applying a low voltage of 0.1 V for 10 s, indicating the potential of such nanorod array films for use in energy-saving smart windows.

  9. Direct Growth of Crystalline Tungsten Oxide Nanorod Arrays by a Hydrothermal Process and Their Electrochromic Properties

    NASA Astrophysics Data System (ADS)

    Lu, Chih-Hao; Hon, Min Hsiung; Leu, Ing-Chi

    2017-04-01

    Transparent crystalline tungsten oxide nanorod arrays for use as an electrochromic layer have been directly prepared on fluorine-doped tin oxide-coated glass via a facile tungsten film-assisted hydrothermal process using aqueous tungsten hexachloride solution. X-ray diffraction analysis and field-emission scanning electron microscopy were used to characterize the phase and morphology of the grown nanostructures. Arrays of tungsten oxide nanorods with diameter of ˜22 nm and length of ˜240 nm were obtained at 200°C after 8 h of hydrothermal reaction. We propose a growth mechanism for the deposition of the monoclinic tungsten oxide phase in the hydrothermal environment. The tungsten film was first oxidized to tungsten oxide to provide seed sites for crystal growth and address the poor connection between the growing tungsten oxide and substrate. Aligned tungsten oxide nanorod arrays can be grown by a W thin film-assisted heterogeneous nucleation process with NaCl as a structure-directing agent. The fabricated electrochromic device demonstrated optical modulation (coloration/bleaching) at 632.8 nm of ˜41.2% after applying a low voltage of 0.1 V for 10 s, indicating the potential of such nanorod array films for use in energy-saving smart windows.

  10. Extension of DAMAS Phased Array Processing for Spatial Coherence Determination (DAMAS-C)

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M., Jr.

    2006-01-01

    The present study reports a new development of the DAMAS microphone phased array processing methodology that allows the determination and separation of coherent and incoherent noise source distributions. In 2004, a Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) was developed which decoupled the array design and processing influence from the noise being measured, using a simple and robust algorithm. In 2005, three-dimensional applications of DAMAS were examined. DAMAS has been shown to render an unambiguous quantitative determination of acoustic source position and strength. However, an underlying premise of DAMAS, as well as that of classical array beamforming methodology, is that the noise regions under study are distributions of statistically independent sources. The present development, called DAMAS-C, extends the basic approach to include coherence definition between noise sources. The solutions incorporate cross-beamforming array measurements over the survey region. While the resulting inverse problem can be large and the iteration solution computationally demanding, it solves problems no other technique can approach. DAMAS-C is validated using noise source simulations and is applied to airframe flap noise test results.
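
    For context, the sketch below illustrates the conventional frequency-domain beamform map that DAMAS and DAMAS-C subsequently deconvolve; it is not the DAMAS-C algorithm itself, and the array geometry, analysis frequency, and source placement are made-up values.

```python
# Illustrative sketch of conventional frequency-domain beamforming, the map that
# DAMAS/DAMAS-C deconvolve; geometry, frequency, and source values are made up.
import numpy as np

c, f = 343.0, 4000.0                    # sound speed (m/s), analysis frequency (Hz)
k = 2 * np.pi * f / c
mics = np.random.default_rng(1).uniform(-0.5, 0.5, size=(32, 2))  # planar array (x, y)
mics = np.c_[mics, np.zeros(32)]

def steering(point):
    r = np.linalg.norm(point - mics, axis=1)      # mic-to-grid-point distances
    return np.exp(-1j * k * r) / r                # spherical-wave steering vector

# Synthesize a cross-spectral matrix (CSM) for one incoherent source at (0.1, 0, 1.5) m
g_src = steering(np.array([0.1, 0.0, 1.5]))
csm = np.outer(g_src, g_src.conj())

# Scan a grid 1.5 m in front of the array and form the conventional beamform map
xs = np.linspace(-0.5, 0.5, 41)
bf_map = np.empty((len(xs), len(xs)))
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        e = steering(np.array([x, y, 1.5]))
        w = e / np.linalg.norm(e)                 # normalized steering weights
        bf_map[i, j] = np.real(w.conj() @ csm @ w)

print("peak located near x =", xs[np.unravel_index(bf_map.argmax(), bf_map.shape)[0]])
```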

  11. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    PubMed Central

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate’s pre-patterning process. By modifying the substrate’s wettability, the conducting polymer’s contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics. PMID:27378163

  12. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-07-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate’s pre-patterning process. By modifying the substrate’s wettability, the conducting polymer’s contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics.

  13. A Passive Wireless Multi-Sensor SAW Technology Device and System Perspectives

    PubMed Central

    Malocha, Donald C.; Gallagher, Mark; Fisher, Brian; Humphries, James; Gallagher, Daniel; Kozlovski, Nikolai

    2013-01-01

    This paper discusses a SAW-based passive, wireless multi-sensor system that has been under development by our group for the past several years. The devices focus on orthogonal frequency coded (OFC) SAW sensors, which use both frequency diversity and pulse-position reflectors to encode the device ID; these are briefly contrasted with other embodiments. A synchronous correlator transceiver is used for the hardware, and the post-processing and correlation techniques applied to the received signal to extract the sensor information are presented. Critical device and system parameters addressed include encoding, operational range, SAW device parameters, post-processing, and antenna-SAW device integration. A fully developed 915 MHz OFC SAW multi-sensor system is used to show experimental results. The system is based on a software radio approach that provides great flexibility for future enhancements and diverse sensor applications. Several different sensor types using the OFC SAW platform are shown. PMID:23666124

  14. Low frequency ultrasonic array imaging using signal post-processing for concrete material

    NASA Astrophysics Data System (ADS)

    Ozawa, Akio; Izumi, Hideki; Nakahata, Kazuyuki; Ohira, Katsumi; Ogawa, Kenzo

    2017-02-01

    The use of ultrasonic arrays for nondestructive evaluation has increased significantly in recent years. A post-processing beam-forming technique that utilizes the complete set of signals from all combinations of transmission and reception elements was proposed as an array imaging technique. In this study, a delay-and-sum beam reconstruction method utilizing such post-processing was applied to the imaging of internal voids and reinforcing steel bars in concrete. Because of the high attenuation of ultrasonic waves in concrete, the incident ultrasonic wave must have low frequency and high intensity. In this study, an array transducer with a total of 16 elements was designed on the basis of a multi-Gaussian beam model. The center frequency of the transducer was 50 kHz, and low-frequency imaging was achieved by performing the post-processing beam formation on graphics processing unit accelerators. The results indicated that the shapes of through holes and steel bars in a concrete specimen 700 mm in height were reconstructed with high resolution.
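
    A hedged sketch of the kind of delay-and-sum post-processing described above (summing every transmit-receive A-scan at the round-trip delay to each image point) is given below; the `fmc` data, element positions, sampling rate, and wave speed are placeholders rather than values from the study.

```python
import numpy as np

def tfm_image(fmc, elem_x, fs, c, xs, zs):
    """Delay-and-sum ('total focusing') image from full-matrix-capture data.
    fmc[tx, rx, t] holds the A-scan for every transmit/receive element pair."""
    n_el, n_t = len(elem_x), fmc.shape[2]
    tx_idx, rx_idx = np.meshgrid(np.arange(n_el), np.arange(n_el), indexing="ij")
    img = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            d = np.sqrt((x - elem_x) ** 2 + z ** 2)      # element-to-pixel distances
            delays = (d[:, None] + d[None, :]) / c        # tx-to-pixel + pixel-to-rx time
            idx = np.clip(np.round(delays * fs).astype(int), 0, n_t - 1)
            img[iz, ix] = np.abs(fmc[tx_idx, rx_idx, idx].sum())
    return img

# Tiny synthetic run: 16 elements, 1 MHz sampling, 4000 m/s wave speed (all assumed)
elem_x = np.linspace(-0.15, 0.15, 16)
fmc = np.random.default_rng(0).normal(size=(16, 16, 2000)) * 0.01   # noise placeholder
image = tfm_image(fmc, elem_x, fs=1e6, c=4000.0,
                  xs=np.linspace(-0.2, 0.2, 81), zs=np.linspace(0.05, 0.7, 131))
```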

  15. Fabrication of hybrid nanostructured arrays using a PDMS/PDMS replication process.

    PubMed

    Hassanin, H; Mohammadkhani, A; Jiang, K

    2012-10-21

    In this study, a novel and low-cost nanofabrication process is proposed for producing hybrid polydimethylsiloxane (PDMS) nanostructured arrays. The proposed process involves monolayer self-assembly of polystyrene (PS) spheres, PDMS nanoreplication, thin-film coating, and PDMS-to-PDMS (PDMS/PDMS) replication. A self-assembled monolayer of PS spheres is used as the first template. Second, a PDMS template is achieved by replica moulding. Third, the PDMS template is coated with a platinum or gold layer. Finally, a PDMS nanostructured array is developed by casting PDMS slurry on top of the coated PDMS. The cured PDMS is peeled off and used as a replica surface. In this study, the influences of the coating on the PDMS topography, the contact angle of the PDMS slurry and the ease of peeling off are discussed in detail. From experimental evaluation, a gold layer of at least 20 nm or a platinum layer of at least 40 nm on the surface of the PDMS template improves the contact angle and eases peeling off. The coated PDMS surface is successfully used as a template to achieve a replica with a uniform array via the PDMS/PDMS replication process. Both the PDMS template and the replica are free of defects and undistorted after demoulding, with a highly ordered hexagonal arrangement. In addition, the geometry of the nanostructured PDMS can be controlled by changing the thickness of the deposited layer. The simplicity and controllability of the process show great promise as a robust nanoreplication method for functional applications.

  16. Portable nuclear material detector and process

    DOEpatents

    Hofstetter, Kenneth J; Fulghum, Charles K; Harpring, Lawrence J; Huffman, Russell K; Varble, Donald L

    2008-04-01

    A portable, hand held, multi-sensor radiation detector is disclosed. The detection apparatus has a plurality of spaced sensor locations which are contained within a flexible housing. The detection apparatus, when suspended from an elevation, will readily assume a substantially straight, vertical orientation and may be used to monitor radiation levels from shipping containers. The flexible detection array can also assume a variety of other orientations to facilitate any unique container shapes or to conform to various physical requirements with respect to deployment of the detection array. The output of each sensor within the array is processed by at least one CPU which provides information in a usable form to a user interface. The user interface is used to provide the power requirements and operating instructions to the operational components within the detection array.

  17. Process development for automated solar cell and module production. Task 4: Automated array assembly

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use was developed. The process sequence was then critically analyzed from a technical and economic standpoint to determine the technological readiness of certain process steps for implementation. The steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect, both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development.

  18. Implementation of a Digital Signal Processing Subsystem for a Long Wavelength Array Station

    NASA Technical Reports Server (NTRS)

    Soriano, Melissa; Navarro, Robert; D'Addario, Larry; Sigman, Elliott; Wang, Douglas

    2011-01-01

    This paper describes the implementation of a digital signal processing (DSP) subsystem for a single Long Wavelength Array (LWA) station. The LWA is a radio telescope that will consist of many phased-array stations. Each LWA station consists of 256 pairs of dipole-like antennas operating over the 10-88 MHz frequency range. The DSP subsystem digitizes up to 260 dual-polarization signals at 196 MHz from the LWA analog receiver, adjusts the delay and amplitude of each signal, and forms four independent beams. Coarse delay is implemented using a first-in-first-out buffer, and fine delay is implemented using a finite impulse response filter. Amplitude adjustment and polarization corrections are implemented using a 2x2 matrix multiplication.
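
    The coarse/fine delay split described above can be illustrated with a windowed-sinc fractional-delay FIR filter; the sketch below uses the 196 MHz sample rate quoted in the abstract, but the tap count, window, and delay value are assumptions rather than the LWA design parameters.

```python
import numpy as np

def fractional_delay_fir(frac, n_taps=16):
    """FIR taps approximating a sub-sample delay of `frac` samples (0 <= frac < 1).
    The filter also adds a fixed (n_taps - 1)/2 sample group delay, ignored here."""
    n = np.arange(n_taps) - (n_taps - 1) / 2.0
    h = np.sinc(n - frac) * np.hamming(n_taps)     # shifted sinc, Hamming-windowed
    return h / h.sum()                             # unity gain at DC

fs = 196e6                                         # digitization rate quoted above
total_delay = 37.3e-9 * fs                         # example delay in samples (assumed)
coarse = int(np.floor(total_delay))                # FIFO-style integer-sample part
fine = total_delay - coarse                        # FIR sub-sample part

x = np.random.default_rng(2).normal(size=4096)     # stand-in antenna signal
y = np.convolve(np.roll(x, coarse), fractional_delay_fir(fine), mode="same")
```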

  19. Development of subminiature multi-sensor hot-wire probes

    NASA Technical Reports Server (NTRS)

    Westphal, Russell V.; Ligrani, Phillip M.; Lemos, Fred R.

    1988-01-01

    Limitations on the spatial resolution of multisensor hot-wire probes have precluded accurate measurements of Reynolds stresses very near solid surfaces in wind tunnels and in many practical aerodynamic flows. The fabrication, calibration, and qualification testing of very small single-horizontal and X-array hot-wire probes, intended for use near solid boundaries in turbulent flows where length scales are particularly small, are described. Details of the sensor fabrication procedure are reported, along with the information needed to successfully operate the probes. Compared with conventional probes, manufacture of the subminiature probes is more complex, requiring special equipment and careful handling. The subminiature probes tested were more fragile and shorter lived than conventional probes; they obeyed the same calibration laws but with slightly larger experimental uncertainty. In spite of these disadvantages, measurements of mean statistical quantities and spectra demonstrate the ability of the subminiature sensors to provide measurements in the near-wall region of turbulent boundary layers that are more accurate than those from conventionally sized probes.

  20. Wind speed and direction measurement based on arc ultrasonic sensor array signal processing algorithm.

    PubMed

    Li, Xinbo; Sun, Haixin; Gao, Wei; Shi, Yaowu; Liu, Guojun; Wu, Yue

    2016-11-01

    This article investigates a method for measuring wind speed and wind direction based on an arc ultrasonic sensor array combined with an array signal processing algorithm. In the proposed method, a new arc ultrasonic array structure is introduced and the array manifold is first derived. On this basis, the measurement of wind speed and wind direction is analyzed and discussed using the basic idea of the classic MUSIC (Multiple Signal Classification) algorithm, which achieves measurement of wind direction over 360° with a resolution of 1° and of wind speed from 0 to 60 m/s with a resolution of 0.1 m/s. The implementation of the proposed method is elaborated through theoretical derivation and corresponding discussion. In addition, simulation experiments are presented to show the feasibility of the proposed method. The theoretical analysis and simulation results indicate that the proposed method has superior noise immunity and improves wind measurement accuracy.
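
    The sketch below shows the core MUSIC idea (scanning steering vectors against the noise subspace of the sample covariance matrix) on a generic uniform linear array; it does not reproduce the paper's arc-array manifold, and the element count, spacing, noise level, and source angle are all illustrative.

```python
# Minimal MUSIC pseudospectrum sketch on a generic uniform linear array (ULA).
import numpy as np

M, d_over_lambda, true_doa = 8, 0.5, 23.0        # sensors, spacing/wavelength, degrees
rng = np.random.default_rng(3)

def steering(theta_deg):
    m = np.arange(M)
    return np.exp(2j * np.pi * d_over_lambda * m * np.sin(np.radians(theta_deg)))

# Simulate snapshots of one narrowband source plus noise, build the covariance matrix
snapshots = 200
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
X = np.outer(steering(true_doa), s) + 0.1 * (rng.normal(size=(M, snapshots))
                                             + 1j * rng.normal(size=(M, snapshots)))
R = X @ X.conj().T / snapshots

# Noise subspace = eigenvectors beyond the number of sources (here, one source)
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-1]                              # columns for the M-1 smallest eigenvalues

angles = np.arange(-90, 90.5, 0.5)
p_music = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
print("estimated DOA:", angles[int(np.argmax(p_music))], "degrees")
```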

  1. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    NASA Astrophysics Data System (ADS)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
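
    A minimal sketch of the underlying idea is given below: track how many principal components are needed to explain most of the variance of the array-wide time series, and note how a misbehaving channel changes that count. The synthetic data, window length, and 95% variance threshold are assumptions, not values from the study.

```python
# Hedged sketch: a dead or noisy channel changes the effective dimensionality
# (number of significant principal components) of the array-wide time series.
import numpy as np

def effective_dimension(window, var_frac=0.95):
    """Number of principal components needed to capture `var_frac` of the variance.
    `window` is channels x samples."""
    centered = window - window.mean(axis=1, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, var_frac) + 1)

rng = np.random.default_rng(4)
common = np.sin(2 * np.pi * 2.0 * np.arange(2000) / 100.0)      # coherent signal
good = common + 0.1 * rng.normal(size=(9, 2000))                # 9 healthy channels
bad = 5.0 * rng.normal(size=(1, 2000))                          # one malfunctioning channel

print("healthy array   :", effective_dimension(good))
print("with bad channel:", effective_dimension(np.vstack([good, bad])))
```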

  2. Lightweight solar array blanket tooling, laser welding and cover process technology

    NASA Technical Reports Server (NTRS)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  3. Monitoring and evaluation of alcoholic fermentation processes using a chemocapacitor sensor array.

    PubMed

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-09-02

    The alcoholic fermentation of the Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts, during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained for pure or contaminated standard ethanol solutions containing controlled concentrations of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument.
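
    The sketch below shows the kind of PCA score projection used for this discrimination, applied to synthetic eight-channel responses; the data, class means, and the use of scikit-learn are assumptions for illustration and not the paper's measurements or code.

```python
# Project eight-channel sensor responses onto two principal components and check
# whether "normal" and "spoiled" samples separate; all data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
normal = rng.normal(loc=[1.0, 0.8, 1.2, 0.9, 1.1, 0.7, 1.0, 0.6],
                    scale=0.05, size=(20, 8))          # 20 daily headspace readings
spoiled = rng.normal(loc=[1.0, 0.8, 1.6, 0.9, 1.5, 0.7, 1.0, 1.1],
                     scale=0.05, size=(20, 8))         # acetic-acid-shifted channels

X = np.vstack([normal, spoiled])
scores = PCA(n_components=2).fit_transform(X)           # 2-D score-plot coordinates
print("mean PC1, normal vs spoiled:",
      scores[:20, 0].mean().round(2), scores[20:, 0].mean().round(2))
```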

  4. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    PubMed Central

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of the Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts, during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained for pure or contaminated standard ethanol solutions containing controlled concentrations of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument. PMID:25184490

  5. An adaptive Hidden Markov model for activity recognition based on a wearable multi-sensor device.

    PubMed

    Li, Zhen; Wei, Zhiqiang; Yue, Yaofeng; Wang, Hao; Jia, Wenyan; Burke, Lora E; Baranowski, Thomas; Sun, Mingui

    2015-05-01

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based on multi-sensor data is presented. In order to utilize these data efficiently and overcome the big-data problem, an offline adaptive Hidden Markov Model (HMM) is proposed. A sensor selection scheme is implemented based on an improved Viterbi algorithm. A new method is also proposed that incorporates personal experience into the HMM model as a priori information. Experiments are conducted using a personal wearable computer, eButton, consisting of multiple sensors. Our comparative study with the standard HMM and other alternative methods in processing the eButton data has shown that our method is more robust and efficient, providing a useful tool to evaluate human activity and lifestyle.
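
    For reference, the sketch below implements plain Viterbi decoding for a small discrete HMM, the baseline on which the paper's improved, sensor-selecting variant builds; all probabilities and the observation sequence are illustrative.

```python
# Standard Viterbi decoding for a small discrete HMM (illustrative values only).
import numpy as np

pi = np.array([0.6, 0.4])                      # initial state probabilities
A = np.array([[0.7, 0.3],                      # state transition matrix
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],                 # emission probabilities (rows: states)
              [0.1, 0.3, 0.6]])
obs = [0, 1, 2, 2, 1]                          # observed symbol indices

def viterbi(pi, A, B, obs):
    n_states, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])   # log-delta at t = 0
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A) + np.log(B[:, obs[t]])[None, :]
        back[t] = cand.argmax(axis=0)          # best predecessor for each state
        logd = cand.max(axis=0)
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):              # trace the best path backwards
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print("most likely state sequence:", viterbi(pi, A, B, obs))
```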

  6. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input

    PubMed Central

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-01-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter. PMID:27598156
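
    The consensus-processing step mentioned above can be illustrated in isolation: each node repeatedly averages its estimate with its neighbours' values until the network agrees. The sketch below uses a made-up five-node ring and simple uniform weights; it is not the paper's full state/bias/unknown-input filter.

```python
# Plain consensus averaging over a small sensor graph (illustrative values only).
import numpy as np

# Ring of 5 sensor nodes with symmetric links; weight 1/(max degree + 1)
neighbours = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
local_estimates = np.array([9.7, 10.3, 9.9, 10.5, 9.6])   # noisy local state estimates

x = local_estimates.copy()
for _ in range(50):                                        # consensus iterations
    x_next = x.copy()
    for i, nbrs in neighbours.items():
        for j in nbrs:
            x_next[i] += (1.0 / 3.0) * (x[j] - x[i])       # move toward each neighbour
    x = x_next

print("consensus value:", x.round(3), " centralized mean:", local_estimates.mean())
```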

  7. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input.

    PubMed

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-09-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter.

  8. InGaAs/InP PIN photodetector arrays made by MOCVD based zinc diffusion processes

    NASA Astrophysics Data System (ADS)

    Islam, Mohammad; Feng, J. Y.; Berkovich, Andrew; Abshire, Pamela; Barrows, Geoffrey; Choa, Fow-Sen

    2016-05-01

    InGaAs based long-wavelength near infrared detector arrays are very important for high dynamic range imaging operations seamlessly from daylight environments to dark environments. These detector devices are usually made by open-hole diffusion technique which has the advantage of lower leakage current and higher reliability. The diffusion process is usually done in a sealed quartz ampoule with dopant compounds like ZnP2, ZnAs3, CdP2 etc. side by side with semiconductor samples. The ampoule needs to be prepared and sealing process needs to be done in very clean environment and each time can have variations. In this work we demonstrated using MOCVD growth chamber to perform the diffusion process. The advantages of such a process are that the tool is constantly kept in ultra clean environment and can reproducibly provide clean processes without introducing unexpected defects. We can independently control the temperature and flow rate of the dopant - they are not linked as in the ampoule diffusion case. The process can be done on full wafers with good uniformity through substrate rotation, which is good for large detector array fabrications. We have fabricated different types of InGaAs/InP detector arrays using dimethyl zinc as the dopant source and PH3 or AsH3 for surface protection. Pre-studies of Zn-diffusion profiles in InGaAs and InP at different temperatures, flow rates, diffusion times and followed annealing times were conducted to obtain good control of the process. Grown samples were measured by C-V profilometer to evaluate the diffusion depth and doping concentration. The dependence of the diffusion profile with temperature, dopant partial pressures, and annealing temperature and time and some of the fabricated device characteristics are reported.

  9. An Asynchronous Multi-Sensor Micro Control Unit for Wireless Body Sensor Networks (WBSNs)

    PubMed Central

    Chen, Chiung-An; Chen, Shih-Lun; Huang, Hong-Yi; Luo, Ching-Hsing

    2011-01-01

    In this work, an asynchronous multi-sensor micro control unit (MCU) core is proposed for wireless body sensor networks (WBSNs). It consists of asynchronous interfaces, a power management unit, a multi-sensor controller, a data encoder (DE), and an error correction coder (ECC). To improve system performance and expandability, the asynchronous interface is created for handshaking across the different clock domains of the ADC, RF, and MCU. To extend the operating time of the WBSN system, a power management technique is developed to reduce power consumption. In addition, the multi-sensor controller is designed to detect various biomedical signals. To prevent errors from losses during wireless transmission, the use of an error correction coding technique is important in biomedical applications. The data encoder is added for lossless compression of various biomedical signals with a compression ratio of almost three. The design was successfully tested on an FPGA board. The VLSI architecture of this work contains 2.68 K gate counts and consumes 496 μW at a 133 MHz processing rate using a TSMC 0.13-μm CMOS process. Compared with previous techniques, this work offers higher performance, more functions, and lower hardware cost than other micro controller designs. PMID:22164000

  10. An asynchronous multi-sensor micro control unit for wireless body sensor networks (WBSNs).

    PubMed

    Chen, Chiung-An; Chen, Shih-Lun; Huang, Hong-Yi; Luo, Ching-Hsing

    2011-01-01

    In this work, an asynchronous multi-sensor micro control unit (MCU) core is proposed for wireless body sensor networks (WBSNs). It consists of asynchronous interfaces, a power management unit, a multi-sensor controller, a data encoder (DE), and an error correction coder (ECC). To improve system performance and expandability, the asynchronous interface is created for handshaking across the different clock domains of the ADC, RF, and MCU. To extend the operating time of the WBSN system, a power management technique is developed to reduce power consumption. In addition, the multi-sensor controller is designed to detect various biomedical signals. To prevent errors from losses during wireless transmission, the use of an error correction coding technique is important in biomedical applications. The data encoder is added for lossless compression of various biomedical signals with a compression ratio of almost three. The design was successfully tested on an FPGA board. The VLSI architecture of this work contains 2.68 K gate counts and consumes 496 μW at a 133 MHz processing rate using a TSMC 0.13-μm CMOS process. Compared with previous techniques, this work offers higher performance, more functions, and lower hardware cost than other micro controller designs.

  11. Sub-threshold signal processing in arrays of non-identical nanostructures.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafé, Salvador

    2011-10-28

    Weak input signals are routinely processed by molecular-scale biological networks composed of non-identical units that operate correctly in a noisy environment. In order to show that artificial nanostructures can mimic this behavior, we theoretically explore noise-assisted signal processing in arrays of metallic nanoparticles functionalized with organic ligands that act as tunneling junctions connecting the nanoparticle to the external electrodes. The electronic transfer through the nanostructure is based on the Coulomb blockade and tunneling effects. Because of fabrication uncertainties, these nanostructures are expected to show high variability in their physical characteristics, and a diversity-induced static noise should be considered together with the dynamic noise caused by thermal fluctuations. This static noise originates from the hardware variability and produces fluctuations in the threshold potential of the individual nanoparticles arranged in a parallel array. The correlation between different input (potential) and output (current) signals in the array is analyzed as a function of temperature, applied voltage, and the variability in the electrical properties of the nanostructures. Extensive kinetic Monte Carlo simulations with nanostructures whose basic properties have been demonstrated experimentally show that variability can enhance the correlation, even for the case of weak signals and high variability, provided that the signal is processed by a sufficiently high number of nanostructures. Moderate redundancy permits us not only to minimize the adverse effects of the hardware variability but also to take advantage of the nanoparticles' threshold fluctuations to increase the detection range at low temperatures. This conclusion holds for the average behavior of a moderately large statistical ensemble of non-identical nanostructures processing different types of input signals and suggests that variability could be beneficial for signal processing
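
    A much-simplified numerical sketch of this diversity-induced effect is given below: a weak signal drives an array of threshold units whose thresholds are spread by a static "hardware variability" term, and the correlation between the input and the summed output is computed for several spreads. The threshold abstraction and every parameter are assumptions; the paper's actual model is a kinetic Monte Carlo simulation of Coulomb-blockade transport.

```python
# Weak signal + array of non-identical threshold units: static threshold spread
# (hardware variability) plus dynamic noise; report input/output correlation.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 2000)
signal = 0.3 * np.sin(2 * np.pi * 5 * t)          # weak (sub-threshold) input

def array_correlation(n_units, threshold_spread, dynamic_noise=0.3):
    thresholds = 1.0 + threshold_spread * rng.normal(size=n_units)   # static variability
    noise = dynamic_noise * rng.normal(size=(n_units, t.size))       # thermal-like noise
    outputs = (signal[None, :] + noise > thresholds[:, None]).astype(float)
    summed = outputs.sum(axis=0)                                     # array output
    return np.corrcoef(signal, summed)[0, 1]

for spread in (0.0, 0.2, 0.5):
    print(f"spread {spread:.1f}: correlation {array_correlation(200, spread):.3f}")
```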

  12. Batch-processed semiconductor gas sensor array for the selective detection of NOx in automotive exhaust gas

    NASA Astrophysics Data System (ADS)

    Jang, Hani; Kim, Minki; Kim, Yongjun

    2016-12-01

    This paper reports on a semiconductor gas sensor array to detect nitrogen oxides (NOx) in automotive exhaust gas. The proposed semiconductor gas sensor array consisted of one common electrode and three individual electrodes to minimize the size of the sensor array, and three sensing layers [TiO2 + SnO2 (15 wt%), SnO2, and Ga2O3] were deposited using screen printing. In addition, sensing materials were sintered under the same conditions in order to take advantage of batch processing. The sensing properties of the proposed sensor array were verified by experimental measurements, and the selectivity improved by using pattern recognition.

  13. Reversible Back-Propagation Imaging Algorithm for Post-Processing of Ultrasonic Array Data

    NASA Astrophysics Data System (ADS)

    Velichko, A.; Wilcox, P. D.

    2009-03-01

    The paper describes a method for processing data from an ultrasonic transducer array. The proposed algorithm is formulated in such a way that it is reversible, i.e. the raw data set can be recovered from the image. This is of practical significance because it allows the raw data to be spatially filtered using the image to extract, for example, only the raw data associated with a particular reflector. The method is tested on experimental data obtained with a commercial 64-element, 5-MHz array on an aluminium specimen that contains a number of machined slots and side-drilled holes. The raw transmitter-receiver data corresponding to each reflector are extracted and the scattering matrices of the different reflectors are reconstructed. This allows the signals from a 1-mm-long slot and a 1-mm-diameter hole to be clearly distinguished and the orientation and size of the slots to be determined.

  14. A high-resolution algorithm for wave number estimation using holographic array processing

    NASA Astrophysics Data System (ADS)

    Roux, Philippe; Cassereau, Didier; Roux, André

    2004-03-01

    This paper presents an original way to perform wave number inversion from simulated data obtained in a noisy shallow-water environment. In the studied configuration, an acoustic source is horizontally towed with respect to a vertical hydrophone array. The inversion is achieved through the combination of three ingredients. First, a modified version of the Prony algorithm is presented, and a numerical comparison is made with another high-resolution wave number inversion algorithm based on the matrix-pencil technique. Second, since these high-resolution algorithms are classically sensitive to noise, holographic array processing is used to improve the signal-to-noise ratio before the inversion is performed. Last, particular care is taken in the representation of the solutions in wave number space to improve resolution without suffering from aliasing. The dependence of this wave number inversion algorithm on the relevant parameters of the problem is discussed.
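
    The classical (unmodified) Prony step at the heart of such wave number inversion can be sketched as below: fit linear-prediction coefficients to equally spaced field samples and read the wave numbers off the roots of the resulting polynomial. The two synthetic modes, receiver spacing, and noise-free data are illustrative assumptions and do not reproduce the paper's modified algorithm or holographic preprocessing.

```python
import numpy as np

dz = 1.0                                            # receiver spacing (m), assumed
k_true = np.array([0.210, 0.185])                   # horizontal wave numbers (rad/m)
z = np.arange(64) * dz
field = sum(a * np.exp(1j * k * z) for a, k in zip([1.0, 0.6], k_true))

p = len(k_true)                                     # model order (number of modes)
# Linear prediction: field[n] + a1*field[n-1] + ... + ap*field[n-p] = 0, least squares
rows = np.column_stack([field[p - 1 - m : len(field) - 1 - m] for m in range(p)])
a = np.linalg.lstsq(rows, -field[p:], rcond=None)[0]
roots = np.roots(np.r_[1.0, a])                     # roots approximate exp(1j * k * dz)
k_est = np.sort(np.angle(roots) / dz)

print("true wave numbers:", np.sort(k_true))
print("estimated        :", k_est)
```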

  15. Process development for automated solar cell and module production. Task 4: automated array assembly

    SciTech Connect

    Hagerty, J.J.

    1980-06-30

    The scope of work under this contract involves specifying a process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use. This process sequence is then critically analyzed from a technical and economic standpoint to determine the technological readiness of each process step for implementation. The process steps are ranked according to the degree of development effort required and according to their significance to the overall process. Under this contract the steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development. Economic analysis using the SAMICS system has been performed during these studies to assure that development efforts have been directed towards the ultimate goal of price reduction. Details are given. (WHK)

  16. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    NASA Astrophysics Data System (ADS)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  17. Distinctive Order Based Self-Similarity descriptor for multi-sensor remote sensing image matching

    NASA Astrophysics Data System (ADS)

    Sedaghat, Amin; Ebadi, Hamid

    2015-10-01

    Robust, well-distributed, and accurate feature matching in multi-sensor remote sensing images is a difficult task due to significant geometric and illumination differences. In this paper, a robust and effective image matching approach is presented for multi-sensor remote sensing images. The proposed approach consists of three main steps. In the first step, the UR-SIFT (uniform robust scale invariant feature transform) algorithm is applied for uniform and dense local feature extraction. In the second step, a novel descriptor, the Distinctive Order Based Self-Similarity (DOBSS) descriptor, is computed for each extracted feature. Finally, a cross-matching process followed by a consistency check under a projective transformation model is performed for feature correspondence and mismatch elimination. The proposed method was successfully applied to matching various multi-sensor satellite images from the ETM+, SPOT 4, SPOT 5, ASTER, IRS, SPOT 6, QuickBird, GeoEye, and WorldView sensors, and the results demonstrate its robustness and capability compared with common image matching techniques such as SIFT, PIIFD, GLOH, LIOP, and LSS.
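
    As a generic stand-in for the "match, then check consistency under a projective model" pipeline (not a reimplementation of UR-SIFT or the DOBSS descriptor), the sketch below uses plain OpenCV SIFT matching with a ratio test and a RANSAC homography check; the image paths are hypothetical and opencv-python is assumed to be installed.

```python
import cv2
import numpy as np

img1 = cv2.imread("scene_sensor_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical paths
img2 = cv2.imread("scene_sensor_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Cross matching with Lowe's ratio test to reject ambiguous correspondences
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

# Consistency check: keep only matches that agree with one projective transform
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(inlier_mask.sum())} of {len(good)} matches consistent with the projective model")
```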

  18. A Reconfigurable Readout Integrated Circuit for Heterogeneous Display-Based Multi-Sensor Systems.

    PubMed

    Park, Kyeonghwan; Kim, Seung Mok; Eom, Won-Jin; Kim, Jae Joon

    2017-04-03

    This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which build up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is implemented by proposing two basic readout topologies, amplifier-based and oscillator-based circuits. For noise-immune design against the various noises arising from inherent human-touch operation, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving 12-bit successive approximation register (SAR) analog-to-digital conversion without additional calibration. A ROIC prototype that includes all of the proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors.

  19. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
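
    The IHS fusion step mentioned above can be sketched in its simplest "fast IHS" form: compute the intensity of the upsampled color image, then add the difference between a higher-resolution intensity channel and that intensity to every band. The arrays below are random placeholders, and the directionally-adaptive regularization and non-local means filtering of the actual method are not shown.

```python
import numpy as np

rng = np.random.default_rng(7)
rgb_lr_up = rng.uniform(0.2, 0.8, size=(256, 256, 3))   # upsampled low-res color bands
intensity_hr = rng.uniform(0.2, 0.8, size=(256, 256))   # higher-resolution intensity

# Fast IHS fusion: I = (R + G + B)/3, then add (I_hr - I_lr) to every band
intensity_lr = rgb_lr_up.mean(axis=2)
fused = rgb_lr_up + (intensity_hr - intensity_lr)[..., None]   # clip to a valid range for real data

print("max |fused intensity - HR intensity|:",
      np.abs(fused.mean(axis=2) - intensity_hr).max())
```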

  20. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  1. Distributed multi-sensor particle filter for bearings-only tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Jungen; Ji, Hongbing

    2012-02-01

    In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial distance of the target is poorly observable, algorithms based on sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new, stable, distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components of the target state from the unobservable ones. In the fusion centre, the target state can be estimated by utilising the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramer-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method provides better performance than the traditional PF method.
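
    To illustrate only the particle-filter mechanics that the paper builds on, the sketch below runs a single-sensor bootstrap particle filter for bearings-only tracking in Cartesian coordinates; the log-polar parameterization, hierarchical structure, and multi-sensor fusion of the proposed method are not reproduced, and all motion and noise parameters are assumed.

```python
# Bare-bones bootstrap particle filter for single-sensor bearings-only tracking.
import numpy as np

rng = np.random.default_rng(8)
dt, n_particles, sigma_bearing = 1.0, 2000, np.radians(1.0)
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])  # CV model

# Simulate a constant-velocity target and bearing measurements from a sensor at the origin
truth = np.array([5.0, 8.0, -0.3, -0.1])
particles = truth + rng.normal(scale=[2.0, 2.0, 0.2, 0.2], size=(n_particles, 4))
for _ in range(40):
    truth = F @ truth
    z = np.arctan2(truth[1], truth[0]) + rng.normal(scale=sigma_bearing)

    # Predict: propagate particles with the motion model plus process noise
    particles = particles @ F.T + rng.normal(scale=[0.05, 0.05, 0.02, 0.02],
                                             size=(n_particles, 4))
    # Update: weight by the bearing likelihood, then resample
    pred_bearing = np.arctan2(particles[:, 1], particles[:, 0])
    err = np.angle(np.exp(1j * (z - pred_bearing)))          # wrap to [-pi, pi]
    w = np.exp(-0.5 * (err / sigma_bearing) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]

# With a single stationary sensor the range stays weakly observable, which is
# exactly the motivation for the multi-sensor fusion discussed above.
estimate = particles.mean(axis=0)
print("true position:", truth[:2].round(2), " PF estimate:", estimate[:2].round(2))
```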

  2. Optoelectronic signal processing for phased-array antennas II; Proceedings of the Meeting, Los Angeles, CA, Jan. 16, 17, 1990

    NASA Astrophysics Data System (ADS)

    Hendrickson, Brian M.; Koepf, Gerhard A.

    Various papers on optoelectronic signal processing for phased-array antennas (PAAs) are presented. Individual topics addressed include: the dynamics of high-frequency lasers, an electrooptic phase modulator for PA applications, a laser mixer for microwave fiber optics, optical control of microwaves with III-V semiconductor optical waveguides, a high-dynamic-range modulator for microwave PAs, the high-modulation-rate potential of surface-emitter laser-diode arrays, an electrooptical switch for antenna beam steering, and adaptive PA radar processing using photorefractive crystals. Also discussed are an optical processor for array antenna beam shaping and steering, an integrated optical Butler matrix for beam forming in PAAs, an acoustooptic/photorefractive processor for adaptive antenna arrays, BER testing of fiber-optic data links for MMIC-based phased-array antennas, and the design of an optically controlled K(a)-band GaAs MMIC PAA.

  3. β-Decay Studies of r-Process Nuclei Using the Advanced Implantation Detector Array (AIDA)

    NASA Astrophysics Data System (ADS)

    Griffin, C. J.; Davinson, T.; Estrade, A.; Braga, D.; Burrows, I.; Coleman-Smith, P. J.; Grahn, T.; Grant, A.; Harkness-Brennan, L. J.; Kiss, G.; Kogimtzis, M.; Lazarus, I. H.; Letts, S. C.; Liu, Z.; Lorusso, G.; Matsui, K.; Nishimura, S.; Page, R. D.; Prydderch, M.; Phong, V. H.; Pucknell, V. F. E.; Rinta-Antila, S.; Roberts, O. J.; Seddon, D. A.; Simpson, J.; Thomas, S. L.; Woods, P. J.

    Thought to produce around half of all isotopes heavier than iron, the r-process is a key mechanism for nucleosynthesis. However, a complete description of the r-process is still lacking and many unknowns remain. Experimental determination of β-decay half-lives and β-delayed neutron emission probabilities along the r-process path would help facilitate a greater understanding of this process. The Advanced Implantation Detector Array (AIDA) represents the latest generation of silicon implantation detectors for β-decay studies with fast radioactive ion beams. Preliminary results from commissioning experiments demonstrate successful operation of AIDA, and analysis of the data obtained during the first official AIDA experiments is now underway.

  4. Multisensor Target Detection And Classification

    NASA Astrophysics Data System (ADS)

    Ruck, Dennis W.; Rogers, Steven K.; Mills, James P.; Kabrisky, Matthew

    1988-08-01

    In this paper a new approach to the detection and classification of tactical targets using a multifunction laser radar sensor is developed. Targets of interest are tanks, jeeps, trucks, and other vehicles. Doppler images are segmented by developing a new technique which compensates for spurious doppler returns. Relative range images are segmented using an approach based on range gradients. The resultant shapes in the segmented images are then classified using Zernike moment invariants as shape descriptors. Two classification decision rules are implemented: a classical statistical nearest-neighbor approach and a multilayer perceptron architecture. The doppler segmentation algorithm was applied to a set of 180 real sensor images. An accurate segmentation was obtained for 89 percent of the images. The new doppler segmentation proved to be a robust method, and the moment invariants were effective in discriminating the tactical targets. Tanks were classified correctly 86 percent of the time. The most important result of this research is the demonstration of the use of a new information processing architecture for image processing applications.

  5. Horizontal Estimation and Information Fusion in Multitarget and Multisensor Environments

    DTIC Science & Technology

    1987-09-01

    performance evaluation of a multisensor configuration. 5. The on-line management/control of an implemented multisensor network. 6. Provision of software...distributed. Several problems arise in the analysis and design of the mentioned architectures, among them: a. Air-space management, i.e., allotment of...airspace sectors to the sensors of the network. b. Gathering, routing, management and dissemination of data and results through the communication network

  6. Array Processing and Forward Modeling Methods for the Analysis of Stiffened, Fluid-Loaded Cylindrical Shells.

    NASA Astrophysics Data System (ADS)

    Bondaryk, Joseph E.

    This thesis investigates array processing and forward modeling methods for the analysis of experimental, structural acoustic data to understand wave propagation on fluid-loaded, elastic, cylindrical shells in the mid-frequency range, 2 < ka < 12. The transient, acoustic, in-plane, bistatic scattering response to wideband plane waves at various angles of incidence was collected by a synthetic array for three shells: a finite, air-filled, empty thin shell; a duplicate shell stiffened with four unequally spaced ring-stiffeners; and a duplicate ribbed shell augmented by resiliently mounted, wave-bearing, internal structural elements. Array and signal processing techniques, including source deconvolution, array weighting, conventional focusing and the removal of the geometrically scattered contribution, are used to transform the collected data into a more easily interpreted representation. The resulting waveforms show the part of the transient, dynamic, structural response of the shell surface that is capable of radiating to the far field. Compressional membrane waves are directly observable in this representation and evidence of flexural membrane waves is present. Comparisons between the shells show energy compartmentalized by the ring stiffeners and coupled into the wave-bearing internals. Energy calculations show a decay rate due to radiation of 30 dB/msec for the empty shell but only 10 dB/msec for the other shells at bow incidence. The Radon Transform is used to estimate the reflection coefficient of compressional waves at the shell endcap as 0.2. The measurement array does not provide enough resolution to allow use of this technique to determine the reflection, transmission and coupling coefficients at the ring stiffeners. Therefore, a forward modeling technique is used to further analyze the 0° incidence case. This modeling couples a Transmission Line model of the shell with a Simulated Annealing approach to multi-dimensional parameter estimation. This

  7. An Evaluation of Signal Processing Tools for Improving Phased Array Ultrasonic Weld Inspection

    SciTech Connect

    Ramuhalli, Pradeep; Cinson, Anthony D.; Crawford, Susan L.; Harris, Robert V.; Diaz, Aaron A.; Anderson, Michael T.

    2011-03-24

    Cast austenitic stainless steel (CASS) commonly used in U.S. nuclear power plants is a coarse-grained, elastically anisotropic material. The coarse-grained nature of CASS makes ultrasonic inspection of in-service components difficult. Recently, low-frequency phased-array ultrasound has emerged as a candidate for CASS piping weld inspection. However, issues such as low signal-to-noise ratio and difficulty in discriminating between flaw and non-flaw signals remain. This paper discusses the evaluation of a number of signal processing algorithms for improving flaw detection in CASS materials. The full paper provides details of the algorithms being evaluated, along with preliminary results.

  8. Evaluation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Array

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Liddicoat, Albert; Ralston, Jesse; Pingree, Paula

    2006-01-01

    The current implementation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays (TRIGA) is equipped with CFDP protocol and CCSDS Telemetry and Telecommand framing schemes to replace the CPU intensive software counterpart implementation for reliable deep space communication. We present the hardware/software co-design methodology used to accomplish high data rate throughput. The hardware CFDP protocol stack implementation is then compared against the two recent flight implementations. The results from our experiments show that TRIGA offers more than 3 orders of magnitude throughput improvement with less than one-tenth of the power consumption.

  9. Process Development for Automated Solar Cell and Module Production. Task 4: Automated Array Assembly

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A baseline sequence for the manufacture of solar cell modules was specified. Starting with silicon wafers, the process goes through damage etching, texture etching, junction formation, plasma edge etch, aluminum back-surface-field formation, and screen-printed metallization to produce finished solar cells. The cells were then series-connected on a ribbon and bonded into a finished glass-Tedlar module. A number of steps required additional development effort to verify technical and economic feasibility. These steps include texture etching, plasma edge etch, aluminum back-surface-field formation, array layup and interconnect, and module edge sealing and framing.

  10. The Role of Water Vapor and Dissociative Recombination Processes in Solar Array Arc Initiation

    NASA Technical Reports Server (NTRS)

    Galofar, J.; Vayner, B.; Degroot, W.; Ferguson, D.

    2002-01-01

    Experimental plasma arc investigations involving the onset of arc initiation for a negatively biased solar array immersed in a low-density plasma have been performed. Previous studies of the arc initiation process have shown that the most probable arcing sites tend to occur at the triple junction involving the conductor, dielectric and plasma. More recently, our own experiments have led us to believe that water vapor is the main causal factor behind the arc initiation process. Assuming the main component of the expelled plasma cloud by weight is water, the fastest process available is dissociative recombination (H2O⁺ + e⁻ → H* + OH*). A model that agrees with the observed dependency of arc current pulse width on the square root of capacitance is presented. A 400 MHz digital storage scope and current probe were used to detect arcs at the triple junction of a solar array. Simultaneous measurements of the arc trigger pulse, the gate pulse, the arc current and the arc voltage were then obtained. Finally, a large number of measurements of individual arc spectra were obtained in very short time intervals, ranging from 10 to 30 microseconds, using a 1/4-m spectrometer coupled with a gated intensified CCD. The spectrometer was systematically tuned to obtain optical arc spectra over the entire wavelength range of 260 to 680 nanometers. All relevant atomic lines and molecular bands were then identified.

  11. SST dual-mirror telescope for Cherenkov Telescope Array: an innovative mirror manufacturing process

    NASA Astrophysics Data System (ADS)

    Dumas, Delphine; Huet, Jean-Michel; Dournaux, Jean-Laurent; Laporte, Philippe; Rulten, Cameron; Schmoll, Jurgen; Sol, Hélène; Sayède, Frédéric; Micolon, Patrice; Glicenstein, Jean-François; Peyaud, Bernard

    2014-07-01

    The Observatoire de Paris is constructing a prototype Small-Sized Telescope (SST) for the Cherenkov Telescope Array (CTA), named SST-GATE, based on the dual-mirror Schwarzschild-Couder optical design. Considering the mirror's size, its specific curvature, and the optical requirements for a Cherenkov imaging telescope, a non-conventional process has been used for designing and manufacturing the mirrors of the SST-GATE prototype. Based on machining, polishing and coating of aluminium bulk samples, this process has been validated by simulation and tests that will be detailed in this paper, after a discussion of the Schwarzschild-Couder optical design, which so far has never been used for ground-based telescopes. Although SST-GATE is a prototype for the small-sized telescopes of the CTA array, the primary mirror of the telescope is 4 meters in diameter and has to be segmented. Due to the dual-mirror configuration, the alignment is a complex task that requires a well-defined and precise process, which will be discussed in this paper.

  12. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    NASA Technical Reports Server (NTRS)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-01-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  13. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    NASA Astrophysics Data System (ADS)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  14. Multisensor data fusion for IED threat detection

    NASA Astrophysics Data System (ADS)

    Mees, Wim; Heremans, Roel

    2012-10-01

    In this paper we present the multi-sensor registration and fusion algorithms that were developed for a force protection research project in order to detect threats against military patrol vehicles. The fusion is performed at object level, using a hierarchical evidence aggregation approach. It first uses expert domain knowledge about the features that characterize the detected threats, implemented in the form of a fuzzy expert system. The next level consists of fusing intra-sensor and inter-sensor information using an ordered weighted averaging operator. The object-level fusion between candidate threats that are detected asynchronously on a moving vehicle by sensors with different imaging geometries requires an accurate sensor-to-world coordinate transformation. This image registration will also be discussed in this paper.
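    The ordered weighted averaging (OWA) step mentioned above sorts the per-sensor scores before weighting them, so the weights apply to rank positions rather than to particular sensors. A minimal sketch follows; the weight vector and score values are illustrative assumptions.

```python
import numpy as np

def owa(scores, weights):
    """Ordered weighted averaging: sort scores descending, then weight by rank position.

    scores  : per-sensor confidence values for one candidate threat
    weights : positional weights, non-negative and summing to 1
    """
    ordered = np.sort(np.asarray(scores, dtype=float))[::-1]   # descending order
    weights = np.asarray(weights, dtype=float)
    assert ordered.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return float(np.dot(ordered, weights))

# Example weighting that emphasizes the second-highest score ("at least two sensors agree")
print(owa([0.9, 0.2, 0.7], [0.2, 0.6, 0.2]))  # 0.2*0.9 + 0.6*0.7 + 0.2*0.2 = 0.64
```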

  15. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    NASA Astrophysics Data System (ADS)

    Yan, Wensheng; Cumming, Benjamin P.; Gu, Min

    2015-07-01

    Micrometer-sized parabolic mirror arrays have significant applications in both light-emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle to large-scale application of the mirror arrays, due to the serial nature of the conventional method. Here, the mirror arrays are fabricated using parallel laser direct-write processing, which addresses this barrier. In addition, it is demonstrated that the parallel writing is able to fabricate complex arrays as well as simple arrays and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at a height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values.

  16. Post-Processing of the Full Matrix of Ultrasonic Transmit-Receive Array Data for Guided Wave Pipe Inspection

    NASA Astrophysics Data System (ADS)

    Velichko, A.; Wilcox, P. D.

    2009-03-01

    The paper describes a method for processing data from a guided wave transducer array on a pipe. The raw data set from such an array contains the full matrix of time-domain signals from each transmitter-receiver combination. It is shown that, for certain array configurations, the total focusing method can be applied, which allows the array to be focused at every point on the pipe surface in both transmission and reception. The effect of array configuration parameters on the sensitivity of the proposed method to random and coherent noise is discussed. Experimental results are presented using electromagnetic acoustic transducers (EMATs) for exciting and detecting the S0 Lamb wave mode in a 12-inch steel pipe at a 200 kHz excitation frequency. The results show that, using the imaging algorithm, a 2-mm-diameter (0.08 wavelength) half-thickness hole can be detected.
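    For reference, the total focusing method applied above is a delay-and-sum operation over the full matrix of transmit-receive signals, evaluated at every image point. The sketch below shows the basic computation for a planar geometry with a uniform wave speed; the array layout, sampling parameters, and variable names are assumptions rather than details of the pipe-specific processing in the paper.

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Delay-and-sum total focusing method on full-matrix capture (FMC) data.

    fmc     : (Ntx, Nrx, Nt) time-domain signals for every transmitter-receiver pair
    elem_x  : (N,) element positions along the array (m); elements lie at z = 0
    grid_x, grid_z : 1-D image grid coordinates (m)
    c       : wave speed (m/s); fs : sampling frequency (Hz)
    """
    ntx, nrx, nt = fmc.shape
    img = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elem_x - x, z)                 # one-way element-to-point distances
            for tx in range(ntx):
                # round-trip delay tx -> point -> each rx, converted to a sample index
                idx = np.round((d[tx] + d) / c * fs).astype(int)
                valid = idx < nt
                img[iz, ix] += fmc[tx, np.arange(nrx)[valid], idx[valid]].sum()
    return img
```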

  17. Formation process of TiO2 nanotube arrays prepared by anodic oxidation method.

    PubMed

    Li, Hongyi; Liu, Man; Wang, Hong; Wu, Junshu; Su, Penglei; Li, Dasheng; Wang, Jinshu

    2013-06-01

    TiO2 nanotube array thin films have great potential in many fields, such as solar cells, photocatalysts, photo-induced cathodic protection for metals, and bioactivity. In order to investigate the formation process of the TiO2 nanotube array thin films, the EIS spectra and current density were measured during anodic oxidation. The results showed that the formation process could be divided into four stages: the current density decreased sharply in the first stage, increased in the second stage, then declined, and finally remained at a constant value. In addition, the current density increased with the anodic voltage. The EIS spectrum varied at each stage. The simulated equivalent circuit was composed of three sections: the first indicated the resistance of the electrolyte, the second described the double-layer structure between the electrolyte and the titanium electrode, and the third was an inductive loop representing the anions adsorbed on the surface of the TiO2 nanotube walls. The more anions were adsorbed, the higher the value of the inductive loop. The EIS results showed that this value increased with the applied voltage, which means that more anions were adsorbed under the higher anodic voltage.

  18. Correlation of lattice defects and thermal processing in the crystallization of titania nanotube arrays

    NASA Astrophysics Data System (ADS)

    Hosseinpour, Pegah M.; Yung, Daniel; Panaitescu, Eugen; Heiman, Don; Menon, Latika; Budil, David; Lewis, Laura H.

    2014-12-01

    Titania nanotubes have the potential to be employed in a wide range of energy-related applications such as solar energy-harvesting devices and hydrogen production. As the functionality of titania nanostructures is critically affected by their morphology and crystallinity, it is necessary to understand and control these factors in order to engineer useful materials for green applications. In this study, electrochemically-synthesized titania nanotube arrays were thermally processed in inert and reducing environments to isolate the role of post-synthesis processing conditions on the crystallization behavior, electronic structure and morphology development in titania nanotubes, correlated with the nanotube functionality. Structural and calorimetric studies revealed that as-synthesized amorphous nanotubes crystallize to form the anatase structure in a three-stage process that is facilitated by the creation of structural defects. It is concluded that processing in a reducing gas atmosphere versus in an inert environment provides a larger unit cell volume and a higher concentration of Ti3+ associated with oxygen vacancies, thereby reducing the activation energy of crystallization. Further, post-synthesis annealing in either reducing or inert atmospheres produces pronounced morphological changes, confirming that the nanotube arrays thermally transform into a porous morphology consisting of a fragmented tubular architecture surrounded by a network of connected nanoparticles. This study links explicit data concerning morphology, crystallization and defects, and shows that the annealing gas environment determines the details of the crystal structure, the electronic structure and the morphology of titania nanotubes. These factors, in turn, impact the charge transport and consequently the functionality of these nanotubes as photocatalysts.

  19. Parallel pipeline networking and signal processing with field-programmable gate arrays (FPGAs) and VCSEL-MSM smart pixels

    NASA Astrophysics Data System (ADS)

    Kuznia, C. B.; Sawchuk, Alexander A.; Zhang, Liping; Hoanca, Bogdan; Hong, Sunkwang; Min, Chris; Pansatiankul, Dhawat E.; Alpaslan, Zahir Y.

    2000-05-01

    We present a networking and signal processing architecture called Transpar-TR (Translucent Smart Pixel Array-Token- Ring) that utilizes smart pixel technology to perform 2D parallel optical data transfer between digital processing nodes. Transpar-TR moves data through the network in the form of 3D packets (2D spatial and 1D time). By utilizing many spatial parallel channels, Transpar-TR can achieve high throughput, low latency communication between nodes, even with each channel operating at moderate data rates. The 2D array of optical channels is created by an array of smart pixels, each with an optical input and optical output. Each smart pixel consists of two sections, an optical network interface and ALU-based processor with local memory. The optical network interface is responsible for transmitting and receiving optical data packets using a slotted token ring network protocol. The smart pixel array operates as a single-instruction multiple-data processor when processing data. The Transpar-TR network, consisting of networked smart pixel arrays, can perform pipelined parallel processing very efficiently on 2D data structures such as images and video. This paper discusses the Transpar-TR implementation in which each node is the printed circuit board integration of a VCSEL-MSM chip, a transimpedance receiver array chip and an FPGA chip.

  20. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  1. The HgI2 energy dispersive x-ray array detectors and miniaturized processing electronics project

    SciTech Connect

    Iwanczyk, J.S.; Dorri, N.; Wang, M.; Szawlowski (Inst. of Physics); Patt, W.K.; Hedman, B.; Hodgson, K.O. (Stanford Synchrotron Radiation Lab.)

    1990-04-01

    This paper describes recent progress in the development of HgI2 energy dispersive x-ray detector arrays for synchrotron radiation research and their associated miniaturized processing electronics. Deploying a 5-element HgI2 array detector under realistic operating conditions at SSRL, an energy resolution of 252 eV FWHM at 5.9 keV (Mn-Kα) was obtained. The authors also report energy resolution and throughput measurements versus input count rate. The results from the HgI2 system are then compared to those obtained under identical conditions from a commercial 13-element Ge detector array.

  2. MTS in false positive reduction for multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Gosnell, Michael; Cudney, Elizabeth

    2014-05-01

    The Mahalanobis Taguchi System (MTS) is a relatively new tool in the vehicle health maintenance domain, but it has some distinct advantages in current multi-sensor implementations. The use of Mahalanobis Spaces (MS) allows the algorithm to identify characteristics of sensor signals that indicate machine behaviors. MTS is extremely powerful, with the caveat that the correct variables must be selected to form the MS. In this research work, 56 sensors monitor various aspects of the vehicles. Typically, in the MTS process, identification of useful variables is preceded by validation of the measurement scale. However, the MTS approach does not directly include any mitigating steps should the measurement scale not be validated. Existing work has performed outlier removal in construction of the MS, which can lead to better validation. In our approach, we modify the outlier removal process with more liberal definitions of outliers to better identify variables' impact prior to identification of useful variables. This subtle change substantially lowered the false positive rate because additional variables were retained. Traditional MTS approaches identify useful variables only to the extent that they help identify the positive (abnormal) condition; the impact of removing false negatives is not included. Initial results show our approach can reduce false positive values while still maintaining complete fault identification for this vehicle data set.
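    The abnormality scoring at the heart of MTS is the Mahalanobis distance of a new observation from a reference group of healthy measurements. The following minimal sketch shows that scoring step; the reference data, scaling, and threshold are assumptions for illustration and do not reproduce the variable-selection procedure studied in the paper.

```python
import numpy as np

def mahalanobis_scores(reference, samples):
    """Score samples by squared Mahalanobis distance from a 'normal' reference group.

    reference : (N, D) sensor readings collected under healthy conditions
    samples   : (M, D) readings to be scored
    """
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(reference, rowvar=False))  # pseudo-inverse guards against singularity
    diff = samples - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)          # squared Mahalanobis distances
    return d2 / reference.shape[1]                              # MD^2 / D is near 1 for healthy data

# Scores well above ~1 would be flagged as abnormal; the exact threshold is application-specific.
```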

  3. A hierarachical data structure representation for fusing multisensor information

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-12-31

    A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data-element, segment/feature, or symbolic level, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data; it represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  4. A hierarachical data structure representation for fusing multisensor information

    SciTech Connect

    Maren, A.J. (Space Inst.); Pap, R.M.; Harston, C.T.

    1989-01-01

    A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data-element, segment/feature, or symbolic level, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data; it represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  5. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J. (Space Inst.); Pap, R.M.; Harston, C.T.

    1989-01-01

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic level, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  6. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-12-31

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic level, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  7. Range-dependent geoacoustic inversion of vertical line array data using matched beam processing.

    PubMed

    Kim, Kyungseop; Seong, Woojae; Lee, Keunhwa; Kim, Seongil; Shim, Taebo

    2009-02-01

    This paper describes the results of range-dependent geoacoustic inversion using vertical line array data obtained from the 4th Matched Acoustic Properties and Localization Experiment conducted in the East Sea of Korea. The narrowband multitone continuous-wave signal from the towed source was analyzed to estimate the range-dependent geoacoustic properties along the radial track. The primary approach is based on a sectorwise inversion scheme. The inversion region, up to 7.5 km from the vertical line array, was divided into several segments, and the subinversions for each segment were performed sequentially. To reduce the dominance of low-angle arrivals, which bear little information about the bottom segment in question, matched beam processing with beam filtering was used for the cost function. The performance of the proposed algorithm was tested using simulated data for an environment representative of the experimental site. The inversion results for the experimental data were consistent with the geophysical database and were validated by matched-field source localization using frequencies different from those used in the inversion.

  8. Ultrasound Nondestructive Evaluation (NDE) Imaging with Transducer Arrays and Adaptive Processing

    PubMed Central

    Li, Minghui; Hayward, Gordon

    2012-01-01

    This paper addresses the challenging problem of ultrasonic non-destructive evaluation (NDE) imaging with adaptive transducer arrays. In NDE applications, most materials used extensively in industry and civil engineering, such as concrete, stainless steel and carbon-reinforced composites, exhibit a heterogeneous internal structure. When inspected using ultrasound, the signals from defects are significantly corrupted by echoes from randomly distributed scatterers; even defects that are much larger than these random reflectors are difficult to detect with the conventional delay-and-sum operation. We propose to apply adaptive beamforming to the received data samples to reduce the interference and clutter noise. Beamforming manipulates the array beam pattern by appropriately weighting the per-element delayed data samples prior to summing them. The adaptive weights are computed from the statistical analysis of the data samples. This delay-weight-and-sum process can be viewed as applying a lateral spatial filter to the signals across the probe aperture. Simulations show that the clutter noise is reduced by more than 30 dB and the lateral resolution is enhanced simultaneously when adaptive beamforming is applied. In experiments inspecting a steel block with side-drilled holes, good quantitative agreement with simulation results is demonstrated. PMID:22368457

  9. Ultrasound nondestructive evaluation (NDE) imaging with transducer arrays and adaptive processing.

    PubMed

    Li, Minghui; Hayward, Gordon

    2012-01-01

    This paper addresses the challenging problem of ultrasonic non-destructive evaluation (NDE) imaging with adaptive transducer arrays. In NDE applications, most materials used extensively in industry and civil engineering, such as concrete, stainless steel and carbon-reinforced composites, exhibit a heterogeneous internal structure. When inspected using ultrasound, the signals from defects are significantly corrupted by echoes from randomly distributed scatterers; even defects that are much larger than these random reflectors are difficult to detect with the conventional delay-and-sum operation. We propose to apply adaptive beamforming to the received data samples to reduce the interference and clutter noise. Beamforming manipulates the array beam pattern by appropriately weighting the per-element delayed data samples prior to summing them. The adaptive weights are computed from the statistical analysis of the data samples. This delay-weight-and-sum process can be viewed as applying a lateral spatial filter to the signals across the probe aperture. Simulations show that the clutter noise is reduced by more than 30 dB and the lateral resolution is enhanced simultaneously when adaptive beamforming is applied. In experiments inspecting a steel block with side-drilled holes, good quantitative agreement with simulation results is demonstrated.
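    The data-adaptive weighting described in these two records can take several forms; one widely used choice is the minimum-variance (Capon) beamformer, sketched below for per-element snapshots that have already been delayed to the focal point. This is an illustrative example of adaptive weight computation, not necessarily the specific scheme implemented by the authors, and the loading factor is an assumption.

```python
import numpy as np

def mvdr_weights(snapshots, steering, loading=1e-3):
    """Minimum-variance (Capon) weights as one example of data-adaptive beamforming.

    snapshots : (N_elements, N_snapshots) complex data after focusing delays
    steering  : (N_elements,) steering vector (all ones if the data are pre-delayed)
    loading   : diagonal loading factor for numerical robustness
    """
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]     # sample covariance matrix
    R += loading * np.trace(R).real / n * np.eye(n)             # diagonal loading
    Ri_a = np.linalg.solve(R, steering)
    return Ri_a / (steering.conj() @ Ri_a)                      # unit gain toward the focal point

# The beamformed output for each time sample is then w.conj() @ snapshot.
```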

  10. Real-time processing for Fourier domain optical coherence tomography using a field programmable gate array

    PubMed Central

    Ustun, Teoman E.; Iftimia, Nicusor V.; Ferguson, R. Daniel; Hammer, Daniel X.

    2008-01-01

    Real-time display of processed Fourier domain optical coherence tomography (FDOCT) images is important for applications that require instant feedback of image information, for example, systems developed for rapid screening or image-guided surgery. However, the computational requirements for high-speed FDOCT image processing usually exceed the capabilities of most computers, and therefore display rates rarely match acquisition rates for most devices. We have designed and developed an image processing system, including hardware based upon a field-programmable gate array, firmware, and software, that enables real-time display of processed images at rapid line rates. The system was designed to be extremely flexible and is inserted in-line between any FDOCT detector and any Camera Link frame grabber. Two versions were developed, for spectrometer-based and swept-source-based FDOCT systems, the latter having an additional custom high-speed digitizer on the front end but using all the capabilities and features of the former. The system was tested in humans and monkeys using an adaptive optics retinal imager, in zebrafish using a dual-beam Doppler instrument, and in human tissue using a swept-source microscope. A display frame rate of 27 fps for fully processed FDOCT images (1024 axial pixels × 512 lateral A-scans) was achieved in the spectrometer-based systems. PMID:19045902

  11. Real-time processing for Fourier domain optical coherence tomography using a field programmable gate array

    NASA Astrophysics Data System (ADS)

    Ustun, Teoman E.; Iftimia, Nicusor V.; Ferguson, R. Daniel; Hammer, Daniel X.

    2008-11-01

    Real-time display of processed Fourier domain optical coherence tomography (FDOCT) images is important for applications that require instant feedback of image information, for example, systems developed for rapid screening or image-guided surgery. However, the computational requirements for high-speed FDOCT image processing usually exceed the capabilities of most computers, and therefore display rates rarely match acquisition rates for most devices. We have designed and developed an image processing system, including hardware based upon a field-programmable gate array, firmware, and software, that enables real-time display of processed images at rapid line rates. The system was designed to be extremely flexible and is inserted in-line between any FDOCT detector and any Camera Link frame grabber. Two versions were developed, for spectrometer-based and swept-source-based FDOCT systems, the latter having an additional custom high-speed digitizer on the front end but using all the capabilities and features of the former. The system was tested in humans and monkeys using an adaptive optics retinal imager, in zebrafish using a dual-beam Doppler instrument, and in human tissue using a swept-source microscope. A display frame rate of 27 fps for fully processed FDOCT images (1024 axial pixels × 512 lateral A-scans) was achieved in the spectrometer-based systems.
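    The per-line computation that such hardware accelerates typically consists of wavelength-to-wavenumber resampling, background removal, windowing, and a Fourier transform. The sketch below is a minimal software model of that chain; the calibration vector, window, and normalization are assumptions and do not describe the authors' FPGA design.

```python
import numpy as np

def fdoct_ascan(spectrum, wavelengths):
    """Form one FDOCT A-scan from a raw spectrometer line (illustrative only).

    spectrum    : (N,) raw interference spectrum sampled in wavelength
    wavelengths : (N,) spectrometer wavelength calibration (m), increasing
    """
    k = 2 * np.pi / wavelengths                          # convert to wavenumber
    k_lin = np.linspace(k.min(), k.max(), k.size)        # uniform-k grid
    spec_k = np.interp(k_lin, k[::-1], spectrum[::-1])   # resample (k decreases as wavelength increases)
    spec_k -= spec_k.mean()                              # crude DC / background removal
    ascan = np.abs(np.fft.fft(spec_k * np.hanning(k.size)))[: k.size // 2]
    return 20 * np.log10(ascan + 1e-12)                  # log-magnitude depth profile
```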

  12. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    NASA Astrophysics Data System (ADS)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for endoscopy should be small and able to produce a good-quality image or video, to reduce discomfort of the patients and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm × 1 mm × 1.65 mm is used. Due to the physical properties of the sensors and the limitations of the human vision system, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. A viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 endpoint in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6 LX150 FPGA.
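    Gamma correction, one of the stages named above, is commonly realized in an FPGA as a per-pixel lookup table held in block RAM. The short sketch below models that single stage in software; the gamma value, frame size, and function names are illustrative assumptions, not details of the paper's pipeline.

```python
import numpy as np

def gamma_lut(gamma=2.2, bits=8):
    """Precompute a gamma-correction lookup table, as would be stored in FPGA block RAM."""
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

def apply_gamma(frame, lut):
    """Apply the LUT to an 8-bit frame; in hardware this is one table lookup per pixel."""
    return lut[frame]

lut = gamma_lut()
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in sensor frame
corrected = apply_gamma(frame, lut)
```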

  13. Improving GPR Surveys Productivity by Array Technology and Fully Automated Processing

    NASA Astrophysics Data System (ADS)

    Morello, Marco; Ercoli, Emanuele; Mazzucchelli, Paolo; Cottino, Edoardo

    2016-04-01

    The realization of network infrastructures with lower environmental impact and the tendency to use digging technologies that are less invasive in terms of time and space of road occupation and restoration play a key role in the development of communication networks. However, pre-existing buried utilities must be detected and located in the subsurface to exploit the high productivity of modern digging apparatus. According to SUE quality level B+, both the position and depth of subsurface utilities must be accurately estimated, demanding 3D GPR surveys. In fact, the advantages of 3D GPR acquisitions (obtained either by multiple 2D recordings or by an antenna array) over 2D acquisitions are well known. Nonetheless, the amount of data acquired in such 3D acquisitions does not usually allow processing and interpretation to be completed directly in the field and in real time, thus limiting the overall efficiency of the GPR acquisition. As an example, the "low-impact mini-trench" technique (addressed in the ITU - International Telecommunication Union - L.83 recommendation) requires that non-destructive mapping of buried services enhance its productivity to match the improvements of new digging equipment. Nowadays, multi-antenna and multi-pass GPR acquisitions demand new processing techniques that can obtain high-quality subsurface images, taking full advantage of 3D data: the development of a fully automated, real-time 3D GPR processing system plays a key role in overall optical network deployment profitability. Furthermore, currently available computing power suggests the feasibility of processing schemes that incorporate better focusing algorithms. A novel processing scheme, whose goal is the automated processing and detection of buried targets and which can be applied in real time to 3D GPR array systems, has been developed and fruitfully tested with two different GPR arrays (16 antennas, 900 MHz central frequency, and 34 antennas, 600 MHz central frequency). The proposed processing

  14. Inhibition of clot formation in deterministic lateral displacement arrays for processing large volumes of blood for rare cell capture.

    PubMed

    D'Silva, Joseph; Austin, Robert H; Sturm, James C

    2015-05-21

    Microfluidic deterministic lateral displacement (DLD) arrays have been applied for fractionation and analysis of cells in quantities of ~100 μL of blood, with processing of larger quantities limited by clogging in the chip. In this paper, we (i) demonstrate that this clogging phenomenon is due to conventional platelet-driven clot formation, (ii) identify and inhibit the two dominant biological mechanisms driving this process, and (iii) characterize how further reductions in clot formation can be achieved through higher flow rates and blood dilution. Following from these three advances, we demonstrate processing of 14 mL equivalent volume of undiluted whole blood through a single DLD array in 38 minutes to harvest PC3 cancer cells with ~86% yield. It is possible to fit more than 10 such DLD arrays on a single chip, which would then provide the capability to process well over 100 mL of undiluted whole blood on a single chip in less than one hour.

  15. A laser-assisted process to produce patterned growth of vertically aligned nanowire arrays for monolithic microwave integrated devices.

    PubMed

    Kerckhoven, Vivien Van; Piraux, Luc; Huynen, Isabelle

    2016-06-10

    An experimental process for the fabrication of microwave devices made of nanowire arrays embedded in a dielectric template is presented. A pulse laser process is used to produce a patterned surface mask on alumina templates, defining precisely the wire growing areas during electroplating. This technique makes it possible to finely position multiple nanowire arrays in the template, as well as produce large areas and complex structures, combining transmission line sections with various nanowire heights. The efficiency of this process is demonstrated through the realisation of a microstrip electromagnetic band-gap filter and a substrate-integrated waveguide.

  16. Enhanced Processing for a Towed Array Using an Optimal Noise Canceling Approach

    SciTech Connect

    Sullivan, E J; Candy, J V

    2005-07-21

    Noise self-generated by a surface ship towing an array in search of a weak target presents a major problem for the signal processing, especially if broadband techniques are being employed. In this paper we discuss the development and application of an adaptive noise-canceling processor capable of extracting a weak far-field acoustic target in a noisy ocean acoustic environment. The fundamental idea for this processor is to use a model-based approach incorporating both target and ship noise. Here we briefly describe the underlying theory and then demonstrate through simulation how effectively the canceller and target enhancer perform. The adaptivity of the processor not only enables the "tracking" of the canceller coefficients, but also the estimation of target parameters for localization. This approach, which is termed "joint" cancellation and enhancement, produces the optimal estimate of both in a minimum (error) variance sense.
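    The processor in this record is model-based; as a simpler illustration of the same noise-cancelling idea, the sketch below implements a classical LMS canceller that subtracts an estimate of the tow-ship noise derived from a noise-only reference channel. All signal names, tap counts, and step sizes are hypothetical.

```python
import numpy as np

def lms_canceller(primary, reference, n_taps=32, mu=1e-3):
    """Classical LMS adaptive noise canceller (an illustration, not the paper's
    model-based processor). 'primary' contains target plus tow-ship noise;
    'reference' is a noise-only reference channel."""
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, primary.size):
        x = reference[n - n_taps:n][::-1]     # most recent reference samples
        noise_est = w @ x
        out[n] = primary[n] - noise_est       # error signal = cleaned output
        w += 2 * mu * out[n] * x              # LMS weight update
    return out
```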

  17. Statistical Analysis of the Performance of MDL Enumeration for Multiple-Missed Detection in Array Processing.

    PubMed

    Du, Fei; Li, Yibo; Jin, Shijiu

    2015-08-18

    An accurate performance analysis on the MDL criterion for source enumeration in array processing is presented in this paper. The enumeration results of MDL can be predicted precisely by the proposed procedure via the statistical analysis of the sample eigenvalues, whose distributive properties are investigated with the consideration of their interactions. A novel approach is also developed for the performance evaluation when the source number is underestimated by a number greater than one, which is denoted as "multiple-missed detection", and the probability of a specific underestimated source number can be estimated by ratio distribution analysis. Simulation results are included to demonstrate the superiority of the presented method over available results and confirm the ability of the proposed approach to perform multiple-missed detection analysis.

  18. Statistical Analysis of the Performance of MDL Enumeration for Multiple-Missed Detection in Array Processing

    PubMed Central

    Du, Fei; Li, Yibo; Jin, Shijiu

    2015-01-01

    An accurate performance analysis on the MDL criterion for source enumeration in array processing is presented in this paper. The enumeration results of MDL can be predicted precisely by the proposed procedure via the statistical analysis of the sample eigenvalues, whose distributive properties are investigated with the consideration of their interactions. A novel approach is also developed for the performance evaluation when the source number is underestimated by a number greater than one, which is denoted as “multiple-missed detection”, and the probability of a specific underestimated source number can be estimated by ratio distribution analysis. Simulation results are included to demonstrate the superiority of the presented method over available results and confirm the ability of the proposed approach to perform multiple-missed detection analysis. PMID:26295232
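    The MDL criterion analyzed in these two records follows the standard Wax-Kailath form, which compares the geometric and arithmetic means of the trailing sample eigenvalues and adds a model-complexity penalty. A minimal sketch of the estimator is given below; the example eigenvalues are invented for illustration.

```python
import numpy as np

def mdl_enumeration(eigenvalues, n_snapshots):
    """Wax-Kailath MDL estimate of the number of sources from sample eigenvalues.

    eigenvalues : eigenvalues of the sample covariance matrix
    n_snapshots : number of snapshots N used to form the covariance
    """
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    p = lam.size
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)   # geometric / arithmetic mean
        mdl[k] = (-n_snapshots * (p - k) * np.log(ratio)
                  + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))

# Two dominant eigenvalues over a flat noise floor: the estimate should be 2.
print(mdl_enumeration([9.1, 5.3, 1.02, 0.99, 1.01, 0.98], 200))
```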

  19. Multisensor based robotic manipulation in an uncalibrated manufacturing workcell

    SciTech Connect

    Ghosh, B.K.; Xiao, Di; Xi, Ning; Tarn, Tzyh-Jong

    1997-12-31

    The main problem that we address in this paper is how a robot manipulator can track and grasp a part placed arbitrarily on a moving disc conveyor, aided by a single CCD camera and by fusing information from encoders placed on the conveyor and on the robot manipulator. The important assumption that distinguishes our work from what has been previously reported in the literature is that the position and orientation of the camera and the base frame of the robot are a priori assumed to be unknown and are "visually calibrated" during the operation of the manipulator. Moreover, the part placed on the conveyor is assumed to be non-planar, i.e., the feature points observed on the part are assumed to be located arbitrarily in IR^3. The novelties of the approach proposed in this paper include (i) a multisensor fusion scheme based on complementary data for the purpose of part localization, and (ii) self-calibration between the turntable and the robot manipulator using visual data and feature points on the end-effector. The principal advantages of the proposed scheme are the following: (i) it makes it possible to reconfigure a manufacturing workcell without recalibrating the relation between the turntable and the robot, which significantly shortens the setup time of the workcell; (ii) it greatly weakens the requirement on the image processing speed.

  20. Adaptive multi-sensor integration for mine detection

    SciTech Connect

    Baker, J.E.

    1997-05-01

    State-of-the-art multi-sensor integration (MSI) applications involve extensive research and development time to understand and characterize the application domain; to determine and define the appropriate sensor suite; to analyze, characterize, and calibrate the individual sensor systems; to recognize and accommodate the various sensor interactions; and to develop and optimize robust merging code. Much of this process can benefit from adaptive learning, i.e., an output-based system can take raw sensor data and desired merged results as input and adaptively develop/determine an effective method of interpretation and merging. This approach significantly reduces the time required to apply MSI to a given application, while increasing the quality of the final result, and it provides a quantitative measure for comparing competing MSI techniques and sensor suites. The ability to automatically develop and optimize MSI techniques for new sensor suites and operating environments makes this approach well suited to the detection of mines and mine-like targets. Perhaps more than any other, this application domain is characterized by diverse, innovative, and dynamic sensor suites, whose nature and interactions are not yet well established. This paper presents such an outcome-based multi-image analysis system. An empirical evaluation of its performance and of its application, sensor, and domain robustness is presented.

  1. Efficient architecture for a multichannel array subbanding system with adaptive processing

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Nguyen, Huy T.

    2000-11-01

    An architecture is presented for front-end processing in a wideband array system which samples real signals. Such a system may be encountered in cellular telephony, radar, or low SNR digital communications receivers. The subbanding of data enables system data rate reduction, and creates a narrowband condition for adaptive processing within the subbands. The front-end performs passband filtering, equalization, subband decomposition and adaptive beamforming. The subbanding operation is efficiently implemented using a prototype lowpass finite impulse response (FIR) filter, decomposed into polyphase form, combined with a Fast Fourier Transform (FFT) block and a bank of modulating postmultipliers. If the system acquires real inputs, a single FFT may be used to operate on two channels, but a channel separation network is then required for recovery of individual channel data. A sequence of steps is described based on data transformation techniques that enables a maximally efficient implementation of the processing stages and eliminates the need for channel separation. Operation count is reduced, and several layers of processing are eliminated.
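    The subbanding structure described above (a lowpass prototype in polyphase form followed by an FFT) can be modeled compactly in its weighted overlap-add form. The sketch below shows that analysis stage for real input; it ignores the equalization, beamforming, and exact phase and branch-ordering conventions of the paper, and all parameter names are assumptions.

```python
import numpy as np

def polyphase_fft_analysis(x, prototype, n_bands, hop=None):
    """Weighted overlap-add form of a polyphase/FFT analysis filter bank.

    x         : 1-D array of real input samples
    prototype : lowpass prototype FIR whose length is a multiple of n_bands
    n_bands   : number of uniform subbands produced by the FFT
    hop       : frame advance in samples (defaults to critical decimation, n_bands)
    Returns an (n_frames, n_bands) complex array of subband samples.
    """
    hop = hop or n_bands
    L = prototype.size
    assert L % n_bands == 0
    frames = []
    for start in range(0, x.size - L + 1, hop):
        windowed = x[start:start + L] * prototype            # weight by the prototype filter
        folded = windowed.reshape(-1, n_bands).sum(axis=0)   # time-alias down to n_bands points
        frames.append(np.fft.fft(folded))                    # the FFT separates the subbands
    return np.array(frames)
```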

  2. Optoelectronic parallel processing with smart pixel arrays for automated screening of cervical smear imagery

    NASA Astrophysics Data System (ADS)

    Metz, John Langdon

    2000-10-01

    This thesis investigates the use of optoelectronic parallel processing systems with smart photosensor arrays (SPAs) to examine cervical smear images. The automation of cervical smear screening seeks to reduce human workload and improve the accuracy of detecting pre-cancerous and cancerous conditions. Increasing the parallelism of image processing improves the speed and accuracy of locating regions of interest (ROIs) in images of the cervical smear during the first stage of a two-stage screening system. The two-stage approach first detects ROIs optoelectronically before classifying them using more time-consuming electronic algorithms. The optoelectronic hit/miss transform (HMT) is computed using gray-scale modulation spatial light modulators in an optical correlator. To further the parallelism of this system, a novel CMOS SPA computes the post-processing steps required by the HMT algorithm. The SPA reduces the bandwidth subsequently passed into the second, electronic image processing stage, which classifies the detected ROIs. Limitations in the miss operation of the HMT suggest using only the hit operation for detecting ROIs. This makes possible a single-SPA-chip approach, using only the hit operation for ROI detection, which may replace the optoelectronic correlator in the screening system. Both the HMT SPA postprocessor and the SPA ROI detector design provide compact, efficient, and low-cost optoelectronic solutions for performing ROI detection on cervical smears. Analysis of optoelectronic ROI detection with electronic ROI classification shows these systems have the potential to perform at, or above, the current error rates for manual classification of cervical smears.
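    For reference, the digital counterpart of the hit/miss transform used above is a binary morphological operation that flags pixels whose neighborhood matches a foreground ("hit") pattern while a second ("miss") pattern falls entirely in the background. The sketch below uses SciPy to detect isolated foreground pixels; the structuring elements and test image are illustrative only, and the thesis implements the operation optoelectronically on gray-scale data rather than digitally.

```python
import numpy as np
from scipy import ndimage

# Toy binary image: one isolated pixel (a stand-in for a small region of interest)
image = np.zeros((7, 7), dtype=bool)
image[3, 3] = True
image[0, 0:3] = True                  # a line segment that should *not* be flagged

hit = np.ones((1, 1), dtype=bool)     # the pixel itself must be foreground
miss = np.ones((3, 3), dtype=bool)
miss[1, 1] = False                    # its 8-neighborhood must be background

detected = ndimage.binary_hit_or_miss(image, structure1=hit, structure2=miss)
print(np.argwhere(detected))          # -> [[3 3]]
```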

  3. Alternative post-processing on a CMOS chip to fabricate a planar microelectrode array.

    PubMed

    López-Huerta, Francisco; Herrera-May, Agustín L; Estrada-López, Johan J; Zuñiga-Islas, Carlos; Cervantes-Sanchez, Blanca; Soto, Enrique; Soto-Cruz, Blanca S

    2011-01-01

    We present an alternative post-processing on a CMOS chip to release a planar microelectrode array (pMEA) integrated with its signal readout circuit, which can be used for monitoring the neuronal activity of vestibular ganglion neurons in newborn Wistar strain rats. The chip is fabricated through a 0.6 μm CMOS standard process and includes a pMEA with 12 electrodes arranged in a 4 × 3 matrix. The alternative CMOS post-process includes the development of masks to protect the readout circuit and the power supply pads. A wet etching process eliminates the aluminum located on the surface of the p+-type silicon. This silicon is used as a transducer for recording the neuronal activity and as an interface between the readout circuit and the neurons. The readout circuit is composed of an amplifier and a tunable bandpass filter, which is placed on a 0.015 mm2 silicon area. The tunable bandpass filter has a bandwidth of 98 kHz and a common-mode rejection ratio (CMRR) of 87 dB. These characteristics of the readout circuit are appropriate for neuronal recording applications.

  4. Alternative Post-Processing on a CMOS Chip to Fabricate a Planar Microelectrode Array

    PubMed Central

    López-Huerta, Francisco; Herrera-May, Agustín L.; Estrada-López, Johan J.; Zuñiga-Islas, Carlos; Cervantes-Sanchez, Blanca; Soto, Enrique; Soto-Cruz, Blanca S.

    2011-01-01

    We present an alternative post-processing on a CMOS chip to release a planar microelectrode array (pMEA) integrated with its signal readout circuit, which can be used for monitoring the neuronal activity of vestibular ganglion neurons in newborn Wistar strain rats. The chip is fabricated through a 0.6 μm CMOS standard process and includes a pMEA with 12 electrodes arranged in a 4 × 3 matrix. The alternative CMOS post-process includes the development of masks to protect the readout circuit and the power supply pads. A wet etching process eliminates the aluminum located on the surface of the p+-type silicon. This silicon is used as a transducer for recording the neuronal activity and as an interface between the readout circuit and the neurons. The readout circuit is composed of an amplifier and a tunable bandpass filter, which is placed on a 0.015 mm2 silicon area. The tunable bandpass filter has a bandwidth of 98 kHz and a common-mode rejection ratio (CMRR) of 87 dB. These characteristics of the readout circuit are appropriate for neuronal recording applications. PMID:22346681

  5. Multi-Sensor Aerosol Products Sampling System

    NASA Technical Reports Server (NTRS)

    Petrenko, M.; Ichoku, C.; Leptoukh, G.

    2011-01-01

    Global and local properties of atmospheric aerosols have been extensively observed and measured using both spaceborne and ground-based instruments, especially during the last decade. Unique properties retrieved by the different instruments contribute to an unprecedented availability of the most complete set of complementary aerosol measurements ever acquired. However, some of these measurements remain underutilized, largely due to the complexities involved in analyzing them synergistically. To characterize the inconsistencies and bridge the gap that exists between the sensors, we have established a Multi-sensor Aerosol Products Sampling System (MAPSS), which consistently samples and generates the spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of aerosol products from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS. Samples of satellite aerosol products are extracted over Aerosol Robotic Network (AERONET) locations as well as over other locations of interest, such as those with available ground-based aerosol observations. In this way, MAPSS enables a direct cross-characterization and data integration between Level-2 aerosol observations from multiple sensors. In addition, the available well-characterized co-located ground-based data provide the basis for the integrated validation of these products. This paper explains the sampling methodology and concepts used in MAPSS and demonstrates specific examples of using MAPSS for an integrated analysis of multiple aerosol products.
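    A toy version of the per-site statistics that MAPSS reports for a pair of co-located aerosol products is sketched below; the variable names, input shapes, and the restriction to pixels valid in both products are illustrative assumptions, not the MAPSS implementation.

```python
import numpy as np

def colocation_stats(product_a, product_b):
    """Spatial statistics for two aerosol products sampled around the same site.

    product_a, product_b : arrays of AOD values from the pixels surrounding a site,
    with NaN marking missing retrievals.
    """
    a = np.asarray(product_a, dtype=float)
    b = np.asarray(product_b, dtype=float)
    mask = ~np.isnan(a) & ~np.isnan(b)          # use only pixels valid in both products
    a, b = a[mask], b[mask]
    return {
        "mean_a": a.mean(), "std_a": a.std(ddof=1),
        "mean_b": b.mean(), "std_b": b.std(ddof=1),
        "spatial_corr": np.corrcoef(a, b)[0, 1],
        "n_pixels": int(a.size),
    }
```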

  6. An Integrated Model for Robust Multisensor Data Fusion

    PubMed Central

    Shen, Bo; Liu, Yun; Fu, Jun-Song

    2014-01-01

    This paper presents an integrated model aimed at obtaining robust and reliable results in decision level multisensor data fusion applications. The proposed model is based on the connection of Dempster-Shafer evidence theory and an extreme learning machine. It includes three main improvement aspects: a mass constructing algorithm to build reasonable basic belief assignments (BBAs); an evidence synthesis method to get a comprehensive BBA for an information source from several mass functions or experts; and a new way to make high-precision decisions based on an extreme learning machine (ELM). Compared to some universal classification methods, the proposed one can be directly applied in multisensor data fusion applications, but not only for conventional classifications. Experimental results demonstrate that the proposed model is able to yield robust and reliable results in multisensor data fusion problems. In addition, this paper also draws some meaningful conclusions, which have significant implications for future studies. PMID:25340445
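
    The evidence-combination step this model builds on can be illustrated with Dempster's classical rule for two basic belief assignments. The sketch below is generic and hedged: the combine function, the two-hypothesis frame, and the mass values are illustrative assumptions, not the paper's mass-construction or ELM code.

```python
# Generic Dempster's rule of combination for two basic belief assignments (BBAs),
# shown only to illustrate the evidence-combination step; mass values are made up.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: BBAs cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
theta = A | B                      # the whole frame of discernment
m_sensor1 = {A: 0.6, B: 0.1, theta: 0.3}
m_sensor2 = {A: 0.5, B: 0.2, theta: 0.3}
print(combine(m_sensor1, m_sensor2))
```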

  7. Real-time multi-sensor based vehicle detection using MINACE filters

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj; Nehemiah, Avinash

    2007-04-01

    A system to detect vehicles (cars, trucks, etc.) in electro-optic (EO) and infrared (IR) imagery is presented. We present the use of the minimum noise and correlation energy (MINACE) distortion-invariant filter (DIF) for this problem. The selection of the MINACE filter parameter c is automated using a training and validation set. A new set of correlation-plane post-processing methods that improve detection accuracy and reduce false alarms is presented. The system is tested on real-life imagery of traffic in parking lots and roads obtained using a multi-sensor EO/IR platform.

  8. A Low Power, Parallel Wearable Multi-Sensor System for Human Activity Evaluation.

    PubMed

    Li, Yuecheng; Jia, Wenyan; Yu, Tianjian; Luan, Bo; Mao, Zhi-Hong; Zhang, Hong; Sun, Mingui

    2015-04-01

    In this paper, the design of a low power heterogeneous wearable multi-sensor system, built with Zynq System-on-Chip (SoC), for human activity evaluation is presented. The powerful data processing capability and flexibility of this SoC represent significant improvements over our previous ARM based system designs. The new system captures and compresses multiple color images and sensor data simultaneously. Several strategies are adopted to minimize power consumption. Our wearable system provides a new tool for the evaluation of human activity, including diet, physical activity and lifestyle.

  9. A Low Power, Parallel Wearable Multi-Sensor System for Human Activity Evaluation

    PubMed Central

    Li, Yuecheng; Jia, Wenyan; Yu, Tianjian; Luan, Bo; Mao, Zhi-hong; Zhang, Hong; Sun, Mingui

    2015-01-01

    In this paper, the design of a low power heterogeneous wearable multi-sensor system, built with Zynq System-on-Chip (SoC), for human activity evaluation is presented. The powerful data processing capability and flexibility of this SoC represent significant improvements over our previous ARM based system designs. The new system captures and compresses multiple color images and sensor data simultaneously. Several strategies are adopted to minimize power consumption. Our wearable system provides a new tool for the evaluation of human activity, including diet, physical activity and lifestyle. PMID:26185409

  10. An SOI CMOS-Based Multi-Sensor MEMS Chip for Fluidic Applications.

    PubMed

    Mansoor, Mohtashim; Haneef, Ibraheem; Akhtar, Suhail; Rafiq, Muhammad Aftab; De Luca, Andrea; Ali, Syed Zeeshan; Udrea, Florin

    2016-11-04

    An SOI CMOS multi-sensor MEMS chip, which can simultaneously measure temperature, pressure and flow rate, has been reported. The multi-sensor chip has been designed keeping in view the requirements of researchers interested in experimental fluid dynamics. The chip contains ten thermodiodes (temperature sensors), a piezoresistive-type pressure sensor and nine hot film-based flow rate sensors fabricated within the oxide layer of the SOI wafers. The silicon dioxide layers with embedded sensors are relieved from the substrate as membranes with the help of a single DRIE step after chip fabrication from a commercial CMOS foundry. Very dense sensor packing per unit area of the chip has been enabled by using technologies/processes like SOI, CMOS and DRIE. Independent apparatuses were used for the characterization of each sensor. With a drive current of 10 µA-0.1 µA, the thermodiodes exhibited sensitivities of 1.41 mV/°C-1.79 mV/°C in the range 20-300 °C. The sensitivity of the pressure sensor was 0.0686 mV/(Vexcit kPa) with a non-linearity of 0.25% between 0 and 69 kPa above ambient pressure. Packaged in a micro-channel, the flow rate sensor has a linearized sensitivity of 17.3 mV/(L/min)(-0.1) in the tested range of 0-4.7 L/min. The multi-sensor chip can be used for simultaneous measurement of fluid pressure, temperature and flow rate in fluidic experiments and aerospace/automotive/biomedical/process industries.
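
    As a rough guide to how the quoted sensitivities translate raw outputs into physical quantities, the sketch below converts a thermodiode voltage and a pressure-bridge voltage using figures from the abstract. It is a hedged illustration only: the reference diode voltage at 20 °C, the zero-pressure offset, the pairing of sensitivity with drive current, and the example readings are assumptions standing in for per-chip calibration data.

```python
# Hedged sketch of converting raw outputs from such a multi-sensor chip into
# physical units using the sensitivities quoted in the abstract. Offsets and
# reference values below are assumptions, not calibration data from the paper.
THERMODIODE_SENS_mV_per_C = 1.79      # upper end of the quoted 1.41-1.79 mV/C range
PRESSURE_SENS_mV_per_V_kPa = 0.0686   # mV per volt of excitation per kPa
V_REF_20C_mV = 550.0                  # assumed diode forward voltage at 20 C

def temperature_C(v_diode_mV):
    # Diode forward voltage drops as temperature rises (negative coefficient).
    return 20.0 + (V_REF_20C_mV - v_diode_mV) / THERMODIODE_SENS_mV_per_C

def gauge_pressure_kPa(v_bridge_mV, v_excitation_V, v_zero_mV=0.0):
    return (v_bridge_mV - v_zero_mV) / (PRESSURE_SENS_mV_per_V_kPa * v_excitation_V)

print(temperature_C(540.0))          # ~25.6 C under the assumed reference voltage
print(gauge_pressure_kPa(1.7, 5.0))  # ~5.0 kPa at 5 V excitation
```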

  11. An SOI CMOS-Based Multi-Sensor MEMS Chip for Fluidic Applications †

    PubMed Central

    Mansoor, Mohtashim; Haneef, Ibraheem; Akhtar, Suhail; Rafiq, Muhammad Aftab; De Luca, Andrea; Ali, Syed Zeeshan; Udrea, Florin

    2016-01-01

    An SOI CMOS multi-sensor MEMS chip, which can simultaneously measure temperature, pressure and flow rate, has been reported. The multi-sensor chip has been designed keeping in view the requirements of researchers interested in experimental fluid dynamics. The chip contains ten thermodiodes (temperature sensors), a piezoresistive-type pressure sensor and nine hot film-based flow rate sensors fabricated within the oxide layer of the SOI wafers. The silicon dioxide layers with embedded sensors are relieved from the substrate as membranes with the help of a single DRIE step after chip fabrication from a commercial CMOS foundry. Very dense sensor packing per unit area of the chip has been enabled by using technologies/processes like SOI, CMOS and DRIE. Independent apparatuses were used for the characterization of each sensor. With a drive current of 10 µA–0.1 µA, the thermodiodes exhibited sensitivities of 1.41 mV/°C–1.79 mV/°C in the range 20–300 °C. The sensitivity of the pressure sensor was 0.0686 mV/(Vexcit kPa) with a non-linearity of 0.25% between 0 and 69 kPa above ambient pressure. Packaged in a micro-channel, the flow rate sensor has a linearized sensitivity of 17.3 mV/(L/min)−0.1 in the tested range of 0–4.7 L/min. The multi-sensor chip can be used for simultaneous measurement of fluid pressure, temperature and flow rate in fluidic experiments and aerospace/automotive/biomedical/process industries. PMID:27827904

  12. Fabrication of dense non-circular nanomagnetic device arrays using self-limiting low-energy glow-discharge processing.

    PubMed

    Zheng, Zhen; Chang, Long; Nekrashevich, Ivan; Ruchhoeft, Paul; Khizroev, Sakhrat; Litvinov, Dmitri

    2013-01-01

    We describe a low-energy glow-discharge process using a reactive ion etching system that enables non-circular device patterns, such as squares or hexagons, to be formed from a precursor array of uniform circular openings in polymethyl methacrylate (PMMA) defined by electron beam lithography. This technique is of particular interest for bit-patterned magnetic recording medium fabrication, where close-packed square magnetic bits may improve recording performance. The process and results of generating close-packed square patterns by self-limiting low-energy glow discharge are investigated. Dense magnetic arrays formed by electrochemical deposition of nickel over the self-limited molds are demonstrated.

  13. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-30

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst-reviewed contexts:
    • one or more fixed frequency bands should be chosen for each phase tested for;
    • the frequency band providing the most stable parameter estimates varies from site to site, and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site;
    • the slowness corrections applied depend strongly on the frequency band chosen;
    • the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain.
    For this reason, the automatic template location estimates provided here are frequently far better than those obtained by
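
    The band-selection step described above can be pictured with a short numerical sketch. The code below is a hedged illustration, not the project's software: the band labels, the azimuth and apparent-velocity samples, and the scaling constants used to combine the two stability measures are invented for demonstration.

```python
# Illustrative sketch: for one phase, pick the frequency band whose azimuth and
# apparent-velocity measurements over past events are most stable. All values
# below are synthetic.
import numpy as np

def circular_std_deg(az_deg):
    z = np.exp(1j * np.deg2rad(az_deg))
    R = abs(z.mean())
    return np.rad2deg(np.sqrt(-2.0 * np.log(R)))

def most_stable_band(measurements, az_scale_deg=5.0, vel_scale_kms=0.5):
    """measurements: {band: (azimuths_deg, apparent_velocities_km_s)} from past events."""
    scores = {}
    for band, (az, vel) in measurements.items():
        scores[band] = (circular_std_deg(az) / az_scale_deg
                        + np.std(vel) / vel_scale_kms)
    return min(scores, key=scores.get), scores

bands = {
    "2-4 Hz":  ([152.0, 149.5, 154.2, 151.1], [8.1, 8.4, 7.9, 8.2]),
    "4-8 Hz":  ([150.8, 151.2, 150.5, 151.0], [8.0, 8.1, 8.0, 8.1]),
    "8-16 Hz": ([158.0, 143.0, 162.5, 147.0], [7.2, 9.0, 8.6, 7.7]),
}
print(most_stable_band(bands))
```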

  14. Investigation of Proposed Process Sequence for the Array Automated Assembly Task, Phase 2. [low cost silicon solar array fabrication

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Bunyan, S.; Pepe, A.

    1979-01-01

    The technological readiness of the proposed process sequence was reviewed. Process steps evaluated include: (1) plasma etching to establish a standard surface; (2) forming junctions by diffusion from an N-type polymeric spray-on source; (3) forming a p+ back contact by firing a screen printed aluminum paste; (4) forming screen printed front contacts after cleaning the back aluminum and removing the diffusion oxide; (5) cleaning the junction by a laser scribe operation; (6) forming an antireflection coating by baking a polymeric spray-on film; (7) ultrasonically tin padding the cells; and (8) assembling cell strings into solar circuits using ethylene vinyl acetate as an encapsulant and laminating medium.

  15. Biologically inspired large scale chemical sensor arrays and embedded data processing

    NASA Astrophysics Data System (ADS)

    Marco, S.; Gutiérrez-Gálvez, A.; Lansner, A.; Martinez, D.; Rospars, J. P.; Beccherelli, R.; Perera, A.; Pearce, T.; Vershure, P.; Persaud, K.

    2013-05-01

    Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy, an efficient combinatorial coding along with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. EU Funded Project NEUROCHEM (Bio-ICT-FET- 216916) has developed novel computing paradigms and biologically motivated artefacts for chemical sensing taking inspiration from the biological olfactory pathway. To demonstrate this approach, a biomimetic demonstrator has been built featuring a large scale sensor array (65K elements) in conducting polymer technology mimicking the olfactory receptor neuron layer, and abstracted biomimetic algorithms have been implemented in an embedded system that interfaces the chemical sensors. The embedded system integrates computational models of the main anatomic building blocks in the olfactory pathway: the olfactory bulb, and olfactory cortex in vertebrates (alternatively, antennal lobe and mushroom bodies in the insect). For implementation in the embedded processor an abstraction phase has been carried out in which their processing capabilities are captured by algorithmic solutions. Finally, the algorithmic models are tested with an odour robot with navigation capabilities in mixed chemical plumes

  16. A fast processing route of aspheric polydimethylsiloxane lenses array (APLA) and optical characterization for smartphone microscopy

    NASA Astrophysics Data System (ADS)

    Fuh, Yiin-Kuen; Lai, Zheng-Hong

    2017-02-01

    A fast processing route for an aspheric polydimethylsiloxane (PDMS) lens array (APLA) is proposed via the combined effect of inverted gravitational and heat-assisted forces. The fabrication time can be dramatically reduced to 30 s, which compares favorably with the traditional duration of 2 hours of repeated addition-curing cycles. In this paper, a low-cost flexible lens is fabricated by repeatedly depositing, inverting, and curing a hanging transparent PDMS elastomer droplet on a previously deposited curved structure. Complex structures with aspheric curve features and various focal lengths can be successfully produced, and the four fabricated types of APLA have focal lengths of 7.03 mm, 6.00 mm, 5.33 mm, and 4.43 mm, respectively. Empirically, a direct relationship between the PDMS volume and the focal lengths of the lenses can be deduced. Using the fabricated APLA, an ordinary commercial smartphone camera can easily be transformed into a low-cost, portable digital microscope (50× magnification), so that point-of-care diagnostics can be implemented pervasively.

  17. Preconditioning of real-time optical Wiener filters for array processing

    NASA Astrophysics Data System (ADS)

    Ghosh, Anjan; Paparao, Palacharla

    1992-07-01

    In adaptive array processors, a performance measure, such as mean square error or signal-to-noise ratio, converges to the optimum Wiener solution starting from an initial setting. The choice of adaptive algorithms to solve the Wiener filtering problem is mainly guided by the desired processing time. In an optical realization for direct calculation of the optimum weights, the covariance matrix and vector for a Wiener filter are computed at high speed on acousto-optic processors. The resulting linear system of equations can be solved on an iterative optical processor. The matrix and vector data should be recomputed in every iteration for better tracking and adaptation. This introduces variations in their values due to the time-varying jamming and interference noise and the optical errors and noise. A time-variant steepest-descent algorithm is a simple method that converges to the common solution. In this paper, we describe a real-time preconditioning technique for such nonstationary iterative methods. Preconditioning progressively lowers the condition number of each matrix in the sequence, thereby improving the convergence speed and accuracy of the solution. This preconditioning process involves matrix-matrix multiplications that can be performed at high speed on parallel optical processors. Results of simulations illustrate the superlinear convergence obtained from preconditioning.
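
    The benefit of preconditioning the Wiener normal equations can be reproduced numerically. The sketch below is a hedged, purely digital illustration and says nothing about the optical implementation: the synthetic data matrix, the diagonal Jacobi preconditioner, the step-size rule, and the fixed iteration count are all assumptions chosen to make the conditioning effect visible.

```python
# Minimal numerical sketch: solve the Wiener normal equations R w = r by steepest
# descent, with and without a diagonal (Jacobi) preconditioner, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 4)) * np.array([5.0, 1.0, 0.5, 0.1])  # ill-conditioned data
d = A @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.01 * rng.normal(size=200)
R = A.T @ A / len(d)          # sample covariance matrix
r = A.T @ d / len(d)          # cross-correlation vector
w_opt = np.linalg.solve(R, r)

def steepest_descent(R, r, M_inv, iters=200):
    w = np.zeros_like(r)
    B = M_inv @ R                                       # preconditioned system matrix
    mu = 1.0 / np.max(np.abs(np.linalg.eigvals(B)))     # conservative step size
    for _ in range(iters):
        w = w + mu * M_inv @ (r - R @ w)
    return w

identity = np.eye(len(r))
jacobi = np.diag(1.0 / np.diag(R))                      # simple preconditioner
for name, M_inv in [("no preconditioning", identity), ("Jacobi", jacobi)]:
    w = steepest_descent(R, r, M_inv)
    print(name, "error:", np.linalg.norm(w - w_opt))
```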

  18. The nonuniformity measurement and image processing algorithm evaluation for uncooled microbolometer infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Qian, Yunsheng; Chang, BenKang; Zhang, Junju; Xing, Suxia; Yu, Shuizhong; Yang, Ji

    2005-01-01

    Great achievements have been made in the manufacturing of uncooled microbolometer infrared focal plane arrays (UFPA). With this technique, infrared systems can be made small, lightweight, low-priced, and portable, which greatly promotes the use of infrared systems in many fields. The main disadvantage of UFPA is non-uniformity. Although the non-uniformity of UFPA has been greatly improved, it still restricts the performance of uncooled infrared systems. In this paper, attention is focused on the technology and methods for measuring the non-uniformity of UFPA. A system that can measure the non-uniformity of UFPA and evaluate image processing algorithms is developed. The measurement system consists of a blackbody, infrared optics, control units, a processing circuit, a high-speed A/D converter, a computer, and software. To obtain the output signals of the UFPA, the drive circuit and the control circuit of the thermoelectric stabilizer (TEC) of the UFPA are developed. In the drive circuit, a CPLD device is employed to ensure a small circuit size. In the TEC circuit, a highly integrated, cost-effective, high-efficiency, switch-mode driver is used to ensure a temperature stability of 0.01°C. The system is used to measure the non-uniformity of microbolometer detectors produced by the ULIS company. It can also evaluate image processing algorithms. The results are given and analyzed.
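
    The non-uniformity figure such a system reports is essentially the spread of per-pixel responses to a known blackbody temperature step. The sketch below is a hedged illustration on synthetic frames, not the measurement system's software: the array size, the gain and offset spreads, and the two blackbody levels are invented, and a simple two-point gain correction is added only to show how the figure is used.

```python
# Sketch of a response non-uniformity figure computed from frames recorded at two
# blackbody temperatures. Synthetic data, not real UFPA output.
import numpy as np

rng = np.random.default_rng(1)
shape = (240, 320)
gain = 1.0 + 0.04 * rng.normal(size=shape)       # per-pixel gain spread (synthetic)
offset = 10.0 * rng.normal(size=shape)           # per-pixel offset spread (synthetic)
frame_T1 = gain * 1000.0 + offset                # blackbody at temperature T1
frame_T2 = gain * 1400.0 + offset                # blackbody at temperature T2 > T1

response = frame_T2 - frame_T1                   # per-pixel response to the T step
non_uniformity = response.std() / response.mean()
print(f"response non-uniformity: {100 * non_uniformity:.2f} %")

# Two-point correction: map every pixel onto the mean response.
corr_gain = response.mean() / response
corrected = corr_gain * (frame_T2 - frame_T1)
print(f"after two-point gain correction: {100 * corrected.std() / corrected.mean():.4f} %")
```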

  19. Characterization of electrothermal actuators and arrays fabricated in a four-level, planarized surface-micromachined polycrystalline silicon process

    SciTech Connect

    Comtois, J.H.; Michalicek, M.A.; Barron, C.C.

    1997-06-01

    This paper presents the results of tests performed on a variety of electrothermal microactuators and arrays of these actuators recently fabricated in the four-level planarized polycrystalline silicon (polysilicon) SUMMiT process at the U.S. Department of Energy`s Sandia National Laboratories. These results are intended to aid designers of thermally actuated mechanisms, and will apply to similar actuators made in other polysilicon MEMS processes. The measurements include force and deflection versus input power, maximum operating frequency, effects of long term operation, and ideal actuator and array geometries for different design criteria. A typical application in a stepper motor is shown to illustrate the utility of these actuators and arrays.

  20. Near real-time, on-the-move multisensor integration and computing framework

    NASA Astrophysics Data System (ADS)

    Burnette, Chris; Schneider, Matt; Agarwal, Sanjeev; Deterline, Diane; Geyer, Chris; Phan, Chung D.; Lydic, Richard M.; Green, Kevin; Swett, Bruce

    2015-05-01

    Implanted mines and improvised devices are a persistent threat to Warfighters. Current Army countermine missions for route clearance need on-the-move standoff detection to improve the rate of advance. Vehicle-based forward looking sensors such as electro-optical and infrared (EO/IR) devices can be used to identify potential threats in near real-time (NRT) at safe standoff distance to support route clearance missions. The MOVERS (Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System) is a vehicle-based multi-sensor integration and exploitation system that ingests and processes video and imagery data captured from forward-looking EO/IR and thermal sensors, and also generates target/feature alerts, using the Video Processing and Exploitation Framework (VPEF) "plug and play" video processing toolset. The MOVERS Framework provides an extensible, flexible, and scalable computing and multi-sensor integration GOTS framework that enables the capability to add more vehicles, sensors, processors or displays, and a service architecture that provides low-latency raw video and metadata streams as well as a command and control interface. Functionality in the framework is exposed through the MOVERS SDK which decouples the implementation of the service and client from the specific communication protocols.

  1. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, the high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the methods of MUSIC, CS, minimum-variance distortionless response (MVDR) Beamforming and conventional Beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use the Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space but with their waveforms completely overlapping in the time domain. We also test the effects of the sliding window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding window scheme. Based on our tests, we find that CS, which is developed from the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and has better performance at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR Beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can markedly improve the spatial resolution, they still produce some artifacts along with the sliding of the time window. Furthermore, we propose a new method, which combines both the time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake
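
    For readers unfamiliar with the frequency-domain estimators being compared, the toy example below computes a narrowband MUSIC pseudospectrum for a uniform linear array with two synthetic plane-wave sources. It is a hedged, generic illustration of MUSIC itself, not the authors' back-projection code, and every parameter (array size, noise level, source angles) is an arbitrary choice.

```python
# Toy narrowband MUSIC direction-of-arrival example on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
M, d, wavelength = 12, 0.5, 1.0          # sensors, spacing (in wavelengths), wavelength
true_doas = np.deg2rad([-20.0, 25.0])    # two sources
snapshots = 200

def steering(theta):
    m = np.arange(M)
    return np.exp(-2j * np.pi * d / wavelength * m * np.sin(theta))

S = rng.normal(size=(len(true_doas), snapshots)) + 1j * rng.normal(size=(len(true_doas), snapshots))
A = np.column_stack([steering(t) for t in true_doas])
X = A @ S + 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))

R = X @ X.conj().T / snapshots
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-len(true_doas)]        # noise subspace (smallest eigenvalues)

scan = np.deg2rad(np.linspace(-90, 90, 721))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in scan])
interior = P[1:-1]
cand = np.where((interior > P[:-2]) & (interior > P[2:]))[0] + 1   # local maxima
top2 = cand[np.argsort(P[cand])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[top2])))
```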

  2. Planarized process for resonant leaky-wave coupled phase-locked arrays of mid-IR quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Chang, C.-C.; Kirch, J. D.; Boyle, C.; Sigler, C.; Mawst, L. J.; Botez, D.; Zutter, B.; Buelow, P.; Schulte, K.; Kuech, T.; Earles, T.

    2015-03-01

    On-chip resonant leaky-wave coupling of quantum cascade lasers (QCLs) emitting at 8.36 μm has been realized by selective regrowth of interelement layers in curved trenches, defined by dry and wet etching. The fabricated structure provides large index steps (Δn = 0.10) between antiguided-array element and interelement regions. In-phase-mode operation to 5.5 W front-facet emitted power in a near-diffraction-limited far-field beam pattern, with 4.5 W in the main lobe, is demonstrated. A refined fabrication process has been developed to produce phase-locked antiguided arrays of QCLs with planar geometry. The main fabrication steps in this process include non-selective regrowth of Fe:InP in interelement trenches, defined by inductively coupled plasma (ICP) etching, a chemical polishing (CP) step to planarize the surface, non-selective regrowth of interelement layers, ICP selective etching of interelement layers, and non-selective regrowth of an InP cladding layer followed by another CP step to form the element regions. This new process results in planar InGaAs/InP interelement regions, which allows for significantly improved control over the array geometry and the dimensions of element and interelement regions. Such a planar process is highly desirable for realizing shorter-wavelength (4.6 μm) arrays, where fabrication tolerances for single-mode operation are tighter compared to 8 μm-emitting devices.

  3. Enhanced research in ground-penetrating radar and multisensor fusion with application to the detection and visualization of buried waste. Final report

    SciTech Connect

    Devney, A.J.; DiMarzio, C.; Kokar, M.; Miller, E.L.; Rappaport, C.M.; Weedon, W.H.

    1996-05-14

    Recognizing the difficulty and importance of the landfill remediation problems faced by DOE, and the fact that no one sensor alone can provide complete environmental site characterization, a multidisciplinary team approach was chosen for this project. The authors have developed a multisensor fusion approach that is suitable for the wide variety of sensors available to DOE and that allows separate detection algorithms to be developed and custom-tailored to each sensor. This approach is currently being applied to the Geonics EM-61 and Coleman step-frequency radar data. High-resolution array processing techniques were developed for detecting and localizing buried waste containers. A soil characterization laboratory facility was developed using an HP-8510 network analyzer and a near-field coaxial probe. Both internal and external calibration procedures were developed for de-embedding the frequency-dependent soil electrical parameters from the measurements. Dispersive soil propagation modeling algorithms were also developed for simulating wave propagation in dispersive soil media. A study was performed on the application of infrared sensors to the landfill remediation problem, particularly for providing information on volatile organic compounds (VOCs) in the atmosphere. A dust-emission lidar system is proposed for landfill remediation monitoring. Design specifications are outlined for a system which could be used to monitor dust emissions in a landfill remediation effort. The detailed results of the investigations are contained herein.

  4. Array processing for RFID tag localization exploiting multi-frequency signals

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Li, Xin; Amin, Moeness G.

    2009-05-01

    RFID is an increasingly valuable business and technology tool for electronically identifying, locating, and tracking products, assets, and personnel. As a result, precise positioning and tracking of RFID tags and readers have received considerable attention from both academic and industrial communities. Finding the position of RFID tags is considered an important task in various real-time locating systems (RTLS). As such, numerous RFID localization products have been developed for various applications. The majority of RFID positioning systems is based on the fusion of pieces of relevant information, such as the range and the direction-of-arrival (DOA). For example, trilateration can determine the tag position by using the range information of the tag estimated from three or more spatially separated reader antennas. Triangulation is another method to locate RFID tags that uses the DOA information estimated at multiple spatially separated locations. The RFID tag positions can also be determined through hybrid techniques that combine the range and DOA information. The focus of this paper is to study the design and performance of the localization of passive RFID tags using array processing techniques in a multipath environment, exploiting multi-frequency CW signals. The latter are used to decorrelate the coherent multipath signals for effective DOA estimation and for the purpose of accurate range estimation. Accordingly, the spatial and frequency dimensionalities are fully utilized for robust and accurate positioning of RFID tags.
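
    One way to see how multi-frequency CW signals support range estimation is that the backscatter phase varies linearly with carrier frequency at a slope set by the round-trip delay. The sketch below is a hedged, single-path illustration of that idea only; the tone frequencies, the phase-noise level, and the simple slope fit are assumptions, and the paper's actual estimator and its multipath handling are more involved.

```python
# Single-path multi-frequency ranging sketch on synthetic phase measurements.
import numpy as np

c = 3e8
freqs = 902e6 + 2e6 * np.arange(8)      # eight CW tones (assumed UHF carriers)
true_range = 6.2                        # metres, one way (made up)

rng = np.random.default_rng(3)
raw_phase = -2 * np.pi * freqs * (2 * true_range) / c + 0.05 * rng.normal(size=freqs.size)
wrapped = np.angle(np.exp(1j * raw_phase))      # what a receiver actually measures

# Phase varies linearly with frequency: dphi/df = -4*pi*R/c.
slope = np.polyfit(freqs, np.unwrap(wrapped), 1)[0]
print(f"estimated range: {-slope * c / (4 * np.pi):.2f} m")
```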

  5. Urban structure analysis of mega city Mexico City using multisensoral remote sensing data

    NASA Astrophysics Data System (ADS)

    Taubenböck, H.; Esch, T.; Wurm, M.; Thiel, M.; Ullmann, T.; Roth, A.; Schmidt, M.; Mehl, H.; Dech, S.

    2008-10-01

    The mega city Mexico City is ranked the third largest urban agglomeration around the globe to date. Its large extent, as well as dynamic urban transformation and sprawl processes, leads to a lack of up-to-date and area-wide data and information with which to measure, monitor, and understand the urban situation. This paper focuses on the capabilities of multisensoral remotely sensed data to provide a broad range of products derived from one scientific field - remote sensing - to support urban management and planning. Optical data sets from the Landsat and Quickbird sensors as well as radar data from the Shuttle Radar Topography Mission (SRTM) and the TerraSAR-X sensor are utilised. Using the multi-sensoral data sets, the analyses are scale-dependent. On the one hand, change detection at the city level utilising the derived urban footprints makes it possible to monitor and assess spatiotemporal urban transformation, the areal dimension of urban sprawl, its direction, and the built-up density distribution over time. On the other hand, structural characteristics of an urban landscape - the alignment and types of buildings, streets and open spaces - provide insight into the very detailed physical pattern of urban morphology at a higher scale. The results show high accuracies for the derived multi-scale products. The multi-scale analysis allows urban processes to be quantified, leading to an assessment and interpretation of urban trends.

  6. Sub-band processing for grating lobe disambiguation in sparse arrays

    NASA Astrophysics Data System (ADS)

    Hersey, Ryan K.; Culpepper, Edwin

    2014-06-01

    Combined synthetic aperture radar (SAR) and ground moving target indication (GMTI) radar modes simultaneously generate SAR and GMTI products from the same radar data. This hybrid mode provides the benefit of combined imaging and moving target displays as well as improved target recognition. However, the differing system, antenna, and waveform requirements between SAR and GMTI modes make implementing the hybrid mode challenging. The Air Force Research Laboratory (AFRL) Gotcha radar has collected wide-bandwidth, multi-channel data that can be used for both SAR and GMTI applications. The spatial channels on the Gotcha array are sparsely separated, which causes spatial grating lobes during the digital beamforming process. Grating lobes have little impact on SAR, which typically uses a single spatial channel. However, grating lobes have a large impact on GMTI, where spatial channels are used to mitigate clutter and estimate the target angle of arrival (AOA). The AOA ambiguity has a significant impact in the Gotcha data, where detections from the sidelobes and skirts of the mainlobe wrap back into the main scene causing a significant number of false alarms. This paper presents a sub-banding method to disambiguate grating lobes in the GMTI processing. This method divides the wideband SAR data into multiple frequency sub-bands. Since each sub-band has a different center frequency, the grating lobes for each sub-band appear at different angles. The method uses this variation to disambiguate target returns and places them at the correct angle of arrival (AOA). Results are presented using AFRL Gotcha radar data.
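
    The sub-band idea can be demonstrated with a toy narrowband simulation: for a sparse array, grating-lobe angles scale with wavelength, so angle spectra formed at two sub-band centre frequencies only agree at the true direction. The sketch below is a hedged illustration of that principle, not Gotcha processing; the element positions, frequencies, and the simple minimum combination are arbitrary choices.

```python
# Toy sub-band grating-lobe disambiguation for a sparse linear array (synthetic data).
import numpy as np

c = 3e8
positions = np.array([0.0, 1.0, 2.0, 3.0])        # sparse array, 1 m element spacing
true_angle = np.deg2rad(12.0)
scan = np.deg2rad(np.linspace(-60, 60, 1201))
rng = np.random.default_rng(4)

def angle_spectrum(fc):
    k = 2 * np.pi * fc / c
    x = np.exp(1j * k * positions * np.sin(true_angle))          # one snapshot
    x = x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
    steer = np.exp(1j * k * np.outer(np.sin(scan), positions))   # (angles, elements)
    return np.abs(steer.conj() @ x) ** 2

P_low, P_high = angle_spectrum(1.0e9), angle_spectrum(1.2e9)     # two sub-band centres
P_combined = np.minimum(P_low / P_low.max(), P_high / P_high.max())
print("combined-spectrum peak at %.1f deg" % np.rad2deg(scan[np.argmax(P_combined)]))
```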

  7. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    PubMed Central

    2008-01-01

    Fabrication of a large area of periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for realization of 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays over large areas on conventional positive and negative photoresist, and with a high aspect ratio. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of hole of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows a good agreement with the experimental results.

  8. Approaches to the Processing of Data from Large Aperture Acoustic Vertical Line Arrays

    DTIC Science & Technology

    1990-04-01

    [List-of-figures fragments only; no abstract was recovered. Figure titles include: GSM eigenrays across the very large vertical line array; conventional beamformer output; GSM eigenrays across the large vertical line array; GSM eigenrays at 162 km and at the sound axis; ATLAS transmission loss versus range at 20 m depth.]

  9. The Digital Signal Processing Platform for the Low Frequency Aperture Array: Preliminary Results on the Data Acquisition Unit

    NASA Astrophysics Data System (ADS)

    Naldi, Giovanni; Mattana, Andrea; Pastore, Sandro; Alderighi, Monica; Zarb Adami, Kristian; Schillirò, Francesco; Aminaei, Amin; Baker, Jeremy; Belli, Carolina; Comoretto, Gianni; Chiarucci, Simone; Chiello, Riccardo; D’Angelo, Sergio; Dalle Mura, Gabriele; De Marco, Andrea; Halsall, Rob; Magro, Alessio; Monari, Jader; Roberts, Matt; Perini, Federico; Poloni, Marco; Pupillo, Giuseppe; Rusticelli, Simone; Schiaffino, Marco; Zaccaro, Emanuele

    A signal processing hardware platform has been developed for the Low Frequency Aperture Array component of the Square Kilometre Array (SKA). The processing board, called an Analog Digital Unit (ADU), is able to acquire and digitize broadband (up to 500MHz bandwidth) radio-frequency streams from 16 dual polarized antennas, channel the data streams and then combine them flexibly as part of a larger beamforming system. It is envisaged that there will be more than 8000 of these signal processing platforms in the first phase of the SKA, so particular attention has been devoted to ensure the design is low-cost and low-power. This paper describes the main features of the data acquisition unit of such a platform and presents preliminary results characterizing its performance.

  10. Developing Smart Seismic Arrays: A Simulation Environment, Observational Database, and Advanced Signal Processing

    SciTech Connect

    Harben, P E; Harris, D; Myers, S; Larsen, S; Wagoner, J; Trebes, J; Nelson, K

    2003-09-15

    Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization and in full 3D finite difference modeling as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic

  11. Mechanically flexible wireless multisensor platform for human physical activity and vitals monitoring.

    PubMed

    Chuo, Y; Marzencki, M; Hung, B; Jaggernauth, C; Tavakolian, K; Lin, P; Kaminska, B

    2010-10-01

    Practical usability of the majority of current wearable body sensor systems for multiple parameter physiological signal acquisition is limited by the multiple physical connections between sensors and the data-acquisition modules. In order to improve the user comfort and enable the use of these types of systems on active mobile subjects, we propose a wireless body sensor system that incorporates multiple sensors on a single node. This multisensor node includes signal acquisition, processing, and wireless data transmission fitted on multiple layers of a thin flexible substrate with a very small footprint. Considerations for design include size, form factor, reliable body attachment, good signal coupling, low power consumption, and user convenience. The prototype device measures 55 × 15 mm and is 3 mm thick. The unit is attached to the patient's chest, and is capable of performing simultaneous measurements of parameters, such as body motion, activity intensity, tilt, respiration, cardiac vibration, cardiac potential (ECG), heart rate, and body surface temperature. In this paper, we discuss the architecture of this system, including the multisensor hardware, the firmware, a mobile-phone receiver unit, and assembly of the first proof-of-concept prototype. Preliminary performance results on key elements of the system, such as power consumption, wireless range, algorithm efficiency, ECG signal quality for heart-rate calculations, as well as synchronous ECG and body activity signals are also presented.
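
    Of the quantities listed, heart rate is the one most easily illustrated in code: it follows from the R-R intervals of the ECG channel. The sketch below is a hedged toy, not the device's on-board firmware; the sampling rate, the synthetic ECG trace, and the threshold-plus-refractory peak detector are all assumptions.

```python
# Toy R-peak detection and heart-rate calculation on a synthetic ECG-like trace.
import numpy as np

fs = 250                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rr = 0.8                                   # true R-R interval -> 75 bpm
ecg = 0.05 * np.random.default_rng(5).normal(size=t.size)
for beat in np.arange(0.5, 10, rr):        # add narrow R-wave-like spikes
    ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))

def heart_rate_bpm(x, fs, threshold=0.5, refractory_s=0.3):
    peaks, last = [], -np.inf
    for i in range(1, len(x) - 1):
        if x[i] > threshold and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if i - last > refractory_s * fs:
                peaks.append(i)
                last = i
    rr_s = np.diff(peaks) / fs
    return 60.0 / rr_s.mean()

print(f"{heart_rate_bpm(ecg, fs):.1f} bpm")
```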

  12. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph

    PubMed Central

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-01-01

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. It converts the fusion problem into one of connecting factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method. PMID:28335570
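
    The appeal of the factor-graph formulation is that each measurement, whenever it arrives, simply adds a factor attached to the relevant state node. The linear-Gaussian toy below is a hedged 1-D sketch of that bookkeeping, not the authors' navigation algorithm; the two states, the four factors, and their noise levels are invented, and the graph is solved here as a small weighted least-squares problem.

```python
# Minimal linear-Gaussian factor-graph sketch: each factor contributes one
# weighted row to a least-squares problem over two states x0 and x1.
import numpy as np

rows, rhs, weights = [], [], []
def add_factor(jacobian_row, value, sigma):
    rows.append(jacobian_row)
    rhs.append(value)
    weights.append(1.0 / sigma)

add_factor([1.0, 0.0], 0.0, 0.5)    # prior: x0 ~ 0 m              (sigma 0.5 m)
add_factor([-1.0, 1.0], 2.0, 0.1)   # odometry: x1 - x0 ~ 2 m      (sigma 0.1 m)
add_factor([0.0, 1.0], 2.3, 0.4)    # GPS-like fix on x1           (sigma 0.4 m)
add_factor([1.0, 0.0], 0.2, 0.3)    # second sensor observes x0    (sigma 0.3 m)

A = np.asarray(rows) * np.asarray(weights)[:, None]   # whiten rows by 1/sigma
b = np.asarray(rhs) * np.asarray(weights)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fused estimate [x0, x1]:", np.round(x_hat, 3))
```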

  13. RheoStim: Development of an Adaptive Multi-Sensor to Prevent Venous Stasis

    PubMed Central

    Weyer, Sören; Weishaupt, Fabio; Kleeberg, Christian; Leonhardt, Steffen; Teichmann, Daniel

    2016-01-01

    Chronic venous insufficiency of the lower limbs is often underestimated and, in the absence of therapy, results in increasingly severe complications, including therapy-resistant tissue defects. Therefore, early diagnosis and adequate therapy is of particular importance. External counter pulsation (ECP) therapy is a method used to assist the venous system. The main principle of ECP is to squeeze the inner leg vessels by muscle contractions, which are evoked by functional electrical stimulation. A new adaptive trigger method is proposed, which improves and supplements the current therapeutic options by means of pulse synchronous electro-stimulation of the muscle pump. For this purpose, blood flow is determined by multi-sensor plethysmography. The hardware design and signal processing of this novel multi-sensor plethysmography device are introduced. The merged signal is used to determine the phase of the cardiac cycle, to ensure stimulation of the muscle pump during the filling phase of the heart. The pulse detection of the system is validated against a gold standard and provides a sensitivity of 98% and a false-negative rate of 2% after physical exertion. Furthermore, flow enhancement of the system has been validated by duplex ultrasonography. The results show a highly increased blood flow in the popliteal vein at the knee. PMID:27023544

  14. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph.

    PubMed

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-03-21

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. It converts the fusion problem into one of connecting factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method.

  15. Using GPU-generated virtual video stream for multi-sensor system

    NASA Astrophysics Data System (ADS)

    Liao, Dezhi; Hennessey, Brian

    2006-05-01

    Security and intelligence services are increasingly turning toward multi-sensor video surveillance, which requires the human ability to successfully fuse and comprehend the information provided by the videos. A training system that uses the same front end as the real multi-sensor system can significantly increase this ability. The training system always needs scenarios replicating stressful situations, which are videotaped in advance and played back later. This not only limits the training scenarios but also brings a high cost. This paper introduces a new framework, a virtual video capture device, for such a training system. Using the latest graphics processing unit (GPU) technology, multiple video streams composed of computer graphics (CG) are generated on one high-end PC and published to a video stream server. Thus users can be trained using both real video streams and virtual video streams on one system. It also enables the training system to use real video streams incorporating augmented reality to improve the situation awareness of the user.

  16. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety.

    PubMed

    Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang

    2016-06-09

    Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking are controlled by the proposed vehicle reversing control strategy prior to any potential collision, making reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety.
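
    The fusion step named above (an adaptive Kalman filter over the ultrasonic and binocular-camera distance measurements) can be pictured with a fixed-gain toy. The sketch below is a hedged 1-D illustration, not the paper's adaptive filter: the process and measurement noise levels, the reversing speed, and the random-walk distance model are all assumptions.

```python
# Toy 1-D Kalman filter fusing two range sensors while the vehicle reverses.
import numpy as np

dt, q = 0.05, 0.2                 # time step (s), process-noise intensity (assumed)
r_ultra, r_cam = 0.05**2, 0.15**2 # measurement variances (m^2), assumed
x, P = 5.0, 1.0                   # initial distance estimate and variance

rng = np.random.default_rng(6)
true_d = 5.0
for k in range(60):
    true_d -= 0.3 * dt                      # vehicle reversing at 0.3 m/s
    P += q * dt                             # predict (random-walk distance model)
    # Sequential measurement updates, one per sensor.
    for z, r in ((true_d + rng.normal(0, 0.05), r_ultra),
                 (true_d + rng.normal(0, 0.15), r_cam)):
        K = P / (P + r)
        x += K * (z - x)
        P *= (1 - K)
print(f"true {true_d:.2f} m, fused estimate {x:.2f} m")
```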

  17. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety

    PubMed Central

    Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang

    2016-01-01

    Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking are controlled by the proposed vehicle reversing control strategy prior to any potential collision, making reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. PMID:27294931

  18. Multisensor monitoring of sea surface state of the coastal zone

    NASA Astrophysics Data System (ADS)

    Lavrova, Olga; Mityagina, Marina; Bocharova, Tatina

    Results of multi-year monitoring of the state of the coastal zone based on a multisensor approach are presented. The monitoring is aimed at solving the following tasks: operational mapping of parameters characterizing the state and pollution (coastal, ship and biogenic) of water; analysis of the meteorological state and its effect on the drift and spread of pollutants; study of coastal circulation patterns and their impact on the drift and spread of pollutants; and deriving typical pollution distribution patterns in the coastal zone. Processing and analysis are performed using data in the visual, infrared and microwave ranges from the ERS-2 SAR, Envisat ASAR/MERIS, Terra and Aqua MODIS and NOAA AVHRR instruments. These are complemented with ground data from meteorological stations on the shore and results of satellite data processing from previous periods. The main regions of interest are the Russian sectors of the Black and Azov Seas, the southeastern part of the Baltic Sea, and the northern and central regions of the Caspian Sea. The adjacent coasts are extremely populated and have well-developed industry, agriculture and rapidly growing tourist sectors. The necessity of constant monitoring of the sea state there is obvious. The monitoring activities allow us to accumulate extensive material for the study of hydrodynamic processes in the regions, in particular water circulation. Detailing the occurrence, evolution and drift of small- and meso-scale vortex structures is crucial for understanding the mechanisms determining mixing and circulation processes in the coastal zone. These mechanisms play an important role in the ecological, hydrodynamic and meteorological status of a coastal zone. Special attention is paid to the sea surface state in the Kerch Strait, where a tanker catastrophe took place on November 11, 2007, causing a spillage of over 1.5 thousand tons of heavy oil. The Kerch Strait is characterized by a complex current system with current directions changing to their opposites depending on

  19. Application of bistable optical logic gate arrays to all-optical digital parallel processing

    NASA Astrophysics Data System (ADS)

    Walker, A. C.

    1986-05-01

    Arrays of bistable optical gates can form the basis of an all-optical digital parallel processor. Two classes of signal input geometry exist - on- and off-axis - and lead to distinctly different device characteristics. The optical implementation of multisignal fan-in to an array of intrinsically bistable optical gates using the more efficient off-axis option is discussed together with the construction of programmable read/write memories from optically bistable devices. Finally the design of a demonstration all-optical parallel processor incorporating these concepts is presented.

  20. The dynamic research and position estimation of the towed array during the U-turn process

    NASA Astrophysics Data System (ADS)

    Yang, J. X.; Shuai, C. G.; He, L.; Zhang, S. K.; Zhou, S. T.

    2016-09-01

    A dynamic model for estimating the position of a ship-towed array during a U-turn manoeuvre is introduced and developed. Based on this model, the influences of parameters such as time step and segment length on the numerical simulation are analysed. The results indicate that decreasing the time step has little effect on the simulation accuracy but will increase the computational time. The selection of segment length has a great influence on the estimation of the towed array position during the U-turn manoeuvre. Reducing the segment length somewhat increases the computational complexity but significantly improves simulation precision.

  1. Optimization of Cyclostationary Signal Processing Algorithms Using Multiple Field Programmable Gate Arrays on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2009-09-01

    ...of the architecture is seen in Figure 36. Two major components make up the FPGA. The input/output blocks (IOBs) provide the interface between ... programming language to make a complete program. To take advantage of FPGAs, precise circuit design should be utilized by employing Verilog or VHDL. ... runtime to approach real-time processing. The focus of the implementation is to utilize dual field programmable gate arrays (FPGAs) within a single...

  2. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large UAVs are typically state owned, whereas small UAVs (SUAVs) may be in the form of remote-controlled aircraft that are widely available. The potential threat of these SUAVs to both the military and the civilian populace has led to research efforts to counter these assets via track, ID, and attack. Difficulties arise from the small size and low radar cross section when attempting to detect and track these targets with a single sensor such as radar or video cameras. In addition, clutter objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and future inclusion of a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.

  3. Recognition Time for Letters and Nonletters: Effects of Serial Position, Array Size, and Processing Order.

    ERIC Educational Resources Information Center

    Mason, Mildred

    1982-01-01

    Three experiments report additional evidence that it is a mistake to account for all interletter effects solely in terms of sensory variables. These experiments attest to the importance of structural variables such as retina location, array size, and ordinal position. (Author/PN)

  4. High-resolution focal plane array IR detection modules and digital signal processing technologies at AIM

    NASA Astrophysics Data System (ADS)

    Cabanski, Wolfgang A.; Breiter, Rainer; Koch, R.; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann; Eberhardt, Kurt; Oelmaier, Reinhard; Schneider, Harald; Walther, Martin

    2000-07-01

    Full video format focal plane array (FPA) modules with up to 640 × 512 pixels have been developed for high-resolution infrared (IR) imaging applications, in either mercury cadmium telluride (MCT) mid-wave IR (MWIR) technology or in platinum silicide (PtSi) and quantum well infrared photodetector (QWIP) technology as low-cost alternatives to MCT for high-performance IR imaging in the MWIR or long-wave spectral band (LWIR). For the QWIPs, a new photovoltaic technology was introduced for improved NETD performance and higher dynamic range. MCT units provide fast frame rates > 100 Hz together with state-of-the-art thermal resolution NETD < 20 mK for short snapshot integration times of typically 2 ms. PtSi and QWIP modules are usually operated in a rolling frame integration mode with frame rates of 30 - 60 Hz and provide thermal resolutions of NETD < 80 mK for PtSi and NETD < 20 mK for QWIP, respectively. Due to the lower quantum efficiency compared to MCT, however, the integration time is typically chosen to be as long as 10 - 20 ms. The heat load of the integrated detector cooler assemblies (IDCAs) could be reduced to a level so low that a 1 W split linear cooler provides sufficient cooling power to operate the modules -- including the QWIP with its 60 K operating temperature -- at ambient temperatures up to 65 degrees Celsius. Miniaturized command/control electronics (CCE) available for all modules provide a standardized digital interface with 14-bit analogue-to-digital conversion for state-of-the-art correctability, access to highly dynamic scenes without any loss of information, and simplified exchangeability of the units. New modular image processing hardware platforms and software for image visualization and nonuniformity correction, including scene-based self-learning algorithms, had to be developed to accommodate the high data rates of up to 18 Mpixels/s with 14-bit deep data, allowing nonlinear effects to be taken into account and the full NETD to be accessed by accurate reduction of residual

  5. Dynamic templating: a large area processing route for the assembly of periodic arrays of sub-micrometer and nanoscale structures

    NASA Astrophysics Data System (ADS)

    Farzinpour, Pouyan; Sundar, Aarthi; Gilroy, Kyle D.; Eskin, Zachary E.; Hughes, Robert A.; Neretina, Svetlana

    2013-02-01

    A substrate-based templated assembly route has been devised which offers large-area, high-throughput capabilities for the fabrication of periodic arrays of sub-micrometer and nanometer-scale structures. The approach overcomes a significant technological barrier to the widespread use of substrate-based templated assembly by eliminating the need for periodic templates having nanoscale features. Instead, it relies upon the use of a dynamic template with dimensions that evolve in time from easily fabricated micrometer dimensions to those on the nanoscale as the assembly process proceeds. The dynamic template consists of a pedestal of a sacrificial material, typically antimony, upon which an ultrathin layer of a second material is deposited. When heated, antimony sublimation results in a continuous reduction in template size where the motion of the sublimation fronts direct the diffusion of atoms of the second material to a predetermined location. The route has broad applicability, having already produced periodic arrays of gold, silver, copper, platinum, nickel, cobalt, germanium and Au-Ag alloys on substrates as diverse as silicon, sapphire, silicon-carbide, graphene and glass. Requiring only modest levels of instrumentation, the process provides an enabling route for any reasonably equipped researcher to fabricate periodic arrays that would otherwise require advanced fabrication facilities.

  6. Advances in Multi-Sensor Data Fusion: Algorithms and Applications

    PubMed Central

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are offered, including: (1) Improvements of fusion algorithms; (2) Development of “algorithm fusion” methods; (3) Establishment of an automatic quality assessment scheme. PMID:22408479

  7. Geocoding and stereo display of tropical forest multisensor datasets

    NASA Technical Reports Server (NTRS)

    Welch, R.; Jordan, T. R.; Luvall, J. C.

    1990-01-01

    Concern about the future of tropical forests has led to a demand for geocoded multisensor databases that can be used to assess forest structure, deforestation, thermal response, evapotranspiration, and other parameters linked to climate change. In response to studies being conducted at the Braulio Carrillo National Park, Costa Rica, digital satellite and aircraft images recorded by Landsat TM, SPOT HRV, Thermal Infrared Multispectral Scanner, and Calibrated Airborne Multispectral Scanner sensors were placed in register using the Landsat TM image as the reference map. Despite problems caused by relief, multitemporal datasets, and geometric distortions in the aircraft images, registration was accomplished to within ±20 m (±1 data pixel). A digital elevation model constructed from a multisensor Landsat TM/SPOT stereopair proved useful for generating perspective views of the rugged, forested terrain.

  8. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving

    PubMed Central

    Elfring, Jos; Appeldoorn, Rein; van den Dries, Sjoerd; Kwakkernaat, Maurice

    2016-01-01

    The number of perception sensors on automated vehicles increases due to the increasing number of advanced driver assistance system functions and their increasing complexity. Furthermore, fail-safe systems require redundancy, thereby increasing the number of sensors even further. A one-size-fits-all multisensor data fusion architecture is not realistic due to the enormous diversity in vehicles, sensors and applications. As an alternative, this work presents a methodology that can be used to effectively come up with an implementation to build a consistent model of a vehicle’s surroundings. The methodology is accompanied by a software architecture. This combination minimizes the effort required to update the multisensor data fusion system whenever sensors or applications are added or replaced. A series of real-world experiments involving different sensors and algorithms demonstrates the methodology and the software architecture. PMID:27727171

  9. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).
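
    The abstract lists mutual information among the feature-matching criteria. As a rough, generic illustration of that criterion (not the authors' implementation), the sketch below estimates mutual information between two co-registered image patches from a joint histogram and uses it to score small integer shifts; the bin count, the shift search, and the function names are assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two equal-sized image patches (in nats)."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()              # joint probability
    px = pxy.sum(axis=1)                       # marginal of img_a
    py = pxy.sum(axis=0)                       # marginal of img_b
    nz = pxy > 0                               # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def best_shift(ref, mov, max_shift=3):
    """Return the integer (dy, dx) shift of mov that maximizes MI with ref."""
    scores = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            scores[(dy, dx)] = mutual_information(ref, shifted)
    return max(scores, key=scores.get)
```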

  10. Sparse Downscaling and Adaptive Fusion of Multi-sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2011-12-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multi-sensor precipitation data including data from global spaceborne active and passive sensors, regional ground-based weather surveillance radars and local rain gauges. Resolution enhancement of remotely sensed rainfall and optimal integration of multi-sensor data promise a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydro-meteorological applications. In this context, new frameworks are proposed for resolution enhancement and multiscale multi-sensor precipitation data fusion, which capitalize on two main observations: (1) sparseness of remotely sensed precipitation fields in appropriately chosen transformed domains (e.g., in wavelet space), which promotes the use of the newly emerged theory of sparse representation and compressive sensing for resolution enhancement; (2) a conditionally Gaussian Scale Mixture (GSM) parameterization in the wavelet domain which allows exploiting the efficient linear estimation methodologies, while capturing the non-Gaussian data structure of rainfall. The proposed methodologies are demonstrated using a data set of coincident observations of precipitation reflectivity images by the spaceborne precipitation radar (PR) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite and ground-based NEXRAD weather surveillance Doppler radars. Uniqueness and stability of the solution, capturing non-Gaussian singular structure of rainfall, reduced uncertainty of estimation and efficiency of computation are the main advantages of the proposed methodologies over the commonly used standard Gaussian techniques.
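
    As a generic sketch of the sparsity-regularized downscaling alluded to above (not the paper's exact formulation), the fine-scale field x can be recovered from a coarse observation y by an ℓ1-regularized inversion in a wavelet basis W, where H is an assumed smoothing-and-decimation operator and λ a regularization weight:

```latex
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\;
  \tfrac{1}{2}\,\lVert \mathbf{y} - \mathbf{H}\mathbf{x} \rVert_2^2
  \;+\; \lambda\,\lVert \mathbf{W}\mathbf{x} \rVert_1
```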

  11. Multi-sensor Mapping of Volcanic Plumes and Clouds

    NASA Astrophysics Data System (ADS)

    Realmuto, V. J.

    2006-12-01

    The instruments aboard the NASA series of Earth Observing System satellites provide a rich suite of measurements for the mapping of volcanic plumes and clouds. In this presentation we focus on analyses of data acquired with the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Atmospheric Infrared Sounder (AIRS), Moderate Resolution Imaging Spectroradiometer (MODIS), and Multiangle Imaging SpectroRadiometer (MISR). ASTER, MODIS, AIRS, and MISR provide complementary information on the quantity and distribution of sulfur dioxide, silicate ash, and sulfate aerosols within plumes. In addition, MISR data are used to derive estimates of cloud-top altitude, wind direction, and wind speed. The key to multi-sensor mapping is the availability of a standard set of tools for the processing of data from different instruments. To date we have used the MAP_SO2 toolkit to analyze the thermal infrared (TIR) data from MODIS, ASTER, and AIRS. MAP_SO2, a graphic user interface to the MODTRAN radiative transfer model, provides tools for the estimation of emissivity spectra, water vapor and ozone correction factors, surface temperature, and concentrations of SO2. We use the MISR_Shift toolkit to estimate plume-top altitudes and local wind vectors. Our continuous refinement of MAP_SO2 has resulted in lower detection limits for SO2 and lower sensitivity to the presence of sulfate aerosols and ash. Our plans for future refinements of MAP_SO2 include the incorporation of AIRS-based profiles of atmospheric temperature, water vapor and ozone, and MISR-based maps of plume-top altitude into the plume mapping procedures. The centerpiece of our study is a time-series of data acquired during the 2002-2003 and 2006 eruptions of Mount Etna. Time-series measurements are the only means of recording dynamic phenomena and characterizing the processes that generate such phenomena. We have also analyzed data acquired over Klyuchevskoy, Bezymianny, and Sheveluch (Kamchatka), Augustine

  12. Seismo-Acoustic Array Observations of Shallow Conduit Processes at Fuego Volcano, Guatemala

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Lyons, J. J.; Nadeau, P. A.

    2008-12-01

    We deployed small antennas of six broadband seismic and five acoustic sensors 900 m north of the active vent of Fuego volcano during January 2008 to investigate the source of explosions and background tremor. The L-shaped seismic array had stations spaced 30 m apart with one axis parallel to the ridge that runs north from the summit and the other axis down to the west for a total aperture of 150 m. The infrasound sensors were deployed in a similar array, but with an average station spacing of 50 m. There was no lava effusion during the four-day deployment, but explosions were clearly recorded with the seismic and acoustic arrays approximately once per hour with varied amounts of ash, and with durations from ~20-150 s. In addition to the explosions, our seismic array recorded constant volcanic tremor at 1.9 Hz and various discrete events that were not generally detected by the acoustic array. The dominant class of such events, which repeated approximately 10 times per hour, had an impulsive onset with first motion toward the vent, a short duration of <5 s, dominant frequencies from 1-3 Hz, and no infrasound component. All of the seismic signals were predominantly surface waves radiating from the direction of the vent. Apparent velocities from overlapping 1 or 2 s windows of explosions decreased from 1-2 km/s at the onset to about 500 m/s at the arrival of the ground-coupled airwave. Events with no apparent infrasound also have low apparent velocities of 0.5 - 2 km/s, suggesting they are occurring at shallow depths. For these events, a weak P-wave arrival was typically observed about 200 ms before the shear- and surface-wave train. We also recorded some explosions that had very little seismic signal until the arrival of the ground-coupled airwave. Source inversion was not possible due to the limited array geometry, but we used forward modeling of candidate source geometries to infer differences between the sources of the dominant seismic signals. Constraints from

  13. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    NASA Astrophysics Data System (ADS)

    Due-Hansen, J.; Midtbø, K.; Poppe, E.; Summanwar, A.; Jensen, G. U.; Breivik, L.; Wang, D. T.; Schjølberg-Henriksen, K.

    2012-07-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing that resulted in a film with sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms has been developed. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications in, for instance, intravascular cardiology and gastrointestinal imaging.

  14. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.

  15. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    SciTech Connect

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-06-15

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on the Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage across the cathode-anode gap and the load current for calculating the electromagnetic energy. A lumped-element circuit model of the wire arrays was employed to analyze the electromagnetic energy conversion. The inductance as well as the resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that the electromagnetic energy is mainly converted to magnetic and kinetic energy, and that ohmic heating can be neglected before the final stagnation. The kinetic energy can account for the x-ray radiation before the peak power. After stagnation, the electromagnetic energy coupled into the load continues to increase, and the resistance of the load reaches its maximum of 0.6–1.0 Ω in about 10–20 ns.

  16. A miniature electronic nose system based on an MWNT-polymer microsensor array and a low-power signal-processing chip.

    PubMed

    Chiu, Shih-Wen; Wu, Hsiang-Chiu; Chou, Ting-I; Chen, Hsin; Tang, Kea-Tiong

    2014-06-01

    This article introduces a power-efficient, miniature electronic nose (e-nose) system. The e-nose system primarily comprises two self-developed chips: a multi-walled carbon nanotube (MWNT)-polymer based microsensor array and a low-power signal-processing chip. The microsensor array was fabricated on a silicon wafer by using standard photolithography technology. The microsensor array comprised eight interdigitated electrodes surrounded by SU-8 "walls," which restrained the material-solvent liquid in a defined area of 650 × 760 μm². To achieve a reliable sensor-manufacturing process, we used a two-layer deposition method, coating the MWNTs and polymer film as the first and second layers, respectively. The low-power signal-processing chip included array data acquisition circuits and a signal-processing core. The MWNT-polymer microsensor array can directly connect with the array data acquisition circuits, which comprise sensor interface circuitry and an analog-to-digital converter; the signal-processing core consists of memory and a microprocessor. The core executes the program, classifying the odor data received from the array data acquisition circuits. The low-power signal-processing chip was designed and fabricated using the Taiwan Semiconductor Manufacturing Company 0.18-μm 1P6M standard complementary metal oxide semiconductor process. The chip consumes only 1.05 mW of power at supply voltages of 1 and 1.8 V for the array data acquisition circuits and the signal-processing core, respectively. The miniature e-nose system, which used a microsensor array, a low-power signal-processing chip, and an embedded k-nearest-neighbor-based pattern recognition algorithm, was developed as a prototype that successfully recognized the complex odors of tincture, sorghum wine, sake, whisky, and vodka.
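
    The final classification step is an embedded k-nearest-neighbor vote over the eight-channel sensor responses. The floating-point sketch below shows the same decision rule with hypothetical response vectors; the on-chip version is fixed-point firmware, and its feature extraction is not shown.

```python
import numpy as np

def knn_classify(train_x, train_y, sample, k=3):
    """Majority vote among the k training responses closest to sample."""
    dists = np.linalg.norm(train_x - sample, axis=1)        # Euclidean distances
    nearest = np.argsort(dists)[:k]                         # indices of k closest
    labels, counts = np.unique(np.asarray(train_y)[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical 8-channel responses for two odors
train_x = np.array([[0.20, 0.50, 0.10, 0.90, 0.30, 0.40, 0.70, 0.20],
                    [0.80, 0.10, 0.60, 0.20, 0.90, 0.50, 0.10, 0.60]])
train_y = ["sake", "whisky"]
query = np.array([0.25, 0.45, 0.15, 0.85, 0.35, 0.40, 0.65, 0.25])
print(knn_classify(train_x, train_y, query, k=1))           # -> "sake"
```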

  17. Simultaneous processing of photographic and accelerator array data from sled impact experiment

    NASA Astrophysics Data System (ADS)

    Ash, M. E.

    1982-12-01

    A Quaternion-Kalman filter model is derived to simultaneously analyze accelerometer array and photographic data from sled impact experiments. Formulas are given for the quaternion representation of rotations, the propagation of dynamical states and their partial derivatives, the observables and their partial derivatives, and the Kalman filter update of the state given the observables. The observables are accelerometer and tachometer velocity data of the sled relative to the track, linear accelerometer array and photographic data of the subject relative to the sled, and ideal angular accelerometer data. The quaternion constraints enter through perfect constraint observations and normalization after a state update. Lateral and fore-aft impact tests are analyzed with FORTRAN IV software written using the formulas of this report.
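
    The filter's state includes an attitude quaternion that is propagated between measurements and re-normalized after each update. A minimal sketch of that kinematic step (scalar-first convention; variable names are illustrative, and the accelerometer, tachometer, and photographic observables of the full filter are omitted):

```python
import numpy as np

def quat_derivative(q, omega):
    """Quaternion kinematics dq/dt = 0.5 * Omega(omega) * q, with q = [w, x, y, z]."""
    w, x, y, z = q
    wx, wy, wz = omega                        # body angular rate (rad/s)
    return 0.5 * np.array([-x * wx - y * wy - z * wz,
                            w * wx + y * wz - z * wy,
                            w * wy - x * wz + z * wx,
                            w * wz + x * wy - y * wx])

def propagate(q, omega, dt):
    """One Euler step followed by the unit-norm constraint applied after each update."""
    q = q + quat_derivative(q, omega) * dt
    return q / np.linalg.norm(q)
```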

  18. Capillary-Induced Self-Organization of Soft Pillar Arrays into Moiré Patterns by Dynamic Feedback Process

    NASA Astrophysics Data System (ADS)

    Kang, Sung; Wu, Ning; Grinthal, Alison; Aizenberg, Joanna

    2012-02-01

    We report a self-organized pattern formation of polymer nanopillar arrays by dynamic feedback: two nanopillar arrays collectively structure a sandwiched liquid and pattern the menisci, which bend the pillars into Moiré patterns as it evaporates. Like the conventional Moiré phenomenon, the patterns are deterministic and tunable by mismatch angle, yet additional behaviors---chirality from achiral starting motifs and preservation of the patterns after the surfaces are separated---appear from the feedback process. Patterning menisci based on this mechanism provides a simple, scalable approach for making a series of complex, long-range-ordered structures. Reference: Sung H. Kang, Ning Wu, Alison Grinthal, and Joanna Aizenberg, Phys. Rev. Lett., 107, 177802 (2011).

  19. Process Study of Oceanic Responses to Typhoons Using Arrays of EM-APEX Floats and Moorings

    DTIC Science & Technology

    2015-03-01

    Ren-Chieh Lien, Applied Physics Laboratory, University of Washington, 1013 NE 40th Street, Seattle, Washington. ... Long-term observations of atmospheric forcing and upper oceanic conditions were made by moorings in the western Pacific Ocean, in collaboration with ... of ITOP), subsurface temperature measurements on the moorings were transmitted via Iridium satellite, and one upward-looking 75-kHz Long Ranger ADCP

  20. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation of non-stationary signals and compares it with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments without the need for any preprocessing that depends on the array geometry and aperture.
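
    As a toy illustration of the grid-search maximum-likelihood estimator underlying such methods (single source, uniform linear array; not the paper's multi-source t-f ML algorithm), the covariance matrix R below would, in the time-frequency variant, be replaced by a spatial time-frequency distribution matrix averaged over one t-f region:

```python
import numpy as np

def steering(theta_deg, n_sensors, d=0.5):
    """Steering vector of a uniform linear array with element spacing d (in wavelengths)."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))

def ml_doa(R, n_sensors, grid=None):
    """Single-source deterministic ML DOA: maximize the projection of R onto a(theta)."""
    if grid is None:
        grid = np.arange(-90.0, 90.5, 0.5)
    scores = []
    for th in grid:
        a = steering(th, n_sensors)
        P = np.outer(a, a.conj()) / n_sensors          # rank-1 projector onto a(theta)
        scores.append(np.real(np.trace(P @ R)))
    return grid[int(np.argmax(scores))]
```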

  1. Large Enhancement of Field Emission from ZnO Nanocone Arrays via Patterning Process

    NASA Astrophysics Data System (ADS)

    Le Shim, Ee; Bae, Joonho; Yoo, Eunji; Kang, Chijung; Choi, Young Jin

    2010-11-01

    We report on the direct observation of enhanced field emission from patterned ZnO nanocones compared with plain-geometry ZnO nanocones. For an unambiguous comparison of field emission from patterned and plain (nonpatterned) nanocones, periodic arrays of ZnO nanowires were fabricated on Si by photolithography, RCA-1(aq) solution etching, and the hydrothermal growth method. The conelike morphology was achieved by anisotropic etching of the different crystal planes of the ZnO nanowires in an aqueous solution of acetic acid [CH3COOH(aq)]. As the control sample of plain ZnO nanocones, ZnO nanowires with a plain geometry were synthesized under the same conditions as the patterned sample. Field emission measurements on the plain and patterned ZnO nanocones reveal that the turn-on field decreases from 6.0 V/µm (plain nanocone arrays) to 3.8 V/µm (patterned nanocone arrays).

  2. Large Enhancement of Field Emission from ZnO Nanocone Arrays via Patterning Process

    NASA Astrophysics Data System (ADS)

    Shim, Ee Le; Bae, Joonho; Yoo, Eunji; Kang, Chijung; Choi, Young Jin

    2010-11-01

    We report on the direct observation of enhanced field emission from patterned ZnO nanocones compared with plain-geometry ZnO nanocones. For an unambiguous comparison of field emission from patterned and plain (nonpatterned) nanocones, periodic arrays of ZnO nanowires were fabricated on Si by photolithography, RCA-1(aq) solution etching, and the hydrothermal growth method. The conelike morphology was achieved by anisotropic etching of the different crystal planes of the ZnO nanowires in an aqueous solution of acetic acid [CH3COOH(aq)]. As the control sample of plain ZnO nanocones, ZnO nanowires with a plain geometry were synthesized under the same conditions as the patterned sample. Field emission measurements on the plain and patterned ZnO nanocones reveal that the turn-on field decreases from 6.0 V/μm (plain nanocone arrays) to 3.8 V/μm (patterned nanocone arrays).

  3. Array tomography: imaging stained arrays.

    PubMed

    Micheva, Kristina D; O'Rourke, Nancy; Busse, Brad; Smith, Stephen J

    2010-11-01

    Array tomography is a volumetric microscopy method based on physical serial sectioning. Ultrathin sections of a plastic-embedded tissue are cut using an ultramicrotome, bonded in an ordered array to a glass coverslip, stained as desired, and imaged. The resulting two-dimensional image tiles can then be reconstructed computationally into three-dimensional volume images for visualization and quantitative analysis. The minimal thickness of individual sections permits high-quality rapid staining and imaging, whereas the array format allows reliable and convenient section handling, staining, and automated imaging. Also, the physical stability of the arrays permits images to be acquired and registered from repeated cycles of staining, imaging, and stain elution, as well as from imaging using multiple modalities (e.g., fluorescence and electron microscopy). Array tomography makes it possible to visualize and quantify previously inaccessible features of tissue structure and molecular architecture. However, careful preparation of the tissue is essential for successful array tomography; these steps can be time-consuming and require some practice to perfect. In this protocol, tissue arrays are imaged using conventional wide-field fluorescence microscopy. Images can be captured manually or, with the appropriate software and hardware, the process can be automated.

  4. Hierarchical multisensor analysis for robotic exploration

    NASA Technical Reports Server (NTRS)

    Eberlein, Susan; Yates, Gigi; Majani, Eric

    1991-01-01

    Robotic vehicles for lunar and Mars exploration will carry an array of complex instruments requiring real-time data interpretation and fusion. The system described here uses hierarchical multiresolution analysis of visible and multispectral images to extract information on mineral composition, texture and object shape. This information is used to characterize the site geology and choose interesting samples for acquisition. Neural networks are employed for many data analysis steps. A decision tree progressively integrates information from multiple instruments and performs goal-driven decision making. The system is designed to incorporate more instruments and data types as they become available.

  5. Hierarchical multisensor analysis for robotic exploration

    NASA Astrophysics Data System (ADS)

    Eberlein, Susan; Yates, Gigi; Majani, Eric

    1991-03-01

    Robotic vehicles for lunar and Mars exploration will carry an array of complex instruments requiring real-time data interpretation and fusion. The system described here uses hierarchical multiresolution analysis of visible and multispectral images to extract information on mineral composition, texture and object shape. This information is used to characterize the site geology and choose interesting samples for acquisition. Neural networks are employed for many data analysis steps. A decision tree progressively integrates information from multiple instruments and performs goal-driven decision making. The system is designed to incorporate more instruments and data types as they become available.

  6. Real-time processing of fast-scan cyclic voltammetry (FSCV) data using a field-programmable gate array (FPGA).

    PubMed

    Bozorgzadeh, Bardia; Covey, Daniel P; Heidenreich, Byron A; Garris, Paul A; Mohseni, Pedram

    2014-01-01

    This paper reports the hardware implementation of a digital signal processing (DSP) unit for real-time processing of data obtained by fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode (CFM), an electrochemical transduction technique for high-resolution monitoring of brain neurochemistry. Implemented on a field-programmable gate array (FPGA), the DSP unit comprises a decimation filter and an embedded processor to process the oversampled FSCV data and obtain in real time a temporal profile of concentration variation along with a chemical signature to identify the target neurotransmitter. Interfaced with an integrated, FSCV-sensing front-end, the DSP unit can successfully process FSCV data obtained by bolus injection of dopamine in a flow cell as well as electrically evoked, transient dopamine release in the dorsal striatum of an anesthetized rat.

  7. A data fusion algorithm for multi-sensor microburst hazard assessment

    NASA Technical Reports Server (NTRS)

    Wanke, Craig R.; Hansman, R. John

    1994-01-01

    A recursive model-based data fusion algorithm for multi-sensor microburst hazard assessment is described. An analytical microburst model is used to approximate the actual windfield, and a set of 'best' model parameters are estimated from measured winds. The winds corresponding to the best parameter set can then be used to compute alerting factors such as microburst position, extent, and intensity. The estimation algorithm is based on an iterated extended Kalman filter which uses the microburst model parameters as state variables. Microburst state dynamic and process noise parameters are chosen based on measured microburst statistics. The estimation method is applied to data from a time-varying computational simulation of a historical microburst event to demonstrate its capabilities and limitations. Selection of filter parameters and initial conditions is discussed. Computational requirements and datalink bandwidth considerations are also addressed.
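
    A generic sketch of the iterated extended Kalman filter measurement update described above, which re-linearizes about the updated estimate in a Gauss-Newton fashion; the state vector, measurement model, and noise matrices here are placeholders rather than the paper's microburst parameterization.

```python
import numpy as np

def iekf_update(x, P, y, h, H_jac, R, n_iter=5):
    """Iterated EKF update. x, P: prior estimate/covariance; y: stacked measurements;
    h(x): predicted measurements (e.g. model winds); H_jac(x): Jacobian of h; R: noise cov."""
    xi = x.copy()
    for _ in range(n_iter):
        H = H_jac(xi)
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # gain at the current linearization point
        xi = x + K @ (y - h(xi) - H @ (x - xi))  # re-linearized state update
    P_new = (np.eye(len(x)) - K @ H) @ P
    return xi, P_new
```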

  8. Application of multi-sensors parallel fusion system in photoelectric tracing

    NASA Astrophysics Data System (ADS)

    Cheng, Guo-ying; Cai, Sheng; Gao, Hui-bin; Zhang, Shu-mei; Qiao, Yan-Feng

    2008-12-01

    To address the real-time and reliability problems of the tracking servo-control system in an optoelectronic theodolite, a multisensor parallel processing system was proposed. Miss distances from three different wavebands were fed into the system, and prediction was performed in DSP1 to obtain the actual position information. Data fusion was accomplished in an FPGA, with the data imported over a multichannel buffered serial port. The fused position information was used to control the theodolite. The results were compared with external guidance data in DSP2 to correct the above calculation and then passed to the host computer through a PXI interface. Simulation of each computation unit showed that the system can solve the real-time problem of feature-level data fusion. The simulation results showed that the system meets the real-time requirement, completing processing within 1.25 ms for a theodolite with three imaging systems and a photoelectric-encoder sampling frequency of 800 Hz.

  9. Light absorption processes and optimization of ZnO/CdTe core-shell nanowire arrays for nanostructured solar cells.

    PubMed

    Michallon, Jérôme; Bucci, Davide; Morand, Alain; Zanuccoli, Mauro; Consonni, Vincent; Kaminski-Cachopo, Anne

    2015-02-20

    The absorption processes of extremely thin absorber solar cells based on ZnO/CdTe core-shell nanowire (NW) arrays with square, hexagonal or triangular arrangements are investigated through systematic computations of the ideal short-circuit current density using three-dimensional rigorous coupled wave analysis. The geometrical dimensions are optimized for the optical design of these solar cells: the optimal NW diameter, height, and array period are 200 ± 10 nm, 1-3 μm, and 350-400 nm for the square arrangement with a CdTe shell thickness of 40-60 nm. The effects of the CdTe shell thickness on the absorption of ZnO/CdTe NW arrays are revealed through the study of two key optical modes: the first confines the light within individual NWs, while the second interacts strongly with the NW arrangement. It is also shown that the reflectivity of the substrate can improve Fabry-Perot resonances within the NWs: the ideal short-circuit current density is increased by 10% for the ZnO/fluorine-doped tin oxide (FTO)/ideal reflector as compared to the ZnO/FTO/glass substrate. Furthermore, the optimized square arrangement absorbs light more efficiently than both optimized hexagonal and triangular arrangements. Finally, the enhancement factor of the ideal short-circuit current density is calculated to be as high as 1.72 with respect to planar layers, showing the high optical potential of ZnO/CdTe core-shell NW arrays.
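
    The optimization target above, the ideal short-circuit current density, is the standard integral of the computed absorptance A(λ) of the CdTe shell against the incident photon flux, assuming every absorbed photon yields one collected carrier (generic textbook notation, not taken from the paper):

```latex
J_{\mathrm{sc}}^{\mathrm{ideal}} \;=\; q \int_{\lambda_{\min}}^{\lambda_{\max}}
  A(\lambda)\,\Phi_{\mathrm{AM1.5G}}(\lambda)\,\mathrm{d}\lambda
```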

  10. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Brewster, L.; Johnston, A.; Howard, R.; Mitchell, J.; Cryan, S.

    2007-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL

  11. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Lab

    NASA Technical Reports Server (NTRS)

    Brewster, Linda L.; Howard, Richard T.; Johnston, A. S.; Carrington, Connie; Mitchell, Jennifer D.; Cryan, Scott P.

    2008-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL

  12. NASA 1990 Multisensor Airborne Campaigns (MACs) for ecosystem and watershed studies

    NASA Technical Reports Server (NTRS)

    Wickland, Diane E.; Asrar, Ghassem; Murphy, Robert E.

    1991-01-01

    The Multisensor Airborne Campaign (MAC) focus within NASA's former Land Processes research program was conceived to achieve the following objectives: to acquire relatively complete, multisensor data sets for well-studied field sites, to add a strong remote sensing science component to ecology-, hydrology-, and geology-oriented field projects, to create a research environment that promotes strong interactions among scientists within the program, and to more efficiently utilize and compete for the NASA fleet of remote sensing aircraft. Four new MAC's were conducted in 1990: the Oregon Transect Ecosystem Research (OTTER) project along an east-west transect through central Oregon, the Forest Ecosystem Dynamics (FED) project at the Northern Experimental Forest in Howland, Maine, the MACHYDRO project in the Mahantango Creek watershed in central Pennsylvania, and the Walnut Gulch project near Tombstone, Arizona. The OTTER project is testing a model that estimates the major fluxes of carbon, nitrogen, and water through temperate coniferous forest ecosystems. The focus in the project is on short time-scale (days-year) variations in ecosystem function. The FED project is concerned with modeling vegetation changes of forest ecosystems using remotely sensed observations to extract biophysical properties of forest canopies. The focus in this project is on long time-scale (decades to millennia) changes in ecosystem structure. The MACHYDRO project is studying the role of soil moisture and its regulating effects on hydrologic processes. The focus of the study is to delineate soil moisture differences within a basin and their changes with respect to evapotranspiration, rainfall, and streamflow. The Walnut Gulch project is focused on the effects of soil moisture in the energy and water balance of arid and semiarid ecosystems and their feedbacks to the atmosphere via thermal forcing.

  13. NASA 1990 Multisensor Airborne Campaigns (MACs) for ecosystem and watershed studies

    NASA Astrophysics Data System (ADS)

    Wickland, Diane E.; Asrar, Ghassem; Murphy, Robert E.

    The Multisensor Airborne Campaign (MAC) focus within NASA's former Land Processes research program was conceived to achieve the following objectives: to acquire relatively complete, multisensor data sets for well-studied field sites, to add a strong remote sensing science component to ecology-, hydrology-, and geology-oriented field projects, to create a research environment that promotes strong interactions among scientists within the program, and to more efficiently utilize and compete for the NASA fleet of remote sensing aircraft. Four new MAC's were conducted in 1990: the Oregon Transect Ecosystem Research (OTTER) project along an east-west transect through central Oregon, the Forest Ecosystem Dynamics (FED) project at the Northern Experimental Forest in Howland, Maine, the MACHYDRO project in the Mahantango Creek watershed in central Pennsylvania, and the Walnut Gulch project near Tombstone, Arizona. The OTTER project is testing a model that estimates the major fluxes of carbon, nitrogen, and water through temperate coniferous forest ecosystems. The focus in the project is on short time-scale (days-year) variations in ecosystem function. The FED project is concerned with modeling vegetation changes of forest ecosystems using remotely sensed observations to extract biophysical properties of forest canopies. The focus in this project is on long time-scale (decades to millennia) changes in ecosystem structure. The MACHYDRO project is studying the role of soil moisture and its regulating effects on hydrologic processes. The focus of the study is to delineate soil moisture differences within a basin and their changes with respect to evapotranspiration, rainfall, and streamflow. The Walnut Gulch project is focused on the effects of soil moisture in the energy and water balance of arid and semiarid ecosystems and their feedbacks to the atmosphere via thermal forcing.

  14. Phonon processes in vertically aligned silicon nanowire arrays produced by low-cost all-solution galvanic displacement method

    NASA Astrophysics Data System (ADS)

    Banerjee, Debika; Trudeau, Charles; Gerlein, Luis Felipe; Cloutier, Sylvain G.

    2016-03-01

    The nanoscale engineering of silicon can significantly change its bulk optoelectronic properties to make it more favorable for device integration. Phonon process engineering is one way to enhance interband transitions in silicon's indirect band structure. This paper demonstrates phonon localization at the tip of silicon nanowires fabricated by galvanic displacement using wet electroless chemical etching of a bulk silicon wafer. High-resolution Raman micro-spectroscopy reveals that such arrayed structures of silicon nanowires display phonon localization behaviors, which could help their integration into future generations of nano-engineered silicon nanowire-based devices such as photodetectors and solar cells.

  15. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Choi, Yong; Hong, Key Jo; Kang, Jihoon; Jung, Jin Ho; Huh, Youn Suk; Lim, Hyun Keong; Kim, Sang Su; Kim, Byung-Tae; Chung, Yonghyun

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time to digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. As compared to PMT the Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMT and anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4×4 array, pixel size: 3 mm×3 mm) coupled 1:1 with LYSO scintillators (4×4 array, pixel size: 3 mm×3 mm×20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce GAPD channel number and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information all together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and then transferred to a host computer for coincidence sorting and image reconstruction. In order to

  16. Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling.

    PubMed

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique that improves the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement is described, and a Kalman filter (KF) is designed to obtain optimal rate estimates. In particular, the rate signal is modeled as a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the factors affecting it were analyzed using a steady-state covariance analysis. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests showed that the presented model is effective at improving gyroscope accuracy: six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, whereas the rate signal estimated with the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. Both models improve the angular rate accuracy and perform similarly under static conditions; under dynamic conditions, the test results showed that the first-order Markov process model reduced the dynamic errors by 20% more than the random walk model.
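
    A minimal scalar sketch of the fusion step described above: N gyro channels observe one true rate, which is modeled as a first-order Markov (Gauss-Markov) process with correlation time tau rather than a random walk. The noise values are illustrative, and the per-gyro bias states of the full model are omitted.

```python
import numpy as np

def fuse_gyro_array(z, dt, tau=100.0, sigma_rate=0.5, sigma_meas=0.05):
    """Combine N nominally identical gyro outputs with a scalar Kalman filter.
    z has shape (n_samples, n_gyros); returns the fused rate estimate per sample."""
    n_samples, n_gyros = z.shape
    phi = np.exp(-dt / tau)                      # first-order Markov transition factor
    Q = (sigma_rate ** 2) * (1.0 - phi ** 2)     # discrete process-noise variance
    H = np.ones((n_gyros, 1))                    # every gyro observes the same rate
    R = (sigma_meas ** 2) * np.eye(n_gyros)
    x, P = 0.0, sigma_rate ** 2                  # initial estimate and variance
    estimates = np.empty(n_samples)
    for k in range(n_samples):
        x, P = phi * x, phi * phi * P + Q        # time update (Markov model)
        S = P * (H @ H.T) + R                    # innovation covariance
        K = P * H.T @ np.linalg.inv(S)           # 1 x n_gyros Kalman gain
        x = x + (K @ (z[k] - x)).item()          # update with the whole array at once
        P = (1.0 - (K @ H).item()) * P
        estimates[k] = x
    return estimates
```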

  17. Analysis of process parameter effect on DIBL in n-channel MOSFET device using L27 orthogonal array

    NASA Astrophysics Data System (ADS)

    Salehuddin, F.; Kaharudin, K. E.; Zain, A. S. M.; Yamin, A. K. Mat; Ahmad, I.

    2014-10-01

    In this research, the effect of process parameter variation on drain-induced barrier lowering (DIBL) was investigated. The transistor device was fabricated in a TCAD simulator consisting of the ATHENA and ATLAS modules. These two modules were combined with the Taguchi method to optimize the process parameters, with the parameter settings defined by the L27 orthogonal array of the Taguchi method. In the NMOS device, the most significant factors for the S/N ratio are the halo implant energy, S/D implant dose, and S/D implant energy. The S/N ratio of DIBL after optimization for the L27 array is 29.42 dB, and the corresponding DIBL value for the n-channel MOSFET device is +37.8 mV, which is suitably small as expected. In conclusion, by setting up the design of experiments with the Taguchi method and the TCAD simulator, an optimal DIBL solution for a robust 32 nm n-channel MOSFET design recipe was successfully achieved.
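
    For a smaller-the-better response such as DIBL, the Taguchi S/N ratio quoted above is -10 log10 of the mean squared response; with DIBL expressed in volts, a value around 37.8 mV indeed corresponds to roughly 28-29 dB. A small sketch with hypothetical readings:

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio (dB) for a smaller-the-better response: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

print(sn_smaller_the_better([0.0378, 0.0365, 0.0390]))   # approx. 28.5 dB
```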

  18. True-time-delay transmit/receive optical beam-forming system for phased arrays and other signal processing applications

    NASA Astrophysics Data System (ADS)

    Toughlian, Edward N.; Zamuda, H.; Carter, Charity A.

    1994-06-01

    This paper addresses the problem of dynamic optical processing for the control of phased array antennas. The significant result presented is the demonstration of a continuously variable photonic RF/microwave delay line. Specifically, it is shown that by applying spatial frequency dependent optical phase compensation in an optical heterodyne process, variable RF delay can be achieved over a prescribed frequency band. Experimental results which demonstrate the performance of the delay line with regard to both maximum delay and resolution over a broad bandwidth are presented. Additionally, a spatially integrated optical system is proposed for control of phased array antennas. The integrated system provides mechanical stability, essentially eliminates the drift problems associated with free space optical systems, and can provide high packing density. This approach uses a class of spatial light modulator known as a deformable mirror device and leads to a steerable arbitrary antenna radiation pattern of the true time delay type. Also considered is the ability to utilize the delay line as a general photonic signal processing element in an adaptive (reconfigurable) transversal frequency filter configuration. Such systems are widely applicable in jammer/noise canceling systems, broadband ISDN, spread spectrum secure communications and the like.
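
    The defining property of the true-time-delay feed that the photonic delay line provides is that the delay applied to element n of an N-element array with spacing d depends only on the steering angle θ0, not on frequency, so the corresponding phase tracks frequency linearly and the beam does not squint across the band (generic textbook relation, not specific to this paper):

```latex
\tau_n \;=\; \frac{n\,d\,\sin\theta_0}{c}, \qquad n = 0, 1, \dots, N-1,
\qquad \varphi_n(f) \;=\; 2\pi f\,\tau_n
```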

  19. Direct fabrication of compound-eye microlens array on curved surfaces by a facile femtosecond laser enhanced wet etching process

    NASA Astrophysics Data System (ADS)

    Bian, Hao; Wei, Yang; Yang, Qing; Chen, Feng; Zhang, Fan; Du, Guangqing; Yong, Jiale; Hou, Xun

    2016-11-01

    We report the direct fabrication of an omnidirectional negative microlens array on a curved substrate by a femtosecond laser enhanced chemical etching process; the array is used as a molding template for duplicating bioinspired compound eyes. The femtosecond laser treatment of the curved glass substrate employs a common x-y-z stage without rotating the sample surface perpendicular to the laser beam, and uniform, omnidirectionally aligned negative microlenses are generated after hydrofluoric acid etching. Using the negative microlens array on the concave glass substrate as a molding template, we fabricate an artificial compound eye with 3000 positive microlenses of 95-μm diameter close-packed on a 5-mm polymer hemisphere. Compared to the transferring process, the negative microlenses directly fabricated on the curved mold by our method are distortion-free, and the duplicated artificial eye presents clear and uniform imaging capabilities. This work provides a facile and efficient route to the fabrication of microlenses on any curved substrates without complicated alignment and motion control processes, which has the potential for the development of new microlens-based devices and systems.

  20. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications

    SciTech Connect

    Wlodarczyk, Krystian L.; Maier, Robert R. J.; Hand, Duncan P.; Bryce, Emma; Hutson, David; Kirk, Katherine; Schwartz, Noah; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil; Strachan, Mel

    2014-02-15

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  1. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications.

    PubMed

    Wlodarczyk, Krystian L; Bryce, Emma; Schwartz, Noah; Strachan, Mel; Hutson, David; Maier, Robert R J; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil; Kirk, Katherine; Hand, Duncan P

    2014-02-01

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  2. Primary Dendrite Array: Observations from Ground-Based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra N.; Grugel, Richard N.; Erdman, Robert G.; Poirier, David R.

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST). Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K per centimeter (MICAST6) and 28 K per centimeter (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micrometers per second) and a speed decrease (MICAST7-from 20 to 10 micrometers per second). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  3. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micron/s) and a speed decrease (MICAST7-from 20 to 10 micron/s). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  4. From spin noise to systematics: stochastic processes in the first International Pulsar Timing Array data release

    NASA Astrophysics Data System (ADS)

    Lentati, L.; Shannon, R. M.; Coles, W. A.; Verbiest, J. P. W.; van Haasteren, R.; Ellis, J. A.; Caballero, R. N.; Manchester, R. N.; Arzoumanian, Z.; Babak, S.; Bassa, C. G.; Bhat, N. D. R.; Brem, P.; Burgay, M.; Burke-Spolaor, S.; Champion, D.; Chatterjee, S.; Cognard, I.; Cordes, J. M.; Dai, S.; Demorest, P.; Desvignes, G.; Dolch, T.; Ferdman, R. D.; Fonseca, E.; Gair, J. R.; Gonzalez, M. E.; Graikou, E.; Guillemot, L.; Hessels, J. W. T.; Hobbs, G.; Janssen, G. H.; Jones, G.; Karuppusamy, R.; Keith, M.; Kerr, M.; Kramer, M.; Lam, M. T.; Lasky, P. D.; Lassus, A.; Lazarus, P.; Lazio, T. J. W.; Lee, K. J.; Levin, L.; Liu, K.; Lynch, R. S.; Madison, D. R.; McKee, J.; McLaughlin, M.; McWilliams, S. T.; Mingarelli, C. M. F.; Nice, D. J.; Osłowski, S.; Pennucci, T. T.; Perera, B. B. P.; Perrodin, D.; Petiteau, A.; Possenti, A.; Ransom, S. M.; Reardon, D.; Rosado, P. A.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Siemens, X.; Smits, R.; Stairs, I.; Stappers, B.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Taylor, S. R.; Theureau, G.; Tiburzi, C.; Toomey, L.; Vallisneri, M.; van Straten, W.; Vecchio, A.; Wang, J.-B.; Wang, Y.; You, X. P.; Zhu, W. W.; Zhu, X.-J.

    2016-05-01

    We analyse the stochastic properties of the 49 pulsars that comprise the first International Pulsar Timing Array (IPTA) data release. We use Bayesian methodology, performing model selection to determine the optimal description of the stochastic signals present in each pulsar. In addition to spin-noise and dispersion-measure (DM) variations, these models can include timing noise unique to a single observing system, or frequency band. We show that the improved radio-frequency coverage and the presence of overlapping data from different observing systems in the IPTA data set enable us to separate both system and band-dependent effects with much greater efficacy than in the individual pulsar timing array (PTA) data sets. For example, we show that PSR J1643-1224 has, in addition to DM variations, significant band-dependent noise that is coherent between PTAs, which we interpret as coming from time-variable scattering or refraction in the ionized interstellar medium. Failing to model these different contributions appropriately can dramatically alter the astrophysical interpretation of the stochastic signals observed in the residuals. In some cases, the spectral exponent of the spin-noise signal can vary from 1.6 to 4 depending upon the model, which has direct implications for the long-term sensitivity of the pulsar to a stochastic gravitational-wave (GW) background. By using a more appropriate model, however, we can greatly improve a pulsar's sensitivity to GWs. For example, including system and band-dependent signals in the PSR J0437-4715 data set improves the upper limit on a fiducial GW background by ˜60 per cent compared to a model that includes DM variations and spin-noise only.
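
    For readers unfamiliar with the "spectral exponent" mentioned above, the sketch below evaluates the power-law red-noise power spectral density conventionally used in pulsar-timing analyses, P(f) = A^2/(12*pi^2) * (f/f_yr)^(-gamma) * yr^3. This is the standard parameterization rather than anything specific to this paper, and the amplitude and frequency values in the example are purely illustrative.

      import numpy as np

      SEC_PER_YEAR = 365.25 * 86400.0

      def red_noise_psd(f_hz, log10_amp, gamma):
          """Conventional pulsar-timing power-law red-noise PSD (units of s^3).

          gamma is the spectral exponent discussed in the abstract (e.g. 1.6 vs 4).
          """
          f_yr = 1.0 / SEC_PER_YEAR                    # reference frequency of 1/yr
          amp = 10.0 ** log10_amp
          return (amp ** 2 / (12.0 * np.pi ** 2)) * (f_hz / f_yr) ** (-gamma) * SEC_PER_YEAR ** 3

      # A steeper exponent concentrates power at the lowest frequencies:
      f = 1.0 / (10.0 * SEC_PER_YEAR)                  # one cycle per decade
      print(red_noise_psd(f, -14.0, 1.6), red_noise_psd(f, -14.0, 4.0))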

  5. RadMAP: The Radiological Multi-sensor Analysis Platform

    NASA Astrophysics Data System (ADS)

    Bandstra, Mark S.; Aucott, Timothy J.; Brubaker, Erik; Chivers, Daniel H.; Cooper, Reynold J.; Curtis, Joseph C.; Davis, John R.; Joshi, Tenzing H.; Kua, John; Meyer, Ross; Negut, Victor; Quinlan, Michael; Quiter, Brian J.; Srinivasan, Shreyas; Zakhor, Avideh; Zhang, Richard; Vetter, Kai

    2016-12-01

    The variability of gamma-ray and neutron background during the operation of a mobile detector system greatly limits the ability of the system to detect weak radiological and nuclear threats. The natural radiation background measured by a mobile detector system is the result of many factors, including the radioactivity of nearby materials, the geometric configuration of those materials and the system, the presence of absorbing materials, and atmospheric conditions. Background variations tend to be highly non-Poissonian, making it difficult to set robust detection thresholds using knowledge of the mean background rate alone. The Radiological Multi-sensor Analysis Platform (RadMAP) system is designed to allow the systematic study of natural radiological background variations and to serve as a development platform for emerging concepts in mobile radiation detection and imaging. To do this, RadMAP has been used to acquire extensive, systematic background measurements and correlated contextual data that can be used to test algorithms and detector modalities at low false alarm rates. By combining gamma-ray and neutron detector systems with data from contextual sensors, the system enables the fusion of data from multiple sensors into novel data products. The data are curated in a common format that allows for rapid querying across all sensors, creating detailed multi-sensor datasets that are used to study correlations between radiological and contextual data, and develop and test novel techniques in mobile detection and imaging. In this paper we will describe the instruments that comprise the RadMAP system, the effort to curate and provide access to multi-sensor data, and some initial results on the fusion of contextual and radiological data.

  6. Satellite Data Simulator Unit: A Multisensor, Multispectral Satellite Simulator Package

    NASA Technical Reports Server (NTRS)

    Masunaga, Hirohiko; Matsui, Toshihisa; Tao, Wei-Kuo; Hou, Arthur Y.; Kummerow, Christian D.; Nakajima, Teruyuki; Bauer, Peter; Olson, William S.; Sekiguchi, Miho; Nakajima, Teruyuki

    2010-01-01

    Several multisensor simulator packages are being developed by different research groups across the world. Such simulator packages [e.g., COSP, CRTM, ECSIM, RTTO, ISSARS (under development), and SDSU (this article), among others] share overall aims, although some are targeted more at particular satellite programs or specific applications (for research purposes or for operational use) than others. The SDSU, or Satellite Data Simulator Unit, is a general-purpose simulator composed of Fortran 90 codes and applicable to spaceborne microwave radiometer, radar, and visible/infrared imagers including, but not limited to, the sensors listed in a table that shows satellite programs particularly suitable for multisensor data analysis: some are single satellite missions carrying two or more instruments, while others are constellations of satellites flying in formation. The TRMM and A-Train are ongoing satellite missions carrying diverse sensors that observe clouds and precipitation, and will be continued or augmented within the decade to come by future multisensor missions such as the GPM and Earth-CARE. The ultimate goals of these present and proposed satellite programs are not restricted to clouds and precipitation but are to better understand their interactions with atmospheric dynamics/chemistry and feedback to climate. The SDSU's applicability is not technically limited to hydrometeor measurements either, but may be extended to air temperature and humidity observations by tuning the SDSU to sounding channels. As such, the SDSU and other multisensor simulators would potentially contribute to a broad area of climate and atmospheric sciences. The SDSU is not optimized to any particular orbital geometry of satellites. The SDSU is applicable not only to low-Earth orbiting platforms as listed in Table 1, but also to geostationary meteorological satellites. Although no geosynchronous satellite carries microwave instruments at present or in the near future, the SDSU would be

  7. A Bayesian tracker for multi-sensor passive narrowband fusion

    NASA Astrophysics Data System (ADS)

    Pirkl, Ryan J.; Aughenbaugh, Jason M.

    2016-05-01

    We demonstrate the detection and localization performance of a multi-sensor, passive sonar Bayesian tracker for underwater targets emitting narrowband signals in the presence of realistic underwater ambient noise. Our evaluation focuses on recent advances in the formulation of the likelihood function used by the tracker that provide greater robustness in the presence of both realistic environmental noise and imprecise/inaccurate a priori knowledge of the target's narrowband signal. These improvements enable the tracker to reliably detect and localize narrowband emitters for a broader range of propagation environments, target velocities, and inherent uncertainty in a priori knowledge.
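
    As a concrete illustration of the Bayesian recursion such a tracker relies on (not the authors' specific likelihood formulation), the following sketch fuses per-sensor log-likelihoods over a grid of candidate target locations; the grid size, uniform prior, and Gaussian-shaped likelihoods are placeholder assumptions.

      import numpy as np

      def bayes_update(prior, log_likelihoods):
          """One Bayesian measurement update over a grid of candidate target states.

          prior           : array of prior probabilities over grid cells (sums to 1)
          log_likelihoods : list of per-sensor log-likelihood arrays on the same grid
          """
          log_post = np.log(prior) + np.sum(log_likelihoods, axis=0)
          log_post -= log_post.max()                 # numerical stabilization
          post = np.exp(log_post)
          return post / post.sum()

      # Toy example: 100-cell grid, two sensors whose likelihoods peak near cell 60
      grid = np.arange(100)
      prior = np.full(100, 1.0 / 100)
      ll = [-0.5 * ((grid - 60) / 8.0) ** 2, -0.5 * ((grid - 58) / 10.0) ** 2]
      posterior = bayes_update(prior, ll)
      print(grid[np.argmax(posterior)])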

  8. Fault detection and isolation for multisensor navigation systems

    NASA Technical Reports Server (NTRS)

    Kline, Paul A.; Vangraas, Frank

    1991-01-01

    Increasing attention is being given to the problem of erroneous measurement data for multisensor navigation systems. A recursive estimator can be used in conjunction with a 'snapshot' batch estimator to provide fault detection and isolation (FDI) for these systems. A recursive estimator uses past system states to form a new state estimate and compares it to the calculated state based on a new set of measurements. A 'snapshot' batch estimator uses a set of measurements collected simultaneously and compares solutions based on subsets of measurements. The 'snapshot' approach requires redundant measurements in order to detect and isolate faults. FDI is also referred to as Receiver Autonomous Integrity Monitoring (RAIM).
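
    A minimal sketch of the "snapshot" residual test described above is given below, assuming a linearized measurement model z = Hx + noise with redundant measurements and a common noise standard deviation; the chi-square threshold and noise model are illustrative choices, not a certified RAIM implementation.

      import numpy as np
      from scipy.stats import chi2

      def snapshot_fdi(H, z, sigma, p_fa=1e-3):
          """Least-squares residual test on one epoch of redundant measurements.

          H     : (m x n) measurement geometry matrix, m > n (redundancy required)
          z     : (m,) measurement vector
          sigma : assumed measurement noise standard deviation
          Returns (fault_detected, test_statistic).
          """
          x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)   # snapshot state estimate
          r = z - H @ x_hat                                # residual vector
          t = float(r @ r) / sigma ** 2                    # ~ chi2 with m-n dof if fault-free
          threshold = chi2.ppf(1.0 - p_fa, df=H.shape[0] - H.shape[1])
          return t > threshold, t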

  9. The use of multisensor images for Earth Science applications

    NASA Technical Reports Server (NTRS)

    Evans, D.; Stromberg, B.

    1983-01-01

    The use of more than one remote sensing technique is particularly important for Earth Science applications because of the compositional and textural information derivable from the images. The ability to simultaneously analyze images acquired by different sensors requires coregistration of the multisensor image data sets. In order to ensure pixel-to-pixel registration in areas of high relief, images must be rectified to eliminate topographic distortions. Coregistered images can be analyzed using a variety of multidimensional techniques and the acquired knowledge of topographic effects in the images can be used in photogeologic interpretations.

  10. Focal plane array with modular pixel array components for scalability

    SciTech Connect

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  11. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  12. Elastomeric inverse moulding and vacuum casting process characterization for the fabrication of arrays of concave refractive microlenses

    NASA Astrophysics Data System (ADS)

    Desmet, L.; Van Overmeire, S.; Van Erps, J.; Ottevaere, H.; Debaes, C.; Thienpont, H.

    2007-01-01

    We present a complete and precise quantitative characterization of the different process steps used in an elastomeric inverse moulding and vacuum casting technique. We use the latter replication technique to fabricate concave replicas from an array of convex thermal reflow microlenses. During the inverse elastomeric moulding we obtain a secondary silicone mould of the original silicone mould in which the master component is embedded. Using vacuum casting, we are then able to cast out of the second mould several optical transparent poly-urethane arrays of concave refractive microlenses. We select ten particular representative microlenses on the original, the silicone moulds and replica sample and quantitatively characterize and statistically compare them during the various fabrication steps. For this purpose, we use several state-of-the-art and ultra-precise characterization tools such as a stereo microscope, a stylus surface profilometer, a non-contact optical profilometer, a Mach-Zehnder interferometer, a Twyman-Green interferometer and an atomic force microscope to compare various microlens parameters such as the lens height, the diameter, the paraxial focal length, the radius of curvature, the Strehl ratio, the peak-to-valley and the root-mean-square wave aberrations and the surface roughness. When appropriate, the microlens parameter under test is measured with several different measuring tools to check for consistency in the measurement data. Although none of the lens samples shows diffraction-limited performance, we prove that the obtained replicated arrays of concave microlenses exhibit sufficiently low surface roughness and sufficiently high lens quality for various imaging applications.

  13. Double-sided anodic titania nanotube arrays: a lopsided growth process.

    PubMed

    Sun, Lidong; Zhang, Sam; Sun, Xiao Wei; Wang, Xiaoyan; Cai, Yanli

    2010-12-07

    In the past decade, the pore diameter of anodic titania nanotubes was reported to be influenced by a number of factors in organic electrolyte, for example, applied potential, working distance, water content, and temperature. All of these are closely related to the potential drop in the organic electrolyte. In this work, the essential role of the electric field originating from the potential drop was directly revealed for the first time using a simple two-electrode anodizing method. Anodic titania nanotube arrays were grown simultaneously at both sides of a titanium foil, with the tube length being longer at the front side than at the back side. This lopsided growth was attributed to the higher ionic flux induced by the electric field at the front side. Accordingly, the nanotube length was further tailored to be comparable at both sides by modulating the electric field. These results are promising for parallel-configuration dye-sensitized solar cells, water splitting, and gas sensors, as a result of the high surface area produced by the double-sided architecture.

  14. Investigation of proposed process sequence for the array automated assembly task, phases 1 and 2

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Eskenas, K.

    1980-01-01

    Progress was made on the process sequence for module fabrication. A shift from bonding with a conformal coating to laminating with ethylene vinyl acetate and a glass superstrate is recommended for further module fabrication. The processes that were retained for the selected process sequence (spin-on diffusion; print and fire aluminum p+ back; clean; print and fire silver front contact; apply tin pad to aluminum back) were evaluated for their cost contribution.

  15. Based on Multi-sensor Information Fusion Algorithm of TPMS Research

    NASA Astrophysics Data System (ADS)

    Yulan, Zhou; Yanhong, Zang; Yahong, Lin

    This paper presents algorithms for a TPMS (Tire Pressure Monitoring System) based on multi-sensor information fusion. A unified mathematical model of information fusion is constructed, and three algorithms are applied: an algorithm based on Bayesian estimation, an algorithm based on relative distance (an improved form of the evidence-theory approach), and an algorithm based on multi-sensor weighted fusion. The calculated results show that the multisensor fusion algorithm based on D-S evidence theory performs better than the weighted information fusion method or the Bayesian method.
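
    The weighted-fusion idea can be illustrated with a minimal inverse-variance fusion of redundant pressure readings; this is a generic sketch under the usual independent-Gaussian assumption, not the specific weighting or evidence-theory scheme evaluated in the paper, and the example readings are made up.

      import numpy as np

      def weighted_fusion(readings, variances):
          """Fuse redundant pressure readings with inverse-variance weights.

          Sensors with lower noise variance receive higher weight; the fused
          variance is the reciprocal of the summed weights.
          """
          readings = np.asarray(readings, dtype=float)
          w = 1.0 / np.asarray(variances, dtype=float)
          fused = np.sum(w * readings) / np.sum(w)
          fused_var = 1.0 / np.sum(w)
          return fused, fused_var

      # Example: three tire-pressure readings (kPa) with different noise levels
      print(weighted_fusion([220.4, 219.8, 221.0], [0.5, 0.2, 1.0]))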

  16. What can hafnium isotope ratios arrays tell us about orogenic processes? An insight into geodynamic processes operating in the Alpine/Mediterranean region

    NASA Astrophysics Data System (ADS)

    Henderson, B.; Murphy, J.; Collins, W. J.; Hand, M. P.

    2013-12-01

    Over the last decade, technological advances in laser-ablation sampling techniques have resulted in an increase in the number of combined U-Pb-Hf zircon isotope studies used to investigate crustal evolution on a local, regional and global scale. Hafnium isotope arrays over large time scales (>500 myr) have been interpreted to track evolving plate tectonic configurations, and the geological outputs associated with changing plate boundaries. We use the Alpine-Mediterranean region as an example of how hafnium isotope arrays record the geodynamic processes associated with the complex geological evolution of a region. The geology of Alpine-Mediterranean region preserves a complex, semi-continuous tectonic history that extends from the Neoproterozoic to the present day. Major components of the Variscan and Alpine orogens are microcontinental ribbons derived from the northern Gondwanan margin, which were transferred to the Eurasian plate during the opening and closing of the Rheic and Paleo-Tethys Oceans. Convergence of the Eurasian and African plates commenced in the Mid-Late Cretaceous, following the destruction of the Alpine-Tethys Ocean during the terminal breakup of Pangea. In general, convergence occurred slowly and is characterised by northward accretion of Gondwanan fragments, interspersed with subduction of African lithosphere and intermittent roll-back events. A consequence of this geodynamic scenario was periods of granite-dominated magmatism in an arc-backarc setting. New Hf isotope data from the peri-Gondwanan terranes (Iberia, Meguma and Avalonia) and a compilation of existing Phanerozoic data from the Alpine-Mediterranean region, indicate ~500 myr (Cambrian-Recent) of reworking of peri-Gondwanan crust. The eHf array follows a typical crustal evolution pattern (Lu/Hf=0.015) and is considered to reflect reworking of juvenile peri-Gondwanan (Neoproterozoic) crust variably mixed with an older (~1.8-2.0 Ga) source component, probably Eburnian crust from the West

  17. Research and development of low cost processes for integrated solar arrays

    NASA Technical Reports Server (NTRS)

    Wolf, M.; Crossman, L. D.

    1975-01-01

    Si reduction, purification and sheet generation work has been concentrated on gaining information about a reduction process combined with purification (higher purity arc furnace with gas blowing and gradient freezing), transport process with purification and polycrystal sheet growth potential (SiF2), plastic deformation for sheet generation, and float zone sheet recrystallization.

  18. An Approach to Optimize the Fusion Coefficients for Land Cover Information Enhancement with Multisensor Data

    NASA Astrophysics Data System (ADS)

    Garg, Akanksha; Brodu, Nicolas; Yahia, Hussein; Singh, Dharmendra

    2016-04-01

    This paper explores a novel data fusion method that applies a machine learning approach to the optimal weighted fusion of multisensor data, with the aim of extracting the maximum information about any land cover. A considerable amount of research has been carried out on multisensor data fusion, but obtaining an optimal fusion for enhancement of land cover information with arbitrarily chosen weights remains ambiguous. A land cover monitoring system is therefore needed that can provide the maximum information about the land cover, which is generally not possible with single-sensor data, and techniques are required by which multisensor data can be utilized optimally. Machine learning is one of the best ways to optimize this type of information. In this paper, the weights required to fuse each sensor's data are critically analyzed, and it is observed that the fusion is quite sensitive to these weights. Different combinations of weights have therefore been tested exhaustively to develop a relationship between the weights and the classification accuracy of the fused data; this relationship can then be optimized through machine learning techniques such as SVM (Support Vector Machine). In the present study, the experiment has been carried out for PALSAR (Phased Array L-Band Synthetic Aperture RADAR) and MODIS (Moderate Resolution Imaging Spectroradiometer) data. PALSAR provides fully polarimetric data with HH, HV and VV polarizations at good spatial resolution (25 m), and NDVI (Normalized Difference Vegetation Index), computed from the Red and NIR bands of freely available MODIS data at 250 m resolution, is a good indicator of vegetation. First, the resolution of the NDVI has been enhanced from 250 m to 25 m (10 times) using a modified DWT (Discrete Wavelet Transform) to bring it to the same scale as PALSAR. The differently polarized PALSAR data (HH, HV, VV) have then been fused with the resolution-enhanced NDVI
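
    A minimal sketch of the weight-versus-accuracy search described above is shown below, assuming co-registered, normalized PALSAR and NDVI layers and labelled training pixels; the weight grid, SVM kernel and cross-validation settings are illustrative assumptions rather than the authors' configuration.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def best_fusion_weight(palsar_band, ndvi, labels, weights=np.linspace(0, 1, 21)):
          """Scan candidate weights w for fused = w*PALSAR + (1-w)*NDVI and keep the
          weight whose fused layer best separates the land cover classes.

          palsar_band, ndvi : (n_pixels,) co-registered, normalized feature layers
          labels            : (n_pixels,) land cover class labels for training pixels
          """
          scores = []
          for w in weights:
              fused = (w * palsar_band + (1.0 - w) * ndvi).reshape(-1, 1)
              acc = cross_val_score(SVC(kernel="rbf"), fused, labels, cv=5).mean()
              scores.append(acc)
          best = int(np.argmax(scores))
          return weights[best], scores[best]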

  19. Monitoring changes in behaviour from multi-sensor systems.

    PubMed

    Amor, James D; James, Christopher J

    2014-10-01

    Behavioural patterns are important indicators of health status in a number of conditions and changes in behaviour can often indicate a change in health status. Currently, limited behaviour monitoring is carried out using paper-based assessment techniques. As technology becomes more prevalent and low-cost, there is an increasing movement towards automated behaviour-monitoring systems. These systems typically make use of a multi-sensor environment to gather data. Large data volumes are produced in this way, which poses a significant problem in terms of extracting useful indicators. Presented is a novel method for detecting behavioural patterns and calculating a metric for quantifying behavioural change in multi-sensor environments. The data analysis method is shown and an experimental validation of the method is presented which shows that it is possible to detect the difference between weekdays and weekend days. Two participants are analysed, with different sensor configurations and test environments and in both cases, the results show that the behavioural change metric for weekdays and weekend days is significantly different at 95% confidence level, using the methods presented.

  20. Multi-Sensor Characterization of the Boreal Forest: Initial Findings

    NASA Technical Reports Server (NTRS)

    Reith, Ernest; Roberts, Dar A.; Prentiss, Dylan

    2001-01-01

    Results are presented in an initial a priori knowledge approach toward using complementary multi-sensor multi-temporal imagery in characterizing vegetated landscapes over a site in the Boreal Ecosystem-Atmosphere Study (BOREAS). Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data were segmented using multiple endmember spectral mixture analysis and binary decision tree approaches. Individual date/sensor land cover maps had overall accuracies between 55.0% and 69.8%. The best eight land cover layers from all dates and sensors correctly characterized 79.3% of the cover types. An overlay approach was used to create a final land cover map. An overall accuracy of 71.3% was achieved in this multi-sensor approach, a 1.5% improvement over our most accurate single-scene technique, but 8% less than the original input. Black spruce was evaluated to be particularly undermapped in the final map, possibly because it was also contained within jack pine and muskeg land coverages.
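
    The overlay step can be illustrated with a generic per-pixel priority overlay in which ranked land cover layers are combined and the first layer that assigns a class to a pixel wins; this is a simplified stand-in for the authors' overlay rules, with the "no decision" code being an assumption.

      import numpy as np

      def priority_overlay(layers, fill_value=0):
          """Combine ranked land cover layers pixel by pixel.

          layers : list of (rows, cols) arrays of class codes, best layer first;
                   fill_value marks "no decision" in a layer.
          The first layer that assigns a class to a pixel wins.
          """
          out = np.full(layers[0].shape, fill_value, dtype=layers[0].dtype)
          for layer in layers:
              undecided = out == fill_value
              out[undecided] = layer[undecided]
          return out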

  1. Fabrication and evaluation of a microspring contact array using a reel-to-reel continuous fiber process

    NASA Astrophysics Data System (ADS)

    Khumpuang, S.; Ohtomo, A.; Miyake, K.; Itoh, T.

    2011-10-01

    In this work a novel patterning technique for fabricating a conductive microspring array as an electrical contact structure directly on a fiber substrate is introduced. Using low-temperature compression from the nanoimprinting technique to generate a gradient depth in the desired pattern in the PEDOT:PSS film, the hair-like structures are released as bimorph microspring cantilevers. Each microspring takes the form of a stress-engineered cantilever, arranged in rows. The microspring contact array is employed to compose the electrical circuit through a large area of woven textile, and functions as the electrical contact between weft ribbon and warp ribbon. The spring itself has a contact resistance of 480 Ω to the plain PEDOT:PSS-coated ribbon, which shows promising electrical transfer ability within the limitations of materials suited to reel-to-reel continuous processes. The microspring contact structures enhanced the durability, flexibility and stability of the electrical contact in the woven textile compared with ribbons without the microspring. The contact experiment was repeated over 500 times, with the contact resistance changing by only 20 Ω. Furthermore, to realize the spring structure, CYTOP is used as the releasing layer due to its low adhesive force to the fiber substrate. The first result of patterning CYTOP using nanoimprint lithography is also included.

  2. A Module Experimental Process System Development Unit (MEPSDU). [development of low cost solar arrays

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The technical readiness of a cost effective process sequence that has the potential for the production of flat plate photovoltaic modules which met the price goal in 1986 of $.70 or less per Watt peak was demonstrated. The proposed process sequence was reviewed and laboratory verification experiments were conducted. The preliminary process includes the following features: semicrystalline silicon (10 cm by 10 cm) as the silicon input material; spray on dopant diffusion source; Al paste BSF formation; spray on AR coating; electroless Ni plate solder dip metallization; laser scribe edges; K & S tabbing and stringing machine; and laminated EVA modules.

  3. A Module Experimental Process System Development Unit (MEPSDU). [flat plate solar arrays

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The development of a cost effective process sequence that has the potential for the production of flat plate photovoltaic modules which meet the price goal in 1986 of 70 cents or less per Watt peak is described. The major accomplishments include (1) an improved AR coating technique; (2) the use of sand blast back clean-up to reduce clean up costs and to allow much of the Al paste to serve as a back conductor; and (3) the development of wave soldering for use with solar cells. Cells were processed to evaluate different process steps, a cell and minimodule test plan was prepared and data were collected for preliminary Samics cost analysis.

  4. Coal liquefaction process streams characterization and evaluation: High performance liquid chromatography (HPLC) of coal liquefaction process streams using normal-phase separation with uv diode array detection

    SciTech Connect

    Clifford, D.J.; McKinney, D.E.; Hou, Lei; Hatcher, P.G.

    1994-01-01

    This study demonstrated the considerable potential of using two-dimensional, high performance liquid chromatography (HPLC) with normal-phase separation and ultraviolet (UV) diode array detection for the examination of filtered process liquids and the 850 °F⁻ distillate materials derived from direct coal liquefaction process streams. A commercially available HPLC column (Hypersil Green PAH-2) provided excellent separation of the complex mixture of polynuclear aromatic hydrocarbons (PAHs) found in coal-derived process streams. Some characteristics of the samples delineated by separation could be attributed to processing parameters. Mass recovery of the process-derived samples was low (5-50 wt %). Penn State believes, however, that improved recovery can be achieved. High resolution mass spectrometry and gas chromatography/mass spectrometry (GC/MS) also were used in this study to characterize the samples and the HPLC fractions. The GC/MS technique was used to preliminarily examine the GC-elutable portion of the samples. The GC/MS data were compared with the data from the HPLC technique. The use of an ultraviolet detector in the HPLC work precludes detecting the aliphatic portion of the sample. The GC/MS allowed for identification and quantification of that portion of the samples. Further development of the 2-D HPLC analytical method as a process development tool appears justified based on the results of this project.

  5. Multi-sensor Evolution Analysis system: how WCS/WCPS technology supports real time exploitation of geospatial data

    NASA Astrophysics Data System (ADS)

    Natali, Stefano; Mantovani, Simone; Folegani, Marco; Barboni, Damiano

    2014-05-01

    EarthServer is a European Framework Program project that aims at developing and demonstrating the usability of open standards (OGC and W3C) in the management of multi-source, any-size, multi-dimensional spatio-temporal data - in short: "Big Earth Data Analytics". In the third and last year of the EarthServer project, the Climate Data Service lighthouse application has been released in its full, consolidated mode. The Multi-sensor Evolution Analysis (MEA) system, a geospatial data analysis tool built on OGC standards, has been adopted to handle data manipulation and visualization; Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) are used to access and process ESA, NASA and third-party products. Tens of terabytes of full-mission, multi-sensor, multi-resolution, multi-projection and cross-domain coverages are already available to user interest groups for Land, Ocean and Atmosphere products. The MEA system is available at https://mea.eo.esa.int. During the live demo, typical test cases implemented by user interest groups within the EarthServer and ESA Image Information Mining projects will be shown, with special emphasis on the comparison of MACC Reanalysis and ESA CCI products.
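
    As an illustration of the kind of standards-based access such a system builds on, the sketch below issues a WCS 2.0.1 GetCoverage request with KVP subsetting. The endpoint URL and coverage identifier are placeholders (real coverage names must be discovered via GetCapabilities), and WCPS queries would be sent to the same service in an analogous way; this is not the MEA system's own API.

      import requests

      # Hypothetical endpoint and coverage name; actual services publish their
      # coverage identifiers via a GetCapabilities response.
      ENDPOINT = "https://example.org/rasdaman/ows"

      params = {
          "service": "WCS",
          "version": "2.0.1",
          "request": "GetCoverage",
          "coverageId": "EXAMPLE_SST",                 # placeholder coverage identifier
          "subset": ["Lat(35,45)", "Long(-10,5)"],     # spatial trim (WCS 2.0 KVP subsetting)
          "format": "image/tiff",
      }
      r = requests.get(ENDPOINT, params=params, timeout=60)
      r.raise_for_status()
      open("subset.tif", "wb").write(r.content)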

  6. Flat-plate solar array project process development area process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Three sets of samples were laser processed and then cell processed. The laser processing was carried out on P-type and N-type web at laser power levels from 0.5 joule/sq cm to 2.5 joule/sq cm. Six different liquid dopants were tested (3 phosphorus dopants, 2 boron dopants, 1 aluminum dopant). The laser processed web strips were fabricated into solar cells immediately after laser processing and after various annealing cycles. Spreading resistance measurements made on a number of these samples indicate that the N(+)P (phosphorus doped) junction is approx. 0.2 micrometers deep and suitable for solar cells. However, the P(+)N (or P(+)P) junction is very shallow ( 0.1 micrometers) with a low surface concentration and resulting high resistance. Due to this effect, the fabricated cells are of low efficiency. The maximum efficiency attained was 9.6% on P-type web after a 700 C anneal. The main reason for the low efficiency was a high series resistance in the cell due to a high resistance back contact.

  7. Multi-Sensor ELINT Development (MSED)

    DTIC Science & Technology

    2012-06-01

  8. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    NASA Technical Reports Server (NTRS)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ~1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≲1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs
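
    The parameterization into instrument-noise, shot-noise and power-law components can be illustrated with a simple least-squares fit; the model below lumps the two flat components into a single white term and uses crude starting guesses, so it is a sketch of the idea rather than the authors' fitting procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      def ps_model(q, white, amp, index):
          """Simple fluctuation power spectrum model: white (shot + instrument) noise
          plus a power-law clustering component, P(q) = white + amp * q**(-index)."""
          return white + amp * q ** (-index)

      def fit_power_spectrum(q, p_q):
          """Fit the model to a measured spectrum P(q).
          q, p_q : angular frequency bins (positive) and measured power."""
          p0 = [np.median(p_q), p_q[0] * q[0], 1.0]    # crude starting guesses
          popt, pcov = curve_fit(ps_model, q, p_q, p0=p0, maxfev=10000)
          return popt, np.sqrt(np.diag(pcov))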

  9. Automating the design of image processing pipelines for novel color filter arrays: local, linear, learned (L3) method

    NASA Astrophysics Data System (ADS)

    Tian, Qiyuan; Lansel, Steven; Farrell, Joyce E.; Wandell, Brian A.

    2014-03-01

    The high density of pixels in modern color sensors provides an opportunity to experiment with new color filter array (CFA) designs. A significant bottleneck in evaluating new designs is the need to create demosaicking, denoising and color transform algorithms tuned for the CFA. To address this issue, we developed a method (local, linear, learned, or L3) for automatically creating an image processing pipeline. In this paper we describe the L3 algorithm and illustrate how we created a pipeline for a CFA organized as a 2×2 RGB/W block containing a clear (W) pixel. Under low light conditions, the L3 pipeline developed for the RGB/W CFA produces images that are superior to those from a matched Bayer RGB sensor. We also use L3 to learn pipelines for other RGB/W CFAs with different spatial layouts. The L3 algorithm shortens the development time for producing a high quality image pipeline for novel CFA designs.
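
    The core "local, linear, learned" idea can be sketched as solving one least-squares problem per pixel class; the simplified version below classifies patches only by CFA position, whereas the published method also partitions patches by response level and other local statistics.

      import numpy as np

      def learn_local_linear_filters(patches, targets, cfa_positions):
          """Learn one linear kernel per CFA position by least squares.

          patches       : (n, p) flattened local sensor neighborhoods
          targets       : (n, 3) desired RGB outputs at the patch centers
          cfa_positions : (n,) integer code of the center pixel's CFA position
          Returns {position: (p, 3) kernel}.
          """
          kernels = {}
          for pos in np.unique(cfa_positions):
              idx = cfa_positions == pos
              W, *_ = np.linalg.lstsq(patches[idx], targets[idx], rcond=None)
              kernels[pos] = W
          return kernels

      def apply_filters(patches, cfa_positions, kernels):
          """Render RGB values by applying the learned kernel for each patch class."""
          out = np.empty((patches.shape[0], 3))
          for pos, W in kernels.items():
              idx = cfa_positions == pos
              out[idx] = patches[idx] @ W
          return out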

  10. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    NASA Astrophysics Data System (ADS)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  11. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    A method for improving the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. It is composed of a visual sensor, an angle sensor and a series robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multi-sensor system, two efficient data fusion approaches, the Kalman filter (KF) and multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically by 38%∼78% with the multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, the increase of motion constraints, or the complicated procedures of the traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of MCMS, the visual sensor repeatability is experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
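
    For reference, the standard Kalman filter measurement update used in this kind of pose fusion is sketched below; the matrices are generic placeholders, and the MOIFA variant and the kinematic model of the actual system are not reproduced.

      import numpy as np

      def kf_update(x_pred, P_pred, z, R, H=None):
          """Standard Kalman filter measurement update.

          x_pred, P_pred : predicted state and covariance
          z, R           : measurement and its covariance (e.g., visual-sensor position)
          H              : measurement matrix (identity if omitted)
          """
          H = np.eye(len(x_pred)) if H is None else H
          S = H @ P_pred @ H.T + R                       # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
          x = x_pred + K @ (z - H @ x_pred)
          P = (np.eye(len(x_pred)) - K @ H) @ P_pred
          return x, P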

  12. Coherent-subspace array processing based on wavelet covariance: an application to broad-band, seismo-volcanic signals

    NASA Astrophysics Data System (ADS)

    Saccorotti, G.; Nisii, V.; Del Pezzo, E.

    2008-07-01

    Long-Period (LP) and Very-Long-Period (VLP) signals are the most characteristic seismic signature of volcano dynamics, and provide important information about the physical processes occurring in magmatic and hydrothermal systems. These events are usually characterized by sharp spectral peaks, which may span several frequency decades, by emergent onsets, and by a lack of clear S-wave arrivals. These two latter features make both signal detection and location a challenging task. In this paper, we propose a processing procedure based on Continuous Wavelet Transform of multichannel, broad-band data to simultaneously solve the signal detection and location problems. Our method consists of two steps. First, we apply a frequency-dependent threshold to the estimates of the array-averaged WCO in order to locate the time-frequency regions spanned by coherent arrivals. For these data, we then use the time-series of the complex wavelet coefficients for deriving the elements of the spatial Cross-Spectral Matrix. From the eigenstructure of this matrix, we eventually estimate the kinematic signals' parameters using the MUltiple SIgnal Classification (MUSIC) algorithm. The whole procedure greatly facilitates the detection and location of weak, broad-band signals, in turn avoiding the time-frequency resolution trade-off and frequency leakage effects which affect conventional covariance estimates based upon Windowed Fourier Transform. The method is applied to explosion signals recorded at Stromboli volcano by either a short-period, small aperture antenna, or a large-aperture, broad-band network. The LP (0.2 < T < 2s) components of the explosive signals are analysed using data from the small-aperture array and under the plane-wave assumption. In this manner, we obtain a precise time- and frequency-localization of the directional properties for waves impinging at the array. We then extend the wavefield decomposition method using a spherical wave front model, and analyse the VLP
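
    The final MUSIC step applied to a cross-spectral matrix can be sketched as follows for the simpler plane-wave, uniform-linear-array case (the paper additionally treats spherical wave fronts and wavelet-derived covariances); the array geometry and assumed number of sources here are illustrative.

      import numpy as np

      def music_spectrum(R, n_sources, d_over_lambda, angles_deg):
          """MUSIC pseudospectrum for a uniform linear array (plane-wave model).

          R              : (m, m) cross-spectral (covariance) matrix at one frequency
          n_sources      : assumed number of coherent arrivals
          d_over_lambda  : sensor spacing divided by wavelength
          angles_deg     : candidate arrival angles (degrees from broadside)
          """
          m = R.shape[0]
          eigval, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
          En = eigvec[:, : m - n_sources]               # noise subspace
          k = np.arange(m)
          spectrum = []
          for theta in np.deg2rad(angles_deg):
              a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
              denom = np.linalg.norm(En.conj().T @ a) ** 2
              spectrum.append(1.0 / denom)
          return np.array(spectrum)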

  13. Data Processing Requirement for a Deep Towed Multi-Channel Array.

    DTIC Science & Technology

    1979-09-26

    Over 4 years, a simple linear amortization plus a 12%-of-initial-capital annual rate for service and maintenance indicates a cost of less than $9/km for this system: capital cost/year $16,500; service and maintenance $7,920; total annual system cost $24,420, assuming 2 x 1480 km are processed (about $8.25/km).

  14. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined and equipment design and specification work was completed. SAMICS cost analysis work accelerated, format A's were prepared and computer simulations completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.

  15. Process Study of Oceanic Responses to Typhoons Using Arrays of EM-APEX Floats and Moorings

    DTIC Science & Technology

    2014-09-30

    processes on air–sea fluxes during tropical cyclone passage will aid understanding of storm dynamics and structure. The ocean’s recovery after tropical... cyclones derive energy from the ocean via air–sea fluxes. Oceanic heat content in the mixed layer and the air–sea enthalpy flux play important roles in...tropical cyclone forcing are surface waves, wind-driven currents, shear and turbulence, and inertial currents. Quantifying the effect of these oceanic

  16. High Speed Publication Subscription Brokering Through Highly Parallel Processing on Field Programmable Gate Array (FPGA)

    DTIC Science & Technology

    2010-01-01

  17. ALLFlight: multisensor data fusion for helicopter operations

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Lueken, T.

    2010-04-01

    The objective of the project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites) is to demonstrate and evaluate the characteristics of different sensors for helicopter operations within degraded visual environments, such as brownout or whiteout. The sensor suite, which is mounted onto DLR's research helicopter EC135, consists of standard color or black-and-white TV cameras, an uncooled thermal infrared camera (EVS-1000, Max-Viz, USA), an optical radar scanner (HELLAS-W, EADS, Germany), and a millimeter wave radar system (AI-130, ICx Radar Systems, Canada). Data processing is designed and realized by a sophisticated, high performance sensor co-computer (SCC) cluster architecture, which is installed into the helicopter's experimental electronic cargo bay. This paper describes the applied methods and the software architecture in terms of real time data acquisition, recording, time stamping and sensor data fusion. First concepts for a pilot HMI are presented as well.

  18. Regional Drought Monitoring Based on Multi-Sensor Remote Sensing

    NASA Astrophysics Data System (ADS)

    Rhee, Jinyoung; Im, Jungho; Park, Seonyoung

    2014-05-01

    Drought originates from a deficit of precipitation and impacts the environment, including agriculture and hydrological resources, as it persists. The assessment and monitoring of drought have traditionally been performed using a variety of drought indices based on meteorological data, and recently the use of remote sensing data is gaining much attention due to its vast spatial coverage and cost-effectiveness. Drought information has been successfully derived from remotely sensed data related to some biophysical and meteorological variables, and drought monitoring is advancing with the development of remote sensing-based indices such as the Vegetation Condition Index (VCI), Vegetation Health Index (VHI), and Normalized Difference Water Index (NDWI), to name a few. The Scaled Drought Condition Index (SDCI) has also been proposed for humid regions, demonstrating the performance of multi-sensor data for agricultural drought monitoring. In this study, remote sensing-based hydro-meteorological variables related to drought, including precipitation, temperature, evapotranspiration, and soil moisture, were examined and the SDCI was improved by providing multiple blends of the multi-sensor indices for different types of drought. Multiple indices were examined together since the coupling and feedback between variables are intertwined and it is not appropriate to investigate only limited variables to monitor each type of drought. The purpose of this study is to verify the significance of each variable to monitor each type of drought and to examine the combination of multi-sensor indices for more accurate and timely drought monitoring. The weights for the blends of multiple indicators were obtained from the importance of variables calculated by non-linear optimization using a Machine Learning technique called Random Forest. The case study was performed in the Republic of Korea, which has four distinct seasons over the course of the year and contains complex topography with a variety
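
    A minimal sketch of deriving blend weights from Random Forest variable importances is shown below; the indicator set, target variable and forest settings are placeholders, and the study's actual optimization details are not reproduced.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def blend_weights_from_rf(X, y):
          """Derive blend weights for drought indicators from Random Forest importances.

          X : (n_samples, n_indicators) scaled indicators (e.g., precipitation,
              temperature, evapotranspiration and soil moisture based indices)
          y : (n_samples,) reference drought measure used for training
          """
          rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
          w = rf.feature_importances_
          return w / w.sum()

      def blended_index(X, weights):
          """Weighted linear blend of the scaled indicators."""
          return X @ weights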

  19. Research on detection method of end gap of piston rings based on area array CCD and image processing

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Wang, Zhong; Liu, Qi; Li, Lin

    2012-01-01

    The piston ring is one of the most important parts in an internal combustion engine, and the width of its end gap is an important parameter that must be inspected piece by piece. In comparison to previous measurements of the end gap, a new, efficient detection method is presented based on computer vision and image processing theory. This paper describes the framework and measuring principle of the measurement system, with the image processing algorithm highlighted. Firstly, a partial end gap image of the piston ring is acquired by the area array CCD; secondly, the single-pixel-connected end gap edge contour is obtained by grayscale threshold segmentation, mathematical morphology, contour edge detection, contour tracing and other image processing tools; finally, the distance between the two end gap edge contour lines is calculated using the least-distance method of straight-line fitting. Repeated experiments have shown that the measurement accuracy can reach 0.01 mm. Moreover, the detection efficiency of an automatic piston ring inspection instrument based on this method can reach 10-12 pieces/min.
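
    A compact sketch of the described pipeline using OpenCV is given below; the thresholding, morphology and contour-selection choices are illustrative assumptions rather than the paper's parameters, and the conversion from pixels to millimetres (camera calibration) is omitted.

      import cv2
      import numpy as np

      def end_gap_width(gray):
          """Estimate the piston ring end gap width (in pixels) from a grayscale image.

          Steps follow the described pipeline: threshold, morphological clean-up,
          contour extraction, straight-line fit, point-to-line distance.
          Assumes both gap edges appear as the two largest contours.
          """
          _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          kernel = np.ones((3, 3), np.uint8)
          binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
          contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
          # Fit a straight line to the first edge contour
          vx, vy, x0, y0 = cv2.fitLine(contours[0], cv2.DIST_L2, 0, 0.01, 0.01).ravel()
          n = np.array([-vy, vx])                        # unit normal of the fitted line
          pts = contours[1].reshape(-1, 2).astype(float)
          dists = np.abs((pts - np.array([x0, y0])) @ n)
          return float(dists.mean())                     # average edge-to-edge distance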

  20. A Dry-Etch Process for Low Temperature Superconducting Transition Edge Sensors for Far Infrared Bolometer Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Christine A.; Chervenak, James A.; Hsieh, Wen-Ting; McClanahan, Richard A.; Miller, Timothy M.; Mitchell, Robert; Moseley, S. Harvey; Staguhn, Johannes; Stevenson, Thomas R.

    2003-01-01

    The next generation of ultra-low power bolometer arrays, with applications in far infrared imaging, spectroscopy and polarimetry, utilizes a superconducting bilayer as the sensing element to enable SQUID multiplexed readout. Superconducting transition edge sensors (TESs) are being produced with dual metal systems of superconducting/normal bilayers. The transition temperature (Tc) is tuned by altering the relative thickness of the superconductor with respect to the normal layer. We are currently investigating MoAu and MoCu bilayers. We have developed a dry-etching process for MoAu TESs with integrated molybdenum leads, and are working on adapting the process to MoCu. Dry etching has the advantage over wet etching in the MoAu system in that one can achieve a high degree of selectivity, greater than 10, using argon RIE, or argon ion milling, for patterning gold on molybdenum. Molybdenum leads are subsequently patterned using fluorine plasma. The dry-etch technique results in a smooth, featureless TES with sharp sidewalls, no undercutting of the Mo beneath the normal metal, and Mo leads with high critical current. The effects of individual processing parameters on the characteristics of the transition will be reported.

  1. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The use of an excimer laser and high temperature short time diffusion experiment were both more successful than the diffusion at standard temperature and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  2. Flat-plate solar array project process development area, process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1984-01-01

    The program is designed to investigate the fabrication of solar cells on N-type base material by a simultaneous diffusion of N-type and P-type dopants to form a P(+)NN(+) structure. The results of simultaneous diffusion experiments are being compared to cells fabricated using sequential diffusion of dopants into N-base material in the same resistivity range. The process used for the fabrication of the simultaneously diffused P(+)NN(+) cells follows the standard Westinghouse baseline sequence for P-base material except that the two diffusion processes (boron and phosphorus) are replaced by a single diffusion step. All experiments are carried out on N-type dendritic web grown in the Westinghouse pre-pilot facility. The resistivities vary from 0.5 Ω-cm to 5 Ω-cm. The dopant sources used for both the simultaneous and sequential diffusion experiments are commercial metallorganic solutions with phosphorus or boron components. After these liquids are applied to the web surface, they are baked to form a hard glass which acts as a diffusion source at elevated temperatures. In experiments performed thus far, cells produced in sequential diffusion tests have properties essentially equal to the baseline N(+)PP(+) cells. However, the simultaneous diffusions have produced cells with much lower IV characteristics mainly due to cross-doping of the sources at the diffusion temperature. This cross-doping is due to the high vapor pressure phosphorus (applied as a metallorganic to the back surface) diffusing through the SiO2 mask and then acting as a diffusant source for the front surface.

  3. Development of a Process for a High Capacity Arc Heater Production of Silicon for Solar Arrays

    NASA Technical Reports Server (NTRS)

    Reed, W. H.

    1979-01-01

    A program was established to develop a high temperature silicon production process using existing electric arc heater technology. Silicon tetrachloride and a reductant (sodium) are injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction is expected to occur and proceed essentially to completion, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection were developed. Included in this report are: test system preparation; testing; injection techniques; kinetics; reaction demonstration; conclusions; and the project status.

  4. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  5. Radar Array Processing of Experimental Data Via the Scan-MUSIC Algorithm

    DTIC Science & Technology

    2004-06-01

  6. Direct growth of comet-like superstructures of Au-ZnO submicron rod arrays by solvothermal soft chemistry process

    SciTech Connect

    Shen, Liming; Bao, Ningzhong; Yanagisawa, Kazumichi; Zheng, Yanqing; Domen, Kazunari; Gupta, Arunava; Grimes, Craig A.

    2007-01-15

    The synthesis, characterization and proposed growth process of a new kind of comet-like Au-ZnO superstructure are described here. This Au-ZnO superstructure was directly created by a simple and mild solvothermal reaction, dissolving the reactants zinc acetate dihydrate and hydrogen tetrachloroaurate tetrahydrate (HAuCl4·4H2O) in ethylenediamine and taking advantage of the lattice-matched growth between particular ZnO and Au planes and the natural growth habit of ZnO rods along the [001] direction in solution. For a typical comet-like Au-ZnO superstructure, the comet head consists of one hemispherical end of a central thick ZnO rod and an outer Au-ZnO thin layer, and the comet tail consists of radially standing ZnO submicron rod arrays growing on the Au-ZnO thin layer. These ZnO rods have diameters in the range of 0.2-0.5 μm, an average aspect ratio of about 10, and lengths of up to about 4 μm. The morphology, size and structure of the ZnO superstructures depend on the concentration of reactants and the reaction time. The HAuCl4·4H2O plays a key role in the solvothermal growth of the comet-like superstructure; only ZnO fibers are obtained in its absence. The UV-vis absorption spectrum shows two absorptions at 365-390 nm and 480-600 nm, attributed respectively to the characteristic absorption of the wide-bandgap ZnO semiconductor material and to the surface plasmon resonance of the Au particles. - Graphical abstract: One-step solvothermal synthesis of novel comet-like superstructures of radially standing ZnO submicron rod arrays.

  7. MIST Final Report: Multi-sensor Imaging Science and Technology

    SciTech Connect

    Lind, Michael A.; Medvick, Patricia A.; Foley, Michael G.; Foote, Harlan P.; Heasler, Patrick G.; Thompson, Sandra E.; Nuffer, Lisa L.; Mackey, Patrick S.; Barr, Jonathan L.; Renholds, Andrea S.

    2008-03-15

    The Multi-sensor Imaging Science and Technology (MIST) program was undertaken to advance exploitation tools for Long Wavelength Infrared (LWIR) hyper-spectral imaging (HSI) analysis as applied to the discovery and quantification of nuclear proliferation signatures. The program focused on mitigating LWIR image background clutter to ease the analyst burden and enable a) faster, more accurate analysis of large volumes of high-clutter data, b) greater detection sensitivity to nuclear proliferation signatures (primarily released gasses), and c) quantified confidence estimates of the signature materials detected. To this end the program investigated fundamental limits and logical modifications of the more traditional statistical discovery and analysis tools applied to hyperspectral imaging and other disciplines, developed and tested new software incorporating advanced mathematical tools and physics-based analysis, and demonstrated the strengths and weaknesses of the new codes on relevant hyperspectral data sets from various campaigns. This final report describes the content of the program and outlines the significant results.

  8. Distributed multisensor blackboard system for an autonomous robot

    NASA Astrophysics Data System (ADS)

    Kappey, Dietmar; Pokrandt, Peter; Schloen, Jan

    1994-10-01

    Sensor data enable a robotic system to react to events occurring in its environment. Much work has been done on the development of various sensors and algorithms to extract information from an environment. On the other hand, relatively little work has been done in the field of multisensor communication. This paper presents a shared memory based communication protocol that has been developed for the autonomous robot system KAMRO. This system consists of two PUMA 260 manipulators and an omnidirectionally driven mobile platform. The proposed approach is based on logical sensors, which can be used to dynamically build hierarchical sensor units. The protocol uses a distributed blackboard structure for the transmission of sensor data and commands. To support asynchronous coupling of robots and sensors, it not only transfers single sensor values, but also offers functions to estimate future values.

  9. Human engineering of multisensor and multisource tracking systems

    NASA Astrophysics Data System (ADS)

    Svenmarck, Peter

    2000-08-01

    A pressing concern in modern fighter aircraft cockpit design is how to present and reduce the large amounts of information obtained from several sensor observations of the same object. Currently, sensor observations are presented individually, as overlays or in different displays, requiring the pilot to control each sensor and integrate the observations. The increased number of sensors and communication networks covering extensive ranges has, however, led to an unacceptable situation that hampers pilots' situation awareness and decision-making. Therefore, some form of automatic information management is necessary to support the pilot. Although considerable technological research has been conducted on automatic sensor fusion and management in multisensor and multisource tracking systems, little is known about how to integrate system capabilities with pilots' decision-making.

  10. Development of a process for high capacity arc heater production of silicon for solar arrays

    NASA Technical Reports Server (NTRS)

    Meyer, T. N.

    1980-01-01

    A high temperature silicon production process using existing electric arc heater technology is discussed. Silicon tetrachloride and a reductant, liquid sodium, were injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction occurred, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection of the molten silicon were developed. The desired degree of separation was not achieved. The electrical, control and instrumentation, cooling water, gas, SiCl4, and sodium systems are discussed. The plasma reactor, silicon collection, effluent disposal, the gas burnoff stack, and decontamination and safety are also discussed. Procedure manuals, shakedown testing, data acquisition and analysis, product characterization, disassembly and decontamination, and component evaluation are reviewed.

  11. Air Enquirer's multi-sensor boxes as a tool for High School Education and Atmospheric Research

    NASA Astrophysics Data System (ADS)

    Morguí, Josep-Anton; Font, Anna; Cañas, Lidia; Vázquez-García, Eusebi; Gini, Andrea; Corominas, Ariadna; Àgueda, Alba; Lobo, Agustin; Ferraz, Carlos; Nofuentes, Manel; Ulldemolins, Delmir; Roca, Alex; Kamnang, Armand; Grossi, Claudia; Curcoll, Roger; Batet, Oscar; Borràs, Silvia; Occhipinti, Paola; Rodó, Xavier

    2016-04-01

    An educational tool was designed with the aim of making the research on greenhouse gases (GHGs) carried out in the ClimaDat Spanish network of atmospheric observation stations (www.climadat.es) more comprehensible. This tool, called the Air Enquirer, consists of a multi-sensor box. More than two hundred boxes are envisaged to be built and distributed to Spanish high schools through the education department (www.educaixa.com) of the "Obra Social 'La Caixa'", which funds this research. The starting point for the development of the Air Enquirers was the experience at IC3 (www.ic3.cat) in the CarboSchools+ FP7 project (www.carboschools.cat, www.carboschools.eu). The Air Enquirer's multi-sensor box is based on the Arduino architecture and contains sensors for CO2, temperature, relative humidity, pressure, and both infrared and visible luminance. The Air Enquirer is designed to take continuous measurements. Each Air Enquirer's ensemble of measurements is used to convert values to standard units (water content in ppmv, and CO2 in ppmv_dry). These values are referenced to a calibration made with cavity ring-down spectrometry (Picarro®) under different temperatures, pressures, humidities and CO2 concentrations. Multiple sets of Air Enquirers are intercalibrated for use in parallel during the experiments. The experiments proposed to the students will be either outdoor (observational) or indoor (experimental, in the lab), focusing on understanding the biogeochemistry of GHGs in ecosystems (mainly CO2), the exchange (flux) of gases, organic matter production, respiration and decomposition processes, the influence of anthropogenic activities on gas (and particle) exchanges, and their interaction with the structure and composition of the atmosphere (temperature, water content, cooling and warming processes, radiative forcing, vertical gradients and horizontal patterns). In order to ensure that the Air Enquirers deliver high-profile research performance, the experimental designs

  12. Investigation and process optimization of SONOS cell's drain disturb in 2-transistor structure flash arrays

    NASA Astrophysics Data System (ADS)

    Xu, Zhaozhao; Qian, Wensheng; Chen, Hualun; Xiong, Wei; Hu, Jun; Liu, Donghua; Duan, Wenting; Kong, Weiran; Na, Wei; Zou, Shichang

    2017-03-01

    The mechanism and distribution of drain disturb (DD) in silicon-oxide-nitride-oxide-silicon (SONOS) flash cells are investigated; DD is the only disturb concern addressed in this paper. First, the distribution of trapped charge in the nitride layer is found to be non-localized (trapped throughout the nitride layer along the channel) after programming; likewise, the erase is also non-localized. The main disturb mechanism is then confirmed to be Fowler-Nordheim tunneling (FNT), with a negligible contribution from hot-hole injection (HHI). The distribution of DD is similarly confirmed to be non-localized, which indicates that DD occurs across the entire tunneling oxide. Next, four process optimization approaches are proposed to minimize DD, and the VTH shift is measured. The results reveal that optimized lightly doped drain (LDD), halo, and channel implants are required for the fabrication of a robust SONOS cell. Finally, data retention and endurance of the optimized SONOS cell are demonstrated.

  13. A continuous process to align electrospun nanofibers into parallel and crossed arrays

    NASA Astrophysics Data System (ADS)

    Laudenslager, Michael J.; Sigmund, Wolfgang M.

    2013-04-01

    Electrical, optical, and mechanical properties of nanofibers are strongly affected by their orientation. Electrospinning is a nanofiber processing technique that typically produces nonwoven meshes of randomly oriented fibers. While several alignment techniques exist, they produce either a very thin layer of aligned fibers or larger quantities of fibers with less control over their alignment and orientation. The technique presented herein fills the gap between these two methods, allowing thick meshes of highly oriented nanofibers to be produced. In addition, the technique is not limited to collecting fibers along a single axis: modifications to the basic setup allow collection of crossed fibers without stopping and repositioning the apparatus. The technique works for a range of fiber sizes; in this study, fiber diameters ranged from 100 nm to 1 micron. A few fibers at a time are rapidly deposited in alternating directions, creating an almost woven structure. These aligned nanofibers have the potential to improve the performance of energy storage and thermoelectric devices and hold great promise for directed cell growth applications.

  14. Process Research On Polycrystalline Silicon Material (PROPSM). [flat plate solar array project

    NASA Technical Reports Server (NTRS)

    Culik, J. S.

    1983-01-01

    The performance-limiting mechanisms in large-grain (greater than 1 to 2 mm in diameter) polycrystalline silicon solar cells were investigated by fabricating a matrix of 4 sq cm solar cells of various thicknesses from 10 cm x 10 cm polycrystalline silicon wafers of several bulk resistivities. Analysis of the illuminated I-V characteristics of these cells suggests that bulk recombination is the dominant factor limiting the short-circuit current. The average open-circuit voltage of the polycrystalline solar cells is 30 to 70 mV lower than that of co-processed single-crystal cells; the fill factor is comparable. Both the open-circuit voltage and the fill factor of the polycrystalline cells show substantial scatter that is not related to either thickness or resistivity, implying that these characteristics are sensitive to an additional mechanism that is probably spatial in nature. A damage-gettering heat treatment improved the minority-carrier diffusion length in low-lifetime polycrystalline silicon; however, extended high-temperature heat treatment degraded the lifetime.

  15. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode arrays (MEAs) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence-channel images using a combination of thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted-light-channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis produced information on the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute quantitatively useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant for the integrated study of recorded electrophysiological signals and of the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments to the image processing parameter estimation.
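
    The segmentation and electrode-localization steps named above lend themselves to a compact illustration. The sketch below is not the authors' pipeline; it is a minimal stand-in on synthetic images, assuming scikit-image, SciPy, and NumPy are available, that detects circular electrodes with a circular Hough transform, segments bright somata by thresholding plus watershed, and computes neuron-to-microelectrode distances.

```python
# Illustrative sketch only (not the authors' pipeline): detect circular
# microelectrodes with a circular Hough transform, segment bright somata by
# thresholding plus watershed, and compute neuron-to-electrode distances.
# Synthetic images stand in for the transmitted-light / fluorescence channels.
import numpy as np
from scipy import ndimage as ndi
from skimage.draw import disk
from skimage.feature import canny, peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.transform import hough_circle, hough_circle_peaks

# Synthetic "transmitted light" image containing two electrode discs
img = np.zeros((200, 200))
for center in [(60, 60), (140, 140)]:
    rr, cc = disk(center, 15)
    img[rr, cc] = 1.0

edges = canny(img, sigma=2.0)
radii = np.arange(10, 21)                       # candidate electrode radii (px)
hspaces = hough_circle(edges, radii)
_, cx, cy, _ = hough_circle_peaks(hspaces, radii, total_num_peaks=2)
electrodes = np.column_stack([cy, cx])          # (row, col) electrode centres

# Synthetic "fluorescence" image containing two neuron somata
fluo = np.zeros_like(img)
for center in [(50, 120), (150, 60)]:
    rr, cc = disk(center, 8)
    fluo[rr, cc] = 1.0
fluo = ndi.gaussian_filter(fluo, 2)

mask = fluo > threshold_otsu(fluo)              # global threshold
distance = ndi.distance_transform_edt(mask)
peaks = peak_local_max(distance, labels=ndi.label(mask)[0],
                       num_peaks_per_label=1)   # one marker per blob
markers = np.zeros(mask.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=mask)   # splits touching somata

# One of the quantitative measures listed above: neuron-to-electrode distances
somata = np.array(ndi.center_of_mass(mask, labels, range(1, labels.max() + 1)))
dists = np.linalg.norm(somata[:, None, :] - electrodes[None, :, :], axis=-1)
print("neuron-to-electrode distances (px):")
print(dists.round(1))
```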

  16. Optimization of processing parameters on the controlled growth of ZnO nanorod arrays for the performance improvement of solid-state dye-sensitized solar cells

    SciTech Connect

    Lee, Yi-Mu; Yang, Hsi-Wen

    2011-03-15

    High-transparency, high-quality ZnO nanorod arrays were grown on ITO substrates by a two-step chemical bath deposition (CBD) method. The effects of processing parameters, including reaction temperature (25-95 {sup o}C) and solution concentration (0.01-0.1 M), on crystal growth, alignment, and optical and electrical properties were systematically investigated. These process parameters were found to be critical for the growth, orientation, and aspect ratio of the nanorod arrays, yielding different structural and optical properties. Experimental results reveal that hexagonal ZnO nanorod arrays prepared at a reaction temperature of 95 {sup o}C and a solution concentration of 0.03 M possess the highest aspect ratio of {approx}21 and show well-aligned orientation and optimum optical properties. Moreover, heterojunction electrodes based on the ZnO nanorod arrays and solid-state dye-sensitized solar cells (SS-DSSCs) were fabricated with improved optoelectrical performance. -- Graphical abstract: The ZnO nanorod arrays demonstrate good alignment, a high aspect ratio (L/D{approx}21) and excellent optical transmittance by low-temperature chemical bath deposition (CBD). Research highlights: > Investigation of the CBD processing parameters for the growth of ZnO nanorod arrays. > Optimized CBD process parameters: 0.03 M solution concentration and a reaction temperature of 95 {sup o}C. > The prepared ZnO samples possess good alignment and a high aspect ratio (L/D{approx}21). > An n-ZnO/p-NiO heterojunction shows good rectifying behavior and low leakage current. > The SS-DSSC has J{sub SC} of 0.31 mA/cm{sup 2}, V{sub OC} of 590 mV, and an improved {eta} of 0.059%.

  17. Determination of Rayleigh wave ellipticity across the Earthscope Transportable Array using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli; Lin, Fan-Chi; Koper, Keith D.

    2017-01-01

    We present a single station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave-type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 s the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 s, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (˜2 per cent) and significantly higher (>20 per cent), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e. Love waves, body waves, and tilt noise) in the single station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves and tilt noise.
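
    As a rough illustration of the single-station FDPA measurement described above, the sketch below (not the authors' code) forms a 3-by-3 spectral covariance matrix for one frequency band of a synthetic three-component record, takes its SVD, applies simple dominance and roughly-90° phase checks to the principal singular vector, and reports an H/V ratio. The bandwidth, dominance threshold, and phase window are invented for the demo.

```python
# Minimal sketch of the FDPA idea (not the authors' code): for one frequency
# band of one window, form the 3x3 spectral covariance of the (Z, N, E)
# components, take its SVD and, if the principal singular vector dominates and
# shows ~90 deg phase between the vertical and the major horizontal component
# (Rayleigh-like particle motion), report H/V from that vector.
import numpy as np

def hv_from_window(z, n, e, fs, f_target, bw=0.01):
    """Return an H/V estimate for one window, or None if not Rayleigh-like."""
    comps = np.vstack([z, n, e])
    spec = np.fft.rfft(comps, axis=1)
    freqs = np.fft.rfftfreq(comps.shape[1], d=1.0 / fs)
    X = spec[:, (freqs > f_target - bw) & (freqs < f_target + bw)]
    C = X @ X.conj().T                              # 3x3 spectral covariance
    U, s, _ = np.linalg.svd(C)
    if s[0] < 5 * s[1]:                             # no single dominant wave type
        return None
    v = U[:, 0]                                     # principal polarization vector
    h = np.hypot(abs(v[1]), abs(v[2]))              # horizontal amplitude
    major = v[1] if abs(v[1]) > abs(v[2]) else v[2]
    phase = np.angle(v[0] * np.conj(major))         # Z vs. major horizontal phase
    if not (np.pi / 3 < abs(phase) < 2 * np.pi / 3):
        return None                                 # not ~90 deg: reject window
    return h / abs(v[0])

# Toy data: retrograde Rayleigh-like motion at 0.125 Hz (8 s period) plus noise
np.random.seed(0)
fs, f0 = 10.0, 0.125
t = np.arange(0, 3600, 1.0 / fs)
z = np.cos(2 * np.pi * f0 * t)
n = 0.7 * np.sin(2 * np.pi * f0 * t)                # 90 deg out of phase with Z
e = 0.1 * np.random.randn(t.size)
print("H/V estimate:", hv_from_window(z, n, e, fs, f0))
```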

  18. Determination of Rayleigh wave ellipticity across the Earthscope Transportable Array using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli; Lin, Fan-Chi; Koper, Keith D.

    2016-10-01

    We present a single station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave-type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multi-component, multi-station noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 sec the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 sec, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (˜2%) and significantly higher (>20%), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e. Love waves, body waves, and tilt noise) in the single station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves.

  19. Mapping Palm Swamp Wetland Ecosystems in the Peruvian Amazon: a Multi-Sensor Remote Sensing Approach

    NASA Astrophysics Data System (ADS)

    Podest, E.; McDonald, K. C.; Schroeder, R.; Pinto, N.; Zimmerman, R.; Horna, V.

    2012-12-01

    Wetland ecosystems are prevalent in the Amazon basin, especially in northern Peru. Of specific interest are palm swamp wetlands because they are characterized by constant surface inundation and moderate seasonal water level variation. This combination of constantly saturated soils and warm temperatures year-round can lead to considerable methane release to the atmosphere. Because of the widespread occurrence and expected sensitivity of these ecosystems to climate change, it is critical to develop methods to quantify their spatial extent and inundation state in order to assess their carbon dynamics. Spatio-temporal information on palm swamps is difficult to gather because of their remoteness and difficult accessibility. Spaceborne microwave remote sensing is an effective tool for characterizing these ecosystems since it is sensitive to surface water and vegetation structure and allows monitoring large inaccessible areas on a temporal basis regardless of atmospheric conditions or solar illumination. We developed a remote sensing methodology using multi-sensor remote sensing data from the Advanced Land Observing Satellite (ALOS) Phased Array L-Band Synthetic Aperture Radar (PALSAR), Shuttle Radar Topography Mission (SRTM) DEM, and Landsat to derive maps at 100 meter resolution of palm swamp extent and inundation based on ground data collections; and combined active and passive microwave data from AMSR-E and QuikSCAT to derive inundation extent at 25 kilometer resolution on a weekly basis. We then compared information content and accuracy of the coarse resolution products relative to the high-resolution datasets. The synergistic combination of high and low resolution datasets allowed for characterization of palm swamps and assessment of their flooding status. This work has been undertaken partly within the framework of the JAXA ALOS Kyoto & Carbon Initiative. PALSAR data have been provided by JAXA. Portions of this work were carried out at the Jet Propulsion Laboratory

  20. Scalable processes for fabricating non-volatile memory devices using self-assembled 2D arrays of gold nanoparticles as charge storage nodes.

    PubMed

    Muralidharan, Girish; Bhat, Navakanta; Santhanam, Venugopal

    2011-11-01

    We propose robust and scalable processes for the fabrication of floating gate devices using ordered arrays of 7 nm size gold nanoparticles as charge storage nodes. The proposed strategy can be readily adapted for fabricating next generation (sub-20 nm node) non-volatile memory devices.

  1. Multi-sensor image interpretation using laser radar and thermal images

    NASA Astrophysics Data System (ADS)

    Chu, Chen-Chau; Aggarwal, J. K.

    1991-03-01

    A knowledge-based system is presented that interprets registered laser radar and thermal images. The objective is to detect and recognize man-made objects at kilometer range in outdoor scenes. The multisensor fusion approach is applied to various sensing modalities (range, intensity, velocity, and thermal) to improve both image segmentation and interpretation. The ability to use multiple sensors greatly helps an intelligent platform to understand and interact with its environment. The knowledge-based interpretation system, AIMS, is constructed using KEE and Lisp. Low-level attributes of image segments (regions) are computed by the segmentation modules and then converted into the KEE format. The interpretation system applies forward chaining in a bottom-up fashion to derive object-level interpretations from databases generated by the low-level processing modules. Segments are grouped into objects, and objects are then classified into predefined categories. AIMS employs a two-tiered software structure. The efficiency of AIMS is enhanced by transferring nonsymbolic processing tasks to a concurrent service manager (program). Therefore, tasks with different characteristics are executed using different software tools and methodologies.

  2. Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar

    NASA Astrophysics Data System (ADS)

    Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan

    2016-09-01

    A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm is an extended Kalman filter (EKF) that provides dynamic estimation and localization in real time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on in-phase/quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends strongly on the introduced linearization errors, the initialization, and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed, and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
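
    The core estimation step described above can be sketched compactly. The code below is a hedged illustration, not the authors' implementation: an EKF with a constant-velocity process model whose measurement vector consists only of the radial (Doppler) velocities seen by a few CW sensors at known positions. The sensor layout, time step, and noise levels are invented for the demo.

```python
# Hedged sketch (not the authors' code): EKF with a constant-velocity model and
# Doppler-only (radial velocity) measurements from several CW radar modules.
import numpy as np

np.random.seed(0)
SENSORS = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # hypothetical layout (m)
dt, q, r = 0.05, 1e-2, 1e-2                                # assumed step and noise

F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
Q = q * np.eye(4)
R = r * np.eye(len(SENSORS))

def h(x):
    """Radial velocity of the target as seen from each sensor."""
    pos, vel = x[:2], x[2:]
    los = pos - SENSORS                       # lines of sight
    rng = np.linalg.norm(los, axis=1)
    return (los @ vel) / rng

def H_jac(x):
    """Jacobian of h with respect to the state [px, py, vx, vy]."""
    pos, vel = x[:2], x[2:]
    los = pos - SENSORS
    rng = np.linalg.norm(los, axis=1)
    vr = (los @ vel) / rng
    dpos = vel[None, :] / rng[:, None] - (vr / rng**2)[:, None] * los
    dvel = los / rng[:, None]
    return np.hstack([dpos, dvel])

def ekf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    Hk = H_jac(x)
    S = Hk @ P @ Hk.T + R
    K = P @ Hk.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ (z - h(x))                             # Doppler innovation
    P = (np.eye(4) - K @ Hk) @ P
    return x, P

# One illustrative update with a simulated target
truth = np.array([0.5, 0.8, -0.2, 0.1])
x, P = np.array([0.4, 0.9, 0.0, 0.0]), np.eye(4)
z = h(truth) + np.sqrt(r) * np.random.randn(len(SENSORS))
x, P = ekf_step(x, P, z)
print("updated state estimate:", x.round(3))
```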

  3. Development of a parallel detection and processing system using a multidetector array for wave field restoration in scanning transmission electron microscopy.

    PubMed

    Taya, Masaki; Matsutani, Takaomi; Ikuta, Takashi; Saito, Hidekazu; Ogai, Keiko; Harada, Yoshihito; Tanaka, Takeo; Takai, Yoshizo

    2007-08-01

    A parallel image detection and image processing system for scanning transmission electron microscopy was developed using a multidetector array consisting of a multianode photomultiplier tube arranged in an 8 x 8 square array. The system enables 64 images to be acquired simultaneously from different scattering directions with a scanning time of 2.6 s. Using the 64 images, phase-contrast and amplitude-contrast images of gold particles on an amorphous carbon thin film could be reconstructed separately by applying a respective 8-shaped bandpass Fourier filter to each image and multiplying by the phase and amplitude reconstructing factors.

  4. Aerosol Intercomparison Scenarios for the Giovanni Multi-sensor Data Synergy “Advisor”

    NASA Astrophysics Data System (ADS)

    Lloyd, S. A.; Leptoukh, G. G.; Prados, A. I.; Shen, S.; Pan, J.; Rui, H.; Lynnes, C.; Fox, P. A.; West, P.; Zednik, S.

    2009-12-01

    The combination of remotely sensed aerosols datasets can result in synergistic products that are more useful than the sum of the individual datasets. Multi-sensor composite datasets can be constructed by data merging (taking very closely related parameters to create a single merged dataset to increase spatial and/or temporal coverage), cross-calibration (creating long-term climate data records from two very similar parameters), validation (using a parameter from one dataset to validate a closely related parameter in another), cross-comparison (comparing two datasets with different parameters), and data fusion (using two or more parameters to estimate a third parameter). However, care must be taken to note the differences in data provenance and quality when combining heterogeneous datasets. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is currently in its first year of funding for our project Multi-sensor Data Synergy Advisor (MDSA or Giovanni Advisor) under the NASA Earth Science Technology Office (ESTO) Advanced Information Systems and Technology (AIST) program. The Giovanni Advisor will allow researchers to combine and compare aerosol data from multiple sensors using Giovanni, such that scientifically and statistically valid conclusions can be drawn. The Giovanni Advisor will assist the user in determining how to match up two (or more) sets of data that are related, yet significantly different in some way: in the exact phenomenon being measured, the measurement technique, or the location in space-time and/or the quality of the measurements. Failing to account for these differences in merging, validation, cross calibration, comparison or fusion is likely to yield scientifically dubious results. The Giovanni Advisor captures details of each parameter’s attributes, metadata, retrieval heritage, provenance and data quality and flags relevant differences so that the user can make appropriate “apples to apples” comparisons of

  5. Magnetic arrays

    SciTech Connect

    Trumper, David L.; Kim, Won-jong; Williams, Mark E.

    1997-05-20

    Electromagnet arrays which can provide selected field patterns in either two or three dimensions, and in particular, which can provide single-sided field patterns in two or three dimensions. These features are achieved by providing arrays which have current densities that vary in the windings both parallel to the array and in the direction of array thickness.

  6. Magnetic arrays

    DOEpatents

    Trumper, D.L.; Kim, W.; Williams, M.E.

    1997-05-20

    Electromagnet arrays are disclosed which can provide selected field patterns in either two or three dimensions, and in particular, which can provide single-sided field patterns in two or three dimensions. These features are achieved by providing arrays which have current densities that vary in the windings both parallel to the array and in the direction of array thickness. 12 figs.

  7. Airborne UXO Surveys Using the Multi-Sensor Towed Array Detection System (MTADS)

    DTIC Science & Technology

    2005-07-01

    position update (at 20 Hz) with an accuracy in the horizontal plane of about 5 cm. GPS satellite clock time is used to time-stamp both position and ... operation on all models of the Bell Long Ranger helicopter. Two Global Positioning System (GPS) units mounted on the forward boom provide ... Acronyms from the source: FBO, fixed base operator; FUDS, formerly used defense sites; GIS, geographic information system; GP, general purpose; GPS, Global Positioning System.

  8. Sampled Longest Common Prefix Array

    NASA Astrophysics Data System (ADS)

    Sirén, Jouni

    When augmented with the longest common prefix (LCP) array and some other structures, the suffix array can solve many string processing problems in optimal time and space. A compressed representation of the LCP array is also one of the main building blocks in many compressed suffix tree proposals. In this paper, we describe a new compressed LCP representation: the sampled LCP array. We show that when used with a compressed suffix array (CSA), the sampled LCP array often offers better time/space trade-offs than the existing alternatives. We also show how to construct the compressed representations of the LCP array directly from a CSA.
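
    For readers unfamiliar with the structure being compressed: the sketch below (background only, not from the paper) builds the plain, uncompressed LCP array from a suffix array using Kasai's linear-time algorithm. The sampled LCP array proposed in the paper is a space-saving alternative to storing this full array alongside a compressed suffix array.

```python
# Background sketch (not from the paper): build the plain LCP array from a
# suffix array with Kasai's linear-time algorithm.
def suffix_array(s):
    # O(n^2 log n) construction; fine for a small demo string
    return sorted(range(len(s)), key=lambda i: s[i:])

def kasai_lcp(s, sa):
    n = len(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n          # lcp[r] = LCP of suffixes sa[r-1] and sa[r]
    k = 0
    for i in range(n):
        if rank[i] == 0:
            k = 0
            continue
        j = sa[rank[i] - 1]
        while i + k < n and j + k < n and s[i + k] == s[j + k]:
            k += 1
        lcp[rank[i]] = k
        k = max(k - 1, 0)
    return lcp

s = "banana"
sa = suffix_array(s)
print(sa, kasai_lcp(s, sa))   # [5, 3, 1, 0, 4, 2] and [0, 1, 3, 0, 0, 2]
```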

  9. Multi-sensor fusion techniques for state estimation of micro air vehicles

    NASA Astrophysics Data System (ADS)

    Donavanik, Daniel; Hardt-Stremayr, Alexander; Gremillion, Gregory; Weiss, Stephan; Nothwang, William

    2016-05-01

    Aggressive flight of micro air vehicles (MAVs) in unstructured, GPS-denied environments poses unique challenges for estimation of vehicle pose and velocity due to the noise, delay, and drift in individual sensor measurements. Maneuvering flight at speeds in excess of 5 m/s poses additional challenges even for active range sensors; in the case of LIDAR, an assembled scan of the vehicle's environment will in most cases be obsolete by the time it is processed. Multi-sensor fusion techniques that combine inertial measurements with passive vision techniques and/or LIDAR have achieved breakthroughs in the ability to maintain accurate state estimates without the use of external positioning sensors. In this paper, we survey algorithmic approaches to exploiting sensors with a wide range of nonlinear dynamics using filter-based and bundle-adjustment-based approaches for state estimation and optimal control. From this foundation, we propose a biologically inspired framework for incorporating the human operator in the loop as a privileged sensor in a combined human/autonomy paradigm.

  10. Multi-Sensor Observations of Earthquake Related Atmospheric Signals over Major Geohazard Validation Sites

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Davindenko, D.; Hattori, K.; Kafatos, M.; Taylor, P.

    2012-01-01

    We are conducting a scientific validation study involving multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several atmospheric and environmental parameters which we found to be associated with earthquakes, namely thermal infrared radiation, outgoing long-wavelength radiation, ionospheric electron density, and atmospheric temperature and humidity. For the first time, we applied this approach to selected GEOSS sites prone to earthquakes or volcanoes, providing a new opportunity to cross-validate our results with the dense networks of in situ and space measurements. We investigated two different seismic aspects. First, we examined sites with recent large earthquakes, namely Tohoku-oki (M9, 2011, Japan) and the Emilia region (M5.9, 2012, N. Italy); our retrospective analysis of satellite data showed the presence of anomalies in the atmosphere. Second, we carried out a retrospective analysis to check the recurrence of similar anomalous behavior in the atmosphere/ionosphere over three regions with distinct geological settings and high seismicity, Taiwan, Japan and Kamchatka, covering 40 major earthquakes (M>5.9) in the period 2005-2009. We found anomalous behavior before all of these events with no false negatives; false positives were less than 10%. Our initial results suggest that multi-instrument space-borne and ground observations show a systematic appearance of atmospheric anomalies near the epicentral area that could be explained by a coupling between the observed physical parameters and earthquake preparation processes.

  11. A multi-sensor remote sensing approach for measuring primary production from space

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine

    1989-01-01

    It is proposed to develop a multi-sensor remote sensing method for computing marine primary productivity from space, based on the capability to measure the primary ocean variables which regulate photosynthesis. The three variables and the sensors which measure them are: (1) downwelling photosynthetically available irradiance, measured by the VISSR sensor on the GOES satellite, (2) sea-surface temperature from AVHRR on NOAA series satellites, and (3) chlorophyll-like pigment concentration from the Nimbus-7/CZCS sensor. These and other measured variables would be combined within empirical or analytical models to compute primary productivity. With this proposed capability of mapping primary productivity on a regional scale, we could begin realizing a more precise and accurate global assessment of its magnitude and variability. Applications would include supplementation and expansion on the horizontal scale of ship-acquired biological data, which is more accurate and which supplies the vertical components of the field, monitoring oceanic response to increased atmospheric carbon dioxide levels, correlation with observed sedimentation patterns and processes, and fisheries management.

  12. a Meteorological Risk Assessment Method for Power Lines Based on GIS and Multi-Sensor Integration

    NASA Astrophysics Data System (ADS)

    Lin, Zhiyong; Xu, Zhimin

    2016-06-01

    Power lines, exposed to the natural environment, are vulnerable to various meteorological factors. Traditional research mainly deals with the influence of a single meteorological condition on the power line and lacks a comprehensive evaluation and analysis of multiple meteorological factors acting together. In this paper, we use multiple meteorological monitoring datasets obtained by multi-sensors to implement meteorological risk assessment and early warning for power lines. First, we generate meteorological raster maps from discrete meteorological monitoring data using spatial interpolation. Second, an expert-scoring-based analytic hierarchy process is used to compute the power line risk index for each kind of meteorological condition and to establish a mathematical model of meteorological risk. By applying this model in the raster calculator of ArcGIS, we obtain a raster map showing the overall meteorological risk for the power line. Finally, by overlaying the power line buffer layer on that raster map, we obtain the exact risk index around a given section of the power line, which provides significant guidance for power line risk management. In the experiment, based on five kinds of observation data gathered from meteorological stations in Guizhou Province, China, including wind, lightning, rain, ice, and temperature, we carry out a meteorological risk analysis for real power lines, and the experimental results demonstrate the feasibility and validity of the proposed method.
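
    The weighting and overlay steps described above reduce to a few array operations. The sketch below is illustrative only: the pairwise-comparison matrix, hazard rasters, and buffer mask are all invented, but it shows the AHP priority-vector computation (principal eigenvector of the comparison matrix) and the weighted raster combination that the ArcGIS raster calculator would perform.

```python
# Illustrative sketch only: AHP weights from a hypothetical pairwise-comparison
# matrix, weighted combination of interpolated hazard rasters, and a buffer
# overlay standing in for the power-line buffer layer.
import numpy as np

# Hypothetical AHP comparison matrix for wind, lightning, rain, ice, temperature
A = np.array([
    [1,   2,   3,   2,   4],
    [1/2, 1,   2,   1,   3],
    [1/3, 1/2, 1,   1/2, 2],
    [1/2, 1,   2,   1,   3],
    [1/4, 1/3, 1/2, 1/3, 1],
])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()                     # AHP priority (weight) vector

# Interpolated hazard rasters, one per factor, normalized to [0, 1]
np.random.seed(0)
rasters = np.random.rand(5, 100, 100)     # stand-in for kriged/IDW surfaces
risk = np.tensordot(weights, rasters, axes=1)   # raster-calculator step

# Overlay a power-line buffer (a band of cells standing in for the buffer layer)
buffer_mask = np.zeros((100, 100), dtype=bool)
buffer_mask[45:55, :] = True
print("AHP weights:", weights.round(3))
print("peak risk index within the buffer:", float(risk[buffer_mask].max()))
```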

  13. Daily Life Event Segmentation for Lifestyle Evaluation Based on Multi-Sensor Data Recorded by a Wearable Device*

    PubMed Central

    Li, Zhen; Wei, Zhiqiang; Jia, Wenyan; Sun, Mingui

    2013-01-01

    In order to evaluate people’s lifestyle for health maintenance, this paper presents a segmentation method based on multi-sensor data recorded by a wearable computer called eButton. This device is capable of continuously recording more than ten hours of data each day in multimedia form. Automatic processing of the recorded data is a significant task. We have developed a two-step summarization method to segment large datasets automatically. In the first step, motion sensor signals are used to obtain candidate boundaries between different daily activities in the data. Then, visual features are extracted from images to determine the final activity boundaries. It was found that simple signal measures, namely the combination of a standard deviation measure of the gyroscope data in the first step and an image HSV histogram feature in the second step, produce satisfactory results in automatic daily life event segmentation. This finding was verified by our experimental results. PMID:24110323
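
    A minimal sketch of the first (motion-sensor) step is given below, with made-up signals: the windowed standard deviation of the gyroscope magnitude is scanned for abrupt changes, and the resulting indices are kept as candidate activity boundaries. The second (HSV-histogram) step is indicated only by a comment; the window length and jump threshold are assumptions, not the paper's values.

```python
# Rough sketch of the first step described above, on synthetic signals: mark
# candidate activity boundaries where the windowed standard deviation of the
# gyroscope magnitude changes sharply.
import numpy as np

def candidate_boundaries(gyro, fs, win_s=30.0, jump=5.0):
    """Sample indices where windowed gyro std jumps by > jump x median jump."""
    win = int(win_s * fs)
    mag = np.linalg.norm(gyro, axis=1)
    stds = np.array([mag[i:i + win].std()
                     for i in range(0, len(mag) - win, win)])
    delta = np.abs(np.diff(stds))
    return np.where(delta > jump * np.median(delta))[0] * win + win

# Synthetic day: sitting (low motion), walking (high motion), sitting again
np.random.seed(4)
fs = 10
gyro = np.vstack([
    0.05 * np.random.randn(3000, 3),
    0.60 * np.random.randn(3000, 3),
    0.05 * np.random.randn(3000, 3),
])
print("candidate boundaries (sample idx):", candidate_boundaries(gyro, fs))

# Step 2 (sketch): for each candidate boundary, compare HSV colour histograms of
# the images just before and just after it, and keep the boundary only if the
# histograms differ enough (e.g. a chi-squared distance above a threshold).
```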

  14. The New Pelagic Operational Observatory of the Catalan Sea (OOCS) for the Multisensor Coordinated Measurement of Atmospheric and Oceanographic Conditions

    PubMed Central

    Bahamon, Nixon; Aguzzi, Jacopo; Bernardello, Raffaele; Ahumada-Sempoal, Miguel-Angel; Puigdefabregas, Joan; Cateura, Jordi; Muñoz, Eduardo; Velásquez, Zoila; Cruzado, Antonio

    2011-01-01

    The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the coordinated multisensor measurement of atmospheric and oceanographic conditions has been recently installed (2009) in the Catalan Sea (41°39′N, 2°54′E; Western Mediterranean) and continuously operated (with minor maintenance gaps) until today. This multiparametric platform is moored at 192 m depth, 9.3 km off Blanes harbour (Girona, Spain). It is composed of a buoy holding atmospheric sensors and a set of oceanographic sensors measuring the water conditions over the upper 100 m depth. The station is located close to the head of the Blanes submarine canyon where an important multispecies pelagic and demersal fishery gives the station ecological and economic relevance. The OOCS provides important records on atmospheric and oceanographic conditions, the latter through the measurement of hydrological and biogeochemical parameters, at depths with a time resolution never attained before for this area of the Mediterranean. Twenty four moored sensors and probes operating in a coordinated fashion provide important data on Essential Ocean Variables (EOVs; UNESCO) such as temperature, salinity, pressure, dissolved oxygen, chlorophyll fluorescence, and turbidity. In comparison with other pelagic observatories presently operating in other world areas, OOCS also measures photosynthetic available radiation (PAR) from above the sea surface and at different depths in the upper 50 m. Data are recorded each 30 min and transmitted in real-time to a ground station via GPRS. This time series is published and automatically updated at the frequency of data collection on the official OOCS website (http://www.ceab.csic.es/~oceans). Under development are embedded automated routines for the in situ data treatment and assimilation into numerical models, in order to provide a reliable local marine processing forecast. In this work, our goal is to detail the OOCS multisensor architecture in relation to the

  15. The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the multisensor coordinated measurement of atmospheric and oceanographic conditions.

    PubMed

    Bahamon, Nixon; Aguzzi, Jacopo; Bernardello, Raffaele; Ahumada-Sempoal, Miguel-Angel; Puigdefabregas, Joan; Cateura, Jordi; Muñoz, Eduardo; Velásquez, Zoila; Cruzado, Antonio

    2011-01-01

    The new pelagic Operational Observatory of the Catalan Sea (OOCS) for the coordinated multisensor measurement of atmospheric and oceanographic conditions has been recently installed (2009) in the Catalan Sea (41°39'N, 2°54'E; Western Mediterranean) and continuously operated (with minor maintenance gaps) until today. This multiparametric platform is moored at 192 m depth, 9.3 km off Blanes harbour (Girona, Spain). It is composed of a buoy holding atmospheric sensors and a set of oceanographic sensors measuring the water conditions over the upper 100 m depth. The station is located close to the head of the Blanes submarine canyon where an important multispecies pelagic and demersal fishery gives the station ecological and economic relevance. The OOCS provides important records on atmospheric and oceanographic conditions, the latter through the measurement of hydrological and biogeochemical parameters, at depths with a time resolution never attained before for this area of the Mediterranean. Twenty four moored sensors and probes operating in a coordinated fashion provide important data on Essential Ocean Variables (EOVs; UNESCO) such as temperature, salinity, pressure, dissolved oxygen, chlorophyll fluorescence, and turbidity. In comparison with other pelagic observatories presently operating in other world areas, OOCS also measures photosynthetic available radiation (PAR) from above the sea surface and at different depths in the upper 50 m. Data are recorded each 30 min and transmitted in real-time to a ground station via GPRS. This time series is published and automatically updated at the frequency of data collection on the official OOCS website (http://www.ceab.csic.es/~oceans). Under development are embedded automated routines for the in situ data treatment and assimilation into numerical models, in order to provide a reliable local marine processing forecast. In this work, our goal is to detail the OOCS multisensor architecture in relation to the coordinated

  16. Reliable sources and uncertain decisions in multisensor systems

    NASA Astrophysics Data System (ADS)

    Minor, Christian; Johnson, Kevin

    2015-05-01

    Conflict among information sources is a feature of fused multisource and multisensor systems. Accordingly, the subject of conflict resolution has a long history in the literature on data fusion algorithms, such as Dempster-Shafer theory (DS). Most conflict resolution strategies focus on distributing the conflict among the elements of the frame of discernment (the set of hypotheses that describe the possible decisions for which evidence is obtained) through rescaling of the evidence. These "closed-world" strategies imply that conflict is due to uncertainty in the evidence sources stemming from their reliability. An alternative approach is the "open-world" hypothesis, which allows for the presence of "unknown" elements not included in the original frame of discernment. Here, conflict must be considered a result of uncertainty in the frame of discernment, rather than solely the province of the evidence sources. Uncertainty in the operating environment of a fused system is likely to appear as an open-world scenario. Understanding the origin of conflict (source versus frame-of-discernment uncertainty) is a challenging area for research in fused systems. Determining the ratio of these uncertainties provides useful insights into the operation of fused systems and confidence in their decisions for a variety of operating environments. Results and discussion for the computation of these uncertainties are presented for several combination rules with simulated data sets.
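
    The conflict mass discussed above comes directly out of Dempster's rule of combination. The sketch below (with invented example masses) combines two mass functions over a small frame of discernment and returns both the fused, renormalized masses and the conflict mass K; an open-world treatment (e.g. the transferable belief model) would leave K on the empty set rather than renormalizing it away.

```python
# Small sketch of the conflict term under discussion: Dempster's rule of
# combination for two sources over the same frame of discernment, returning
# the fused masses and the conflict mass K. Example masses are invented.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {k: v / (1.0 - conflict) for k, v in fused.items()}, conflict

# Two sensors reporting over the frame {vehicle, person, clutter}
m_a = {frozenset({"vehicle"}): 0.6,
       frozenset({"vehicle", "person"}): 0.4}
m_b = {frozenset({"person"}): 0.5,
       frozenset({"vehicle", "person", "clutter"}): 0.5}
fused, K = dempster_combine(m_a, m_b)
print("conflict K =", K)
for focal, mass in fused.items():
    print(set(focal), round(mass, 3))
```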

  17. Multi-sensor Testing for Automated Rendezvous and Docking

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Carrington, Connie K.

    2008-01-01

    During the past two years, many sensors have been tested in an open-loop fashion in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL) to both determine their suitability for use in Automated Rendezvous and Docking (AR&D) systems and to ensure the test facility is prepared for future multi-sensor testing. The primary focus of this work was in support of the CEV AR&D system, because the AR&D sensor technology area was identified as one of the top risks in the program. In 2006, four different sensors were tested individually or in a pair in the MSFC FRL. In 2007, four sensors, two each of two different types, were tested simultaneously. In each set of tests, the target was moved through a series of pre-planned trajectories while the sensor tracked it. In addition, a laser tracker "truth" sensor also measured the target motion. The tests demonstrated the functionality of testing four sensors simultaneously as well as the capabilities (both good and bad) of all of the different sensors tested. This paper outlines the test setup and conditions, briefly describes the facility, summarizes the earlier results of the individual sensor tests, and describes in some detail the results of the four-sensor testing. Post-test analysis includes data fusion by minimum variance estimation and sequential Kalman filtering. This Sensor Technology Project work was funded by NASA's Exploration Technology Development Program.
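
    One of the post-test analysis steps named above, minimum variance estimation, amounts to inverse-variance weighting when the sensors are treated as unbiased and independent. The sketch below uses invented readings and variances to fuse four simultaneous range measurements and report the fused 1-sigma uncertainty; it illustrates the principle and is not the project's analysis code.

```python
# Hedged sketch of minimum-variance (inverse-variance-weighted) fusion of
# simultaneous range readings from several sensors; values are invented.
import numpy as np

def min_variance_fuse(readings, variances):
    """Minimum-variance linear fusion of unbiased, independent scalar readings."""
    readings = np.asarray(readings, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()                       # weights sum to 1, minimize fused variance
    fused = w @ readings
    fused_var = 1.0 / (1.0 / variances).sum()
    return fused, fused_var

# Four AR&D-style sensors tracking the same target range (metres); the
# "truth" value stands in for the laser-tracker reference.
truth = 12.500
readings = [12.52, 12.47, 12.61, 12.49]
variances = [0.01**2, 0.02**2, 0.10**2, 0.03**2]
fused, var = min_variance_fuse(readings, variances)
print(f"fused range = {fused:.3f} m "
      f"(1-sigma {np.sqrt(var) * 100:.1f} cm), truth = {truth} m")
```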

  18. Irma 5.2 multi-sensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Pau, John

    2008-04-01

    The Irma synthetic signature prediction code is being developed by the Munitions Directorate of the Air Force Research Laboratory (AFRL/RW) to facilitate the research and development of multi-sensor systems. There are over 130 users within the Department of Defense, NASA, Department of Transportation, academia, and industry. Irma began as a high-resolution, physics-based Infrared (IR) target and background signature model for tactical weapon applications and has grown to include: a laser (or active) channel (1990), improved scene generator to support correlated frame-to-frame imagery (1992), and passive IR/millimeter wave (MMW) channel for a co-registered active/passive IR/MMW model (1994). Irma version 5.0 was released in 2000 and encompassed several upgrades to both the physical models and software; host support was expanded to Windows, Linux, Solaris, and SGI Irix platforms. In 2005, version 5.1 was released after extensive verification and validation of an upgraded and reengineered ladar channel. The reengineering effort then shifted focus to the Irma passive channel. Field measurements for the validation effort include both polarized and unpolarized data collection. Irma 5.2 was released in 2007 with a reengineered passive channel. This paper summarizes the capabilities of Irma and the progress toward Irma 5.3, which includes a reengineered radar channel.

  19. Evaluating fusion techniques for multi-sensor satellite image data

    SciTech Connect

    Martin, Benjamin W; Vatsavai, Raju

    2013-01-01

    Satellite image data fusion is a topic of interest in many areas, including environmental monitoring, emergency response, and defense. Typically, no single satellite sensor can provide all of the benefits offered by a combination of different sensors (e.g., high spatial but low spectral resolution vs. low spatial but high spectral resolution, or optical vs. SAR). Given the respective strengths and weaknesses of the different types of image data, it is beneficial to fuse several types of image data to extract as much information as possible. Our work focuses on the fusion of multi-sensor image data into a unified representation that exploits the strengths of each sensor in order to minimize classification error. Of particular interest is the fusion of optical and synthetic aperture radar (SAR) images into a single multispectral image of the best possible spatial resolution. We explore various methods to optimally fuse these images and evaluate the quality of the fusion by using K-means clustering to categorize regions in the fused images and comparing the accuracies of the resulting categorization maps.
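
    The evaluation procedure described above can be illustrated end to end on synthetic data. The sketch below assumes scikit-learn is available and uses an invented "fused" image and reference map: K-means clusters the fused pixels, each cluster is relabeled with the majority reference class it overlaps, and the accuracy of the resulting categorization map is reported.

```python
# Sketch of the evaluation idea (synthetic data, not the authors' imagery):
# K-means clustering of a fused multi-band image, majority-vote relabeling
# against a reference map, and accuracy of the categorization map.
import numpy as np
from sklearn.cluster import KMeans

np.random.seed(1)
h, w, bands, k = 60, 60, 4, 3
# Stand-in "fused" image: three ground-truth classes with different band means
truth = np.random.randint(0, k, size=(h, w))
means = np.array([[0.2, 0.3, 0.1, 0.4],
                  [0.6, 0.5, 0.7, 0.2],
                  [0.9, 0.8, 0.4, 0.9]])
fused = means[truth] + 0.05 * np.random.randn(h, w, bands)

pixels = fused.reshape(-1, bands)
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)

# Relabel each K-means cluster with the majority reference class it overlaps
flat_truth = truth.ravel()
mapping = {c: np.bincount(flat_truth[clusters == c]).argmax() for c in range(k)}
pred = np.vectorize(mapping.get)(clusters)
print("categorization accuracy: %.3f" % (pred == flat_truth).mean())
```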

  20. Multisensor monitoring of drilling in advanced fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Okafor, Anthony C.; El-Gizawy, A. Sherif; Enemuoh, E. U.

    1997-06-01

    This paper investigates the main effects of drilling parameters (cutting speed, feed rate, tool geometry, and tool material) on cutting force and hole quality during drilling of Magnamite graphite-fiber-reinforced polyether ether ketone (AS4/PEEK) composites. The AS4/PEEK is a 199-ply [0 degree(s)/45 degree(s)/90 degree(s)/-45 degree(s)]s laminate composite. The Taguchi L9 orthogonal array technique is used to plan a 3^4 (four factors at three levels) robust experiment. The workpiece is supported on a fixture and mounted on a Kistler type 9257A three-component piezoelectric transducer. The response signals (cutting force and acoustic emission) are acquired simultaneously during the drilling experiments, sampled, and stored on a Pentium computer for later processing. The digitized signals are processed in the time domain. A surface profilometer is used to measure the surface roughness of the drilled holes. The optimum drilling condition is determined by careful examination of the drilling parameters' main effects. The responses are analyzed on the basis of Taguchi's signal-to-noise ratio rather than the raw measurement data and analysis of variance. The results show that the sensor signals, delamination, and surface roughness measurements are well correlated with the drilling parameters. Optimum drill tool materials, drill point angle, and cutting conditions have been determined.
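
    The Taguchi analysis step mentioned above is easy to sketch. The code below uses invented force measurements for the nine L9 runs, computes the smaller-the-better signal-to-noise ratio for each run, and averages S/N by level along one column of the L9(3^4) array to expose that factor's main effect; both the response values and the choice of S/N criterion are assumptions for the demo.

```python
# Sketch of a Taguchi S/N main-effect analysis with made-up data: compute the
# "smaller-the-better" signal-to-noise ratio per L9 run (appropriate for
# responses such as thrust force or delamination), then average S/N by level
# along one column of the L9(3^4) orthogonal array.
import numpy as np

# Hypothetical responses: 3 repeated force measurements for each of 9 runs (N)
np.random.seed(2)
responses = np.abs(np.random.randn(9, 3) * 5 + 40)

# Smaller-the-better S/N ratio: -10 log10(mean(y^2)) per run
sn = -10.0 * np.log10((responses ** 2).mean(axis=1))

# First column of the standard L9(3^4) array (levels 0, 1, 2 for one factor)
factor_level = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
for level in range(3):
    print(f"level {level}: mean S/N = {sn[factor_level == level].mean():.2f} dB")
# The level with the highest mean S/N is the preferred setting for that factor.
```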

  1. Arrays of holes fabricated by electron-beam lithography combined with image reversal process using nickel pulse reversal plating

    NASA Astrophysics Data System (ADS)

    Awad, Yousef; Lavallée, Eric; Lau, Kien Mun; Beauvais, Jacques; Drouin, Dominique; Cloutier, Melanie; Turcotte, David; Yang, Pan; Kelkar, Prasad

    2004-05-01

    A critical issue in fabricating arrays of holes is achieving high-aspect-ratio structures. The formation of ordered arrays of nanoholes in silicon nitride was investigated using an ultrathin hard etch mask formed by nickel pulse-reversal plating to invert the tonality of a dry e-beam resist patterned by e-beam lithography. Ni plating was carried out using a commercial plating solution based on nickel sulfamate salt without organic additives. Reactive ion etching using SF6/CH4 was found to be very effective for pattern transfer to silicon nitride. An array of holes of 100 nm diameter, 270 nm period, and 400 nm depth was fabricated over a 5×5 mm2 area.

  2. Enhanced processing in arrays of optimally tuned nonlinear biomimetic sensors: A coupling-mediated Ringelmann effect and its dynamical mitigation

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexander P.; Bulsara, Adi R.; Stocks, Nigel G.

    2017-03-01

    Inspired by recent results on self-tunability in the outer hair cells of the mammalian cochlea, we describe an array of magnetic sensors in which each individual sensor can self-tune to an optimal operating regime. The self-tuning gives the array its "biomimetic" features. We show that the overall performance of the array can, as expected, be improved by increasing the number of sensors; however, coupling between sensors reduces the overall performance even though the individual sensors in the system may see an improvement. We quantify the similarity of this phenomenon to the Ringelmann effect, formulated 103 years ago to account for productivity losses in human and animal groups. We propose a global feedback scheme that can be used to greatly mitigate the performance degradation that would normally stem from the Ringelmann effect.

  3. Kokkos Array

    SciTech Connect

    Edwards, Harold Carter; Sunderland, Daniel

    2012-09-12

    The Kokkos Array library implements shared-memory array data structures and parallel task dispatch interfaces for data-parallel computational kernels that are performance-portable to multicore-CPU and manycore-accelerator (e.g., GPGPU) devices.

  4. Statistical generation of training sets for measuring NO3(-), NH4(+) and major ions in natural waters using an ion selective electrode array.

    PubMed

    Mueller, Amy V; Hemond, Harold F

    2016-05-18

    Knowledge of ionic concentrations in natural waters is essential to understand watershed processes. Inorganic nitrogen, in the form of nitrate and ammonium ions, is a key nutrient as well as a participant in redox, acid-base, and photochemical processes of natural waters, leading to spatiotemporal patterns of ion concentrations at scales as small as meters or hours. Current options for measurement in situ are costly, relying primarily on instruments adapted from laboratory methods (e.g., colorimetric, UV absorption); free-standing and inexpensive ISE sensors for NO3(-) and NH4(+) could be attractive alternatives if interferences from other constituents were overcome. Multi-sensor arrays, coupled with appropriate non-linear signal processing, offer promise in this capacity but have not yet successfully achieved signal separation for NO3(-) and NH4(+)in situ at naturally occurring levels in unprocessed water samples. A novel signal processor, underpinned by an appropriate sensor array, is proposed that overcomes previous limitations by explicitly integrating basic chemical constraints (e.g., charge balance). This work further presents a rationalized process for the development of such in situ instrumentation for NO3(-) and NH4(+), including a statistical-modeling strategy for instrument design, training/calibration, and validation. Statistical analysis reveals that historical concentrations of major ionic constituents in natural waters across New England strongly covary and are multi-modal. This informs the design of a statistically appropriate training set, suggesting that the strong covariance of constituents across environmental samples can be exploited through appropriate signal processing mechanisms to further improve estimates of minor constituents. Two artificial neural network architectures, one expanded to incorporate knowledge of basic chemical constraints, were tested to process outputs of a multi-sensor array, trained using datasets of varying degrees of
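
    The charge-balance constraint highlighted above can be built into training-set generation with a few lines of linear algebra. The sketch below is purely illustrative (the ion list, covariance, and concentration ranges are invented, and it is not the paper's training-set design): it draws correlated log-normal concentrations for a handful of ions and then solves for one anion so that every synthetic sample is exactly electroneutral.

```python
# Illustrative sketch only: draw correlated log-normal ion concentrations and
# enforce the charge-balance (electroneutrality) constraint by solving for one
# anion. Ions, covariance, and ranges are invented for the demo.
import numpy as np

rng = np.random.default_rng(3)
ions   = ["Na+", "NH4+", "Ca2+", "NO3-", "Cl-"]   # column order of conc matrix
charge = np.array([+1, +1, +2, -1, -1])

# Correlated log10-concentrations (mol/L); covariance mimics covarying majors
mean = np.array([-3.3, -5.0, -3.6, -4.5, -3.4])
cov = 0.15 * np.array([
    [1.0, 0.3, 0.6, 0.2, 0.8],
    [0.3, 1.0, 0.2, 0.4, 0.2],
    [0.6, 0.2, 1.0, 0.2, 0.5],
    [0.2, 0.4, 0.2, 1.0, 0.2],
    [0.8, 0.2, 0.5, 0.2, 1.0],
])
conc = 10 ** rng.multivariate_normal(mean, cov, size=1000)   # mol/L, all > 0

# Enforce electroneutrality by letting Cl- take up the residual charge
cation_charge = conc[:, charge > 0] @ charge[charge > 0]
conc[:, 4] = cation_charge - conc[:, 3]        # Cl- = cations - NO3-
conc = conc[conc[:, 4] > 0]                    # drop infeasible draws
imbalance = conc @ charge
print(f"{conc.shape[0]} charge-balanced samples over ions {ions}; "
      f"max |imbalance| = {np.abs(imbalance).max():.2e}")
```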

  5. Nanocylinder arrays

    DOEpatents

    Tuominen, Mark; Schotter, Joerg; Thurn-Albrecht, Thomas; Russell, Thomas P.

    2007-03-13

    Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores, that in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.

  6. Nanocylinder arrays

    DOEpatents

    Tuominen, Mark; Schotter, Joerg; Thurn-Albrecht, Thomas; Russell, Thomas P.

    2009-08-11

    Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores, that in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.

  7. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Hua, H.

    2012-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
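
    The map/reduce pattern described above, operating on bundles of named numeric arrays rather than simple key-value tuples, can be mimicked in a few lines. The sketch below is a toy stand-in, not the SciReduce API: each "bundle" is a dict of NumPy arrays for one day of fabricated AIRS and MODIS retrievals, the map step differences the co-located fields, and the reduce step accumulates matchup statistics across days.

```python
# Toy stand-in (not the SciReduce API): map/reduce over per-day bundles of
# named numeric arrays, reduced to mean-difference statistics.
import numpy as np
from functools import reduce
from multiprocessing import Pool

def make_bundle(day):
    """Fabricate one day's bundle of named arrays (stand-in for real retrievals)."""
    rng = np.random.default_rng(day)
    return {"day": day,
            "airs_temp":  rng.normal(288.0, 5.0, size=(90, 90)),
            "modis_temp": rng.normal(288.5, 5.0, size=(90, 90))}

def map_fn(day):
    """Per-bundle 'matchup': difference the co-located retrievals."""
    bundle = make_bundle(day)
    diff = bundle["airs_temp"] - bundle["modis_temp"]
    return {"n": diff.size, "sum": diff.sum(), "sumsq": (diff ** 2).sum()}

def reduce_fn(a, b):
    return {k: a[k] + b[k] for k in a}

if __name__ == "__main__":
    with Pool(4) as pool:
        partials = pool.map(map_fn, range(30))        # 30 days of fake data
    total = reduce(reduce_fn, partials)
    mean = total["sum"] / total["n"]
    std = np.sqrt(total["sumsq"] / total["n"] - mean ** 2)
    print(f"AIRS - MODIS: mean = {mean:.3f} K, std = {std:.3f} K "
          f"over {total['n']} matchups")
```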

  8. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci

  9. BreedVision — A Multi-Sensor Platform for Non-Destructive Field-Based Phenotyping in Plant Breeding

    PubMed Central

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C.; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-01-01

    Achieving food and energy security for an increasing world population, likely to exceed nine billion by 2050, represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors like light curtain imaging, 3D Time-of-Flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information of the plants. The study specifies: the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example for a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012 employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all different types of sensors, which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits like plant moisture content, lodging, tiller density or biomass yield, and thus, represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies. PMID:23447014

  10. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    SciTech Connect

    Baker, J.E.

    1993-05-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on the relative information content of the original images with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel- rather than global-level features.
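
    The "data enhancement" category above can be illustrated with a minimal pixel-level fusion sketch in Python: two co-registered, noisy views of the same scene are combined per pixel by inverse-variance weighting. This is a generic example under assumed noise levels, not the algorithms developed in the report.

      # Toy illustration of pixel-level "data enhancement": two co-registered,
      # noisy views of the same scene are fused per pixel by inverse-variance
      # weighting. Scene, noise levels, and weights are assumptions.
      import numpy as np

      def fuse_pixelwise(img_a, img_b, var_a, var_b):
          """Inverse-variance weighted average computed independently at each pixel."""
          w_a, w_b = 1.0 / var_a, 1.0 / var_b
          return (w_a * img_a + w_b * img_b) / (w_a + w_b)

      truth = np.zeros((64, 64)); truth[20:40, 20:40] = 1.0
      sensor_a = truth + np.random.normal(0.0, 0.20, truth.shape)   # noisier sensor
      sensor_b = truth + np.random.normal(0.0, 0.10, truth.shape)   # cleaner sensor
      fused = fuse_pixelwise(sensor_a, sensor_b, 0.20**2, 0.10**2)
      print("RMS error a/b/fused:",
            *(np.sqrt(np.mean((x - truth)**2)) for x in (sensor_a, sensor_b, fused)))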

  12. Performance Assessment of Multi-Array Processing with Ground Truth for Infrasonic, Seismic and Seismo-Acoustic Events

    DTIC Science & Technology

    2012-07-03

    Table-of-contents and abstract fragments from the report: 1. Introduction and Summary of Research; 2. Multiple-Array Detection Assessment and Relationship to Environmental Conditions; 2.1 Abstract ... followed by the systematic application of the procedures to seismo-acoustic data in Korea and the western US during the final phase. The optimization of

  13. Formation of an array of ordered nanocathodes based on carbon nanotubes by nanoimprint lithography and PECVD processes

    SciTech Connect

    Gromov, D. G.; Shulyat’ev, A. S.; Egorkin, V. I.; Zaitsev, A. A.; Skorik, S. N.; Galperin, V. A.; Pavlov, A. A.; Shamanaev, A. A.

    2014-12-15

    Technology for the production of an array of ordered nanoemitters based on carbon nanotubes is developed. The technological parameters of the fabrication of carbon nanotubes are chosen. It is shown that the structures produced exhibit field electron emission with an emission current of 8 μA and a threshold voltage of 80 V.

  14. Multi-Sensor Analysis of Overshooting Tops in Tornadic Storms

    NASA Astrophysics Data System (ADS)

    Magee, N. B.; Goldberg, R.; Hartline, M.

    2012-12-01

    The disastrous 2011 tornado season focused much attention on the ~75% false alarm rate for NWS-issued tornado warnings. Warnings are correctly issued for ~80% of verified tornadoes, but the false alarm rate has plateaued near 75%. Any additional clues that may signal tornadogenesis would be of great benefit to the public welfare. We have performed statistical analyses of the structure and time evolution of convective overshooting tops for tornadic storms occurring in the continental United States since 2006. An amalgam of case studies and theory has long suggested that overshooting tops may often collapse just prior to the onset of tornado touchdown. Our new results suggest that this view is supported by a broad set of new statistical evidence. Our approach to the analysis makes use of a high-resolution, multi-sensor data set and seeks to gather statistics on a large set of storms. Records of WSR-88D NEXRAD radar Enhanced-Resolution Echo Tops (a product available since 2009) have been analyzed for an hour prior to and following touchdown of all EF1 and stronger storms. In addition, a coincidence search has been performed for the NASA A-Train satellite suite and tornadic events since 2006. Although the paths of the polar-orbiting satellites do not aid in analyses of temporal storm-top evolution, Aqua-MODIS, CALIPSO, and CloudSat have provided a detailed structural picture of overshooting tops in tornadic and non-tornadic supercell thunderstorms. [Figure caption: 250 m resolution Aqua-MODIS image at 1950Z on 4/27/2011, color-enhanced to emphasize overshooting tops during the tornado outbreak.]

  15. Detection of multiple airborne targets from multisensor data

    NASA Astrophysics Data System (ADS)

    Foltz, Mark A.; Srivastava, Anuj; Miller, Michael I.; Grenander, Ulf

    1995-08-01

    Previously we presented a jump-diffusion-based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It comprises both the infinitesimal, local search via a sample-path-continuous diffusion transform and larger, global steps through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.
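
    The FFT-based evaluation of sufficient statistics mentioned above can be sketched, under simplifying assumptions, as a matched-filter correlation over a discretized observation lattice; the Gaussian template and lattice below are illustrative stand-ins for the paper's array models.

      # Sketch of using FFTs to evaluate a matched-filter style statistic on a
      # discretized observation lattice. Template shape, lattice size, and the
      # injected target are assumptions for illustration only.
      import numpy as np

      def correlate_fft(image, template):
          """Circular cross-correlation of image with template via the FFT."""
          F_img = np.fft.fft2(image)
          F_tpl = np.fft.fft2(template, s=image.shape)
          return np.real(np.fft.ifft2(F_img * np.conj(F_tpl)))

      lattice = np.random.normal(size=(256, 256))          # sensor noise background
      lattice[100:108, 60:68] += 3.0                        # weak target return
      y, x = np.mgrid[0:8, 0:8]
      template = np.exp(-((x - 3.5)**2 + (y - 3.5)**2) / 8.0)
      stat = correlate_fft(lattice, template)
      print("peak statistic near lattice cell:",
            np.unravel_index(stat.argmax(), stat.shape))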

  16. Adaptive Multi-sensor Data Fusion Model for In-situ Exploration of Mars

    NASA Astrophysics Data System (ADS)

    Schneiderman, T.; Sobron, P.

    2014-12-01

    Laser Raman spectroscopy (LRS) and laser-induced breakdown spectroscopy (LIBS) can be used synergistically to characterize the geochemistry and mineralogy of potential microbial habitats and biosignatures. The value of LRS and LIBS has been recognized by the planetary science community: (i) NASA's Mars2020 mission features a combined LRS-LIBS instrument, SuperCam, and an LRS instrument, SHERLOC; (ii) an LRS instrument, RLS, will fly on ESA's 2018 ExoMars mission. The advantages of combining LRS and LIBS are evident: (1) LRS/LIBS can share hardware components; (2) LIBS reveals the relative concentration of major (and often trace) elements present in a sample; and (3) LRS yields information on the individual mineral species and their chemical/structural nature. Combining data from LRS and LIBS enables definitive mineral phase identification with precise chemical characterization of major, minor, and trace mineral species. New approaches to data processing are needed to analyze large amounts of LRS+LIBS data efficiently and maximize the scientific return of integrated measurements. Multi-sensor data fusion (MSDF) is a method that allows for robust sample identification through automated acquisition, processing, and combination of data. It optimizes information usage, yielding a more robust characterization of a target than could be acquired through single sensor use. We have developed a prototype fuzzy logic adaptive MSDF model aimed towards the unsupervised characterization of Martian habitats and their biosignatures using LRS and LIBS datasets. Our model also incorporates fusion of microimaging (MI) data - critical for placing analyses in geological and spatial context. Here, we discuss the performance of our novel MSDF model and demonstrate that automated quantification of the salt abundance in sulfate/clay/phyllosilicate mixtures is possible through data fusion of collocated LRS, LIBS, and MI data.
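
    A highly simplified sketch of fuzzy-logic fusion in this spirit is given below; the candidate phase, membership functions, scores, and weights are illustrative assumptions rather than the authors' adaptive MSDF model.

      # Simplified fuzzy-logic multi-sensor fusion: each instrument contributes
      # a membership value for one candidate mineral phase, and the memberships
      # are combined by a weighted average. All numbers here are illustrative.
      import numpy as np

      def trapezoid(x, a, b, c, d):
          """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
          return float(np.clip(min((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0))

      def fuse_memberships(mu_raman, mu_libs, mu_imaging, weights=(0.4, 0.4, 0.2)):
          """Weighted-average fusion of the per-sensor memberships for one phase."""
          return float(np.dot(weights, [mu_raman, mu_libs, mu_imaging]))

      # Toy evidence for a sulfate phase: a Raman band position (cm^-1), a
      # normalized LIBS Ca-line strength, and a texture score from microimaging.
      mu_raman   = trapezoid(1010.8, 1004, 1006, 1010, 1012)
      mu_libs    = trapezoid(0.35, 0.2, 0.5, 1.0, 1.2)
      mu_imaging = trapezoid(0.80, 0.1, 0.3, 1.0, 1.1)
      print("fused confidence for the sulfate phase:",
            fuse_memberships(mu_raman, mu_libs, mu_imaging))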

  17. Multi-sensor approach to retrieving water cloud physical properties and drizzle fraction

    NASA Astrophysics Data System (ADS)

    Prianto Rusli, Stephanie; Donovan, David; Russchenberg, Herman

    2015-04-01

    Accurately representing clouds and their interaction with the surrounding matter and radiation is one of the most important factors in climate modeling. In particular, feedback processes involving low-level water clouds play a significant role in determining the net effect of cloud climate forcing. An accurate description of cloud physical properties is therefore necessary to quantify these processes and their implications. To this end, measurements combined from a variety of remote sensing instruments at different wavelengths provide crucial information about the clouds. To exploit this, building upon previous work in this field, we have developed a ground-based multi-sensor retrieval algorithm within an optimal estimation framework. The inverse problem of 'translating' the radar, lidar, and microwave radiometer measurements into retrieval products is formulated in a physically consistent manner, without relying on approximate empirical proxies (such as explicit liquid water content vs radar reflectivity factor relationships). We apply the algorithm to synthetic signals based on the output of large eddy simulation model runs and present here the preliminary results. Given temperature and humidity profiles, information from the measurements, and a priori constraints, we derive the liquid water content profile. Assuming a monomodal gamma droplet size distribution, the number concentration, effective size of the cloud droplets, and the extinction coefficient are computed. The retrieved profiles provide a good fit to the true ones. The algorithm is being improved to take into account the presence of drizzle, an important aspect that affects cloud lifetime. Quantifying the amount of drizzle would enable the proper use of the radar reflectivity. Further development to allow retrieval of temperature and humidity profiles as well is anticipated.
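
    The optimal-estimation framework referred to above typically relies on a Gauss-Newton update of the state toward the maximum a posteriori solution. The following Python sketch shows one such step with a toy linear forward model and assumed covariances; it is not the authors' radar/lidar/radiometer forward model.

      # Generic Gauss-Newton optimal-estimation step (Rodgers-style). The forward
      # model F, Jacobian K, and covariances here are toy stand-ins.
      import numpy as np

      def oe_step(x_i, x_a, y, F, K, S_a, S_e):
          """One Gauss-Newton update of the state estimate x_i."""
          S_a_inv = np.linalg.inv(S_a)
          S_e_inv = np.linalg.inv(S_e)
          S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)     # posterior covariance
          x_new = x_a + S_hat @ K.T @ S_e_inv @ (y - F(x_i) + K @ (x_i - x_a))
          return x_new, S_hat

      # Toy linear "radar + radiometer" forward model for a 3-level LWC profile.
      K = np.array([[1.0, 0.5, 0.2],
                    [0.3, 1.0, 0.4]])
      F = lambda x: K @ x
      x_true = np.array([0.05, 0.10, 0.08])                      # g m^-3
      y = F(x_true) + np.random.normal(0, 0.005, 2)
      x_a = np.full(3, 0.07); S_a = 0.02**2 * np.eye(3); S_e = 0.005**2 * np.eye(2)
      x_hat, S_hat = oe_step(x_a, x_a, y, F, K, S_a, S_e)
      print("retrieved toy LWC profile:", x_hat)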

  18. Economical custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.; Smith, A.; Ramondetta, P.; Noto, R.; Lombardi, T.

    1976-01-01

    Automatic design technique uses standard circuit cells for producing large-scale integrated arrays. Computerized fabrication process provides individual cells of high density and efficiency, quick turnaround time, low cost, and ease of corrections for changes and errors.

  19. Resolution and signal-to-noise ratio improvement in confocal fluorescence microscopy using array detection and maximum-likelihood processing

    NASA Astrophysics Data System (ADS)

    Kakade, Rohan; Walker, John G.; Phillips, Andrew J.

    2016-08-01

    Confocal fluorescence microscopy (CFM) is widely used in the biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. An attempt is then made to recover the object from the whole set of recorded photon array data; in this paper maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.
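
    A generic maximum-likelihood (Richardson-Lucy type) iteration for Poisson photon data recorded on an array detector is sketched below; it illustrates the kind of estimator involved, with an assumed Gaussian PSF, and is not the authors' exact reconstruction scheme.

      # Generic Richardson-Lucy ML iteration for a Poisson imaging model
      # g = H f + background. PSF, object, and counts are illustrative.
      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(data, psf, n_iter=50):
          """Multiplicative ML update for Poisson photon-count data."""
          estimate = np.full_like(data, data.mean())
          psf_mirror = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = data / np.maximum(blurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate

      y, x = np.mgrid[-3:4, -3:4]
      psf = np.exp(-(x**2 + y**2) / 2.0); psf /= psf.sum()
      obj = np.zeros((64, 64)); obj[30, 30] = obj[30, 35] = 200.0   # two point emitters
      data = np.random.poisson(fftconvolve(obj, psf, mode="same") + 1.0).astype(float)
      estimate = richardson_lucy(data, psf)
      print("peak of ML estimate:", np.unravel_index(estimate.argmax(), estimate.shape))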

  20. Development of an Operational Multi-sensor and Multi-channel Aerosol Assimilation Package

    DTIC Science & Technology

    2011-08-18

    contamination, especially cirrus cloud contamination, is still a problem for the MISR aerosol product. Therefore, quality assurance and quality check...the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO). With knowledge gained from the multi-sensor analysis, the long-term... Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) aerosol products {Zhang et al, BACIMO, 2010; Zhang et al, Aerosol

  1. An adaptive Hidden Markov Model for activity recognition based on a wearable multi-sensor device

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based o...

  2. Newtonian Imperialist Competitive Approach to Optimizing Observation of Multiple Target Points in Multisensor Surveillance Systems

    NASA Astrophysics Data System (ADS)

    Afghan-Toloee, A.; Heidari, A. A.; Joibari, Y.

    2013-09-01

    The problem of specifying the minimum number of sensors to deploy in a certain area to face multiple targets has been studied extensively in the literature. In this paper, we address the multi-sensor deployment problem (MDP). The multi-sensor placement problem can be stated as minimizing the cost required to cover the multiple target points in the area. We propose a more feasible method for the multi-sensor placement problem. Our method provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved ICA (Imperialist Competitive Algorithm), is used to decrease the time needed to find an adequate solution compared to other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to represent the multiple target and placement points, providing x, y, and z computations in the observation algorithm. A model structure for the multi-sensor placement problem is proposed: the problem is formulated as an optimization problem with the objective of minimizing the cost while covering all multiple target points subject to a given probability of observation tolerance.
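
    As a simple point of reference for the placement problem stated above (not the NICA metaheuristic itself), a greedy set-cover style baseline with an assumed coverage radius and unit costs might look like the following.

      # Baseline greedy sketch of the multi-sensor placement problem: pick
      # candidate sites until every target point is covered. Radius, costs,
      # and the random 3-D geometry are assumptions for illustration.
      import numpy as np

      def greedy_placement(candidates, targets, radius, costs):
          """Pick candidate sensor sites until every target is within `radius`."""
          uncovered = set(range(len(targets)))
          chosen = []
          while uncovered:
              def gain(i):
                  d = np.linalg.norm(targets[list(uncovered)] - candidates[i], axis=1)
                  return np.count_nonzero(d <= radius) / costs[i]
              best = max(range(len(candidates)), key=gain)
              newly = {j for j in uncovered
                       if np.linalg.norm(targets[j] - candidates[best]) <= radius}
              if not newly:
                  break                      # remaining targets are unreachable
              chosen.append(best)
              uncovered -= newly
          return chosen

      rng = np.random.default_rng(0)
      candidates = rng.uniform(0, 100, size=(40, 3))   # x, y, z candidate sites
      targets = rng.uniform(0, 100, size=(25, 3))      # target points to observe
      print("selected sites:", greedy_placement(candidates, targets, 35.0, np.ones(40)))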

  3. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    PubMed Central

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  4. Tantalum (oxy)nitrides nanotube arrays for the degradation of atrazine in vis-Fenton-like process.

    PubMed

    Du, Yingxun; Zhao, Lu; Chang, Yuguang; Su, Yaling

    2012-07-30

    In order to overcome the limitation of the application of nanoparticles, tantalum (oxy)nitride nanotube arrays on a Ta foil were synthesized and introduced into a vis (visible light)-Fenton-like system to enhance the degradation of atrazine. First, the anodization of tantalum foil in a mild electrolyte solution containing ethylene glycol and water (v:v = 2:1) plus 0.5 wt.% NH4F produced tantala nanotubes with an average diameter of 30 nm and a length of approximately 1 μm. Then the nitridation of the tantala nanotube arrays resulted in the replacement of O atoms by N atoms to form tantalum (oxy)nitrides (TaON and Ta3N5), as testified by XRD and XPS analyses. The synthesized tantalum (oxy)nitride nanotubes absorb well in the visible region up to 600 nm. Under visible light, the tantalum (oxy)nitride nanotube arrays were catalytically active for Fe3+ reduction. With the tantalum (oxy)nitride nanotube arrays, the degradation of atrazine and the formation of the intermediates in the vis/Fe3+/H2O2 system were significantly accelerated. This is explained by the higher concentration of Fe2+ and thus the faster decomposition of H2O2 with the tantalum (oxy)nitride nanotubes. In addition, the tantalum (oxy)nitride nanotubes exhibited stable performance during atrazine degradation over three runs. The good performance and stability of the tantalum (oxy)nitride nanotube film, together with its convenient separation, suggest that this film is a promising catalyst for vis-Fenton-like degradation.

  5. Process of in situ forming well-aligned zinc oxide nanorod arrays on wood substrate using a two-step bottom-up method.

    PubMed

    Liu, Yongzhuang; Fu, Yanchun; Yu, Haipeng; Liu, Yixing

    2013-10-01

    A good nanocrystal covering layer on wood can serve as a protective coating and impart some new surface properties. In this study, well-aligned ZnO nanorod (NR) arrays were successfully grown on a wood surface through a two-step bottom-up growth process. The process involved pre-sowing seeds and subsequently growing them into NRs in a hydrothermal environment. The interface incorporation between the wood and the ZnO colloid particles in the precursor solution during the seeding process was analyzed and demonstrated through a schematic. The growth process of forming well-aligned ZnO NRs was analyzed by field-emission scanning electron microscopy and X-ray diffraction, which showed that the NRs elongated with increased reaction time. The effects of ZnO crystal form and capping agent on the growth process were studied from different viewpoints.

  6. High density pixel array

    NASA Technical Reports Server (NTRS)

    Wiener-Avnear, Eliezer (Inventor); McFall, James Earl (Inventor)

    2004-01-01

    A pixel array device is fabricated by a laser micro-milling method under strict process control conditions. The device has an array of pixels bonded together with an adhesive filling the grooves between adjacent pixels. The array is fabricated by moving a substrate relative to a laser beam of predetermined intensity at a controlled, constant velocity along a predetermined path defining a set of grooves between adjacent pixels so that a predetermined laser flux per unit area is applied to the material, and repeating the movement for a plurality of passes of the laser beam until the grooves are ablated to a desired depth. In one example, the substrate is an ultrasonic transducer material for fabrication of a 2D ultrasonic phased array transducer. A substrate of phosphor material is used to fabricate an X-ray focal plane array detector.

  7. Multi-sensor control for 6-axis active vibration isolation

    NASA Astrophysics Data System (ADS)

    Thayer, Douglas Gary

    The goal of this research is to look at the two different parts of the challenge of active vibration isolation. First is the hardware that will be used to accomplish the task and improve performance. The cubic hexapod, or Stewart platform, has become a popular solution to the problem because of its ability to provide 6-axis vibration isolation with a relatively simple configuration. A number of these hexapods have been constructed at different research facilities around the country to address different missions, each with their own approach. Hood Technology Corporation and the University of Washington took the lessons learned from these designs and developed a new hexapod that addresses the requirements of the Jet Propulsion Laboratory's planned spaceborne interferometry missions. This system has unique mechanical design details and is built with 4 sensors in each strut. This, along with a real-time computer to implement controllers, allows for a great deal of flexibility in controller design and research into sensor selection. Other unique design features include a very soft axial stiffness, a custom-designed voice coil actuator with a large displacement capability and elastomeric flexures both for guiding the actuator and providing pivot points on each strut. The second part, and the primary area of this research, is to examine multi-sensor control strategies in an effort to improve the performance of the controllers, their stability and/or how implementable they are. Up to this point, the primary method of control for systems of this type has been classical, designing single-input, single-output controller loops to be closed around each strut. But because of the geometry of the hexapod and the different problems that can occur with some sensors, the classical approach is limited in what it can accomplish. This research shows the benefits to be gained by going to a multiple sensor controller and implementing controllers that are designed using a frequency

  8. Autonomous Multi-Sensor Coordination: The Science Goal Monitor

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Grosvenor, Sandy; Jung, John; Hess, Melissa; Jones, Jeremy

    2004-01-01

    Many dramatic Earth phenomena are dynamic and coupled. In order to fully understand them, we need to obtain timely coordinated multi-sensor observations from widely dispersed instruments. Such a dynamic observing system must include the ability to: schedule flexibly and react autonomously to science-user-driven events; understand the higher-level goals of a science-user-defined campaign; and coordinate various space-based and ground-based resources/sensors effectively and efficiently to achieve those goals. In order to capture transient events, such a 'sensor web' system must have an automated reactive capability built into its scientific operations. To do this, we must overcome a number of challenges inherent in infusing autonomy. The Science Goal Monitor (SGM) is a prototype software tool being developed to explore the nature of automation necessary to enable dynamic observing. The tools being developed in SGM improve our ability to autonomously monitor multiple independent sensors and coordinate reactions to better observe dynamic phenomena. The SGM system enables users to specify what to look for and how to react in descriptive rather than technical terms. The system monitors streams of data to identify occurrences of the key events previously specified by the science user. When an event occurs, the system autonomously coordinates the execution of the users' desired reactions between different sensors. The information can be used to rapidly respond to a variety of fast temporal events. Investigators will no longer have to rely on after-the-fact data analysis to determine what happened. Our paper describes a series of prototype demonstrations that we have developed using SGM, NASA's Earth Observing-1 (EO-1) satellite, and the Earth Observing System's Aqua and Terra spacecrafts' MODIS instruments. Our demonstrations show the promise of coordinating data from different sources, analyzing the data for a relevant event, autonomously updating and rapidly obtaining a follow-on relevant image

  9. Low-cost Solar Array Project. Feasibility of the Silane Process for Producing Semiconductor-grade Silicon

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The feasibility of Union Carbide's silane process for commercial application was established. An integrated process design for an experimental process system development unit and a commercial facility were developed. The corresponding commercial plant economic performance was then estimated.

  10. Finite difference calculation of acoustic streaming including the boundary layer phenomena in an ultrasonic air pump on graphics processing unit array

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Koyama, Daisuke; Nakamura, Kentaro

    2012-09-01

    Direct finite difference fluid simulation of acoustic streaming on a fine-meshed three-dimensional model using a graphics processing unit (GPU)-oriented calculation array is discussed. Airflows due to the acoustic traveling wave are induced when an intense sound field is generated in a gap between a bending transducer and a reflector. Calculation results showed good agreement with the measurements of the pressure distribution. In addition, several flow vortices were observed near the boundaries of the reflector and the transducer; such vortices have often been discussed for acoustic tubes near the boundary, but had not previously been observed in calculations for an ultrasonic air pump of this type.
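
    A generic explicit finite-difference stencil update of the kind that maps naturally onto GPU-style array computation is sketched below purely for illustration; it is a plain 2-D diffusion step with an arbitrary grid and coefficient, not the acoustic-streaming solver of the paper.

      # Generic explicit finite-difference stencil update (2-D diffusion step).
      # Grid size, coefficient, and initial field are arbitrary illustrations.
      import numpy as np

      def diffuse_step(u, alpha=0.2):
          """One explicit update u += alpha * Laplacian(u) on the interior nodes."""
          lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                 - 4.0 * u[1:-1, 1:-1])
          u[1:-1, 1:-1] += alpha * lap
          return u

      u = np.zeros((128, 128)); u[60:68, 60:68] = 1.0   # initial disturbance
      for _ in range(200):
          u = diffuse_step(u)
      print("field total (roughly conserved up to boundary losses):", u.sum())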

  11. Damage-free top-down processes for fabricating two-dimensional arrays of 7 nm GaAs nanodiscs using bio-templates and neutral beam etching.

    PubMed

    Wang, Xuan-Yu; Huang, Chi-Hsien; Tsukamoto, Rikako; Mortemousque, Pierre-Andre; Itoh, Kohei M; Ohno, Yuzo; Samukawa, Seiji

    2011-09-07

    The first damage-free top-down fabrication processes for a two-dimensional array of 7 nm GaAs nanodiscs were developed by using ferritin (a protein which includes a 7 nm diameter iron core) bio-templates and neutral beam etching. The photoluminescence of GaAs etched with a neutral beam clearly revealed that the processes could accomplish defect-free etching of GaAs. In the bio-template process, to remove the ferritin protein shell without thermal damage to the GaAs, we first developed an oxygen-radical treatment method at a low temperature of 280 °C. The neutral beam then etched the defect-free nanodisc structure of the GaAs using the iron core as an etching mask. As a result, a two-dimensional array of GaAs quantum dots with a diameter of ~7 nm, a height of ~10 nm, a high taper angle of 88°, and a quantum dot density of more than 7 × 10^11 cm^-2 was successfully fabricated without causing any damage to the GaAs.

  12. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1979-01-01

    The energy consumed in manufacturing silicon solar cell modules was calculated for the current process, as well as for 1982 and 1986 projected processes. In addition, energy payback times for the above three sequences are shown. The module manufacturing energy was partitioned in two ways. In one, the silicon reduction, silicon purification, sheet formation, cell fabrication, and encapsulation energies were found. In the other, the facility, equipment, processing material, and direct material lost-in-process energies were apportioned among junction formation processes and full module manufacturing sequences. A brief methodology accounting for the energy of silicon wafers lost in processing during cell manufacturing is described.

  13. SNP Arrays

    PubMed Central

    Louhelainen, Jari

    2016-01-01

    The papers published in this Special Issue “SNP arrays” (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The range of papers vary from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of the SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also to give guidelines for future applications of SNP arrays. PMID:27792140

  14. Site-Selective Controlled Dealloying Process of Gold-Silver Nanowire Array: a Simple Approach towards Long-Term Stability and Sensitivity Improvement of SERS Substrate.

    PubMed

    Wiriyakun, Natta; Pankhlueab, Karuna; Boonrungsiman, Suwimon; Laocharoensuk, Rawiwan

    2016-12-13

    Achieving a highly sensitive and stable surface-enhanced Raman scattering (SERS) substrate depends greatly on a suitable method for fabricating large-area plasmonic nanostructures. Herein we report a simple approach using template-based synthesis to create a highly ordered two-dimensional array of gold-silver alloy nanowires, followed by a controlled dealloying process. This particular step of mild acid etching (15% v/v nitric acid for 5 min) allowed the formation of Raman hot spots on the nanowire tips while maintaining the integrity of the highly active alloy composition and the rigid nanowire array structure. A full assessment of the SERS substrate performance was accomplished using 4-mercaptobenzoic acid (4-MBA) as a probe molecule. A far higher SERS signal (150-fold) can be achieved with respect to a typical gold film substrate. Moreover, excellent stability of the SERS substrate was also demonstrated over a storage time of more than 3 months. In contrast to previous studies, in which stability improvement was accomplished at the cost of reduced sensitivity, the simultaneous improvement of sensitivity and stability makes the controlled dealloying process an excellent choice for SERS substrate fabrication. In addition, uniformity and reproducibility studies showed satisfactory results with acceptable values of relative standard deviation.

  15. Site-Selective Controlled Dealloying Process of Gold-Silver Nanowire Array: a Simple Approach towards Long-Term Stability and Sensitivity Improvement of SERS Substrate

    PubMed Central

    Wiriyakun, Natta; Pankhlueab, Karuna; Boonrungsiman, Suwimon; Laocharoensuk, Rawiwan

    2016-01-01

    Achieving a highly sensitive and stable surface-enhanced Raman scattering (SERS) substrate depends greatly on a suitable method for fabricating large-area plasmonic nanostructures. Herein we report a simple approach using template-based synthesis to create a highly ordered two-dimensional array of gold-silver alloy nanowires, followed by a controlled dealloying process. This particular step of mild acid etching (15% v/v nitric acid for 5 min) allowed the formation of Raman hot spots on the nanowire tips while maintaining the integrity of the highly active alloy composition and the rigid nanowire array structure. A full assessment of the SERS substrate performance was accomplished using 4-mercaptobenzoic acid (4-MBA) as a probe molecule. A far higher SERS signal (150-fold) can be achieved with respect to a typical gold film substrate. Moreover, excellent stability of the SERS substrate was also demonstrated over a storage time of more than 3 months. In contrast to previous studies, in which stability improvement was accomplished at the cost of reduced sensitivity, the simultaneous improvement of sensitivity and stability makes the controlled dealloying process an excellent choice for SERS substrate fabrication. In addition, uniformity and reproducibility studies showed satisfactory results with acceptable values of relative standard deviation. PMID:27958367

  16. Site-Selective Controlled Dealloying Process of Gold-Silver Nanowire Array: a Simple Approach towards Long-Term Stability and Sensitivity Improvement of SERS Substrate

    NASA Astrophysics Data System (ADS)

    Wiriyakun, Natta; Pankhlueab, Karuna; Boonrungsiman, Suwimon; Laocharoensuk, Rawiwan

    2016-12-01

    Achieving a highly sensitive and stable surface-enhanced Raman scattering (SERS) substrate depends greatly on a suitable method for fabricating large-area plasmonic nanostructures. Herein we report a simple approach using template-based synthesis to create a highly ordered two-dimensional array of gold-silver alloy nanowires, followed by a controlled dealloying process. This particular step of mild acid etching (15% v/v nitric acid for 5 min) allowed the formation of Raman hot spots on the nanowire tips while maintaining the integrity of the highly active alloy composition and the rigid nanowire array structure. A full assessment of the SERS substrate performance was accomplished using 4-mercaptobenzoic acid (4-MBA) as a probe molecule. A far higher SERS signal (150-fold) can be achieved with respect to a typical gold film substrate. Moreover, excellent stability of the SERS substrate was also demonstrated over a storage time of more than 3 months. In contrast to previous studies, in which stability improvement was accomplished at the cost of reduced sensitivity, the simultaneous improvement of sensitivity and stability makes the controlled dealloying process an excellent choice for SERS substrate fabrication. In addition, uniformity and reproducibility studies showed satisfactory results with acceptable values of relative standard deviation.

  17. CdS and CdS/CdSe sensitized ZnO nanorod array solar cells prepared by a solution ions exchange process

    SciTech Connect

    Chen, Ling; Gong, Haibo; Zheng, Xiaopeng; Zhu, Min; Zhang, Jun; Yang, Shikuan; Cao, Bingqiang

    2013-10-15

    Highlights: • CdS and CdS/CdSe quantum dots are assembled on ZnO nanorods by an ion exchange process. • The CdS/CdSe sensitization of ZnO effectively extends the absorption spectrum. • The performance of the ZnO/CdS/CdSe cell is improved by extending the absorption spectrum. Abstract: In this paper, cadmium sulfide (CdS) and cadmium sulfide/cadmium selenide (CdS/CdSe) quantum dots (QDs) are assembled onto ZnO nanorod arrays by a solution ion exchange process for QD-sensitized solar cell application. The morphology, composition and absorption properties of the different photoanodes were characterized in detail with scanning electron microscopy, transmission electron microscopy, energy-dispersive X-ray spectroscopy and Raman spectroscopy. It is shown that conformal and uniform CdS and CdS/CdSe shells can grow on ZnO nanorod cores. Quantum dot sensitized solar cells based on ZnO/CdS and ZnO/CdS/CdSe nanocable arrays were assembled with a gold counter electrode and a polysulfide electrolyte solution. The CdS/CdSe sensitization of ZnO can effectively extend the absorption spectrum up to 650 nm, which has a remarkable impact on the performance of the photovoltaic device. Preliminary results show a one-fourth improvement in solar cell efficiency.

  18. Development of array-type atmospheric-pressure RF plasma generator with electric on-off control for high-throughput numerically controlled processes

    NASA Astrophysics Data System (ADS)

    Takei, H.; Kurio, S.; Matsuyama, S.; Yamauchi, K.; Sano, Y.

    2016-10-01

    An array-type atmospheric-pressure radio-frequency (RF) plasma generator is proposed for high-precision and high-throughput numerically controlled (NC) processes. We propose the use of a metal-oxide-semiconductor field-effect transistor (MOSFET) circuit for direct RF switching to achieve plasma on-off control. We confirmed that this type of circuit works correctly using a MOSFET with a small parasitic capacitance between its source and gate. We examined the design method for the distance between adjacent electrodes, which corresponds to the parasitic capacitance between adjacent electrodes and is very important in the individual on-off control of each electrode. We developed a prototype array-type plasma generator apparatus with 19 electrodes and the same number of MOSFET circuits; we then confirmed that each electrode could control its plasma on-off state individually. We also demonstrated that the thickness uniformity of the surface Si layer of a silicon-on-insulator wafer could be processed to less than 1 nm peak to valley by the NC sacrificial oxidation method using the apparatus.

  19. Irma 5.1 multisensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Edwards, Dave; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles

    2006-05-01

    The Irma synthetic signature prediction code is being developed to facilitate the research and development of multi-sensor systems. Irma was one of the first high resolution, physics-based Infrared (IR) target and background signature models to be developed for tactical weapon applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel, and a Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Irma is currently used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. In 2005, Irma version 5.1 was released to the community. In addition to upgrading the Ladar channel code to an object oriented language (C++) and providing a new graphical user interface to construct scenes, this new release significantly improves the modeling of the ladar channel and

  20. A Vision for an International Multi-Sensor Snow Observing Mission

    NASA Technical Reports Server (NTRS)

    Kim, Edward

    2015-01-01

    Discussions within the international snow remote sensing community over the past two years have led to encouraging consensus regarding the broad outlines of a dedicated snow observing mission. The primary consensus - that since no single sensor type is satisfactory across all snow types and across all confounding factors, a multi-sensor approach is required - naturally leads to questions about the exact mix of sensors, required accuracies, and so on. In short, the natural next step is to collect such multi-sensor snow observations (with detailed ground truth) to enable trade studies of various possible mission concepts. Such trade studies must assess the strengths and limitations of heritage as well as newer measurement techniques, with an eye toward natural sensitivity to desired parameters such as snow depth and/or snow water equivalent (SWE) in spite of confounding factors like clouds, lack of solar illumination, forest cover, and topography, as well as toward measurement accuracy, temporal and spatial coverage, technological maturity, and cost.

  1. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    PubMed Central

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensor data collected at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented. PMID:27384568

  2. Airborne Multisensor Pod System, Arms control and nonproliferation technologies: Second quarter 1995

    SciTech Connect

    Alonzo, G M; Sanford, N M

    1995-01-01

    This issue focuses on the Airborne Multisensor Pod System (AMPS) which is a collaboration of many of the DOE national laboratories to provide a scientific environment to research multiple sensors and the new information that can be derived from them. The bulk of the research has been directed at nonproliferation applications, but it has also proven useful in environmental monitoring and assessment, and land/water management. The contents of this issue are: using AMPS technology to detect proliferation and monitor resources; combining multisensor data to monitor facilities and natural resources; planning a AMPS mission; SAR pod produces images day or night, rain or shine; MSI pod combines data from multiple sensors; ESI pod will analyze emissions and effluents; and accessing AMPS information on the Internet.

  3. A Flash-ADC data acquisition system developed for a drift chamber array and a digital filter algorithm for signal processing

    NASA Astrophysics Data System (ADS)

    Yi, Han; Lü, Li-Ming; Zhang, Zhao; Cheng, Wen-Jing; Ji, Wei; Huang, Yan; Zhang, Yan; Li, Hong-Jie; Cui, Yin-Ping; Lin, Ming; Wang, Yi-Jie; Duan, Li-Min; Hu, Rong-Jiang; Xiao, Zhi-Gang

    2016-11-01

    A Flash-ADC data acquisition (DAQ) system has been developed for the drift chamber array designed for the External-Target-Experiment at the Cooling Storage Ring at the Heavy Ion Research Facility, Lanzhou. A simplified readout electronics system has been developed using the Flash-ADC modules, and the whole waveform in the sampling window is recorded, from which the time and energy information can be deduced by offline processing. A digital filter algorithm has been developed to discriminate between noise and the useful signal. With the digital filtering process, the signal-to-noise ratio (SNR) is increased and better time and energy resolution can be obtained. Supported by National Basic Research Program of China (973) (2015CB856903 and 2014CB845405), partly by National Science Foundation of China (U1332207 and 11375094), and by Tsinghua University Initiative Scientific Research Program
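
    A generic offline-processing sketch for Flash-ADC waveforms is shown below: smooth with a moving-average filter, take timing from a threshold crossing, and take "energy" from the pulse integral. The actual filter used in the paper is not specified here, so the kernel, threshold, and simulated waveform are assumed values.

      # Generic offline processing of a digitized waveform: baseline subtraction,
      # moving-average smoothing, leading-edge timing, and pulse integration.
      # Kernel length, threshold, and the simulated trace are assumptions.
      import numpy as np

      def process_waveform(wave, kernel_len=8, threshold=20.0):
          baseline = wave[:50].mean()                      # pre-trigger baseline
          sig = wave - baseline
          smooth = np.convolve(sig, np.ones(kernel_len) / kernel_len, mode="same")
          above = np.nonzero(smooth > threshold)[0]
          t0 = int(above[0]) if above.size else -1         # leading-edge time bin
          energy = float(smooth.sum())                     # pulse integral
          return t0, energy

      samples = np.arange(512, dtype=float)
      pulse = 80.0 * np.exp(-(samples - 200.0) ** 2 / (2 * 15.0 ** 2))
      wave = 100.0 + pulse + np.random.normal(0.0, 5.0, samples.size)  # noisy trace
      print("time bin, energy:", process_waveform(wave))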

  4. Enthalpy arrays

    PubMed Central

    Torres, Francisco E.; Kuhn, Peter; De Bruyker, Dirk; Bell, Alan G.; Wolkin, Michal V.; Peeters, Eric; Williamson, James R.; Anderson, Gregory B.; Schmitz, Gregory P.; Recht, Michael I.; Schweizer, Sandra; Scott, Lincoln G.; Ho, Jackson H.; Elrod, Scott A.; Schultz, Peter G.; Lerner, Richard A.; Bruce, Richard H.

    2004-01-01

    We report the fabrication of enthalpy arrays and their use to detect molecular interactions, including protein–ligand binding, enzymatic turnover, and mitochondrial respiration. Enthalpy arrays provide a universal assay methodology with no need for specific assay development such as fluorescent labeling or immobilization of reagents, which can adversely affect the interaction. Microscale technology enables the fabrication of 96-detector enthalpy arrays on large substrates. The reduction in scale results in large decreases in both the sample quantity and the measurement time compared with conventional microcalorimetry. We demonstrate the utility of the enthalpy arrays by showing measurements for two protein–ligand binding interactions (RNase A + cytidine 2′-monophosphate and streptavidin + biotin), phosphorylation of glucose by hexokinase, and respiration of mitochondria in the presence of 2,4-dinitrophenol uncoupler. PMID:15210951

  5. Hybrid approach to data reduction for multi-sensor hot wires

    NASA Technical Reports Server (NTRS)

    Hooper, C. L.; Westphal, R. V.

    1991-01-01

    A hybrid approach to implementing the calibration equations for a multisensor hot-wire probe is discussed. The approach combines some of the speed of a look-up approach with the moderate storage requirements of direct calculation based on functional fitting. Particular attention is given to timing and storage comparisons for an X-wire probe. The method depends on the oft-employed concept of an effective cooling velocity which is a function only of the bridge output voltage.
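
    The hybrid idea can be illustrated by inverting a King's-law style calibration once onto a coarse voltage grid and then interpolating in that small table at run time; the constants and grid below are assumptions for illustration, not the probe calibration in the report.

      # Illustrative sketch of the hybrid concept: invert E^2 = A + B*U^n once
      # onto a coarse voltage grid (small storage), then use fast table
      # interpolation at run time. A, B, n, and the grid are assumed values.
      import numpy as np

      A, B, n = 1.4, 0.9, 0.45                     # assumed calibration constants

      def u_eff_direct(e_volts):
          """Direct calculation: invert E^2 = A + B*U^n for the cooling velocity."""
          return np.maximum((np.asarray(e_volts)**2 - A) / B, 0.0) ** (1.0 / n)

      # Off-line: small lookup table over the expected bridge-voltage range.
      e_grid = np.linspace(1.2, 3.0, 64)
      u_grid = u_eff_direct(e_grid)

      def u_eff_hybrid(e_volts):
          """Run-time evaluation: linear interpolation in the coarse table."""
          return np.interp(e_volts, e_grid, u_grid)

      e_samples = np.array([1.8, 2.2, 2.7])
      print("direct :", u_eff_direct(e_samples))
      print("hybrid :", u_eff_hybrid(e_samples))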

  6. Multisensor System for Isotemporal Measurements to Assess Indoor Climatic Conditions in Poultry Farms

    PubMed Central

    Bustamante, Eliseo; Guijarro, Enrique; García-Diego, Fernando-Juan; Balasch, Sebastián; Hospitaler, Antonio; Torres, Antonio G.

    2012-01-01

    The rearing of poultry for meat production (broilers) is an agricultural food industry with high relevance to the economy and development of some countries. Periodic episodes of extreme climatic conditions during the summer season can cause high mortality among birds, resulting in economic losses. In this context, ventilation systems within poultry houses play a critical role in ensuring appropriate indoor climatic conditions. The objective of this study was to develop a multisensor system to evaluate the design of the ventilation system in broiler houses. A measurement system equipped with three types of sensors (air velocity, temperature and differential pressure) was designed and built. The system consisted of a laptop, a data acquisition card, a multiplexer module and a set of 24 air temperature, 24 air velocity and two differential pressure sensors. The system was able to acquire up to a maximum of 128 signals simultaneously at 5 second intervals. The multisensor system was calibrated under laboratory conditions and it was then tested in field tests. Field tests were conducted in a commercial broiler farm under four different pressure and ventilation scenarios in two sections within the building. The calibration curves obtained under laboratory conditions showed similar regression coefficients among the temperature, air velocity and pressure sensors and a high goodness of fit (R^2 = 0.99) with the reference. Under field test conditions, the multisensor system provided a high number of input signals from different locations with minimal internal delay in acquiring signals. The variation among air velocity sensors was not significant. The developed multisensor system was able to integrate calibrated sensors of temperature, air velocity and differential pressure and operated successfully under different conditions in a mechanically ventilated broiler farm. This system can be used to obtain quasi-instantaneous fields of the air velocity and temperature, as well as differential

  7. Array tomography: production of arrays.

    PubMed

    Micheva, Kristina D; O'Rourke, Nancy; Busse, Brad; Smith, Stephen J

    2010-11-01

    Array tomography is a volumetric microscopy method based on physical serial sectioning. Ultrathin sections of a plastic-embedded tissue are cut using an ultramicrotome, bonded in an ordered array to a glass coverslip, stained as desired, and imaged. The resulting two-dimensional image tiles can then be reconstructed computationally into three-dimensional volume images for visualization and quantitative analysis. The minimal thickness of individual sections permits high-quality rapid staining and imaging, whereas the array format allows reliable and convenient section handling, staining, and automated imaging. Also, the physical stability of the arrays permits images to be acquired and registered from repeated cycles of staining, imaging, and stain elution, as well as from imaging using multiple modalities (e.g., fluorescence and electron microscopy). Array tomography makes it possible to visualize and quantify previously inaccessible features of tissue structure and molecular architecture. However, careful preparation of the tissue is essential for successful array tomography; these steps can be time consuming and require some practice to perfect. This protocol describes the sectioning of embedded tissues and the mounting of the serial arrays. The procedures require some familiarity with the techniques used for ultramicrotome sectioning for electron microscopy.

  8. Infrared Arrays

    NASA Astrophysics Data System (ADS)

    McLean, I.; Murdin, P.

    2000-11-01

    Infrared arrays are small electronic imaging devices subdivided into a grid or `array' of picture elements, or pixels, each of which is made of a material sensitive to photons (ELECTROMAGNETIC RADIATION) with wavelengths much longer than normal visible light. Typical dimensions of currently available devices are about 27-36 mm square, and formats now range from 2048×2048 pixels for the near-infra...

  9. Functional response of osteoblasts in functionally gradient titanium alloy mesh arrays processed by 3D additive manufacturing.

    PubMed

    Nune, K C; Kumar, A; Misra, R D K; Li, S J; Hao, Y L; Yang, R

    2017-02-01

    We elucidate here the osteoblast functions and cellular activity in the 3D printed interconnected porous architecture of functionally gradient Ti-6Al-4V alloy mesh structures in terms of cell proliferation and growth, distribution of cell nuclei, synthesis of proteins (actin, vinculin, and fibronectin), and calcium deposition. Cell culture studies with pre-osteoblasts indicated that the interconnected porous architecture of the functionally gradient mesh arrays was conducive to osteoblast functions. However, there were statistically significant differences in the cellular response depending on the pore size in the functionally gradient structure. The interconnected porous architecture contributed to the distribution of cells from the large pore size (G1) to the small pore size (G3), with consequent synthesis of extracellular matrix and calcium precipitation. The gradient mesh structure significantly impacted cell adhesion and influenced the proliferation stage, such that there was a high distribution of cells on struts of the gradient mesh structure. Actin and vinculin showed a significant difference in normalized expression level of protein per cell, which was absent in the case of fibronectin. Osteoblasts present on mesh struts formed a confluent sheet, bridging the pores through numerous cytoplasmic extensions. The gradient mesh structure fabricated by electron beam melting was explored to obtain fundamental insights into cellular activity with respect to osteoblast functions.

  10. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1979-01-01

    Analyses of slicing processes and junction formation processes are presented. A simple method for evaluating the relative economic merits of competing process options with respect to the cost of energy produced by the system is described. An energy consumption analysis was developed and applied to determine the energy consumption in the solar module fabrication process sequence, from the mining of the SiO2 to shipping. The analysis shows that current technology practice involves inordinate energy use in the purification step and large wastage of the invested energy through losses, particularly poor conversion in slicing, as well as inadequate yields throughout. The cell process energy expenditures already show a downward trend based on increased throughput rates. The large improvement, however, depends on the introduction of a more efficient purification process and of acceptable ribbon-growing techniques.

  11. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.

    PubMed

    Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao

    2015-11-06

    Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low.
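
    As a point of reference for the nonlinear filtering step mentioned above, the following minimal Python sketch shows a single constant-velocity EKF predict/update cycle with a range-bearing sonar measurement. The function name, motion model, and noise settings are illustrative assumptions rather than the authors' PMHT implementation, which additionally weights such updates by measurement-to-target association probabilities.

    ```python
    import numpy as np

    def ekf_range_bearing_update(x, P, z, R, dt=1.0, q=0.1):
        """One constant-velocity EKF cycle with a range/bearing sonar measurement.

        x : state [px, py, vx, vy]; P : state covariance
        z : measurement [range, bearing]; R : measurement noise covariance
        Illustrative only: a PMHT tracker would weight many such updates by
        posterior measurement-to-target association probabilities.
        """
        # Predict with a linear constant-velocity motion model
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        Q = q * np.eye(4)
        x = F @ x
        P = F @ P @ F.T + Q

        # Nonlinear measurement model h(x) = [range, bearing] and its Jacobian H
        px, py = x[0], x[1]
        rng = np.hypot(px, py)
        h = np.array([rng, np.arctan2(py, px)])
        H = np.array([[px / rng, py / rng, 0, 0],
                      [-py / rng**2, px / rng**2, 0, 0]])

        # Update
        y = z - h
        y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing residual
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x, P
    ```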

  12. Adaptive fusion of multisensor precipitation using Gaussian-scale mixtures in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Ebtehaj, Ardeshir Mohammad; Foufoula-Georgiou, Efi

    2011-11-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multisensor precipitation data, including global spaceborne active and passive sensors, regional ground-based weather surveillance radars, and local rain gauges. Optimal integration of these multisensor data promises a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydrologic applications. In this context, a new framework is proposed for multiscale multisensor precipitation data fusion which capitalizes on two main observations: (1) non-Gaussian statistics of precipitation images, which are concisely parameterized in the wavelet domain via a class of Gaussian-scale mixtures, and (2) the conditionally Gaussian and weakly correlated local representation of remotely sensed precipitation data in the wavelet domain, which allows for exploiting the efficient linear estimation methodologies while capturing the non-Gaussian data structure of rainfall. The proposed methodology is demonstrated using a data set of coincidental observations of precipitation reflectivity images by the spaceborne precipitation radar aboard the Tropical Rainfall Measurement Mission satellite and by ground-based weather surveillance Doppler radars.
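
    The minimal Python sketch below illustrates only the core idea of band-by-band linear (inverse-variance) fusion in a wavelet domain, using a hand-rolled single-level Haar transform and scalar error variances; the Gaussian-scale-mixture prior, the actual sensor error models, and the multiscale machinery of the paper are deliberately omitted, and all function names are illustrative.

    ```python
    import numpy as np

    def haar2(a):
        """Single-level 2-D Haar transform (assumes even image dimensions)."""
        a = a.astype(float)
        s = (a[0::2] + a[1::2]) / 2.0          # row averages
        d = (a[0::2] - a[1::2]) / 2.0          # row differences
        LL = (s[:, 0::2] + s[:, 1::2]) / 2.0   # approximation band
        LH = (s[:, 0::2] - s[:, 1::2]) / 2.0
        HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
        HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return LL, LH, HL, HH

    def ihaar2(LL, LH, HL, HH):
        """Inverse of haar2."""
        s = np.zeros((LL.shape[0], LL.shape[1] * 2))
        d = np.zeros_like(s)
        s[:, 0::2], s[:, 1::2] = LL + LH, LL - LH
        d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
        a = np.zeros((s.shape[0] * 2, s.shape[1]))
        a[0::2], a[1::2] = s + d, s - d
        return a

    def fuse(img1, var1, img2, var2):
        """Minimum-variance (inverse-variance weighted) fusion of each band."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        fused = [(w1 * b1 + w2 * b2) / (w1 + w2)
                 for b1, b2 in zip(haar2(img1), haar2(img2))]
        return ihaar2(*fused)
    ```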

  13. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimensional space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
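
    A minimal Python sketch of the same two-stage idea (score the features, keep a subset, then embed with t-SNE) is given below; the ANOVA F-score stands in for the paper's feature-subset score criterion, and the function and parameter names are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.feature_selection import f_classif
    from sklearn.manifold import TSNE

    def visualize_selected_features(X, y, keep=10, random_state=0):
        """Score features, keep the best subset, then embed the data in 2-D.

        The ANOVA F-score used here is only a stand-in for the feature subset
        score criterion; X is samples x features, y holds the class labels.
        """
        scores, _ = f_classif(X, y)               # per-feature relevance score
        top = np.argsort(scores)[::-1][:keep]     # indices of the best features
        emb = TSNE(n_components=2, random_state=random_state).fit_transform(X[:, top])
        return emb, top
    ```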

  14. GACEM: Genetic Algorithm Based Classifier Ensemble in a Multi-sensor System

    PubMed Central

    Xu, Rongwu; He, Lin

    2008-01-01

    Multi-sensor systems (MSS) have been increasingly applied in pattern classification while searching for the optimal classification framework is still an open problem. The development of the classifier ensemble seems to provide a promising solution. The classifier ensemble is a learning paradigm where many classifiers are jointly used to solve a problem, which has been proven an effective method for enhancing the classification ability. In this paper, by introducing the concept of Meta-feature (MF) and Trans-function (TF) for describing the relationship between the nature and the measurement of the observed phenomenon, classification in a multi-sensor system can be unified in the classifier ensemble framework. Then an approach called Genetic Algorithm based Classifier Ensemble in Multi-sensor system (GACEM) is presented, where a genetic algorithm is utilized for optimization of both the selection of features subset and the decision combination simultaneously. GACEM trains a number of classifiers based on different combinations of feature vectors at first and then selects the classifiers whose weight is higher than the pre-set threshold to make up the ensemble. An empirical study shows that, compared with the conventional feature-level voting and decision-level voting, not only can GACEM achieve better and more robust performance, but also simplify the system markedly. PMID:27873866
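
    The toy Python sketch below mirrors only the ensemble-selection half of such an approach: a simple genetic search over binary masks that picks classifiers from a pool according to majority-vote validation accuracy. Joint feature-subset optimisation, crossover, and the weight threshold described in the paper are omitted, integer class labels are assumed, and all names are illustrative.

    ```python
    import numpy as np
    from sklearn.base import clone
    from sklearn.model_selection import train_test_split

    def ga_select_ensemble(pool, X, y, pop=20, gens=30, p_mut=0.1, seed=0):
        """Toy genetic-algorithm selection of a classifier ensemble.

        pool : list of unfitted scikit-learn classifiers (e.g. one per feature view).
        Each chromosome is a binary mask over the pool; fitness is the
        majority-vote accuracy of the selected members on a held-out split.
        Mutation-only search, integer class labels assumed.
        """
        rng = np.random.default_rng(seed)
        Xtr, Xval, ytr, yval = train_test_split(X, y, test_size=0.3, random_state=seed)
        fitted = [clone(c).fit(Xtr, ytr) for c in pool]
        preds = np.array([c.predict(Xval) for c in fitted])      # (n_clf, n_val)

        def fitness(mask):
            if not mask.any():
                return 0.0
            votes = preds[mask.astype(bool)]
            maj = np.array([np.bincount(col).argmax() for col in votes.T])
            return float((maj == yval).mean())

        population = rng.integers(0, 2, size=(pop, len(pool)))
        for _ in range(gens):
            scores = np.array([fitness(m) for m in population])
            parents = population[np.argsort(scores)[-pop // 2:]]  # keep the fitter half
            children = parents[rng.integers(0, len(parents), size=pop - len(parents))].copy()
            flips = rng.random(children.shape) < p_mut            # bit-flip mutation
            children[flips] ^= 1
            population = np.vstack([parents, children])
        best = population[np.argmax([fitness(m) for m in population])]
        return [c for c, keep in zip(fitted, best) if keep]
    ```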

  15. Multi-sensor for measuring erythemally weighted irradiance in various directions simultaneously

    NASA Astrophysics Data System (ADS)

    Appelbaum, J.; Peleg, I.; Peled, A.

    2016-10-01

    Estimating the ultraviolet-B (UV-B) solar irradiance and its angular distribution is a matter of interest to both research and commercial institutes. A static multi-sensor instrument is developed in this paper for simultaneous measurement of the sky and the reflected erythemally weighted UV-B irradiance on multiple inclined surfaces. The instrument employs a pre-developed simple solar irradiance model and a minimum mean square error method to estimate the various irradiance parameters. The multi-sensor instrument comprises a spherical-shaped apparatus with the UV-B sensors mounted as follows: seven sky-facing sensors to measure the hemispherical sky irradiance and six sensors facing downwards to measure the reflection from the ground. This work aims to devise and outline an elementary, low-cost multi-sensor instrument. The sensor may usefully serve research, commercial, and medical institutes to sample and measure the UV-B irradiance on horizontal as well as on inclined surfaces. The various UV-B calculations for inclined surfaces are aided by the sensor's integrated software.
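
    As an illustration of this kind of least-squares parameter estimation, the Python sketch below splits the readings of sensors with known orientations into a direct-beam and an isotropic-diffuse component; the two-parameter model, the neglect of ground reflectance, and all names are simplifying assumptions and do not reproduce the instrument's actual irradiance model or software.

    ```python
    import numpy as np

    def estimate_beam_and_diffuse(normals, readings, sun_dir):
        """Least-squares split of multi-direction UV-B readings into a direct-beam
        and an isotropic-diffuse component.

        normals  : (N, 3) unit normals of the sensors
        readings : (N,)   erythemally weighted irradiance measured by each sensor
        sun_dir  : (3,)   unit vector pointing toward the sun

        Each reading is modelled as B*cos(theta_i) + D*F_i, with theta_i the
        incidence angle and F_i the sensor's isotropic sky-view factor.
        """
        normals = np.asarray(normals, float)
        cos_inc = np.clip(normals @ np.asarray(sun_dir, float), 0.0, None)
        sky_view = (1.0 + normals[:, 2]) / 2.0         # (1 + cos(tilt)) / 2
        A = np.column_stack([cos_inc, sky_view])
        params, *_ = np.linalg.lstsq(A, np.asarray(readings, float), rcond=None)
        beam, diffuse = params
        return beam, diffuse
    ```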

  16. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimensional space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  17. Analysis and Evaluation of Processes and Equipment in Tasks 2 and 4 of the Low-cost Solar Array Project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1978-01-01

    The significant economic data for the current production multiblade wafering and inner diameter slicing processes were tabulated and compared to data on the experimental and projected multiblade slurry, STC ID diamond coated blade, multiwire slurry and crystal systems fixed abrasive multiwire slicing methods. Cost calculations were performed for current production processes and for 1982 and 1986 projected wafering techniques.

  18. Quality assessment of crude and processed ginger by high-performance liquid chromatography with diode array detection and mass spectrometry combined with chemometrics.

    PubMed

    Deng, Xianmei; Yu, Jiangyong; Zhao, Ming; Zhao, Bin; Xue, Xingyang; Che, ChunTao; Meng, Jiang; Wang, Shumei

    2015-09-01

    A sensitive, simple, and validated high-performance liquid chromatography with diode array detection and mass spectrometry detection method was developed for three ginger-based traditional Chinese herbal drugs, Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata. Chemometrics methods, such as principal component analysis, hierarchical cluster analysis, and analysis of variance, were also employed in the data analysis. The results clearly revealed significant differences among Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata, indicating variations in their chemical compositions during the processing, which may elucidate the relationship of the thermal treatment with the change of the constituents and interpret their different clinical uses. Furthermore, the sample consistency of Zingiberis Rhizoma, Zingiberis Rhizome Preparatum, and Zingiberis Rhizome Carbonisata can also be visualized by high-performance liquid chromatography with diode array detection and mass spectrometry analysis followed by principal component analysis/hierarchical cluster analysis. The comprehensive strategy of liquid chromatography with mass spectrometry analysis coupled with chemometrics should be useful in quality assurance for ginger-based herbal drugs and other herbal medicines.
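
    A minimal Python sketch of the chemometric step (PCA scores plus hierarchical clustering on a samples-by-peak-areas matrix) is given below; the autoscaling choice, Ward linkage, and cluster count are illustrative assumptions rather than the authors' settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from scipy.cluster.hierarchy import linkage, fcluster

    def pca_hca(peak_areas, n_clusters=3):
        """PCA score coordinates and hierarchical cluster labels for a matrix of
        chromatographic peak areas (rows = samples, columns = marker peaks)."""
        X = StandardScaler().fit_transform(np.asarray(peak_areas, float))
        scores = PCA(n_components=2).fit_transform(X)      # 2-D score-plot coords
        Z = linkage(X, method="ward")                      # agglomerative tree
        labels = fcluster(Z, t=n_clusters, criterion="maxclust")
        return scores, labels
    ```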

  19. Hybrid simulation of the Z-pinch instabilities for profiles generated in the process of wire array implosion in the Saturn pulsed power generator.

    SciTech Connect

    Coverdale, Christine Anne; Travnicek, P.; Hellinger, P.; Fiala, V.; Leboeuf, J. N.; Deeney, Christopher; Sotnikov, Vladimir Isaakovich

    2005-02-01

    Experimental evidence suggests that the energy balance between processes in play during wire array implosions is not well understood. In fact the radiative yields can exceed by several times the implosion kinetic energy. A possible explanation is that the coupling from magnetic energy to kinetic energy as magnetohydrodynamic plasma instabilities develop provides additional energy. It is thus important to model the instabilities produced in the after implosion stage of the wire array in order to determine how the stored magnetic energy can be connected with the radiative yields. To this aim three-dimensional hybrid simulations have been performed. They are initialized with plasma radial density profiles, deduced in recent experiments [C. Deeney et al., Phys. Plasmas 6, 3576 (1999)] that exhibited large x-ray yields, together with the corresponding magnetic field profiles. Unlike previous work, these profiles do not satisfy pressure balance and differ substantially from those of a Bennett equilibrium. They result in faster growth with an associated transfer of magnetic energy to plasma motion and hence kinetic energy.

  20. NEUSORT2.0: a multiple-channel neural signal processor with systolic array buffer and channel-interleaving processing schedule.

    PubMed

    Chen, Tung-Chien; Yang, Zhi; Liu, Wentai; Chen, Liang-Gee

    2008-01-01

    An emerging class of neuroprosthetic devices aims to provide aggressive performance by integrating more complicated signal processing hardware into the neural recording system with a large number of electrodes. However, the traditional parallel structure duplicating one neural signal processor (NSP) multiple times for multiple channels imposes a heavy burden on chip area. The serial structure sequentially switching the processing task between channels requires a bulky memory to store neural data and may have a long processing delay. In this paper, a memory hierarchy based on a systolic array buffer is proposed to support signal processing interleaved channel by channel on a cycle basis to match up with the data flow of the optimized multiple-channel frontend interface circuitry. The NSP can thus be tightly coupled to the analog frontend interface circuitry and perform signal processing for multiple channels in real time without any bulky memory. Based on our previous one-channel NSP of NEUSORT1.0 [1], the proposed memory hierarchy is realized on NEUSORT2.0 for a 16-channel neural recording system. Compared with 16 copies of NEUSORT1.0, NEUSORT2.0 demonstrates an 81.50% saving in terms of the area×power factor.
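
    The Python sketch below illustrates the channel-interleaving schedule in software terms only: one processing routine visits the channels in round-robin order, consuming a single new sample per channel each cycle from a short per-channel buffer that stands in for the systolic array buffer. It is not a model of the chip itself, and all names and sizes are assumptions.

    ```python
    from collections import deque

    N_CHANNELS = 16
    WINDOW = 8        # per-channel history kept in the small buffer

    # One short ring buffer per channel stands in for the systolic array buffer.
    buffers = [deque(maxlen=WINDOW) for _ in range(N_CHANNELS)]

    def process_window(channel, window):
        """Placeholder for the per-channel spike-processing kernel."""
        return sum(window) / len(window)

    def interleaved_cycle(new_samples):
        """One acquisition cycle: the single processing core handles exactly one
        new sample per channel, channel by channel, with no bulk sample memory."""
        outputs = {}
        for ch in range(N_CHANNELS):
            buffers[ch].append(new_samples[ch])    # latest sample from the frontend
            outputs[ch] = process_window(ch, buffers[ch])
        return outputs
    ```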

  1. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Wolf, M.; Goldman, H.

    1981-01-01

    The attributes of the various metallization processes were investigated. It is shown that several metallization process sequences will lead to adequate metallization for large-area, high-performance solar cells at a metallization add-on price in the range of $6 to $12 per square meter, or $0.04 to $0.08/W(peak), assuming 15% efficiency. Conduction layer formation by thick-film silver or by tin or tin/lead solder leads to metallization add-on prices significantly above the $6 to $12 per square meter range. The wet chemical processes of electroless and electrolytic plating for strike/barrier layer and conduction layer formation, respectively, seem to be most cost effective.
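
    The per-watt figures follow directly from the area price and the assumed 15% efficiency at the standard 1000 W/m^2 rating irradiance, as the short check below shows (a worked-arithmetic sketch, not part of the original report).

    ```python
    # Convert a metallization add-on price from $/m^2 to $/W(peak), assuming the
    # standard 1000 W/m^2 irradiance used for peak-watt ratings.
    def per_watt_peak(price_per_m2, cell_efficiency=0.15, irradiance=1000.0):
        return price_per_m2 / (cell_efficiency * irradiance)

    for p in (6.0, 12.0):
        print(f"${p}/m^2 -> ${per_watt_peak(p):.2f}/W(peak)")   # $0.04 and $0.08
    ```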

  2. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Wolf, M.

    1981-01-01

    The effect of solar cell metallization pattern design on solar cell performance and the costs and performance effects of different metallization processes are discussed. Definitive design rules for the front metallization pattern for large area solar cells are presented. Chemical and physical deposition processes for metallization are described and compared. An economic evaluation of the 6 principal metallization options is presented. Instructions for preparing Format A cost data for solar cell manufacturing processes from UPPC forms for input into the SAMIC computer program are presented.

  3. Analysis and evaluation of processes and equipment in tasks 2 and 4 of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1978-01-01

    Several experimental and projected Czochralski crystal growing process methods were studied and compared to available operations and cost-data of recent production Cz-pulling, in order to elucidate the role of the dominant cost contributing factors. From this analysis, it becomes apparent that the specific add-on costs of the Cz-process can be expected to be reduced by about a factor of three by 1982, and about a factor of five by 1986. A format to guide in the accumulation of the data needed for thorough techno-economic analysis of solar cell production processes was developed.

  4. Low cost solar array project. Experimental process system development unit for producing semiconductor-grade silicon using the silane-to-silicon process

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Technical activities are reported in the design of process, facilities, and equipment for producing silicon at a rate and price commensurate with production goals for low cost solar cell modules. The silane-to-silicon process has potential for providing high purity poly-silicon on a commercial scale at a price of fourteen dollars per kilogram by 1986 (1980 dollars). Commercial process, economic analysis, process support research and development, and quality control are discussed.

  5. Analysis and Evaluation of Processes and Equipment in Tasks 2 and 4 of the Low-cost Solar Array Project

    NASA Technical Reports Server (NTRS)

    Wolf, M.

    1979-01-01

    To facilitate the task of objectively comparing competing process options, a methodology was needed for the quantitative evaluation of their relative cost effectiveness. Such a methodology was developed and is described, together with three examples for its application. The criterion for the evaluation is the cost of the energy produced by the system. The method permits the evaluation of competing design options for subsystems, based on the differences in cost and efficiency of the subsystems, assuming comparable reliability and service life, or of competing manufacturing process options for such subsystems, which include solar cells or modules. This process option analysis is based on differences in cost, yield, and conversion efficiency contribution of the process steps considered.

  6. Analysis and evaluation of process and equipment in tasks 2 and 4 of the Low Cost Solar Array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1978-01-01

    Several experimental and projected Czochralski crystal growing process methods were studied and compared to available operations and cost-data of recent production Cz-pulling, in order to elucidate the role of the dominant cost contributing factors. From this analysis, it becomes apparent that substantial cost reductions can be realized from technical advancements which fall into four categories: an increase in furnace productivity; the reduction of crucible cost through use of the crucible for the equivalent of multiple state-of-the-art crystals; the combined effect of several smaller technical improvements; and a carry over effect of the expected availability of semiconductor grade polysilicon at greatly reduced prices. A format for techno-economic analysis of solar cell production processes was developed, called the University of Pennsylvania Process Characterization (UPPC) format. The accumulated Cz process data are presented.

  7. Analysis and evaluation in the production process and equipment area of the Low-Cost Solar Array Project

    SciTech Connect

    Wolf, M.

    1980-07-01

    The solar cell metallization processes show a wide range of technical limitations, which influence solar cell performance. These limitations interact with the metallization pattern design, which is particularly critical for large square or round cells. To lay the basis for a process capability-cost-solar cell performance-value evaluation and trade-off study, the theoretical background of the metallization design-solar cell performance relationship was examined. Conclusions are presented. (WHK)

  8. A composite hydrogels-based photonic crystal multi-sensor

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Zhu, Zhigang; Zhu, Xiangrong; Yu, Wei; Liu, Mingju; Ge, Qiaoqiao; Shih, Wei-Heng

    2015-04-01

    A facile route to prepare stimuli-sensitive poly(vinyl alcohol)/poly(acrylic acid) (PVA/PAA) gelated crystalline colloidal array photonic crystal material was developed. PVA was physically gelated by utilizing an ethanol-assisted method; the resulting hydrogel/crystal composite film was then functionalized with PAA to form an interpenetrating hydrogel film. This sensor film is able to efficiently diffract visible light and rapidly respond to various environmental stimuli such as solvent, pH and strain, and the accompanying structural color shift can be repeatedly changed and easily distinguished by the naked eye.

  9. Microlens arrays

    NASA Astrophysics Data System (ADS)

    Hutley, Michael C.; Stevens, Richard F.; Daly, Daniel J.

    1992-04-01

    Microlenses have been with us for a long time as indeed the very word lens reminds us. Many early lenses, including those made by Hooke and Leeuwenhoek in the 17th century, were small and resembled lentils. Many languages use the same word for both (French "lentille" and German "Linse") and the connection is only obscure in English because we use the French word for the vegetable and the German for the optic. Many of the applications for arrays of microlenses are also well established. Lippmann's work on integral photography at the turn of the century required lens arrays and stimulated an interest that is very much alive today. At one stage, lens arrays played an important part in high speed photography and various schemes have been put forward to take advantage of the compact imaging properties of combinations of lens arrays. The fact that many of these ingenious schemes have not been developed to their full potential has to a large degree been due to the absence of lens arrays of a suitable quality and cost.

  10. Intensive time series data exploitation: the Multi-sensor Evolution Analysis (MEA) platform

    NASA Astrophysics Data System (ADS)

    Mantovani, Simone; Natali, Stefano; Folegani, Marco; Scremin, Alessandro

    2014-05-01

    The monitoring of the temporal evolution of natural phenomena must be performed in order to ensure their correct description and to allow improvements in modelling and forecast capabilities. This requirement, which is obvious for ground-based measurements, has not always been met for data collected from space-based platforms: apart from geostationary satellites and sensors, which provide very effective monitoring of phenomena at regional to global scales, smaller phenomena (with characteristic dimensions below a few kilometres) have been monitored with instruments that could collect data only at intervals of several days, and bi-temporal techniques were for years the most widely used means of characterising temporal changes and identifying specific phenomena. As the number of flying sensors has grown and their performance has improved, so has their capability to monitor natural phenomena at smaller geographic scales: we can now count on tens of years of remotely sensed data, collected by hundreds of sensors and accessible to a wide user community, and data processing techniques have to be adapted toward data-intensive exploitation. Starting in 2008, the European Space Agency initiated the development of the Multi-sensor Evolution Analysis (MEA) platform (https://mea.eo.esa.int), whose first aim was to permit the access and exploitation of long-term remotely sensed satellite data from different platforms: 15 years of global (A)ATSR data together with 5 years of regional AVNIR-2 data were loaded into the system and were used, through a web-based graphical user interface, for land cover change analysis. The MEA data availability has grown over the years, integrating multi-disciplinary data with spatial and temporal dimensions: so far tens of terabytes of data in the land and atmosphere domains are available and can be visualized and exploited, keeping the

  11. Multi-sensor approach for a satellite detection and characterization of Mediterranean Hurricanes: a case study

    NASA Astrophysics Data System (ADS)

    Laviola, Sante; Valeri, Massimo; Marcello Miglietta, Mario; Levizzani, Vincenzo

    2014-05-01

    Extreme events over the Mediterranean basin are often associated with well-organized mesoscale systems, which usually develop over Northern Africa and intensify in the presence of a warm sea surface and cold air from the North. Although the synoptic conditions are often well known, the physical processes behind the genesis and development of a particular kind of these mesoscale systems, called a Medicane or Tropical-Like Cyclone (TLC), are not well understood. A Medicane is a Mediterranean cyclogenesis with characteristics similar to those of tropical cyclones, such as spiral-like cloud bands and the presence of an "eye". The aim of this study is the improvement of the current knowledge on the Medicane structure using a satellite multi-sensor approach. Recent studies (Miglietta et al. 2013) based on the numerical model WRF demonstrate that a Medicane structure can be clearly identified by analyzing its thermal symmetry between 600 and 900 hPa: the presence of a warm core uniquely distinguishes Mediterranean TLCs from baroclinic cyclones. The challenge of this study is to describe the physical structure of a Medicane using only satellite sensors. However, in the current version of the algorithm the wind field required to calculate the vorticity parameter is provided by the WRF model. The computational scheme of the algorithm quantifies the external features and the inner properties of a possible TLC: the geometrical symmetry, often but not always spiral-shaped, the type and altitude of clouds, and the distribution of precipitation patterns are significant elements to flag an intense Mediterranean cyclogenesis as a Medicane. The method also takes into account the electrical activity of the storm in terms of the number of strokes during the last 24 hours to refine the TLC identification. Keywords: Satellite, Microwave radiometry, Medicane, retrieval methods, Remote sensing Reference Miglietta, M. M., S. Laviola, A. Malvaldi, D. Conte, V. Levizzani, and C. Price

  12. Fast contactless vibrating structure characterization using real time field programmable gate array-based digital signal processing: demonstrations with a passive wireless acoustic delay line probe and vision.

    PubMed

    Goavec-Mérou, G; Chrétien, N; Friedt, J-M; Sandoz, P; Martin, G; Lenczner, M; Ballandras, S

    2014-01-01

    Vibrating mechanical structure characterization is demonstrated using contactless techniques best suited for mobile and rotating equipment. Fast measurement rates are achieved using Field Programmable Gate Array (FPGA) devices as real-time digital signal processors. Two kinds of algorithms are implemented on FPGA and experimentally validated in the case of the vibrating tuning fork. A first application concerns in-plane displacement detection by vision with sampling rates above 10 kHz, thus reaching frequency ranges above the audio range. A second demonstration concerns pulsed-RADAR cooperative target phase detection and is applied to radiofrequency acoustic transducers used as passive wireless strain gauges. In this case, the 250 ksamples/s refresh rate achieved is only limited by the acoustic sensor design but not by the detection bandwidth. These realizations illustrate the efficiency, interest, and potentialities of FPGA-based real-time digital signal processing for the contactless interrogation of passive embedded probes with high refresh rates.

  13. Fast contactless vibrating structure characterization using real time field programmable gate array-based digital signal processing: Demonstrations with a passive wireless acoustic delay line probe and vision

    NASA Astrophysics Data System (ADS)

    Goavec-Mérou, G.; Chrétien, N.; Friedt, J.-M.; Sandoz, P.; Martin, G.; Lenczner, M.; Ballandras, S.

    2014-01-01

    Vibrating mechanical structure characterization is demonstrated using contactless techniques best suited for mobile and rotating equipment. Fast measurement rates are achieved using Field Programmable Gate Array (FPGA) devices as real-time digital signal processors. Two kinds of algorithms are implemented on FPGA and experimentally validated in the case of the vibrating tuning fork. A first application concerns in-plane displacement detection by vision with sampling rates above 10 kHz, thus reaching frequency ranges above the audio range. A second demonstration concerns pulsed-RADAR cooperative target phase detection and is applied to radiofrequency acoustic transducers used as passive wireless strain gauges. In this case, the 250 ksamples/s refresh rate achieved is only limited by the acoustic sensor design but not by the detection bandwidth. These realizations illustrate the efficiency, interest, and potentialities of FPGA-based real-time digital signal processing for the contactless interrogation of passive embedded probes with high refresh rates.

  14. Fabrication of long-focal-length plano-convex microlens array by combining the micro-milling and injection molding processes.

    PubMed

    Chen, Lei; Kirchberg, Stefan; Jiang, Bing-Yan; Xie, Lei; Jia, Yun-Long; Sun, Lei-Lei

    2014-11-01

    A uniform plano-convex spherical microlens array with a long focal length was fabricated by combining the micromilling and injection molding processes in this work. This paper presents a quantitative study of the injection molding process parameters on the uniformity of the height of the microlenses. The variation of the injection process parameters, i.e., barrel temperature, mold temperature, injection speed, and packing pressure, was found to have a significant effect on the uniformity of the height of the microlenses, especially the barrel temperature. The filling-to-packing switchover point is also critical to the uniformity of the height of the microlenses. The optimal uniformity was achieved when the polymer melts completely filled the mold cavity, or even a little excessively filled the cavity, during the filling stage. In addition, due to the filling resistance, the practical filling-to-packing switchover point can vary with the change of the filling processing conditions and lead to a non-negligible effect on the uniformity of the height of the microlenses. Furthermore, the effect of injection speed on the uniformity of the height of the microlenses was analyzed in detail. The results indicated that the effect of injection speed on the uniformity of the height of the microlenses is mainly attributed to the two functions of injection speed: transferring the filling-to-packing switchover point and affecting the distribution of residual flow stress in the polymer melt.

  15. Flat-plate solar array project: Experimental process system development unit for producing semiconductor-grade silicon using the silane-to-silicon process

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The engineering design, fabrication, assembly, operation, economic analysis, and process support research and development for an Experimental Process System Development Unit for producing semiconductor-grade silicon using the silane-to-silicon process are reported. The design activity was completed. About 95% of purchased equipment was received. The draft of the operations manual was about 50% complete and the design of the free-space system continued. The system using silicon powder transfer, melting, and shotting on a pseudocontinuous basis was demonstrated.

  16. Flat-plate solar array project: Experimental process system development unit for producing semiconductor-grade silicon using the silane-to-silicon process

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The process technology for the manufacture of semiconductor-grade silicon in a large commercial plant by 1986, at a price less than $14 per kilogram of silicon based on 1975 dollars, is discussed. The engineering design, installation, checkout, and operation of an Experimental Process System Development unit are discussed. Quality control for scaling up the process and an economic analysis of product and production costs are discussed.

  17. Effect of thermal implying during ageing process of nanorods growth on the properties of zinc oxide nanorod arrays

    NASA Astrophysics Data System (ADS)

    Ismail, A. S.; Mamat, M. H.; Malek, M. F.; Abdullah, M. A. R.; Sin, M. D.; Rusop, M.

    2016-07-01

    Undoped and Sn-doped zinc oxide (ZnO) nanostructures have been fabricated using a simple sol-gel immersion method at a growth temperature of 95°C. Heat from a hot-plate stirrer was supplied to the solution during the ageing stage of nanorod growth. The results showed a significant decrement in the quality of the layer produced after the immersion process, where the conductivity and porosity of the samples were reduced significantly due to the applied heat. The structural properties of the samples have been characterized using field emission scanning electron microscopy (FESEM), and the electrical properties have been characterized using current-voltage (I-V) measurements.

  18. Autonomous collection of dynamically-cued multi-sensor imagery

    NASA Astrophysics Data System (ADS)

    Daniel, Brian; Wilson, Michael L.; Edelberg, Jason; Jensen, Mark; Johnson, Troy; Anderson, Scott

    2011-05-01

    The availability of imagery simultaneously collected from sensors of disparate modalities enhances an image analyst's situational awareness and expands the overall detection capability to a larger array of target classes. Dynamic cooperation between sensors is increasingly important for the collection of coincident data from multiple sensors either on the same or on different platforms suitable for UAV deployment. Of particular interest is autonomous collaboration between wide area survey detection, high-resolution inspection, and RF sensors that span large segments of the electromagnetic spectrum. The Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) is building sensors with such networked communications capability and is conducting field tests to demonstrate the feasibility of collaborative sensor data collection and exploitation. Example survey / detection sensors include: NuSAR (NRL Unmanned SAR), a UAV compatible synthetic aperture radar system; microHSI, an NRL developed lightweight hyper-spectral imager; RASAR (Real-time Autonomous SAR), a lightweight podded synthetic aperture radar; and N-WAPSS-16 (Nighttime Wide-Area Persistent Surveillance Sensor-16Mpix), a MWIR large array gimbaled system. From these sensors, detected target cues are automatically sent to the NRL/SDL developed EyePod, a high-resolution, narrow FOV EO/IR sensor, for target inspection. In addition to this cooperative data collection, EyePod's real-time, autonomous target tracking capabilities will be demonstrated. Preliminary results and target analysis will be presented.

  19. Multi-Sensor Data Fusion Using a Relevance Vector Machine Based on an Ant Colony for Gearbox Fault Detection

    PubMed Central

    Liu, Zhiwen; Guo, Wei; Tang, Zhangchun; Chen, Yongqiang

    2015-01-01

    Sensors play an important role in modern manufacturing and industrial processes. Their reliability is vital to ensure reliable and accurate information for condition based maintenance. For the gearbox, the critical machine component in the rotating machinery, the vibration signals collected by sensors are usually noisy. At the same time, the fault detection results based on the vibration signals from a single sensor may be unreliable and unstable. To solve this problem, this paper proposes an intelligent multi-sensor data fusion method using the relevance vector machine (RVM) based on an ant colony optimization algorithm (ACO-RVM) for gearboxes’ fault detection. RVM is a sparse probability model based on support vector machine (SVM). RVM not only has higher detection accuracy, but also better real-time accuracy compared with SVM. The ACO algorithm is used to determine kernel parameters of RVM. Moreover, the ensemble empirical mode decomposition (EEMD) is applied to preprocess the raw vibration signals to eliminate the influence caused by noise and other unrelated signals. The distance evaluation technique (DET) is employed to select dominant features as input of the ACO-RVM, so that the redundancy and interference in a large number of features can be removed. Two gearboxes are used to demonstrate the performance of the proposed method. The experimental results show that the ACO-RVM has higher fault detection accuracy than the RVM with normal cross-validation (CV). PMID:26334280
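
    A hedged Python sketch of the overall pipeline shape is given below, with common stand-ins: univariate feature scoring in place of the distance evaluation technique, an RBF-kernel SVM in place of the RVM, and an exhaustive grid search in place of ant colony optimisation of the kernel parameter. EEMD preprocessing is assumed to have already produced the feature matrix, and none of the names correspond to the authors' code.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import Pipeline

    def build_fault_detector(X_features, y_labels, k_features=8):
        """Feature selection followed by a kernel classifier with a tuned kernel.

        Stand-ins only: univariate scoring replaces the distance evaluation
        technique, an RBF-kernel SVM replaces the relevance vector machine, and
        grid search replaces ant colony optimisation of the kernel parameter.
        """
        pipe = Pipeline([
            ("select", SelectKBest(f_classif, k=k_features)),
            ("clf", SVC(kernel="rbf")),
        ])
        grid = {"clf__gamma": np.logspace(-3, 2, 12), "clf__C": [1.0, 10.0, 100.0]}
        return GridSearchCV(pipe, grid, cv=5).fit(X_features, y_labels)
    ```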

  20. A scalable portable object-oriented framework for parallel multisensor data-fusion applications in HPC systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Prasad, Guru

    2004-04-01

    Multi-sensor Data Fusion is the synergistic integration of multiple data sets. Data fusion includes processes for aligning, associating and combining data and information in estimating and predicting the state of objects, their relationships, and characterizing situations and their significance. The combination of complex data sets and the need for real-time data storage and retrieval compounds the data fusion problem. The systematic development and use of data fusion techniques are particularly critical in applications requiring massive, diverse, ambiguous, and time-critical data. Such conditions are characteristic of new emerging requirements; e.g., network-centric and information-centric warfare, low intensity conflicts such as special operations, counter narcotics, antiterrorism, information operations and CALOW (Conventional Arms, Limited Objectives Warfare), economic and political intelligence. In this paper, Aximetric presents a novel, scalable, object-oriented, metamodel framework for a parallel, cluster-based data-fusion engine on High Performance Computing (HPC) systems. The data-clustering algorithms provide a fast, scalable technique to sift through massive, complex data sets coming through multiple streams in real-time. The load-balancing algorithm provides the capability to evenly distribute the workload among processors on-the-fly and achieve real-time scalability. The proposed data-fusion engine exploits unique data-structures for fast storage, retrieval and interactive visualization of the multiple data streams.