Science.gov

Sample records for adaptive array processing

  1. Photorefractive processing for large adaptive phased arrays.

    PubMed

    Weverka, R T; Wagner, K; Sarto, A

    1996-03-10

An adaptive null-steering phased-array optical processor that utilizes a photorefractive crystal to time integrate the adaptive weights and null out correlated jammers is described. This is a beam-steering processor in which the temporal waveform of the desired signal is known but the look direction is not. The processor computes the angle(s) of arrival of the desired signal and steers the array to look in that direction while rotating the nulls of the antenna pattern toward any narrow-band jammers that may be present. We have experimentally demonstrated a simplified version of this adaptive phased-array-radar processor that nulls out the narrow-band jammers by using feedback-correlation detection. In this processor it is assumed that we know a priori only that the signal is broadband and the jammers are narrow band. These are examples of a class of optical processors that use the angular selectivity of volume holograms to form the nulls and look directions in an adaptive phased-array-radar pattern and thereby to harness the computational abilities of three-dimensional parallelism in the volume of photorefractive crystals. The development of this volume holographic processing system has led to a new algorithm for phased-array-radar processing that uses fewer tapped delay lines than does the classic time-domain beam former. The optical implementation of the new algorithm has the further advantage of using a single photorefractive crystal to implement as many as a million adaptive weights, allowing the radar system to scale to large size with no increase in processing hardware.
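    A minimal numerical sketch of the adaptation loop described above, assuming complex LMS with a known reference waveform stands in for the crystal's time-integrated correlation (all angles, powers, and step sizes are illustrative; this is numpy, not the optical hardware):

```python
import numpy as np

rng = np.random.default_rng(0)
n_elem, n_samp, d = 8, 4000, 0.5       # elements, snapshots, spacing (wavelengths)

def steering(theta):
    # plane-wave phase ramp across a uniform linear array
    return np.exp(2j * np.pi * d * np.arange(n_elem) * np.sin(theta))

theta_sig, theta_jam = 0.3, -0.5       # arrival angles (rad), unknown to the processor
s = np.sign(rng.standard_normal(n_samp))          # known broadband waveform
j = np.exp(2j * np.pi * 0.11 * np.arange(n_samp)) # narrowband jammer
x = (np.outer(steering(theta_sig), s)
     + 5.0 * np.outer(steering(theta_jam), j)
     + 0.01 * (rng.standard_normal((n_elem, n_samp))
               + 1j * rng.standard_normal((n_elem, n_samp))))

# Time-integrate the correlation between the error and the element inputs
# (the role played by the photorefractive crystal): complex LMS.
w = np.zeros(n_elem, dtype=complex)
mu = 1e-4
for t in range(n_samp):
    e = s[t] - np.vdot(w, x[:, t])     # error against the known waveform
    w += mu * x[:, t] * np.conj(e)

sig_gain = abs(np.vdot(w, steering(theta_sig)))   # look direction found by adaptation
jam_gain = abs(np.vdot(w, steering(theta_jam)))   # should sit in a deep null
```

    After adaptation the pattern passes the broadband signal while the narrowband jammer falls into a null, mirroring the behavior the abstract attributes to the optical processor.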

  2. Study Of Adaptive-Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Satorius, Edgar H.; Griffiths, Lloyd

    1990-01-01

Report describes study of adaptive signal-processing techniques for suppression of mutual satellite interference in mobile (on ground)/satellite communication system. Presents analyses and numerical simulations of performances of two approaches to signal processing for suppression of interference. The first approach is known as "adaptive side lobe canceling"; the second is called "adaptive temporal processing".

  3. True-Time-Delay Adaptive Array Processing Using Photorefractive Crystals

    NASA Astrophysics Data System (ADS)

    Kriehn, G. R.; Wagner, K.

Radio frequency (RF) signal processing has proven to be a fertile application area for photorefractive-based optical processing techniques. This is due to a photorefractive material's capability to record gratings and to diffract optically modulated beams containing a wide RF bandwidth off these gratings; applications include the bias-free time-integrating correlator [1], adaptive signal processing, and jammer excision [2, 3, 4]. Photorefractive processing of signals from RF antenna arrays is especially appropriate because of the massive parallelism that is readily achievable in a photorefractive crystal (in which many resolvable beams can be incident on a single crystal simultaneously—each coming from an optical modulator driven by a separate RF antenna element), and because a number of approaches for adaptive array processing using photorefractive crystals have been successfully investigated [5, 6]. In these types of applications, the adaptive weight coefficients are represented by the amplitude and phase of the holographic gratings, and many millions of such adaptive weights can be multiplexed within the volume of a photorefractive crystal. RF-modulated optical signals from each array element are diffracted from the adaptively recorded photorefractive gratings (which can be multiplexed either angularly or spatially), and are then coherently combined with the appropriate amplitude weights and phase shifts to effectively steer the angular receptivity pattern of the antenna array toward the desired arriving signal. Likewise, the antenna nulls can also be rotated toward unwanted narrowband jammers for extinction, thereby optimizing the signal-to-interference-plus-noise ratio.
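    The coherent combination step can be sketched numerically: conjugate-phase weights (the role played here by the grating's amplitude and phase) steer the receptivity pattern toward a chosen direction. Element count, spacing, and angle below are illustrative assumptions:

```python
import numpy as np

n_elem, d = 16, 0.5                    # elements and spacing in wavelengths

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(n_elem) * np.sin(theta))

theta0 = np.deg2rad(20.0)              # desired arrival direction
w = steering(theta0) / n_elem          # conjugate-phase weights, unit gain at theta0

# receptivity pattern: coherent sum of the weighted element responses
thetas = np.deg2rad(np.linspace(-90, 90, 721))
pattern = np.array([abs(np.vdot(w, steering(t))) for t in thetas])
peak_deg = np.rad2deg(thetas[np.argmax(pattern)])
```

    The pattern's mainlobe lands on the steered direction with unit gain; an adaptive processor would additionally shape the weights to place nulls on jammers.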

  4. Efficient true-time-delay adaptive array processing

    NASA Astrophysics Data System (ADS)

    Wagner, Kelvin H.; Kraut, Shawn; Griffiths, Lloyd J.; Weaver, Samuel P.; Weverka, Robert T.; Sarto, Anthony W.

    1996-11-01

We present a novel and efficient approach to true-time-delay (TTD) beamforming for large adaptive phased arrays with N elements, for application in radar, sonar, and communication. This broadband, efficient adaptive time-delay array-processing algorithm decreases the number of tapped delay lines required for N-element arrays from N to only 2, producing an enormous savings in optical hardware, especially for large arrays. This new adaptive system provides the full NM degrees of freedom of a conventional N-element time-delay beamformer with M taps each, enabling it to fully and optimally adapt to an arbitrary complex spatio-temporal signal environment that can contain broadband signals, noise, and narrowband and broadband jammers, all of which can arrive from arbitrary angles onto an arbitrarily shaped array. The photonic implementation of this algorithm uses index gratings produced in the volume of photorefractive crystals as the adaptive weights in a TTD beamforming network, 1 or 2 acousto-optic devices for signal injection, and 1 or 2 time-delay-and-integrate detectors for signal extraction. This approach achieves a significant reduction in hardware complexity when compared to systems employing discrete RF hardware for the weights or when compared to alternative optical systems that typically use N-channel acousto-optic deflectors.
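    A toy illustration of why true time delay matters for broadband arrays (this sketches the conventional delay-and-sum TTD idea the paper improves on, not the authors' 2-delay-line algorithm; delays and noise level are assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_elem, n_samp = 4, 512
s = rng.standard_normal(n_samp)                 # broadband signal waveform

delays = np.array([0, 3, 6, 9])                 # per-element arrival delays (samples)
x = np.stack([np.roll(s, k) for k in delays])   # received snapshots (circular shift
                                                # used for simplicity)
x = x + 0.1 * rng.standard_normal(x.shape)

# true-time-delay beamforming: undo each element's propagation delay, then sum;
# unlike a single phase shift per element, this stays aligned over a wide band
y = sum(np.roll(x[n], -k) for n, k in enumerate(delays)) / n_elem

err_beam = np.mean((y - s) ** 2)                # noise averages down coherently
err_single = np.mean((x[0] - s) ** 2)
```

    The beamformed output tracks the broadband waveform with lower error than any single element, which is the coherent gain a TTD network buys.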

  5. Adaptive beamforming for array signal processing in aeroacoustic measurements.

    PubMed

    Huang, Xun; Bai, Long; Vinogradov, Igor; Peers, Edward

    2012-03-01

Phased microphone arrays have become an important tool in the localization of noise sources for aeroacoustic applications. In most practical aerospace cases the conventional beamforming algorithm of the delay-and-sum type has been adopted. Conventional beamforming cannot take advantage of knowledge of the noise field, and thus has poorer resolution in the presence of noise and interference. Adaptive beamforming has been used for more than three decades to address these issues and has already achieved various degrees of success in areas of communication and sonar. In this work an adaptive beamforming algorithm designed specifically for aeroacoustic applications is discussed and applied to practical experimental data. The results show that the adaptive beamforming method can save a significant amount of post-processing time for a deconvolution method. For example, the adaptive beamforming method is able to reduce the DAMAS computation time by at least 60% for the practical case considered in this work. Therefore, adaptive beamforming can be considered a promising signal processing method for aeroacoustic measurements.
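    The contrast drawn above can be sketched with the standard MVDR beamformer (the usual textbook example of adaptive beamforming that exploits knowledge of the noise field via the sample covariance; array geometry, angles, and powers are illustrative, and this is not the authors' aeroacoustic algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
n_elem, n_snap, d = 10, 2000, 0.5

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(n_elem) * np.sin(theta))

theta_src, theta_int = 0.0, 0.35       # source / interferer directions (rad)
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
jam = 10.0 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = 0.5 * (rng.standard_normal((n_elem, n_snap))
               + 1j * rng.standard_normal((n_elem, n_snap)))
x = np.outer(steering(theta_src), s) + np.outer(steering(theta_int), jam) + noise

R = x @ x.conj().T / n_snap            # sample covariance: the "knowledge of the
                                       # noise field" that delay-and-sum ignores
a = steering(theta_src)
Ri_a = np.linalg.solve(R, a)
w_mvdr = Ri_a / np.vdot(a, Ri_a)       # MVDR: min output power, unit gain on look dir
w_das = a / n_elem                     # conventional delay-and-sum weights

gain_int_mvdr = abs(np.vdot(w_mvdr, steering(theta_int)))
gain_int_das = abs(np.vdot(w_das, steering(theta_int)))
```

    MVDR keeps unit gain toward the source while placing a data-driven null on the interferer, which is where its resolution advantage over delay-and-sum comes from.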

  6. Non-linear, adaptive array processing for acoustic interference suppression.

    PubMed

    Hoppe, Elizabeth; Roan, Michael

    2009-06-01

A method is introduced where blind source separation of acoustical sources is combined with spatial processing to remove non-Gaussian, broadband interferers from space-time displays such as bearing track recorder displays. This differs from most standard techniques such as generalized sidelobe cancellers in that the separation of signals is not done spatially. The algorithm's performance is compared to that of adaptive beamforming techniques such as minimum variance distortionless response beamforming. Simulations and experiments using two acoustic sources were used to verify the performance of the algorithm. Simulations were also used to determine the effectiveness of the algorithm under various signal-to-interference, signal-to-noise, and array geometry conditions. A voice activity detection algorithm was used to benchmark the performance of the source isolation.

  7. Optimized micromirror arrays for adaptive optics

    NASA Astrophysics Data System (ADS)

    Michalicek, M. Adrian; Comtois, John H.; Hetherington, Dale L.

    1999-01-01

This paper describes the design, layout, fabrication, and surface characterization of highly optimized surface micromachined micromirror devices. Design considerations and fabrication capabilities are presented. These devices are fabricated in the state-of-the-art, four-level, planarized, ultra-low-stress polysilicon process available at Sandia National Laboratories known as the Sandia Ultra-planar Multi-level MEMS Technology (SUMMiT). This enabling process permits the development of micromirror devices with near-ideal characteristics that have previously been unrealizable in standard three-layer polysilicon processes. The reduced 1 μm minimum feature sizes and 0.1 μm mask resolution make it possible to produce dense wiring patterns and irregularly shaped flexures. Likewise, mirror surfaces can be uniquely distributed and segmented in advanced patterns and often irregular shapes in order to minimize wavefront error across the pupil. The ultra-low-stress polysilicon and planarized upper layer allow designers to make larger and more complex micromirrors of varying shape and surface area within an array while maintaining uniform performance of optical surfaces. Powerful layout functions of the AutoCAD editor simplify the design of advanced micromirror arrays and make it possible to optimize devices according to the capabilities of the fabrication process. Micromirrors fabricated in this process have demonstrated a surface variance across the array from only 2-3 nm to a worst case of roughly 25 nm while boasting active surface areas of 98% or better. Combining the process planarization with a "planarized-by-design" approach will produce micromirror array surfaces that are limited in flatness only by the surface deposition roughness of the structural material. Ultimately, the combination of advanced process and layout capabilities has permitted the fabrication of highly optimized micromirror arrays for adaptive optics.

  8. Array signal processing

    SciTech Connect

    Haykin, S.; Justice, J.H.; Owsley, N.L.; Yen, J.L.; Kak, A.C.

    1985-01-01

    This is the first book to be devoted completely to array signal processing, a subject that has become increasingly important in recent years. The book consists of six chapters. Chapter 1, which is introductory, reviews some basic concepts in wave propagation. The remaining five chapters deal with the theory and applications of array signal processing in (a) exploration seismology, (b) passive sonar, (c) radar, (d) radio astronomy, and (e) tomographic imaging. The various chapters of the book are self-contained. The book is written by a team of five active researchers, who are specialists in the individual fields covered by the pertinent chapters.

  9. Adaptive passive fathometer processing.

    PubMed

    Siderius, Martin; Song, Heechun; Gerstoft, Peter; Hodgkiss, William S; Hursky, Paul; Harrison, Chris

    2010-04-01

Recently, a technique has been developed to image seabed layers using the ocean ambient noise field as the sound source. This so-called passive fathometer technique exploits the naturally occurring acoustic sounds generated on the sea-surface, primarily from breaking waves. The method is based on the cross-correlation of noise from the ocean surface with its echo from the seabed, which recovers travel times to significant seabed reflectors. To limit averaging time and make this practical, beamforming is used with a vertical array of hydrophones to reduce interference from horizontally propagating noise. The initial development used conventional beamforming, but significant improvements have been realized using adaptive techniques. In this paper, adaptive methods for this process are described and applied to several data sets to demonstrate improvements possible as compared to conventional processing.
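    The core cross-correlation idea can be sketched in a few lines: correlating surface noise with its seabed echo peaks at the reflector's two-way travel time (travel time, reflectivity, and noise levels below are assumed; real processing would correlate beamformed up- and down-going beams):

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau, r = 50_000, 120, 0.3           # samples, two-way travel time, reflectivity

down = rng.standard_normal(n)          # downward-going surface-noise waveform
up = r * np.roll(down, tau)            # seabed echo: delayed, attenuated copy
up[:tau] = 0.0
sensor = down + up + 0.5 * rng.standard_normal(n)

# cross-correlate the surface noise with the received signal; the seabed
# reflector appears as a correlation peak at its two-way travel time
lags = np.arange(1, 400)
corr = np.array([np.dot(down[:n - k], sensor[k:]) / (n - k) for k in lags])
est_tau = int(lags[np.argmax(corr)])
```

    Averaging over many samples is what makes the weak echo stand out, which is why the paper's beamforming (reducing horizontal noise) shortens the required averaging time.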

  10. A unified systolic array for adaptive beamforming

    SciTech Connect

Bojanczyk, A.W.; Luk, F.T.

    1990-04-01

The authors present a new algorithm and systolic array for adaptive beamforming. The algorithm uses only orthogonal transformations and thus should have better numerical properties. It can be implemented on a single p × p triangular array of programmable processors that offers a throughput of one residual element per cycle.
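    A small sketch of why orthogonal transformations help numerically: solving the adaptive-weight least-squares problem through a QR factorization (the dense-linear-algebra analogue of the systolic array's rotations) avoids forming the normal equations, whose condition number is squared. The sizes and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 4, 100                          # array size, number of snapshots
A = rng.standard_normal((n, p))        # snapshot matrix (rows are array samples)
b = rng.standard_normal(n)             # reference-channel samples

# Orthogonal (QR) solve of the least-squares weight problem
Q, R = np.linalg.qr(A)
w = np.linalg.solve(R, Q.T @ b)        # back-substitution on the triangular factor
resid = b - A @ w                      # residual stream the array would emit

w_ne = np.linalg.solve(A.T @ A, A.T @ b)   # normal-equation solution, for comparison
```

    On a systolic array the same triangular factor is maintained by Givens rotations as each snapshot streams in, one residual element per cycle.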

  11. Research on algorithms for adaptive antenna arrays

    NASA Astrophysics Data System (ADS)

    Widrow, B.; Newman, W.; Gooch, R.; Duvall, K.; Shur, D.

    1981-08-01

    The fundamental efficiency of adaptive algorithms is analyzed. It is found that noise in the adaptive weights increases with convergence speed. This causes loss in mean-square-error performance. Efficiency is considered from the point of view of misadjustment versus speed of convergence. A new version of the LMS algorithm based on Newton's method is analyzed and shown to make maximally efficient use of real-time input data. The performance of this algorithm is not affected by eigenvalue disparity. Practical algorithms can be devised that closely approximate Newton's method. In certain cases, the steepest descent version of LMS performs as well as Newton's method. The efficiency of adaptive algorithms with nonstationary input environments is analyzed where signals, jammers, and background noises can be of a transient and nonstationary nature. A new adaptive filtering method for broadband adaptive beamforming is described which uses both poles and zeros in the adaptive signal filtering paths from the antenna elements to the final array output.
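    A minimal system-identification sketch of the eigenvalue-disparity point above: preconditioning the LMS gradient step with the inverse input correlation (a Newton's-method LMS) equalizes the convergence of all modes. The filter, input coloring, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_taps, n_samp = 4, 4000
h_true = np.array([1.0, -0.5, 0.25, 0.1])          # unknown weight vector

white = rng.standard_normal(n_samp + n_taps - 1)
u = np.convolve(white, [1.0, 0.95])                # colored input: eigenvalue spread
X = np.stack([u[t:t + n_taps][::-1] for t in range(n_samp)])
d = X @ h_true + 0.01 * rng.standard_normal(n_samp)

Rinv = np.linalg.inv(X.T @ X / n_samp)             # inverse input correlation

def adapt(newton, mu=0.002):
    # Newton's-method LMS preconditions the stochastic gradient with R^-1,
    # so convergence no longer depends on the eigenvalue disparity of R
    w = np.zeros(n_taps)
    for t in range(n_samp):
        g = mu * (d[t] - w @ X[t]) * X[t]
        w += Rinv @ g if newton else g
    return w

err_newton = np.linalg.norm(adapt(True) - h_true)
err_lms = np.linalg.norm(adapt(False) - h_true)
```

    With the same step size, plain steepest-descent LMS is still converging along the low-eigenvalue mode while the Newton variant has essentially finished, which is the misadjustment-versus-speed trade the abstract analyzes.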

  12. Adaptive array antenna for satellite cellular and direct broadcast communications

    NASA Technical Reports Server (NTRS)

    Horton, Charles R.; Abend, Kenneth

    1993-01-01

Adaptive phased-array antennas provide cost-effective implementation of large, lightweight apertures with high directivity and precise beam-shape control. Adaptive self-calibration allows for relaxation of all mechanical tolerances across the aperture and electrical component tolerances, providing high performance with a low-cost, lightweight array, even in the presence of large physical distortions. Beam shape is programmable and adaptable to changes in technical and operational requirements. Adaptive digital beamforming eliminates uplink contention by allowing a single electronically steerable antenna to service a large number of receivers with beams which adaptively focus on one source while eliminating interference from others. A large, adaptively calibrated and fully programmable aperture can also provide precise beam-shape control for power-efficient direct broadcast from space. Advanced adaptive digital beamforming technologies are described for: (1) electronic compensation of aperture distortion, (2) multiple receiver adaptive space-time processing, and (3) downlink beam-shape control. Cost considerations for space-based array applications are also discussed.

  13. Adaptive antenna arrays for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Gupta, I. J.

    1985-01-01

The interference protection provided by adaptive antenna arrays to an Earth station or satellite receive antenna system is studied. The case where the interference is caused by the transmission from adjacent satellites or Earth stations whose signals inadvertently enter the receiving system and interfere with the communication link is considered. Thus, the interfering signals are very weak. To increase the interference suppression, one can either decrease the thermal noise in the feedback loops or increase the gain of the auxiliary antennas in the interfering signal direction. Both methods are examined. It is shown that one may have to reduce the noise correlation to impractically low values, and that if directive auxiliary antennas are used, the required auxiliary antenna size may be too large. One can, however, combine the two methods to achieve the specified interference suppression with reasonable requirements of noise decorrelation and auxiliary antenna size. Effects of the errors in the steering vector on the adaptive array performance are studied.

  14. Adaptive antenna arrays for satellite communication

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.

    1989-01-01

The feasibility of using adaptive antenna arrays to provide interference protection in satellite communications was studied. The feedback loops as well as the sample matrix inversion (SMI) algorithm for weight control were studied. Appropriate modifications in the two were made to achieve the required interference suppression. An experimental system was built to test the modified feedback loops and the modified SMI algorithm. The performance of the experimental system was evaluated using bench-generated signals and signals received from TVRO geosynchronous satellites. A summary of results is given. Some suggestions for future work are also presented.

  15. Research in large adaptive antenna arrays

    NASA Technical Reports Server (NTRS)

    Berkowitz, R. S.; Dzekov, T.

    1976-01-01

The feasibility of microwave holographic imaging of targets near the earth using a large random conformal array on the earth's surface and illumination by a CW source on a geostationary satellite is investigated. A geometrical formulation for the illuminator-target-array relationship is applied to the calculation of signal levels resulting from L-band illumination supplied by a satellite similar to ATS-6. The relations between direct and reflected signals are analyzed and the composite resultant signal seen at each antenna element is described. Processing techniques for developing directional beam formation as well as SNR enhancement are developed. The angular resolution and focusing characteristics of a large array covering an approximately circular area on the ground are determined. The necessary relations are developed between the achievable SNR and the size and number of elements in the array. Numerical results are presented for a possible air traffic surveillance system. Finally, a simple phase correlation experiment is defined that can establish how large an array may be constructed.

  16. The CHARA Array Adaptive Optics Program

    NASA Astrophysics Data System (ADS)

Ten Brummelaar, Theo; Che, Xiao; McAlister, Harold A.; Ireland, Michael; Monnier, John D.; Mourard, Denis; Ridgway, Stephen T.; Sturmann, Judit; Sturmann, Laszlo; Turner, Nils H.; Tuthill, Peter

    2016-01-01

The CHARA array is an optical/near-infrared interferometer consisting of six 1-meter-diameter telescopes, the longest baseline of which is 331 meters. With sub-milliarcsecond angular resolution, the CHARA array is able to spatially resolve nearby stellar systems to reveal their detailed structures. To improve the sensitivity and scientific throughput, the CHARA array was funded by NSF-ATI in 2011, and by NSF-MRI in 2015, to upgrade all six telescopes with adaptive optics (AO) systems. The initial grant covers Phase I of the adaptive optics system, which includes an on-telescope wavefront sensor (WFS) and non-common-path (NCP) error correction. The WFS uses a fairly standard Shack-Hartmann design and will initially close the tip-tilt servo and log wavefront errors for use in data reduction and calibration. The second grant provides the funding for deformable mirrors for each telescope, which will be used in closed loop to remove atmospheric aberrations from the beams. There are then over twenty reflections after the WFS at the telescopes that bring the light several hundred meters into the beam-combining laboratory. Some of these, including the delay line and beam-reducing optics, are powered elements, and some of them, in particular the delay lines and telescope Coude optics, are continually moving. This means that the NCP problems in an interferometer are much greater than those found in more standard telescope systems. A second, slow AO system is required in the laboratory to correct for these NCP errors. We briefly describe the AO system and its current status, as well as discuss the new science enabled by the system, with a focus on our YSO program.

  17. Adaptive and mobile ground sensor array.

    SciTech Connect

    Holzrichter, Michael Warren; O'Rourke, William T.; Zenner, Jennifer; Maish, Alexander B.

    2003-12-01

    The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.

  18. Adaptive Detector Arrays for Optical Communications Receivers

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Srinivasan, M.

    2000-01-01

    The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver by a hard decision on the selection of the signal detector elements also is described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers, when operating in high background environments.
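    A toy numerical sketch of the suboptimum hard-decision receiver described above: keeping only the focal-plane elements that carry signal energy excludes most of the background counts (pixel counts, rates, and the assumption that the signal-bearing pixels are known are all illustrative; the paper's optimal receiver uses continuous weights instead):

```python
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_trials = 64, 2000
signal_pix = np.arange(4)              # focal-plane pixels the turbulent spot covers
lam_b, lam_s = 1.0, 6.0                # mean background / signal counts per pixel

counts_on = rng.poisson(lam_b, (n_trials, n_pix))
counts_on[:, signal_pix] += rng.poisson(lam_s, (n_trials, len(signal_pix)))
counts_off = rng.poisson(lam_b, (n_trials, n_pix))

# single-detector-style receiver: sum every pixel, so all 64 background
# contributions enter the decision statistic
stat_all_on, stat_all_off = counts_on.sum(axis=1), counts_off.sum(axis=1)

# adaptive (hard-decision) receiver: keep only the pixels flagged as
# signal-bearing and suppress the rest
stat_sel_on = counts_on[:, signal_pix].sum(axis=1)
stat_sel_off = counts_off[:, signal_pix].sum(axis=1)

def dprime(on, off):
    # separation of the signal-present vs signal-absent statistics
    return (on.mean() - off.mean()) / np.sqrt(0.5 * (on.var() + off.var()))

d_all = dprime(stat_all_on, stat_all_off)
d_sel = dprime(stat_sel_on, stat_sel_off)
```

    The selected-pixel statistic separates the signal-present and signal-absent hypotheses far better, which is the mechanism behind the multi-dB gains reported in high-background conditions.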

  19. Adaptive Signal Processing Testbed

    NASA Astrophysics Data System (ADS)

    Parliament, Hugh A.

    1991-09-01

    The design and implementation of a system for the acquisition, processing, and analysis of signal data is described. The initial application for the system is the development and analysis of algorithms for excision of interfering tones from direct sequence spread spectrum communication systems. The system is called the Adaptive Signal Processing Testbed (ASPT) and is an integrated hardware and software system built around the TMS320C30 chip. The hardware consists of a radio frequency data source, digital receiver, and an adaptive signal processor implemented on a Sun workstation. The software components of the ASPT consists of a number of packages including the Sun driver package; UNIX programs that support software development on the TMS320C30 boards; UNIX programs that provide the control, user interaction, and display capabilities for the data acquisition, processing, and analysis components of the ASPT; and programs that perform the ASPT functions including data acquisition, despreading, and adaptive filtering. The performance of the ASPT system is evaluated by comparing actual data rates against their desired values. A number of system limitations are identified and recommendations are made for improvements.

  20. Proposed MIDAS II processing array

    SciTech Connect

    Meng, J.

    1982-03-01

    MIDAS (Modular Interactive Data Analysis System) is a ganged processor scheme used to interactively process large data bases occurring as a finite sequence of similar events. The existing device uses a system of eight ganged minicomputer central processor boards servicing a rotating group of 16 memory blocks. A proposal for MIDAS II, the successor to MIDAS, is to use a much larger number of ganged processors, one per memory block, avoiding the necessity of switching memories from processor to processor. To be economic, MIDAS II must use a small, relatively fast and inexpensive microprocessor, such as the TMS 9995. This paper analyzes the use of the TMS 9995 applied to the MIDAS II processing array, emphasizing computational, architectural and physical characteristics which make the use of the TMS 9995 attractive for this application.

  1. Optimizing Satellite Communications With Adaptive and Phased Array Antennas

    NASA Technical Reports Server (NTRS)

    Ingram, Mary Ann; Romanofsky, Robert; Lee, Richard Q.; Miranda, Felix; Popovic, Zoya; Langley, John; Barott, William C.; Ahmed, M. Usman; Mandl, Dan

    2004-01-01

    A new adaptive antenna array architecture for low-earth-orbiting satellite ground stations is being investigated. These ground stations are intended to have no moving parts and could potentially be operated in populated areas, where terrestrial interference is likely. The architecture includes multiple, moderately directive phased arrays. The phased arrays, each steered in the approximate direction of the satellite, are adaptively combined to enhance the Signal-to-Noise and Interference-Ratio (SNIR) of the desired satellite. The size of each phased array is to be traded-off with the number of phased arrays, to optimize cost, while meeting a bit-error-rate threshold. Also, two phased array architectures are being prototyped: a spacefed lens array and a reflect-array. If two co-channel satellites are in the field of view of the phased arrays, then multi-user detection techniques may enable simultaneous demodulation of the satellite signals, also known as Space Division Multiple Access (SDMA). We report on Phase I of the project, in which fixed directional elements are adaptively combined in a prototype to demodulate the S-band downlink of the EO-1 satellite, which is part of the New Millennium Program at NASA.

  2. Adaptive processing for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Crane, R. B.; Reyer, J. F.

    1975-01-01

Analytical and test results on the use of adaptive processing on LANDSAT data are presented. The Kalman filter was used as a framework to contain different adapting techniques. When LANDSAT MSS data were used, all of the modifications made to the Kalman filter performed the functions for which they were designed. It was found that adaptive processing could provide compensation for incorrect signature means, within limits. However, if the data were such that poor classification accuracy would be obtained when the correct means were used, then adaptive processing would not improve the accuracy and might well lower it even further.
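    A minimal sketch of the Kalman-filter framework for compensating a drifting signature mean, reduced to a scalar random-walk state model (the drift, noise variances, and initialization are illustrative, not the report's actual multispectral setup):

```python
import numpy as np

rng = np.random.default_rng(8)
n_obs = 400
true_mean = np.linspace(5.0, 7.0, n_obs)       # slowly drifting signature mean
z = true_mean + rng.standard_normal(n_obs)     # noisy per-pixel observations

# scalar Kalman filter with a random-walk model for the class mean
q, r = 0.01, 1.0                               # process / measurement noise variances
m, p = 0.0, 10.0                               # initial estimate and its variance
est = np.empty(n_obs)
for t in range(n_obs):
    p += q                                     # predict: the mean may have drifted
    k = p / (p + r)                            # Kalman gain
    m += k * (z[t] - m)                        # correct with the innovation
    p *= 1.0 - k
    est[t] = m

mse_filt = np.mean((est[100:] - true_mean[100:]) ** 2)
mse_raw = np.mean((z[100:] - true_mean[100:]) ** 2)
```

    The filtered mean tracks the drift with much lower error than the raw observations, which is the kind of compensation for incorrect signature means the study examines.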

  3. Unstructured Adaptive Grid Computations on an Array of SMPs

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Pramanick, Ira; Sohn, Andrew; Simon, Horst D.

    1996-01-01

Dynamic load balancing is necessary for parallel adaptive methods to solve unsteady CFD problems on unstructured grids. We present such a dynamic load balancing framework, called JOVE, in this paper. Results on a four-POWERnode POWER CHALLENGEarray demonstrated that load balancing gives significant performance improvements over no load balancing for such adaptive computations. The parallel speedup of JOVE, implemented using MPI on the POWER CHALLENGEarray, was significant, being as high as 31 for 32 processors. An implementation of JOVE that exploits an 'array of SMPs' architecture was also studied; this hybrid JOVE outperformed flat JOVE by up to 28% on the meshes and adaption models tested. With large, realistic meshes and actual flow-solver and adaption phases incorporated into JOVE, hybrid JOVE can be expected to yield significant advantages over flat JOVE, especially as the number of processors is increased, thus demonstrating the scalability of the array-of-SMPs architecture.

  4. Integrating Scientific Array Processing into Standard SQL

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and in an extension also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  5. Array algebra estimation in signal processing

    NASA Astrophysics Data System (ADS)

    Rauhala, U. A.

A general theory of linear estimators called array algebra estimation is interpreted in terms of multidimensional digital signal processing, mathematical statistics, and numerical analysis. The theory has emerged during the past decade from the new field of a unified vector, matrix and tensor algebra called array algebra. The broad concepts of array algebra and its estimation theory cover several modern computerized sciences and technologies, converting their established notations and terminology into one common language. Some concepts of digital signal processing are adopted into this language after a review of the principles of array algebra estimation and its predecessors in mathematical surveying sciences.

  6. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    SciTech Connect

    Disney, Adam; Reynolds, John

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  7. NASA Adaptive Multibeam Phased Array (AMPA): An application study

    NASA Technical Reports Server (NTRS)

    Mittra, R.; Lee, S. W.; Gee, W.

    1982-01-01

    The proposed orbital geometry for the adaptive multibeam phased array (AMPA) communication system is reviewed and some of the system's capabilities and preliminary specifications are highlighted. Typical AMPA user link models and calculations are presented, the principal AMPA features are described, and the implementation of the system is demonstrated. System tradeoffs and requirements are discussed. Recommendations are included.

  8. Techniques for radar imaging using a wideband adaptive array

    NASA Astrophysics Data System (ADS)

    Curry, Mark Andrew

    A microwave imaging approach that uses a small, wideband adaptive array is simulated and validated experimentally. The experimental 12-element linear array and microwave receiver use stepped-frequency CW signals from 2 to 3 GHz and receive backscattered energy from short-range objects in a +/-90° field of view. Discone antenna elements are used due to their wide temporal bandwidth, isotropic azimuth beam pattern and fixed phase center. It is also shown that these antennas have very low mutual coupling, which significantly reduces the calibration requirements. The MUSIC spectrum is used as a calibration tool. Spatial resampling is used to correct the dispersion effects, which, if uncompensated, cause severe reductions in detection and resolution at medium and large off-axis angles. Fourier processing provides range resolution, and the minimum variance spectral estimate is employed to resolve constant-range targets for improved angular resolution. Spatial smoothing techniques are used to generate signal-plus-interference covariance matrices at each range bin. Clutter affects the angular resolution of the array due to the increase in rank of the signal-plus-clutter covariance matrix, whereas at the same time the rank of this matrix is reduced for closely spaced scatterers due to signal coherence. A method is proposed to enhance angular resolution in the presence of clutter by an approximate signal subspace projection (ASSP) that maps the received signal space to a lower effective rank approximation. This projection operator has a scalar control parameter that is a function of the signal and clutter amplitude estimates. These operations are accomplished without using eigendecomposition. The low sidelobe levels allow the imaging of the integrated backscattering from the absorber cones in the chamber. This creates a fairly large clutter signature for testing ASSP. We can easily resolve 2 dihedrals placed at about 70% of a beamwidth apart, with a signal to clutter ratio
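
    Two ingredients named in this abstract, forward spatial smoothing (to restore covariance rank for coherent scatterers) and the minimum-variance (Capon) spectral estimate, can be sketched numerically. The simulation below is hedged: the 12-element half-wavelength line array, scatterer angles, and noise level are invented and unrelated to the experimental system described.

```python
import numpy as np

rng = np.random.default_rng(1)
M, L = 12, 8                      # physical elements, smoothed subarray length
d = 0.5                           # element spacing in wavelengths
angles_true = [-20.0, 15.0]       # two coherent scatterers (degrees)

def steering(theta_deg, m):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(m))

# Coherent sources share one waveform, so the unsmoothed covariance is rank-1.
T = 200
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
X = sum(steering(a, M)[:, None] * s[None, :] for a in angles_true)
X = X + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

# Forward spatial smoothing: average the covariances of overlapping subarrays.
R = np.zeros((L, L), dtype=complex)
for i in range(M - L + 1):
    Xi = X[i:i + L, :]
    R = R + Xi @ Xi.conj().T / T
R /= M - L + 1

# Minimum-variance (Capon) spectrum on the smoothed covariance.
Rinv = np.linalg.inv(R + 1e-3 * np.eye(L))
grid = np.arange(-90.0, 90.5, 0.5)
p = np.array([1.0 / np.real(steering(a, L).conj() @ Rinv @ steering(a, L))
              for a in grid])

# Pick the two strongest local maxima as angle estimates.
is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])
peaks = grid[1:-1][is_peak]
est = np.sort(peaks[np.argsort(p[1:-1][is_peak])[-2:]])
print(est)
```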

  9. Evolutionary Adaptive Discovery of Phased Array Sensor Signal Identification

    SciTech Connect

    Timothy R. McJunkin; Milos Manic

    2011-05-01

    Tomography from phased-array ultrasonics, used to create images of the internal properties and features of an object, is improved through many sophisticated methods of post-processing of data. One approach used to improve tomographic results is to prescribe the collection of more data, from different points of view, so that data fusion might have a richer data set to work from. This approach can lead to a rapid increase in the data that must be stored and processed, and it does not necessarily yield the needed data. This article describes a novel approach that uses the acquired data as a basis for adapting the sensor's focusing parameters to locate features in the material more precisely: two evolutionary methods of autofocusing on a returned signal are presented, coupled with derivations of the formulas for spatially locating the feature. Test results of the two novel evolutionary-based focusing (EBF) methods illustrate the improved signal strength and the corrected feature position obtained using the optimized focal timing parameters, termed Focused Delay Identification (FoDI).

  10. Study of large adaptive arrays for space technology applications

    NASA Technical Reports Server (NTRS)

    Berkowitz, R. S.; Steinberg, B.; Powers, E.; Lim, T.

    1977-01-01

    The research in large adaptive antenna arrays for space technology applications is reported. Specifically, two tasks were considered. The first was a system design study for accurate determination of the positions and the frequencies of sources radiating from the earth's surface that could be used for the rapid location of people or vehicles in distress. This system design study led to a nonrigid array about 8 km in size with means for locating the array element positions, receiving signals from the earth, and determining the source locations and frequencies of the transmitting sources. It is concluded that this system design is feasible and satisfies the desired objectives. The second task was an experiment to determine the largest earthbound array which could simulate a spaceborne experiment. It was determined that an 800 ft array would perform indistinguishably in both locations, and it is estimated that one several times larger would also serve satisfactorily. In addition, the power density spectrum of the phase difference fluctuations across a large array was measured. It was found that the spectrum falls off approximately as f^(-5/2).

  11. Square Kilometre Array Science Data Processing

    NASA Astrophysics Data System (ADS)

    Nikolic, Bojan; SDP Consortium, SKA

    2014-04-01

    The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor (SDP) is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further data products, archiving, and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of the SDP are:
    - identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput;
    - managing the high internal data flow rates efficiently;
    - a conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases;
    - system management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system.
    I will also present possible initial architectures for the SDP system that attempt to address these and other challenges.

  12. A recurrent neural network for adaptive beamforming and array correction.

    PubMed

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

    In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. To minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values within the feasible region, which is derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array-mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to find exact solutions under large-scale constraints.
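
    The paper's network solves a convex beamforming program with Lyapunov-stable dynamics. As a loose stand-in for such dynamics (not the authors' RNN), the sketch below runs an Euler-discretized gradient flow on the output power w^H R w with re-projection onto a unit look-direction constraint each step, and checks the result against the closed-form minimum-variance solution; the array, jammer, and step size are all invented.

```python
import numpy as np

M = 8
d = 0.5                                # half-wavelength element spacing

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(M))

# Interference-plus-noise covariance: one jammer at 30 degrees plus white noise.
a_j = steering(30.0)
R = 10.0 * np.outer(a_j, a_j.conj()) + np.eye(M)

s = steering(0.0)                      # look direction
w = s / M                              # feasible start: s^H w = 1

# Gradient flow on w^H R w, projected back onto the constraint s^H w = 1.
for _ in range(2000):
    w = w - 0.01 * (R @ w)                             # Euler step along -gradient
    w = w + s * (1 - s.conj() @ w) / (s.conj() @ s)    # re-project onto constraint

w_opt = np.linalg.solve(R, s)
w_opt = w_opt / (s.conj() @ w_opt)     # closed-form minimum-variance solution
print(np.allclose(w, w_opt, atol=1e-3))   # prints True
```

The step size must stay below 2 divided by the largest eigenvalue of R for the discretized flow to remain stable, which is one reason the continuous-time, provably stable network dynamics of the paper are attractive.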

  13. Adaptive Injection-locking Oscillator Array for RF Spectrum Analysis

    SciTech Connect

    Leung, Daniel

    2011-04-19

    A highly parallel radio frequency receiver using an array of injection-locking oscillators for on-chip, rapid estimation of signal amplitudes and frequencies is considered. The oscillators are tuned to different natural frequencies, and variable-gain amplifiers are used to provide negative feedback that adapts the locking bandwidth to the input signal, yielding a combined measure of input signal amplitude and frequency detuning. To further this effort, an array of 16 two-stage differential ring oscillators and 16 Gilbert-cell mixers is designed for 40-400 MHz operation. The injection-locking oscillator array is assembled on a custom printed-circuit board. Control and calibration are achieved by an on-board microcontroller.

  14. Adaptive multibeam phased array design for a Spacelab experiment

    NASA Technical Reports Server (NTRS)

    Noji, T. T.; Fass, S.; Fuoco, A. M.; Wang, C. D.

    1977-01-01

    The parametric tradeoff analyses and design for an Adaptive Multibeam Phased Array (AMPA) for a Spacelab experiment are described. This AMPA Experiment System was designed with particular emphasis on maximizing channel capacity and minimizing implementation and cost impacts for future austere maritime and aeronautical users operating with a low-gain hemispherical-coverage antenna element, low effective radiated power, and a low antenna gain-to-system noise temperature ratio.

  15. An adaptive array antenna for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Milne, Robert

    1990-01-01

    The design of an adaptive array antenna for land vehicle operation and its performance in an operational satellite system are described. Linear and circularly polarized antenna designs are presented. The acquisition and tracking operation of a satellite is described and the effect on the communications signal is discussed. A number of system requirements are examined that have a major impact on the antenna design. The results of environmental, power handling, and RFI testing are presented and potential problems are identified.

  16. Regularized estimate of the weight vector of an adaptive antenna array

    NASA Astrophysics Data System (ADS)

    Ermolayev, V. T.; Flaksman, A. G.; Sorokin, I. S.

    2013-02-01

    We consider an adaptive antenna array (AAA) with the maximum signal-to-noise ratio (SNR) at the output. The antenna configuration is assumed to be arbitrary. A rigorous analytical solution for the optimal weight vector of the AAA is obtained if the input process is defined by the noise correlation matrix and the useful-signal vector. On the basis of this solution, the regularized estimate of the weight vector is derived by using a limited number of input noise samples, which can be either greater or smaller than the number of array elements. Computer simulation results of adaptive signal processing indicate small losses in the SNR compared with the optimal SNR value. It is shown that the computing complexity of the proposed estimate is proportional to the number of noise samples, the number of external noise sources, and the squared number of array elements.
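
    The closed-form structure described above, a weight vector obtained by applying a (regularized) inverse of an estimated noise covariance to the useful-signal vector, can be sketched with simple diagonal loading; the paper's specific regularization is not reproduced here. The scenario, loading level, and snapshot count below are invented, with fewer noise samples than array elements to show the rank-deficient case the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 10, 8                    # array elements, noise-only snapshots (K < M)

def steering(theta_deg):
    k = np.pi * np.sin(np.radians(theta_deg))   # half-wavelength spacing
    return np.exp(1j * k * np.arange(M))

s = steering(0.0)               # useful-signal vector
a_i = steering(40.0)            # external noise (interference) source

# Noise-only training data: interferer of power 100 plus unit white noise.
i_wave = (10 / np.sqrt(2)) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
n = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
N = a_i[:, None] * i_wave + n
R_hat = N @ N.conj().T / K      # rank-deficient: at most rank K < M
R_true = 100 * np.outer(a_i, a_i.conj()) + np.eye(M)

gamma = 1.0                     # diagonal loading on the order of the noise power
w = np.linalg.solve(R_hat + gamma * np.eye(M), s)

def out_snr(w):
    return np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R_true @ w)

w_opt = np.linalg.solve(R_true, s)
loss_db = 10 * np.log10(out_snr(w_opt) / out_snr(w))
print(round(float(loss_db), 2))  # SNR loss (dB) relative to the optimum
```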

  17. Adaptive sensor array algorithm for structural health monitoring of helmet

    NASA Astrophysics Data System (ADS)

    Zou, Xiaotian; Tian, Ye; Wu, Nan; Sun, Kai; Wang, Xingwei

    2011-04-01

    The adaptive neural network is a standard technique used in nonlinear system estimation and learning applications for dynamic models. In this paper, we introduce an adaptive sensor fusion algorithm for a helmet structural health monitoring system, which is used to study the effects of ballistic/blast events on the helmet and human skull. An optical fiber pressure sensor array is installed inside the helmet. After implementing the adaptive estimation algorithm in the helmet system, a dynamic model for the sensor array was developed. The dynamic response characteristics of the sensor network are estimated from the pressure data by applying an adaptive control algorithm using an artificial neural network. With the estimated parameters and position data from the dynamic model, the pressure distribution over the whole helmet can be calculated by Bézier surface interpolation. The distribution pattern inside the helmet will be very helpful for improving helmet design to provide better protection to soldiers from head injuries.
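
    The "Bazier Surface" interpolation referred to in the record is presumably Bézier surface interpolation. A minimal sketch of evaluating a bicubic Bézier patch by de Casteljau's algorithm from a 4 × 4 grid of control values (hypothetical pressure readings, not data from the paper) follows.

```python
def de_casteljau(points, t):
    """Evaluate a 1-D Bezier curve at t by repeated linear interpolation."""
    pts = list(points)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

def bezier_surface(ctrl, u, v):
    """Tensor-product Bezier patch: evaluate each row at u, then across at v."""
    return de_casteljau([de_casteljau(row, u) for row in ctrl], v)

# Hypothetical 4x4 grid of pressure readings (kPa) used as control values.
ctrl = [[1.0, 1.2, 1.1, 0.9],
        [1.3, 1.8, 1.6, 1.0],
        [1.2, 1.7, 1.5, 1.1],
        [0.9, 1.1, 1.0, 0.8]]
print(bezier_surface(ctrl, 0.0, 0.0))   # -> 1.0 (reproduces the corner control value)
print(round(bezier_surface(ctrl, 0.5, 0.5), 4))
```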

  18. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, Param H.; Hunt, Arlon J.

    1986-01-01

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after alcogels are formed, of substituting a solvent, such as CO2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40 °C instead of at about 270 °C. The improved process provides increased yields of large-scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs, as made in accordance with the improved process, can replace the air gap within a double-glazed window, for example, to provide a substantial reduction in heat transfer. The thus-formed transparent aerogel arrays may also be utilized, for example, in windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high-energy elementary particles or cosmic rays.

  19. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, P.H.; Hunt, A.J.

    1985-09-04

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after alcogels are formed, of substituting a solvent, such as CO2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40 °C instead of at about 270 °C. The improved process provides increased yields of large-scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs, as made in accordance with the improved process, can replace the air gap within a double-glazed window, for example, to provide a substantial reduction in heat transfer. The thus-formed transparent aerogel arrays may also be utilized, for example, in windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high-energy elementary particles or cosmic rays.

  20. Array Signal Processing for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Veen, Alle Jan; Leshem, Amir; Boonstra, Albert Jan

    2004-06-01

    Radio astronomy forms an interesting application area for array signal processing techniques. Current synthesis imaging telescopes consist of a small number of identical dishes, which track a fixed patch in the sky and produce estimates of the time-varying spatial covariance matrix. The observations sometimes are distorted by interference, e.g., from radio, TV, radar or satellite transmissions. We describe some of the tools that array signal processing offers to filter out the interference, based on eigenvalue decompositions and factor analysis, which is a more general technique applicable to partially calibrated arrays. We consider detection of interference, spatial filtering techniques using projections, and discuss how a reference antenna pointed at the interferer can improve the performance. We also consider image formation and its relation to beamforming.

  1. Analysis of modified SMI method for adaptive array weight control

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Moses, R. L.

    1989-01-01

    An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.

  2. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and cross-spectral matrix subtraction. The method was seen to recover the primary signal level in SNRs as low as -29 dB and outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
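
    A minimal time-domain ANC loop in the spirit of the paper can be sketched: an LMS filter learns the path from a reference noise channel to the primary channel and subtracts its estimate, leaving the desired signal as the residual. Everything below (the "path", the buried tone, the filter length, the step size) is invented, and no wind-tunnel specifics are modeled.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
t = np.arange(n)

# Weak desired tone buried in background noise at the primary microphone.
signal = 0.1 * np.sin(2 * np.pi * 0.05 * t)
ref = rng.standard_normal(n)                   # reference: background noise
h = np.array([0.8, -0.4, 0.2])                 # hypothetical noise path
primary = signal + np.convolve(ref, h)[:n]

# LMS adaptive noise canceller: estimate the path output, subtract it.
L, mu = 8, 0.005
w = np.zeros(L)
out = np.zeros(n)
for k in range(L, n):
    x = ref[k - L + 1:k + 1][::-1]             # most recent L reference samples
    y = w @ x                                  # noise estimate
    out[k] = primary[k] - y                    # error = cleaned signal
    w += 2 * mu * out[k] * x                   # LMS weight update

# After convergence the residual should approach the tone alone.
tail = slice(n // 2, n)
err_before = np.mean((primary[tail] - signal[tail]) ** 2)
err_after = np.mean((out[tail] - signal[tail]) ** 2)
print(10 * np.log10(err_before / err_after))   # noise reduction in dB
```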

  3. Adaptive array for weak interfering signals: Geostationary satellite experiments

    NASA Astrophysics Data System (ADS)

    Steadman, Karl

    The performance of an experimental adaptive array is evaluated using signals from an existing geostationary satellite interference environment. To do this, an earth station antenna was built to receive signals from various geostationary satellites. In these experiments the received signals have a frequency of approximately 4 GHz (C-band) and a bandwidth of over 35 MHz. These signals are downconverted to a 69 MHz intermediate frequency in the experimental system. Using the downconverted signals, the performance of the experimental system for various signal scenarios is evaluated. In this situation, due to the inherent thermal noise, qualitative instead of quantitative test results are presented. It is shown that the experimental system can null up to two interfering signals well below the noise level. However, to avoid cancellation of the desired signal, the use of a steering vector is needed. Various methods to obtain an estimate of the steering vector are proposed.

  4. Cylindrical Antenna With Partly Adaptive Phased-Array Feed

    NASA Technical Reports Server (NTRS)

    Hussein, Ziad; Hilland, Jeff

    2003-01-01

    A proposed design for a phased-array-fed cylindrical-reflector microwave antenna would enable enhancement of the radiation pattern through partially adaptive amplitude and phase control of its edge radiating feed elements. Antennas based on this design concept would be attractive for use in radar (especially synthetic-aperture radar) and other systems that could exploit electronic directional scanning and in which there are requirements for specially shaped radiation patterns, including ones with low side lobes. One notable advantage of this design concept is that the transmitter/receiver modules feeding all the elements except the edge ones could be identical and, as a result, the antenna would cost less than in the cases of prior design concepts in which these elements may not be identical.

  5. Experimental Demonstration of Adaptive Infrared Multispectral Imaging using Plasmonic Filter Array

    PubMed Central

    Jang, Woo-Yong; Ku, Zahyun; Jeon, Jiyeon; Kim, Jun Oh; Lee, Sang Jun; Park, James; Noyola, Michael J.; Urbas, Augustine

    2016-01-01

    In our previous theoretical study, we performed target detection using a plasmonic sensor array incorporating the data-processing technique termed “algorithmic spectrometry”. We achieved the reconstruction of a target spectrum by extracting intensity at multiple wavelengths with high resolution from the image data obtained from the plasmonic array. The ultimate goal is to develop a full-scale focal plane array with a plasmonic opto-coupler in order to move towards the next generation of versatile infrared cameras. To this end, and as an intermediate step, this paper reports the experimental demonstration of adaptive multispectral imagery using fabricated plasmonic spectral filter arrays and proposed target detection scenarios. Each plasmonic filter was designed using periodic circular holes perforated through a gold layer, and an enhanced target detection strategy was proposed to refine the original spectrometry concept for spatial and spectral computation of the data measured from the plasmonic array. Both the spectrum of blackbody radiation and a metal ring object at multiple wavelengths were successfully reconstructed using the weighted superposition of plasmonic output images as specified in the proposed detection strategy. In addition, plasmonic filter arrays were theoretically tested on a target at extremely high temperature as a challenging scenario for the detection scheme. PMID:27721506

  6. Experimental Demonstration of Adaptive Infrared Multispectral Imaging using Plasmonic Filter Array

    NASA Astrophysics Data System (ADS)

    Jang, Woo-Yong; Ku, Zahyun; Jeon, Jiyeon; Kim, Jun Oh; Lee, Sang Jun; Park, James; Noyola, Michael J.; Urbas, Augustine

    2016-10-01

    In our previous theoretical study, we performed target detection using a plasmonic sensor array incorporating the data-processing technique termed “algorithmic spectrometry”. We achieved the reconstruction of a target spectrum by extracting intensity at multiple wavelengths with high resolution from the image data obtained from the plasmonic array. The ultimate goal is to develop a full-scale focal plane array with a plasmonic opto-coupler in order to move towards the next generation of versatile infrared cameras. To this end, and as an intermediate step, this paper reports the experimental demonstration of adaptive multispectral imagery using fabricated plasmonic spectral filter arrays and proposed target detection scenarios. Each plasmonic filter was designed using periodic circular holes perforated through a gold layer, and an enhanced target detection strategy was proposed to refine the original spectrometry concept for spatial and spectral computation of the data measured from the plasmonic array. Both the spectrum of blackbody radiation and a metal ring object at multiple wavelengths were successfully reconstructed using the weighted superposition of plasmonic output images as specified in the proposed detection strategy. In addition, plasmonic filter arrays were theoretically tested on a target at extremely high temperature as a challenging scenario for the detection scheme.
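
    The "weighted superposition of plasmonic output images" can be illustrated for a single spatial pixel: given known filter transmission curves, narrow-band intensities are recovered as weighted combinations of the filter outputs. The sketch below is a hedged stand-in, using invented Gaussian transmission curves and a regularized least-squares inverse for the weights, not the authors' detection strategy or measured filter responses.

```python
import numpy as np

n_filters, n_bands = 8, 8
band = np.arange(n_bands, dtype=float)
centers = np.linspace(0, n_bands - 1, n_filters)

# Hypothetical broad, overlapping transmission curves, one row per filter.
Fmat = np.exp(-0.5 * ((band[None, :] - centers[:, None]) / 1.0) ** 2)

true_spectrum = np.array([0.1, 0.3, 0.9, 1.0, 0.7, 0.4, 0.2, 0.1])
measurements = Fmat @ true_spectrum            # each filter's integrated output

# Weights that superpose the filter outputs to recover band intensities
# (regularized least-squares inverse of the transmission matrix).
W = np.linalg.solve(Fmat.T @ Fmat + 1e-9 * np.eye(n_bands), Fmat.T)
recovered = W @ measurements
print(np.round(recovered, 3))
```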

  7. Image processing on MPP-like arrays

    SciTech Connect

    Coletti, N.B.

    1983-01-01

    The desirability and suitability of using very large arrays of processors such as the Massively Parallel Processor (MPP) for processing remotely sensed images is investigated. The dissertation can be broken into two areas. The first area is the mathematical analysis of emulating the bitonic sorting network on an array of processors. This sort is useful in histogramming images that have a very large number of pixel values (or gray levels). The optimal number of routing steps required to emulate an N = 2^k × 2^k element network on a 2^n × 2^n array (k ≤ n ≤ 7), provided each processor contains one element before and after every merge sequence, is proved to be 14√N − 4log₂N − 14. Several existing emulations achieve this lower bound. The number of elements sorted dictates a particular sorting network, and hence the number of routing steps. It is established that the cardinality N = (3/4)·2^(2n) elements requires the absolute minimum number of routing steps, 8√3·√N − 4log₂N − (20 − 4log₂3). An algorithm achieving this bound is presented. The second area covers implementations of image processing tasks: in particular, the histogramming of large numbers of gray levels, geometric distortion determination and its efficient correction, fast Fourier transforms, and statistical clustering are investigated.
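
    The routing-step analysis above concerns emulating the bitonic sorting network on a processor array. For readers unfamiliar with the network itself, here is a minimal sequential sketch of bitonic merge sort (power-of-two input length assumed); it makes no attempt to model the MPP emulation or its routing costs.

```python
def bitonic_sort(a, ascending=True):
    """Recursive bitonic sort; len(a) must be a power of two."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    first = bitonic_sort(a[:mid], True)       # ascending half...
    second = bitonic_sort(a[mid:], False)     # ...plus descending half = bitonic
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    b = list(a)
    for i in range(mid):                      # compare-exchange stage
        if (b[i] > b[i + mid]) == ascending:
            b[i], b[i + mid] = b[i + mid], b[i]
    return (bitonic_merge(b[:mid], ascending)
            + bitonic_merge(b[mid:], ascending))

data = [7, 3, 14, 1, 9, 0, 12, 5]
print(bitonic_sort(data))                     # -> [0, 1, 3, 5, 7, 9, 12, 14]
```

The fixed, data-independent compare-exchange pattern is what makes the network attractive on a lock-step processor array like the MPP.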

  8. Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration

    SciTech Connect

    Ringdal, F; Harris, D B; Dodge, D; Gibbons, S J

    2009-07-23

    extend detection to lower magnitudes. This year we addressed a problem long known to limit the acceptance of correlation detectors in practice: the labor-intensive development of templates. For example, existing design methods cannot keep pace with rapidly unfolding aftershock sequences. We successfully built and tested an object-oriented framework (as described in our 2005 proposal) for autonomous calibration of waveform correlation detectors for an array. The framework contains a dynamic list of detectors of several types operating on a continuous array data stream. The list has permanent detectors: beamforming power (STA/LTA) detectors, which detect signals not yet characterized by a waveform template. The framework also contains an arbitrary number of subspace detectors, which are launched automatically using the waveforms from validated power detections as templates. The implementation is very efficient, so the computational cost of adding subspace detectors is low. The framework contains a supervisor that oversees the validation of power detections and periodically halts the processing to revise the portfolio of detectors. The process of revision consists of collecting the waveforms from all detections, performing cross-correlations pairwise among all waveforms, clustering the detections using correlations as a distance measure, then creating a new subspace detector from each cluster. The collection of new subspace detectors replaces the existing portfolio and processing of the data stream resumes. This elaborate scheme was implemented to prevent proliferation of closely related subspace detectors. The method performed very well on several simple sequences: 2005 'drumbeat' events observed locally at Mt. St. Helens, and the 2003 Orinda, CA aftershock sequence. Our principal test entailed detection of the aftershocks of the San Simeon earthquake using the NVAR array; in this case, the system automatically detected and categorized
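
    The framework's permanent detectors are beamforming power (STA/LTA) detectors. A single-channel, hedged sketch of the STA/LTA idea on synthetic data follows; the window lengths, threshold, and trace are invented and much simpler than an array-beam implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6000
trace = rng.standard_normal(n)                 # background noise
onset = 3000
trace[onset:onset + 400] += 5 * rng.standard_normal(400)   # transient arrival

def sta_lta(x, n_sta=50, n_lta=1000):
    """Short-term over long-term average power, both windows causal."""
    c = np.concatenate(([0.0], np.cumsum(x ** 2)))
    ratio = np.zeros(len(x))
    for k in range(n_sta + n_lta, len(x)):
        sta = (c[k + 1] - c[k + 1 - n_sta]) / n_sta
        lta = (c[k + 1 - n_sta] - c[k + 1 - n_sta - n_lta]) / n_lta
        ratio[k] = sta / lta
    return ratio

r = sta_lta(trace)
trigger = int(np.argmax(r > 4.0))              # first sample above threshold
print(onset <= trigger <= onset + 100)
```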

  9. Gallium arsenide processing for gate array logic

    NASA Technical Reports Server (NTRS)

    Cole, Eric D.

    1989-01-01

    The development of a reliable and reproducible GaAs process was initiated for applications in gate array logic. Gallium arsenide is an extremely important material for high-speed electronic applications in both digital and analog circuits: since its electron mobility is 3 to 5 times that of silicon, devices fabricated with it can switch faster. Unfortunately, GaAs is an extremely difficult material to process compared with silicon, and since it contains arsenic, GaAs can be quite dangerous (toxic), especially during some heating steps. The first stage of the research was directed at developing a simple process to produce GaAs MESFETs. The MESFET (MEtal Semiconductor Field Effect Transistor) is the most useful, practical and simple active device which can be fabricated in GaAs. It utilizes an ohmic source and drain contact separated by a Schottky gate. The gate width is typically a few microns. Several process steps were required to produce a good working device, including ion implantation, photolithography, thermal annealing, and metal deposition. A process was designed to reduce the total number of steps to a minimum so as to reduce possible errors. The first run produced no good devices. The problem occurred during an aluminum etch step while defining the gate contacts. It was found that the chemical etchant attacked the GaAs, causing trenching and subsequent severing of the active gate region from the rest of the device. Thus all devices appeared as open circuits. This problem is being corrected, and since it occurred in the last step of the process, the correction should be successful. The second planned stage involves the circuit assembly of the discrete MESFETs into logic gates for test and analysis. Finally, the third stage is to incorporate the designed process with the tested circuit in a layout that would produce the gate array as a GaAs integrated circuit.

  10. Superresolution with seismic arrays using empirical matched field processing

    NASA Astrophysics Data System (ADS)

    Harris, David B.; Kvaerna, Tormod

    2010-09-01

    Scattering and refraction of seismic waves can be exploited with empirical-matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term `superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2 per cent accuracy of 549 explosions conducted by closely spaced mines in northwest Russia. The mines are observed at 340-410 km range and are separated by as little as 3 km. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber (FK) and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane-wave model.
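
    The calibrated narrowband wavefield patterns and the power estimation step can be illustrated with a toy simulation. The sketch below is hedged: the array size, band count, the random "fingerprints" standing in for calibrated wavefield structure, and the Bartlett-style matched power are invented stand-ins for the paper's empirical calibration from repeating events.

```python
import numpy as np

rng = np.random.default_rng(6)
M, B = 9, 12                 # array elements, narrow frequency bands

# Two closely spaced mines: scattering gives each a distinct, band-dependent
# complex wavefield "fingerprint" across the array (random here for illustration).
def fingerprint():
    v = rng.standard_normal((B, M)) + 1j * rng.standard_normal((B, M))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

replica = {"mine_A": fingerprint(), "mine_B": fingerprint()}

def matched_power(obs, cal):
    """Average matched-field power over bands against calibrated replicas."""
    return float(np.mean(np.abs(np.sum(obs * cal.conj(), axis=1)) ** 2))

# New event from mine A: its fingerprint plus measurement noise in each band.
obs = replica["mine_A"] + 0.2 * (rng.standard_normal((B, M))
                                 + 1j * rng.standard_normal((B, M)))
scores = {name: matched_power(obs, cal) for name, cal in replica.items()}
print(max(scores, key=scores.get))
```

Because the calibrated patterns of the two sources are nearly orthogonal across the aperture, the matched power separates them even though a plane-wave (FK) model could not.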

  11. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    SciTech Connect

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2% accuracy of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.

  12. Color filter array demosaicing: an adaptive progressive interpolation based on the edge type

    NASA Astrophysics Data System (ADS)

    Dong, Qiqi; Liu, Zhaohui

    2015-10-01

    Color filter array (CFA) sampling is one of the key elements that let single-sensor digital cameras produce color images. The Bayer CFA is the most commonly used pattern. In this array structure, the sampling frequency of green is twice that of red or blue, consistent with the greater sensitivity of human eyes to green. However, each sensor pixel samples only one of the three primary color values. To render a full-color image, an interpolation process, commonly referred to as CFA demosaicing, is required to estimate the other two missing color values at each pixel. In this paper, we explore an adaptive progressive interpolation algorithm based on the edge type. The proposed demosaicing method consists of two successive steps: an interpolation step that estimates missing color values according to the various edges, and a post-processing step based on iterative interpolation.
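
    The edge-adaptive interpolation step can be illustrated with a minimal sketch (a simplification, not the paper's method: only the green estimate at a single non-green Bayer site, with a plain two-direction gradient test):

```python
# Edge-directed green interpolation at a red/blue Bayer site: interpolate
# along the direction with the smaller gradient, to avoid averaging across
# an edge.
import numpy as np

def green_at(cfa, r, c):
    """Estimate G at a non-green site (r, c) of a Bayer mosaic `cfa`."""
    dh = abs(cfa[r, c - 1] - cfa[r, c + 1])   # horizontal green gradient
    dv = abs(cfa[r - 1, c] - cfa[r + 1, c])   # vertical green gradient
    if dh < dv:                               # smoother horizontally
        return (cfa[r, c - 1] + cfa[r, c + 1]) / 2.0
    if dv < dh:                               # smoother vertically
        return (cfa[r - 1, c] + cfa[r + 1, c]) / 2.0
    return (cfa[r, c - 1] + cfa[r, c + 1]
            + cfa[r - 1, c] + cfa[r + 1, c]) / 4.0

# A vertical edge: left columns dark, right column bright.
cfa = np.array([[10., 10., 90.],
                [10., 10., 90.],
                [10., 10., 90.]])
print(green_at(cfa, 1, 1))  # 10.0 -- interpolates along the edge, not across
```

    A non-adaptive bilinear estimate at the same pixel would return 30.0, blurring the edge; classifying the edge first is what the "adaptive" in the title buys.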

  13. Array signal processing in the NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we describe the benefits of arraying and its past as well as expected future use. The signal-processing aspects of the array system are described. Field measurements from the tracking of actual spacecraft are also presented.

  14. Adaptive ground implemented phased array. [evaluation to overcome radio frequency interference characteristics of TDRS VHF return link

    NASA Technical Reports Server (NTRS)

    Smith, J. M.

    1973-01-01

    Tests were conducted to determine the feasibility of using an adaptive ground implemented phased array (AGIPA) to overcome the limitations of the radio-frequency-interference-limited, low-data-rate Tracking and Data Relay Satellite VHF return link. A feasibility demonstration model of a single-user-channel AGIPA system was designed, developed, fabricated, and evaluated. By scaling the frequency and aperture geometry from VHF to S-band, the system performance was more easily demonstrated in the controlled environment of an anechoic chamber. The testing procedure employs an AGIPA in which received signals from each element of the array are processed on the ground to form an adaptive, independent, computer controlled beam for each user.

  15. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by a multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  16. An adaptive signal-processing approach to online adaptive tutoring.

    PubMed

    Bergeron, Bryan; Cline, Andrew

    2011-01-01

    Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.

  17. An experimental SMI adaptive antenna array simulator for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald S.; Gupta, Inder J.

    1991-01-01

    An experimental sample matrix inversion (SMI) adaptive antenna array for suppressing weak interfering signals is described. The experimental adaptive array uses a modified SMI algorithm to increase the interference suppression. In the modified SMI algorithm, the sample covariance matrix is redefined to reduce the effect of thermal noise on the weights of an adaptive array. This is accomplished by subtracting a fraction of the smallest eigenvalue of the original covariance matrix from its diagonal entries. The test results obtained using the experimental system are compared with theoretical results, and the two show good agreement.
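
    The covariance modification described above is easy to sketch (illustrative numbers; an ensemble covariance stands in for the finite-sample estimate so that the effect is deterministic):

```python
# Modified-SMI sketch: subtracting a fraction of the smallest eigenvalue
# from the covariance diagonal de-emphasizes thermal noise and deepens the
# null placed on a weak interferer.
import numpy as np

n_el = 4
steer = np.ones(n_el, dtype=complex)               # look-direction vector
jam = np.exp(1j * np.pi * 0.7 * np.arange(n_el))   # weak interferer signature
R = 0.09 * np.outer(jam, jam.conj()) + 0.5 * np.eye(n_el)  # jammer below noise

def smi_weights(R, steer, frac):
    lam_min = np.linalg.eigvalsh(R)[0]             # smallest eigenvalue
    Rm = R - frac * lam_min * np.eye(len(R))       # modified covariance
    w = np.linalg.solve(Rm, steer)
    return w / (steer.conj() @ w)                  # unit look-direction gain

# Jammer gain (relative to unit look-direction gain) for standard vs.
# modified SMI.
rej = {f: abs(smi_weights(R, steer, f).conj() @ jam) for f in (0.0, 0.9)}
print(rej)  # the frac=0.9 weights place a markedly deeper null on the jammer
```

    Because every eigenvalue shrinks by the same amount, the weak interferer's eigenvalue becomes relatively larger than the thermal-noise eigenvalues, so inverting the modified matrix attenuates the interference subspace more strongly.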

  18. Photoreceptor processes in visual adaptation.

    PubMed

    Ripps, H; Pepperberg, D R

    1987-01-01

    In this paper we have stressed two experimental results in need of explanation: (i) the reduced efficacy with which (remaining, abundant) rhodopsin in the light-adapted receptor mediates the flash response; and (ii) the disparity in conditions of irradiation (weak background vs. extensive bleaching) leading to equivalent conditions of threshold. The model discussed above suggests, in molecular terms, a possible basis for both properties of receptor adaptation. On the view developed here, property (i) derives from the ability of photoactivated or bleached pigment (R or B) to restrict dramatically the availability of a substance required for phototransduction. Property (ii) derives in large part from the pronounced disparity in the effectiveness of R (during illumination) and B (remaining after illumination) in reducing the availability of this substance. On this view, the "equivalence" of threshold elevation in states of light- vs. dark-adaptation derives from an overall equality of a product of factors (Q, Etot/Es, and J of equation 2). Under all but extreme conditions, this aggregate of factors is dominated by the term Etot/Es, reflecting the functional state of E. PMID:3317149

  19. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), . . . , xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term `mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
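
    A minimal sketch of the P-vector iteration referred to above (an assumed form, W ← W + μ(P − X Xᵀ W), whose mean converges to the Wiener solution R⁻¹P; the covariance and P-vector here are synthetic):

```python
# P-vector LMS-style update: d(n) is never observed, only the a priori
# cross-correlation vector P = E[X(n) d(n)] is used in the gradient step.
import numpy as np

rng = np.random.default_rng(2)
L, mu, n_iter = 3, 0.01, 20000

A = rng.standard_normal((L, L))
R = A @ A.T / L + np.eye(L)              # data covariance (well conditioned)
P = rng.standard_normal(L)               # assumed-known cross-correlation
w_opt = np.linalg.solve(R, P)            # Wiener solution R^{-1} P

chol = np.linalg.cholesky(R)
w = np.zeros(L)
for _ in range(n_iter):
    x = chol @ rng.standard_normal(L)        # data vector with covariance R
    w += mu * (P - x * (x @ w))              # stochastic gradient step

print(np.round(w, 2), np.round(w_opt, 2))    # the iterate hovers near w_opt
```

    As with ordinary LMS, the step size trades convergence speed against steady-state misadjustment; the iterate fluctuates around w_opt rather than settling exactly on it.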

  20. Adaptive-array Electron Cyclotron Emission diagnostics using data streaming in a Software Defined Radio system

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Hamasaki, M.; Fujisawa, A.; Nagashima, Y.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; the QUEST Team

    2016-04-01

    Measurement of the Electron Cyclotron Emission (ECE) spectrum is one of the most popular electron temperature diagnostics in nuclear fusion plasma research. A 2-dimensional ECE imaging system was developed with an adaptive-array approach. A radio-frequency (RF) heterodyne detection system with Software Defined Radio (SDR) devices and a phased-array receiver antenna was used to measure the phase and amplitude of the ECE wave. The SDR heterodyne system could continuously measure the phase and amplitude with sufficient accuracy and time resolution, whereas the previous digitizer system could only acquire data at specific times. Robust streaming phase measurements for adaptive-arrayed continuous ECE diagnostics were demonstrated using Fast Fourier Transform (FFT) analysis with the SDR system. The emission field pattern was reconstructed using adaptive-array analysis. The reconstructed profiles were discussed in comparison with profiles calculated from coherent single-frequency radiation from the phased-array antenna.

  1. Optical Profilometers Using Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Hall, Gregory A.; Youngquist, Robert; Mikhael, Wasfy

    2006-01-01

    A method of adaptive signal processing has been proposed as the basis of a new generation of interferometric optical profilometers for measuring surfaces. The proposed profilometers would be portable, hand-held units. Sizes could be thus reduced because the adaptive-signal-processing method would make it possible to substitute lower-power coherent light sources (e.g., laser diodes) for white light sources and would eliminate the need for most of the optical components of current white-light profilometers. The adaptive-signal-processing method would make it possible to attain scanning ranges of the order of decimeters in the proposed profilometers.

  2. Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Ianculescu, G. D.; Klop, J. J.

    1992-01-01

    Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom are designed using a continuous rigid body model of the solar array gimbal assembly containing both linear and nonlinear dynamics due to various friction components. The robustness of the design solution is examined by performing a series of sensitivity analysis studies. Adaptive control strategies are examined in order to compensate for the unfavorable effect of static nonlinearities, such as dead-zone uncertainties.

  3. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.

    PubMed

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-01-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched method in programming is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications. PMID:26978363
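
    A power-minimization STAP weight computation of the kind such a testbed implements can be sketched as follows (a CPU/NumPy stand-in for the GPU path; the array geometry, jammer model, and unit-first-tap constraint are illustrative assumptions, exploiting the fact that GNSS signals sit below the noise floor and need no explicit steering):

```python
# Space-time adaptive processing sketch: stack n_tap delayed copies of each
# antenna into space-time snapshots, then minimize output power subject to
# unit gain on the first space-time tap, w = R^{-1} c / (c^H R^{-1} c).
import numpy as np

rng = np.random.default_rng(3)
n_ant, n_tap, n_snap = 4, 3, 400
dim = n_ant * n_tap

# Wideband jammer: temporally white waveform with a fixed spatial signature.
sig = np.exp(1j * np.pi * 0.7 * np.arange(n_ant))
jam_wave = rng.standard_normal(n_snap + n_tap)
x = 10.0 * sig[:, None] * jam_wave[None, :]          # jammer ~20 dB over noise
x = x + (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))

# Space-time snapshots: one stacked (antenna x tap) vector per sample time.
snaps = np.stack([np.concatenate([x[:, n - k] for k in range(n_tap)])
                  for n in range(n_tap, n_tap + n_snap)], axis=1)
R = snaps @ snaps.conj().T / n_snap                  # sample covariance

c = np.zeros(dim, dtype=complex)
c[0] = 1.0                                           # unit gain on first tap
w = np.linalg.solve(R, c)
w /= c.conj() @ w                                    # constrained STAP weights

p_in = np.mean(np.abs(snaps[0]) ** 2)                # reference-channel power
p_out = np.mean(np.abs(w.conj() @ snaps) ** 2)       # array output power
print(f"jammer+noise in: {p_in:.1f}, after STAP: {p_out:.1f}")
```

    The batched GPU formulation in the paper amounts to computing many such covariance accumulations and solves in parallel; the nulling math itself is unchanged.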

  4. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.

    PubMed

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-03-11

    Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched method in programming is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications.

  5. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    PubMed Central

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-01-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched method in programming is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications. PMID:26978363

  6. Implementation and use of systolic array processes

    SciTech Connect

    Kung, H.T.

    1983-01-01

    Major efforts are now underway to use systolic array processors in large, real-life applications. The author examines various implementation issues and alternatives, the latter from the viewpoints of flexibility and interconnection topologies. He then identifies some work that is essential to the eventual wide use of systolic array processors, such as the development of building blocks, system support and suitable algorithms. 24 references.
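
    The systolic principle is easy to demonstrate with a cycle-by-cycle software simulation of one classic building block, a FIR filter array in which the weights stay put, samples march through at half the rate of the partial sums, and valid outputs emerge on odd cycles (a textbook design, not one of the systems discussed in the record above):

```python
def systolic_fir(x, w):
    """Cycle-accurate simulation of a 1-D systolic FIR array.

    Each cell holds one stationary tap, multiplies the sample currently in
    its register, and adds the product to the partial sum flowing past.
    Samples advance every other cycle; partial sums advance every cycle.
    """
    k = len(w)
    x_cells = [0.0] * k                 # sample register in each cell
    y_pipe = [0.0] * k                  # partial-sum register after each cell
    feed = list(x) + [0.0] * k          # trailing zeros flush the pipeline
    out = []
    for t in range(2 * (len(x) + k)):
        if t % 2 == 0:                  # samples move at half rate
            x_cells = [feed[t // 2]] + x_cells[:-1]
        # All cells fire simultaneously from the previous cycle's registers.
        y_pipe = [(y_pipe[i - 1] if i else 0.0) + w[i] * x_cells[i]
                  for i in range(k)]
        if t % 2 == 1:                  # valid results appear on odd cycles
            out.append(y_pipe[-1])
    return out[: len(x) + k - 1]        # full convolution of x with w

print(systolic_fir([1.0, 2.0, 3.0], [0.5, 0.25]))  # -> [0.5, 1.25, 2.0, 0.75]
```

    Every cell does one multiply-accumulate per cycle with only nearest-neighbor communication, which is exactly the property that lets the structure be tiled into the hardware building blocks the record discusses.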

  7. Array model interpolation and subband iterative adaptive filters applied to beamforming-based acoustic echo cancellation.

    PubMed

    Bai, Mingsian R; Chi, Li-Wen; Liang, Li-Huang; Lo, Yi-Yang

    2016-02-01

    In this paper, an evolutionary exposition is given in regard to the enhancing strategies for acoustic echo cancellers (AECs). A fixed beamformer (FBF) is utilized to focus on the near-end speaker while suppressing the echo from the far end. In reality, the array steering vector could differ considerably from the ideal freefield plane wave model. Therefore, an experimental procedure is developed to interpolate a practical array model from the measured frequency responses. Subband (SB) filtering with polyphase implementation is exploited to accelerate the cancellation process. Generalized sidelobe canceller (GSC) composed of an FBF and an adaptive blocking module is combined with AEC to maximize cancellation performance. Another enhancement is an internal iteration (IIT) procedure that enables efficient convergence in the adaptive SB filters within a sample time. Objective tests in terms of echo return loss enhancement (ERLE), perceptual evaluation of speech quality (PESQ), word recognition rate for automatic speech recognition (ASR), and subjective listening tests are conducted to validate the proposed AEC approaches. The results show that the GSC-SB-AEC-IIT approach has attained the highest ERLE without speech quality degradation, even in double-talk scenarios. PMID:26936567
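
    The adaptive-cancellation core that the ERLE figure measures can be sketched with a single full-band NLMS filter (a deliberate simplification of the subband iterative scheme above; the echo path, noise floor, and step size are invented):

```python
# Full-band NLMS acoustic echo canceller: identify the echo path from the
# far-end reference and subtract the echo estimate from the microphone.
import numpy as np

rng = np.random.default_rng(5)
N, taps, mu = 20000, 32, 0.5

far = rng.standard_normal(N)                       # far-end reference signal
h_echo = rng.standard_normal(taps) * np.exp(-0.2 * np.arange(taps))
mic = np.convolve(far, h_echo)[:N]                 # echo picked up at the mic
mic = mic + 0.001 * rng.standard_normal(N)         # small sensor-noise floor

w = np.zeros(taps)
err = np.zeros(N)
buf = np.zeros(taps)                               # most-recent-first history
for n in range(N):
    buf = np.roll(buf, 1)
    buf[0] = far[n]
    y = w @ buf                                    # echo estimate
    err[n] = mic[n] - y                            # echo-cancelled output
    w += mu * err[n] * buf / (buf @ buf + 1e-6)    # NLMS update

erle = 10 * np.log10(np.mean(mic[-2000:] ** 2) / np.mean(err[-2000:] ** 2))
print(f"ERLE ~ {erle:.1f} dB")
```

    The subband and internal-iteration refinements in the paper attack this filter's two weaknesses, slow convergence for colored speech input and the cost of long full-band filters, without changing the underlying update.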

  8. Array Processing in the Cloud: the rasdaman Approach

    NASA Astrophysics Data System (ADS)

    Merticariu, Vlad; Dumitru, Alex

    2015-04-01

    The multi-dimensional array data model is gaining more and more attention when dealing with Big Data challenges in a variety of domains such as climate simulations, geographic information systems, medical imaging or astronomical observations. Solutions provided by classical Big Data tools such as Key-Value Stores and MapReduce, as well as traditional relational databases, proved to be limited in domains associated with multi-dimensional data. This problem has been addressed by the field of array databases, in which systems provide database services for raster data, without imposing limitations on the number of dimensions that a dataset can have. Examples of datasets commonly handled by array databases include 1-dimensional sensor data, 2-D satellite imagery, 3-D x/y/t image time series as well as x/y/z geophysical voxel data, and 4-D x/y/z/t weather data. And this can grow as large as simulations of the whole universe when it comes to astrophysics. rasdaman is a well established array database, which implements many optimizations for dealing with large data volumes and operation complexity. Among those, the latest one is intra-query parallelization support: a network of machines collaborate for answering a single array database query, by dividing it into independent sub-queries sent to different servers. This enables massive processing speed-ups, which promise solutions to research challenges on multi-Petabyte data cubes. There are several correlated factors which influence the speedup that intra-query parallelisation brings: the number of servers, the capabilities of each server, the quality of the network, the availability of the data to the server that needs it in order to compute the result and many more. 
In the effort of adapting the engine to cloud processing patterns, two main components have been identified: one that handles communication and gathers information about the arrays sitting on every server, and a processing unit responsible for dividing the work

  9. MSAT-X phased array antenna adaptions to airborne applications

    NASA Technical Reports Server (NTRS)

    Sparks, C.; Chung, H. H.; Peng, S. Y.

    1988-01-01

    The Mobile Satellite Experiment (MSAT-X) phased array antenna is being modified to meet future requirements. The proposed system consists of two high gain antennas mounted on each side of a fuselage, and a low gain antenna mounted on top of the fuselage. Each antenna is an electronically steered phased array based on the design of the MSAT-X antenna. A beamforming network is connected to the array elements via coaxial cables. It is essential that the proposed antenna system be able to provide an adequate communication link over the required space coverage, which is 360 degrees in azimuth and from 20 degrees below the horizon to the zenith in elevation. Alternative design concepts are suggested. Both open loop and closed loop backup capabilities are discussed. Typical antenna performance data are also included.

  10. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean-square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.

  11. Simulation and Data Processing for Ultrasonic Phased-Arrays Applications

    NASA Astrophysics Data System (ADS)

    Chaffaï-Gargouri, S.; Chatillon, S.; Mahaut, S.; Le Ber, L.

    2007-03-01

    The use of phased-array techniques has considerably extended the domain of applications and the performance of ultrasonic methods in complex configurations. Their adaptability offers great freedom in conceiving the inspection, leading to a wide range of functionalities gathering electronic commutation, application of different delay laws, and so on. This advantage makes it possible to circumvent the difficulties encountered with more classical techniques, especially when the inspection is assisted by simulation at the different stages: probe design (optimization of the number and characteristics of the elements), evaluation of the performance in terms of flaw detection (zone coverage) and characterization, driving the array (computation of adapted delay laws), and finally analyzing the results (versatile model-based imaging tools allowing, in particular, location of the data in real space). The CEA is strongly involved in the development of efficient simulation-based tools adapted to these needs. In this communication we present the recent advances made at CEA in this field and show several examples of complex NDT phased-array applications. On these cases we show the interest and performance of simulation-assisted array design, array driving, and data analysis.

  12. Low-latency adaptive optics system processing electronics

    NASA Astrophysics Data System (ADS)

    Duncan, Terry S.; Voas, Joshua K.; Eager, Robert J.; Newey, Scott C.; Wynia, John L.

    2003-02-01

    Extensive system modeling and analysis clearly show that system latency is a primary performance driver in closed-loop adaptive optical systems. With careful attention to all sensing, processing, and controlling components, system latency can be significantly reduced. Upgrades to the Starfire Optical Range (SOR) 3.5-meter telescope facility adaptive optical system have resulted in a reduction in overall latency from 660 μsec to 297 μsec. Future efforts will reduce the system latency even further, to the 170 μsec range. The changes improve system bandwidth significantly by reducing the "age" of the correction that is applied to the deformable mirror. Latency reductions have been achieved by changing the pixel readout pattern and increasing the readout rate on the wavefront sensor, utilizing a new high-speed field-programmable gate array (FPGA) based wavefront processor, doubling the processing rate of the real-time reconstructor, and streamlining the operation of the deformable mirror drivers.

  13. Adaptive scene-based nonuniformity correction method for infrared-focal plane arrays

    NASA Astrophysics Data System (ADS)

    Torres, Sergio N.; Vera, Esteban M.; Reeves, Rodrigo A.; Sobarzo, Sergio K.

    2003-08-01

    The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise. In this paper we present an enhanced adaptive scene-based non-uniformity correction (NUC) technique. The method simultaneously estimates the detector parameters and performs the non-uniformity compensation using a neural network approach. In addition, the proposed method does not make any assumption about the kind or amount of non-uniformity present in the raw data. The strength and robustness of the proposed method rely on avoiding ghosting artifacts through the use of optimization techniques in the parameter-estimation learning process, such as momentum, regularization, and an adaptive learning rate. The proposed method has been tested with video sequences of simulated and real infrared data taken with an InSb IRFPA, reaching high correction levels, reducing the fixed-pattern noise, decreasing the ghosting, and obtaining an effective frame-by-frame adaptive estimation of each detector's gain and offset.

  14. Adaptive array technique for differential-phase reflectometry in QUEST

    SciTech Connect

    Idei, H. Hanada, K.; Zushi, H.; Nagata, K.; Mishra, K.; Itado, T.; Akimoto, R.; Yamamoto, M. K.

    2014-11-15

    A Phased Array Antenna (PAA) was considered for the launching and receiving antennas in reflectometry to attain good directivity in the applied microwave range. A well-focused beam was obtained in a launching-antenna application, and differential-phase evolution was properly measured by using a metal reflector plate in a proof-of-principle experiment at low-power test facilities. Differential-phase evolution was also evaluated by using the PAA in the Q-shu University Experiment with Steady State Spherical Tokamak (QUEST). A beam-forming technique was applied in receiving phased-array antenna measurements. In the QUEST device, which should be considered a large oversized cavity, a significant standing-wave effect was observed as perturbed phase evolution. A new approach using the derivative of the measured field with respect to the propagation wavenumber was proposed to eliminate the standing-wave effect.

  15. An experimental SMI adaptive antenna array for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Gupta, I. J.

    1989-01-01

    A modified sample matrix inversion (SMI) algorithm designed to increase the suppression of weak interference is implemented on an existing experimental array system. The algorithm itself is fully described, as are a number of issues concerning its implementation and evaluation, such as sample scaling, snapshot formation, weight normalization, power calculation, and system calibration. Several experiments show that the steady-state performance (i.e., many snapshots are used to calculate the array weights) of the experimental system compares favorably with its theoretical performance. It is demonstrated that standard SMI does not yield adequate suppression of weak interference. Modified SMI is then used to experimentally increase this suppression by as much as 13 dB.

  16. Regular Arrays of QDs by Solution Processing

    NASA Astrophysics Data System (ADS)

    Oliva, Brittany L.

    2011-12-01

    Hydrophilic silicon and germanium quantum dots were synthesized by a "bottom-up" method utilizing micelles to control particle size. Liquid phase deposition of silica on these quantum dots was successful with and without DTAB (dodecyltrimethylammonium bromide) as a surfactant to yield uniform spheres. Coating the quantum dots in the presence of DTAB allowed for better size control. The silica coated quantum dots were then arrayed in three dimensions using a vertical deposition technique on quartz slides or ITO glass. UV-vis absorbance, AFM, SEM, and TEM images were used to analyze the particles at every stage. The photoconductivity of the arrays was tested, and the cells were found to be conductive in areas.

  17. The application of systolic arrays to radar signal processing

    NASA Astrophysics Data System (ADS)

    Spearman, R.; Spracklen, C. T.; Miles, J. H.

    The design of a systolic array processor radar system is examined, and its performance is compared to that of a conventional radar processor. It is shown how systolic arrays can be used to replace the boards of high-speed logic normally associated with a high-performance radar and to implement all of the normal processing functions associated with such a system. Multifunctional systolic arrays are presented that have the flexibility associated with a general-purpose digital processor but the speed associated with fixed-function logic arrays.

  18. Interference cancellation in RF signals using adaptive array techniques

    NASA Astrophysics Data System (ADS)

    Brown, Mark E.

    1990-12-01

    This study investigated the effectiveness of the least mean squares (LMS) algorithm against various types of common jammers. The LMS algorithm was implemented using the block oriented systems simulator (BOSS) and inserted at the output of a two-element antenna array configured with one-half wavelength spacing. A quadrature hybrid signal structure was used. The array was then tested against a barrage and a sweep jammer. The barrage jammer testing consisted of individually varying each of the three available jammer parameters: power, frequency, and angle of arrival. The sweep jammer testing likewise consisted of individually varying power, sweep frequency, and angle of arrival. The results of the simulation showed that the LMS algorithm in combination with the quadrature hybrid was very effective against both the barrage and sweep jammers, providing a 55 dB null in the barrage jammer cases and a 50 dB null in the sweep jammer case.
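A minimal complex-LMS sketch of the same setup: a two-element, half-wavelength array adapts its weights against a known reference waveform and steers a null onto a narrow-band jammer. The jammer power, frequency, and angle below are illustrative assumptions, not the BOSS simulation's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 5000, 0.01                      # iterations and LMS step size

# Hypothetical two-element array, half-wavelength spacing; desired signal
# arrives from broadside, jammer from 40 degrees, 10 dB above the signal.
theta_j = np.deg2rad(40.0)
v_j = np.exp(1j * np.pi * np.sin(theta_j) * np.arange(2))   # jammer steering vector

d = np.sign(rng.standard_normal(n)).astype(complex)         # known reference waveform
jam = np.sqrt(10.0) * np.exp(1j * 2 * np.pi * 0.31 * np.arange(n))  # narrow-band jammer
noise = 0.05 * (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2)))

x = d[:, None] * np.ones(2) + jam[:, None] * v_j + noise    # array snapshots

w = np.zeros(2, dtype=complex)
for k in range(n):
    y = np.conj(w) @ x[k]               # array output
    e = d[k] - y                        # error against the known reference
    w += mu * np.conj(e) * x[k]         # complex LMS weight update

null_db = 20 * np.log10(abs(np.conj(w) @ v_j) + 1e-12)
print(f"gain toward the jammer after adaptation: {null_db:.1f} dB")
```

With two elements there are exactly enough degrees of freedom to hold unit gain at broadside while nulling one jammer direction, which is why the single-jammer cases in the study work so well.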

  19. Fabrication of Nanohole Array via Nanodot Array Using Simple Self-Assembly Process of Diblock Copolymer

    NASA Astrophysics Data System (ADS)

    Matsuyama, Tsuyoshi; Kawata, Yoshimasa

    2007-06-01

    We present a simple self-assembly process for fabricating a nanohole array via a nanodot array on a glass substrate by dripping ethanol onto the nanodot array. It is found that well-aligned arrays of nanoholes as well as nanodots are formed on the whole surface of the glass. A dot is transformed into a hole, and the alignment of the nanoholes strongly reflects that of the nanodots. We find that the change in the depth of holes agrees well with the change in the surface energy with the ethanol concentration in the aqueous solution. We believe that the interfacial energy between the nanodots and the dripped ethanol causes the transformation from nanodots into nanoholes. The nanohole arrays are directly applicable to molds for nanopatterned media used in high-density near-field optical data storage. The bit data can be stored and read out using probes with small apertures.

  20. Neural Adaptation Effects in Conceptual Processing

    PubMed Central

    Marino, Barbara F. M.; Borghi, Anna M.; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia

    2015-01-01

    We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view. PMID:26264031

  1. Bayesian nonparametric adaptive control using Gaussian processes.

    PubMed

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become ineffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.

  2. Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T; Gibbons, S J; Ringdal, F; Harris, D B

    2007-02-09

    The principal objective of this two-year study is to develop and test a new advanced, automatic approach to seismic detection/location using array processing. We address a strategy to obtain significantly improved precision in the location of low-magnitude events compared with current fully-automatic approaches, combined with a low false alarm rate. We have developed and evaluated a prototype automatic system which uses as a basis regional array processing with fixed, carefully calibrated, site-specific parameters in conjunction with improved automatic phase onset time estimation. We have in parallel developed tools for Matched Field Processing for optimized detection and source-region identification of seismic signals. This narrow-band procedure aims to mitigate some of the causes of difficulty encountered using the standard array processing system, specifically complicated source-time histories of seismic events and shortcomings in the plane-wave approximation for seismic phase arrivals at regional arrays.

  3. LEO Download Capacity Analysis for a Network of Adaptive Array Ground Stations

    NASA Technical Reports Server (NTRS)

    Ingram, Mary Ann; Barott, William C.; Popovic, Zoya; Rondineau, Sebastien; Langley, John; Romanofsky, Robert; Lee, Richard Q.; Miranda, Felix; Steffes, Paul; Mandl, Dan

    2005-01-01

    To lower costs and reduce latency, a network of adaptive array ground stations, distributed across the United States, is considered for the downlink of a polar-orbiting low earth orbiting (LEO) satellite. Assuming the X-band 105 Mbps transmitter of NASA's Earth Observing 1 (EO-1) satellite with a simple line-of-sight propagation model, the average daily download capacity in bits for a network of adaptive array ground stations is compared to that of a single 11 m dish in Poker Flats, Alaska. Each adaptive array ground station is assumed to have multiple steerable antennas, either mechanically steered dishes or phased arrays that are mechanically steered in azimuth and electronically steered in elevation. Phased array technologies being developed for this application are the space-fed lens (SFL) and the reflectarray. Optimization of the different boresight directions of the phased arrays within a ground station is shown to significantly increase capacity; for example, this optimization quadruples the capacity for a ground station with eight SFLs. Several networks comprising only two to three ground stations are shown to meet or exceed the capacity of the big dish. Cutting the data rate in half, which saves modem costs and increases the coverage area of each ground station, is shown to increase the average daily capacity of the network for some configurations.

  4. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.

  5. Adaptive silver films toward bio-array applications

    NASA Astrophysics Data System (ADS)

    Drachev, Vladimir P.; Narasimhan, Meena L.; Yuan, Hsiao-Kuan; Thoreson, Mark D.; Xie, Yong; Davisson, V. J.; Shalaev, Vladimir M.

    2005-03-01

    Adaptive silver films (ASFs) have been studied as a substrate for protein microarrays. Vacuum-evaporated silver films fabricated within a certain range of evaporation parameters allow fine rearrangement of the silver nanostructure upon protein deposition in buffer solution. Proteins restructure and stabilize the ASF to increase the surface-enhanced Raman scattering (SERS) signal from a monolayer of molecules. Preliminary evidence indicates that the adaptive property of the substrates makes them appropriate for protein microarray assays. Head-to-head comparisons with two commercial substrates have been performed. Protein binding was quantified on the microarray using the streptavidin-Cy3/biotinylated goat IgG protein pair. With fluorescence detection, the performance of ASF substrates was comparable with SuperAldehyde and SuperEpoxy substrates. Additionally, the ASF is also a SERS substrate, which provides an additional tool for analysis. It is found that the SERS spectra of the streptavidin-Cy5 fluorescence reporter bound to true and to false sites show distinct differences.

  6. Array Processing for Radar Clutter Reduction and Imaging of Ice-Bed Interface

    NASA Astrophysics Data System (ADS)

    Gogineni, P.; Leuschen, C.; Li, J.; Hoch, A.; Rodriguez-Morales, F.; Ledford, J.; Jezek, K.

    2007-12-01

    A major challenge in sounding of fast-flowing glaciers in Greenland and Antarctica is surface clutter, which masks weak returns from the ice-bed interface. Surface clutter is also a major problem in sounding and imaging sub-surface interfaces on Mars and other planets. We successfully applied array-processing techniques to reduce clutter and image ice-bed interfaces of polar ice sheets. These techniques and tools have potential applications to planetary observations. We developed a radar with array-processing capability to measure the thickness of fast-flowing outlet glaciers and image the ice-bed interface. The radar operates over the frequency range from 140 to 160 MHz with about an 800-watt peak transmit power and transmit and receive antenna arrays. The radar is designed such that pulse width and duration are programmable. The transmit-antenna array is fed with a beamshaping network to obtain low sidelobes. We designed the receiver such that it can process and digitize signals for each element of an eight-channel array. We collected data over several fast-flowing glaciers using a five-element antenna array, limited by the available hardpoints to mount antennas, on a Twin Otter aircraft during the 2006 field season and a four-element array on a NASA P-3 aircraft during the 2007 field season. We used both adaptive and non-adaptive signal-processing algorithms to reduce clutter. We collected data over Jakobshavn Isbrae and other fast-flowing outlet glaciers, and successfully measured the ice thickness and imaged the ice-bed interface. In this paper, we provide a brief description of the radar, discuss clutter-reduction algorithms, present sample results, and discuss the application of these techniques to planetary observations.

  7. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and deviates significantly from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.

  8. NORSAR Final Scientific Report Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration

    SciTech Connect

    Gibbons, S J; Ringdal, F; Harris, D B

    2009-04-16

    Correlation detection is a relatively new approach in seismology that offers significant advantages in increased sensitivity and event screening over standard energy detection algorithms. The basic concept is that a representative event waveform is used as a template (i.e. matched filter) that is correlated against a continuous, possibly multichannel, data stream to detect new occurrences of that same signal. These algorithms are therefore effective at detecting repeating events, such as explosions and aftershocks at a specific location. This final report summarizes the results of a three-year cooperative project undertaken by NORSAR and Lawrence Livermore National Laboratory. The overall objective has been to develop and test a new advanced, automatic approach to seismic detection using waveform correlation. The principal goal is to develop an adaptive processing algorithm. By this we mean that the detector is initiated using a basic set of reference ('master') events to be used in the correlation process, and then an automatic algorithm is applied successively to provide improved performance by extending the set of master events selectively and strategically. These additional master events are generated by an independent, conventional detection system. A periodic analyst review will then be applied to verify the performance and, if necessary, adjust and consolidate the master event set. A primary focus of this project has been the application of waveform correlation techniques to seismic arrays. The basic procedure is to perform correlation on the individual channels, and then stack the correlation traces using zero-delay beam forming. Array methods such as frequency-wavenumber analysis can be applied to this set of correlation traces to help guarantee the validity of detections and lower the detection threshold. 
In principle, the deployment of correlation detectors against seismically active regions could involve very large numbers of very specific detectors.
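The basic procedure described above (per-channel normalized cross-correlation against a master-event template, then stacking the correlation traces with zero delay) can be sketched as follows. The waveforms, channel count, and amplitudes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ch, n_samp, n_t = 4, 2000, 200

template = rng.standard_normal((n_ch, n_t))     # master-event waveform, per channel
stream = 0.5 * rng.standard_normal((n_ch, n_samp))
t0 = 1200
stream[:, t0:t0 + n_t] += 0.8 * template        # a repeat of the event, buried in noise

def norm_xcorr(trace, tmpl):
    # Sliding normalized cross-correlation of one channel against its template.
    tn = tmpl - tmpl.mean()
    tn /= np.linalg.norm(tn) + 1e-12
    out = np.empty(len(trace) - len(tmpl) + 1)
    for i in range(len(out)):
        w = trace[i:i + len(tmpl)]
        w = w - w.mean()
        out[i] = tn @ w / (np.linalg.norm(w) + 1e-12)
    return out

# Correlate each channel separately, then stack the correlation traces
# (zero-delay beam forming across the array).
cc = np.mean([norm_xcorr(stream[c], template[c]) for c in range(n_ch)], axis=0)
det = int(np.argmax(cc))
print(f"stacked correlation peak {cc.max():.2f} at sample {det} (event at {t0})")
```

Stacking helps because the correlation peaks align across channels only for signals sharing the master event's wavefront, which is also what makes frequency-wavenumber checks on the correlation traces effective at rejecting false alarms.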

  9. Adaptive processing for enhanced target acquisition

    NASA Astrophysics Data System (ADS)

    Page, Scott F.; Smith, Moira I.; Hickman, Duncan; Bernhardt, Mark; Oxford, William; Watson, Norman; Beath, F.

    2009-05-01

    Conventional air-to-ground target acquisition processes treat the image stream in isolation from external data sources. This ignores information that may be available through modern mission management systems which could be fused into the detection process in order to provide enhanced performance. By way of an example relating to target detection, this paper explores the use of a-priori knowledge and other sensor information in an adaptive architecture with the aim of enhancing performance in decision making. The approach taken here is to use knowledge of target size, terrain elevation, sensor geometry, solar geometry and atmospheric conditions to characterise the expected spatial and radiometric characteristics of a target in terms of probability density functions. An important consideration in the construction of the target probability density functions is the known errors in the a-priori knowledge. Potential targets are identified in the imagery and their spatial and expected radiometric characteristics are used to compute the target likelihood. The adaptive architecture is evaluated alongside a conventional non-adaptive algorithm using synthetic imagery representative of an air-to-ground target acquisition scenario. Lastly, future enhancements to the adaptive scheme are discussed, as well as strategies for managing poor quality or absent a-priori information.
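The fusion step described above, scoring each candidate against a-priori spatial and radiometric probability density functions, can be sketched with Gaussian PDFs whose widths encode the known errors in the prior knowledge. The means, sigmas, and candidate measurements below are invented for illustration.

```python
import numpy as np

def gauss(x, mu, sigma):
    # Gaussian PDF; sigma encodes the known error of the prior knowledge.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Hypothetical priors: expected target extent (pixels) derived from target
# size, range, and sensor geometry; expected contrast derived from solar
# geometry and atmospheric conditions. All numbers are invented.
mu_size, sig_size = 12.0, 3.0
mu_con, sig_con = 0.4, 0.15

def target_likelihood(size_px, contrast):
    # Fuse independent spatial and radiometric cues multiplicatively.
    return gauss(size_px, mu_size, sig_size) * gauss(contrast, mu_con, sig_con)

candidates = {"A": (11.0, 0.38), "B": (4.0, 0.90)}   # (extent, contrast) detections
scores = {k: target_likelihood(*v) for k, v in candidates.items()}
print(scores)
```

Inflating the sigmas when the a-priori inputs are poor (or dropping a factor entirely when an input is absent) is one simple way to degrade gracefully toward the non-adaptive detector.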

  10. Contrast Adaptation Implies Two Spatiotemporal Channels but Three Adapting Processes

    ERIC Educational Resources Information Center

    Langley, Keith; Bex, Peter J.

    2007-01-01

    The contrast gain control model of adaptation predicts that the effects of contrast adaptation correlate with contrast sensitivity. This article reports that the effects of high contrast spatiotemporal adaptors are maximum when adapting around 19 Hz, which is a factor of two or more greater than the peak in contrast sensitivity. To explain the…

  11. Multiple wall-reflection effect in adaptive-array differential-phase reflectometry on QUEST

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Fujisawa, A.; Nagashima, Y.; Hamasaki, M.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; QUEST Team

    2016-01-01

    A phased array antenna and Software-Defined Radio (SDR) heterodyne-detection systems have been developed for adaptive array approaches in reflectometry on the QUEST. In the QUEST device, which acts as a large oversized cavity, a standing-wave (multiple wall-reflection) effect was clearly observed, with distorted amplitude and phase evolution even when the adaptive array analyses were applied. The distorted fields were analyzed by Fast Fourier Transform (FFT) in the wavenumber domain to treat separately the components with and without wall reflections. The differential phase evolution was properly obtained from the distorted field evolution by the FFT procedures. A frequency derivative method has been proposed to overcome the multiple wall-reflection effect, and SDR super-heterodyned components with the small frequency difference required for the derivative method were correctly obtained using the FFT analysis.

  12. Improvement in adaptive nonuniformity correction method with nonlinear model for infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Rui, Lai; Yin-Tang, Yang; Qing, Li; Hui-Xin, Zhou

    2009-09-01

    The scene-adaptive nonuniformity correction (NUC) technique is commonly used to decrease the fixed pattern noise (FPN) in infrared focal plane arrays (IRFPA). However, the correction precision of existing scene-adaptive NUC methods is seriously degraded by the nonlinear response of IRFPA detectors. In this paper, an improved scene-adaptive NUC method that employs an "S"-curve model to approximate the detector response is presented. The performance of the proposed method is tested with a real infrared video sequence, and the experimental results confirm that our method improves the correction precision considerably.
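Why a nonlinear response model helps can be sketched by comparing a linear two-point correction against an exact "S"-curve inversion on a simulated pixel. The logistic form and its parameters below are illustrative assumptions, not the paper's fitted model; the paper's method also estimates its model parameters adaptively from the scene rather than from a known calibration.

```python
import numpy as np

# Hypothetical detector with a sigmoidal ("S"-curve) response.
a_true, b_true = 4.0, -2.0
def response(irr):
    return 1.0 / (1.0 + np.exp(-(a_true * irr + b_true)))

irr = np.linspace(0.1, 0.9, 50)      # scene irradiance samples
y = response(irr)                    # what the pixel actually reports

# Linear two-point correction calibrated at the range endpoints: the usual
# gain/offset model that scene-adaptive NUC methods implicitly assume.
g = (irr[-1] - irr[0]) / (y[-1] - y[0])
lin = irr[0] + g * (y - y[0])

# "S"-curve correction: invert the logistic model (here the parameters are
# known exactly; a scene-adaptive NUC would estimate them online).
scurve = (np.log(y / (1.0 - y)) - b_true) / a_true

err_lin = float(np.max(np.abs(lin - irr)))
err_s = float(np.max(np.abs(scurve - irr)))
print(f"max residual -- linear: {err_lin:.4f}, S-curve: {err_s:.2e}")
```

The residual of the linear model is exactly the response nonlinearity that survives a gain/offset NUC as fixed pattern noise, which is what the "S"-curve model removes.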

  13. Sonar array processing borrows from geophysics

    SciTech Connect

    Chen, K.

    1989-09-01

    The author reports a recent advance in sonar signal processing that has potential military application. It improves signal extraction by modifying a technique devised by a geophysicist. Sonar signal processing is used to track submarine and surface targets, such as aircraft carriers, oil tankers, and, in commercial applications, schools of fish or sunken treasure. Similar signal-processing techniques help radio astronomers track galaxies, physicians see images of the body interior, and geophysicists map the ocean floor or find oil. This hybrid technique, applied in an experimental system, can help resolve strong signals as well as weak ones in the same step.

  14. Analysis of Modified SMI Method for Adaptive Array Weight Control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald Louis

    1989-01-01

    An adaptive array is used to receive a desired signal in the presence of weak interference signals which need to be suppressed. A modified sample matrix inversion (SMI) algorithm controls the array weights. The modification leads to increased interference suppression by subtracting a fraction of the noise power from the diagonal elements of the covariance matrix. The modified algorithm maximizes an intuitive power ratio criterion. The expected values and variances of the array weights, output powers, and power ratios as functions of the fraction and the number of snapshots are found and compared to computer simulation and real experimental array performance. Reduced-rank covariance approximations and errors in the estimated covariance are also described.

  15. Coping and adaptation process during puerperium

    PubMed Central

    Muñoz de Rodríguez, Lucy; Ruiz de Cárdenas, Carmen Helena

    2012-01-01

    Introduction: The puerperium is a stage that produces changes and adaptations in women, couples, and families. Effective coping during this stage depends on the relationship between the demands of stressful or difficult situations and the resources that the puerperal woman has. Roy (2004), in her middle-range theory of Coping and Adaptation Processing, defines coping as the ''behavioral and cognitive efforts that a person makes to meet the environment demands''. For the puerperal woman, correct coping is necessary to maintain her physical and mental well-being, especially in situations that can be stressful, such as breastfeeding and the return to work. According to Lazarus and Folkman (1986), one resource for coping is having someone who provides emotional, informative, and/or tangible support. Objective: To review the issue of women's coping and adaptation during the puerperium and the strategies that enhance this adaptation. Methods: search and selection of articles in the databases Cochrane, Medline, Ovid, ProQuest, Scielo, and Blackwell Synergy. Other sources: unpublished documents by Roy, published books on Roy's model, and websites of international health organizations. Results: the need to recognize the puerperium as a stage that requires comprehensive care is evident; nurses must play a leading role in the care offered to women and their families, considering the specific demands of this situation and the resources that promote effective coping: the family, education, and health services. PMID:24893059

  16. Iterative Robust Capon Beamforming with Adaptively Updated Array Steering Vector Mismatch Levels

    PubMed Central

    Sun, Liguo

    2014-01-01

    The performance of the conventional adaptive beamformer is sensitive to array steering vector (ASV) mismatch, and the output signal-to-interference-plus-noise ratio (SINR) suffers deterioration, especially in the presence of large direction of arrival (DOA) error. To improve the robustness of the traditional approach, we propose a new approach that iteratively searches for the ASV of the desired signal based on the robust Capon beamformer (RCB) with adaptively updated uncertainty levels, which are derived in the form of a quadratically constrained quadratic programming (QCQP) problem based on subspace projection theory. The estimated levels in this iterative beamformer show a decreasing trend. Additionally, other array imperfections also degrade the performance of the beamformer in practice. To cover several kinds of mismatches together, adaptive flat ellipsoid models, kept as tight as possible, are introduced in our method. In the simulations, our beamformer is compared with other methods and its excellent performance is demonstrated via numerical examples. PMID:27355008
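A simpler fixed-level relative of the RCB, plain diagonal loading, already shows why regularizing the covariance protects against ASV (DOA) mismatch; the adaptively updated uncertainty levels in the paper can be viewed as refining how much of this regularization to apply. The scenario below (array size, powers, the 3-degree pointing error, and the loading level) is invented for illustration.

```python
import numpy as np

n_el = 10

def steer(t_deg):
    return np.exp(1j * np.pi * np.sin(np.deg2rad(t_deg)) * np.arange(n_el))

a_true, a_nom = steer(0.0), steer(3.0)   # actual vs presumed direction (3 deg DOA error)
v_int = steer(35.0)

# Clairvoyant covariance with the desired signal present: under ASV mismatch
# the plain Capon beamformer treats the signal itself as interference.
R = 10 * np.outer(a_true, a_true.conj()) + 20 * np.outer(v_int, v_int.conj()) + np.eye(n_el)

def capon_weights(R, a, load=0.0):
    w = np.linalg.solve(R + load * np.eye(n_el), a)
    return w / (w.conj() @ a)            # unit gain on the presumed ASV

def out_sinr_db(w):
    sig = 10 * abs(w.conj() @ a_true) ** 2
    i_n = 20 * abs(w.conj() @ v_int) ** 2 + np.linalg.norm(w) ** 2
    return 10 * np.log10(sig / i_n)

sinr_plain = out_sinr_db(capon_weights(R, a_nom))
sinr_loaded = out_sinr_db(capon_weights(R, a_nom, load=10.0))  # diagonal loading
print(f"plain Capon: {sinr_plain:.1f} dB, diagonally loaded: {sinr_loaded:.1f} dB")
```

The loading level here is a hand-picked constant; choosing it from an explicit uncertainty set, and shrinking that set iteratively, is precisely what distinguishes the RCB approach of the paper.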

  17. An adaptive microwave phased array for targeted heating of deep tumours in intact breast: animal study results.

    PubMed

    Fenn, A J; Wolf, G L; Fogle, R M

    1999-01-01

    It has previously been reported that, in phantoms, an adaptive radiofrequency phased array can generate deep focused heating distributions without overheating the skin and superficial healthy tissues. The present study involves adaptive microwave phased array hyperthermia tests in animals (rabbits) with and without tumours. The design of the adaptive phased array as applied to the treatment of tumours in the intact breast is described. The adaptive phased array concept uses breast compression and dual-opposing 915 MHz air-cooled waveguide applicators with electronic phase shifters and electric-field feedback to automatically focus, under computer control, the microwave radiation in deep tissue. Temperature measurements for a clinical adaptive phased array hyperthermia system demonstrate tissue heating at depth with reduced skin heating.
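The electric-field-feedback focusing idea can be sketched as a coordinate-ascent loop: sweep each channel's phase shifter and keep the setting that maximizes the field measured by a probe at the target. This is a simplified stand-in for the system's adaptive focusing algorithm; the channel count and the propagation phases are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n_ch = 4                      # hypothetical number of phase-shifter channels

# Unknown propagation phase from each channel to an E-field probe at the
# target position (what the feedback loop must compensate).
prop = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_ch))

def field_at_probe(phases):
    # Probe reading: magnitude of the coherent sum of all channels.
    return abs(np.sum(np.exp(1j * phases) * prop))

phases = np.zeros(n_ch)
grid = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
for _ in range(3):            # a few feedback passes suffice here
    for ch in range(n_ch):
        vals = []
        for p in grid:        # sweep this channel's phase shifter
            trial = phases.copy()
            trial[ch] = p
            vals.append(field_at_probe(trial))
        phases[ch] = grid[int(np.argmax(vals))]

final = field_at_probe(phases)
print(f"focused field {final:.3f} of theoretical maximum {n_ch}")
```

Because each single-channel sweep has a unique optimum (align with the sum of the other channels), the loop climbs monotonically to the fully focused setting, which is what makes simple measured-field feedback sufficient in practice.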

  18. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  19. Adaptive memory: animacy processing produces mnemonic advantages.

    PubMed

    VanArsdall, Joshua E; Nairne, James S; Pandeirada, Josefa N S; Blunt, Janell R

    2013-01-01

    It is adaptive to remember animates, particularly animate agents, because they play an important role in survival and reproduction. Yet, surprisingly, the role of animacy in mnemonic processing has received little direct attention in the literature. In two experiments, participants were presented with pronounceable nonwords and properties characteristic of either living (animate) or nonliving (inanimate) things. The task was to rate the likelihood that each nonword-property pair represented a living thing or a nonliving object. In Experiment 1, a subsequent recognition memory test for the nonwords revealed a significant advantage for the nonwords paired with properties of living things. To generalize this finding, Experiment 2 replicated the animate advantage using free recall. These data demonstrate a new phenomenon in the memory literature - a possible mnemonic tuning for animacy - and add to growing data supporting adaptive memory theory. PMID:23261948

  20. Removing Background Noise with Phased Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by OptiNav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.

  1. Optoelectronic implementation of a 256-channel sonar adaptive-array processor.

    PubMed

    Silveira, Paulo E X; Pati, Gour S; Wagner, Kelvin H

    2004-12-10

    We present an optoelectronic implementation of an adaptive-array processor that is capable of performing beam forming and jammer nulling in signals of wide fractional bandwidth that are detected by an array of arbitrary topology. The optical system makes use of a two-dimensional scrolling spatial light modulator to represent an array of input signals in 256 tapped delay lines, two acousto-optic modulators for modulating the feedback error signal, and a photorefractive crystal for representing the adaptive weights as holographic gratings. Gradient-descent learning is used to dynamically adapt the holographic weights to optimally form multiple beams and to null out multiple interference sources, either in the near field or in the far field. Space-integration followed by differential heterodyne detection is used for generating the system's output. The processor is analyzed to show the effects of exponential weight decay on the optimum solution and on the convergence conditions. Several experimental results are presented that validate the system's capacity for broadband beam forming and jammer nulling for linear and circular arrays.

  2. 50 years of progress in microphone arrays for speech processing

    NASA Astrophysics Data System (ADS)

    Elko, Gary W.; Frisk, George V.

    2004-10-01

    In the early 1980s, Jim Flanagan had a dream of covering the walls of a room with microphones. He occasionally referred to this concept as acoustic wallpaper. Being a new graduate in the field of acoustics and signal processing, it was fortunate that Bell Labs was looking for someone to investigate this area of microphone arrays for telecommunication. The job interview was exciting, with all of the big names in speech signal processing and acoustics sitting in the audience, many of whom were the authors of books and articles that were seminal contributions to the fields of acoustics and signal processing. If there ever was an opportunity of a lifetime, this was it. Fortunately, some of the work had already begun, and Sessler and West had already laid the groundwork for directional electret microphones. This talk will describe some of the very early work done at Bell Labs on microphone arrays and reflect on some of the many systems, from large 400-element arrays, to small two-microphone arrays. These microphone array systems were built under Jim Flanagan's leadership in an attempt to realize his vision of seamless hands-free speech communication between people and the communication of people with machines.

  3. Directional hearing aid using hybrid adaptive beamformer (HAB) and binaural ITE array

    NASA Astrophysics Data System (ADS)

    Shaw, Scott T.; Larow, Andy J.; Gibian, Gary L.; Sherlock, Laguinn P.; Schulein, Robert

    2002-05-01

    A directional hearing aid algorithm called the Hybrid Adaptive Beamformer (HAB), developed for NIH/NIA, can be applied to many different microphone array configurations. In this project the HAB algorithm was applied to a new array employing in-the-ear microphones at each ear (HAB-ITE), to see if previous HAB performance could be achieved with a more cosmetically acceptable package. With diotic output, the average benefit in threshold SNR was 10.9 dB for three hard-of-hearing (HoH) subjects and 11.7 dB for five normal-hearing subjects. These results are slightly better than previous results of equivalent tests with a 3-in. array. With an innovative binaural fitting, a small benefit beyond that provided by diotic adaptive beamforming was observed: 12.5 dB for HoH and 13.3 dB for normal-hearing subjects, a 1.6 dB improvement over the diotic presentation. Subjectively, the binaural fitting preserved binaural hearing abilities, giving the user a sense of space and providing left-right localization. Thus the goal of creating an adaptive beamformer that simultaneously provides excellent noise reduction and binaural hearing was achieved. Further work remains before the HAB-ITE can be incorporated into a real product: optimizing binaural adaptive beamforming and integrating the concept with other technologies to produce a viable product prototype. [Work supported by NIH/NIDCD.]

  4. Generic nano-imprint process for fabrication of nanowire arrays.

    PubMed

    Pierret, Aurélie; Hocevar, Moïra; Diedenhofen, Silke L; Algra, Rienk E; Vlieg, E; Timmering, Eugene C; Verschuuren, Marc A; Immink, George W G; Verheijen, Marcel A; Bakkers, Erik P A M

    2010-02-10

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning of the samples before growth with piranha solution in combination with a thermal anneal at 550 degrees C for InP and 700 degrees C for GaP results in uniform nanowire arrays with 1% variation in nanowire length, and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  6. Parallel Processing of Large Scale Microphone Arrays for Sound Capture

    NASA Astrophysics Data System (ADS)

    Jan, Ea-Ee.

    1995-01-01

    Performance of microphone sound pickup is degraded by deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise. The degradation becomes more prominent in a teleconferencing environment in which the microphone is positioned far away from the speaker. Moreover, the ideal teleconference should feel as easy and natural as face-to-face communication with another person. This suggests hands-free sound capture with no tether or encumbrance by hand-held or body-worn sound equipment. Microphone arrays represent an appropriate approach for this application. This research develops new microphone array and signal processing techniques for high-quality hands-free sound capture in noisy, reverberant enclosures. The new techniques combine matched-filtering of individual sensors and parallel processing to provide acute spatial volume selectivity, which is capable of mitigating the deleterious effects of noise interference and multipath distortion. The new method outperforms traditional delay-and-sum beamformers, which provide only directional spatial selectivity. The research additionally explores truncated matched-filtering and random distribution of transducers to reduce complexity and improve sound capture quality. All designs are first established by computer simulation of array performance in reverberant enclosures. The simulation is achieved by a room model which can efficiently calculate the acoustic multipath in a rectangular enclosure up to a prescribed order of images. It also calculates the incident angle of the arriving signal. Experimental arrays were constructed and their performance was measured in real rooms. Real room data were collected in a hard-walled laboratory and a controllable variable-acoustics enclosure of similar size, approximately 6 x 6 x 3 m. An extensive speech database was also collected in these two enclosures for future research on microphone arrays. The simulation results are shown to be
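    The delay-and-sum baseline that the thesis improves upon can be sketched in a few lines: align each microphone by its known propagation delay, then average, so independent noise averages down while the source adds coherently. Geometry, delays, and noise levels below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mics, n_samples = 8, 2000
delays = rng.integers(0, 40, n_mics)          # assumed known arrival delays

s = rng.standard_normal(n_samples)            # source signal
# Each mic hears a delayed copy of the source plus independent noise
mics = np.stack([np.concatenate([np.zeros(d), s[:n_samples - d]])
                 + 0.8 * rng.standard_normal(n_samples)
                 for d in delays])

# Delay-and-sum: undo the known delays, then average across mics
aligned = np.stack([np.concatenate([m[d:], np.zeros(d)])
                    for m, d in zip(mics, delays)])
beam = aligned.mean(axis=0)

valid = n_samples - int(delays.max())         # ignore zero-padded tail
beam_err = np.mean((beam[:valid] - s[:valid]) ** 2)
single_err = np.mean((aligned[0][:valid] - s[:valid]) ** 2)
```

With 8 mics the residual noise power drops by roughly the number of sensors; matched-filter processing, as described above, goes further by also exploiting the room's multipath.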

  7. A general framework for adaptive processing of data structures.

    PubMed

    Frasconi, P; Gori, M; Sperduti, A

    1998-01-01

    A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive models like artificial neural nets and belief nets for the problem of processing structured information. In particular, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. The general framework proposed in this paper can be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular we study the supervised learning problem as the problem of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden state-space representation. We introduce a graphical formalism for representing this class of adaptive transductions by means of recursive networks, i.e., cyclic graphs where nodes are labeled by variables and edges are labeled by generalized delay elements. This representation makes it possible to incorporate the symbolic and subsymbolic nature of data. Structures are processed by unfolding the recursive network into an acyclic graph called encoding network. In so doing, inference and learning algorithms can be easily inherited from the corresponding algorithms for artificial neural networks or probabilistic graphical models.
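    The unfolding idea can be sketched concretely: each DAG node gets a state computed from its label and its children's states, and shared substructure is encoded once. The state update rule, weight shapes, and the toy graph below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(6)
state_dim, label_dim = 4, 3

W_label = 0.5 * rng.standard_normal((state_dim, label_dim))
W_child = 0.5 * rng.standard_normal((2, state_dim, state_dim))  # one matrix per child slot

# Toy DAG: node -> (label vector, child nodes); node 2 is shared substructure
dag = {
    0: (rng.standard_normal(label_dim), [1, 2]),
    1: (rng.standard_normal(label_dim), [2]),
    2: (rng.standard_normal(label_dim), []),
}

def encode(node, cache):
    """State of `node` from its label and its children's states (the
    'encoding network' obtained by unfolding the recursive network)."""
    if node not in cache:                      # shared nodes computed once
        label, children = dag[node]
        h = W_label @ label
        for slot, child in enumerate(children):
            h = h + W_child[slot] @ encode(child, cache)
        cache[node] = np.tanh(h)
    return cache[node]

cache = {}
root_state = encode(0, cache)
```

The cache is what makes the unfolded graph an encoding *network* rather than a tree: node 2 contributes the same state to both of its parents.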

  8. Adaptive optics wavefront sensors based on photon-counting detector arrays

    NASA Astrophysics Data System (ADS)

    Aull, Brian F.; Schuette, Daniel R.; Reich, Robert K.; Johnson, Robert L.

    2010-07-01

    For adaptive optics systems, there is a growing demand for wavefront sensors that operate at higher frame rates and with more pixels while maintaining low readout noise. Lincoln Laboratory has been investigating Geiger-mode avalanche photodiode arrays integrated with CMOS readout circuits as a potential solution. This type of sensor counts photons digitally within the pixel, enabling data to be read out at high rates without the penalty of readout noise. After a brief overview of adaptive optics sensor development at Lincoln Laboratory, we will present the status of silicon Geiger-mode APD technology along with future plans to improve performance.

  9. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    NASA Astrophysics Data System (ADS)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

    Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the required drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so-called CR-Calculus and the adaptive array theory. With this approach it is possible to better control the process performance, allowing step-by-step updates of the Jacobian matrix. The theoretical bases behind the work are followed by an application of the developed method to a two-exciter, two-axis system and by performance comparisons with standard methods.
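    The complex-gradient machinery that CR-Calculus provides can be illustrated with the classic complex LMS update, where the descent direction comes from the Wirtinger derivative of |e|² with respect to the conjugate weights. The unknown system, step size, and lengths below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
h = np.array([0.5 + 0.3j, -0.2 + 0.1j, 0.1 - 0.4j])       # unknown complex system

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # complex drive signal
d = np.convolve(x, h)[:n]                                  # observed response

w = np.zeros(3, dtype=complex)
mu = 0.02                                                  # step size (assumed)
for k in range(3, n):
    u = x[k:k - 3:-1]                  # newest-first input vector
    e = d[k] - np.dot(w, u)            # a priori error
    # CR-calculus: d|e|^2 / d(conj w) = -e * conj(u), so descend along +e*conj(u)
    w += mu * e * np.conj(u)
```

The weights converge to the unknown complex response; the same conjugate-gradient bookkeeping is what lets magnitude and phase be controlled jointly in the MIMO setting.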

  10. Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.

    2006-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of the carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.

  11. Parameter adaptive estimation of random processes

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Vanlandingham, H. F.

    1975-01-01

    This paper is concerned with the parameter adaptive least squares estimation of random processes. The main result is a general representation theorem for the conditional expectation of a random variable on a product probability space. Using this theorem along with the general likelihood ratio expression, the least squares estimate of the process is found in terms of the parameter conditioned estimates. The stochastic differential for the a posteriori probability and the stochastic differential equation for the a posteriori density are found by using simple stochastic calculus on the representations obtained. The results are specialized to the case when the parameter has a discrete distribution. The results can be used to construct an implementable recursive estimator for certain types of nonlinear filtering problems. This is illustrated by some simple examples.
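    For the discrete-distribution case mentioned above, the structure of the result can be sketched as follows (notation is illustrative, not the paper's): the overall least-squares estimate is the posterior-weighted mixture of the parameter-conditioned estimates,

```latex
\hat{x}_t \;=\; \mathbb{E}\left[x_t \mid Y_t\right]
          \;=\; \sum_{i} \hat{x}_t(\theta_i)\, P\!\left(\theta_i \mid Y_t\right),
\qquad
\hat{x}_t(\theta_i) \;=\; \mathbb{E}\left[x_t \mid Y_t,\; \theta = \theta_i\right],
```

    where the posterior weights \(P(\theta_i \mid Y_t)\) evolve according to the likelihood ratios of the observations under each parameter hypothesis, as given by the paper's stochastic differential equations.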

  12. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea
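    The incoherent processing described above can be sketched with a toy example: when the waveform carriers have random phase at each sensor, stacking smoothed power envelopes still recovers the arrival even though coherent waveform stacking would not. The signal model and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sens, n_s = 9, 3000
arrival = 1500                                  # common arrival sample (pre-aligned)

env = np.exp(-((np.arange(n_s) - arrival) / 100.0) ** 2)
# Each sensor sees the same envelope but an incoherent (random-phase) carrier
sensors = np.stack([
    env * np.cos(2 * np.pi * 0.3 * np.arange(n_s) + rng.uniform(0, 2 * np.pi))
    + 0.2 * rng.standard_normal(n_s)
    for _ in range(n_sens)])

kernel = np.ones(50) / 50                       # smoothing over many carrier cycles
power_env = np.apply_along_axis(
    lambda s: np.convolve(s ** 2, kernel, mode="same"), 1, sensors)
envelope_beam = power_env.mean(axis=0)          # incoherent ("spectrogram") beam
waveform_beam = sensors.mean(axis=0)            # coherent beam, for contrast
```

The envelope beam shows a clean peak at the arrival; the coherent waveform beam is suppressed by the random phases, which is the regime the abstract describes for large-aperture arrays at high frequency.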

  13. Frequency-wavenumber processing for infrasound distributed arrays.

    PubMed

    Costley, R Daniel; Frazier, W Garth; Dillion, Kevin; Picucci, Jennifer R; Williams, Jay E; McKenna, Mihan H

    2013-10-01

    The work described herein discusses the application of a frequency-wavenumber signal processing technique to signals from rectangular infrasound arrays for detection and estimation of the direction of travel of infrasound. Arrays of 100 sensors were arranged in square configurations with sensor spacing of 2 m. Wind noise data were collected at one site. Synthetic infrasound signals were superposed on top of the wind noise to determine the accuracy and sensitivity of the technique with respect to signal-to-noise ratio. The technique was then applied to an impulsive event recorded at a different site. Preliminary results demonstrated the feasibility of this approach. PMID:24116535
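    The frequency-wavenumber technique can be sketched for a synthetic plane wave crossing a square array: FFT in time, take the frequency bin of interest, then FFT across the two spatial dimensions and locate the power peak, whose coordinates give the wavenumber (hence backazimuth and slowness). All numbers are illustrative; the conjugate below only fixes the sign convention so the peak lands at +k.

```python
import numpy as np

nx = ny = 10                      # square array
dx = 2.0                          # 2 m sensor spacing, as in the entry
f0, fs, nt = 16.0, 64.0, 256      # tone frequency / sample rate (assumed)
c = 340.0                         # propagation speed, m/s (assumed)
theta = np.deg2rad(30)            # propagation azimuth (assumed)

k = 2 * np.pi * f0 / c
kx, ky = k * np.cos(theta), k * np.sin(theta)

X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
t = np.arange(nt) / fs
# Plane wave cos(w t - k . r) sampled at every sensor
data = np.cos(2 * np.pi * f0 * t[None, None, :]
              - kx * X[:, :, None] - ky * Y[:, :, None])

spec = np.fft.fft(data, axis=2)
snap = spec[:, :, int(f0 * nt / fs)]      # complex amplitudes at f0
npad = 256                                # zero-padding refines the k grid
power = np.abs(np.fft.fft2(np.conj(snap), s=(npad, npad))) ** 2

i, j = np.unravel_index(np.argmax(power), power.shape)
kgrid = 2 * np.pi * np.fft.fftfreq(npad, d=dx)
kx_est, ky_est = kgrid[i], kgrid[j]
```

The peak location recovers the wavenumber vector to within the zero-padded grid resolution; with real data the same peak search runs over noisy spectra, which is where the SNR sensitivity studied in the paper enters.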

  14. Design and programming of systolic array cells for signal processing

    SciTech Connect

    Smith, R.A.W.

    1989-01-01

    This thesis presents a new methodology for the design, simulation, and programming of systolic arrays in which the algorithms and architecture are simultaneously optimized. The algorithms determine the initial architecture, and simulation is used to optimize the architecture. The simulator provides a register-transfer level model of a complete systolic array computation. To establish the validity of this design methodology two novel programmable systolic array cells were designed and programmed. The cells were targeted for applications in high-speed signal processing and associated matrix computations. A two-chip programmable systolic array cell using a 16-bit multiplier-accumulator chip and a semi-custom VLSI controller chip was designed and fabricated. A low chip count allows large arrays to be constructed, but the cell is flexible enough to be a building-block for either one- or two-dimensional systolic arrays. Another more flexible and powerful cell using a 32-bit floating-point processor and a second VLSI controller chip was also designed. It contains several architectural features that are unique in a systolic array cell: (1) each instruction is 32 bits, yet all resources can be updated every cycle, (2) two on-chip interchangeable memories are used, and (3) one input port can be used as either a global or local port. The key issues involved in programming the cells are analyzed in detail. A set of modules is developed which can be used to construct large programs in an effective manner. The utility of this programming approach is demonstrated with several important examples.

  15. Processing difficulties and instability of carbohydrate microneedle arrays

    PubMed Central

    Donnelly, Ryan F.; Morrow, Desmond I.J.; Singh, Thakur R.R.; Migalska, Katarzyna; McCarron, Paul A.; O’Mahony, Conor; Woolfson, A. David

    2010-01-01

    Background: A number of reports have suggested that many of the problems currently associated with the use of microneedle (MN) arrays for transdermal drug delivery could be addressed by using drug-loaded MN arrays prepared by moulding hot melts of carbohydrate materials. Methods: In this study, we explored the processing, handling, and storage of MN arrays prepared from galactose with a view to clinical application. Results: Galactose required a high processing temperature (160°C), and molten galactose was difficult to work with. Substantial losses of the model drugs 5-aminolevulinic acid (ALA) and bovine serum albumin were incurred during processing. While relatively small forces caused significant reductions in MN height when applied to an aluminium block, this was not observed during their relatively facile insertion into heat-stripped epidermis. Drug release experiments using ALA-loaded MN arrays revealed that less than 0.05% of the total drug loading was released across a model silicone membrane. Similarly, only low amounts of ALA (approximately 0.13%) and undetectable amounts of bovine serum albumin were delivered when galactose arrays were combined with aqueous vehicles. Microscopic inspection of the membrane following release studies revealed that no holes could be observed, indicating that the partially dissolved galactose sealed the MN-induced holes, thus limiting drug delivery. Indeed, depth penetration studies into excised porcine skin revealed that there was no significant increase in ALA delivery using galactose MN arrays compared to control (P > 0.05). Galactose MNs were unstable at ambient relative humidities and became adhesive. Conclusion: The processing difficulties and instability encountered in this study are likely to preclude successful clinical application of carbohydrate MNs. The findings of this study are of particular importance to those in the pharmaceutical industry involved in the design and formulation of

  16. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  18. Analysis and design of a high power laser adaptive phased array transmitter

    NASA Technical Reports Server (NTRS)

    Mevers, G. E.; Soohoo, J. F.; Winocur, J.; Massie, N. A.; Southwell, W. H.; Brandewie, R. A.; Hayes, C. L.

    1977-01-01

    The feasibility of delivering substantial quantities of optical power to a satellite in low earth orbit from a ground-based high energy laser (HEL) coupled to an adaptive antenna was investigated. Diffraction effects, atmospheric transmission efficiency, adaptive compensation for atmospheric turbulence effects, including the servo bandwidth requirements for this correction, and the adaptive compensation for thermal blooming were examined. To evaluate possible HEL sources, atmospheric investigations were performed for the CO2, (C-12)(O-18)2 isotope, CO, and DF wavelengths using output antenna locations at both sea level and mountain top. Results indicate that both excellent atmospheric and adaptation efficiency can be obtained for mountain-top operation with a (C-12)(O-18)2 isotope laser operating at 9.1 um, or with a CO laser operating on a single line (P10) at about 5.0 um, which was a close second in the evaluation. Four adaptive power transmitter system concepts were generated and evaluated, based on overall system efficiency, reliability, size and weight, advanced technology requirements, and potential cost. A multiple-source phased array was selected for detailed conceptual design. The system uses a unique adaptation technique of phase-locking independent laser oscillators, which allows it to be both relatively inexpensive and highly reliable, with a predicted overall power transfer efficiency of 53%.

  19. The Urban Adaptation and Adaptation Process of Urban Migrant Children: A Qualitative Study

    ERIC Educational Resources Information Center

    Liu, Yang; Fang, Xiaoyi; Cai, Rong; Wu, Yang; Zhang, Yaofang

    2009-01-01

    This article employs qualitative research methods to explore the urban adaptation and adaptation processes of Chinese migrant children. Through twenty-one in-depth interviews with migrant children, the researchers discovered: The participant migrant children showed a fairly high level of adaptation to the city; their process of urban adaptation…

  20. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances of Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings, lead to an ever increasing amount of raw data being generated. Arrays with hundreds up to a few thousands of electrodes are slowly seeing widespread use and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable. PMID:26737215
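    A toy version of the channel-parallel pre-processing pipeline (filtering, then threshold-based spike detection) can be vectorized across all channels at once with plain NumPy. Real packages use proper band-pass filters and GPU kernels; the filter, threshold rule, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_ch, n_s = 256, 10_000
data = rng.standard_normal((n_ch, n_s))      # synthetic noise recordings

# Plant artificial negative-going spikes on channel 7
for st in (1000, 4000, 8500):
    data[7, st:st + 5] -= 8.0

# Crude high-pass: subtract a 101-sample running mean, per channel
kernel = np.ones(101) / 101
baseline = np.apply_along_axis(
    lambda ch: np.convolve(ch, kernel, mode="same"), 1, data)
hp = data - baseline

# Per-channel adaptive threshold from the MAD noise estimate
sigma = np.median(np.abs(hp), axis=1) / 0.6745
crossings = hp < -5.0 * sigma[:, None]       # negative-going threshold crossings
n_events_ch7 = int(crossings[7].sum())
```

Every step operates on the full (channels x samples) array, so the work parallelizes naturally across cores or GPU lanes, which is the scalability point of the paper.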

  2. Optimization of multiple turbine arrays in a channel with tidally reversing flow by numerical modelling with adaptive mesh.

    PubMed

    Divett, T; Vennell, R; Stevens, C

    2013-02-28

    At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects within the array, the placement of turbines within it will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel. PMID:23319710

  3. Adaptive non-uniformity correction method based on temperature for infrared detector array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijie; Yue, Song; Hong, Pu; Jia, Guowei; Lei, Bo

    2013-09-01

    The existence of non-uniformities in the responsivity of the element array is a severe problem typical of common infrared detectors. These non-uniformities produce a "curtain"-like fixed pattern noise (FPN) in the image. Some random noise can be suppressed by equalization-type methods, but fixed pattern noise can only be removed by non-uniformity correction. The non-uniformities of a detector array arise from the combined effects of the infrared detector array, the readout circuit, semiconductor device performance, the amplifier circuit, and the optical system. Conventional linear correction techniques require costly recalibration due to detector drift or changes in temperature, so an adaptive non-uniformity correction method is needed. Many factors, including detector characteristics and varying environmental conditions, are considered in analyzing the cause of detector drift, and several experiments are designed to verify the hypothesis. Based on these experiments, an adaptive non-uniformity correction method is put forward in this paper. The strength of this method lies in its simplicity and low computational complexity. Extensive experimental results demonstrate that the proposed scheme overcomes the disadvantages of traditional non-uniformity correction methods.
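    For reference, the conventional two-point (linear) correction that such adaptive methods extend can be sketched as follows; the per-pixel detector model and the two calibration temperatures are illustrative assumptions. It removes the FPN exactly for a truly linear detector, but must be redone whenever the gains and offsets drift, which is the weakness the abstract targets.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64)
gain_true = 1.0 + 0.1 * rng.standard_normal(shape)   # responsivity spread
offset_true = 5.0 * rng.standard_normal(shape)       # per-pixel offset

def detector(scene):
    """Detector model: per-pixel gain and offset produce the FPN 'curtain'."""
    return gain_true * scene + offset_true

# Calibrate against two uniform (blackbody) reference frames
T1, T2 = 20.0, 80.0
f1, f2 = detector(np.full(shape, T1)), detector(np.full(shape, T2))
gain = (T2 - T1) / (f2 - f1)
offset = T1 - gain * f1

scene = 50.0 + 10.0 * rng.standard_normal(shape)     # arbitrary test scene
corrected = gain * detector(scene) + offset          # FPN removed
```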

  4. Signal Processing for a Lunar Array: Minimizing Power Consumption

    NASA Technical Reports Server (NTRS)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study: (1) a Lunar Radio Array for low-frequency, high-redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high-precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the intergalactic medium and the evolution of large-scale structures; (4) testing whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array is a radio interferometer based on the far side of the Moon, which is necessary for precision measurements: it provides shielding from Earth-based and solar RFI, and there is no permanent ionosphere. It requires a minimum collecting area of approximately 1 square km and a brightness sensitivity of 10 mK, and several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  5. Physics-based signal processing algorithms for micromachined cantilever arrays

    DOEpatents

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
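    The three steps named in the claim (model the deflection, sense it, compare) can be sketched as a residual test. The cantilever model, the analyte response, the noise level, and the threshold below are hypothetical stand-ins, not the patent's physics model.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 500)

def model_deflection(t):
    # hypothetical baseline response of an unloaded cantilever
    return 0.1 * np.sin(2 * np.pi * 3.0 * t)

noise = 0.005 * rng.standard_normal(t.size)
clean = model_deflection(t) + noise                  # no analyte present
# Analyte binding adds a localized extra deflection (hypothetical)
loaded = clean + 0.05 * np.exp(-((t - 0.5) / 0.05) ** 2)

def detect(signal, threshold=0.03):
    """Flag a detection when the measured-vs-model residual is too large."""
    residual = signal - model_deflection(t)
    return bool(np.max(np.abs(residual)) > threshold)
```

Comparing against a physics-based model, rather than a raw threshold on the signal, is what lets the baseline cantilever motion be subtracted away before deciding whether anything bound.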

  6. TRIGA: Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Pingree, Paula J.; Torgerson, J. Leigh

    2006-01-01

    We present the Telecommunications protocol processing subsystem using Reconfigurable Interoperable Gate Arrays (TRIGA), a novel approach that unifies fault tolerance, error correction coding and interplanetary communication protocol off-loading to implement CCSDS File Delivery Protocol and Datalink layers. The new reconfigurable architecture offers more than one order of magnitude throughput increase while reducing footprint requirements in memory, command and data handling processor utilization, communication system interconnects and power consumption.

  7. Multiplexed optical operation of nanoelectromechanical systems (NEMS) arrays for sensing and signal-processing applications

    NASA Astrophysics Data System (ADS)

    Sampathkumar, Ashwin

    2014-06-01

NEMS are rapidly being developed for a variety of sensing applications as well as for exploring interesting regimes in fundamental physics. In most of these endeavors, operation of a NEMS device involves actuating the device harmonically around its fundamental resonance and detecting the subsequent motion while the device interacts with its environment. Even though a single NEMS resonator is exceptionally sensitive, a typical application, such as sensing or signal processing, requires the detection of signals from many resonators distributed over the surface of a chip. Therefore, one of the key technological challenges in the field of NEMS is the development of multiplexed measurement techniques to detect the motion of a large number of NEMS resonators simultaneously. In this work, we address the important and difficult problem of interfacing with a large number of NEMS devices and facilitating the use of such arrays in, for example, sensing and signal-processing applications. We report a versatile, all-optical technique to excite and read out a distributed NEMS array. The NEMS array is driven by a distributed, intensity-modulated optical pump through the photothermal effect. The ensuing vibrational response of the array is multiplexed onto a single probe beam as a high-frequency phase modulation. The phase modulation is optically down-converted to a low-frequency intensity modulation using an adaptive full-field interferometer, and subsequently detected using a charge-coupled device (CCD) array. Rapid, single-step mechanical characterization of approximately 60 nominally identical, high-frequency resonators is demonstrated. The technique may enable sensitivity improvements over single NEMS resonators by averaging signals coming from a multitude of devices in the array. In addition, the diffraction-limited spatial resolution may allow for position-dependent read-out of NEMS sensor chips for sensing multiple analytes or spatially inhomogeneous forces.

  8. Flood adaptive traits and processes: an overview.

    PubMed

    Voesenek, Laurentius A C J; Bailey-Serres, Julia

    2015-04-01

    Unanticipated flooding challenges plant growth and fitness in natural and agricultural ecosystems. Here we describe mechanisms of developmental plasticity and metabolic modulation that underpin adaptive traits and acclimation responses to waterlogging of root systems and submergence of aerial tissues. This includes insights into processes that enhance ventilation of submerged organs. At the intersection between metabolism and growth, submergence survival strategies have evolved involving an ethylene-driven and gibberellin-enhanced module that regulates growth of submerged organs. Opposing regulation of this pathway is facilitated by a subgroup of ethylene-response transcription factors (ERFs), which include members that require low O₂ or low nitric oxide (NO) conditions for their stabilization. These transcription factors control genes encoding enzymes required for anaerobic metabolism as well as proteins that fine-tune their function in transcription and turnover. Other mechanisms that control metabolism and growth at seed, seedling and mature stages under flooding conditions are reviewed, as well as findings demonstrating that true endurance of submergence includes an ability to restore growth following the deluge. Finally, we highlight molecular insights obtained from natural variation of domesticated and wild species that occupy different hydrological niches, emphasizing the value of understanding natural flooding survival strategies in efforts to stabilize crop yields in flood-prone environments.

  9. Control algorithms of liquid crystal phased arrays used as adaptive optic correctors

    NASA Astrophysics Data System (ADS)

    Dayton, David; Gonglewski, John; Browne, Stephen

    2006-08-01

Multi-segment liquid crystal phased arrays have been demonstrated as adaptive optics elements for correction of atmospheric turbulence. High-speed dual-frequency nematic liquid crystal has sufficient bandwidth to keep up with moderate atmospheric Greenwood frequencies. However, the segmented, piston-only spatial correction provided by these devices requires novel approaches to control algorithms, especially when used with Shack-Hartmann wavefront sensors. In this presentation we explore such approaches and their effects on closed-loop Strehl ratios. A Zernike modal-based approach has produced the best results. The presentation will contain results from experiments with a Meadowlark Optics liquid crystal device.

  10. Flat-plate solar array project. Volume 5: Process development

    NASA Technical Reports Server (NTRS)

    Gallagher, B.; Alexander, P.; Burger, D.

    1986-01-01

The goal of the Process Development Area, as part of the Flat-Plate Solar Array (FSA) Project, was to develop and demonstrate solar cell fabrication and module assembly process technologies required to meet the cost, lifetime, production capacity, and performance goals of the FSA Project. R&D efforts expended by government, industry, and universities in developing processes capable of meeting the project's goals under volume production conditions are summarized. The cost goals allocated for processing were demonstrated with small-volume quantities that were extrapolated by cost analysis to large-volume production. To provide proper focus and coverage of the process development effort, four separate technology sections are discussed: surface preparation, junction formation, metallization, and module assembly.

  11. Adaptive optics for array telescopes using piston-and-tilt wave-front sensing

    NASA Technical Reports Server (NTRS)

Wizinowich, P.; McLeod, B.; Lloyd-Hart, M.; Angel, J. R. P.; Colucci, D.; Dekany, R.; McCarthy, D.; Wittman, D.; Scott-Fleming, I.

    1992-01-01

    A near-infrared adaptive optics system operating at about 50 Hz has been used to control phase errors adaptively between two mirrors of the Multiple Mirror Telescope by stabilizing the position of the interference fringe in the combined unresolved far-field image. The resultant integrated images have angular resolutions of better than 0.1 arcsec and fringe contrasts of more than 0.6. Measurements of wave-front tilt have confirmed the wavelength independence of image motion. These results show that interferometric sensing of phase errors, when combined with a system for sensing the wave-front tilt of the individual telescopes, will provide a means of achieving a stable diffraction-limited focus with segmented telescopes or arrays of telescopes.

  12. Post-digital image processing based on microlens array

    NASA Astrophysics Data System (ADS)

    Shi, Chaiyuan; Xu, Feng

    2014-10-01

Benefiting from attractive features such as compact volume, thinness, and light weight, imaging systems based on microlens arrays have become an active area of research. However, current imaging systems based on microlens arrays provide insufficient imaging quality to meet practical requirements in most applications. As a result, post-digital image processing to reconstruct a high-resolution image from the low-resolution sub-image sequence becomes particularly important. In general, post-digital image processing comprises two parts: accurate estimation of the motion parameters between the sub-images and reconstruction of the high-resolution image. In this paper, given that preprocessing of the unit images makes the edges of the reconstructed high-resolution image clearer, the low-resolution images are preprocessed before reconstruction. Then, after processing with the pixel rearrange method, a high-resolution image is obtained. From the result, we find that the edges of the reconstructed high-resolution image are clearer than without preprocessing.
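The pixel-rearrange step can be sketched in an idealized form: four low-resolution sub-images that sample a synthetic high-resolution scene at exact half-pixel (here, integer HR-grid) offsets are interleaved back onto a 2x grid. This bypasses the motion-estimation part, which in a real microlens system must be performed first; all sizes are illustrative.

```python
import numpy as np

# Idealized pixel-rearrange reconstruction: four low-resolution sub-images
# with known integer offsets on a 2x high-resolution grid are interleaved
# back into the high-resolution image (real systems must first estimate
# the sub-pixel motion parameters between sub-images).
hr = np.arange(64.0 * 64.0).reshape(64, 64)   # synthetic HR scene

# Each sub-image samples every other HR pixel with a distinct (dy, dx) offset
subs = {(dy, dx): hr[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

# Pixel rearrangement: write each sub-image back at its known offset
recon = np.zeros_like(hr)
for (dy, dx), img in subs.items():
    recon[dy::2, dx::2] = img
```

With exact offsets the rearrangement is lossless; real sub-pixel shifts require interpolation onto the high-resolution grid.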

  13. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First principles are almost always incomplete, and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network[2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC, using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns[3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed, as well as its possible use in various difficult control problems facing this community.
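The linear-training property mentioned above can be illustrated with a generic radial basis function fit: with fixed centers and widths, the output weights solve a convex linear least-squares problem in one shot. CNLSnet itself is a modified RBF network, so this is only a sketch of the convergence argument, with illustrative data and parameters.

```python
import numpy as np

# Generic RBF least-squares fit illustrating the linear-training property:
# with fixed basis centers and widths, the output weights solve a convex
# linear least-squares problem (no iterative backpropagation needed).
rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)  # noisy plant data

centers = np.linspace(-1, 1, 10)
width = 0.3
phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)  # design matrix

w, *_ = np.linalg.lstsq(phi, y, rcond=None)  # one linear solve, no iteration
pred = phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Because the solve is linear, convergence is guaranteed and fast enough for repeated refitting inside a real-time control loop.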

  14. [Cyclic interactions in the processes of adaptation regulation].

    PubMed

    Vasilevskiĭ, N N; Aleksandrova, Zh G; Suvorov, N B

    1989-01-01

Human adaptation is characterised by essential changes in the biorhythms of functional systems, which appear as changes in the sequence of their components and in the dynamics of biorhythmological cycles. These regularities, described here for the human EEG, make it possible to clearly distinguish individual-typological peculiarities in people at different stages of adaptation, as well as adaptive shifts during long-term exposure to external factors. The cyclic course of adaptive processes is regarded as a measure of adaptability. Through this multitude of biorhythms, memory is continually replenished by the brain with discrete portions of adaptogenic information, which counteracts the natural processes of memory disintegration. PMID:2816008

  15. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics is considered at the low level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for the creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrates this using either top-down or bottom-up approaches that consider the musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  16. Transmission mode adaptive beamforming for planar phased arrays and its application to 3D ultrasonic transcranial imaging

    NASA Astrophysics Data System (ADS)

    Shapoori, Kiyanoosh; Sadler, Jeffrey; Wydra, Adrian; Malyarenko, Eugene; Sinclair, Anthony; Maev, Roman G.

    2013-03-01

A new adaptive beamforming method for accurately focusing ultrasound behind highly scattering layers of human skull, and its application to 3D transcranial imaging via small-aperture planar phased arrays, are reported. Due to its undulating, inhomogeneous, porous, and highly attenuative structure, human skull bone severely distorts ultrasonic beams produced by conventional focusing methods in both imaging and therapeutic applications. Strong acoustical mismatch between the skull and brain tissues, in addition to the skull's undulating topology across the active area of a planar ultrasonic probe, can cause multiple reflections and unpredictable refraction during beamforming and imaging, significantly deflecting the probe's beam from the intended focal point. Presented here are the theoretical basis and simulation results of an adaptive beamforming method that compensates for these effects in transmission mode, accompanied by experimental verification. The probe is a custom-designed 2 MHz, 256-element matrix array with 0.45 mm element size and 0.1 mm kerf. Thanks to its small footprint, it is possible to accurately measure the profile of the skull segment in contact with the probe and feed the results into our ray-tracing program. The latter calculates new time-delay patterns adapted to the geometrical and acoustical properties of the skull phantom segment in contact with the probe. The time-delay patterns correct for refraction at the skull-brain boundary and bring the distorted beam back to its intended focus. The algorithms were implemented on the ultrasound open platform ULA-OP (developed at the University of Florence).
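As context for the refraction correction described above, the baseline (homogeneous-medium) geometric focusing delays for a planar array can be computed directly. The element pitch below matches the stated 0.45 mm element size plus 0.1 mm kerf; the sub-array size, sound speed, and focal depth are illustrative assumptions, and the paper's ray tracing then adjusts such delays for the skull.

```python
import numpy as np

# Baseline geometric focusing delays for a planar array in a homogeneous
# medium (the paper's ray tracing further corrects these delays for skull
# refraction). Element pitch = 0.45 mm element + 0.1 mm kerf.
c = 1540.0                            # speed of sound in soft tissue, m/s
pitch = 0.55e-3                       # element pitch, m
idx = np.arange(16) - 7.5             # 16 x 16 sub-array, centered indices
ex, ey = np.meshgrid(idx * pitch, idx * pitch)
focus = np.array([0.0, 0.0, 30e-3])   # on-axis focal point, 30 mm deep

dist = np.sqrt((ex - focus[0]) ** 2 + (ey - focus[1]) ** 2 + focus[2] ** 2)
delays = (dist.max() - dist) / c      # fire the farthest element first
```

Elements nearest the focus get the largest delay so that all wavefronts arrive at the focal point simultaneously.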

  17. Solution processed semiconductor alloy nanowire arrays for optoelectronic applications

    NASA Astrophysics Data System (ADS)

    Shimpi, Paresh R.

In this dissertation, we use the ZnO nanowire as a model system to investigate the potential of solution routes for bandgap engineering in semiconductor nanowires. Excitingly, successful Mg alloying into ZnO nanowire arrays has been achieved using a two-step sequential hydrothermal method at low temperature (<155°C) without a post-annealing process. Both room-temperature and 40 K photoluminescence (PL) spectroscopy revealed enhanced and blue-shifted near-band-edge ultraviolet (NBE UV) emission in the Mg-alloyed ZnO (ZnMgO) nanowire arrays compared with ZnO nanowires. The specific template of densely packed ZnO nanowires is found to be instrumental in achieving Mg alloying in a low-temperature solution process. By optimizing the density of ZnO nanowires and the precursor concentration, 8-10 at.% Mg content has been achieved in ZnMgO nanowires. Post-annealing treatment was conducted in oxygen-rich and oxygen-deficient environments at different temperatures and time durations on silicon and quartz substrates in order to study the structural and optical property evolution in ZnMgO nanowire arrays. Vacuum-annealed ZnMgO nanowires on both substrates retained their hexagonal structures, and PL results showed enhanced but red-shifted NBE UV emission compared to ZnO nanowires with the visible emission nearly suppressed, suggesting a reduced defect concentration and improved crystallinity of the nanowires. In contrast, for ambient-annealed ZnMgO nanowires on silicon substrates, as the annealing temperature increased from 400°C to 900°C, the intensity of the visible emission peak across the blue-green-yellow-red band (˜400-660 nm) increased whereas the intensity of the NBE UV peak decreased and was completely quenched. This might be due to interface diffusion of oxidized Si (SiOx) and the formation of (Zn,Mg)1.7SiO4 epitaxially overcoated around individual ZnMgO nanowires. On the other hand, ambient-annealed ZnMgO nanowires grown on quartz showed a ˜6-10 nm blue-shift in

  18. SAR processing with stepped chirps and phased array antennas.

    SciTech Connect

    Doerry, Armin Walter

    2006-09-01

Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with its own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.
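The segment-assembly idea can be sketched numerically. This is an illustrative complex-baseband simulation, not the paper's radar implementation: sample rate, chirp rate, and segment count are arbitrary, and the key point is the per-segment start frequency and phase offset that make the concatenation phase-continuous.

```python
import numpy as np

# Sketch: assemble an equivalent wideband LFM chirp from narrow-band chirp
# segments at stepped center frequencies. Each segment carries the start
# frequency and accumulated phase of its portion of the full chirp.
fs = 1e6            # sample rate, Hz
seg_dur = 1e-3      # duration of each narrow-band segment, s
n_seg = 4           # number of stepped segments
k = 1e8             # common chirp rate, Hz/s

n_samp = int(round(fs * seg_dur))
t_seg = np.arange(n_samp) / fs
segments = []
for m in range(n_seg):
    f0 = m * k * seg_dur                       # stepped start frequency
    phi0 = np.pi * k * (m * seg_dur) ** 2      # phase offset for continuity
    seg = np.exp(1j * (2 * np.pi * f0 * t_seg + np.pi * k * t_seg**2 + phi0))
    segments.append(seg)
wideband = np.concatenate(segments)

# Reference: a single LFM chirp spanning the full duration
t_full = np.arange(n_samp * n_seg) / fs
reference = np.exp(1j * np.pi * k * t_full**2)
err = np.max(np.abs(wideband - reference))
```

Because each segment's phase expansion matches the full chirp's phase on its interval, the concatenation reproduces the wideband chirp to floating-point precision.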

  19. Quantitative genetic study of the adaptive process.

    PubMed

    Shaw, R G; Shaw, F H

    2014-01-01

The additive genetic variance with respect to absolute fitness, VA(W), divided by mean absolute fitness, W̄, sets the rate of ongoing adaptation. Fisher's key insight yielding this quantitative prediction of adaptive evolution, known as the Fundamental Theorem of Natural Selection, is well appreciated by evolutionists. Nevertheless, extremely scant information about VA(W) is available for natural populations. Consequently, the capacity for fitness increase via natural selection is unknown. Particularly in the current context of rapid environmental change, which is likely to reduce fitness directly and, consequently, the size and persistence of populations, the urgency of advancing understanding of immediate adaptive capacity is extreme. We here explore reasons for the dearth of empirical information about VA(W), despite its theoretical renown and critical evolutionary role. Of these reasons, we suggest that expectations that VA(W) is negligible, in general, together with severe statistical challenges of estimating it, may largely account for the limited empirical emphasis on it. To develop insight into the dynamics of VA(W) in a changing environment, we have conducted individual-based genetically explicit simulations. We show that, as optimizing selection on a trait changes steadily over generations, VA(W) can grow considerably, supporting more rapid adaptation than would the VA(W) of the base population. We call for direct evaluation of VA(W) and W̄ in support of prediction of rates of adaptive evolution, and we advocate for the use of aster modeling as a rigorous basis for achieving this goal.

  20. Enhanced Processing for a Towed Array Using an Optimal Noise Canceling Approach

    SciTech Connect

    Sullivan, E J; Candy, J V

    2005-07-21

Noise self-generated by a surface ship towing an array in search of a weak target presents a major problem for the signal processing, especially if broadband techniques are being employed. In this paper we discuss the development and application of an adaptive noise canceling processor capable of extracting the weak far-field acoustic target in a noisy ocean acoustic environment. The fundamental idea for this processor is to use a model-based approach incorporating both target and ship noise. Here we briefly describe the underlying theory and then demonstrate through simulation how effectively the canceller and target enhancer perform. The adaptivity of the processor not only enables the "tracking" of the canceller coefficients, but also the estimation of target parameters for localization. This approach, which is termed "joint" cancellation and enhancement, produces the optimal estimate of both in a minimum (error) variance sense.
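As a simpler point of comparison with the model-based processor described above, a classical LMS noise canceller with a reference channel can be sketched as follows. The noise path, tap count, and step size are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# LMS noise-canceller sketch: a reference measurement of the tow-ship noise
# is adaptively filtered and subtracted from the primary channel, leaving
# an estimate of the weak target signal.
rng = np.random.default_rng(0)
n = 20000
target = 0.05 * np.sin(2 * np.pi * 0.01 * np.arange(n))  # weak target tone
noise = rng.standard_normal(n)                           # ship-noise reference
h_true = np.array([0.8, -0.4, 0.2, 0.1])                 # unknown noise path
primary = target + np.convolve(noise, h_true)[:n]

taps, mu = 8, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    x = noise[i - taps + 1:i + 1][::-1]   # most recent reference samples
    e = primary[i] - w @ x                # canceller output (target estimate)
    w += 2 * mu * e * x                   # LMS weight update
    out[i] = e
```

After convergence the adapted taps approximate the noise path, and the canceller output is dominated by the weak target rather than the ship noise.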

  1. Room geometry inference based on spherical microphone array eigenbeam processing.

    PubMed

    Mabande, Edwin; Kowalczyk, Konrad; Sun, Haohai; Kellermann, Walter

    2013-10-01

    The knowledge of parameters characterizing an acoustic environment, such as the geometric information about a room, can be used to enhance the performance of several audio applications. In this paper, a novel method for three-dimensional room geometry inference based on robust and high-resolution beamforming techniques for spherical microphone arrays is presented. Unlike other approaches that are based on the measurement and processing of multiple room impulse responses, here, microphone array signal processing techniques for uncontrolled broadband acoustic signals are applied. First, the directions of arrival (DOAs) and time differences of arrival (TDOAs) of the direct signal and room reflections are estimated using high-resolution robust broadband beamforming techniques and cross-correlation analysis. In this context, the main challenges include the low reflected-signal to background-noise power ratio, the low energy of reflected signals relative to the direct signal, and their strong correlation with the direct signal and among each other. Second, the DOA and TDOA information is combined to infer the room geometry using geometric relations. The high accuracy of the proposed room geometry inference technique is confirmed by experimental evaluations based on both simulated and measured data for moderately reverberant rooms. PMID:24116416
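The TDOA step can be illustrated in isolation. In this toy, the source waveform is known and white, so a reflection appears as a secondary cross-correlation peak at its relative delay; the paper's challenge is doing this robustly for uncontrolled, mutually correlated broadband signals, which this sketch does not attempt.

```python
import numpy as np

# Toy TDOA estimate: cross-correlate a microphone signal against the source
# signal; the zero-lag peak is the direct path, and the secondary peak at a
# positive lag gives the reflection's delay. All parameters are illustrative.
rng = np.random.default_rng(1)
n = 4096
direct = rng.standard_normal(n)                  # broadband source signal
true_delay = 37                                  # reflection delay, samples
mic = direct.copy()
mic[true_delay:] += 0.4 * direct[:-true_delay]   # attenuated reflection
mic += 0.05 * rng.standard_normal(n)             # background noise

corr = np.correlate(mic, direct, mode="full")
lags = np.arange(-n + 1, n)                      # lag axis for 'full' mode
pos = lags > 0                                   # skip the direct-path peak
est_delay = lags[pos][np.argmax(corr[pos])]
```

Combining such delay estimates with DOA estimates for each reflection is what lets the geometric relations recover the wall positions.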

  2. A systematic process for adaptive concept exploration

    NASA Astrophysics Data System (ADS)

    Nixon, Janel Nicole

    several common challenges to the creation of quantitative modeling and simulation environments. Namely, a greater number of alternative solutions imply a greater number of design variables as well as larger ranges on those variables. This translates to a high-dimension combinatorial problem. As the size and dimensionality of the solution space gets larger, the number of physically impossible solutions within that space greatly increases. Thus, the ratio of feasible design space to infeasible space decreases, making it much harder to not only obtain a good quantitative sample of the space, but to also make sense of that data. This is especially the case in the early stages of design, where it is not practical to dedicate a great deal of resources to performing thorough, high-fidelity analyses on all the potential solutions. To make quantitative analyses feasible in these early stages of design, a method is needed that allows for a relatively sparse set of information to be collected quickly and efficiently, and yet, that information needs to be meaningful enough with which to base a decision. The method developed to address this need uses a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The SPACE method uses a four-part sampling scheme to efficiently uncover the parametric relationships between the design variables and responses. 
Step 1 aims to identify the location of infeasible space within the region of interest using an initial

  3. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
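The refinement-criterion idea can be sketched in one dimension: cells where the solution jump (a proxy for the gradient) exceeds a tolerance are bisected. The profile below mimics a steep Fisher-Kolmogorov front; only the mesh side is illustrated, not the stochastic transition rates, and the tolerance is an arbitrary choice.

```python
import numpy as np

# Toy gradient-based refinement criterion: bisect cells whose solution
# jump exceeds a tolerance, concentrating resolution at the steep front.
x = np.linspace(0.0, 1.0, 33)                 # coarse uniform mesh nodes
u = 1.0 / (1.0 + np.exp(80.0 * (x - 0.5)))    # steep front centered at 0.5

tol = 0.1
new_nodes = [x[0]]
for i in range(len(x) - 1):
    if abs(u[i + 1] - u[i]) > tol:            # refinement criterion
        new_nodes.append(0.5 * (x[i] + x[i + 1]))  # bisect flagged cell
    new_nodes.append(x[i + 1])
refined = np.array(new_nodes)
```

In the paper's setting the diffusion transition rates would then be re-derived on the locally refined cells as the front moves.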

  4. A facile processing way of silica needle arrays with tunable orientation by tube arrays fabrication and etching method

    SciTech Connect

    Zhu Mingwei; Gao Haigen; Li Hongwei; Xu Jiao; Chen Yanfeng

    2010-03-15

    A simple method to fabricate silica micro/nano-needle arrays (SNAs) is presented based on tube-etching mechanism. Using silica fibers as templates, highly aligned and free-standing needle arrays are created over large area by simple processes of polymer infiltration, cutting, chemical etching and polymer removal. Their sizes and orientations can be arbitrarily and precisely tuned by simply selecting fiber sizes and the cutting directions, respectively. This technique enables the needle arrays with special morphology to be fabricated in a greatly facile way, thereby offers them the potentials in various applications, such as optic, energy harvesting, sensors, etc. As a demonstration, the super hydrophobic property of PDMS treated SNAs is examined. - Graphical abstract: Silica needle arrays are fabricated by tube arrays fabrication and etching method. They show super hydrophobic property after being treated with PDMS.

  5. High-resolution optical coherence tomography using self-adaptive FFT and array detection

    NASA Astrophysics Data System (ADS)

    Zhao, Yonghua; Chen, Zhongping; Xiang, Shaohua; Ding, Zhihua; Ren, Hongwu; Nelson, J. Stuart; Ranka, Jinendra K.; Windeler, Robert S.; Stentz, Andrew J.

    2001-05-01

We developed a novel optical coherence tomography (OCT) system that utilizes broadband continuum generation for high axial resolution and a high numerical-aperture (NA) objective for high lateral resolution (<5 μm). The optimal focusing point was dynamically compensated during axial scanning so that it is kept at the same position as the point whose optical path length equals that in the reference arm. This gives a uniform focal spot size (<5 μm) at different depths. A new self-adaptive fast Fourier transform (FFT) algorithm was developed to digitally demodulate the interference fringes. The system employed a four-channel detector array for speckle reduction, which significantly improved the image's signal-to-noise ratio.
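The fringe-demodulation idea can be sketched with a standard FFT-based analytic-signal (Hilbert) computation. The paper's self-adaptive algorithm handles varying fringe carriers; this toy uses a fixed simulated carrier, locates it as the spectral peak, and recovers the coherence envelope.

```python
import numpy as np

# FFT-based demodulation sketch: recover the coherence envelope of a
# simulated OCT interference fringe via the analytic signal.
fs = 10000.0
t = np.arange(2048) / fs
envelope = np.exp(-((t - 0.1) / 0.02) ** 2)     # coherence envelope
fringes = envelope * np.cos(2 * np.pi * 1000.0 * t)

n = len(t)
spec = np.fft.fft(fringes)
freqs = np.fft.fftfreq(n, 1 / fs)
peak = freqs[np.argmax(np.abs(spec[:n // 2]))]  # detected carrier frequency

# Analytic signal: zero the negative frequencies, double the positive ones
h = np.zeros(n)
h[0] = 1.0
h[1:n // 2] = 2.0
h[n // 2] = 1.0
env_est = np.abs(np.fft.ifft(spec * h))         # demodulated envelope
```

Taking the magnitude of the analytic signal strips the carrier and leaves the envelope, which is what an OCT A-scan displays.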

  6. Adaptive Array for Weak Interfering Signals: Geostationary Satellite Experiments. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Steadman, Karl

    1989-01-01

The performance of an experimental adaptive array is evaluated using signals from an existing geostationary satellite interference environment. To do this, an earth station antenna was built to receive signals from various geostationary satellites. In these experiments the received signals have a frequency of approximately 4 GHz (C-band) and a bandwidth of over 35 MHz. These signals are downconverted to a 69 MHz intermediate frequency in the experimental system. Using the downconverted signals, the performance of the experimental system for various signal scenarios is evaluated. In this situation, due to the inherent thermal noise, qualitative rather than quantitative test results are presented. It is shown that the experimental system can null up to two interfering signals well below the noise level. However, to avoid cancellation of the desired signal, the use of a steering vector is needed. Various methods to obtain an estimate of the steering vector are proposed.
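The steering-vector point can be illustrated with a narrowband MVDR beamformer for a uniform linear array: constraining the weights to unit gain in the look direction prevents the adaptive processor from cancelling the desired signal while it nulls interference. This is a generic textbook sketch, not the experimental system above; all directions and powers are illustrative.

```python
import numpy as np

# MVDR-style sketch: w = R^{-1} a / (a^H R^{-1} a) protects the look
# direction (unit gain) while adaptively nulling the interferer.
rng = np.random.default_rng(4)
m = 8                                     # elements, half-wavelength spacing

def steer(theta):
    # array response for arrival angle theta (radians from broadside)
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

a_sig = steer(0.0)                        # desired look direction
a_int = steer(0.5)                        # interferer direction

n_snap = 2000
interf = 10 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = (rng.standard_normal((m, n_snap))
         + 1j * rng.standard_normal((m, n_snap))) / np.sqrt(2)
x = np.outer(a_int, interf) + noise       # interference-plus-noise snapshots

R = x @ x.conj().T / n_snap               # sample covariance
Rinv = np.linalg.inv(R)
w = Rinv @ a_sig / (a_sig.conj() @ Rinv @ a_sig)

gain_sig = abs(w.conj() @ a_sig)          # protected look direction
gain_int = abs(w.conj() @ a_int)          # adapted null on the interferer
```

Without the steering-vector constraint, a power-minimizing adaptive array with the desired signal present would suppress it along with the interference.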

  7. Performance of a modified feedback loop adaptive array with TVRO satellite signals

    NASA Technical Reports Server (NTRS)

    Steadman, K.; Gupta, I. J.; Walton, E. K.

    1990-01-01

    The performance of an experimental adaptive antenna array system is evaluated using television-receive-only (TVRO) satellite signals. The experimental system is a sidelobe canceler with two auxiliary channels. Modified feedback loops are used to enhance the suppression of weak interfering signals. The modified feedback loops use two spatially separate antennas, each with an individual amplifier for each auxiliary channel. Thus, the experimental system uses five antenna elements. Instead of using five separate antennas, a reflector antenna with multiple feeds is used to receive signals from various TVRO satellites. The details of the earth station are given. It is shown that the experimental system can null up to two signals originating from interfering TVRO satellites while receiving the signals from a desired TVRO satellite.

  8. A model for the distributed storage and processing of large arrays

    NASA Technical Reports Server (NTRS)

    Mehrota, P.; Pratt, T. W.

    1983-01-01

    A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.

  9. Adaptive Processes in Thalamus and Cortex Revealed by Silencing of Primary Visual Cortex during Contrast Adaptation.

    PubMed

    King, Jillian L; Lowe, Matthew P; Stover, Kurt R; Wong, Aimee A; Crowder, Nathan A

    2016-05-23

    Visual adaptation illusions indicate that our perception is influenced not only by the current stimulus but also by what we have seen in the recent past. Adaptation to stimulus contrast (the relative luminance created by edges or contours in a scene) induces the perception of the stimulus fading away and increases the contrast detection threshold in psychophysical tests [1, 2]. Neural correlates of contrast adaptation have been described throughout the visual system including the retina [3], dorsal lateral geniculate nucleus (dLGN) [4, 5], primary visual cortex (V1) [6], and parietal cortex [7]. The apparent ubiquity of adaptation at all stages raises the question of how this process cascades across brain regions [8]. Focusing on V1, adaptation could be inherited from pre-cortical stages, arise from synaptic depression at the thalamo-cortical synapse [9], or develop locally, but what is the weighting of these contributions? Because contrast adaptation in mouse V1 is similar to classical animal models [10, 11], we took advantage of the optogenetic tools available in mice to disentangle the processes contributing to adaptation in V1. We disrupted cortical adaptation by optogenetically silencing V1 and found that adaptation measured in V1 now resembled that observed in dLGN. Thus, the majority of adaptation seen in V1 neurons arises through local activity-dependent processes, with smaller contributions from dLGN inheritance and synaptic depression at the thalamo-cortical synapse. Furthermore, modeling indicates that divisive scaling of the weakly adapted dLGN input can predict some of the emerging features of V1 adaptation.

  10. Locally adaptive regression filter-based infrared focal plane array non-uniformity correction

    NASA Astrophysics Data System (ADS)

    Li, Jia; Qin, Hanlin; Yan, Xiang; Huang, He; Zhao, Yingjuan; Zhou, Huixin

    2015-10-01

Due to limitations of the manufacturing technology, the response rates of the individual infrared detector units to the same infrared radiation intensity are not identical. As a result, non-uniformity of the infrared focal plane array, also known as fixed pattern noise (FPN), is generated. Correcting the non-uniformity in the infrared image is a promising way to solve this problem, and many non-uniformity correction (NUC) methods have been proposed. However, they suffer from defects such as slow convergence, ghosting, and scene degradation. To overcome these defects, a novel non-uniformity correction method based on a locally adaptive regression filter is proposed. First, the locally adaptive regression method is used to separate the infrared image into a base layer containing the main scene information and a detail layer containing detailed scene content together with the FPN. Then, the detail layer sequence is filtered by a non-linear temporal filter to estimate the non-uniformity. Finally, a high-quality infrared image is obtained by subtracting the non-uniformity component from the original image. The experimental results show that the proposed method significantly reduces ghosting and scene degradation, and its corrections are superior to those of THPF-NUC and NN-NUC in both subjective visual quality and objective evaluation indices.
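As a rough illustration of the correction pipeline described above, the sketch below substitutes a plain local-mean smoother for the paper's locally adaptive regression filter and an exponential moving average for its non-linear temporal filter; all parameters are illustrative:

```python
import numpy as np

# Hedged sketch of the base/detail NUC pipeline (not the paper's filters):
# base layer from a local spatial mean, FPN estimated by temporally
# filtering the detail layer, then subtracted from the raw frame.

def nuc_step(frame, fpn_est, alpha=0.05, k=5):
    pad = k // 2
    padded = np.pad(frame, pad, mode='edge')
    base = np.zeros_like(frame, dtype=float)
    for di in range(k):                       # k x k local mean (stand-in
        for dj in range(k):                   # for the regression filter)
            base += padded[di:di + frame.shape[0], dj:dj + frame.shape[1]]
    base /= k * k
    detail = frame - base                     # detail content + FPN
    fpn_est = (1 - alpha) * fpn_est + alpha * detail  # temporal filtering
    return frame - fpn_est, fpn_est           # corrected frame, updated FPN

rng = np.random.default_rng(0)
fpn = rng.normal(0, 5, (32, 32))              # synthetic fixed pattern noise
fpn_est = np.zeros((32, 32))
for _ in range(200):
    scene = rng.normal(100, 1, (32, 32))      # changing scene
    corrected, fpn_est = nuc_step(scene + fpn, fpn_est)
```

After a couple of hundred frames the residual pattern noise in `corrected` is a fraction of the original FPN amplitude.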

  11. Adaptive Constructive Processes and the Future of Memory

    ERIC Educational Resources Information Center

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…

  12. Adaptive Noise Suppression Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Kozel, David; Nelson, Richard

    1996-01-01

A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames, an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
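A minimal sketch of SNR-dependent spectral subtraction in the spirit of the abstract; the actual algorithm's thresholds and squelch rule are not specified there, so the over-subtraction schedule and floor below are illustrative assumptions:

```python
import numpy as np

# Sketch: the subtraction factor grows as the estimated SNR drops, and
# low-amplitude residuals are squelched to a small spectral floor.

def spectral_subtract(frame, noise_mag, floor=0.01):
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    snr_db = 10 * np.log10(np.sum(mag**2) / (np.sum(noise_mag**2) + 1e-12))
    over = 1.0 if snr_db > 20 else 1.0 + (20 - snr_db) / 20  # more subtraction at low SNR
    clean = np.maximum(mag - over * noise_mag, floor * mag)  # squelch residuals
    return np.fft.irfft(clean * np.exp(1j * phase), len(frame))

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
speech = np.sin(2 * np.pi * 0.0625 * t)       # stand-in "speech" tone
noise = 0.3 * rng.standard_normal(n)
# Noise magnitude estimate from a separate "unvoiced" frame.
noise_mag = np.abs(np.fft.rfft(0.3 * rng.standard_normal(n)))
out = spectral_subtract(speech + noise, noise_mag)
```

The residual error of `out` against the clean tone comes out well below the raw noise power.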

  13. Adaptive Memory: Is Survival Processing Special?

    ERIC Educational Resources Information Center

    Nairne, James S.; Pandeirada, Josefa N. S.

    2008-01-01

    Do the operating characteristics of memory continue to bear the imprints of ancestral selection pressures? Previous work in our laboratory has shown that human memory may be specially tuned to retain information processed in terms of its survival relevance. A few seconds of survival processing in an incidental learning context can produce recall…

  14. Array processing of teleseismic body waves with the USArray

    NASA Astrophysics Data System (ADS)

    Pavlis, Gary L.; Vernon, Frank L.

    2010-07-01

We introduce a novel method of array processing for measuring arrival times and relative amplitudes of teleseismic body waves recorded on large-aperture seismic arrays. The algorithm uses a robust stacking procedure with three features: (1) an initial 'reference' signal is required for initial alignment by cross-correlation; (2) a robust stacking method is used that penalizes signals that are not well matched to the stack; and (3) an iterative procedure alternates between cross-correlation with the current stack and the robust stacking algorithm. This procedure always converges in a few iterations, making it well suited for interactive processing. We describe concepts behind a graphical interface developed to utilize this algorithm for processing body waves. We found it was important to compute several data quality metrics and allow the analyst to sort on these metrics. This is combined with a 'pick cutoff' function that simplifies data editing. Application of the algorithm to data from the USArray shows four features of this method. (1) The program can produce results superior to those produced by a skilled analyst in approximately 1/5 of the time required for conventional interactive picking. (2) We show an illustrative example comparing residuals from S and SS for an event from northern Chile. The SS data show a remarkable ±10 s residual pattern across the USArray that we argue is caused by propagation approximately parallel to the subduction zones in Central and South America. (3) Quality metrics were found to be useful in identifying data problems. (4) We analyzed 50 events from the Tonga-Fiji region to compare residuals produced by this new algorithm with those measured by interactive picking. Both sets of residuals are approximately normally distributed, but corrupted by about 5% outliers. The scatter of the data estimated by waveform correlation was found to be approximately 1/2 that of the hand-picked data. The outlier populations of both data sets are
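The iterate-align-and-stack loop can be sketched as follows; the paper's robust penalty function is not given in the abstract, so a simple inverse-misfit weighting stands in for it:

```python
import numpy as np

# Sketch: cross-correlate each trace with the current stack, shift it into
# alignment, then rebuild the stack with weights that down-weight traces
# poorly matched to it, and iterate.

def align_and_stack(traces, ref, n_iter=3):
    n = traces.shape[1]
    stack = ref.copy()
    lags = np.zeros(len(traces), dtype=int)
    for _ in range(n_iter):
        aligned = []
        for i, tr in enumerate(traces):
            xc = np.correlate(tr, stack, mode='full')  # align to current stack
            lags[i] = xc.argmax() - (n - 1)
            aligned.append(np.roll(tr, -lags[i]))
        aligned = np.array(aligned)
        resid = np.linalg.norm(aligned - stack, axis=1)
        w = 1.0 / (resid + 1e-9)                       # penalize mismatched traces
        stack = (w[:, None] * aligned).sum(0) / w.sum()
    return stack, lags

rng = np.random.default_rng(2)
n = 200
wavelet = np.exp(-0.5 * ((np.arange(n) - 100) / 5.0) ** 2)
true_lags = [0, 4, -3, 7]
traces = np.array([np.roll(wavelet, L) + 0.05 * rng.standard_normal(n)
                   for L in true_lags])
stack, lags = align_and_stack(traces, wavelet)
```

On this synthetic gather the recovered lags match the true shifts and the stack closely reproduces the wavelet.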

  15. Self-adapting root-MUSIC algorithm and its real-valued formulation for acoustic vector sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Guo-jun; Xue, Chen-yang; Zhang, Wen-dong; Xiong, Ji-jun

    2012-12-01

In this paper, based on the root-MUSIC algorithm for acoustic pressure sensor arrays, a new self-adapting root-MUSIC algorithm for acoustic vector sensor arrays is proposed that adaptively selects the lead orientation vector; a real-valued formulation using forward-backward (FB) smoothing and a real-valued inverse covariance matrix is also proposed, which reduces the computational complexity and can distinguish coherent signals. Simulation results show that both new algorithms outperform the traditional MUSIC algorithm in direction-of-arrival (DOA) estimation at low signal-to-noise ratio (SNR), and experiments with a MEMS vector hydrophone array in lake trials demonstrate the engineering practicability of the two new algorithms.
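For reference, the conventional root-MUSIC baseline that this record extends can be sketched for a half-wavelength uniform linear pressure array (the vector-sensor and real-valued variants of the paper are not reproduced here):

```python
import numpy as np

# Root-MUSIC sketch: project onto the noise subspace, form the null-spectrum
# polynomial from the diagonal sums of E_n E_n^H, and read DOAs from the
# roots closest to the unit circle.

def root_music(X, n_sources):
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)
    En = vecs[:, :M - n_sources]               # noise subspace (smallest eigenvalues)
    C = En @ En.conj().T
    # Polynomial coefficients are the sums along the diagonals of C,
    # ordered from degree 2M-2 down to 0.
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1]           # one of each reciprocal pair
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1))][:n_sources]
    return np.degrees(np.arcsin(np.angle(roots) / np.pi))

rng = np.random.default_rng(3)
M, N = 8, 400
angles = np.radians([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
doas = np.sort(root_music(X, 2))
```

With this high-SNR synthetic data the two estimated DOAs land within a fraction of a degree of -20° and 30°.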

  16. Sensory Processing Subtypes in Autism: Association with Adaptive Behavior

    ERIC Educational Resources Information Center

    Lane, Alison E.; Young, Robyn L.; Baker, Amy E. Z.; Angley, Manya T.

    2010-01-01

    Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes…

  17. Forward Interference Avoidance in Ad Hoc Communications Using Adaptive Array Antennas

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Tomofumi; Kamiya, Yukihiro; Fujii, Takeo; Suzuki, Yasuo

Wireless ad hoc communications such as ad hoc networks have been attracting researchers' attention. They are expected to become a key technology for “ubiquitous” networking because of the ability to configure wireless links by nodes autonomously, without any centralized control facilities. Adaptive array antennas (AAA) have been expected to improve the network efficiency by taking advantage of their adaptive beamforming capability. However, AAA is not a panacea. Its interference cancellation capability is limited by the degree-of-freedom (DOF) and the angular resolution as a function of the number of element antennas. Application of AAA without attending to these problems can degrade the efficiency of the network. We consider wireless ad hoc communication as a target application for AAA, exploiting its interference cancellation capability. The low DOF and insufficient resolution are crucial problems compared to other wireless systems, since there is no centralized facility to control the nodes to avoid interference in such systems. A number of interferences might impinge on a node from any direction of arrival (DOA) without any timing control. In this paper, focusing on such limitations of AAA applied in ad hoc communications, we propose a new scheme, Forward Interference Avoidance (FIA), using AAA for ad hoc communications in order to avoid the problems caused by the limitations of the AAA capability. It enables nodes to avoid interfering with other nodes, thereby increasing the number of co-existent wireless links. The performance improvement of ad hoc communications in terms of the number of co-existent links is investigated through computer simulations.

  18. Damage Detection in Composite Structures with Wavenumber Array Data Processing

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

Guided ultrasonic waves (GUW) have the potential to be an efficient and cost-effective method for rapid damage detection and quantification of large structures. Attractive features include sensitivity to a variety of damage types and the capability of traveling relatively long distances. They have proven to be an efficient approach for crack detection and localization in isotropic materials. However, techniques must be pushed beyond isotropic materials in order to be valid for composite aircraft components. This paper presents our study on GUW propagation and interaction with delamination damage in composite structures using wavenumber array data processing, together with advanced wave propagation simulations. A parallel elastodynamic finite integration technique (EFIT) is used for the example simulations. A multi-dimensional Fourier transform is used to convert time-space wavefield data into the frequency-wavenumber domain. Wave propagation in the wavenumber-frequency domain shows clear distinction among the guided wave modes that are present, which allows a guided wave mode to be extracted through filtering and reconstruction techniques. The presence of delamination causes a corresponding spectral change. Results from 3D CFRP guided wave simulations with delamination damage in flat-plate specimens are used to study wave interaction with structural defects.
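The time-space to frequency-wavenumber mapping described above amounts to a multi-dimensional FFT; a minimal sketch with an illustrative single-mode wavefield (all sampling parameters are made up for the example):

```python
import numpy as np

# Sketch: a 2-D FFT converts a time-space wavefield into the
# frequency-wavenumber (f-k) domain, where a single propagating wave
# appears as a concentrated peak whose slope gives the phase velocity.

nt, nx, dt, dx = 256, 64, 1e-6, 1e-3         # samples, sensors, steps (illustrative)
f0, c = 125e3, 2000.0                         # 125 kHz wave at 2000 m/s
t = np.arange(nt) * dt
x = np.arange(nx) * dx
wavefield = np.sin(2 * np.pi * f0 * (t[:, None] - x[None, :] / c))

FK = np.fft.fftshift(np.fft.fft2(wavefield))  # time-space -> f-k domain
freqs = np.fft.fftshift(np.fft.fftfreq(nt, dt))
wavenums = np.fft.fftshift(np.fft.fftfreq(nx, dx))
i, j = np.unravel_index(np.argmax(np.abs(FK)), FK.shape)
f_peak, k_peak = abs(freqs[i]), abs(wavenums[j])
phase_velocity = f_peak / k_peak              # recovers c for this mode
```

Mode filtering then amounts to masking regions of `FK` around a chosen dispersion curve before inverting the transform.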

  19. A Systolic Array Architecture For Processing Sonar Narrowband Signals

    NASA Astrophysics Data System (ADS)

    Mintzer, L.

    1988-07-01

Modern sonars rely more upon visual than aural contacts. Lofargrams presenting a time history of hydrophone spectral content are a standard means of observing narrowband signals. However, the frequency signal "tracks" are often embedded in noise, sometimes rendering their detection difficult and time consuming. Image enhancement algorithms applied to the 'grams can yield improvements in target data presented to the observer. A systolic array based on the NCR Geometric Arithmetic Parallel Processor (GAPP), a CMOS chip that contains 72 single-bit processors controlled in parallel, has been designed for evaluating image enhancement algorithms. With the processing nodes of the GAPP bearing a one-to-one correspondence with the pixels displayed on the 'gram, a very efficient SIMD architecture is realized. The low data rate of sonar displays, i.e., one line of 1000-4000 pixels per second, and the 10-MHz control clock of the GAPP provide the possibility of 10^7 operations per pixel in real-time applications. However, this architecture cannot handle data-dependent operations efficiently. To this end a companion processor capable of efficiently executing branch operations has been designed. A simple spoke filter is simulated and applied to laboratory data with noticeable improvements in the resulting lofargram display.

  20. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals named adaptive robustness is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combining strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  1. Model-based Processing of Micro-cantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Lee, C L; Rudd, R E; Burnham, A K

    2004-11-17

We develop a model-based processor (MBP) for a micro-cantilever array sensor to detect target species in solution. After discussing the generalized framework for this problem, we develop the specific model used in this study. We perform a proof-of-concept experiment, fit the model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: (1) averaged deflection data, and (2) multi-channel data. In both cases the evaluation proceeds by first performing a model-based parameter estimation to extract the model parameters, next performing a Gauss-Markov simulation, designing the optimal MBP and finally applying it to measured experimental data. The simulation is used to evaluate the performance of the MBP in the multi-channel case and compare it to a "smoother" ("averager") typically used in this application. It was shown that the MBP not only provides a significant gain (~80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, though it includes a correctable systematic bias error. The project's primary accomplishment was the successful application of model-based processing to signals from micro-cantilever arrays: 40-60 dB improvement vs. the smoother algorithm was demonstrated. This result was achieved through the development of appropriate mathematical descriptions for the chemical and mechanical phenomena, and incorporation of these descriptions directly into the model-based signal processor. A significant challenge was the development of the framework which would maximize the usefulness of the signal processing algorithms while ensuring the accuracy of the mathematical description of the chemical-mechanical signal. Experimentally, the difficulty was to identify and characterize the non
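The model-based versus smoother comparison can be illustrated on a scalar Gauss-Markov process, with a Kalman filter standing in for the MBP; this is a deliberate simplification of the paper's multi-channel processor, and all parameters are illustrative:

```python
import numpy as np

# Sketch: a Kalman filter that knows the Gauss-Markov state model is
# compared against a plain moving-average "smoother" on the same data.

rng = np.random.default_rng(4)
a, q, r, n = 0.99, 0.01, 1.0, 2000
x = np.zeros(n)
for k in range(1, n):                          # Gauss-Markov state process
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(n)    # noisy measurements

# Scalar Kalman filter (the "model-based processor").
xh, p, est = 0.0, 1.0, np.zeros(n)
for k in range(n):
    xh, p = a * xh, a * a * p + q               # predict
    g = p / (p + r)                             # Kalman gain
    xh, p = xh + g * (y[k] - xh), (1 - g) * p   # update
    est[k] = xh

smooth = np.convolve(y, np.ones(5) / 5, mode='same')  # simple "smoother"
mse_kf = float(np.mean((est - x) ** 2))
mse_sm = float(np.mean((smooth - x) ** 2))
```

Because the filter exploits the state model rather than just averaging, its mean-square error comes out well below the moving average's, which is the qualitative effect the abstract reports.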

  2. A self-adaptive thermal switch array for rapid temperature stabilization under various thermal power inputs

    NASA Astrophysics Data System (ADS)

    Geng, Xiaobao; Patel, Pragnesh; Narain, Amitabh; Desheng Meng, Dennis

    2011-08-01

    A self-adaptive thermal switch array (TSA) based on actuation by low-melting-point alloy droplets is reported to stabilize the temperature of a heat-generating microelectromechanical system (MEMS) device at a predetermined range (i.e. the optimal working temperature of the device) with neither a control circuit nor electrical power consumption. When the temperature is below this range, the TSA stays off and works as a thermal insulator. Therefore, the MEMS device can quickly heat itself up to its optimal working temperature during startup. Once this temperature is reached, TSA is automatically turned on to increase the thermal conductance, working as an effective thermal spreader. As a result, the MEMS device tends to stay at its optimal working temperature without complex thermal management components and the associated parasitic power loss. A prototype TSA was fabricated and characterized to prove the concept. The stabilization temperatures under various power inputs have been studied both experimentally and theoretically. Under the increment of power input from 3.8 to 5.8 W, the temperature of the device increased only by 2.5 °C due to the stabilization effect of TSA.

  3. Adaptive constructive processes and the future of memory

    PubMed Central

    Schacter, Daniel L.

    2013-01-01

    Memory serves critical functions in everyday life, but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, or illusions. The article describes several types of memory errors that are produced by adaptive constructive processes, and focuses in particular on the process of imagining or simulating events that might occur in one’s personal future. Simulating future events relies on many of the same cognitive and neural processes as remembering past events, which may help to explain why imagination and memory can be easily confused. The article considers both pitfalls and adaptive aspects of future event simulation in the context of research on planning, prediction, problem solving, mind-wandering, prospective and retrospective memory, coping and positivity bias, and the interconnected set of brain regions known as the default network. PMID:23163437

  4. Proceedings of the array signal processing symposium: Treaty Verification Program

    SciTech Connect

    Harris, D.B.

    1988-02-01

A common theme underlying the research these groups conduct is the use of propagating waves to detect, locate, image or otherwise identify features of the environment significant to their applications. The applications considered in this symposium are verification of nuclear test ban treaties, non-destructive evaluation (NDE) of manufactured components, and sonar and electromagnetic target acquisition and tracking. These proceedings cover just the first two topics. In these applications, arrays of sensors are used to detect propagating waves and to measure the characteristics that permit interpretation. The reason for using sensor arrays, which are inherently more expensive than single-sensor systems, is twofold. By combining the signals from multiple sensors, it is usually possible to suppress unwanted noise, which permits detection and analysis of weaker signals. Secondly, in complicated situations in which many waves are present, arrays make it possible to separate the waves and to measure their individual characteristics (direction, velocity, etc.). Other systems (such as three-component sensors in the seismic application) can perform these functions to some extent, but none are so effective and versatile as arrays. The objectives of test ban treaty verification are to detect, locate and identify underground nuclear explosions, and to discriminate them from earthquakes and conventional chemical explosions. Two physical modes of treaty verification are considered: monitoring with arrays of seismic stations (solid earth propagation), and monitoring with arrays of acoustic (infrasound) stations (atmospheric propagation). The majority of the presentations represented in these proceedings address various aspects of the seismic verification problem.

  5. A High-Speed Adaptively-Biased Current-to-Current Front-End for SSPM Arrays

    NASA Astrophysics Data System (ADS)

    Zheng, Bob; Walder, Jean-Pierre; Lippe, Henrik vonder; Moses, William; Janecek, Martin

Solid-state photomultiplier (SSPM) arrays are an interesting technology for use in PET detector modules due to their low cost, high compactness, insensitivity to magnetic fields, and sub-nanosecond timing resolution. However, the large intrinsic capacitance of SSPM arrays results in RC time constants that can severely degrade the response time, which leads to a trade-off between array size and speed. Instead, we propose a front-end that utilizes an adaptively biased current-to-current converter that minimizes the resistance seen by the SSPM array, thus preserving the timing resolution for both large and small arrays. This enables the use of large SSPM arrays with resistive networks, which creates position information and minimizes the number of outputs for compatibility with general PET multiplexing schemes. By tuning the bias of the feedback amplifier, the chip allows for precise control of the closed-loop gain, ensuring stability and fast operation from loads as small as 50 pF to loads as large as 1 nF. The chip has 16 input channels, and 4 outputs capable of driving 100 n loads. The power consumption is 12 mW per channel and 360 mW for the entire chip. The chip has been designed and fabricated in an AMS 0.35 µm high-voltage technology, and demonstrates a fast rise-time response and low-noise performance.

  6. Real-time processing for Fourier domain optical coherence tomography using a field programmable gate array

    PubMed Central

    Ustun, Teoman E.; Iftimia, Nicusor V.; Ferguson, R. Daniel; Hammer, Daniel X.

    2008-01-01

Real-time display of processed Fourier domain optical coherence tomography (FDOCT) images is important for applications that require instant feedback of image information, for example, systems developed for rapid screening or image-guided surgery. However, the computational requirements for high-speed FDOCT image processing usually exceed the capabilities of most computers, and therefore display rates rarely match acquisition rates for most devices. We have designed and developed an image processing system, including hardware based upon a field programmable gate array, firmware, and software, that enables real-time display of processed images at rapid line rates. The system was designed to be extremely flexible and inserted in-line between any FDOCT detector and any Camera Link frame grabber. Two versions were developed for spectrometer-based and swept source-based FDOCT systems, the latter having an additional custom high-speed digitizer on the front end but using all the capabilities and features of the former. The system was tested in humans and monkeys using an adaptive optics retinal imager, in zebrafish using a dual-beam Doppler instrument, and in human tissue using a swept source microscope. A display frame rate of 27 fps for fully processed FDOCT images (1024 axial pixels×512 lateral A-scans) was achieved in the spectrometer-based systems. PMID:19045902

  7. Applying statistical process control to the adaptive rate control problem

    NASA Astrophysics Data System (ADS)

    Manohar, Nelson R.; Willebeek-LeMair, Marc H.; Prakash, Atul

    1997-12-01

Due to the heterogeneity and shared-resource nature of today's computer network environments, the end-to-end delivery of multimedia requires adaptive mechanisms to be effective. We present a framework for the adaptive streaming of heterogeneous media. We introduce the application of online statistical process control (SPC) to the problem of dynamic rate control. In SPC, the goal is to establish (and preserve) a state of statistical quality control (i.e., controlled variability around a target mean) over a process. We consider the end-to-end streaming of multimedia content over the Internet as the process to be controlled. First, at each client, we measure process performance and apply statistical quality control (SQC) with respect to application-level requirements. Then, we guide adaptive rate control (ARC) at the server based on the statistical significance of trends and departures in these measurements. We show that this scheme facilitates the handling of heterogeneous media. Finally, because SPC is designed to monitor long-term process performance, we show that our online SPC scheme could be used to adapt to various degrees of long-term (network) variability (i.e., statistically significant process shifts as opposed to short-term random fluctuations). We develop several examples and analyze the scheme's statistical behavior and guarantees.
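A toy sketch of the SPC idea above: adapt the rate only on statistically significant departures from the control limits, not on ordinary fluctuations. The thresholds, multipliers, and names below are illustrative assumptions, not taken from the paper:

```python
import random

# Sketch: throughput samples are checked against 3-sigma control limits
# around a target mean; the sending rate changes only when the sample
# mean leaves the in-control band.

def adapt_rate(rate, samples, target, sigma):
    ucl, lcl = target + 3 * sigma, target - 3 * sigma  # control limits
    mean = sum(samples) / len(samples)
    if mean < lcl:            # significant shift down: back off
        return rate * 0.75
    if mean > ucl:            # significant shift up: probe upward
        return rate * 1.1
    return rate               # in-control variation: leave the rate alone

random.seed(0)
rate, target, sigma = 1000.0, 500.0, 10.0
in_control = [random.gauss(target, sigma) for _ in range(20)]
congested = [random.gauss(target - 100, sigma) for _ in range(20)]
r1 = adapt_rate(rate, in_control, target, sigma)   # unchanged
r2 = adapt_rate(rate, congested, target, sigma)    # backs off
```

The point of the chart is exactly this asymmetry: random in-control variation produces no rate change, while a sustained shift triggers one.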

  8. Structure and Process of Infrared Hot Electron Transistor Arrays

    PubMed Central

    Fu, Richard

    2012-01-01

    An infrared hot-electron transistor (IHET) 5 × 8 array with a common base configuration that allows two-terminal readout integration was investigated and fabricated for the first time. The IHET structure provides a maximum factor of six in improvement in the photocurrent to dark current ratio compared to the basic quantum well infrared photodetector (QWIP), and hence it improved the array S/N ratio by the same factor. The study also showed for the first time that there is no electrical cross-talk among individual detectors, even though they share the same emitter and base contacts. Thus, the IHET structure is compatible with existing electronic readout circuits for photoconductors in producing sensitive focal plane arrays. PMID:22778655

  9. Adaption of the Magnetometer Towed Array geophysical system to meet Department of Energy needs for hazardous waste site characterization

    SciTech Connect

    Cochran, J.R.; McDonald, J.R.; Russell, R.J.; Robertson, R.; Hensel, E.

    1995-10-01

This report documents US Department of Energy (DOE)-funded activities that have adapted the US Navy's Surface Towed Ordnance Locator System (STOLS) to meet DOE needs for a '... better, faster, safer and cheaper ...' system for characterizing inactive hazardous waste sites. These activities were undertaken by Sandia National Laboratories (Sandia), the Naval Research Laboratory, Geo-Centers Inc., New Mexico State University and others under the title of the Magnetometer Towed Array (MTA).

  10. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays

    PubMed Central

    Lutton, Rebecca E.M.; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A.David; Donnelly, Ryan F.

    2015-01-01

A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14 × 14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted into a skin simulant. In both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was approximately 3%. PMID:26302858

  11. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    PubMed

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted into a skin simulant. In both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was approximately 3%. PMID:26302858

  12. Micromachined Thermoelectric Sensors and Arrays and Process for Producing

    NASA Technical Reports Server (NTRS)

    Foote, Marc C. (Inventor); Jones, Eric W. (Inventor); Caillat, Thierry (Inventor)

    2000-01-01

Linear arrays with up to 63 micromachined thermopile infrared detectors on silicon substrates have been constructed and tested. Each detector consists of a suspended silicon nitride membrane with 11 thermocouples of sputtered Bi-Te and Bi-Sb-Te thermoelectric element films. At room temperature and under vacuum these detectors exhibit response times of 99 ms, zero-frequency D* values of 1.4 × 10^9 cm·Hz^(1/2)/W, and responsivity values of 1100 V/W when viewing a 1000 K blackbody source. The only measured source of noise above 20 mHz is Johnson noise from the detector resistance. These results represent the best performance reported to date for an array of thermopile detectors. The arrays are well suited for uncooled dispersive point spectrometers. In another embodiment, also with Bi-Te and Bi-Sb-Te thermoelectric materials on micromachined silicon nitride membranes, detector arrays have been produced with D* values as high as 2.2 × 10^9 cm·Hz^(1/2)/W for 83 ms response times.

  13. On the design of systolic-array architectures with applications to signal processing

    SciTech Connect

    Niamat, M.Y.

    1989-01-01

Systolic arrays are networks of processors that rhythmically compute and pass data through systems. These arrays feature the important properties of modularity, regularity, local interconnections, and a high degree of pipelining and multiprocessing. In this dissertation, several systolic arrays are proposed with applications to real-time signal processing. Specifically, these arrays are designed for the rapid computation of positions, velocities, accelerations, and jerks associated with motion. Real-time computations of these parameters arise in many applications, notably in the areas of robotics, image processing, remote signal processing, and computer-controlled machines. The systolic arrays proposed in this dissertation can be classified into the linear, the triangular, and the mesh-connected types. In the linear category, six different systolic designs are presented. The relative merits of these designs are discussed in detail. It is found from the analysis of these designs that each of these arrays achieves a proportional increase in throughput. Also, by interleaving the input data items in some of these designs, the throughput rate is further doubled. This also increases the processor utilization rate to 100%. The triangular-type systolic array is found to be useful when all three parameters are to be computed simultaneously, and the mesh type when the number of signals to be processed is extremely large. The effect of direct broadcasting of data to the processing cells is also investigated. Finally, the utility of the proposed systolic arrays is illustrated by a practical design example.
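A software analogue of the motion-parameter pipelines described above: three finite-difference stages chained like systolic cells, each passing its output stream to the next. This is illustrative only; the dissertation's hardware designs are not reproduced:

```python
# Sketch: each "cell" computes a first difference of its incoming stream,
# so chaining three cells turns position samples into velocity,
# acceleration, and jerk estimates, mimicking a linear systolic pipeline.

def diff_cell(samples, dt):
    """One pipeline stage: first difference of the incoming stream."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

dt = 0.1
position = [0.5 * 9.8 * (k * dt) ** 2 for k in range(10)]  # x = g t^2 / 2
velocity = diff_cell(position, dt)        # stage 1
acceleration = diff_cell(velocity, dt)    # stage 2
jerk = diff_cell(acceleration, dt)        # stage 3
```

For this quadratic trajectory the second difference recovers the constant acceleration and the third stage is essentially zero, as expected.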

  14. Adapting the Transtheoretical Model of Change to the Bereavement Process

    ERIC Educational Resources Information Center

    Calderwood, Kimberly A.

    2011-01-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals…

  15. Behavioral training promotes multiple adaptive processes following acute hearing loss

    PubMed Central

    Keating, Peter; Rosenior-Patten, Onayomi; Dahmen, Johannes C; Bell, Olivia; King, Andrew J

    2016-01-01

    The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders. DOI: http://dx.doi.org/10.7554/eLife.12264.001 PMID:27008181

  16. Adapting physically complete models to vehicle-based EMI array sensor data: data inversion and discrimination studies

    NASA Astrophysics Data System (ADS)

    Shubitidze, Fridon; Miller, Jonathan S.; Schultz, Gregory M.; Marble, Jay A.

    2010-04-01

    This paper reports vehicle-based electromagnetic induction (EMI) array sensor data inversion and discrimination results. Recent field studies show that EMI arrays, such as the Minelab Single Transmitter Multiple Receiver (STMR) and the Geophex GEM-5, provide a fast and safe way to detect subsurface metallic targets such as landmines, unexploded ordnance (UXO) and buried explosives. The array sensors are flexible and easily adaptable to a variety of ground vehicles and mobile platforms, which makes them very attractive for safe and cost-effective detection operations in many applications, including but not limited to explosive ordnance disposal and humanitarian UXO and demining missions. Most state-of-the-art EMI arrays measure the vertical or full vector field, or gradient tensor fields, and utilize them for real-time threat detection based on threshold analysis. Field practice shows that threshold-level detection produces high false-alarm rates. One way to reduce these false alarms is to use EMI numerical techniques that are capable of inverting EMI array data in real time. In this work a physically complete model, known as the normalized volume/surface magnetic sources (NV/SMS) model, is adapted to vehicle-based EMI array data such as that from the STMR and GEM-5. The NV/SMS model can be considered a generalized volume or surface dipole model, which in a special limiting case coincides with the infinitesimal dipole model approach. According to the NV/SMS model, an object's response to a sensor's primary field is modeled mathematically by a set of equivalent magnetic dipoles distributed inside the object (i.e. NVMS) or over a surface surrounding the object (i.e. NSMS). The scattered magnetic field of the NSMS is identical to that produced by a set of interacting magnetic dipoles. The amplitudes of the magnetic dipoles are normalized to the primary magnetic field, relating the induced magnetic dipole polarizability and the primary magnetic field. 
The magnitudes of

  17. Application of Seismic Array Processing to Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas, since the tsunami waves arrive before sufficient data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provide faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the EarthScope USArray Transportable Array. The results yield reasonable estimates of the rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment, and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes of the start of rupture and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. The predicted arrival times and wave amplitudes reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. 
The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800

  18. Quantum state and process tomography via adaptive measurements

    NASA Astrophysics Data System (ADS)

    Wang, HengYan; Zheng, WenQiang; Yu, NengKun; Li, KeRen; Lu, DaWei; Xin, Tao; Li, Carson; Ji, ZhengFeng; Kribs, David; Zeng, Bei; Peng, XinHua; Du, JiangFeng

    2016-10-01

    We investigate quantum state tomography (QST) for pure states and quantum process tomography (QPT) for unitary channels via adaptive measurements. For a quantum system with a d-dimensional Hilbert space, we first propose an adaptive protocol in which only 2d - 1 measurement outcomes are used to accomplish QST for all pure states. This idea is then extended to study QPT for unitary channels, where an adaptive unitary process tomography (AUPT) protocol of d^2 + d - 1 measurement outcomes is constructed for any unitary channel. We experimentally implement the AUPT protocol in a 2-qubit nuclear magnetic resonance system. We examine the performance of the AUPT protocol when applied to the Hadamard gate, the T gate (π/8 phase gate), and the controlled-NOT gate, respectively, as these gates form a universal gate set for quantum information processing. As a comparison, standard QPT is also implemented for each gate. Our experimental results show that the AUPT protocol, which reconstructs unitary channels via adaptive measurements, significantly reduces the number of experiments required by standard QPT without considerable loss of fidelity.

  19. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
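    The GA-plus-FLC coupling can be sketched in miniature. The two-rule fuzzy controller, first-order plant, and bare-bones elitist GA below are all invented for illustration and do not reproduce the Bureau of Mines system; the GA simply evolves the rule outputs that the FLC defuzzifies:

```python
import random

random.seed(1)

def fuzzy_controller(error, gains):
    # Two clipped-triangle rules: "error is negative" / "error is positive".
    neg = max(0.0, min(1.0, -error))
    pos = max(0.0, min(1.0, error))
    if neg + pos == 0.0:
        return 0.0
    # Weighted-average defuzzification with GA-tuned rule outputs.
    return (neg * gains[0] + pos * gains[1]) / (neg + pos)

def fitness(gains, setpoint=1.0, steps=50):
    y, cost = 0.0, 0.0
    for _ in range(steps):
        u = fuzzy_controller(setpoint - y, gains)
        y += 0.2 * (u - y)               # toy first-order plant
        cost += abs(setpoint - y)
    return -cost                          # higher fitness = lower error

def evolve(pop_size=20, generations=30):
    pop = [[random.uniform(-2, 2), random.uniform(-2, 2)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]    # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.1)  # crossover + mutation
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    The learning element (the GA) only ever sees the scalar fitness, which is what lets the same loop re-tune the controller when the plant changes.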

  20. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  1. Epidemic processes over adaptive state-dependent networks

    NASA Astrophysics Data System (ADS)

    Ogura, Masaki; Preciado, Victor M.

    2016-06-01

    In this paper we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. In this paper we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures.
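    For the static-network SIS baseline to which the ASIS lower bound is proportional, the epidemic threshold is the reciprocal of the adjacency matrix's spectral radius. A minimal pure-Python sketch (the example graph and the shifted power iteration are choices made here, not taken from the paper):

```python
def spectral_radius(A, iters=500):
    """Largest eigenvalue of a symmetric 0/1 adjacency matrix via power
    iteration. The +1 diagonal shift keeps the iteration from oscillating
    on bipartite graphs, and is subtracted back out at the end."""
    n = len(A)
    B = [[A[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)      # max-norm normalization
        v = [x / lam for x in w]
    return lam - 1.0

# 4-node star graph: a hub connected to three leaves.
A = [[0, 1, 1, 1],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0]]

lam = spectral_radius(A)        # sqrt(3) for a star with three leaves
threshold = 1.0 / lam           # SIS epidemic threshold
```

    In the homogeneous case the paper's ASIS bound rescales this threshold by a constant that depends on the edge-cutting and rewiring rates.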

  2. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, at western U.S., Australian, and European longitudes, each with 400 12-m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitor and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.

  3. A solar array module fabrication process for HALE solar electric UAVs

    SciTech Connect

    Carey, P.G.; Aceves, R.C.; Colella, N.J.; Thompson, J.B.; Williams, K.A.

    1993-12-01

    We describe a fabrication process to manufacture flexible solar array modules with a high power-to-weight ratio for use on high altitude long endurance (HALE) solar electric unmanned air vehicles (UAVs). A span-loaded flying-wing vehicle, known as the RAPTOR Pathfinder, is being employed as a flying test bed to expand the envelope of solar powered flight to high altitudes. It requires multiple lightweight flexible solar array modules able to endure adverse environmental conditions. At high altitudes the solar UV flux is significantly enhanced relative to sea level, and extreme thermal variations occur. Our process involves first electrically interconnecting solar cells into an array and then laminating them between top and bottom laminate layers to form a solar array module. After careful evaluation of candidate polymers, fluoropolymer materials were selected as the array laminate layers because of their inherent ability to withstand the hostile conditions imposed by the environment.

  4. Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine Nagib

    1994-01-01

    We address various issues dealing with the use of disk arrays in transaction processing environments. We look at the problem of transaction undo recovery and propose a scheme for using the redundancy in disk arrays to support undo recovery. The scheme uses twin-page storage for the parity information in the array. It speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters as well as temporary site failures and disk crashes is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.
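    The redundancy being exploited is ordinary parity: a small write updates parity as P_new = P_old XOR D_old XOR D_new, so keeping the previous parity page (the "twin page") preserves enough information to reconstruct the overwritten data. The sketch below illustrates only this XOR identity, not the thesis's actual storage layout:

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# A stripe of two data pages plus their parity page.
d0, d1 = b"\x12\x34", b"\xab\xcd"
parity = xor_bytes(d0, d1)

# Small write to d0: compute new parity but retain the old one
# (the twin page) instead of overwriting it in place.
d0_new = b"\x55\xaa"
old_parity = parity                                  # twin page
parity = xor_bytes(xor_bytes(parity, d0), d0_new)    # P ^ D_old ^ D_new

# Undo: the pre-write contents of d0 are recoverable from the twin
# parity page and the surviving data page, without an undo log record.
d0_recovered = xor_bytes(old_parity, d1)
```

    This is why the scheme can skip undo logging for most transactions: the old parity copy already encodes the before-image.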

  5. Thermodynamic Costs of Information Processing in Sensory Adaptation

    PubMed Central

    Sartori, Pablo; Granger, Léo; Lee, Chiu Fan; Horowitz, Jordan M.

    2014-01-01

    Biological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for small organisms, which, unlike everyday computers, operate at very low energies. In this paper, we establish a general framework for the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing and frequency response. PMID:25503948

  6. Design, processing and testing of LSI arrays, hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.; Rothrock, C. W.

    1979-01-01

    Mathematical cost models previously developed for hybrid microelectronic subsystems were refined and expanded. Rework terms related to substrate fabrication, nonrecurring developmental and manufacturing operations, and prototype production are included. Sample computer programs were written to demonstrate hybrid microelectronic applications of these cost models. Computer programs were generated to calculate and analyze values for the total microelectronics costs. Large-scale integrated (LSI) chips utilizing tape chip carrier technology were studied. The feasibility of interconnecting arrays of LSI chips utilizing tape chip carrier and semiautomatic wire bonding technology was demonstrated.

  7. Adaptive smart simulator for characterization and MPPT construction of PV array

    NASA Astrophysics Data System (ADS)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-07-01

    Partial shading conditions are among the most important problems in large photovoltaic arrays. Much of the literature addresses the modeling, control, and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; the tool is simple, easy to use, and responsive, and the simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.

  8. Steerable Space Fed Lens Array for Low-Cost Adaptive Ground Station Applications

    NASA Technical Reports Server (NTRS)

    Lee, Richard Q.; Popovic, Zoya; Rondineau, Sebastien; Miranda, Felix A.

    2007-01-01

    The Space Fed Lens Array (SFLA) is an alternative to a phased array antenna that replaces large numbers of expensive solid-state phase shifters with a single spatial feed network. An SFLA can be used for multi-beam applications where multiple independent beams are generated simultaneously with a single antenna aperture. Unlike phased array antennas, where feed loss increases with array size, feed loss in a lens array with more than 50 elements is nearly independent of the number of elements, a desirable feature for large apertures. In addition, an SFLA has lower cost than a phased array at the expense of total volume and complete beam continuity. For ground station applications, neither of these tradeoff parameters is important, and both can thus be exploited to lower the cost of the ground station. In this paper, we report the development and demonstration of a 952-element beam-steerable SFLA intended for use as a low-cost ground station for communicating with and tracking a low-Earth-orbit satellite. The dynamic beam steering is achieved by switching to different feed positions of the SFLA via a beam controller.

  9. Adoption: biological and social processes linked to adaptation.

    PubMed

    Grotevant, Harold D; McDermott, Jennifer M

    2014-01-01

    Children join adoptive families through domestic adoption from the public child welfare system, infant adoption through private agencies, and international adoption. Each pathway presents distinctive developmental opportunities and challenges. Adopted children are at higher risk than the general population for problems with adaptation, especially externalizing, internalizing, and attention problems. This review moves beyond the field's emphasis on adoptee-nonadoptee differences to highlight biological and social processes that affect adaptation of adoptees across time. The experience of stress, whether prenatal, postnatal/preadoption, or during the adoption transition, can have significant impacts on the developing neuroendocrine system. These effects can contribute to problems with physical growth, brain development, and sleep, activating cascading effects on social, emotional, and cognitive development. Family processes involving contact between adoptive and birth family members, co-parenting in gay and lesbian adoptive families, and racial socialization in transracially adoptive families affect social development of adopted children into adulthood.

  10. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
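    The prediction step can be sketched as ordinary Gaussian-process regression; the paper's anisotropic spatio-temporal covariance, MAP hyperparameter estimation, and mobility control are omitted, and the fixed 1-D squared-exponential kernel and data below are invented purely for illustration:

```python
import math

def kernel(x, y, ell=1.0, sigma=1.0):
    """Squared-exponential covariance (hyperparameters fixed here;
    the paper estimates them by MAP instead)."""
    return sigma ** 2 * math.exp(-0.5 * ((x - y) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xq, noise=1e-6):
    """Posterior mean k_q^T K^{-1} y at query point xq."""
    n = len(xs)
    K = [[kernel(xs[i], xs[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return sum(kernel(xq, xs[i]) * alpha[i] for i in range(n))

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 0.0]
mean = gp_predict(xs, ys, 1.0)   # prediction at a training input
```

    With near-zero observation noise the posterior mean interpolates the training data, which the test below checks at one training input.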

  11. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  12. Multi-element array signal reconstruction with adaptive least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1992-01-01

    Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.
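    The combining idea can be illustrated with a standard complex LMS update (a stochastic-gradient relative of the least-squares algorithms in the report; the signal model, phase offsets, and step size below are invented for the sketch):

```python
import cmath
import math

def lms_combine(feeds, reference, mu=0.05):
    """Adapt complex combining weights so the weighted sum of the feed
    outputs tracks the reference signal. feeds: list of per-feed sample
    lists; returns the final weight vector."""
    w = [0j] * len(feeds)
    for n in range(len(reference)):
        x = [f[n] for f in feeds]
        y = sum(wi * xi for wi, xi in zip(w, x))   # combined output
        e = reference[n] - y                       # error signal
        w = [wi + mu * e * xi.conjugate() for wi, xi in zip(w, x)]
    return w

# Three feeds see the same unit-amplitude tone with different phase
# offsets (a crude stand-in for reflector deformation); the reference
# is the clean tone.
N = 400
phases = [0.3, -1.1, 2.0]
ref = [cmath.exp(1j * 2 * math.pi * 0.05 * n) for n in range(N)]
feeds = [[cmath.exp(1j * p) * r for r in ref] for p in phases]

w = lms_combine(feeds, ref)
# Net complex gain of the adapted combiner on the common signal:
out = sum(wi * cmath.exp(1j * p) for wi, p in zip(w, phases))
```

    After adaptation the net gain approaches 1, i.e. the weights have co-phased the feeds and recovered the gain lost to the phase errors.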

  13. Monolithic optical phased-array transceiver in a standard SOI CMOS process.

    PubMed

    Abediasl, Hooman; Hashemi, Hossein

    2015-03-01

    Monolithic microwave phased arrays are turning mainstream in automotive radars and high-speed wireless communications, fulfilling Gordon Moore's 1965 prophecy to this effect. Optical phased arrays enable imaging, lidar, display, sensing, and holography. Advancements in fabrication technology have led to monolithic nanophotonic phased arrays, albeit without independent phase and amplitude control, integration with electronic circuitry, or combined receive and transmit functions. We report the first monolithic optical phased-array transceiver with independent control of amplitude and phase for each element, using electronic circuitry that is tightly integrated with the nanophotonic components on one substrate in a commercial foundry SOI CMOS process. The 8 × 8 phased-array chip includes thermo-optical tunable phase shifters and attenuators, nanophotonic antennas, and dedicated control electronics realized using CMOS transistors. The complex chip includes over 300 distinct optical components and over 74,000 distinct electrical components, achieving the highest level of integration for any electronic-photonic system.

  14. Monolithic optical phased-array transceiver in a standard SOI CMOS process.

    PubMed

    Abediasl, Hooman; Hashemi, Hossein

    2015-03-01

    Monolithic microwave phased arrays are turning mainstream in automotive radars and high-speed wireless communications, fulfilling Gordon Moore's 1965 prophecy to this effect. Optical phased arrays enable imaging, lidar, display, sensing, and holography. Advancements in fabrication technology have led to monolithic nanophotonic phased arrays, albeit without independent phase and amplitude control, integration with electronic circuitry, or combined receive and transmit functions. We report the first monolithic optical phased-array transceiver with independent control of amplitude and phase for each element, using electronic circuitry that is tightly integrated with the nanophotonic components on one substrate in a commercial foundry SOI CMOS process. The 8 × 8 phased-array chip includes thermo-optical tunable phase shifters and attenuators, nanophotonic antennas, and dedicated control electronics realized using CMOS transistors. The complex chip includes over 300 distinct optical components and over 74,000 distinct electrical components, achieving the highest level of integration for any electronic-photonic system. PMID:25836869

  15. Post-processing of guided wave array data for high resolution pipe inspection.

    PubMed

    Velichko, Alexander; Wilcox, Paul D

    2009-12-01

    This paper describes a method for processing data from a guided wave transducer array on a pipe. The raw data set from such an array contains the full matrix of time-domain signals from each transmitter-receiver combination. It is shown that for certain configurations of an array, the total focusing method can be applied, which allows the array to be focused at every point on a pipe in both transmission and reception. The effect of array configuration parameters on the sensitivity of the proposed method to random and coherent noise is discussed. Experimental results are presented using electromagnetic acoustic transducers for exciting and detecting the S(0) Lamb wave mode in a 12-in. diameter steel pipe at 200 kHz excitation frequency. The results show that using the imaging algorithm, a 2-mm (0.08 wavelength) diameter half-thickness hole can be detected.
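    Total focusing on full-matrix-capture (FMC) data is a delay-and-sum over every transmitter-receiver pair for every image point. The 2-D geometry, sampling rate, and synthetic point scatterer below are invented for a toy example and do not reproduce the paper's guided-wave pipe geometry:

```python
import math

def tfm_image(fmc, elements, grid, c, fs):
    """fmc[t][r][n]: sampled signal for transmitter t, receiver r.
    For each grid point, sum every pair's sample at the round-trip delay."""
    image = []
    for (gx, gy) in grid:
        total = 0.0
        for t, (tx, ty) in enumerate(elements):
            for r, (rx, ry) in enumerate(elements):
                # Path: transmitter -> image point -> receiver.
                d = math.hypot(gx - tx, gy - ty) + math.hypot(gx - rx, gy - ry)
                n = int(round(d / c * fs))
                trace = fmc[t][r]
                if n < len(trace):
                    total += trace[n]
        image.append(total)
    return image

# Synthetic FMC data: one point scatterer at (1, 1), a unit impulse at
# the correct round-trip sample for every transmit-receive pair.
c, fs = 1.0, 10.0
elements = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
scat = (1.0, 1.0)
fmc = []
for (tx, ty) in elements:
    row = []
    for (rx, ry) in elements:
        d = (math.hypot(scat[0] - tx, scat[1] - ty)
             + math.hypot(scat[0] - rx, scat[1] - ry))
        trace = [0.0] * 60
        trace[int(round(d / c * fs))] = 1.0
        row.append(trace)
    fmc.append(row)

img = tfm_image(fmc, elements, [(1.0, 1.0), (0.5, 1.5)], c, fs)
```

    At the scatterer all nine transmit-receive pairs add coherently; elsewhere the delays do not line up, which is the source of the method's resolution.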

  16. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array comprises two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated in a 0.18 µm CMOS technology. The circuit area of the distributed memory is markedly reduced, to 1/3 of that of the conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  17. Automatic ultrasonic imaging system with adaptive-learning-network signal-processing techniques

    SciTech Connect

    O'Brien, L.J.; Aravanis, N.A.; Gouge, J.R. Jr.; Mucciardi, A.N.; Lemon, D.K.; Skorpik, J.R.

    1982-04-01

    A conventional pulse-echo imaging system has been modified to operate with a linear ultrasonic array and associated digital electronics to collect data from a series of defects fabricated in aircraft quality steel blocks. A thorough analysis of the defect responses recorded with this modified system has shown that considerable improvements over conventional imaging approaches can be obtained in the crucial areas of defect detection and characterization. A combination of advanced signal processing concepts with the Adaptive Learning Network (ALN) methodology forms the basis for these improvements. Use of established signal processing algorithms such as temporal and spatial beam-forming in concert with a sophisticated detector has provided a reliable defect detection scheme which can be implemented in a microprocessor-based system to operate in an automatic mode.

  18. Adaptive processes drive ecomorphological convergent evolution in antwrens (Thamnophilidae).

    PubMed

    Bravo, Gustavo A; Remsen, J V; Brumfield, Robb T

    2014-10-01

    Phylogenetic niche conservatism (PNC) and convergence are contrasting evolutionary patterns that describe phenotypic similarity across independent lineages. Assessing whether and how adaptive processes give rise to these patterns represents a fundamental step toward understanding phenotypic evolution. Phylogenetic model-based approaches offer the opportunity not only to distinguish between PNC and convergence, but also to determine the extent to which adaptive processes explain phenotypic similarity. The Myrmotherula complex in the Neotropical family Thamnophilidae is a polyphyletic group of sexually dimorphic small insectivorous forest birds that are relatively homogeneous in size and shape. Here, we integrate a comprehensive species-level molecular phylogeny of the Myrmotherula complex with morphometric and ecological data within a comparative framework to test whether phenotypic similarity is described by a pattern of PNC or convergence, and to identify evolutionary mechanisms underlying body size and shape evolution. We show that antwrens in the Myrmotherula complex represent distantly related clades that exhibit adaptive convergent evolution in body size and divergent evolution in body shape. Phenotypic similarity in the group is primarily driven by their tendency to converge toward smaller body sizes. Differences in body size and shape across lineages are associated with ecological and behavioral factors.

  19. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than that under PLUM by overlapping processing and data migration.

  20. Faraday-effect light-valve arrays for adaptive optical instruments

    SciTech Connect

    Hirleman, E.D.; Dellenback, P.A.

    1987-01-01

The ability to adapt to a range of measurement conditions by autonomously configuring software or hardware on-line will be an important attribute of next-generation intelligent sensors. This paper reviews the characteristics of spatial light modulators (SLM) with an emphasis on potential integration into adaptive optical instruments. The paper focuses on one type of SLM, a magneto-optic device based on the Faraday effect. Finally, the integration of the Faraday-effect SLM into a laser-diffraction particle-sizing instrument, giving it some ability to adapt to the measurement context, is discussed.

  1. Programmable hyperspectral image mapper with on-array processing

    NASA Technical Reports Server (NTRS)

    Cutts, James A. (Inventor)

    1995-01-01

A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction, an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time, an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time and an electronic controller connected to the focal plane, to the electronic shutter and to the electronic spectral filter for controlling (1) the controllable rate at which the recording pixels are transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band so as to record a selected portion of the scene through M spectral bands with a respective exposure time t(sub q) for each respective spectral band q.

  2. Signal and array processing techniques for RFID readers

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement directions of the tagged RFID items through a portal are of interest and must be determined. Doppler estimation may prove complicated or impractical to perform by RFID readers. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. Particularly, we examine the use of a pair of RFID antennas to track moving RFID tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way for identifying the direction of the tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking of the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments are conducted in the Radar Imaging and RFID Labs at Villanova University for validating the proposed technique for target movement direction estimations.
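The two-antenna idea reduces tracking to a single phase measurement on the spatial cross-correlation. A minimal synthetic sketch (not the authors' implementation; half-wavelength spacing and noise-free unit-modulus signals are assumed for illustration):

```python
import numpy as np

def doa_from_xcorr(x1, x2, d, wavelength):
    """Estimate the arrival angle (radians) of a narrowband far-field
    source from the spatial cross-correlation of two antenna outputs.

    x1, x2     : complex baseband samples from the two antennas
    d          : antenna spacing in metres (<= wavelength/2 avoids ambiguity)
    wavelength : carrier wavelength in metres
    """
    r12 = np.vdot(x2, x1) / len(x1)   # sample estimate of E[x1 * conj(x2)]
    phase = np.angle(r12)             # equals 2*pi*d*sin(theta)/wavelength
    return np.arcsin(phase * wavelength / (2 * np.pi * d))

# Synthetic check: a source at 20 degrees observed on two antennas.
rng = np.random.default_rng(0)
wavelength, d, theta = 0.33, 0.33 / 2, np.deg2rad(20.0)
s = np.exp(1j * 2 * np.pi * rng.random(4096))          # random-phase signal
x1 = s
x2 = s * np.exp(-1j * 2 * np.pi * d * np.sin(theta) / wavelength)
est = doa_from_xcorr(x1, x2, d, wavelength)
print(np.rad2deg(est))  # close to 20
```

In practice the cross-correlation phase is perturbed by noise and by the antenna phase patterns the abstract warns about, so the estimate would be averaged over many snapshots.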

  3. Model-based Processing of Microcantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Sinensky, A K; Lee, C L; Rudd, R E; Burnham, A K

    2005-04-27

We have developed a model-based processor (MBP) for a microcantilever-array sensor to detect target species in solution. We perform a proof-of-concept experiment, fit model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest, averaged deflection data and multi-channel data. For this evaluation we extract model parameters via a model-based estimation, perform a Gauss-Markov simulation, design the optimal MBP and apply it to measured experimental data. The performance of the MBP in the multi-channel case is evaluated by comparison to a "smoother" (averager) typically used for microcantilever signal analysis. It is shown that the MBP not only provides a significant gain (approximately 80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, apart from a correctable systematic bias error.
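The Gauss-Markov simulation and MBP-versus-smoother comparison can be illustrated in miniature with a scalar state-space model, a Kalman filter standing in for the optimal MBP. All parameters are invented for the sketch; it shows the methodology, not the reported 40-60 dB gains:

```python
import numpy as np

# First-order Gauss-Markov model of a slowly varying deflection signal:
#   x[k+1] = a*x[k] + w[k],   y[k] = x[k] + v[k]
rng = np.random.default_rng(1)
a, q, r, n = 0.99, 0.01, 1.0, 5000
w = rng.normal(0, np.sqrt(q), n)
v = rng.normal(0, np.sqrt(r), n)
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = a * x[k] + w[k]
y = x + v

# Scalar Kalman filter: the "model-based processor" for this toy model.
xhat, p = 0.0, 1.0
est = np.zeros(n)
for k in range(n):
    xhat, p = a * xhat, a * a * p + q      # predict
    g = p / (p + r)                        # Kalman gain
    xhat = xhat + g * (y[k] - xhat)        # update with the measurement
    p = (1.0 - g) * p
    est[k] = xhat

# Moving-average "smoother" baseline, as in the paper's comparison.
win = 9
sm = np.convolve(y, np.ones(win) / win, mode="same")

mse_kf = float(np.mean((est - x) ** 2))
mse_sm = float(np.mean((sm - x) ** 2))
print(mse_kf, mse_sm)   # the model-based processor should have lower error
```

The smoother trades noise reduction against signal distortion with a fixed window; the model-based processor exploits the known dynamics and so wins on both counts.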

  4. Advanced techniques for array processing. Final report, 1 Mar 89-30 Apr 91

    SciTech Connect

    Friedlander, B.

    1991-05-30

    Array processing technology is expected to be a key element in communication systems designed for the crowded and hostile environment of the future battlefield. While advanced array processing techniques have been under development for some time, their practical use has been very limited. This project addressed some of the issues which need to be resolved for a successful transition of these promising techniques from theory into practice. The main problem which was studied was that of finding the directions of multiple co-channel transmitters from measurements collected by an antenna array. Two key issues related to high-resolution direction finding were addressed: effects of system calibration errors, and effects of correlation between the received signals due to multipath propagation. A number of useful theoretical performance analysis results were derived, and computationally efficient direction estimation algorithms were developed. These results include: self-calibration techniques for antenna arrays, sensitivity analysis for high-resolution direction finding, extensions of the root-MUSIC algorithm to arbitrary arrays and to arrays with polarization diversity, and new techniques for direction finding in the presence of multipath based on array interpolation. (Author)
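The high-resolution direction-finding problem studied in the report is typified by subspace methods; the report extends root-MUSIC, but the closely related spectral MUSIC is easier to sketch for a uniform linear array (synthetic data, illustrative parameters):

```python
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda, angles_deg):
    """Classic MUSIC pseudo-spectrum for a uniform linear array.

    X             : (n_elements, n_snapshots) complex data matrix
    n_sources     : assumed number of co-channel emitters
    d_over_lambda : element spacing in wavelengths
    angles_deg    : candidate arrival angles to scan
    """
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    _, vecs = np.linalg.eigh(R)               # eigenvalues ascending
    En = vecs[:, : n - n_sources]             # noise-subspace eigenvectors
    p = np.empty(len(angles_deg))
    for i, ang in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n) * np.sin(ang))
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return p

# Two uncorrelated emitters at -10 and 25 degrees, 8-element half-wave ULA.
rng = np.random.default_rng(2)
n, snaps = 8, 2000
true_angles = np.array([-10.0, 25.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(n),
                                        np.sin(np.deg2rad(true_angles))))
S = (rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
X = A @ S + noise
grid = np.arange(-90.0, 90.0, 0.5)
spec = music_spectrum(X, 2, 0.5, grid)
neg = grid[grid < 0][np.argmax(spec[grid < 0])]    # strongest peak per half
pos = grid[grid >= 0][np.argmax(spec[grid >= 0])]
print(neg, pos)  # near -10 and 25
```

The report's concerns (calibration errors, correlated multipath signals) attack exactly the covariance and steering-vector assumptions this sketch takes for granted.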

  5. Precise calibration of a GNSS antenna array for adaptive beamforming applications.

    PubMed

    Daneshmand, Saeed; Sokhandan, Negin; Zaeri-Amirani, Mohammad; Lachapelle, Gérard

    2014-05-30

    The use of global navigation satellite system (GNSS) antenna arrays for applications such as interference counter-measure, attitude determination and signal-to-noise ratio (SNR) enhancement is attracting significant attention. However, precise antenna array calibration remains a major challenge. This paper proposes a new method for calibrating a GNSS antenna array using live signals and an inertial measurement unit (IMU). Moreover, a second method that employs the calibration results for the estimation of steering vectors is also proposed. These two methods are applied to the receiver in two modes, namely calibration and operation. In the calibration mode, a two-stage optimization for precise calibration is used; in the first stage, constant uncertainties are estimated while in the second stage, the dependency of each antenna element gain and phase patterns to the received signal direction of arrival (DOA) is considered for refined calibration. In the operation mode, a low-complexity iterative and fast-converging method is applied to estimate the satellite signal steering vectors using the calibration results. This makes the technique suitable for real-time applications employing a precisely calibrated antenna array. The proposed calibration method is applied to GPS signals to verify its applicability and assess its performance. Furthermore, the data set is used to evaluate the proposed iterative method in the receiver operation mode for two different applications, namely attitude determination and SNR enhancement.

  8. Hybridization process for back-illuminated silicon Geiger-mode avalanche photodiode arrays

    NASA Astrophysics Data System (ADS)

    Schuette, Daniel R.; Westhoff, Richard C.; Loomis, Andrew H.; Young, Douglas J.; Ciampi, Joseph S.; Aull, Brian F.; Reich, Robert K.

    2010-04-01

    We present a unique hybridization process that permits high-performance back-illuminated silicon Geiger-mode avalanche photodiodes (GM-APDs) to be bonded to custom CMOS readout integrated circuits (ROICs) - a hybridization approach that enables independent optimization of the GM-APD arrays and the ROICs. The process includes oxide bonding of silicon GM-APD arrays to a transparent support substrate followed by indium bump bonding of this layer to a signal-processing ROIC. This hybrid detector approach can be used to fabricate imagers with high-fill-factor pixels and enhanced quantum efficiency in the near infrared as well as large-pixel-count, small-pixel-pitch arrays with pixel-level signal processing. In addition, the oxide bonding is compatible with high-temperature processing steps that can be used to lower dark current and improve optical response in the ultraviolet.

  9. Adaptation of the Biolog Phenotype MicroArray™ Technology to Profile the Obligate Anaerobe Geobacter metallireducens

    SciTech Connect

    Joyner, Dominique; Fortney, Julian; Chakraborty, Romy; Hazen, Terry

    2010-05-17

The Biolog OmniLog® Phenotype MicroArray (PM) plate technology was successfully adapted to generate a select phenotypic profile of the strict anaerobe Geobacter metallireducens (G.m.). The profile generated for G.m. provides insight into the chemical sensitivity of the organism as well as some of its metabolic capabilities when grown with a basal medium containing acetate and Fe(III). The PM technology was developed for aerobic organisms. The reduction of a tetrazolium dye by the test organism represents metabolic activity on the array, which is detected and measured by the OmniLog® system. We have previously adapted the technology for the anaerobic sulfate-reducing bacterium Desulfovibrio vulgaris. In this work, we have taken the technology a step further by adapting it for the iron-reducing obligate anaerobe Geobacter metallireducens. In an osmotic stress microarray it was determined that the organism has higher sensitivity to impermeable solutes (3-6% KCl and 2-5% NaNO3), which cause osmotic stress to the cell, than to permeable non-ionic solutes represented by 5-20% ethylene glycol and 2-3% urea. The osmotic stress microarray also includes an array of osmoprotectants and precursor molecules that were screened to identify substrates that would provide osmotic protection to NaCl stress. None of the substrates tested conferred resistance to elevated concentrations of salt. Verification studies in which G.m. was grown in defined medium amended with 100 mM NaCl (MIC) and the common osmoprotectants betaine, glycine and proline supported the PM findings. Further verification was done by analysis of transcriptomic profiles of G.m. grown under 100 mM NaCl stress that revealed up-regulation of genes related to degradation rather than accumulation of the above-mentioned osmoprotectants. The phenotypic profile, supported by additional analysis indicates that the accumulation of these osmoprotectants as a response to salt stress does not

  10. On Cognition, Structured Sequence Processing, and Adaptive Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Petersson, Karl Magnus

    2008-11-01

Cognitive neuroscience approaches the brain as a cognitive system: a system that functionally is conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.

  11. Adaptive ocean acoustic processing for a shallow ocean experiment

    SciTech Connect

    Candy, J.V.; Sullivan, E.J.

    1995-07-19

A model-based approach is developed to solve an adaptive ocean acoustic signal processing problem. Here we investigate the design of a model-based identifier (MBID) for a normal-mode model developed from a shallow-water ocean experiment, and then apply it to a set of experimental data, demonstrating the feasibility of this approach. In this problem we show how the processor can be structured to estimate the horizontal wave numbers directly from measured pressure and sound-speed data, thereby eliminating the need for synthetic aperture processing or a propagation model solution. Ocean acoustic signal processing has made great strides over the past decade, driven by the development of quieter submarines and the recent proliferation of diesel-powered vessels.
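The idea of estimating horizontal wavenumbers directly from range-sampled pressure can be illustrated with a toy waveguide: the field along range is approximately a sum of modal terms A_m·exp(j·k_m·r)/sqrt(r), so the wavenumbers k_m appear as peaks in a spatial spectrum. A periodogram stands in here for the paper's model-based identifier, and the wavenumber values are invented:

```python
import numpy as np

k_true = np.array([0.205, 0.198])        # assumed modal wavenumbers, rad/m
r = np.arange(500.0, 2500.0, 2.0)        # range samples, 2 m spacing
p = sum(np.exp(1j * k * r) for k in k_true) / np.sqrt(r)

x = p * np.sqrt(r)                       # undo cylindrical spreading
nfft = 2 ** 16
spec = np.abs(np.fft.fft(x, nfft)) ** 2  # zero-padded spatial periodogram
k_axis = 2 * np.pi * np.fft.fftfreq(nfft, d=2.0)

# Pick the two largest peaks, blanking the first peak's main lobe so the
# second argmax cannot land on a neighbouring bin of the same lobe.
i1 = int(np.argmax(spec))
spec2 = spec.copy()
guard = 100
spec2[max(0, i1 - guard): i1 + guard] = 0.0
i2 = int(np.argmax(spec2))
k_est = sorted([k_axis[i1], k_axis[i2]])
print(k_est)  # close to [0.198, 0.205]
```

The 2000 m aperture gives a Rayleigh resolution of about 0.003 rad/m, comfortably below the 0.007 rad/m modal separation chosen for the sketch.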

  12. Adaptive PCA based fault diagnosis scheme in imperial smelting process.

    PubMed

    Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin

    2014-09-01

In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms due to normal process changes in real processes. We further develop a fault isolation approach based on the generalized likelihood ratio (GLR) test and singular value decomposition (SVD), one of the general techniques underlying PCA, with which offset and scaling faults can be easily isolated using explicit offset fault directions and scaling fault classification. The identification of offset and scaling faults is also addressed. The complete scheme of the PCA-based fault diagnosis procedure is proposed. The proposed scheme is first applied to the imperial smelting process, and the results show that the proposed strategies can mitigate false alarms and isolate faults efficiently.
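A minimal sketch of PCA-based fault detection in the spirit of the paper: static PCA with a squared-prediction-error (SPE) statistic; the recursive update and GLR isolation steps are omitted, and all data, dimensions and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Normal operating data: 3 latent factors seen through 6 correlated variables.
n_train = 2000
T = rng.normal(size=(n_train, 3))
P_true = rng.normal(size=(3, 6))
X = T @ P_true + 0.1 * rng.normal(size=(n_train, 6))

mu, sd = X.mean(0), X.std(0)

def scale(x):
    return (x - mu) / sd

_, _, Vt = np.linalg.svd(scale(X), full_matrices=False)
P = Vt[:3].T                              # retained principal directions

def spe(x):
    """Squared prediction error (residual statistic) of one raw sample."""
    xs = scale(x)
    resid = xs - xs @ P @ P.T             # part not explained by the model
    return float(resid @ resid)

# Detection limit: e.g. the 99th percentile of SPE on normal training data.
limit = np.percentile([spe(x) for x in X], 99)

normal_sample = T[0] @ P_true + 0.1 * rng.normal(size=6)
faulty_sample = normal_sample + 3.0       # additive (offset) sensor fault
print(spe(normal_sample), spe(faulty_sample), limit)
```

The recursive variant the paper proposes would additionally update mu, sd and P online so that slow, legitimate process drift does not trip the limit.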

  13. [Super sweet corn hybrids adaptability for industrial processing. I freezing].

    PubMed

    Alfonzo, Braunnier; Camacho, Candelario; Ortiz de Bertorelli, Ligia; De Venanzi, Frank

    2002-09-01

With the purpose of evaluating adaptability to the freezing process of super sweet corn sh2 hybrids Krispy King, Victor and 324, 100 cobs of each type were frozen at -18 degrees C. After 120 days of storage, their chemical, microbiological and sensorial characteristics were compared with those of a sweet corn su. The industrial quality of the freezing process, as well as cob length and number of kernel rows, was also determined. Results revealed yields above 60% in frozen corn. Cob length and number of rows were acceptable. Most of the chemical characteristics of the super sweet hybrids were not different from the sweet corn assayed at the 5% significance level. Moisture content and soluble solids of hybrid Victor, as well as total sugars of hybrid 324, were statistically different. All sh2 corns had higher pH values. During freezing, soluble solids concentration, sugars and acids decreased whereas pH increased. Frozen cobs exhibited acceptable microbiological quality, with low activities of mesophiles and total coliforms, absence of psychrophiles and fecal coliforms, and an appreciable amount of molds. In conclusion, the sh2 hybrids adapted with no problems to the freezing process; they had lower contents of soluble solids and higher contents of total sugars, almost double that of the su corn; flavor, texture, sweetness and appearance of kernels were also better. Hybrid Victor was preferred by the evaluating panel and had an outstanding performance due to its yield and sensorial characteristics. PMID:12448345

  14. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.
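The digital-number-to-reflectance step that LEDAPS automates follows the standard Landsat calibration equations: at-sensor radiance L = gain·DN + bias, then TOA reflectance rho = pi·L·d²/(ESUN·cos(theta)). A sketch with placeholder coefficients (not values from any actual metadata file):

```python
import numpy as np

def toa_reflectance(dn, gain, bias, esun, d_au, sun_elev_deg):
    """Convert Level 1 digital numbers to top-of-atmosphere reflectance.

    gain, bias : band calibration coefficients (from scene metadata)
    esun       : mean exoatmospheric solar irradiance for the band
    d_au       : Earth-Sun distance in astronomical units
    """
    radiance = gain * dn + bias                  # at-sensor radiance
    theta = np.deg2rad(90.0 - sun_elev_deg)      # solar zenith angle
    return np.pi * radiance * d_au ** 2 / (esun * np.cos(theta))

# Illustrative coefficients only; real values come from the scene metadata.
rho = toa_reflectance(dn=np.array([50.0, 120.0, 200.0]),
                      gain=0.7757, bias=-6.2, esun=1536.0,
                      d_au=1.0, sun_elev_deg=45.0)
print(rho)
```

LEDAPS then feeds this TOA product through an atmospheric correction (aerosol, water vapor, ozone) to reach surface reflectance, which is not sketched here.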

  15. Prediction and control of chaotic processes using nonlinear adaptive networks

    SciTech Connect

    Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  16. Outline of a multiple-access communication network based on adaptive arrays

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1982-01-01

    Attention is given to a narrow-band communication system consisting of a central station trying to receive signals simultaneously from K spatially distinct mobile users sharing the same frequencies. One example of such a system is a group of aircraft and ships transmitting messages to a communication satellite. A reasonable approach to such a multiple access system may be based on equipping the central station with an n-element antenna array where n is equal to or greater than K. The array employs K sets of n weights to segregate the signals received from the K users. The weights are determined by direct computation based on position information transmitted by the users. A description is presented of an improved technique which makes it possible to reduce significantly the number of required computer operations in comparison to currently known techniques.
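The abstract does not state the weight formula, but a standard direct computation consistent with its description is the pseudo-inverse (zero-forcing) solution, in which each of the K weight vectors passes one user and nulls the other K-1. A sketch assuming a half-wavelength uniform linear array (the actual satellite array geometry would differ):

```python
import numpy as np

def segregating_weights(n_elements, angles_rad):
    """Steering matrix A and K weight vectors (columns of W) satisfying
    W^H A = I, so weight set k passes user k and nulls the other K-1.
    A half-wavelength uniform linear array is assumed for illustration."""
    A = np.exp(-1j * np.pi * np.outer(np.arange(n_elements),
                                      np.sin(angles_rad)))
    W = np.linalg.pinv(A).conj().T      # n x K pseudo-inverse weights
    return A, W

angles = np.deg2rad([-30.0, 5.0, 40.0])   # K = 3 reported user directions
A, W = segregating_weights(8, angles)
print(np.round(np.abs(W.conj().T @ A), 6))  # ~ identity matrix
```

The computational saving described in the abstract concerns how such weights are updated as users move, rather than this one-shot solve.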

  17. Electro-optical processing of phased array data

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1973-01-01

An on-line spatial light modulator for application as the input transducer for a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with real radar data, with the input data returns recorded on the input crystal, processed by the optical system, and the output plane pattern digitized, thresholded, and output to a display and stored in computer memory. The correlation of theoretical and experimental results is discussed.

  18. Polymer Solidification and Stabilization: Adaptable Processes for Atypical Wastes

    SciTech Connect

    Jensen, C.

    2007-07-01

Vinyl Ester Styrene (VES) and Advanced Polymer Solidification (APS™) processes are used to solidify, stabilize, and immobilize radioactive, pyrophoric and hazardous wastes at US Department of Energy (DOE) and Department of Defense (DOD) sites, and commercial nuclear facilities. A wide range of projects have been accomplished, including in situ immobilization of ion exchange resin and carbon filter media in decommissioned submarines; underwater solidification of zirconium and hafnium machining swarf; solidification of uranium chips; impregnation of depth filters; immobilization of mercury, lead and other hazardous wastes (including paint chips and blasting media); and in situ solidification of submerged demineralizers. Discussion of the adaptability of the VES and APS™ processes is timely, given the decommissioning work at government sites, and efforts by commercial nuclear plants to reduce inventories of one-of-a-kind wastes. The VES and APS™ media and processes are highly adaptable to a wide range of waste forms, including liquids, slurries, bead and granular media, as well as metal fines, particles and larger pieces. With the ability to solidify/stabilize liquid wastes using high-speed mixing, wet sludges and solids by low-speed mixing, or bead and granular materials through in situ processing, these polymers will produce a stable, rock-hard product that has the ability to sequester many hazardous waste components and create Class B and C stabilized waste forms for disposal. Technical assessment and approval of these solidification processes and final waste forms have been greatly simplified by exhaustive waste form testing, as well as multiple NRC and CRCPD waste form approvals. (authors)

  19. CD uniformity improvement of dense contact array in negative tone development process

    NASA Astrophysics Data System (ADS)

    Tsai, Fengnien; Yeh, Teng-hao; Yang, C. C.; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2015-03-01

Layout pattern density impacts mask critical dimension uniformity (MCDU) as well as wafer critical dimension uniformity (WCDU) performance in several respects. In patterning a dense contact array with a negative tone development (NTD) process, the abrupt pattern density change around the array edge of an NTD clear tone reticle arises as a very challenging issue for achieving satisfactory WCDU. Around the array boundary, apart from the MCDU greatly impacted by the abrupt pattern density change, WCDU in the lithographic process is also significantly influenced by optical flare and chemical flare effects. This study investigates pattern-density-induced MCDU and WCDU variations. Various pattern densities are generated by the combination of a fixed array pattern and various sub-resolution assist feature (SRAF) extension regions for quantifying the separate WCD variation budgets contributed by MCD variation, the chemical flare effect and the optical flare effect. With proper pattern density modulation outside the array pattern on a clear tone reticle, MCD variation across the array can be eliminated, and WCD variation induced by optical flare and chemical flare effects is also greatly suppressed.

  20. Critical Dimension Control for 32 nm Node Random Contact Hole Array Using Resist Reflow Process

    NASA Astrophysics Data System (ADS)

    Park, Joon-Min; Kang, Young-Min; Hong, Joo-Yoo; Oh, Hye-Keun

    2008-02-01

A 50 nm contact hole (CH) random array fabricated by resist reflow process (RRP) was studied to produce 32 nm node devices. RRP is widely used for mass production of semiconductor devices, but it has some restrictions because the reflow strongly depends on the array, pitch, and shape of the CHs. Thus, we must have full knowledge of the pattern dependency after RRP, and we need an optimum optical-proximity-corrected mask that accounts for RRP to compensate for the pattern dependency in a random array. To fabricate an optimum optical proximity- and RRP-corrected mask, we must have a better understanding of how much the resist flows and where the CHs are located after RRP. A simulation is carried out to correctly predict the RRP result by including RRP parameters such as viscosity, adhesion force, surface tension, and location of the CHs. As a result, we obtained uniform 50 nm CH patterns even for random and differently shaped CH arrays by optical proximity-corrected RRP.

  1. Guided filter and adaptive learning rate based non-uniformity correction algorithm for infrared focal plane array

    NASA Astrophysics Data System (ADS)

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

Imaging non-uniformity of an infrared focal plane array (IRFPA) behaves as fixed-pattern noise superimposed on the image, which seriously degrades the imaging quality of the infrared system. In scene-based non-uniformity correction methods, the drawbacks of ghosting artifacts and image blurring seriously affect the sensitivity of the IRFPA imaging system and visibly decrease image quality. This paper proposes an improved neural network non-uniformity correction method with an adaptive learning rate. On the one hand, using a guided filter, the proposed algorithm decreases the effect of ghosting artifacts. On the other hand, because an inappropriate learning rate is the main cause of image blurring, the proposed algorithm utilizes an adaptive learning rate with a temporal domain factor to eliminate the effect of image blurring. In short, the proposed algorithm combines the merits of the guided filter and the adaptive learning rate. Several real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. The experimental results indicate that the proposed algorithm can not only reduce the non-uniformity with fewer ghosting artifacts but also overcome the problem of image blurring in static areas.
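The scene-based neural-network scheme the paper improves on can be sketched with per-pixel LMS updates of gain and offset toward a local spatial mean; the adaptive-rate factor below is a simple stand-in for the paper's temporal-domain rule, not its actual formula, and the guided-filter step is omitted:

```python
import numpy as np

def nuc_step(frame, gain, offset, lr=0.01):
    """One LMS update of per-pixel gain/offset toward a 3x3 local mean."""
    corrected = gain * frame + offset
    h, w = corrected.shape
    pad = np.pad(corrected, 1, mode="edge")
    desired = sum(pad[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    err = corrected - desired
    step = lr / (1.0 + err ** 2)     # stand-in adaptive rate: smaller steps
                                     # where the error (likely scene detail)
                                     # is large, to limit blurring
    gain -= step * err * frame
    offset -= step * err
    return gain, offset, corrected

# Synthetic test: flat scenes with fixed-pattern noise; the corrected
# image should become much more uniform than the raw frames.
rng = np.random.default_rng(4)
h = w = 32
g_true = 1.0 + 0.2 * rng.normal(size=(h, w))   # gain fixed-pattern noise
o_true = 0.5 * rng.normal(size=(h, w))         # offset fixed-pattern noise
gain, offset = np.ones((h, w)), np.zeros((h, w))
for _ in range(300):
    scene = rng.uniform(1.0, 3.0) * np.ones((h, w))  # varying flat scenes
    frame = g_true * scene + o_true
    gain, offset, corrected = nuc_step(frame, gain, offset)
print(corrected.std(), frame.std())
```

Ghosting in such schemes arises when scene structure leaks into the gain/offset estimates; the paper's guided filter addresses exactly that leakage.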

  2. Design, processing and testing of LSI arrays: Hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.

    1979-01-01

Mathematical cost factors were generated for both hybrid microcircuit and printed wiring board packaging methods. A mathematical cost model was created for analysis of microcircuit fabrication costs. The costing factors were refined and reduced to formulae for computerization. Efficient methods were investigated for low-cost packaging of LSI devices as a function of density and reliability. Technical problem areas such as wafer bumping, inner/outer lead bonding, testing on tape, and tape processing were investigated.

  3. Adaptation as process: the future of Darwinism and the legacy of Theodosius Dobzhansky.

    PubMed

    Depew, David J

    2011-03-01

Conceptions of adaptation have varied in the history of genetic Darwinism depending on whether what is taken to be focal is the process of adaptation, adapted states of populations, or discrete adaptations in individual organisms. I argue that Theodosius Dobzhansky's view of adaptation as a dynamical process contrasts with so-called "adaptationist" views of natural selection figured as "design-without-a-designer" of relatively discrete, enumerable adaptations. Correlated with these respectively process- and product-oriented approaches to adaptive natural selection are divergent pictures of organisms themselves as developmental wholes or as "bundles" of adaptations. While even process versions of genetical Darwinism are insufficiently sensitive to the fact that much of the variation on which adaptive selection works consists of changes in the timing, rate, or location of ontogenetic events, I argue that articulations of the Modern Synthesis influenced by Dobzhansky are more easily reconciled with the recent shift to evolutionary developmentalism than are versions that make discrete adaptations central.

  4. Process development for cell aggregate arrays encapsulated in a synthetic hydrogel using negative dielectrophoresis.

    PubMed

    Abdallat, Rula G; Ahmad Tajuddin, Aziela S; Gould, David H; Hughes, Michael P; Fatoyinbo, Henry O; Labeed, Fatima H

    2013-04-01

    Spatial patterning of cells is of great importance in tissue engineering and biotechnology, enabling, for example, the creation of bottom-up histoarchitectures of heterogeneous cells, or cell aggregates for in vitro high-throughput toxicological and therapeutic studies within 3D microenvironments. In this paper, a single-step process for creating peelable and resilient hydrogels, encapsulating arrays of biological cell aggregates formed by negative DEP, has been devised. Dielectrophoretic trapping within low-energy regions of the DEP-dot array reduces cell exposure to high field stresses while creating distinguishable, evenly spaced arrays of aggregates. By combining an optimal PEG diacrylate pre-polymer concentration with a novel UV exposure mechanism, total processing time was reduced: with a continuous-phase medium of PEG diacrylate at 15% v/v concentration, effective dielectrophoretic cell patterning and photo-polymerisation of the mixture were achieved within a 4 min period, using a 30 s UV exposure within a dedicated, wide-exposure-area DEP light box system. To demonstrate the developed process, aggregates of yeast, human leukemic (K562) and HeLa cells were immobilised in an array format within the hydrogel. Relative cell viability within the hydrogels, after maintaining them in appropriate iso-osmotic media over a one-week period, was greater than 90%. PMID:23436271

  5. Design, processing, and testing of LSI arrays for space station

    NASA Technical Reports Server (NTRS)

    Lile, W. R.; Hollingsworth, R. J.

    1972-01-01

    The design of a MOS 256-bit Random Access Memory (RAM) is discussed. Technological achievements comprise computer simulations that accurately predict performance; aluminum-gate COS/MOS devices including a 256-bit RAM with current sensing; and a silicon-gate process that is being used in the construction of a 256-bit RAM with voltage sensing. The Si-gate process increases speed by reducing the overlap capacitance between gate and source-drain, thus reducing the crossover capacitance and allowing shorter interconnections. The design of a Si-gate RAM, which is pin-for-pin compatible with an RCA bulk silicon COS/MOS memory (type TA 5974), is discussed in full. The Integrated Circuit Tester (ICT) is limited to dc evaluation, but the diagnostics and data collecting are under computer control. The Silicon-on-Sapphire Memory Evaluator (SOS-ME, previously called SOS Memory Exerciser) measures power supply drain and performs a minimum number of tests to establish operation of the memory devices. The Macrodata MD-100 is a microprogrammable tester which has capabilities of extensive testing at speeds up to 5 MHz. Beam-lead technology was successfully integrated with SOS technology to make a simple device with beam leads. This device and the scribing are discussed.

  6. True-time-delay transmit/receive optical beam-forming system for phased arrays and other signal processing applications

    NASA Astrophysics Data System (ADS)

    Toughlian, Edward N.; Zamuda, H.; Carter, Charity A.

    1994-06-01

    This paper addresses the problem of dynamic optical processing for the control of phased array antennas. The significant result presented is the demonstration of a continuously variable photonic RF/microwave delay line. Specifically, it is shown that by applying spatial frequency dependent optical phase compensation in an optical heterodyne process, variable RF delay can be achieved over a prescribed frequency band. Experimental results which demonstrate the performance of the delay line with regard to both maximum delay and resolution over a broad bandwidth are presented. Additionally, a spatially integrated optical system is proposed for control of phased array antennas. The integrated system provides mechanical stability, essentially eliminates the drift problems associated with free space optical systems, and can provide high packing density. This approach uses a class of spatial light modulator known as a deformable mirror device and leads to a steerable arbitrary antenna radiation pattern of the true time delay type. Also considered is the ability to utilize the delay line as a general photonic signal processing element in an adaptive (reconfigurable) transversal frequency filter configuration. Such systems are widely applicable in jammer/noise canceling systems, broadband ISDN, spread spectrum secure communications and the like.

  7. a Post-Processing Technique for Guided Wave Array Data for the Inspection of Plate Structures

    NASA Astrophysics Data System (ADS)

    Velichko, A.; Wilcox, P. D.

    2008-02-01

    The paper describes a general approach for processing data from a guided wave transducer array on a plate-like structure. It is shown that improvements in resolution are obtained at the expense of sensitivity to noise. A method of quantifying this sensitivity is presented. Experimental data obtained from a guided wave array containing electromagnetic acoustic transducers (EMAT) elements for exciting and detecting the S0 Lamb wave mode in a 5-mm thick aluminium plate are processed with different algorithms and the results are discussed. Generalization of the technique for the case of multimode media is suggested.

  8. A comparison of deghosting techniques in adaptive nonuniformity correction for IR focal-plane array systems

    NASA Astrophysics Data System (ADS)

    Rossi, Alessandro; Diani, Marco; Corsini, Giovanni

    2010-10-01

    Focal-plane array (FPA) IR systems are affected by fixed-pattern noise (FPN), which is caused by the nonuniformity of the responses of the detectors that compose the array. Because FPN drifts slowly in time, several scene-based nonuniformity correction (NUC) techniques have been developed that perform calibration during acquisition using only the collected data. Unfortunately, such algorithms suffer from a collateral problem: ghosting-like artifacts are generated by edges in the scene and appear as a reverse image in the original position. In this paper, we compare the performance of representative methods for reducing ghosting. These methods relate to the least mean square (LMS)-based NUC algorithm proposed by D. A. Scribner. In particular, attention is focused on a recently proposed technique based on the temporal statistics of the error signal in the aforementioned LMS-NUC algorithm. The performance of the deghosting techniques is investigated by means of IR data corrupted with simulated nonuniformity noise over the detectors of the FPA. Finally, we consider the computational cost, which is a challenge for employing such techniques in real-time systems.
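
    As a rough illustration of the LMS-NUC family this record builds on (not the specific deghosting variants it compares), each pixel's gain and offset can be updated by stochastic gradient descent toward a spatially smoothed "desired" image. The step sizes and box-blur desired signal below are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np

def lms_nuc(frames, mu_g=1e-6, mu_o=1e-2):
    """Scene-based LMS nonuniformity correction (illustrative sketch).

    Each pixel applies y = g*x + o; the error against a local spatial
    average drives per-pixel gain g and offset o so that slowly drifting
    fixed-pattern structure is removed over many frames.
    """
    h, w = frames[0].shape
    g = np.ones((h, w))   # per-pixel gain estimate
    o = np.zeros((h, w))  # per-pixel offset estimate
    corrected = []
    for x in frames:
        y = g * x + o
        # "desired" signal: 4-neighbour average of the corrected frame,
        # so the error isolates high-spatial-frequency fixed-pattern noise
        d = (np.roll(y, 1, 0) + np.roll(y, -1, 0) +
             np.roll(y, 1, 1) + np.roll(y, -1, 1)) / 4.0
        e = y - d
        g -= mu_g * e * x  # LMS gradient step on gain
        o -= mu_o * e      # LMS gradient step on offset
        corrected.append(y)
    return corrected, g, o
```

    Edges in the scene leak into the error term e, which is exactly how the ghosting artifacts discussed in the record arise.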

  9. Critical dimension control for 32 nm random contact hole array with resist reflow process

    NASA Astrophysics Data System (ADS)

    Park, Joon-Min; Kang, Young-Min; Park, Seung-Wook; Hong, Joo-Yoo; Oh, Hye-Keun

    2007-10-01

    A 50 nm random contact hole array formed by the resist reflow process (RRP) was studied for making 32 nm node devices. Patterning a small contact hole array is harder than patterning lines and spaces. RRP has many advantages, but its outcome depends strongly on pattern array, pitch, and shape. We therefore need full knowledge of the pattern dependency after RRP, and an optimum optical-proximity-corrected mask that includes RRP to compensate for the pattern dependency in a random array. To make such a mask, we must understand how much resist flows and where the contact holes are located after RRP. A simulation was made to correctly predict the RRP result by including RRP parameters such as viscosity, adhesion force, surface tension, and contact hole location. As a result, we obtained uniform 50 nm contact hole patterns by optical-proximity-corrected RRP, even for random and differently shaped contact hole arrays.

  10. Adaptive random renormalization group classification of multiscale dispersive processes

    NASA Astrophysics Data System (ADS)

    Cushman, John; O'Malley, Dan

    2013-04-01

    Renormalization group operators provide a detailed classification tool for dispersive processes. We begin by reviewing a two-scale renormalization group classification scheme. Repeated application of one operator is associated with the long-time behavior of the process, while repeated application of the other is associated with short-time behavior. This approach is shown to be robust even in the presence of non-stationary increments and/or infinite second moments. Fixed points of the operators can be used for further subclassification of the processes when appropriate limits exist. As an example, we look at advective dispersion in an ergodic velocity field. Let X(t) be a fixed point of the long-time renormalization group operator (RGO) RX(t)=X(rt)/r^p. Scaling laws for the probability density, mean first passage times, and finite-size Lyapunov exponents of such fixed points are reviewed in anticipation of more general results. A generalized RGO, Rp, in which the exponent p above is a random variable, is introduced. Scaling laws associated with these random RGOs (RRGOs) are demonstrated numerically and applied to a process modeling the transition from sub-dispersion to Fickian dispersion. The scaling laws for the RRGO are not simple power laws, but instead are a weighted average of power laws. The weighting in the scaling laws can be determined adaptively via Bayes' theorem.
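
    The "weighted average of power laws" behaviour described above can be illustrated numerically: a mixture of two power-law scaling terms has a local log-log slope that crosses over from the small exponent at early times to the large one at late times, mimicking the sub-dispersive-to-Fickian transition. The weights and exponents below are arbitrary illustrative values, not results from the record.

```python
import numpy as np

def mixture_msd(t, weights, exponents):
    """Weighted average of power laws: sum_i w_i * t**(2*p_i)."""
    t = np.asarray(t, dtype=float)
    return sum(w * t ** (2 * p) for w, p in zip(weights, exponents))

def local_slope(t, s):
    """Local log-log slope d(log s)/d(log t) by finite differences."""
    return np.diff(np.log(s)) / np.diff(np.log(t))
```

    With exponents p = 0.25 and p = 0.5 the slope moves from roughly 0.5 (sub-dispersive) toward 1.0 (Fickian) as t grows, rather than following a single power law.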

  11. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that the convex hull of a planar set of n points can be computed in O(log n / log m) time on a 2-D PARBS of size mn x n with 3 <= m <= n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 x n.
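
    The parallel algorithm itself requires the reconfigurable-bus machine model, but the object it computes is the ordinary planar convex hull. As a sequential reference point, Andrew's monotone chain (a standard O(n log n) method, not the PARBS algorithm) produces the same hull:

```python
def convex_hull(points):
    """Andrew's monotone chain: counter-clockwise hull of 2-D points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a-o) x (b-o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # concatenate, dropping the duplicated endpoints
    return lower[:-1] + upper[:-1]
```

    The PARBS result replaces the sequential scans with O(log n / log m)-time parallel merging steps over the reconfigurable bus.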

  12. Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge

    1999-01-01

    A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.

  13. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  14. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  15. Adaptive lenticular microlens array based on voltage-induced waves at the surface of polyvinyl chloride/dibutyl phthalate gels.

    PubMed

    Xu, Miao; Jin, Boya; He, Rui; Ren, Hongwen

    2016-04-18

    We report a new approach to preparing a lenticular microlens array (LMA) using polyvinyl chloride (PVC)/dibutyl phthalate (DBP) gels. The PVC/DBP gels coated on a glass substrate form a membrane. With the aid of electrostatic repulsive force, the surface of the membrane can be reconfigured into sinusoidal waves by a DC voltage, and the membrane with a wavy surface functions as an LMA. By swapping the anode and cathode, the convex shape of each lenticular microlens in the array can be converted to a concave shape, so the LMA presents a large dynamic range. The response time is relatively fast and the driving voltage is low. With the advantages of compact structure, optical isotropy, and good mechanical stability, our LMA has potential applications in imaging, information processing, biometrics, and displays. PMID:27137253

  16. DAMAS Processing for a Phased Array Study in the NASA Langley Jet Noise Laboratory

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.; Plassman, Gerald E.

    2010-01-01

    A jet noise measurement study was conducted using a phased microphone array system for a range of jet nozzle configurations and flow conditions. The test effort included convergent and convergent/divergent single-flow nozzles, as well as conventional and chevron dual-flow core and fan configurations. Cold jets were tested with and without wind tunnel co-flow, whereas hot jets were tested only with co-flow. The intent of the measurement effort was to allow evaluation of new phased array technologies for their ability to separate and quantify distributions of jet noise sources. In the present paper, the array post-processing method focused upon is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) for the quantitative determination of spatial distributions of noise sources. Jet noise is highly complex, with stationary and convecting noise sources, convecting flows that are themselves sources, and shock-related and screech noise for supersonic flow. The analysis presented in this paper addresses some processing details with DAMAS for the array positioned at 90° (normal) to the jet. The paper demonstrates the applicability of DAMAS and how it indicates when strong coherence is present. Also, a new approach to calibrating the array focus and position is introduced and demonstrated.
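
    DAMAS is commonly described as solving the linear system relating the beamform map to the underlying source strengths, subject to nonnegativity, by Gauss-Seidel-style sweeps. The sketch below follows that general reading; the point-spread matrix A in a real application comes from the array geometry and steering vectors, and the solver details here are illustrative, not the paper's exact implementation.

```python
import numpy as np

def damas(A, b, n_iter=200):
    """Minimal DAMAS-style deconvolution sketch.

    Solves A x = b for nonnegative source strengths x, where A is the
    array point-spread matrix and b the measured beamform map, using
    Gauss-Seidel sweeps with clipping at zero.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            # residual for row i excluding the diagonal term
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = max(r / A[i, i], 0.0)  # enforce nonnegativity
    return x
```

    On a diagonally dominant point-spread matrix the sweeps converge to the nonnegative source distribution that reproduces the beamform map.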

  17. Optimized mirror supports, active primary mirrors and adaptive secondaries for the Optical Very Large Array (OVLA)

    NASA Astrophysics Data System (ADS)

    Arnold, Luc

    1994-06-01

    This article first deals with general aspects of optimizing mirror supports. A wide variety of support topologies have been optimized by Nelson et al. for unobscured entrance pupils; optimal forces and locations of point supports are calculated here for annular pupils. Efficient topologies introducing a small amount of defocus are also proposed for unobscured and annular pupils, and support efficiencies are given for each topology. Wavefront errors are estimated in the case of a defective cell, in order to specify tolerances on forces and geometries. The OVLA active optics is then discussed. The very thin, meniscus-shaped primary will be actively supported by 29 actuators and 3 fixed points. Actuator locations and forces have been calculated to minimize the mirror deflection under its own weight and to allow good control of astigmatism. We finally present a study of a concave adaptive secondary for the OVLA telescopes. As an initial result, we propose a defocus adaptive corrector with a variable thickness distribution; conditions of use are defined and its performance is evaluated.

  18. Effects of process parameters on the molding quality of the micro-needle array

    NASA Astrophysics Data System (ADS)

    Qiu, Z. J.; Ma, Z.; Gao, S.

    2016-07-01

    The micro-needle array, used in medical applications, is a typical injection-molded product with microstructures. Because of its tiny feature sizes and high aspect ratios, it is prone to short-shot defects, leading to poor molding quality. The injection molding process of the micro-needle array was studied in this paper to find the effects of the process parameters on molding quality and to provide theoretical guidance for practical production of high-quality products. With the shrinkage ratio and warpage of the micro needles as the evaluation indices of molding quality, an orthogonal experiment was conducted and an analysis of variance was carried out. From the results, contribution rates were calculated to determine the influence of the various process parameters on molding quality, and the single-parameter method was used to analyse the main process parameter. The contribution rate of the holding pressure to shrinkage ratio and warpage reached 83.55% and 94.71%, respectively, far higher than that of the other parameters. The study revealed that the holding pressure is the main factor affecting the molding quality of the micro-needle array, and it should therefore be the focus in practical production of high-quality parts.
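
    Contribution rates in an orthogonal experiment are conventionally computed as each factor's between-level sum of squares divided by the total sum of squares. The sketch below assumes that convention; the function name and data layout are hypothetical, not taken from the record.

```python
import numpy as np

def contribution_rates(y, factors):
    """Contribution rate of each factor in an orthogonal experiment.

    y       : response values, one per experimental run
    factors : dict mapping factor name -> level assigned in each run
    Returns each factor's between-level sum of squares over the total
    sum of squares.
    """
    y = np.asarray(y, dtype=float)
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    rates = {}
    for name, levels in factors.items():
        levels = np.asarray(levels)
        ss = sum(y[levels == l].size * (y[levels == l].mean() - grand) ** 2
                 for l in np.unique(levels))
        rates[name] = ss / ss_total
    return rates
```

    A factor that fully explains the response (as holding pressure nearly does in the record) gets a rate near 1, while irrelevant factors get rates near 0.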

  19. An Undergraduate Course and Laboratory in Digital Signal Processing with Field Programmable Gate Arrays

    ERIC Educational Resources Information Center

    Meyer-Base, U.; Vera, A.; Meyer-Base, A.; Pattichis, M. S.; Perry, R. J.

    2010-01-01

    In this paper, an innovative educational approach to introducing undergraduates to both digital signal processing (DSP) and field programmable gate array (FPGA)-based design in a one-semester course and laboratory is described. While both DSP and FPGA-based courses are currently present in different curricula, this integrated approach reduces the…

  20. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  1. Field programmable gate array processing for an improved low-light-level imaging system with higher detection sensitivity

    NASA Astrophysics Data System (ADS)

    Tang, Hongying; Yu, Zhengtao

    2014-05-01

    The method that combines frame accumulation with a shaped function is effective in low-light-level imaging, but it suffers from low imaging speed and complex operation. To optimize the method, we present the design of an improved low-light-level imaging system with higher detection sensitivity. The design, developed for faster imaging, is based on field programmable gate arrays. It features a least-squares algorithm and a saw-tooth wave varied light applied to the image sensor. By manipulating the video signal in synchronous dynamic random access memory, a low-light-level image that was previously undetectable can be estimated. The design simplifies the imaging process, doubles the imaging speed, and makes the system suited to long-range imaging.
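
    The least-squares estimation idea can be illustrated generically: if each frame is the scene modulated by a known illumination ramp plus noise, a per-pixel least-squares fit against the ramp recovers the scene with noise averaged down across frames. This is a sketch of that generic idea, not the paper's FPGA design; the function and variable names are assumptions.

```python
import numpy as np

def ls_recover(frames, ramp):
    """Per-pixel least-squares estimate of a static scene.

    Model: y_k = L_k * x + n_k, with K frames y_k (shape (K, H, W))
    under known illumination levels L_k (shape (K,)).
    Closed-form LS solution: x = sum_k L_k y_k / sum_k L_k^2.
    """
    y = np.asarray(frames, dtype=float)
    L = np.asarray(ramp, dtype=float)
    return np.tensordot(L, y, axes=1) / np.dot(L, L)
```

    Because the estimator averages over all K frames, the per-pixel noise standard deviation shrinks roughly as 1/sqrt(sum L_k^2), which is the accumulation gain the record relies on.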

  2. Assembly, integration, and verification (AIV) in ALMA: series processing of array elements

    NASA Astrophysics Data System (ADS)

    Lopez, Bernhard; Jager, Rieks; Whyborn, Nicholas D.; Knee, Lewis B. G.; McMullin, Joseph P.

    2012-09-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and East Asia, in collaboration with the Republic of Chile. ALMA will consist of at least 54 twelve-meter antennas and 12 seven-meter antennas operating as an aperture synthesis array in the (sub)millimeter wavelength range. It is the responsibility of ALMA AIV to deliver the fully assembled, integrated, and verified antennas (array elements) to the telescope array. After an initial phase of infrastructure setup, AIV activities began when the first ALMA antenna and subsystems became available in mid 2008. During the second semester of 2009 a project-wide effort was made to put into operation a first 3-antenna interferometer at the Array Operations Site (AOS). In 2010 the AIV focus was the transition from event-driven activities towards routine series production. Also, due to the ramp-up of operations activities, AIV underwent an organizational change from an autonomous department into a project within a strong matrix management structure. When the subsystem deliveries stabilized in early 2011, steady-state series processing could be achieved in an efficient and reliable manner. The challenge today is to maintain this production pace until completion towards the end of 2013. This paper describes the way ALMA AIV evolved successfully from the initial phase to the present steady-state of array element series processing. It elaborates on the different project phases and their relationships, presents processing statistics, illustrates the lessons learned and relevant best practices, and concludes with an outlook of the path towards completion.

  3. An eigenvector-based test for local stationarity applied to array processing.

    PubMed

    Quijano, Jorge E; Zurk, Lisa M

    2014-06-01

    In sonar array processing, a challenging problem is the estimation of the data covariance matrix in the presence of moving targets in the water column, since the time interval of local stationarity of the data is limited. This work describes an eigenvector-based method for proper segmentation of the data into intervals that exhibit local stationarity, providing data-driven upper bounds on the number of snapshots available for computation of time-varying sample covariance matrices. Application of the test is illustrated with simulated data from a horizontal array for the detection of a quiet source in the presence of a loud interferer.
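
    The abstract does not give the paper's test statistic, but the general eigenvector-based idea can be sketched: estimate sample covariance matrices from consecutive snapshot segments and compare their dominant eigenvectors; a large principal angle suggests the source geometry has changed and the interval of local stationarity has been exceeded. The steering vectors and thresholds below are illustrative assumptions.

```python
import numpy as np

def sample_covariance(snapshots):
    """Sample covariance R = (1/K) sum_k x_k x_k^H from a (K, N) array
    of K snapshots across N sensors."""
    x = np.asarray(snapshots)
    return x.conj().T @ x / x.shape[0]

def principal_angle(R1, R2):
    """Angle between the dominant eigenvectors of two covariance
    estimates; a large angle indicates a change in the dominant source
    subspace between segments."""
    v1 = np.linalg.eigh(R1)[1][:, -1]  # eigh sorts ascending
    v2 = np.linalg.eigh(R2)[1][:, -1]
    c = np.abs(np.vdot(v1, v2))
    return np.arccos(np.clip(c, 0.0, 1.0))
```

    Segments drawn from the same source direction give a small angle; a moving source that changes bearing between segments gives a large one, flagging the end of the locally stationary interval.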

  4. Recasting Hope: a process of adaptation following fetal anomaly diagnosis.

    PubMed

    Lalor, Joan; Begley, Cecily M; Galavan, Eoin

    2009-02-01

    Recent decades have seen ultrasound revolutionise the management of pregnancy and its possible complications. However, somewhat less consideration has been given to the psychosocial consequences of mass screening resulting in fetal anomaly detection in low-risk populations, particularly in contexts where termination of pregnancy services are not readily accessible. A grounded theory study was conducted exploring forty-one women's experiences of ultrasound diagnosis of fetal abnormality up to and beyond the birth in the Republic of Ireland. Thirty-one women chose to continue the pregnancy and ten women accessed termination of pregnancy services outside the state. Data were collected using repeated in-depth individual interviews pre- and post-birth and analysed using the constant comparative method. Recasting Hope, the process of adaptation following diagnosis is represented temporally as four phases: 'Assume Normal', 'Shock', 'Gaining Meaning' and 'Rebuilding'. Some mothers expressed a sense of incredulity when informed of the anomaly and the 'Assume Normal' phase provides an improved understanding as to why women remain unprepared for an adverse diagnosis. Transition to phase 2, 'Shock,' is characterised by receiving the diagnosis and makes explicit women's initial reactions. Once the diagnosis is confirmed, a process of 'Gaining Meaning' commences, whereby an attempt to make sense of this ostensibly negative event begins. 'Rebuilding', the final stage in the process, is concerned with the extent to which women recover from the loss and resolve the inconsistency between their experience and their previous expectations of pregnancy in particular and beliefs in the world in general. This theory contributes to the theoretical field of thanatology as applied to the process of grieving associated with the loss of an ideal child. The framework of Recasting Hope is intended for use as a tool to assist health professionals through offering simple yet effective

  5. Augmenting synthetic aperture radar with space time adaptive processing

    NASA Astrophysics Data System (ADS)

    Riedl, Michael; Potter, Lee C.; Ertin, Emre

    2013-05-01

    Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when Doppler shift places moving target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space time adaptive processing (STAP) while constraining the down-link data rate to that of a single antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.

  6. High Density Crossbar Arrays with Sub- 15 nm Single Cells via Liftoff Process Only.

    PubMed

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing the fabrication boundaries at their limits, for leveraging an even higher density of nano-devices towards reaching 4F(2)/cell footprint in 3D arrays. Here, we study the liftoff process limits to achieve extreme dense nanowires while ensuring preservation of thin film quality. The proposed method is optimized for attaining a multiple layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells that have sub- 15 nm nanowires distant by 28 nm across 6-inch wafer. PMID:27585643

  7. High Density Crossbar Arrays with Sub- 15 nm Single Cells via Liftoff Process Only

    NASA Astrophysics Data System (ADS)

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-09-01

    Emerging nano-scale technologies are pushing the fabrication boundaries at their limits, for leveraging an even higher density of nano-devices towards reaching 4F2/cell footprint in 3D arrays. Here, we study the liftoff process limits to achieve extreme dense nanowires while ensuring preservation of thin film quality. The proposed method is optimized for attaining a multiple layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells that have sub- 15 nm nanowires distant by 28 nm across 6-inch wafer.

  8. High Density Crossbar Arrays with Sub- 15 nm Single Cells via Liftoff Process Only

    PubMed Central

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing the fabrication boundaries at their limits, for leveraging an even higher density of nano-devices towards reaching 4F2/cell footprint in 3D arrays. Here, we study the liftoff process limits to achieve extreme dense nanowires while ensuring preservation of thin film quality. The proposed method is optimized for attaining a multiple layer fabrication to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells that have sub- 15 nm nanowires distant by 28 nm across 6-inch wafer. PMID:27585643

  9. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    NASA Astrophysics Data System (ADS)

    Warburton, W. K.; Harris, J. T.; Friedrich, S.

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100-2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays - currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I-V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.
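
    The selection step of the I-V scanning algorithm described above reduces to a minimization over the scanned bias points. A minimal, hypothetical sketch of that step (the real system drives the preamplifiers to collect the noise-versus-offset scan):

```python
def optimum_bias(scan):
    """Given (offset_voltage, noise) pairs from an I-V scan, return the
    offset voltage with the lowest measured noise."""
    return min(scan, key=lambda point: point[1])[0]
```

    In the deployed electronics this selection would run per channel, so each of the 128 STJs gets its own operating point.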

  10. Applying Convolution-Based Processing Methods To A Dual-Channel, Large Array Artificial Olfactory Mucosa

    NASA Astrophysics Data System (ADS)

    Taylor, J. E.; Che Harun, F. K.; Covington, J. A.; Gardner, J. W.

    2009-05-01

    Our understanding of the human olfactory system, particularly with respect to the phenomenon of nasal chromatography, has led us to develop a new generation of odour-sensitive instruments (or electronic noses). This instrument calls for new approaches to data processing so that its information-rich signals can be fully exploited; here, we apply a novel time-series-based technique for processing such data. The dual-channel, large-array artificial olfactory mucosa consists of 3 arrays of 300 sensors each. The sensors are divided into 24 groups, with each group made from a particular type of polymer. The first array is connected to the other two arrays by a pair of retentive columns. One channel is coated with Carbowax 20M, and the other with OV-1. This configuration partly mimics the nasal chromatography effect and partly augments it by utilizing not only polar (mucus layer) but also non-polar (artificial) coatings. Such a device presents several challenges for multivariate data processing: a large, redundant dataset, spatio-temporal output, and a small sample space. By applying a novel convolution approach to this problem, it has been demonstrated that these problems can be overcome. The artificial mucosa signals were classified using a probabilistic neural network, giving an accuracy of 85%. Even better results should be possible through the selection of sensors with lower correlation.

  11. MagicPlate-512: A 2D silicon detector array for quality assurance of stereotactic motion adaptive radiotherapy

    SciTech Connect

    Petasecca, M. Newall, M. K.; Aldosari, A. H.; Fuduli, I.; Espinoza, A. A.; Porumb, C. S.; Guatelli, S.; Metcalfe, P.; Lerch, M. L. F.; Rosenfeld, A. B.; Booth, J. T.; Colvill, E.; Duncan, M.; Cammarano, D.; Carolan, M.; Oborn, B.; Perevertaylo, V.; Keall, P. J.

    2015-06-15

    Purpose: Spatial and temporal resolution are two of the most important features of quality assurance instrumentation for motion-adaptive radiotherapy modalities. The goal of this work is to characterize the performance of the 2D high-spatial-resolution monolithic silicon diode array named “MagicPlate-512” for quality assurance of stereotactic body radiation therapy (SBRT) and stereotactic radiosurgery (SRS) combined with a dynamic multileaf collimator (MLC) tracking technique for motion compensation. Methods: MagicPlate-512 is used in combination with the movable platform HexaMotion and a research version of the radiofrequency tracking system Calypso driving MLC tracking software. The authors reconstruct 2D dose distributions of small-field square beams in three modalities: in static conditions, mimicking the temporal movement pattern of a lung tumor, and tracking the moving target while the MLC compensates almost instantaneously for the tumor displacement. Use of Calypso in combination with MagicPlate-512 requires proper radiofrequency interference shielding. The impact of the shielding on dosimetry was simulated with GEANT4 and verified experimentally. The temporal and spatial resolution of the dosimetry system also allows accurate verification of segments of complex stereotactic radiotherapy plans, identifying the instant and location at which a given dose is delivered. This feature allows retrospective temporal reconstruction of the delivery process and easy identification of errors in the tracking or MLC driving systems. A sliding MLC wedge combined with the lung motion pattern has been measured. The ability of the MagicPlate-512 (MP512) in 2D dose mapping in all three modes of operation was benchmarked against EBT3 film. Results: Full width at half maximum and penumbra of the moving and stationary dose profiles measured by EBT3 film and MagicPlate-512 confirm that motion has a significant impact on the dose distribution. Motion

  12. Sensor fusion with on-line gas emission multisensor arrays and standard process measuring devices in baker's yeast manufacturing process.

    PubMed

    Mandenius, C F; Eklöv, T; Lundström, I

    1997-07-20

    The use of a multisensor array for measuring the emission from a production-scale baker's yeast manufacturing process is reported. The sensor array, containing 14 different gas-sensitive semiconductor devices and an infrared gas sensor, was used to monitor the gas emission from a yeast culture bioreactor during fed-batch operation. The signal pattern from the sensors was evaluated in relation to two key process variables, the cell mass and the ethanol concentrations. Fusion with the on-line sensor signals for reactor weight and aeration rate made it possible to estimate cell mass and ethanol concentration using computation with backpropagating artificial neural nets. Identification of process states with the same fusion of sensor signals was realized using principal component analysis. (c) 1997 John Wiley & Sons, Inc. Biotechnol Bioeng 55: 427-438, 1997.
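
    A backpropagation-trained estimator of the kind the abstract describes can be sketched with a one-hidden-layer network mapping fused sensor channels to a process variable. Everything below is synthetic (random "sensor" features and a toy target); the paper's actual inputs were 14 gas-sensor signals, an infrared gas sensor, reactor weight, and aeration rate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for fused inputs: 16 feature channels standing in
# for gas sensors plus reactor weight and aeration rate, with a hidden
# nonlinear dependence on the target (e.g., "ethanol concentration").
X = rng.standard_normal((200, 16))
y = np.tanh(X[:, :4].sum(axis=1))  # toy target variable

# One-hidden-layer MLP trained by plain backpropagation.
W1 = 0.1 * rng.standard_normal((16, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal(8);       b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # linear output layer
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

    The trained network should fit the fused channels well below the variance of the target, the baseline a constant predictor would achieve.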

  13. Hollow polymer microneedle array fabricated by photolithography process combined with micromolding technique.

    PubMed

    Wang, Po-Chun; Wester, Brock A; Rajaraman, Swaminathan; Paik, Seung-Joon; Kim, Seong-Hyok; Allen, Mark G

    2009-01-01

    Transdermal drug delivery through microneedles is a minimally invasive procedure causing little or no pain, and is a potentially attractive alternative to intramuscular and subdermal drug delivery methods. This paper demonstrates the fabrication of a hollow microneedle array using a polymer-based process combining UV photolithography and replica molding techniques. The key characteristic of the proposed fabrication process is to define a hollow lumen for microfluidic access via photopatterning, allowing a batch process as well as high throughput. A hollow SU-8 microneedle array, consisting of 825 μm tall and 400 μm wide microneedles with 15-25 μm tip diameters and 120 μm diameter hollow lumens, was designed, fabricated and characterized. PMID:19964192

  14. Extension of DAMAS Phased Array Processing for Spatial Coherence Determination (DAMAS-C)

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M., Jr.

    2006-01-01

    The present study reports a new development of the DAMAS microphone phased array processing methodology that allows the determination and separation of coherent and incoherent noise source distributions. In 2004, a Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) was developed which decoupled the array design and processing influence from the noise being measured, using a simple and robust algorithm. In 2005, three-dimensional applications of DAMAS were examined. DAMAS has been shown to render an unambiguous quantitative determination of acoustic source position and strength. However, an underlying premise of DAMAS, as well as that of classical array beamforming methodology, is that the noise regions under study are distributions of statistically independent sources. The present development, called DAMAS-C, extends the basic approach to include coherence definition between noise sources. The solutions incorporate cross-beamforming array measurements over the survey region. While the resulting inverse problem can be large and the iteration solution computationally demanding, it solves problems no other technique can approach. DAMAS-C is validated using noise source simulations and is applied to airframe flap noise test results.
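
    The DAMAS family poses the beamform map as a linear inverse problem A x = b, where A is the array point-spread matrix and the source strengths x must be nonnegative; the published algorithm solves this with Gauss-Seidel-style sweeps and a nonnegativity clamp. A toy sketch with an invented, diagonally dominant point-spread matrix (not a real array response):

```python
import numpy as np

def damas_deconvolve(A, b, iters=500):
    """Gauss-Seidel sweeps with a nonnegativity clamp, in the spirit of
    the DAMAS inverse problem A x = b (A: point-spread matrix, b:
    beamform map, x: source strengths). A sketch, not NASA's code."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Residual for row i, excluding the diagonal term.
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = max(r / A[i, i], 0.0)  # clamp to nonnegative
    return x

# Invented point-spread matrix: identity plus a Gaussian blur, so it is
# symmetric positive definite and the iteration converges.
n = 8
d = np.subtract.outer(np.arange(n), np.arange(n))
A = np.eye(n) + 0.2 * np.exp(-0.5 * d ** 2)

x_true = np.zeros(n); x_true[2] = 1.0; x_true[5] = 0.5  # two sources
b = A @ x_true                                          # "measured" map
x_hat = damas_deconvolve(A, b)
```

    On this noiseless toy problem the iteration recovers the two source strengths essentially exactly; real beamform maps add noise and a much larger, ill-conditioned A.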

  15. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel.

    PubMed

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate's pre-patterning process. By modifying the substrate's wettability, the conducting polymer's contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics.

  16. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    PubMed Central

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate’s pre-patterning process. By modifying the substrate’s wettability, the conducting polymer’s contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics. PMID:27378163

  18. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-07-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate’s pre-patterning process. By modifying the substrate’s wettability, the conducting polymer’s contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics.

  19. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, biologically inspired on-focal-plane image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) that combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very-large-scale-integration (VLSI) chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by applying massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  20. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    PubMed Central

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-01-01

    Printing fully solution-processed organic electronic devices may potentially revolutionize production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm2 V−1 s−1 in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm2 V−1 s−1 and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays. PMID:24492785

  1. Implementation of a Digital Signal Processing Subsystem for a Long Wavelength Array Station

    NASA Technical Reports Server (NTRS)

    Soriano, Melissa; Navarro, Robert; D'Addario, Larry; Sigman, Elliott; Wang, Douglas

    2011-01-01

    This paper describes the implementation of a Digital Signal Processing (DSP) subsystem for a single Long Wavelength Array (LWA) station. The LWA is a radio telescope that will consist of many phased array stations. Each LWA station consists of 256 pairs of dipole-like antennas operating over the 10-88 MHz frequency range. The Digital Signal Processing subsystem digitizes up to 260 dual-polarization signals at 196 MHz from the LWA Analog Receiver, adjusts the delay and amplitude of each signal, and forms four independent beams. Coarse delay is implemented using a first-in-first-out buffer, and fine delay is implemented using a finite impulse response filter. Amplitude adjustment and polarization corrections are implemented using a 2x2 matrix multiplication.
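
    The two-stage delay scheme described above, an integer-sample coarse delay in a first-in-first-out buffer plus a fractional-sample fine delay in an FIR filter, can be sketched as follows. The windowed-sinc taps are one standard choice of fine-delay filter and are an assumption here; the abstract does not give the LWA firmware's actual filter design:

```python
import numpy as np

def fractional_delay_fir(delay, ntaps=21):
    """Windowed-sinc FIR taps approximating a fractional-sample delay."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(n - delay) * np.hamming(ntaps)
    return h / h.sum()  # normalize DC gain to 1

def delay_signal(x, total_delay):
    """Integer samples via a buffer shift (the FIFO role); the
    fractional remainder via the FIR filter (the fine-delay role)."""
    coarse = int(np.floor(total_delay))
    frac = total_delay - coarse
    y = np.concatenate([np.zeros(coarse), x])[: len(x)]
    return np.convolve(y, fractional_delay_fir(frac), mode="same")

# Delay a low-frequency tone by 3.5 samples, as beam steering would
# require to align one antenna's signal with the wavefront.
t = np.arange(400)
x = np.sin(2 * np.pi * 0.02 * t)
aligned = delay_signal(x, 3.5)
```

    Away from the filter's edge transients, the output closely matches the tone delayed by exactly 3.5 samples.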

  2. Process development for automated solar cell and module production. Task 4: Automated array assembly

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use was developed. The process sequence was then critically analyzed from a technical and economic standpoint to determine the technological readiness of certain process steps for implementation. The steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect, both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development.

  3. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    NASA Astrophysics Data System (ADS)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series.
We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
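
    The core idea, that a malfunctioning channel changes the dimensional structure of the array-wide time series, can be illustrated with a toy example: a coherent wavefield keeps the effective subspace dimension low, and a noise-only channel raises it. All sizes and thresholds below are invented:

```python
import numpy as np

def effective_dimension(X, energy=0.95):
    """Number of principal components needed to capture the given
    fraction of variance in the channel-by-time data matrix X."""
    X = X - X.mean(axis=1, keepdims=True)
    s = np.linalg.svd(X, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(frac, energy) + 1)

rng = np.random.default_rng(0)
t = np.arange(2000)
common = np.sin(2 * np.pi * 0.01 * t)  # coherent wavefield across the array
good = np.vstack([common + 0.05 * rng.standard_normal(t.size)
                  for _ in range(16)])

bad = good.copy()
bad[3] = rng.standard_normal(t.size)   # malfunctioning channel: pure noise

d_good = effective_dimension(good)     # coherent array: dimension 1
d_bad = effective_dimension(bad)       # bad channel adds a component
```

    A QC pass could flag windows where the effective dimension jumps without a corresponding detection, then search for the offending channel.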

  4. High-performance liquid chromatography with diode-array detection cotinine method adapted for the assessment of tobacco smoke exposure.

    PubMed

    Bartolomé, Mónica; Gallego-Picó, Alejandrina; Huetos, Olga; Castaño, Argelia

    2014-06-01

    Smoking is considered to be one of the main risk factors for cancer and other diseases and is the second leading cause of death worldwide. As the anti-tobacco legislation implemented in Europe has reduced secondhand smoke exposure levels, analytical methods must be adapted to these new levels. Recent research has demonstrated that cotinine is the best overall discriminator when biomarkers are used to determine whether a person has ongoing exposure to tobacco smoke. This work proposes a sensitive, simple and low-cost method based on solid-phase extraction and liquid chromatography with diode array detection for the assessment of tobacco smoke exposure by cotinine determination in urine. The analytical procedure is simple and fast (20 min) when compared to other similar methods existing in the literature, and it is cheaper than the mass spectrometry techniques usually used to quantify levels in nonsmokers. We obtained a quantification limit of 12.30 μg/L and a recovery of over 90%. The linearity ranges used were 12-250 and 250-4000 μg/L. The method was successfully used to determine cotinine in urine samples collected from different volunteers and is clearly an alternative routine method that allows active and passive smokers to be distinguished.

  5. Lightweight solar array blanket tooling, laser welding and cover process technology. Final Report

    SciTech Connect

    Dillard, P.A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  6. Lightweight solar array blanket tooling, laser welding and cover process technology

    NASA Technical Reports Server (NTRS)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  7. Analysis of dynamic deformation processes with adaptive KALMAN-filtering

    NASA Astrophysics Data System (ADS)

    Eichhorn, Andreas

    2007-05-01

    In this paper the approach of a full system analysis is shown, quantifying a dynamic structural ("white-box") model for the calculation of thermal deformations of bar-shaped machine elements. The task was motivated by mechanical engineering's search for new methods for the precise prediction and computational compensation of thermal influences in the heating and cooling phases of machine tools (e.g., robot arms). The quantification of thermal deformations under variable dynamic loads requires modelling the non-stationary spatial temperature distribution inside the object. Based upon Fourier's law of heat conduction, the highly non-linear temperature gradient is represented by a system of partial differential equations within the framework of a dynamic finite element topology. It is shown that adaptive Kalman filtering is suitable to quantify relevant disturbance influences and to identify thermal parameters (e.g., thermal diffusivity) with a deviation of only 0.2%. As a result, an identified (and verified) parametric model for the realistic prediction and simulation of dynamic temperature processes is presented. Classifying the thermal bend as the main deformation quantity of bar-shaped machine tools, the temperature model is extended to a temperature-deformation model. In lab tests, thermal load steps are applied to an aluminum column. Independent control measurements show that the identified model can be used to predict the column's bend with a mean deviation (r.m.s.) smaller than 10 mgon. These results show that the deformation model is a precise predictor and suitable for realistic simulations of thermal deformations. Experiments with modified heat sources will be necessary to verify the model in further frequency spectra of dynamic thermal loads.
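
    A common concrete form of adaptive Kalman filtering estimates the measurement-noise variance from the recent innovation sequence. The scalar sketch below tracks a synthetic temperature ramp; it illustrates the generic innovation-based adaptation idea, not the paper's finite-element temperature model:

```python
import numpy as np

def adaptive_kalman(z, q=1e-4, window=30):
    """Scalar random-walk Kalman filter that adapts its measurement-noise
    variance R from the variance of recent innovations (a standard
    innovation-based adaptive scheme; a sketch only)."""
    x, p, r = z[0], 1.0, 1.0
    innovations, estimates = [], []
    for zk in z[1:]:
        p = p + q                        # predict (random-walk state)
        nu = zk - x                      # innovation
        innovations.append(nu)
        if len(innovations) >= window:   # innovation-based R adaptation
            c = np.var(innovations[-window:])
            r = max(c - p, 1e-6)
        k = p / (p + r)                  # Kalman gain
        x = x + k * nu
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates), r

rng = np.random.default_rng(2)
true_temp = 20.0 + 0.001 * np.arange(3000)        # slow heating ramp
z = true_temp + 0.5 * rng.standard_normal(3000)   # noisy readings
est, r_hat = adaptive_kalman(z)
```

    Starting from a deliberately wrong R of 1.0, the filter settles near the true measurement-noise variance (0.25) while tracking the ramp.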

  8. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    PubMed Central

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts, during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained either for pure or contaminated samples with controlled concentrations of standard ethanol solutions of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument. PMID:25184490
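
    The PCA step described above can be sketched by projecting headspace response vectors from a "normal" and a "spoiled" batch onto the leading principal components and checking their separation. The 8-channel response data below are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 8-channel chemocapacitor responses: a "normal" fermentation
# batch and a "spoiled" batch with an invented acetic-acid-like offset.
base = np.linspace(0.5, 1.5, 8)
normal = rng.standard_normal((40, 8)) * 0.3 + base
spoiled = (rng.standard_normal((40, 8)) * 0.3 + base
           + np.array([0.0, 0.8, 0.0, 0.6, 0.0, 0.7, 0.0, 0.5]))

X = np.vstack([normal, spoiled])
Xc = X - X.mean(axis=0)
# Principal components from the SVD of the centered data matrix.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T   # first two PC scores per sample

# Class separation along PC1 versus the within-class scatter.
sep = abs(scores[:40, 0].mean() - scores[40:, 0].mean())
spread = scores[:40, 0].std() + scores[40:, 0].std()
```

    When the class offset dominates the sensor noise, as here, the two batches separate cleanly along the first principal component.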

  9. Monitoring and evaluation of alcoholic fermentation processes using a chemocapacitor sensor array.

    PubMed

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-09-02

    The alcoholic fermentation of Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts, during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained either for pure or contaminated samples with controlled concentrations of standard ethanol solutions of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument.

  10. Patellar tendinosis as an adaptive process: a new hypothesis

    PubMed Central

    Hamilton, B; Purdam, C

    2004-01-01

    Background: Patellar tendinosis (PT), or "jumper's knee" is a common condition in athletes participating in jumping sports, and is characterised by proximal patellar tendon pain and focal tenderness to palpation. Hypoechoic lesions observed in the proximal patellar tendon associated with the tendinosis are typically described as being a result of degenerative change or "failed healing". We propose a new model for the development of the hypoechoic lesion observed in PT, in which the aetiology is an adaptive response to differential forces within the tendon. Methods: We assessed the clinical, histopathological, and biomechanical literature surrounding the patellar tendon and integrated this with research into the response of tendons to differential forces. Results and conclusions: We propose that the hypoechoic lesion commonly described in PT is the result of adaptation or partial adaptation of the proximal patellar tendon to a compressive load. We postulate that the biomechanics of the patellar–patellar tendon interface creates this compressive environment. Secondary failure of the surrounding tensile adapted tendon tissue may result in tissue overload and failure, with resultant stimulation of nociceptors. We believe that this "adaptive model" of patellar tendinosis is consistent with the clinical and histological findings. PMID:15562176

  11. Time in Redox Adaptation Processes: From Evolution to Hormesis

    PubMed Central

    Sthijns, Mireille M. J. P. E.; Weseler, Antje R.; Bast, Aalt; Haenen, Guido R. M. M.

    2016-01-01

    Life on Earth has to adapt to the ever changing environment. For example, due to introduction of oxygen in the atmosphere, an antioxidant network evolved to cope with the exposure to oxygen. The adaptive mechanisms of the antioxidant network, specifically the glutathione (GSH) system, are reviewed with a special focus on the time. The quickest adaptive response to oxidative stress is direct enzyme modification, increasing the GSH levels or activating the GSH-dependent protective enzymes. After several hours, a hormetic response is seen at the transcriptional level by up-regulating Nrf2-mediated expression of enzymes involved in GSH synthesis. In the long run, adaptations occur at the epigenetic and genomic level; for example, the ability to synthesize GSH by phototrophic bacteria. Apparently, in an adaptive hormetic response not only the dose or the compound, but also time, should be considered. This is essential for targeted interventions aimed to prevent diseases by successfully coping with changes in the environment e.g., oxidative stress. PMID:27690013

  12. Sub-threshold signal processing in arrays of non-identical nanostructures.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafé, Salvador

    2011-10-28

    Weak input signals are routinely processed by molecular-scaled biological networks composed of non-identical units that operate correctly in a noisy environment. In order to show that artificial nanostructures can mimic this behavior, we explore theoretically noise-assisted signal processing in arrays of metallic nanoparticles functionalized with organic ligands that act as tunneling junctions connecting the nanoparticle to the external electrodes. The electronic transfer through the nanostructure is based on the Coulomb blockade and tunneling effects. Because of the fabrication uncertainties, these nanostructures are expected to show a high variability in their physical characteristics, and a diversity-induced static noise should be considered together with the dynamic noise caused by thermal fluctuations. This static noise originates from the hardware variability and produces fluctuations in the threshold potential of the individual nanoparticles arranged in a parallel array. The correlation between different input (potential) and output (current) signals in the array is analyzed as a function of temperature, applied voltage, and the variability in the electrical properties of the nanostructures. Extensive kinetic Monte Carlo simulations with nanostructures whose basic properties have been demonstrated experimentally show that variability can enhance the correlation, even for the case of weak signals and high variability, provided that the signal is processed by a sufficiently high number of nanostructures. Moderate redundancy permits us not only to minimize the adverse effects of the hardware variability but also to take advantage of the nanoparticles' threshold fluctuations to increase the detection range at low temperatures. This conclusion holds for the average behavior of a moderately large statistical ensemble of non-identical nanostructures processing different types of input signals and suggests that variability could be beneficial for signal processing.
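
    The redundancy effect described, in which a parallel array of non-identical threshold devices recovers a weak input despite both static (variability) and dynamic (thermal) noise, can be illustrated with a toy threshold-array model; all parameters below are invented:

```python
import numpy as np

def array_output(signal, thresholds, noise_std, rng):
    """Summed binary response of parallel threshold units: static
    threshold spread models device variability, per-unit additive
    noise models thermal fluctuations."""
    out = np.zeros(len(signal))
    for th in thresholds:
        noisy = signal + noise_std * rng.standard_normal(len(signal))
        out += (noisy > th).astype(float)
    return out

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 1000)
signal = 0.2 * np.sin(t)               # weak sub-threshold input

# Non-identical devices: thresholds scattered around 0.5.
few = 0.5 + 0.3 * rng.standard_normal(4)
many = 0.5 + 0.3 * rng.standard_normal(256)

corr_few = np.corrcoef(signal, array_output(signal, few, 0.3, rng))[0, 1]
corr_many = np.corrcoef(signal, array_output(signal, many, 0.3, rng))[0, 1]
```

    With only a handful of units the summed output correlates weakly with the input; with a few hundred non-identical units the correlation becomes strong, mirroring the redundancy argument in the abstract.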

  13. Liquid-crystalline processing of highly oriented carbon nanotube arrays for thin-film transistors.

    PubMed

    Ko, Hyunhyub; Tsukruk, Vladimir V

    2006-07-01

    We introduce a simple solution-based method for the fabrication of highly oriented carbon nanotube (CNT) arrays to be used for thin-film transistors. We exploit the liquid-crystalline behavior of a CNT solution near the receding contact line during tilted-drop casting to produce long-range nematic-like ordering of carbon nanotube stripes, caused by the confined micropatterned geometry. We further demonstrate that the performance of thin-film transistors based on these densely packed and uniformly oriented CNT arrays is greatly improved compared to that of random CNT networks. This approach has great potential for low-cost, large-scale processing of high-performance electronic devices based on high-density oriented CNT films with record electrical characteristics such as high conductance, low resistivity, and high carrier mobility.

  14. Process development for automated solar cell and module production. Task 4: automated array assembly

    SciTech Connect

    Hagerty, J.J.

    1980-06-30

    The scope of work under this contract involves specifying a process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use. This process sequence is then critically analyzed from a technical and economic standpoint to determine the technological readiness of each process step for implementation. The process steps are ranked according to the degree of development effort required and according to their significance to the overall process. Under this contract the steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development. Economic analysis using the SAMICS system has been performed during these studies to assure that development efforts have been directed towards the ultimate goal of price reduction. Details are given. (WHK)

  15. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    NASA Astrophysics Data System (ADS)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used to fabricate a high-aspect-ratio microneedle master mold in place of conventional, time-consuming and expensive photolithography and etching processes. Flexible polydimethylsiloxane (PDMS) is crucial for detaching the PLGA, but direct CO2 laser ablation of PDMS can generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined polymethyl methacrylate (PMMA) ablation with a two-step PDMS casting process to form a PDMS female microneedle mold, eliminating the problems of direct ablation. A self-assembled monolayer of polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step of the PDMS-to-PDMS replication. The PLGA microneedle array was then successfully released by bending the second-cast PDMS mold, aided by its flexibility and hydrophobicity. The height of the polymer microneedles can range from hundreds of micrometers to millimeters; it is linked to the PMMA pattern profile and can be adjusted through the CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  16. Regional and Foreign Accent Processing in English: Can Listeners Adapt?

    ERIC Educational Resources Information Center

    Floccia, Caroline; Butler, Joseph; Goslin, Jeremy; Ellis, Lucy

    2009-01-01

    Recent data suggest that the first presentation of a foreign accent triggers a delay in word identification, followed by a subsequent adaptation. This study examines under what conditions the delay returns to baseline level. The delay was experimentally induced by the presentation of sentences spoken to listeners in a foreign or a regional…

  17. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
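
    The parallel-filter scheme can be sketched in Python with a toy scalar model; the dynamics, noise levels, and candidate parameter values below are illustrative assumptions, not the F-8 design:

    ```python
    import numpy as np

    # Toy multiple-model adaptive estimation: a bank of scalar Kalman filters,
    # one per candidate dynamics parameter 'a' in x[k+1] = a*x[k] + w, runs in
    # parallel on the same measurements. Each filter is scored by the
    # log-likelihood of its innovations, and the candidates are blended by
    # their posterior weights -- no iterative parameter search is needed.
    def mmae_estimate(y, candidates, q, r):
        x = np.zeros(len(candidates))      # per-filter state estimate
        P = np.ones(len(candidates))       # per-filter error covariance
        logL = np.zeros(len(candidates))   # per-filter log-likelihood
        for yk in y:
            for i, a in enumerate(candidates):
                xp = a * x[i]                  # time update
                Pp = a * a * P[i] + q
                S = Pp + r                     # innovation covariance
                v = yk - xp                    # innovation
                K = Pp / S                     # Kalman gain
                x[i] = xp + K * v              # measurement update
                P[i] = (1.0 - K) * Pp
                logL[i] += -0.5 * (np.log(2.0 * np.pi * S) + v * v / S)
        w = np.exp(logL - logL.max())          # posterior weights
        w /= w.sum()
        return float(np.dot(w, candidates))

    rng = np.random.default_rng(0)
    a_true, xk, y = 0.8, 1.0, []
    for _ in range(500):
        xk = a_true * xk + rng.normal(scale=0.1)   # process noise, var q = 0.01
        y.append(xk + rng.normal(scale=0.1))       # measurement noise, var r = 0.01
    est = mmae_estimate(y, candidates=[0.5, 0.8, 0.95], q=0.01, r=0.01)
    ```

    With enough data the weight concentrates on the candidate nearest the true parameter, so the blended estimate settles near 0.8.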

  18. Neural adaptation and behavioral measures of temporal processing and speech perception in cochlear implant recipients.

    PubMed

    Zhang, Fawen; Benson, Chelsea; Murphy, Dora; Boian, Melissa; Scott, Michael; Keith, Robert; Xiang, Jing; Abbas, Paul

    2013-01-01

    The objective was to determine if one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are the strongest at the beginning of the stimulus and decline following stimulus repetition (e.g., stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or speech perception. The adaptation of the electrical compound action potential (ECAP) was obtained using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. The adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically, through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant nucleus consonant (CNC) word and AzBio sentences were also tested. The results showed that both ECAP and LAEP display adaptive patterns, with a substantial across-subject variability in the amount of adaptation. No correlations between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores were found. The correlations between the degree of neural adaptation and demographic factors showed that CI users having more LAEP adaptation were likely to be those implanted at a younger age than CI users with less LAEP adaptation. The results suggested that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in the CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group compared to the normal hearing group may suggest the important role of normal adaptation pattern at the

  19. Neural Adaptation and Behavioral Measures of Temporal Processing and Speech Perception in Cochlear Implant Recipients

    PubMed Central

    Zhang, Fawen; Benson, Chelsea; Murphy, Dora; Boian, Melissa; Scott, Michael; Keith, Robert; Xiang, Jing; Abbas, Paul

    2013-01-01

    The objective was to determine if one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are the strongest at the beginning of the stimulus and decline following stimulus repetition (e.g., stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or speech perception. The adaptation of the electrical compound action potential (ECAP) was obtained using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. The adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically, through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant nucleus consonant (CNC) word and AzBio sentences were also tested. The results showed that both ECAP and LAEP display adaptive patterns, with a substantial across-subject variability in the amount of adaptation. No correlations between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores were found. The correlations between the degree of neural adaptation and demographic factors showed that CI users having more LAEP adaptation were likely to be those implanted at a younger age than CI users with less LAEP adaptation. The results suggested that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in the CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group compared to the normal hearing group may suggest the important role of normal adaptation pattern at the

  20. Negotiating uncertainty: the transitional process of adapting to life with HIV.

    PubMed

    Perrett, Stephanie E; Biley, Francis C

    2013-01-01

    Glaser's (1978) grounded-theory method was used to investigate the transitional process of adapting to life with HIV. Semistructured interviews took place with 8 male HIV-infected participants recruited from a clinic in South Wales, United Kingdom. Data analysis used open, substantive, and theoretical coding. Adapting to a life with HIV infection emerged as a process of adapting to uncertainty with "negotiating uncertainty" as a core concept. Seven subcategories represented movements between bipolar opposites labeled "anticipating hopelessness" and "regaining optimism." This work progresses the theoretical concepts of transitions, uncertainty, and adaptation in relation to the HIV experience.

  1. Scalable processing and capacity of Si microwire array anodes for Li ion batteries

    PubMed Central

    2014-01-01

    Si microwire array anodes have been prepared by an economical, microelectronics-compatible method based on macropore etching. In the present report, evidence of the scalability of the process and of the areal capacity of the anodes is presented. The anodes exhibit record areal capacities for Si-based anodes. The gravimetric capacity of longer anodes is comparable to that of shorter anodes at moderate lithiation/delithiation rates. Diffusion of lithium ions through the electrolyte deep between the wires is the factor limiting the cycling of longer wires at high rates. PACS 82.47.Aa; 82.45.Vp; 81.16.-c PMID:25177226

  2. Scalable processing and capacity of Si microwire array anodes for Li ion batteries

    NASA Astrophysics Data System (ADS)

    Quiroga-González, Enrique; Carstensen, Jürgen; Föll, Helmut

    2014-08-01

    Si microwire array anodes have been prepared by an economical, microelectronics-compatible method based on macropore etching. In the present report, evidence of the scalability of the process and of the areal capacity of the anodes is presented. The anodes exhibit record areal capacities for Si-based anodes. The gravimetric capacity of longer anodes is comparable to that of shorter anodes at moderate lithiation/delithiation rates. Diffusion of lithium ions through the electrolyte deep between the wires is the factor limiting the cycling of longer wires at high rates.

  3. Process Development for Automated Solar Cell and Module Production. Task 4: Automated Array Assembly

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A baseline sequence for the manufacture of solar cell modules was specified. Starting with silicon wafers, the process goes through damage etching, texture etching, junction formation, plasma edge etch, aluminum back surface field formation, and screen-printed metallization to produce finished solar cells. The cells were then series-connected on a ribbon and bonded into a finished glass/Tedlar module. A number of steps required additional developmental effort to verify technical and economic feasibility. These steps include texture etching, plasma edge etch, aluminum back surface field formation, array layup and interconnect, and module edge sealing and framing.

  4. Evaluation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Array

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Liddicoat, Albert; Ralston, Jesse; Pingree, Paula

    2006-01-01

    The current implementation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays (TRIGA) is equipped with the CFDP protocol and CCSDS Telemetry and Telecommand framing schemes to replace the CPU-intensive software implementation for reliable deep-space communication. We present the hardware/software co-design methodology used to accomplish high data-rate throughput. The hardware CFDP protocol stack implementation is then compared against two recent flight implementations. The results from our experiments show that TRIGA offers more than 3 orders of magnitude of throughput improvement with less than one-tenth of the power consumption.

  5. An Evaluation of Signal Processing Tools for Improving Phased Array Ultrasonic Weld Inspection

    SciTech Connect

    Ramuhalli, Pradeep; Cinson, Anthony D.; Crawford, Susan L.; Harris, Robert V.; Diaz, Aaron A.; Anderson, Michael T.

    2011-03-24

    Cast austenitic stainless steel (CASS) commonly used in U.S. nuclear power plants is a coarse-grained, elastically anisotropic material. The coarse-grained nature of CASS makes ultrasonic inspection of in-service components difficult. Recently, low-frequency phased array ultrasound has emerged as a candidate for the CASS piping weld inspection. However, issues such as low signal-to-noise ratio and difficulty in discriminating between flaw and non-flaw signals remain. This paper discusses the evaluation of a number of signal processing algorithms for improving flaw detection in CASS materials. The full paper provides details of the algorithms being evaluated, along with preliminary results.

  6. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    PubMed Central

    Liu, Yi-Hung; Chen, Yan-Jen

    2011-01-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves generalization performance of SVDD. The improvement has been shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms. PMID:22016625
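
    The one-class "data description" idea behind SVDD can be illustrated with a minimal sketch: learn a boundary around defect-free feature vectors only, then flag anything outside it. The centroid-distance rule and synthetic features below are simplified stand-ins for the paper's quasiconformal kernel SVDD:

    ```python
    import numpy as np

    # Fit a hypersphere around normal (defect-free) feature vectors; anything
    # outside the learned radius is classified as a defect. This is the
    # simplest possible "data description" -- SVDD does the same in a kernel
    # feature space with a soft boundary.
    def fit_sphere(X, quantile=0.95):
        center = X.mean(axis=0)
        d = np.linalg.norm(X - center, axis=1)
        return center, np.quantile(d, quantile)   # radius covers 95% of training data

    def predict(X, center, radius):
        d = np.linalg.norm(X - center, axis=1)
        return np.where(d <= radius, 1, -1)       # +1 = normal, -1 = defect

    rng = np.random.default_rng(1)
    normal = rng.normal(0.0, 1.0, size=(500, 8))  # simulated defect-free patches
    defect = rng.normal(6.0, 1.0, size=(20, 8))   # simulated defective patches

    center, radius = fit_sphere(normal)
    detect_rate = float((predict(defect, center, radius) == -1).mean())
    ```

    Training uses only normal samples, which matches the one-class setting of LCD inspection where defect examples are rare.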

  7. The Role of Water Vapor and Dissociative Recombination Processes in Solar Array Arc Initiation

    NASA Technical Reports Server (NTRS)

    Galofar, J.; Vayner, B.; Degroot, W.; Ferguson, D.

    2002-01-01

    Experimental plasma arc investigations involving the onset of arc initiation for a negatively biased solar array immersed in a low-density plasma have been performed. Previous studies of the arc initiation process have shown that the most probable arcing sites tend to occur at the triple junction of conductor, dielectric and plasma. More recently, our own experiments have led us to believe that water vapor is the main causal factor behind the arc initiation process. Assuming the main component of the expelled plasma cloud by weight is water, the fastest process available is dissociative recombination (H2O(+) + e(-) (goes to) H* + OH*). A model that agrees with the observed dependency of arc current pulse width on the square root of capacitance is presented. A 400 MHz digital storage scope and a current probe were used to detect arcs at the triple junction of a solar array. Simultaneous measurements of the arc trigger pulse, the gate pulse, the arc current and the arc voltage were then obtained. Finally, a large number of measurements of individual arc spectra were obtained in very short time intervals, ranging from 10 to 30 microseconds, using a 1/4 m spectrometer coupled with a gated intensified CCD. The spectrometer was systematically tuned to obtain optical arc spectra over the entire wavelength range of 260 to 680 nanometers. All relevant atomic lines and molecular bands were then identified.

  8. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
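
    The core ADAPT step, generating a Granule description for one data file, can be sketched as follows; the element names and identifiers below are simplified placeholders, not the full SPASE schema:

    ```python
    import xml.etree.ElementTree as ET

    # Build a minimal SPASE-style Granule record for a single data file:
    # assign it a resource identifier, point at the parent data-product
    # description, and list its access URL.
    def granule_xml(resource_id, parent_id, url):
        spase = ET.Element("Spase")
        granule = ET.SubElement(spase, "Granule")
        ET.SubElement(granule, "ResourceID").text = resource_id
        ET.SubElement(granule, "ParentID").text = parent_id
        source = ET.SubElement(granule, "Source")
        ET.SubElement(source, "URL").text = url
        return ET.tostring(spase, encoding="unicode")

    record = granule_xml(
        "spase://Example/Granule/SampleData/20150101",   # hypothetical IDs
        "spase://Example/NumericalData/SampleData",
        "https://example.org/data/sampledata_20150101.cdf",
    )
    ```

    In the workflow described above, a routine like this would be run over the nightly file listings, emitting one Granule record per new or modified CDF file.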

  9. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    NASA Astrophysics Data System (ADS)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  10. An FPGA-based High Speed Parallel Signal Processing System for Adaptive Optics Testbed

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, Y.; Yang, Y.

    In this paper, a state-of-the-art FPGA (Field Programmable Gate Array) based high-speed parallel signal processing system (SPS) for an adaptive optics (AO) testbed with a 1 kHz wavefront error (WFE) correction frequency is reported. The AO system consists of a Shack-Hartmann sensor (SHS), a deformable mirror (DM), a tip-tilt sensor (TTS), a tip-tilt mirror (TTM), and an FPGA-based high-performance SPS to correct wavefront aberrations. The SHS comprises 400 subapertures and the DM 277 actuators in a Fried geometry, requiring an SPS with high-speed parallel computing capability. In this study, the target WFE correction rate is 1 kHz; the system therefore requires massive parallel computing capability as well as strict hard real-time constraints on sensor measurements, on the matrix computation latency of the correction algorithms, and on the output of control signals to the actuators. To meet these requirements, an FPGA-based real-time SPS with parallel computing capability is proposed. In particular, the SPS comprises a National Instruments (NI) real-time computer and five FPGA boards based on the state-of-the-art Xilinx Kintex-7 FPGA. Programming is done in NI's LabVIEW environment, providing flexibility when applying different algorithms for WFE correction; it also provides a faster programming and debugging environment than conventional approaches. One of the five FPGAs is assigned to measure the TTS and calculate control signals for the TTM, while the remaining four receive the SHS signal, calculate slopes for each subaperture, and compute correction signals for the DM. With these parallel processing capabilities of the SPS, an overall closed-loop WFE correction rate of 1 kHz has been achieved. System requirements, architecture, and implementation issues are described, and experimental results are given.
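
    The per-cycle computation such an SPS must finish within 1 ms is essentially a matrix-vector wavefront reconstruction followed by an integrator update of the DM commands. A minimal sketch, with a random stand-in for the real influence matrix (the dimensions follow the abstract; the gain and cycle count are illustrative):

    ```python
    import numpy as np

    # Closed-loop AO sketch: each cycle, the measured Shack-Hartmann slopes of
    # the residual wavefront are multiplied by a precomputed reconstructor R
    # and accumulated into DM commands with an integrator gain.
    rng = np.random.default_rng(0)
    n_slopes, n_act = 800, 277            # 400 subapertures -> 800 x/y slopes
    A = rng.normal(size=(n_slopes, n_act))  # actuator -> slope response (stand-in)
    R = np.linalg.pinv(A)                   # least-squares reconstructor
    gain = 0.5                              # integrator gain

    aberration = rng.normal(size=n_act)     # unknown wavefront, actuator space
    c = np.zeros(n_act)                     # DM command
    for _ in range(20):                     # 20 control cycles (20 ms at 1 kHz)
        slopes = A @ (aberration - c)       # residual slopes seen by the SHS
        c = c + gain * (R @ slopes)         # integrator update

    residual = np.linalg.norm(aberration - c) / np.linalg.norm(aberration)
    ```

    Because the error shrinks by the factor (1 − gain) each cycle, the residual converges geometrically; the FPGA partitioning in the paper exists to fit this matrix-vector work into the 1 ms budget.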

  11. Fabrication of microlens arrays on a glass substrate by roll-to-roll process with PDMS mold

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Nying; Su, Guo-Dung J.

    2009-08-01

    This paper presents a roll-to-roll method to fabricate microlens arrays on a glass substrate by using a cost-effective PDMS (Polydimethylsiloxane) mold. We fabricated microlens arrays mold, which was made by photoresist(AZ4620), on the silicon substrate by thermal reflow process, and transferred the pattern to PDMS film. Roll-to-roll system is a standard printing process whose roller is made of acrylic cylinder surrounded with the PDMS mold. UV resin was chosen to be the material to make microlens in rolling process with UV light curing. We investigated the quality of microlens arrays by changing the parameters, such as embossing pressure and rolling speed, to ensure good quality of microlens arrays.

  12. Context-Aware Design for Process Flexibility and Adaptation

    ERIC Educational Resources Information Center

    Yao, Wen

    2012-01-01

    Today's organizations face continuous and unprecedented changes in their business environment. Traditional process design tools tend to be inflexible and can only support rigidly defined processes (e.g., order processing in the supply chain). This considerably restricts their real-world applications value, especially in the dynamic and…

  13. Adaptation and habitat selection in the eco-evolutionary process

    PubMed Central

    Morris, Douglas W.

    2011-01-01

    The struggle for existence occurs through the vital rates of population growth. This basic fact demonstrates the tight connection between ecology and evolution that defines the emerging field of eco-evolutionary dynamics. An effective synthesis of the interdependencies between ecology and evolution is grounded in six principles. The mechanics of evolution specifies the origin and rules governing traits and evolutionary strategies. Traits and evolutionary strategies achieve their selective value through their functional relationships with fitness. Function depends on the underlying structure of variation and the temporal, spatial and organizational scales of evolution. An understanding of how changes in traits and strategies occur requires conjoining ecological and evolutionary dynamics. Adaptation merges these five pillars to achieve a comprehensive understanding of ecological and evolutionary change. I demonstrate the value of this world-view with reference to the theory and practice of habitat selection. The theory allows us to assess evolutionarily stable strategies and states of habitat selection, and to draw the adaptive landscapes for habitat-selecting species. The landscapes can then be used to forecast future evolution under a variety of climate change and other scenarios. PMID:21613295

  14. Adaptation as a Political Process: Adjusting to Drought and Conflict in Kenya's Drylands

    NASA Astrophysics Data System (ADS)

    Eriksen, Siri; Lind, Jeremy

    2009-05-01

    In this article, we argue that people’s adjustments to multiple shocks and changes, such as conflict and drought, are intrinsically political processes that have uneven outcomes. Strengthening local adaptive capacity is a critical component of adapting to climate change. Based on fieldwork in two areas in Kenya, we investigate how people seek to access livelihood adjustment options and promote particular adaptation interests through forming social relations and political alliances to influence collective decision-making. First, we find that, in the face of drought and conflict, relations are formed among individuals, politicians, customary institutions, and government administration aimed at retaining or strengthening power bases in addition to securing material means of survival. Second, national economic and political structures and processes affect local adaptive capacity in fundamental ways, such as through the unequal allocation of resources across regions, development policy biased against pastoralism, and competition for elected political positions. Third, conflict is part and parcel of the adaptation process, not just an external factor inhibiting local adaptation strategies. Fourth, there are relative winners and losers of adaptation, but whether or not local adjustments to drought and conflict compound existing inequalities depends on power relations at multiple geographic scales that shape how conflicting interests are negotiated locally. Climate change adaptation policies are unlikely to be successful or minimize inequity unless the political dimensions of local adaptation are considered; however, existing power structures and conflicts of interests represent political obstacles to developing such policies.

  15. Adaptation as a political process: adjusting to drought and conflict in Kenya's drylands.

    PubMed

    Eriksen, Siri; Lind, Jeremy

    2009-05-01

    In this article, we argue that people's adjustments to multiple shocks and changes, such as conflict and drought, are intrinsically political processes that have uneven outcomes. Strengthening local adaptive capacity is a critical component of adapting to climate change. Based on fieldwork in two areas in Kenya, we investigate how people seek to access livelihood adjustment options and promote particular adaptation interests through forming social relations and political alliances to influence collective decision-making. First, we find that, in the face of drought and conflict, relations are formed among individuals, politicians, customary institutions, and government administration aimed at retaining or strengthening power bases in addition to securing material means of survival. Second, national economic and political structures and processes affect local adaptive capacity in fundamental ways, such as through the unequal allocation of resources across regions, development policy biased against pastoralism, and competition for elected political positions. Third, conflict is part and parcel of the adaptation process, not just an external factor inhibiting local adaptation strategies. Fourth, there are relative winners and losers of adaptation, but whether or not local adjustments to drought and conflict compound existing inequalities depends on power relations at multiple geographic scales that shape how conflicting interests are negotiated locally. Climate change adaptation policies are unlikely to be successful or minimize inequity unless the political dimensions of local adaptation are considered; however, existing power structures and conflicts of interests represent political obstacles to developing such policies.

  16. Sub-threshold signal processing in arrays of non-identical nanostructures.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafé, Salvador

    2011-10-28

    Weak input signals are routinely processed by molecular-scaled biological networks composed of non-identical units that operate correctly in a noisy environment. In order to show that artificial nanostructures can mimic this behavior, we explore theoretically noise-assisted signal processing in arrays of metallic nanoparticles functionalized with organic ligands that act as tunneling junctions connecting the nanoparticle to the external electrodes. The electronic transfer through the nanostructure is based on the Coulomb blockade and tunneling effects. Because of the fabrication uncertainties, these nanostructures are expected to show a high variability in their physical characteristics and a diversity-induced static noise should be considered together with the dynamic noise caused by thermal fluctuations. This static noise originates from the hardware variability and produces fluctuations in the threshold potential of the individual nanoparticles arranged in a parallel array. The correlation between different input (potential) and output (current) signals in the array is analyzed as a function of temperature, applied voltage, and the variability in the electrical properties of the nanostructures. Extensive kinetic Monte Carlo simulations with nanostructures whose basic properties have been demonstrated experimentally show that variability can enhance the correlation, even for the case of weak signals and high variability, provided that the signal is processed by a sufficiently high number of nanostructures. Moderate redundancy permits us not only to minimize the adverse effects of the hardware variability but also to take advantage of the nanoparticles' threshold fluctuations to increase the detection range at low temperatures. This conclusion holds for the average behavior of a moderately large statistical ensemble of non-identical nanostructures processing different types of input signals and suggests that variability could be beneficial for signal processing
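
    The noise-assisted mechanism can be illustrated with generic threshold units standing in for the paper's nanoparticle junctions: a sub-threshold input plus per-unit dynamic noise and threshold spread ("static noise") lets the pooled binary output track the signal, and pooling over more units raises the input-output correlation. All amplitudes and spreads below are illustrative assumptions:

    ```python
    import numpy as np

    # N non-identical threshold units receive the same weak (sub-threshold)
    # input; each fires when input + dynamic noise exceeds its own threshold.
    # The pooled firing fraction is correlated with the input signal.
    def array_correlation(n_units, spread, seed=0):
        rng = np.random.default_rng(seed)
        t = np.linspace(0.0, 4.0 * np.pi, 2000)
        signal = 0.3 * np.sin(t)                           # well below threshold 1.0
        thresholds = 1.0 + spread * rng.normal(size=n_units)   # static diversity
        noise = 0.5 * rng.normal(size=(n_units, t.size))       # dynamic (thermal) noise
        fired = (signal + noise) > thresholds[:, None]
        output = fired.mean(axis=0)                        # pooled array response
        return float(np.corrcoef(signal, output)[0, 1])

    r_few = array_correlation(n_units=5, spread=0.3)
    r_many = array_correlation(n_units=500, spread=0.3)
    ```

    With 5 units the pooled output is dominated by shot noise; with 500 the averaging suppresses it and the correlation approaches 1, mirroring the abstract's point that redundancy makes threshold variability tolerable or even useful.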

  17. Phase velocity tomography of surface waves using ambient noise cross correlation and array processing

    NASA Astrophysics Data System (ADS)

    Boué, Pierre; Roux, Philippe; Campillo, Michel; Briand, Xavier

    2014-01-01

    Continuous recordings of ambient seismic noise across large seismic arrays allow a new type of processing using the cross-correlation technique on broadband data. We propose to apply double beamforming (DBF) to cross correlations to extract a particular wave component of the reconstructed signals. We focus here on the extraction of the surface waves to measure phase velocity variations with great accuracy. DBF acts as a spatial filter between two distant subarrays after cross correlation of the wavefield between each single receiver pair. During the DBF process, horizontal slowness and azimuth are used to select the wavefront on both subarray sides. DBF increases the signal-to-noise ratio, which improves the extraction of the dispersive wave packets. This combination of cross correlation and DBF is used on the Transportable Array (USArray), for the central U.S. region. A standard model of surface wave propagation is constructed from a combination of the DBF and cross correlations at different offsets and for different frequency bands. The perturbation (phase shift) between each beam and the standard model is inverted. High-resolution maps of the phase velocity of Rayleigh and Love waves are then constructed. Finally, the addition of azimuthal information provided by DBF is discussed, to construct curved rays that replace the classical great-circle path assumption.
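
Double beamforming builds on the basic delay-and-sum operation: shift each trace by the plane-wave delay predicted by a trial horizontal slowness and stack. The sketch below is a minimal single-array illustration with a synthetic line array and a Ricker wavelet; all geometry and values are invented, and the actual DBF processing applies this on two subarray sides of cross-correlation gathers:

```python
import math

def beam_power(traces, positions, u, dt):
    """Delay-and-sum beam power for a trial horizontal slowness u (s/m).

    A plane wave reaches the sensor at offset x with delay u*x, so
    reading trace k at sample i + round(u*x/dt) aligns the arrivals
    before stacking.
    """
    n = len(traces[0])
    stack = [0.0] * n
    for trace, x in zip(traces, positions):
        shift = int(round(u * x / dt))
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                stack[i] += trace[j]
    return sum(s * s for s in stack)

def wavelet(t, f0=5.0):
    """Ricker wavelet with central frequency f0 (Hz)."""
    a = (math.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)

dt = 0.01                                      # sample interval, s
positions = [0.0, 200.0, 400.0, 600.0, 800.0]  # sensor offsets, m
u_true = 0.0005                                # slowness, s/m (2 km/s)

# Synthetic records: the wavelet crosses the array at slowness u_true
traces = [[wavelet(i * dt - 1.0 - u_true * x) for i in range(400)]
          for x in positions]

# Grid search over trial slownesses: beam power peaks at u_true
candidates = [k * 1e-4 for k in range(16)]
u_best = max(candidates, key=lambda u: beam_power(traces, positions, u, dt))
```

The recovered `u_best` matches the true slowness; measuring the phase of the aligned stack against a reference model is then the basis for the phase-velocity perturbations inverted in the study.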

  18. Concurrent processing adaptation of aeroelastic analysis of propfans

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1990-01-01

    Discussed here is a study involving the adaptation of an advanced aeroelastic analysis program to run concurrently on a shared memory multiple processor computer. The program uses a three-dimensional compressible unsteady aerodynamic model and blade normal modes to calculate aeroelastic stability and response of propfan blades. The identification of the computational parallelism within the sequential code and the scheduling of the concurrent subtasks to minimize processor idle time are discussed. Processor idle time in the calculation of the unsteady aerodynamic coefficients was reduced by the simple strategy of appropriately ordering the computations. Speedup and efficiency results are presented for the calculation of the matched flutter point of an experimental propfan model. The results show that efficiencies above 70 percent can be obtained using the present implementation with 7 processors. The parallel computational strategy described here is also applicable to other aeroelastic analysis procedures based on panel methods.
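
The speedup and efficiency figures quoted above follow the standard definitions S = T1/Tp and E = S/p. A small sketch with hypothetical timings (the 4% serial fraction is an assumed value for illustration, not taken from the paper) shows how a 7-processor run can stay above 70% efficiency under Amdahl's law:

```python
def speedup(t_serial, t_parallel):
    """S = T1 / Tp: how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """E = S / p: fraction of ideal linear speedup achieved."""
    return speedup(t_serial, t_parallel) / p

def amdahl_time(t1, f, p):
    """Amdahl's law: best time on p processors with serial fraction f."""
    return t1 * (f + (1.0 - f) / p)

# Hypothetical numbers: a 4% serial fraction on 7 processors
t1 = 100.0
tp = amdahl_time(t1, 0.04, 7)
s = speedup(t1, tp)          # about 5.6x
e = efficiency(t1, tp, 7)    # about 0.81, i.e. above 70%
```

Reordering computations to fill processor idle time, as the study did for the unsteady aerodynamic coefficients, effectively shrinks the serial fraction f and pushes E upward.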

  19. Adaptive information processing in auditory cortex. Annual report, 1 June 1987-31 May 1988

    SciTech Connect

    Weinberger, N.M.

    1988-05-31

    The fact that learning induces frequency-specific modification of receptive fields in auditory cortex implies that the functional organization of auditory (and perhaps other sensory) cortex comprises an adaptively-constituted information base. This project initiates the first systematic investigation of adaptive information processing in cerebral cortex. A major goal is to determine the circumstances under which adaptive information processing is induced by experience. This project also addresses central hypotheses about rules that govern adaptive information processing, at three levels of spatial scale: (a) parallel processing in different auditory fields; (b) modular processing in different cortical laminae within fields; (c) local processing in different neurons within the same locus within laminae. The author emphasized determining the learning circumstances under which adaptive information processing is invoked by the brain. Current studies reveal that the frequency receptive fields of neurons in the auditory cortex, and in the physiologically plastic magnocellular medial geniculate nucleus, develop frequency-specific modification such that maximal shifts in tuning are at or adjacent to the signal frequency. Further, this adaptive re-tuning of neurons develops rapidly during habituation, classical conditioning, and instrumental avoidance conditioning. The generality of re-tuning has established that adaptive information processing during learning represents a general brain strategy for the acquisition and subsequent processing of information.

  20. Coupled process of plastics pyrolysis and chemical vapor deposition for controllable synthesis of vertically aligned carbon nanotube arrays

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhang, Qiang; Luo, Guohua; Huang, Jia-Qi; Zhao, Meng-Qiang; Wei, Fei

    2010-08-01

    Efficient conversion of waste plastics into advanced materials offers conspicuous environmental, social, and economic benefits. A coupled process of plastic pyrolysis and chemical vapor deposition for vertically aligned carbon nanotube (CNT) array growth was proposed. Various kinds of plastics, such as polypropylene, polyethylene, and polyvinyl chloride, were used as carbon sources for the controllable growth of CNT arrays. The relationship between the length of CNT arrays and the growth time was investigated. It was found that the length of aligned CNTs increased with prolonged growth time. CNT arrays with a length of 500 μm were obtained for a 40-min growth and the average growth rate was estimated to be 12 μm/min. The diameter of CNTs in the arrays can be modulated by controlling the growth temperature and the feeding rate of ferrocene. In addition, substrates with larger specific surface area, such as ceramic spheres, quartz fibers, and quartz particles, were adopted to support the growth of CNT arrays. These results provide strong evidence for the feasibility of converting waste plastics into CNT arrays via this sustainable materials processing route.

  1. Correlation of lattice defects and thermal processing in the crystallization of titania nanotube arrays

    NASA Astrophysics Data System (ADS)

    Hosseinpour, Pegah M.; Yung, Daniel; Panaitescu, Eugen; Heiman, Don; Menon, Latika; Budil, David; Lewis, Laura H.

    2014-12-01

    Titania nanotubes have the potential to be employed in a wide range of energy-related applications such as solar energy-harvesting devices and hydrogen production. As the functionality of titania nanostructures is critically affected by their morphology and crystallinity, it is necessary to understand and control these factors in order to engineer useful materials for green applications. In this study, electrochemically-synthesized titania nanotube arrays were thermally processed in inert and reducing environments to isolate the role of post-synthesis processing conditions on the crystallization behavior, electronic structure and morphology development in titania nanotubes, correlated with the nanotube functionality. Structural and calorimetric studies revealed that as-synthesized amorphous nanotubes crystallize to form the anatase structure in a three-stage process that is facilitated by the creation of structural defects. It is concluded that processing in a reducing gas atmosphere versus in an inert environment provides a larger unit cell volume and a higher concentration of Ti3+ associated with oxygen vacancies, thereby reducing the activation energy of crystallization. Further, post-synthesis annealing in either reducing or inert atmospheres produces pronounced morphological changes, confirming that the nanotube arrays thermally transform into a porous morphology consisting of a fragmented tubular architecture surrounded by a network of connected nanoparticles. This study links explicit data concerning morphology, crystallization and defects, and shows that the annealing gas environment determines the details of the crystal structure, the electronic structure and the morphology of titania nanotubes. These factors, in turn, impact the charge transport and consequently the functionality of these nanotubes as photocatalysts.

  2. Adaptation to Leftward-shifting Prisms Enhances Local Processing in Healthy Individuals

    PubMed Central

    Reed, Scott A.; Dassonville, Paul

    2014-01-01

    In healthy individuals, adaptation to left-shifting prisms has been shown to simulate the symptoms of hemispatial neglect, including a reduction in global processing that approximates the local bias observed in neglect patients. The current study tested whether leftward prism adaptation can more specifically enhance local processing abilities. In three experiments, the impact of local and global processing was assessed through tasks that measure susceptibility to illusions that are known to be driven by local or global contextual effects. Susceptibility to the rod-and-frame illusion – an illusion disproportionately driven by both local and global effects depending on frame size – was measured before and after adaptation to left- and right-shifting prisms. A significant increase in rod-and-frame susceptibility was found for the left-shifting prism group, suggesting that adaptation caused an increase in local processing effects. The results of a second experiment confirmed that leftward prism adaptation enhances local processing, as assessed with susceptibility to the simultaneous-tilt illusion. A final experiment employed a more specific measure of the global effect typically associated with the rod-and-frame illusion, and found that although the global effect was somewhat diminished after leftward prism adaptation, the trend failed to reach significance (p = .078). Rightward prism adaptation had no significant effects on performance in any of the experiments. Combined, these findings indicate that leftward prism adaptation in healthy individuals can simulate the local processing bias of neglect patients primarily through an increased sensitivity to local visual cues, and confirm that prism adaptation not only modulates lateral shifts of attention, but also prompts shifts from one level of processing to another. PMID:24560913

  3. Adaptation to leftward-shifting prisms enhances local processing in healthy individuals.

    PubMed

    Reed, Scott A; Dassonville, Paul

    2014-04-01

    In healthy individuals, adaptation to left-shifting prisms has been shown to simulate the symptoms of hemispatial neglect, including a reduction in global processing that approximates the local bias observed in neglect patients. The current study tested whether leftward prism adaptation can more specifically enhance local processing abilities. In three experiments, the impact of local and global processing was assessed through tasks that measure susceptibility to illusions that are known to be driven by local or global contextual effects. Susceptibility to the rod-and-frame illusion - an illusion disproportionately driven by both local and global effects depending on frame size - was measured before and after adaptation to left- and right-shifting prisms. A significant increase in rod-and-frame susceptibility was found for the left-shifting prism group, suggesting that adaptation caused an increase in local processing effects. The results of a second experiment confirmed that leftward prism adaptation enhances local processing, as assessed with susceptibility to the simultaneous-tilt illusion. A final experiment employed a more specific measure of the global effect typically associated with the rod-and-frame illusion, and found that although the global effect was somewhat diminished after leftward prism adaptation, the trend failed to reach significance (p=.078). Rightward prism adaptation had no significant effects on performance in any of the experiments. Combined, these findings indicate that leftward prism adaptation in healthy individuals can simulate the local processing bias of neglect patients primarily through an increased sensitivity to local visual cues, and confirm that prism adaptation not only modulates lateral shifts of attention, but also prompts shifts from one level of processing to another.

  4. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  5. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood, or Least Squares algorithms. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
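
Of the three reconstruction algorithms mentioned, Center-of-Gravity is the simplest: the event position is estimated as the signal-weighted mean of the sensor positions, and the summed signal serves as a crude energy estimate. A minimal sketch on a hypothetical 2 × 2 PMT layout (not a detector geometry from ANTS):

```python
def center_of_gravity(signals, positions):
    """Event (x, y) as the signal-weighted mean of sensor centers;
    the summed signal doubles as a rough energy estimate."""
    total = sum(signals)
    x = sum(s * p[0] for s, p in zip(signals, positions)) / total
    y = sum(s * p[1] for s, p in zip(signals, positions)) / total
    return x, y, total

# 2 x 2 PMT grid with unit spacing; a scintillation event nearer PMT 0
positions = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
signals = [3.0, 1.0, 1.0, 1.0]
x, y, energy = center_of_gravity(signals, positions)   # (1/3, 1/3, 6.0)
```

Center-of-Gravity is fast but biased toward the array center; that bias is why the Maximum Likelihood and Least Squares options, fed by the reconstructed light response functions, exist alongside it.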

  6. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    SciTech Connect

    Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.

  7. Local adaptation in Trinidadian guppies alters ecosystem processes.

    PubMed

    Bassar, Ronald D; Marshall, Michael C; López-Sepulcre, Andrés; Zandonà, Eugenia; Auer, Sonya K; Travis, Joseph; Pringle, Catherine M; Flecker, Alexander S; Thomas, Steven A; Fraser, Douglas F; Reznick, David N

    2010-02-23

    Theory suggests evolutionary change can significantly influence and act in tandem with ecological forces via ecological-evolutionary feedbacks. This theory assumes that significant evolutionary change occurs over ecologically relevant timescales and that phenotypes have differential effects on the environment. Here we test the hypothesis that local adaptation causes ecosystem structure and function to diverge. We demonstrate that populations of Trinidadian guppies (Poecilia reticulata), characterized by differences in phenotypic and population-level traits, differ in their impact on ecosystem properties. We report results from a replicated, common garden mesocosm experiment and show that differences between guppy phenotypes result in the divergence of ecosystem structure (algal, invertebrate, and detrital standing stocks) and function (gross primary productivity, leaf decomposition rates, and nutrient flux). These phenotypic effects are further modified by effects of guppy density. We evaluated the generality of these effects by replicating the experiment using guppies derived from two independent origins of the phenotype. Finally, we tested the ability of multiple guppy traits to explain observed differences in the mesocosms. Our findings demonstrate that evolution can significantly affect both ecosystem structure and function. The ecosystem differences reported here are consistent with patterns observed across natural streams and argue that guppies play a significant role in shaping these ecosystems. PMID:20133670

  8. Improving GPR Surveys Productivity by Array Technology and Fully Automated Processing

    NASA Astrophysics Data System (ADS)

    Morello, Marco; Ercoli, Emanuele; Mazzucchelli, Paolo; Cottino, Edoardo

    2016-04-01

    The realization of network infrastructures with lower environmental impact and the tendency to use digging technologies less invasive in terms of time and space of road occupation and restoration play a key-role in the development of communication networks. However, pre-existing buried utilities must be detected and located in the subsurface, to exploit the high productivity of modern digging apparatus. According to SUE quality level B+ both position and depth of subsurface utilities must be accurately estimated, demanding for 3D GPR surveys. In fact, the advantages of 3D GPR acquisitions (obtained either by multiple 2D recordings or by an antenna array) versus 2D acquisitions are well-known. Nonetheless, the amount of acquired data for such 3D acquisitions does not usually allow to complete processing and interpretation directly in field and in real-time, thus limiting the overall efficiency of the GPR acquisition. As an example, the "low impact mini-trench "technique (addressed in ITU - International Telecommunication Union - L.83 recommendation) requires that non-destructive mapping of buried services enhances its productivity to match the improvements of new digging equipment. Nowadays multi-antenna and multi-pass GPR acquisitions demand for new processing techniques that can obtain high quality subsurface images, taking full advantage of 3D data: the development of a fully automated and real-time 3D GPR processing system plays a key-role in overall optical network deployment profitability. Furthermore, currently available computing power suggests the feasibility of processing schemes that incorporate better focusing algorithms. A novel processing scheme, whose goal is the automated processing and detection of buried targets that can be applied in real-time to 3D GPR array systems, has been developed and fruitfully tested with two different GPR arrays (16 antennas, 900 MHz central frequency, and 34 antennas, 600 MHz central frequency). 

  9. Adaptive Memory: The Evolutionary Significance of Survival Processing.

    PubMed

    Nairne, James S; Pandeirada, Josefa N S

    2016-07-01

    A few seconds of survival processing, during which people assess the relevance of information to a survival situation, produces particularly good retention. One interpretation of this benefit is that our memory systems are optimized to process and retain fitness-relevant information. Such a "tuning" may exist, in part, because our memory systems were shaped by natural selection, using a fitness-based criterion. However, recent research suggests that traditional mnemonic processes, such as elaborative processing, may play an important role in producing the empirical benefit. Boundary conditions have been demonstrated as well, leading some to dismiss evolutionary interpretations of the phenomenon. In this article, we discuss the current state of the evolutionary account and provide a general framework for evaluating evolutionary and purportedly nonevolutionary interpretations of mnemonic phenomena. We suggest that survival processing effects are best viewed within the context of a general survival optimization system, designed by nature to help organisms deal with survival challenges. An important component of survival optimization is the ability to simulate activities that help to prevent or escape from future threats, which, in turn, depends in an important way on accurate retrospective remembering of survival-relevant information. PMID:27474137

  10. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
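
The generalized likelihood ratio statistic for a rate-change template can be illustrated in its simplest form: one constant Poisson rate versus two constant rates meeting at a candidate change point. This is a hedged sketch with made-up counts, not the paper's basis-function templates or its dynamic-programming search:

```python
import math

def poisson_loglik(counts, widths, rate):
    """Log-likelihood of bin counts under a constant Poisson rate
    (dropping the data-only log(n!) terms, which cancel in ratios)."""
    return sum(n * math.log(rate) - rate * w for n, w in zip(counts, widths))

def glr_two_segment(counts, widths, split):
    """GLR: one constant rate vs. two rates changing at `split`.
    Assumes both segments contain at least one event (rates > 0)."""
    def mle(c, w):
        return sum(c) / sum(w)
    whole = poisson_loglik(counts, widths, mle(counts, widths))
    left = poisson_loglik(counts[:split], widths[:split],
                          mle(counts[:split], widths[:split]))
    right = poisson_loglik(counts[split:], widths[split:],
                           mle(counts[split:], widths[split:]))
    return 2.0 * (left + right - whole)

# A clear rate change: ~2 events/bin for 10 bins, then ~8 events/bin
counts = [2, 3, 1, 2, 2, 3, 2, 1, 2, 2, 8, 7, 9, 8, 8, 7, 9, 8, 8, 8]
widths = [1.0] * 20
stat = glr_two_segment(counts, widths, 10)  # large => reject constant rate
```

Maximizing such a statistic over candidate split points, and over richer template families, is what the paper's multiscale dynamic-programming algorithm does efficiently.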

  11. The Adapted Dance Process: Planning, Partnering, and Performing

    ERIC Educational Resources Information Center

    Block, Betty A.; Johnson, Peggy V.

    2011-01-01

    This article contains specific planning, partnering, and performing techniques for fully integrating dancers with special needs into a dance pedagogy program. Each aspect is discussed within the context of the domains of learning. Fundamental partnering strategies are related to each domain as part of the integration process. The authors recommend…

  12. Computer simulation program is adaptable to industrial processes

    NASA Technical Reports Server (NTRS)

    Schultz, F. E.

    1966-01-01

    The Reaction kinetics ablation program /REKAP/, developed to simulate ablation of various materials, provides mathematical formulations for computer programs which can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.

  13. Effects of Crowding and Attention on High-Levels of Motion Processing and Motion Adaptation

    PubMed Central

    Pavan, Andrea; Greenlee, Mark W.

    2015-01-01

    The motion after-effect (MAE) persists in crowding conditions, i.e., when the adaptation direction cannot be reliably perceived. The MAE originating from complex moving patterns spreads into non-adapted sectors of a multi-sector adapting display (i.e., phantom MAE). In the present study we used global rotating patterns to measure the strength of the conventional and phantom MAEs in crowded and non-crowded conditions, and when attention was directed to the adapting stimulus and when it was diverted away from the adapting stimulus. The results show that: (i) the phantom MAE is weaker than the conventional MAE, for both non-crowded and crowded conditions, and when attention was focused on the adapting stimulus and when it was diverted from it, (ii) conventional and phantom MAEs in the crowded condition are weaker than in the non-crowded condition. Analysis conducted to assess the effect of crowding on high-level of motion adaptation suggests that crowding is likely to affect the awareness of the adapting stimulus rather than degrading its sensory representation, (iii) for high-level of motion processing the attentional manipulation does not affect the strength of either conventional or phantom MAEs, neither in the non-crowded nor in the crowded conditions. These results suggest that high-level MAEs do not depend on attention and that at high-level of motion adaptation the effects of crowding are not modulated by attention. PMID:25615577

  14. Effects of crowding and attention on high-levels of motion processing and motion adaptation.

    PubMed

    Pavan, Andrea; Greenlee, Mark W

    2015-01-01

    The motion after-effect (MAE) persists in crowding conditions, i.e., when the adaptation direction cannot be reliably perceived. The MAE originating from complex moving patterns spreads into non-adapted sectors of a multi-sector adapting display (i.e., phantom MAE). In the present study we used global rotating patterns to measure the strength of the conventional and phantom MAEs in crowded and non-crowded conditions, and when attention was directed to the adapting stimulus and when it was diverted away from the adapting stimulus. The results show that: (i) the phantom MAE is weaker than the conventional MAE, for both non-crowded and crowded conditions, and when attention was focused on the adapting stimulus and when it was diverted from it, (ii) conventional and phantom MAEs in the crowded condition are weaker than in the non-crowded condition. Analysis conducted to assess the effect of crowding on high-level of motion adaptation suggests that crowding is likely to affect the awareness of the adapting stimulus rather than degrading its sensory representation, (iii) for high-level of motion processing the attentional manipulation does not affect the strength of either conventional or phantom MAEs, neither in the non-crowded nor in the crowded conditions. These results suggest that high-level MAEs do not depend on attention and that at high-level of motion adaptation the effects of crowding are not modulated by attention.

  15. Statistical Analysis of the Performance of MDL Enumeration for Multiple-Missed Detection in Array Processing

    PubMed Central

    Du, Fei; Li, Yibo; Jin, Shijiu

    2015-01-01

    An accurate performance analysis on the MDL criterion for source enumeration in array processing is presented in this paper. The enumeration results of MDL can be predicted precisely by the proposed procedure via the statistical analysis of the sample eigenvalues, whose distributive properties are investigated with the consideration of their interactions. A novel approach is also developed for the performance evaluation when the source number is underestimated by a number greater than one, which is denoted as “multiple-missed detection”, and the probability of a specific underestimated source number can be estimated by ratio distribution analysis. Simulation results are included to demonstrate the superiority of the presented method over available results and confirm the ability of the proposed approach to perform multiple-missed detection analysis. PMID:26295232
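
The MDL criterion analyzed above is, in its classical Wax–Kailath form, a function of the sample eigenvalues only: for each candidate source number k it trades the sphericity of the p − k smallest eigenvalues against a complexity penalty. A sketch with illustrative eigenvalues (the numbers are invented; this is the standard criterion, not the paper's statistical performance analysis):

```python
import math

def mdl_enumerate(eigs, n_snapshots):
    """Wax-Kailath MDL source-number estimate from sample eigenvalues.

    eigs: sample covariance eigenvalues, sorted descending (length p).
    Returns the k minimizing
      MDL(k) = -N (p-k) log(g(k)/a(k)) + 0.5 k (2p - k) log N,
    where g and a are the geometric and arithmetic means of the
    p-k smallest eigenvalues.
    """
    p = len(eigs)
    best_k, best_val = 0, float("inf")
    for k in range(p):
        tail = eigs[k:]
        m = len(tail)
        geo = math.exp(sum(math.log(x) for x in tail) / m)
        ari = sum(tail) / m
        val = (-n_snapshots * m * math.log(geo / ari)
               + 0.5 * k * (2 * p - k) * math.log(n_snapshots))
        if val < best_val:
            best_k, best_val = k, val
    return best_k

# Two strong sources over a flat noise floor, 7 sensors, 200 snapshots
eigs = [12.0, 6.0, 1.05, 1.02, 1.0, 0.98, 0.95]
k_hat = mdl_enumerate(eigs, 200)   # 2
```

A multiple-missed detection, in the paper's terminology, is the event that the minimizer lands two or more below the true source count, which happens when the smallest signal eigenvalues sink into the spread of the noise eigenvalues.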

  16. A Field-Programmable Analog Array Development Platform for Vestibular Prosthesis Signal Processing

    PubMed Central

    Töreyin, Hakan; Bhatti, Pamela

    2015-01-01

    We report on a vestibular prosthesis signal processor realized using an experimental field programmable analog array (FPAA). Completing signal processing functions in the analog domain, the processor is designed to help replace a malfunctioning inner ear sensory organ, a semicircular canal. Relying on angular head motion detected by an inertial sensor, the signal processor maps angular velocity into meaningful control signals to drive a current stimulator. To demonstrate biphasic pulse control, a 1 kΩ resistive load was placed across an H-bridge circuit. When connected to a 2.4 V supply, a biphasic current of 100 μA was maintained at stimulation frequencies from 50–350 Hz, pulsewidths from 25–400 μsec, and interphase gaps ranging from 25–250 μsec. PMID:23853331
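
A biphasic stimulation waveform of the kind controlled by the H-bridge is described by four parameters: amplitude, phase width, interphase gap, and pulse rate. The sketch below samples such a charge-balanced train; the parameter values echo the ranges reported above, but the function itself is a hypothetical illustration, not the FPAA implementation:

```python
def biphasic_pulse_train(amp_ua, pulsewidth_us, gap_us, freq_hz,
                         duration_ms, dt_us=5):
    """Sample a charge-balanced biphasic current train.

    Each stimulation period holds a cathodic phase (-amp), an interphase
    gap at zero, and an anodic phase (+amp); the rest of the period is
    zero. Returns samples in microamps at dt_us resolution.
    """
    period_us = 1e6 / freq_hz
    n = int(duration_ms * 1000 / dt_us)
    out = []
    for i in range(n):
        t = (i * dt_us) % period_us
        if t < pulsewidth_us:
            out.append(-amp_ua)
        elif t < pulsewidth_us + gap_us:
            out.append(0.0)
        elif t < 2 * pulsewidth_us + gap_us:
            out.append(amp_ua)
        else:
            out.append(0.0)
    return out

# 100 uA phases of 100 us, 50 us gap, 200 Hz rate, one 10 ms window
train = biphasic_pulse_train(100.0, 100.0, 50.0, 200.0, 10.0)
```

Equal-duration opposite-polarity phases make the net charge per period zero, which is the safety property the H-bridge topology is used to enforce in stimulators.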

  17. Alternative Post-Processing on a CMOS Chip to Fabricate a Planar Microelectrode Array

    PubMed Central

    López-Huerta, Francisco; Herrera-May, Agustín L.; Estrada-López, Johan J.; Zuñiga-Islas, Carlos; Cervantes-Sanchez, Blanca; Soto, Enrique; Soto-Cruz, Blanca S.

    2011-01-01

    We present an alternative post-processing on a CMOS chip to release a planar microelectrode array (pMEA) integrated with its signal readout circuit, which can be used for monitoring the neuronal activity of vestibular ganglion neurons in newborn Wistar strain rats. This chip is fabricated through a 0.6 μm CMOS standard process and has a pMEA of 12 electrodes arranged in a 4 × 3 matrix. The alternative CMOS post-process includes the development of masks to protect the readout circuit and the power supply pads. A wet etching process eliminates the aluminum located on the surface of the p+-type silicon. This silicon is used as a transducer for recording the neuronal activity and as an interface between the readout circuit and neurons. The readout circuit is composed of an amplifier and a tunable bandpass filter, which is placed on a 0.015 mm2 silicon area. The tunable bandpass filter has a bandwidth of 98 kHz and a common mode rejection ratio (CMRR) of 87 dB. These characteristics of the readout circuit are appropriate for neuronal recording applications. PMID:22346681

  18. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array

    NASA Astrophysics Data System (ADS)

    Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H.

    2012-04-01

    The monitoring of geomorphic processes during extreme climatic events is of primary interest for estimating their impact on landscape dynamics. However, available techniques for surveying surface activity do not provide adequate temporal and/or spatial resolution. Furthermore, these methods can hardly investigate the dynamics of the events, since detection is made a posteriori. To increase our knowledge of landscape evolution and the influence of extreme climatic events on catchment dynamics, we need to develop new tools and procedures. Many past works have shown that seismic signals are relevant for detecting and locating surface processes (landslides, debris flows). During the 2010 typhoon season, we deployed a network of 12 seismometers dedicated to monitoring the surface processes of the Chenyoulan catchment in Taiwan. We test the ability of a two-dimensional array with small inter-station distances (~11 km) to map the geomorphic activity continuously and at catchment scale. The spectral analysis of continuous records shows a high-frequency (> 1 Hz) seismic energy that is coherent with the occurrence of hillslope and river processes. Using a basic detection algorithm and a location approach based on the analysis of seismic amplitudes, we manage to locate the catchment activity. We mainly observe short-duration events (> 300 occurrences) associated with debris falls and bank collapses during daily convective storms, with 69% of occurrences coherent with the time distribution of precipitation. We also identify a couple of debris flows during a large tropical storm. In contrast, the FORMOSAT imagery does not detect any activity, which somehow reflects the lack of extreme climatic conditions during the experiment. However, high-resolution pictures confirm the existence of links between most geomorphic events and existing structures (landslide scars, gullies...). We thus conclude that the activity is dominated by reactivation processes.

  19. [Molecular genetic bases of adaptation processes and approaches to their analysis].

    PubMed

    Salmenkova, E A

    2013-01-01

    Great interest in studying the molecular genetic bases of adaptation processes is explained by their importance in understanding evolutionary changes, in the development of intraspecific and interspecific genetic diversity, and in the creation of approaches and programs for maintaining and restoring populations. The article examines the sources and conditions for generating adaptive genetic variability and the contribution of neutral and adaptive genetic variability to the population structure of the species; methods for identifying adaptive genetic variability at the genome level are also described. Considerable attention is paid to the potential of new technologies of genome analysis, including next-generation sequencing and some accompanying methods. In conclusion, the important role of the joint use of genomics and proteomics approaches in understanding the molecular genetic bases of adaptation is emphasized.

  20. The influence of negative stimulus features on conflict adaptation: evidence from fluency of processing

    PubMed Central

    Fritz, Julia; Fischer, Rico; Dreisbach, Gesine

    2015-01-01

    Cognitive control enables adaptive behavior in a dynamically changing environment. In this context, one prominent adaptation effect is the sequential conflict adjustment, i.e., the observation of reduced response interference on trials following conflict trials. Increasing evidence suggests that such response conflicts are registered as aversive signals. So far, however, the functional role of this aversive signal in conflict adaptation has not been put to the test directly. In two experiments, the affective valence of conflict stimuli was manipulated by fluency of processing (stimulus contrast). Experiment 1 used a flanker interference task, Experiment 2 a color-word Stroop task. In both experiments, conflict adaptation effects were only present in fluent trials but absent in disfluent trials. Results thus speak against the simple idea that any aversive stimulus feature is suited to promote specific conflict adjustments. Two alternative but not mutually exclusive accounts, namely resource competition and adaptation-by-motivation, will be discussed. PMID:25767453

  1. Investigation on fabrication process of dissolving microneedle arrays to improve effective needle drug distribution.

    PubMed

    Wang, Qingqing; Yao, Gangtao; Dong, Pin; Gong, Zihua; Li, Ge; Zhang, Kejian; Wu, Chuanbin

    2015-01-23

    The dissolving microneedle array (DMNA) offers a novel potential approach for transdermal delivery of biological macromolecular drugs and vaccines, because it can be as efficient as hypodermic injection and as safe and patient-compliant as conventional transdermal delivery. However, effective needle drug distribution is the main challenge for clinical application of DMNAs. This study focused on the mechanism and control of drug diffusion inside the DMNA during the fabrication process in order to improve drug delivery efficiency. The needle drug loading proportion (NDP) in DMNAs was measured to determine the influences of drug concentration gradient, needle drying step, excipients, and solvent of the base solution on drug diffusion and distribution. The results showed that the evaporation of the base solvent was the key factor determining NDP. Slow evaporation of water from the base led to a gradual increase in viscosity, and an approximate drug concentration equilibrium was established between the needle and base portions, resulting in an NDP as low as about 6%. When highly volatile ethanol was used as the base solvent, the viscosity in the base rose quickly, resulting in an NDP of more than 90%. Ethanol as base solvent did not impact the insertion capability of DMNAs, but greatly increased the in vitro drug release and transdermal delivery from DMNAs. Furthermore, the drug diffusion process during DMNA fabrication was thoroughly investigated for the first time, and the outcomes can be applied to most two-step molding processes and to the optimization of DMNA fabrication. PMID:25446513

  2. An Eye-adapted Beamforming for Axial B-scans Free from Crystalline Lens Aberration: In vitro and ex vivo Results with a 20 MHz Linear Array

    NASA Astrophysics Data System (ADS)

    Matéo, Tony; Mofid, Yassine; Grégoire, Jean-Marc; Ossant, Frédéric

    In ophthalmic ultrasonography, axial B-scans are seriously degraded by the presence of the crystalline lens. This strongly aberrating medium affects both spatial and contrast resolution and causes important distortions. To deal with this issue, an adapted beamforming (BF) scheme has been developed and tested with a 20 MHz linear array operated by a custom ultrasound research scanner. The adapted BF computes focusing delays that compensate for the phase aberration of the crystalline lens, including refraction effects. This BF was tested in vitro by imaging a wire phantom through an eye phantom consisting of a synthetic gelatin lens, shaped according to the unaccommodated state of an adult human crystalline lens, anatomically set up in an appropriate liquid (turpentine) to approach the in vivo velocity ratio. Both image quality and fidelity of the adapted BF were assessed and compared with conventional delay-and-sum BF through the aberrating medium. Results showed a 2-fold improvement in lateral resolution, greater sensitivity, and a 90% reduction of the spatial error (from 758 μm to 76 μm) with the adapted BF compared to conventional BF. Finally, promising first ex vivo axial B-scans of a human eye are presented.
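
    For reference, the conventional delay-and-sum (DAS) beamformer that the adapted BF is compared against can be sketched as follows. This is a generic illustration with invented parameters (element pitch, sampling rate, sound speed); the eye-adapted BF of the record would replace the straight-ray travel times below with refraction-corrected ones through the lens.

```python
import numpy as np

def das_delays(element_x, focus, c=1540.0):
    """One-way travel times (s) from a focal point (x, z) to each element."""
    return np.hypot(element_x - focus[0], focus[1]) / c

def das_beamform(rf, element_x, focus, fs, c=1540.0):
    """Delay-and-sum rf[n_elem, n_samples] toward a single receive focus."""
    tau = das_delays(element_x, focus, c)
    shifts = np.round((tau - tau.min()) * fs).astype(int)  # whole-sample delays
    n = rf.shape[1] - shifts.max()
    return sum(rf[i, s:s + n] for i, s in enumerate(shifts))

# synthetic echo from a point scatterer 10 mm deep, on-axis
fs, c = 160e6, 1540.0
x = (np.arange(64) - 31.5) * 0.1e-3        # 64 elements, 100 um pitch
focus = (0.0, 10e-3)
t = np.arange(4000) / fs
tau = das_delays(x, focus, c)
rf = np.array([np.sin(2 * np.pi * 20e6 * (t - d)) * (t > d) for d in tau])
out = das_beamform(rf, x, focus, fs, c)    # coherent sum across 64 channels
```

    With the delays matched to the scatterer, the 64 channels add nearly in phase, so the summed trace is far larger than any single channel.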

  3. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    NASA Astrophysics Data System (ADS)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for optoelectronic applications due to its band gap tunability. The efficiency of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells (NWSCs) is expected to reach as high as 44% with 1% N and 9% N in the core and shell, respectively. By developing such high efficiency NWSCs on silicon substrates, the cost of solar photovoltaics can be further reduced to 61 $/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. Therefore, a suitable NWSC structure and fabrication process need to be developed to achieve this promising NWSC. This thesis is devoted to the development of a fabrication process for GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of gallium nitride phosphide nanowire solar cells. The design of the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer, and metal contacts is discussed, and devices are fabricated. The properties of these NWSCs are also characterized to point out directions for the future development of gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective-area growth of nanowires to improve the structure of core-shell NWSCs. The fabrication process of nano-hole templates and the results are presented. To obtain consistent features of the nano-hole template, the Taguchi method is used to optimize the fabrication process.

  4. Fabrication of dense non-circular nanomagnetic device arrays using self-limiting low-energy glow-discharge processing.

    PubMed

    Zheng, Zhen; Chang, Long; Nekrashevich, Ivan; Ruchhoeft, Paul; Khizroev, Sakhrat; Litvinov, Dmitri

    2013-01-01

    We describe a low-energy glow-discharge process using a reactive ion etching system that enables non-circular device patterns, such as squares or hexagons, to be formed from a precursor array of uniform circular openings in polymethyl methacrylate (PMMA) defined by electron beam lithography. This technique is of particular interest for bit-patterned magnetic recording medium fabrication, where close-packed square magnetic bits may improve recording performance. The process and results of generating close-packed square patterns by self-limiting low-energy glow discharge are investigated. Dense magnetic arrays formed by electrochemical deposition of nickel over the self-limiting molds are demonstrated.

  5. [Adaptability of sweet corn ears to a frozen process].

    PubMed

    Ramírez Matheus, Alejandra O; Martínez, Norelkys Maribel; de Bertorelli, Ligia O; De Venanzi, Frank

    2004-12-01

    The effects of freezing on the quality of three sweet corn hybrids (2038, 2010, 2004) and the control hybrid (Bonanza) were evaluated. Biometric characteristics such as ear length, ear diameter, row number, and kernel depth were measured, along with chemical and physical measurements in the fresh and frozen states. The corn ears were frozen at -95 degrees C for 7 minutes. The yield and stability of the frozen ears were evaluated at 45 and 90 days of frozen storage (-18 degrees C). The average commercial yield of frozen corn ears across all hybrids was 54.2%, comparable to the industry range of 48% to 54%. The average ear length was 21.57 cm, row number 15, ear diameter 45.54 mm, and kernel depth 8.57 mm; none of these measurements differed from commercial values reported by the industry. All corn samples evaluated showed good stability despite the freezing process and storage. Hybrid 2038 ranked highest in quality. PMID:15969270

  6. Investigation of Proposed Process Sequence for the Array Automated Assembly Task, Phase 2. [low cost silicon solar array fabrication

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Bunyan, S.; Pepe, A.

    1979-01-01

    The technological readiness of the proposed process sequence was reviewed. Process steps evaluated include: (1) plasma etching to establish a standard surface; (2) forming junctions by diffusion from an N-type polymeric spray-on source; (3) forming a p+ back contact by firing a screen printed aluminum paste; (4) forming screen printed front contacts after cleaning the back aluminum and removing the diffusion oxide; (5) cleaning the junction by a laser scribe operation; (6) forming an antireflection coating by baking a polymeric spray-on film; (7) ultrasonically tin padding the cells; and (8) assembling cell strings into solar circuits using ethylene vinyl acetate as an encapsulant and laminating medium.

  7. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state-of-health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  8. Biologically inspired large scale chemical sensor arrays and embedded data processing

    NASA Astrophysics Data System (ADS)

    Marco, S.; Gutiérrez-Gálvez, A.; Lansner, A.; Martinez, D.; Rospars, J. P.; Beccherelli, R.; Perera, A.; Pearce, T.; Vershure, P.; Persaud, K.

    2013-05-01

    Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy, an efficient combinatorial coding along with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. EU Funded Project NEUROCHEM (Bio-ICT-FET- 216916) has developed novel computing paradigms and biologically motivated artefacts for chemical sensing taking inspiration from the biological olfactory pathway. To demonstrate this approach, a biomimetic demonstrator has been built featuring a large scale sensor array (65K elements) in conducting polymer technology mimicking the olfactory receptor neuron layer, and abstracted biomimetic algorithms have been implemented in an embedded system that interfaces the chemical sensors. The embedded system integrates computational models of the main anatomic building blocks in the olfactory pathway: the olfactory bulb, and olfactory cortex in vertebrates (alternatively, antennal lobe and mushroom bodies in the insect). For implementation in the embedded processor an abstraction phase has been carried out in which their processing capabilities are captured by algorithmic solutions. Finally, the algorithmic models are tested with an odour robot with navigation capabilities in mixed chemical plumes

  9. Impedimetric real-time monitoring of neural pluripotent stem cell differentiation process on microelectrode arrays.

    PubMed

    Seidel, Diana; Obendorf, Janine; Englich, Beate; Jahnke, Heinz-Georg; Semkova, Vesselina; Haupt, Simone; Girard, Mathilde; Peschanski, Marc; Brüstle, Oliver; Robitzki, Andrea A

    2016-12-15

    In today's research on neural development and disease, human neural stem/progenitor cell-derived networks represent the only accessible in vitro model possessing a primary phenotype. However, cultivation and, moreover, differentiation as well as maturation of human neural stem/progenitor cells are very complex and time-consuming processes. Therefore, techniques for the sensitive, non-invasive, real-time monitoring of neuronal differentiation and maturation are in high demand. Using impedance spectroscopy, the differentiation of several human neural stem/progenitor cell lines was analyzed in detail. After development of an optimized microelectrode array for reliable and sensitive long-term monitoring, distinct cell-dependent impedimetric parameters that could specifically be associated with the progress and quality of neuronal differentiation were identified. Cellular impedance changes correlated well with the temporal regulation of biomolecular progenitor versus mature neural marker expression as well as with the cellular structure changes accompanying neuronal differentiation. More strikingly, the capability of the impedimetric differentiation monitoring system for use as a screening tool was demonstrated by applying compounds that are known to promote neuronal differentiation, such as the γ-secretase inhibitor DAPT. The non-invasive impedance spectroscopy-based measurement system can be used for sensitive and quantitative monitoring of neuronal differentiation processes. Therefore, this technique could be a very useful tool for quality control of neuronal differentiation and, moreover, for neurogenic compound identification and industrial high-content screening demands in the fields of safety assessment and drug development.

  10. Real-time atmospheric imaging and processing with hybrid adaptive optics and hardware accelerated lucky-region fusion (LRF) algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jony Jiang; Carhart, Gary W.; Beresnev, Leonid A.; Aubailly, Mathieu; Jackson, Christopher R.; Ejzak, Garrett; Kiamilev, Fouad E.

    2014-09-01

    Atmospheric turbulence can significantly deteriorate the performance of long-range conventional imaging systems and create difficulties for target identification and recognition. Our in-house developed adaptive optics (AO) system, which contains high-performance deformable mirrors (DMs) and a fast stochastic parallel gradient descent (SPGD) control mechanism, allows effective compensation of such turbulence-induced wavefront aberrations and results in significant improvement in image quality. In addition, we developed an advanced digital synthetic imaging and processing technique, "lucky-region" fusion (LRF), to mitigate image degradation over a large field-of-view (FOV). The LRF algorithm extracts sharp regions from each image obtained from a series of short-exposure frames and fuses them into a final improved image. We further implemented this algorithm on a Virtex-7 field programmable gate array (FPGA) and achieved real-time video processing. Experiments were performed by combining both AO and the hardware-implemented LRF processing technique over a near-horizontal 2.3 km atmospheric propagation path. Our approach can also serve as a universal real-time imaging and processing system with a generic Camera Link input, a user controller interface, and a DVI video output.
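
    The core of the LRF idea (per pixel, keep the frame that is locally sharpest) can be sketched in a few lines of NumPy. This is a simplified software illustration, not the FPGA implementation described above; the 7x7 averaging window and the gradient-energy sharpness metric are assumptions.

```python
import numpy as np

def box_filter(img, k=7):
    """k x k moving average via two 1-D convolutions (edge-padded)."""
    kern = np.ones(k) / k
    pad = k // 2
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), kern, 'valid'),
        1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), kern, 'valid'),
        0, tmp)

def lucky_region_fusion(frames, k=7):
    """Fuse short-exposure frames by picking, per pixel, the frame with
    the highest local sharpness (smoothed gradient energy)."""
    gy = [np.gradient(f, axis=0) for f in frames]
    gx = [np.gradient(f, axis=1) for f in frames]
    sharp = np.stack([box_filter(y ** 2 + x ** 2, k)
                      for y, x in zip(gy, gx)])
    best = np.argmax(sharp, axis=0)        # index of sharpest frame per pixel
    stack = np.stack(frames)
    return np.take_along_axis(stack, best[None], axis=0)[0]

# usage: fuse a sharp frame with a featureless (fully blurred) one
f0 = np.random.default_rng(0).random((32, 32))
f1 = np.full((32, 32), 0.5)
fused = lucky_region_fusion([f0, f1])      # picks the sharp frame everywhere
```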

  11. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
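
    The preprocessing stage described above (per-channel high-pass filtering of the spectrogram with channel-specific time constants, followed by half-wave rectification) can be sketched as follows. The exponential-running-mean form of the high-pass filter and all parameter values are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def ic_adaptation(spec, taus, dt=0.005):
    """Subtract an exponential running mean (time constant taus[f] per
    frequency channel) from each channel of spectrogram spec[f, t],
    then half-wave rectify, mimicking adaptation to mean sound level."""
    n_f, n_t = spec.shape
    out = np.zeros_like(spec)
    mean = np.zeros(n_f)                   # running estimate of mean level
    alpha = dt / taus                      # per-channel smoothing factors
    for t in range(n_t):
        mean = mean + alpha * (spec[:, t] - mean)
        out[:, t] = np.maximum(spec[:, t] - mean, 0.0)  # half-wave rectify
    return out

# usage: one channel (tau = 100 ms), sound level steps up at frame 50
taus = np.array([0.1])
spec = np.zeros((1, 200))
spec[0, 50:] = 1.0
out = ic_adaptation(spec, taus, dt=0.005)  # onset transient that adapts away
```

    The step onset produces a large transient that decays as the running mean catches up, which is the adaptive behavior the model adds in front of the LN stage.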

  12. The process of adapting a universal dating abuse prevention program to adolescents exposed to domestic violence.

    PubMed

    Foshee, Vangie A; Dixon, Kimberly S; Ennett, Susan T; Moracco, Kathryn E; Bowling, J Michael; Chang, Ling-Yin; Moss, Jennifer L

    2015-07-01

    Adolescents exposed to domestic violence are at increased risk of dating abuse, yet no evaluated dating abuse prevention programs have been designed specifically for this high-risk population. This article describes the process of adapting Families for Safe Dates (FSD), an evidenced-based universal dating abuse prevention program, to this high-risk population, including conducting 12 focus groups and 107 interviews with the target audience. FSD includes six booklets of dating abuse prevention information, and activities for parents and adolescents to do together at home. We adapted FSD for mothers who were victims of domestic violence, but who no longer lived with the abuser, to do with their adolescents who had been exposed to the violence. Through the adaptation process, we learned that families liked the program structure and valued being offered the program and that some of our initial assumptions about this population were incorrect. We identified practices and beliefs of mother victims and attributes of these adolescents that might increase their risk of dating abuse that we had not previously considered. In addition, we learned that some of the content of the original program generated negative family interactions for some. The findings demonstrate the utility of using a careful process to adapt evidence-based interventions (EBIs) to cultural sub-groups, particularly the importance of obtaining feedback on the program from the target audience. Others can follow this process to adapt EBIs to groups other than the ones for which the original EBI was designed.

  13. Processes discriminating adaptive and maladaptive Internet use among European adolescents highly engaged online.

    PubMed

    Tzavela, Eleni C; Karakitsou, Chryssoula; Dreier, Michael; Mavromati, Foteini; Wölfling, Klaus; Halapi, Eva; Macarie, George; Wójcik, Szymon; Veldhuis, Lydian; Tsitsika, Artemis K

    2015-04-01

    Today adolescents are highly engaged online. Contrary to common concern, not all highly engaged adolescents develop maladaptive patterns of internet use. The present qualitative study explored the experiences, patterns and impact of use of 124 adolescents (M(age) = 16.0) reporting signs of internet addictive behaviors. The focus was to discern adaptive and maladaptive use patterns, which promote or interfere with adolescents' development, respectively. Semi-structured individual interviews were conducted in seven European countries (Greece, Spain, Poland, Germany, Romania, Netherlands and Iceland) and qualitatively analyzed using grounded theory. Considerable variability emerged in the way adolescents satisfied their personal needs online and offline, in the experienced impact from high online engagement and functional value ascribed to the internet, and in the self-regulatory processes underlying use. Variability in these discriminating processes was linked to adaptive or maladaptive adolescent internet use patterns. The emerged processes can provide direction for designing prevention and intervention programs promoting adaptive use.

  14. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming, and conventional beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use Ricker wavelets to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding-window scheme. Based on our tests, we find that CS, which is developed from the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images.
Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake
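
    As a minimal illustration of one of the methods compared above, the MUSIC pseudo-spectrum can be sketched for a uniform linear array. The array geometry, noise level, and source angles below are invented for the toy example and are unrelated to the seismic data in the study.

```python
import numpy as np

def steering(m, theta, d=0.5):
    """Steering vector of an m-element uniform linear array
    (element spacing d in wavelengths, angle theta in radians)."""
    return np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))

def music_spectrum(R, n_src, thetas, d=0.5):
    """MUSIC pseudo-spectrum: project steering vectors onto the noise
    subspace of covariance matrix R and invert the residual power."""
    m = R.shape[0]
    _, v = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = v[:, :m - n_src]                  # noise-subspace eigenvectors
    return np.array([
        1.0 / np.abs(a.conj() @ En @ En.conj().T @ a)
        for a in (steering(m, th, d) for th in thetas)])

# synthetic data: two uncorrelated sources at -20 deg and +30 deg
rng = np.random.default_rng(0)
m, n_snap = 8, 400
A = np.stack([steering(m, a) for a in np.deg2rad([-20.0, 30.0])], axis=1)
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
X = A @ S + N
R = X @ X.conj().T / n_snap
scan = np.deg2rad(np.linspace(-90.0, 90.0, 721))
P = music_spectrum(R, 2, scan)
# keep the two strongest local maxima as angle estimates
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
est = sorted(np.rad2deg(scan[sorted(peaks, key=lambda i: P[i])[-2:]]))
```

    Even though the two sources overlap completely in time, the eigendecomposition separates them in angle, which is the property the abstract exploits for overlapping rupture sub-events.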

  15. Planarized process for resonant leaky-wave coupled phase-locked arrays of mid-IR quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Chang, C.-C.; Kirch, J. D.; Boyle, C.; Sigler, C.; Mawst, L. J.; Botez, D.; Zutter, B.; Buelow, P.; Schulte, K.; Kuech, T.; Earles, T.

    2015-03-01

    On-chip resonant leaky-wave coupling of quantum cascade lasers (QCLs) emitting at 8.36 μm has been realized by selective regrowth of interelement layers in curved trenches, defined by dry and wet etching. The fabricated structure provides large index steps (Δn = 0.10) between antiguided-array element and interelement regions. In-phase-mode operation to 5.5 W front-facet emitted power in a near-diffraction-limited far-field beam pattern, with 4.5 W in the main lobe, is demonstrated. A refined fabrication process has been developed to produce phase-locked antiguided arrays of QCLs with planar geometry. The main fabrication steps in this process include non-selective regrowth of Fe:InP in interelement trenches, defined by inductively coupled plasma (ICP) etching, a chemical polishing (CP) step to planarize the surface, non-selective regrowth of interelement layers, ICP selective etching of interelement layers, and non-selective regrowth of an InP cladding layer followed by another CP step to form the element regions. This new process results in planar InGaAs/InP interelement regions, which allows for significantly improved control over the array geometry and the dimensions of element and interelement regions. Such a planar process is highly desirable for realizing shorter-emitting-wavelength (4.6 μm) arrays, where fabrication tolerances for single-mode operation are tighter than for 8 μm-emitting devices.

  16. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  17. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA, a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. 
Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter
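A minimal scalar point-process filter in this spirit can illustrate the spike-event-based update the abstract describes. This sketch assumes a log-linear rate model and a random-walk latent state (in the style of point-process filtering literature); the paper's actual decoder additionally includes the OFC model and CLDA parameter adaptation, which are omitted here.

```python
import numpy as np

def point_process_filter(spikes, alpha, beta, q=0.01, dt=0.001):
    """Decode a scalar latent state from binned spike counts using a
    point-process filter with rate model lambda = exp(alpha + beta * x).
    Illustrative sketch only: random-walk state, one neuron, small bins.
    """
    x, p = 0.0, 1.0          # state mean and variance (diffuse start)
    xs = []
    for n_k in spikes:                         # one update per time bin
        p = p + q                              # random-walk prediction
        lam = np.exp(alpha + beta * x) * dt    # expected count this bin
        p = 1.0 / (1.0 / p + beta**2 * lam)    # posterior variance
        x = x + p * beta * (n_k - lam)         # posterior mean (innovation)
        xs.append(x)
    return np.array(xs)
```

Because the update runs once per bin (or per spike event), the estimate moves at the bin time-scale rather than waiting for a batch of trials, which is the contrast with batch-based CLDA drawn in the abstract.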

  18. Medical ultrasound digital beamforming on a massively parallel processing array platform

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad

    2008-03-01

    Digital beamforming has been widely used in modern medical ultrasound instruments. Flexibility is the key advantage of a digital beamformer over the traditional analog approach. Unlike analog delay lines, digital delay can be programmed to implement new ways of beam shaping and beam steering without hardware modification. Digital beamformers can also be focused dynamically by tracking the depth and focusing the receive beam as the depth increases. By constantly updating an element weight table, a digital beamformer can dynamically increase aperture size with depth to maintain constant lateral resolution and reduce sidelobe noise. Because ultrasound digital beamformers have high I/O bandwidth and processing requirements, traditionally they have been implemented using ASICs or FPGAs that are costly both in time and in money. This paper introduces a sample implementation of a digital beamformer that is programmed in software on a Massively Parallel Processor Array (MPPA). The system consists of a host PC and a PCI Express-based beamformer accelerator with an Ambric Am2045 MPPA chip and 512 Mbytes of external memory. The Am2045 has 336 asynchronous RISC-DSP processors that communicate through a configurable structure of channels, using a self-synchronizing communication protocol.
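The delay-and-sum core that such a software beamformer executes per scan line can be sketched as follows. The function name, single fixed focus, and the geometry and constants are illustrative assumptions, not the Am2045 implementation (which would also apply apodization weights and dynamic, depth-dependent delays).

```python
import numpy as np

def delay_and_sum(rf, element_x, focus_depth, c=1540.0, fs=40e6):
    """Receive delay-and-sum beamforming for one scan line.

    rf          : (n_elements, n_samples) array of per-channel RF data
    element_x   : (n_elements,) lateral element positions in metres
    focus_depth : focal depth in metres (dynamic focusing re-derives
                  these delays as depth increases)
    c, fs       : speed of sound (m/s) and sampling rate (Hz)
    """
    n_el, n_samp = rf.shape
    # Extra path length to each element relative to the on-axis path
    extra_path = np.sqrt(focus_depth**2 + element_x**2) - focus_depth
    delays = np.round(extra_path / c * fs).astype(int)   # in samples
    out = np.zeros(n_samp)
    for ch in range(n_el):
        d = delays[ch]
        # Shift each channel earlier by its geometric delay, then sum
        out[:n_samp - d] += rf[ch, d:]
    return out
```

On an MPPA, each channel's shift-and-accumulate would typically map to its own processor, with the channel network carrying the partial sums.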

  19. Micro-processing of Hybrid Field-Effect Transistor Arrays using Picosecond Lasers

    NASA Astrophysics Data System (ADS)

    Ireland, Robert; Liu, Yu; Spalenka, Josef; Jaiswal, Supriya; Oishi, Shingo; Fukumitsu, Kenshi; Mochizuki, Ryosuke; Gopalan, Padma; Evans, Paul; Katz, Howard

    2014-03-01

    We use a solid-state picosecond laser to pattern thin-film semiconductors that completely cover a substrate and utilize an array of top-contact electrodes, particularly for materials with high chemical sensitivity or resistance. Picosecond laser processing is fully data-driven, both thermally and mechanically non-invasive, and exploits highly localized non-linear optical effects. We investigate FETs comprised of p-channel tellurium and organic semiconductor molecules sequentially vapor-deposited onto Si/SiO2 substrates. Secondly, zinc oxide and zinc-tin oxide are used for high mobility n-channel FETs, cast onto Si/SiO2 by sol-gel method. Finally, zinc oxide FETs are prepared as photomodulatable devices using rhenium bipyridine as a light-sensitive electron-donating molecule. The laser effectively isolates FETs while charge carrier mobility is maintained, but leakage currents through the FET dielectric are drastically reduced, and other functions are enhanced. For instance, the ratio of measured gate current to photocurrent for photomodulatable FETs drops from a factor of five to zero after laser isolation, in both forward and reverse bias. We also observe a threshold voltage shift in organic semiconductors after laser isolation, possibly due to local charging effects.

  20. Solution-Processed Organic Thin-Film Transistor Array for Active-Matrix Organic Light-Emitting Diode

    NASA Astrophysics Data System (ADS)

    Harada, Chihiro; Hata, Takuya; Chuman, Takashi; Ishizuka, Shinichi; Yoshizawa, Atsushi

    2013-05-01

    We developed a 3-in. organic thin-film transistor (OTFT) array with an ink-jetted organic semiconductor. All layers except electrodes were fabricated by solution processes. The OTFT performed well without hysteresis, and the field-effect mobility in the saturation region was 0.45 cm² V⁻¹ s⁻¹, the threshold voltage was 3.3 V, and the on/off current ratio was more than 10⁶. We demonstrated a 3-in. active-matrix organic light-emitting diode (AMOLED) display driven by the OTFT array. The display could provide clear moving images. The peak luminance of the display was 170 cd/m².

  1. Obsessive-Compulsive Disorder: The Process of Parental Adaptation and Implications for Genetic Counseling.

    PubMed

    Andrighetti, Heather; Semaka, Alicia; Stewart, S Evelyn; Shuman, Cheryl; Hayeems, Robin; Austin, Jehannine

    2016-10-01

    Obsessive-compulsive disorder (OCD) has primarily pediatric onset and well-documented unique impacts on family functioning. Limited research has assessed the understanding that parents of children with OCD have of the etiology of the condition, and there are no data regarding potential applications of genetic counseling for this population. We recruited 13 parents of 13 children diagnosed with OCD from the OCD Registry at British Columbia Children's Hospital, and conducted qualitative semi-structured telephone interviews to explore participants' experiences with their child's OCD, causal attributions of OCD, and perceptions of two genetic counseling vignettes. Interviews were audio-recorded, transcribed, and analyzed using elements of grounded theory qualitative methodology. Analysis revealed key components and contextual elements of the process through which parents adapt to their child's OCD. This adaptation process involved conceptualizing the meaning of OCD, navigating its impact on family dynamics, and developing effective illness management strategies. Adaptation took place against a backdrop of stigmatization and was shaped by participants' family history of mental illness and their child's specific manifestations of OCD. Parents perceived genetic counseling, as described in the vignettes, as being empowering, alleviating guilt and blame, and positively impacting treatment orientation. These data provide insight into the process of parental adaptation to pediatric OCD, and suggest that genetic counseling services for families affected by OCD may help facilitate adaptation to this illness.

  2. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Dey, Dibyendu; Memis, Omer G.; Katsnelson, Alex; Mohseni, Hooman

    2008-03-01

    Fabrication of a large area of periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for realization of 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays over large areas on conventional positive and negative photoresist, and with a high aspect ratio. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of holes of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows a good agreement with the experimental results.

  3. Adaptation to leftward-shifting prisms reduces the global processing bias of healthy individuals.

    PubMed

    Bultitude, Janet H; Woods, Jill M

    2010-05-01

    When healthy individuals are presented with peripheral figures in which small letters are arranged to form a large letter, they are faster to identify the global- than the local-level information, and have difficulty ignoring global information when identifying the local level. The global reaction time (RT) advantage and global interference effect imply preferential processing of global-level information in the normal brain. This contrasts with the local processing bias demonstrated following lesions to the right temporo-parietal junction (TPJ), such as those that lead to hemispatial neglect (neglect). Recent research from our lab demonstrated that visuo-motor adaptation to rightward-shifting prisms, which ameliorates many leftward performance deficits of neglect patients, improved the local processing bias of patients with right TPJ lesions (Bultitude, Rafal, & List, 2009). Here we demonstrate that adaptation to leftward-shifting prisms, which can induce neglect-like performance in neurologically healthy individuals, also reduces the normal global processing bias. Forty-eight healthy participants were asked to identify the global or local forms of hierarchical figures before and after adaptation to leftward- or rightward-shifting prisms. Prior to prism adaptation, both groups had greater difficulty ignoring irrelevant global information when identifying the local level (global interference) compared to their ability to ignore irrelevant local-level information when identifying the global level (local interference). Participants who adapted to leftward-shifting prisms showed a significant reduction in global interference, but there was no change in the performance of the rightward-shifting Prism Group. These results show, for the first time, that in addition to previously demonstrated effects on lateralised attention, prism adaptation can influence non-lateralised spatial attention in healthy individuals.

  4. Cultural adaptation process for international dissemination of the strengthening families program.

    PubMed

    Kumpfer, Karol L; Pinyuchon, Methinin; Teixeira de Melo, Ana; Whiteside, Henry O

    2008-06-01

    The Strengthening Families Program (SFP) is an evidence-based family skills training intervention developed and found efficacious for substance abuse prevention by U.S researchers in the 1980s. In the 1990s, a cultural adaptation process was developed to transport SFP for effectiveness trials with diverse populations (African, Hispanic, Asian, Pacific Islander, and Native American). Since 2003, SFP has been culturally adapted for use in 17 countries. This article reviews the SFP theory and research and a recommended cultural adaptation process. Challenges in international dissemination of evidence-based programs (EBPs) are discussed based on the results of U.N. and U.S. governmental initiatives to transport EBP family interventions to developing countries. The technology transfer and quality assurance system are described, including the language translation and cultural adaptation process for materials development, staff training, and on-site and online Web-based supervision and technical assistance and evaluation services to assure quality implementation and process evaluation feedback for improvements.

  5. Access to Learning for Handicapped Children: A Handbook on the Instructional Adaptation Process. Field Test Version.

    ERIC Educational Resources Information Center

    Changar, Jerilynn; And Others

    The manual describes the results of a 36 month project to determine ways to modify existing curricula to meet the needs of special needs students in the mainstream. The handbook is designed in the main for administrators and facilitators as well as for teacher-adaptors. Each of eight steps in the adaptation process is broken down according to…

  6. Methods of Adapting Digital Content for the Learning Process via Mobile Devices

    ERIC Educational Resources Information Center

    Lopez, J. L. Gimenez; Royo, T. Magal; Laborda, Jesus Garcia; Calvo, F. Garde

    2009-01-01

    This article analyses different methods of adapting digital content for its delivery via mobile devices taking into account two aspects which are a fundamental part of the learning process; on the one hand, functionality of the contents, and on the other, the actual controlled navigation requirements that the learner needs in order to acquire high…

  7. An Approach to Evaluating Adolescent Adaptive Processes: Validity of an Interview-Based Measure.

    ERIC Educational Resources Information Center

    Beardslee, William R.; And Others

    1986-01-01

    An initial exploration of the validity of 15 scales designed to assess adaptive ego processes in adolescence is presented. Diabetic youngsters, psychiatric patients, and high school students with no illness are compared using the scales. Correlations are found between the scales and a separate, conceptually related measure of ego development.…

  8. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T.; Gibbons, S.J.; Ringdal, F.; Harris, D.B.

    2007-01-30

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst-reviewed contexts:
    • one or more fixed frequency bands should be chosen for each phase tested for.
    • the frequency band providing the most stable parameter estimates varies from site to site, and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site.
    • slowness corrections applied depend strongly on the frequency band chosen.
    • the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain.
For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  9. Simpler Adaptive Optics using a Single Device for Processing and Control

    NASA Astrophysics Data System (ADS)

    Zovaro, A.; Bennet, F.; Rye, D.; D'Orgeville, C.; Rigaut, F.; Price, I.; Ritchie, I.; Smith, C.

    The management of low Earth orbit is becoming more urgent as satellite and debris densities climb, in order to avoid a Kessler syndrome. A key part of this management is to precisely measure the orbit of both active satellites and debris. The Research School of Astronomy and Astrophysics at the Australian National University have been developing an adaptive optics (AO) system to image and range orbiting objects. The AO system provides atmospheric correction for imaging and laser ranging, allowing for the detection of smaller angular targets and drastically increasing the number of detectable objects. AO systems are by nature very complex and high cost systems, often costing millions of dollars and taking years to design. It is not unusual for AO systems to comprise multiple servers, digital signal processors (DSP) and field programmable gate arrays (FPGA), with dedicated tasks such as wavefront sensor data processing or wavefront reconstruction. While this multi-platform approach has been necessary in AO systems to date due to computation and latency requirements, this may no longer be the case for those with less demanding processing needs. In recent years, large strides have been made in FPGA and microcontroller technology, with today's devices having clock speeds in excess of 200 MHz whilst using a < 5 V power supply. AO systems using a single such device for all data processing and control may present a far simpler, cheaper, smaller and more efficient solution than existing systems. A novel AO system design based around a single, low-cost controller is presented. The objective is to determine the performance which can be achieved in terms of bandwidth and correction order, with a focus on optimisation and parallelisation of AO algorithms such as wavefront measurement and reconstruction. The AO system consists of a Shack-Hartmann wavefront sensor and a deformable mirror to correct light from a 1.8 m telescope for the purpose of imaging orbiting satellites. The

  10. Developing Smart Seismic Arrays: A Simulation Environment, Observational Database, and Advanced Signal Processing

    SciTech Connect

    Harben, P E; Harris, D; Myers, S; Larsen, S; Wagoner, J; Trebes, J; Nelson, K

    2003-09-15

    Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization and in full 3D finite difference modeling as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic

  11. An adaptive altitude information fusion method for autonomous landing processes of small unmanned aerial rotorcraft.

    PubMed

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests. PMID:23201993
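The adaptive step described here, re-estimating the measurement noise covariance online, can be sketched in scalar form. This is a hedged illustration: a recursive innovation-based estimate of R stands in for the paper's maximum a posteriori estimator, and the random-walk altitude model and constants are assumptions.

```python
import numpy as np

def adaptive_kf(z, q=1e-4, r0=1.0):
    """Scalar adaptive Kalman filter for a slowly varying altitude:
    x_k = x_{k-1} + w (var q), z_k = x_k + v (var R, unknown).
    R is re-estimated from the innovation sequence as data arrive."""
    x, p, r = z[0], 1.0, r0
    est = [x]
    for k, zk in enumerate(z[1:], start=2):
        p = p + q                       # predict state variance
        nu = zk - x                     # innovation
        # E[nu^2] = p + R, so (nu^2 - p) is an unbiased sample of R;
        # average it recursively and clamp to stay positive
        r = max((1 - 1.0 / k) * r + (1.0 / k) * (nu**2 - p), 1e-8)
        kgain = p / (p + r)             # Kalman gain with adapted R
        x = x + kgain * nu
        p = (1 - kgain) * p
        est.append(x)
    return np.array(est), r
```

In the paper's setting the same idea runs on wavelet-prefiltered multi-sensor altitude data inside an extended Kalman filter; here one noisy channel suffices to show the mechanism.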

  12. An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft

    PubMed Central

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests. PMID:23201993

  13. Adaptive automation of human-machine system information-processing functions.

    PubMed

    Kaber, David B; Wright, Melanie C; Prinzel, Lawrence J; Clamann, Michael P

    2005-01-01

    The goal of this research was to describe the ability of human operators to interact with adaptive automation (AA) applied to various stages of complex systems information processing, defined in a model of human-automation interaction. Forty participants operated a simulation of an air traffic control task. Automated assistance was adaptively applied to information acquisition, information analysis, decision making, and action implementation aspects of the task based on operator workload states, which were measured using a secondary task. The differential effects of the forms of automation were determined and compared with a manual control condition. Results of two 20-min trials of AA or manual control revealed a significant effect of the type of automation on performance, particularly during manual control periods as part of the adaptive conditions. Humans appear to better adapt to AA applied to sensory and psychomotor information-processing functions (action implementation) than to AA applied to cognitive functions (information analysis and decision making), and AA is superior to completely manual control. Potential applications of this research include the design of automation to support air traffic controller information processing.

  14. Approaches to evaluating climate change impacts on species: a guide to initiating the adaptation planning process.

    PubMed

    Rowland, Erika L; Davison, Jennifer E; Graumlich, Lisa J

    2011-03-01

    Assessing the impact of climate change on species and associated management objectives is a critical initial step for engaging in the adaptation planning process. Multiple approaches are available. While all possess limitations to their application associated with the uncertainties inherent in the data and models that inform their results, conducting and incorporating impact assessments into the adaptation planning process at least provides some basis for making resource management decisions that are becoming inevitable in the face of rapidly changing climate. Here we provide a non-exhaustive review of long-standing (e.g., species distribution models) and newly developed (e.g., vulnerability indices) methods used to anticipate the response to climate change of individual species as a guide for managers grappling with how to begin the climate change adaptation process. We address the limitations (e.g., uncertainties in climate change projections) associated with these methods, and other considerations for matching appropriate assessment approaches with the management questions and goals. Thorough consideration of the objectives, scope, scale, time frame and available resources for a climate impact assessment allows for informed method selection. With many data sets and tools available on-line, the capacity to undertake and/or benefit from existing species impact assessments is accessible to those engaged in resource management. With some understanding of potential impacts, even if limited, adaptation planning begins to move toward the development of management strategies and targeted actions that may help to sustain functioning ecosystems and their associated services into the future.

  15. Small sample properties of an adaptive filter with application to low volume statistical process control

    SciTech Connect

    Crowder, S.V.; Eshleman, L.

    1998-08-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper the authors address the issue of low volume statistical process control. They investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. The authors develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, they study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. They show that far fewer data values are needed than is typically recommended for process control applications. They also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.

  16. Small Sample Properties of an Adaptive Filter with Application to Low Volume Statistical Process Control

    SciTech Connect

    CROWDER, STEPHEN V.

    1999-09-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper we address the issue of low volume statistical process control. We investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. We develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, we study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. We show that far fewer data values are needed than is typically recommended for process control applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
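The flavor of such an adaptive filter can be sketched with a one-parameter LMS forecaster for autocorrelated data: the parameter starts unknown and is updated with each observation, and the forecast residuals are what a low-volume control chart would track. This is an illustrative stand-in, not the paper's specific filter or update rule.

```python
import numpy as np

def adaptive_ar1_monitor(x, mu=0.05):
    """One-step-ahead forecasting of an AR(1)-like process with an LMS
    adaptive filter.  Returns the final coefficient estimate and the
    sequence of forecast residuals (the quantity to chart)."""
    phi = 0.0                      # initial process parameter unknown
    resid = []
    for k in range(1, len(x)):
        pred = phi * x[k - 1]      # forecast from current estimate
        e = x[k] - pred            # forecast error (charted residual)
        phi += mu * e * x[k - 1]   # LMS step toward minimum MSE
        resid.append(e)
    return phi, np.array(resid)
```

Charting the residuals rather than the raw autocorrelated data is what lets a short start-up series be monitored sensibly, which is the paper's low-volume setting.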

  17. Two-Dimensional Systolic Array For Kalman-Filter Computing

    NASA Technical Reports Server (NTRS)

    Chang, Jaw John; Yeh, Hen-Geul

    1988-01-01

    Two-dimensional, systolic-array, parallel data processor performs Kalman filtering in real time. Algorithm rearranged to be Faddeev algorithm for generalized signal processing. Algorithm mapped onto very-large-scale integrated-circuit (VLSI) chip in two-dimensional, regular, simple, expandable array of concurrent processing cells. Processor does matrix/vector-based algebraic computations. Applications include adaptive control of robots, remote manipulators and flexible structures and processing radar signals to track targets.
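The Faddeev step that such an array implements computes D + C·A⁻¹·B by eliminating the lower-left block of [[A, B], [−C, D]]; choosing the blocks appropriately yields sums, products, inverses, or a Kalman gain (for example C = PHᵀ, A = HPHᵀ + R, B = I, D = 0 gives the gain). A minimal serial sketch of that elimination, assuming A needs no pivoting:

```python
import numpy as np

def faddeev(A, B, C, D):
    """Compute D + C @ inv(A) @ B by Gaussian elimination on the block
    matrix [[A, B], [-C, D]] -- the operation a systolic Faddeev array
    performs with local pivot-and-eliminate cells."""
    n = A.shape[0]
    M = np.block([[A, B], [-C, D]])
    # Forward-eliminate the first n columns; the row operations applied
    # to the bottom rows implicitly form W = C @ inv(A)
    for j in range(n):
        for i in range(j + 1, M.shape[0]):
            f = M[i, j] / M[j, j]
            M[i, :] -= f * M[j, :]
    # Bottom rows are now [0, D + C @ inv(A) @ B]
    return M[n:, n:]
```

On the VLSI array each cell holds one matrix entry and performs the multiply-subtract locally, so the dependency structure of this double loop maps directly onto nearest-neighbor communication.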

  18. Recognition Time for Letters and Nonletters: Effects of Serial Position, Array Size, and Processing Order.

    ERIC Educational Resources Information Center

    Mason, Mildred

    1982-01-01

    Three experiments report additional evidence that it is a mistake to account for all interletter effects solely in terms of sensory variables. These experiments attest to the importance of structural variables such as retinal location, array size, and ordinal position. (Author/PN)

  19. 2D Array of Far-infrared Thermal Detectors: Noise Measurements and Processing Issues

    NASA Technical Reports Server (NTRS)

    Lakew, B.; Aslam, S.; Stevenson, T.

    2008-01-01

    A magnesium diboride (MgB2) detector 2D array for use in future space-based spectrometers is being developed at GSFC. Expected pixel sensitivities and comparison to current state-of-the-art infrared (IR) detectors will be discussed.

  20. Cognitive adaptation: spatial memory or attentional processing. a comment on Furley and Memmert (2010).

    PubMed

    Allen, R; Fioratou, E; McGeorge, P

    2011-02-01

    This commentary considers the paper by Furley and Memmert (2010) who sought to test the respective validities of the specific processing and cognitive adaptation hypotheses. That they found no evidence of a difference between experienced basketball players and nonathletes on the Corsi block task, a measure of spatial memory, led them to infer support for the specific processing hypothesis, namely that differences between experts and novices manifest themselves only in processes related specifically to the domain of expertise. An alternative interpretation is offered, indicating possible confounds and referring to recent research that suggests Corsi block and dynamic spatial tasks depend upon different neuronal networks.

  1. Adaptation of swallowing hyo-laryngeal kinematics is distinct in oral vs. pharyngeal sensory processing

    PubMed Central

    Lokhande, Akshay; Christopherson, Heather; German, Rebecca; Stone, Alice

    2012-01-01

    Before a bolus is pushed into the pharynx, oral sensory processing is critical for planning movements of the subsequent pharyngeal swallow, including hyoid bone and laryngeal (hyo-laryngeal) kinematics. However, oral and pharyngeal sensory processing for hyo-laryngeal kinematics is not fully understood. In 11 healthy adults, we examined changes in kinematics with sensory adaptation, sensitivity shifting, with oropharyngeal swallows vs. pharyngeal swallows (no oral processing), and with various bolus volumes and tastes. Only pharyngeal swallows showed sensory adaptation (gradual changes in kinematics with repeated exposure to the same bolus). Conversely, only oropharyngeal swallows distinguished volume differences, whereas pharyngeal swallows did not. No taste effects were observed for either swallow type. The hyo-laryngeal kinematics were very similar between oropharyngeal swallows and pharyngeal swallows with a comparable bolus. Sensitivity shifting (changing sensory threshold for a small bolus when it immediately follows several very large boluses) was not observed in pharyngeal or oropharyngeal swallowing. These findings indicate that once oral sensory processing has set a motor program for a specific kind of bolus (i.e., 5 ml water), hyo-laryngeal movements are already highly standardized and optimized, showing no shifting or adaptation regardless of repeated exposure (sensory adaptation) or previous sensory experiences (sensitivity shifting). Also, the oral cavity is highly specialized for differentiating certain properties of a bolus (volume) that might require a specific motor plan to ensure swallowing safety, whereas the pharyngeal cavity does not make the same distinctions. Pharyngeal sensory processing might not be able to adjust motor plans created by the oral cavity once the swallow has already been triggered. PMID:22403349

  2. Adaptive methods of two-scale edge detection in post-enhancement visual pattern processing

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2008-04-01

    Adaptive methods are defined and experimentally studied for a two-scale edge detection process that mimics human visual perception of edges and is inspired by the parvo-cellular (P) and magno-cellular (M) physiological subsystems of natural vision. This two-channel processing consists of a high spatial acuity/coarse contrast channel (P) and a coarse acuity/fine contrast channel (M). We perform edge detection after a very strong non-linear image enhancement that uses smart Retinex image processing. Two conditions that arise from this enhancement demand adaptiveness in edge detection: the presence of random noise further exacerbated by the enhancement process, and the equally random occurrence of dense textural visual information. We examine how best to deal with both phenomena with an automatic adaptive computation that treats both high noise and dense textures as too much information, and gracefully shifts from small-scale to medium-scale edge-pattern priorities. This shift is accomplished by using different edge-enhancement schemes that correspond to the (P) and (M) channels of the human visual system. We also examine the case of adapting to a third image condition, namely too little visual information, and automatically adjust edge-detection sensitivities when sparse feature information is encountered. When this methodology is applied to a sequence of images of the same scene under varying exposures and lighting conditions, this edge-detection process produces pattern constancy that is very useful for several imaging applications that rely on image classification in variable imaging conditions.
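
    The two-channel idea above can be sketched with plain image gradients: a fine-scale (P-like) gradient and the gradient of a smoothed image (M-like). This is an illustrative sketch, not the authors' Retinex-based pipeline; the box-blur smoothing and the synthetic test image are assumptions.

```python
# Two-scale edge responses: fine-scale gradient (P-like) vs. gradient of
# a smoothed image (M-like). Illustrative only; not the paper's method.
import numpy as np

def smooth(img, passes):
    """Repeated 3x3 box blur as a cheap coarse-scale approximation."""
    out = img.astype(float)
    for _ in range(passes):
        p = np.pad(out, 1, mode="edge")
        out = sum(p[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

def grad_mag(img):
    """Central-difference gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # a vertical step edge
fine = grad_mag(img)                   # P-like: high acuity, sharp response
coarse = grad_mag(smooth(img, 3))      # M-like: coarse acuity, spread response
print(fine.max() > coarse.max())       # smoothing spreads and weakens the edge
```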

  3. Adapting School-Based Substance Use Prevention Curriculum Through Cultural Grounding: A Review and Exemplar of Adaptation Processes for Rural Schools

    PubMed Central

    Colby, Margaret; Hecht, Michael L.; Miller-Day, Michelle; Krieger, Janice L.; Syvertsen, Amy K.; Graham, John W.; Pettigrew, Jonathan

    2014-01-01

    A central challenge facing twenty-first century community-based researchers and prevention scientists is the process of curriculum adaptation. While early prevention efforts sought to develop effective programs, taking programs to scale implies that they will be adapted, especially as programs are implemented with populations other than those with whom they were developed or tested. The principle of cultural grounding, which argues that health message adaptation should be informed by knowledge of the target population and by cultural insiders, provides a theoretical rationale for cultural regrounding. This article presents an illustrative case of methods used to reground the keepin’ it REAL substance use prevention curriculum for a rural adolescent population. We argue that adaptation processes like those presented should be incorporated into the design and dissemination of prevention interventions. PMID:22961604

  4. Adapting existing natural language processing resources for cardiovascular risk factors identification in clinical notes.

    PubMed

    Khalifa, Abdulrahman; Meystre, Stéphane

    2015-12-01

    The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status, among other factors found in the health records of diabetic patients. In addition, the task involved detecting medications and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application's main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our approach, mostly based on existing tools adapted with minimal changes, achieved satisfactory performance with limited development effort.
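
    The dictionary-based lookup that Textractor contributes can be illustrated with a toy term-to-risk-factor map; all terms and the example note below are invented for illustration, not taken from the i2b2 data or the actual Textractor dictionaries.

```python
# Toy dictionary-based lookup for risk factors in a clinical note.
# The term map and note text are invented for illustration only.
risk_terms = {
    "hypertension": "high blood pressure",
    "elevated blood pressure": "high blood pressure",
    "hyperlipidemia": "high cholesterol",
    "obese": "obesity",
    "smoker": "smoking status",
}

def find_risk_factors(note):
    """Return the sorted set of risk factors whose terms appear in the note."""
    note = note.lower()
    return sorted({factor for term, factor in risk_terms.items() if term in note})

note = "Pt is an obese smoker with longstanding hypertension."
print(find_risk_factors(note))  # → ['high blood pressure', 'obesity', 'smoking status']
```

    Real systems add tokenization, negation handling and section detection on top of this lookup; the sketch shows only the core matching step.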

  5. Laser diode edge sensors for adaptive optics segmented arrays: Part 1--external cavity coupling and detector current

    NASA Astrophysics Data System (ADS)

    Remo, John L.

    1994-05-01

    An analytical study of laser diode (LD) operation coupled to external cavity scattering elements, which function as variably coupling reflectors (VCRs), is carried out to determine the interrelationship between cavity coupling and intracavity optical intensity, which together determine the current generated at the rear-facet PIN detector. If the external cavity coupling is position sensitive, it allows the relative position between the LD and the external cavity to be determined from the PIN or other detector mounted with the LD. If the LD and external cavity element are placed on opposite edges of two adjacent adaptive optics segments, they can provide the basis for a self-aligning position sensor; the amount of current detected at the PIN or other detector will depend on the relative displacement between the LD and the external coupling element. Schematics of the edge sensors, the basic electronic configuration, and the optics of the external cavity are given. The ratio of the internal cavity intensity, Ic, to the saturation intensity, Is, is plotted as a function of the external cavity coupling. When this ratio approaches one, the large-signal output is no longer a linear function of the input. For operation well below saturation, the PIN detector current is directly related to Ic and may serve as a reliable detector.

  6. Adaptive finite element program for automatic modeling of thermal processes during laser-tissue interaction

    NASA Astrophysics Data System (ADS)

    Yakunin, Alexander N.; Scherbakov, Yury N.

    1994-02-01

    The absence of satisfactory criteria for choosing discrete model parameters in computer modeling of the thermal processes of laser-biotissue interaction may be of primary significance for the accuracy of the numerical results obtained. An approach realizing a new concept of direct, automatic adaptive grid construction is suggested. The intelligent program provides high calculation accuracy and is simple in practical use, so that a physician gains the ability to prescribe treatment without the assistance of a specialist in mathematical modeling.

  7. CRISPR adaptation in Escherichia coli subtype I-E system.

    PubMed

    Kiro, Ruth; Goren, Moran G; Yosef, Ido; Qimron, Udi

    2013-12-01

    The CRISPRs (clustered regularly interspaced short palindromic repeats) and their associated Cas (CRISPR-associated) proteins are a prokaryotic adaptive defence system against foreign nucleic acids. The CRISPR array comprises short repeats flanking short segments, called 'spacers', which are derived from foreign nucleic acids. The process of spacer insertion into the CRISPR array is termed 'adaptation'. Adaptation allows the system to rapidly evolve against emerging threats. In the present article, we review the most recent studies on the adaptation process, and focus primarily on the subtype I-E CRISPR-Cas system of Escherichia coli.

  8. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    NASA Astrophysics Data System (ADS)

    Due-Hansen, J.; Midtbø, K.; Poppe, E.; Summanwar, A.; Jensen, G. U.; Breivik, L.; Wang, D. T.; Schjølberg-Henriksen, K.

    2012-07-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing that resulted in a film with sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms has been developed. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications in, for instance, intravascular cardiology and gastrointestinal imaging.

  9. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    SciTech Connect

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-06-15

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on the Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. A lumped-element circuit model of wire arrays was employed to analyze the electromagnetic energy conversion. The inductance as well as the resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that, before the final stagnation, the electromagnetic energy is mainly converted to magnetic and kinetic energy, and ohmic heating can be neglected. The kinetic energy can account for the x-ray radiation before the peak power. After the stagnation, the electromagnetic energy coupled by the load continues increasing, and the resistance of the load reaches its maximum of 0.6–1.0 Ω in about 10–20 ns.

  10. Simultaneous processing of photographic and accelerator array data from sled impact experiment

    NASA Astrophysics Data System (ADS)

    Ash, M. E.

    1982-12-01

    A Quaternion-Kalman filter model is derived to simultaneously analyze accelerometer array and photographic data from sled impact experiments. Formulas are given for the quaternion representation of rotations, the propagation of dynamical states and their partial derivatives, the observables and their partial derivatives, and the Kalman filter update of the state given the observables. The observables are accelerometer and tachometer velocity data of the sled relative to the track, linear accelerometer array and photographic data of the subject relative to the sled, and ideal angular accelerometer data. The quaternion constraints enter through perfect constraint observations and normalization after a state update. Lateral and fore-aft impact tests are analyzed with FORTRAN IV software written using the formulas of this report.
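
    The quaternion machinery the report relies on (rotating a vector by a quaternion, and re-normalizing after a Kalman state update) can be sketched as follows; the function names are illustrative, not taken from the report.

```python
# Quaternion rotation and unit-norm enforcement, as used in
# quaternion-based state estimation. Convention: q = [w, x, y, z].
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q: q * (0, v) * q†."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

def renormalize(q):
    """Enforce the unit-norm constraint after a Kalman update."""
    return q / np.linalg.norm(q)

# Rotating the x-axis by 90 degrees about z yields the y-axis.
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(rotate(renormalize(q), np.array([1.0, 0.0, 0.0])))  # ≈ [0, 1, 0]
```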

  11. Low-cost, low-loss microlens arrays fabricated by soft-lithography replication process

    NASA Astrophysics Data System (ADS)

    Kunnavakkam, Madanagopal V.; Houlihan, F. M.; Schlax, M.; Liddle, J. A.; Kolodner, P.; Nalamasu, O.; Rogers, J. A.

    2003-02-01

    This letter describes a soft lithographic approach for fabricating low-cost, low-loss microlens arrays. An accurate negative reproduction (stamp) of an existing high-quality lens surface (master) is made by thermally curing a prepolymer to a silicone elastomer against the master. Fabricating the stamp on a rigid backing plate minimizes distortion of its surface relief. Dispensing a liquid photocurable epoxy loaded to high weight percent with functionalized silica nanoparticles into the features of relief on the mold and then curing this material with UV radiation against a quartz substrate generates a replica lens array. The physical and optical characteristics of the resulting lenses suggest that the approach will be suitable for a range of applications in micro and integrated optics.

  12. AGV trace sensing and processing technology based on RGB color sensor array

    NASA Astrophysics Data System (ADS)

    Xu, Kebao; Zhu, Ping; Wang, Juncheng; Yun, Yuliang

    2009-05-01

    AGVs (Automatic Guided Vehicles) are widely used in manufacturing factories, harbors, docks and logistics fields because of their accurate automatic tracking. An AGV tracking method based on RGB color sensors detecting trace color is presented here. DR, DG and DB values of the trace color are obtained by the color sensor, from which a hue value characterizing the trace color can be calculated. Combined with a graph-theory algorithm, the hue value can be used as a parameter for tracking deviation and branch identification to implement shortest-path tracking. In addition, considering the discreteness and uncertainty of a single sensor in detecting trace information, a sensor array is adopted for information fusion to achieve accurate tracking. Compared with tracking by a single intensity sensor, AGV tracking based on an RGB color sensor array offers much better trace-tracking and branch-identification performance on complex roads.
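
    The hue computation at the heart of this method can be sketched as below; this is a generic RGB-to-hue conversion, not the paper's exact formula, and the sensor readings are invented for illustration.

```python
# Compute a brightness-independent hue value (degrees) from raw
# R, G, B channel readings, so trace color can be compared reliably.
import colorsys

def hue_from_rgb(r, g, b):
    """Return hue in degrees [0, 360) from raw channel counts."""
    m = max(r, g, b)
    if m == 0:
        return 0.0            # black: hue undefined, report 0
    h, _, _ = colorsys.rgb_to_hsv(r / m, g / m, b / m)
    return h * 360.0

# A predominantly red trace reading (illustrative values):
print(round(hue_from_rgb(200, 40, 30), 1))  # → 3.5
```

    Because hue depends on channel ratios rather than absolute counts, it is far less sensitive to ambient lighting than a single intensity reading, which is the point the abstract makes.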

  13. High-performance liquid chromatographic determination with photodiode array detection of ellagic acid in fresh and processed fruits.

    PubMed

    Amakura, Y; Okada, M; Tsuji, S; Tonogai, Y

    2000-10-27

    A high-performance liquid chromatographic (HPLC) procedure based on isocratic elution with photodiode array detection has been developed for the simple and rapid determination of ellagic acid (EA) in fresh and processed fruits. The homogenized sample was refluxed with methanol and the extract was then refined using a solid-phase cartridge before HPLC. We analyzed EA in 40 kinds of fresh fruits and 11 kinds of processed fruits by the developed method. EA was found in several berries, feijoa, pineapple and pomegranate. This is the first reported detection of EA in bayberry, feijoa and pineapple.

  14. Low Temperature Adaptation Is Not the Opposite Process of High Temperature Adaptation in Terms of Changes in Amino Acid Composition

    PubMed Central

    Yang, Ling-Ling; Tang, Shu-Kun; Huang, Ying; Zhi, Xiao-Yang

    2015-01-01

    Previous studies focused on psychrophilic adaptation generally have demonstrated that multiple mechanisms work together to increase protein flexibility and activity, as well as to decrease the thermostability of proteins. However, the relationship between high and low temperature adaptations remains unclear. To investigate this issue, we collected the available predicted whole proteome sequences of species with different optimal growth temperatures, and analyzed amino acid variations and substitutional asymmetry in pairs of homologous proteins from related species. We found that changes in amino acid composition associated with low temperature adaptation did not exhibit a coherent opposite trend when compared with changes in amino acid composition associated with high temperature adaptation. This result indicates that during their evolutionary histories the proteome-scale evolutionary patterns associated with prokaryotes exposed to low temperature environments were distinct from the proteome-scale evolutionary patterns associated with prokaryotes exposed to high temperature environments in terms of changes in amino acid composition of the proteins. PMID:26614525

  15. Spectroscopic analyses of chemical adaptation processes within microalgal biomass in response to changing environments.

    PubMed

    Vogt, Frank; White, Lauren

    2015-03-31

    Via photosynthesis, marine phytoplankton transforms large quantities of inorganic compounds into biomass. This has considerable environmental impacts as microalgae contribute for instance to counter-balancing anthropogenic releases of the greenhouse gas CO2. On the other hand, high concentrations of nitrogen compounds in an ecosystem can lead to harmful algae blooms. In previous investigations it was found that the chemical composition of microalgal biomass is strongly dependent on the nutrient availability. Therefore, it is expected that algae's sequestration capabilities and productivity are also determined by the cells' chemical environments. For investigating this hypothesis, novel analytical methodologies are required which are capable of monitoring live cells exposed to chemically shifting environments followed by chemometric modeling of their chemical adaptation dynamics. FTIR-ATR experiments have been developed for acquiring spectroscopic time series of live Dunaliella parva cultures adapting to different nutrient situations. Comparing experimental data from acclimated cultures to those exposed to a chemically shifted nutrient situation reveals insights in which analyte groups participate in modifications of microalgal biomass and on what time scales. For a chemometric description of these processes, a data model has been deduced which explains the chemical adaptation dynamics explicitly rather than empirically. First results show that this approach is feasible and derives information about the chemical biomass adaptations. Future investigations will utilize these instrumental and chemometric methodologies for quantitative investigations of the relation between chemical environments and microalgal sequestration capabilities. PMID:25813024

  17. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
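
    As background to the conventional ML techniques the paper compares against: for a single source on a uniform linear array, deterministic ML DOA estimation reduces to a grid search maximizing beamformer output power. The sketch below uses illustrative parameters (8 sensors, half-wavelength spacing, synthetic data), not the paper's setup.

```python
# Single-source ML DOA estimation on a uniform linear array (ULA)
# via grid search over the steering-vector output power.
import numpy as np

def steering(theta, n_sensors, d=0.5):
    """ULA steering vector; element spacing d in wavelengths."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d * k * np.sin(theta))

def ml_doa(X, n_sensors, grid=None):
    """Angle (radians) maximizing projected power of the sample covariance."""
    if grid is None:
        grid = np.linspace(-np.pi / 2, np.pi / 2, 1801)  # 0.1-degree steps
    R = X @ X.conj().T / X.shape[1]                      # sample covariance
    powers = [np.real(steering(t, n_sensors).conj() @ R @ steering(t, n_sensors))
              for t in grid]
    return grid[int(np.argmax(powers))]

rng = np.random.default_rng(0)
n, snaps, true_doa = 8, 200, np.deg2rad(20.0)
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(steering(true_doa, n), s)                   # source at 20 degrees
X += 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
print(np.rad2deg(ml_doa(X, n)))  # close to 20 degrees
```

    With multiple sources the search becomes multi-dimensional, which is the computational load the abstract says the t-f division reduces.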

  18. A miniature electronic nose system based on an MWNT-polymer microsensor array and a low-power signal-processing chip.

    PubMed

    Chiu, Shih-Wen; Wu, Hsiang-Chiu; Chou, Ting-I; Chen, Hsin; Tang, Kea-Tiong

    2014-06-01

    This article introduces a power-efficient, miniature electronic nose (e-nose) system. The e-nose system primarily comprises two self-developed chips, a multi-walled carbon nanotube (MWNT)-polymer-based microsensor array and a low-power signal-processing chip. The microsensor array was fabricated on a silicon wafer by using standard photolithography technology. The microsensor array comprised eight interdigitated electrodes surrounded by SU-8 "walls," which restrained the material-solvent liquid in a defined area of 650 × 760 μm². To achieve a reliable sensor-manufacturing process, we used a two-layer deposition method, coating the MWNTs and polymer film as the first and second layers, respectively. The low-power signal-processing chip included array data acquisition circuits and a signal-processing core. The MWNT-polymer microsensor array can directly connect with the array data acquisition circuits, which comprise sensor interface circuitry and an analog-to-digital converter; the signal-processing core consists of memory and a microprocessor. The core executes the program, classifying the odor data received from the array data acquisition circuits. The low-power signal-processing chip was designed and fabricated using the Taiwan Semiconductor Manufacturing Company 0.18-μm 1P6M standard complementary metal oxide semiconductor process. The chip consumes only 1.05 mW of power at supply voltages of 1 and 1.8 V for the array data acquisition circuits and the signal-processing core, respectively. The miniature e-nose system, which used a microsensor array, a low-power signal-processing chip, and an embedded k-nearest-neighbor-based pattern recognition algorithm, was developed as a prototype that successfully recognized the complex odors of tincture, sorghum wine, sake, whisky, and vodka.
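
    The embedded pattern-recognition step named above is k-nearest neighbors; a minimal sketch on made-up two-feature sensor readings (the feature values and class labels below are illustrative, not measured data from the paper).

```python
# k-nearest-neighbor classification: majority vote among the k
# training samples closest (Euclidean distance) to the query.
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Predict the label of x by majority vote of its k nearest neighbors."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Invented 2-feature sensor-array responses for two odor classes:
train_X = np.array([[0.9, 0.1], [1.0, 0.2], [0.1, 0.8], [0.2, 0.9]])
train_y = np.array(["whisky", "whisky", "sake", "sake"])
print(knn_predict(train_X, train_y, np.array([0.15, 0.85])))  # → sake
```

    k-NN needs no training phase beyond storing examples, which suits a memory-plus-microprocessor core like the one described.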

  19. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    PubMed Central

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  20. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong

    2015-04-01

    Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of the general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs together with singular perturbation technique are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted to solve a Hamilton-Jacobi-Bellman (HJB) equation. HJB equation is a nonlinear PDE that has proven to be impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online using neural network (NN) for approximating the value function; and an online NN weight tuning law is proposed without requiring an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of high-speed aerospace vehicle, and the achieved results show its effectiveness. PMID:25794375
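
    The first step above, Karhunen-Loève decomposition by the method of snapshots, can be sketched with an SVD: the leading left singular vectors of the snapshot matrix are the empirical eigenfunctions. The snapshot data below are synthetic (two known spatial modes plus noise), purely for illustration.

```python
# Karhunen-Loève / POD via SVD of a snapshot matrix: columns are
# spatial snapshots; leading left singular vectors are the EEFs.
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_snap = 100, 40
x = np.linspace(0.0, 1.0, n_grid)

# Synthetic snapshots dominated by two spatial modes (illustrative data).
snapshots = (np.outer(np.sin(np.pi * x), rng.standard_normal(n_snap))
             + 0.3 * np.outer(np.sin(2 * np.pi * x), rng.standard_normal(n_snap)))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)          # cumulative energy fraction
n_modes = int(np.searchsorted(energy, 0.99)) + 1  # modes for 99% of energy
print(n_modes)  # the two planted modes capture essentially all the energy
```

    Truncating to these few modes is what yields the low-dimensional slow subsystem on which the HJB problem is then posed.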

  2. Coevolution of information processing and topology in hierarchical adaptive random Boolean networks

    NASA Astrophysics Data System (ADS)

    Górski, Piotr J.; Czaplicka, Agnieszka; Hołyst, Janusz A.

    2016-02-01

    Random Boolean Networks (RBNs) are frequently used for modeling complex systems driven by information processing, e.g. gene regulatory networks (GRNs). Here we propose a hierarchical adaptive random Boolean Network (HARBN) as a system consisting of distinct adaptive RBNs (ARBNs) - subnetworks - connected by a set of permanent interlinks. We investigate mean node information, mean edge information as well as mean node degree. Information measures and the internal subnetwork topology of HARBNs coevolve and reach steady states that are specific for a given network structure. The main natural feature of ARBNs, i.e. their adaptability, is preserved in HARBNs, and they evolve towards critical configurations, as documented by power-law distributions of network attractor lengths. The mean information processed by a single node or a single link increases with the number of interlinks added to the system. The mean length of network attractors and the mean steady-state connectivity possess minima for certain specific values of the quotient between the density of interlinks and the density of all links in the network. This means that the modular network displays extremal values of its observables when subnetworks are connected with a density a few times lower than the mean density of all links.
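
    A classical (non-hierarchical, non-adaptive) RBN and its attractor length can be sketched as below; this is the standard Kauffman-style model the paper builds on, not the HARBN itself, and all sizes and seeds are illustrative.

```python
# Minimal random Boolean network: N nodes, each updated from K random
# inputs through a fixed random Boolean function; the attractor length
# is found by deterministic cycle detection on the state trajectory.
import random

def make_rbn(n, k, seed=42):
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: each node reads its K inputs into its table."""
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(len(state))
    )

def attractor_length(n=12, k=2, seed=42):
    inputs, tables = make_rbn(n, k, seed)
    state = tuple(random.Random(seed + 1).randrange(2) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:       # dynamics are deterministic, so the
        seen[state] = t            # first repeated state closes a cycle
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]

print(attractor_length())
```

    K = 2 is the classical critical connectivity; sampling attractor lengths over many random networks is how distributions like the power laws mentioned above are measured.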

  3. ADAPT: building conceptual models of the physical and biological processes across permafrost landscapes

    NASA Astrophysics Data System (ADS)

    Allard, M.; Vincent, W. F.; Lemay, M.

    2012-12-01

    Fundamental and applied permafrost research is called upon in Canada to support environmental protection and economic development, and to contribute to international efforts to understand the climatic and ecological feedbacks of permafrost thawing under a warming climate. The five-year "Arctic Development and Adaptation to Permafrost in Transition" program (ADAPT), funded by NSERC, brings together 14 scientists from 10 Canadian universities and involves numerous collaborators from academia, territorial and provincial governments, Inuit communities and industry. The geographical coverage of the program encompasses all of the permafrost regions of Canada. Field research at a series of sites across the country is being coordinated. A common protocol for measuring the ground thermal and moisture regime and characterizing terrain conditions (vegetation, topography, surface water regime and soil organic matter content) is being applied in order to provide inputs for designing a general model of the transfers of energy and matter in permafrost terrain, and the implications for biological and human systems. The ADAPT mission is to produce an 'Integrated Permafrost Systems Science' framework that will be used to help generate sustainable development and adaptation strategies for the North in the context of rapid socio-economic and climate change. ADAPT has three major objectives: to examine how changing precipitation and warming temperatures affect permafrost geosystems and ecosystems, specifically by testing hypotheses concerning the influence of the snowpack, the effects of water as a conveyor of heat, sediments and carbon in warming permafrost terrain, and the processes of permafrost decay; to interact directly with Inuit communities, the public sector and the private sector on development and adaptation to changes in permafrost environments; and to train the new generation of experts and scientists in this critical domain of research in Canada.

  4. Improved electromagnetic induction processing with novel adaptive matched filter and matched subspace detection

    NASA Astrophysics Data System (ADS)

    Hayes, Charles E.; McClellan, James H.; Scott, Waymond R.; Kerr, Andrew J.

    2016-05-01

    This work introduces two advances in wide-band electromagnetic induction (EMI) processing: a novel adaptive matched filter (AMF) and matched subspace detection methods. Both advances make use of recent work with a subspace SVD approach to separating the signal, soil, and noise subspaces of the frequency measurements. The proposed AMF provides a direct approach to removing the EMI self-response while improving the signal-to-noise ratio of the data. Unlike previous EMI adaptive downtrack filters, this new filter will not erroneously optimize the EMI soil response instead of the EMI target response, because these two responses are projected into separate frequency subspaces. The EMI detection methods in this work elaborate on how the signal and noise subspaces in the frequency measurements are ideal for creating the matched subspace detection (MSD) and constant false alarm rate matched subspace detection (CFAR) metrics developed by Scharf. The CFAR detection metric has been shown to be the uniformly most powerful invariant detector.
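
    The subspace split and the CFAR matched subspace statistic can be sketched numerically. This is a minimal illustration on synthetic data, not the authors' EMI pipeline: the rank-3 signal subspace, the array sizes, and the SVD-based split are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic wide-band EMI frequency measurements: rows = observations.
n_freq, n_obs = 21, 200
X = rng.standard_normal((n_obs, n_freq))

# SVD-based split (illustrative): the leading right-singular vectors span
# an assumed rank-3 "signal" subspace; the remainder is treated as noise.
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
r = 3
H = Vt[:r].T                              # (n_freq, r) signal-subspace basis

def cfar_msd(y, H):
    """Scharf-style CFAR matched subspace statistic: energy of y inside
    the signal subspace divided by the energy left outside it."""
    P = H @ np.linalg.pinv(H)             # orthogonal projector onto span(H)
    e_sig = np.linalg.norm(P @ y) ** 2
    e_noise = np.linalg.norm(y - P @ y) ** 2
    return e_sig / (e_noise + 1e-12)

# A measurement lying in the signal subspace scores far above pure noise.
target = H @ rng.standard_normal(r)
noise = rng.standard_normal(n_freq)
print(cfar_msd(target, H), cfar_msd(noise, H))
```

    Because the statistic is a ratio of in-subspace to out-of-subspace energies, it is invariant to an unknown overall noise level, which is what makes the detector CFAR.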

  5. Two Adaptation Processes in Auditory Hair Cells Together Can Provide an Active Amplifier

    PubMed Central

    Vilfan, Andrej; Duke, Thomas

    2003-01-01

    The hair cells of the vertebrate inner ear convert mechanical stimuli to electrical signals. Two adaptation mechanisms are known to modify the ionic current flowing through the transduction channels of the hair bundles: a rapid process involves Ca2+ ions binding to the channels; and a slower adaptation is associated with the movement of myosin motors. We present a mathematical model of the hair cell which demonstrates that the combination of these two mechanisms can produce “self-tuned critical oscillations”, i.e., maintain the hair bundle at the threshold of an oscillatory instability. The characteristic frequency depends on the geometry of the bundle and on the Ca2+ dynamics, but is independent of channel kinetics. Poised on the verge of vibrating, the hair bundle acts as an active amplifier. However, if the hair cell is sufficiently perturbed, other dynamical regimes can occur. These include slow relaxation oscillations which resemble the hair bundle motion observed in some experimental preparations. PMID:12829475

  6. Performance-Based Adaptive Fuzzy Tracking Control for Networked Industrial Processes.

    PubMed

    Wang, Tong; Qiu, Jianbin; Yin, Shen; Gao, Huijun; Fan, Jialu; Chai, Tianyou

    2016-08-01

    In this paper, the performance-based control design problem for double-layer networked industrial processes is investigated. At the device layer, prescribed performance functions are first given to describe the output tracking performance, and then, by using the backstepping technique, new adaptive fuzzy controllers are designed to guarantee the tracking performance under the effects of input dead-zone and the constraint of the prescribed tracking performance functions. At the operation layer, by considering the stochastic disturbance, actual index value, target index value, and index prediction simultaneously, an adaptive inverse optimal controller in discrete-time form is designed to optimize the overall performance and stabilize the overall nonlinear system. Finally, a simulation example of a continuous stirred tank reactor system is presented to show the effectiveness of the proposed control method.

  7. ERP and Adaptive Autoregressive identification with spectral power decomposition to study rapid auditory processing in infants.

    PubMed

    Piazza, C; Cantiani, C; Tacchino, G; Molteni, M; Reni, G; Bianchi, A M

    2014-01-01

    The ability to process rapidly-occurring auditory stimuli plays an important role in the mechanisms of language acquisition. For this reason, the research community has begun to investigate infant auditory processing, particularly using the Event Related Potentials (ERP) technique. In this paper we approach this issue by means of time domain and time-frequency domain analysis. For the latter, we propose the use of Adaptive Autoregressive (AAR) identification with spectral power decomposition. Results show EEG delta-theta oscillation enhancement related to the processing of acoustic frequency and duration changes, suggesting that, as expected, power modulation encodes rapid auditory processing (RAP) in infants and that the time-frequency analysis method proposed is able to identify this modulation.
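
    As a rough illustration of adaptive autoregressive identification, the sketch below tracks AR coefficients with exponentially weighted recursive least squares and reads off a power spectrum from the final model. The signal, model order, forgetting factor, and sampling rate are all hypothetical; the paper's exact AAR estimator and spectral power decomposition may differ.

```python
import numpy as np

def aar_rls(x, p=6, lam=0.99):
    """Track AR(p) coefficients of signal x with exponentially weighted
    recursive least squares (forgetting factor lam)."""
    a = np.zeros(p)
    P = np.eye(p) * 1e3
    coeffs = []
    for t in range(p, len(x)):
        phi = x[t - p:t][::-1]                 # most recent p samples first
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        e = x[t] - a @ phi                     # one-step prediction error
        a = a + k * e
        P = (P - np.outer(k, phi @ P)) / lam
        coeffs.append(a.copy())
    return np.array(coeffs)

def ar_power(a, freqs, fs=128.0):
    """Power spectrum of the identified AR model at the given frequencies."""
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1 - sum(a[k] * z ** (k + 1) for k in range(len(a)))
    return 1.0 / np.abs(denom) ** 2

# Toy EEG-like signal: a 5 Hz (theta-band) tone plus noise, fs = 128 Hz.
fs = 128.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
a_t = aar_rls(x)
freqs = np.linspace(1, 20, 200)
spec = ar_power(a_t[-1], freqs, fs)
print(freqs[np.argmax(spec)])   # peaks near the 5 Hz component
```

    The per-sample coefficient track `a_t` is what makes the method time-frequency: evaluating `ar_power` at each step yields a spectrum that follows non-stationary oscillations.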

  8. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    PubMed

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery.

  9. General and craniofacial development are complex adaptive processes influenced by diversity.

    PubMed

    Brook, A H; O'Donnell, M Brook; Hone, A; Hart, E; Hughes, T E; Smith, R N; Townsend, G C

    2014-06-01

    Complex systems are present in such diverse areas as social systems, economies, ecosystems and biology and, therefore, are highly relevant to dental research, education and practice. A Complex Adaptive System in biological development is a dynamic process in which, from interacting components at a lower level, higher level phenomena and structures emerge. Diversity makes substantial contributions to the performance of complex adaptive systems. It enhances the robustness of the process, allowing multiple responses to external stimuli as well as internal changes. From diversity comes variation in outcome and the possibility of major change; outliers in the distribution enhance the tipping points. The development of the dentition is a valuable, accessible model with extensive and reliable databases for investigating the role of complex adaptive systems in craniofacial and general development. The general characteristics of such systems are seen during tooth development: self-organization; bottom-up emergence; multitasking; self-adaptation; variation; tipping points; critical phases; and robustness. Dental findings are compatible with the Random Network Model, the Threshold Model and also with the Scale Free Network Model which has a Power Law distribution. In addition, dental development shows the characteristics of Modularity and Clustering to form Hierarchical Networks. The interactions between the genes (nodes) demonstrate Small World phenomena, Subgraph Motifs and Gene Regulatory Networks. Genetic mechanisms are involved in the creation and evolution of variation during development. The genetic factors interact with epigenetic and environmental factors at the molecular level and form complex networks within the cells. From these interactions emerge the higher level tissues, tooth germs and mineralized teeth. 
Approaching development in this way allows investigation of why there can be variations in phenotypes from identical genotypes; the phenotype is the outcome

  10. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system

    PubMed Central

    Schrode, Katrina M.; Bee, Mark A.

    2015-01-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male–male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  11. Maternal migration and child health: An analysis of disruption and adaptation processes in Benin

    PubMed Central

    Smith-Greenaway, Emily; Madhavan, Sangeetha

    2016-01-01

    Children of migrant mothers have lower vaccination rates compared to their peers with non-migrant mothers in low-income countries. Explanations for this finding are typically grounded in the disruption and adaptation perspectives of migration. Researchers argue that migration is a disruptive process that interferes with women’s economic well-being and social networks, and ultimately their health-seeking behaviors. With time, however, migrant women adapt to their new settings, and their health behaviors improve. Despite prominence in the literature, no research tests the salience of these perspectives to the relationship between maternal migration and child vaccination. We innovatively leverage Demographic and Health Survey data to test the extent to which disruption and adaptation processes underlie the relationship between maternal migration and child vaccination in the context of Benin—a West African country where migration is common and child vaccination rates have declined in recent years. By disaggregating children of migrants according to whether they were born before or after their mother’s migration, we confirm that migration does not lower children’s vaccination rates in Benin. In fact, children born after migration enjoy a higher likelihood of vaccination, whereas their peers born in the community from which their mother eventually migrates are less likely to be vaccinated. Although we find no support for the disruption perspective of migration, we do find evidence of adaptation: children born after migration have an increased likelihood of vaccination the longer their mother resides in the destination community prior to their birth. PMID:26463540

  13. Real-time processing of fast-scan cyclic voltammetry (FSCV) data using a field-programmable gate array (FPGA).

    PubMed

    Bozorgzadeh, Bardia; Covey, Daniel P; Heidenreich, Byron A; Garris, Paul A; Mohseni, Pedram

    2014-01-01

    This paper reports the hardware implementation of a digital signal processing (DSP) unit for real-time processing of data obtained by fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode (CFM), an electrochemical transduction technique for high-resolution monitoring of brain neurochemistry. Implemented on a field-programmable gate array (FPGA), the DSP unit comprises a decimation filter and an embedded processor to process the oversampled FSCV data and obtain in real time a temporal profile of concentration variation along with a chemical signature to identify the target neurotransmitter. Interfaced with an integrated, FSCV-sensing front-end, the DSP unit can successfully process FSCV data obtained by bolus injection of dopamine in a flow cell as well as electrically evoked, transient dopamine release in the dorsal striatum of an anesthetized rat. PMID:25570384
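
    The decimation stage of such a DSP unit can be approximated in software. The sketch below uses a CIC-style integrator-comb cascade, a common FPGA-friendly decimator; the actual filter, rates, and bit widths of the reported design are not specified here, so all parameters are illustrative.

```python
import numpy as np

def decimate_cic(x, R, N=3):
    """N-stage CIC-style decimator: N cascaded integrators at the input
    rate, decimation by R, then N cascaded combs at the output rate.
    Mathematically equivalent to N cascaded length-R moving averages."""
    y = x.astype(np.float64)
    for _ in range(N):                # integrator stages
        y = np.cumsum(y)
    y = y[::R]                        # rate reduction
    for _ in range(N):                # comb stages (differential delay 1)
        y = np.diff(y, prepend=0.0)
    return y / R ** N                 # normalize the DC gain of R**N

# Oversampled FSCV-like waveform: a 400 Hz tone in noise at 1 MS/s,
# decimated by 100 down to 10 kS/s.
fs = 1_000_000
t = np.arange(10_000) / fs
x = np.sin(2 * np.pi * 400 * t) + 0.05 * np.random.default_rng(2).standard_normal(t.size)
y = decimate_cic(x, R=100)
print(len(y))
```

    The first few output samples are start-up transients of the comb stages; in a streaming FPGA implementation these disappear once the pipeline is primed.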

  15. An application of space-time adaptive processing to airborne and spaceborne monostatic and bistatic radar systems

    NASA Astrophysics Data System (ADS)

    Czernik, Richard James

    A challenging problem faced by Ground Moving Target Indicator (GMTI) radars on both airborne and spaceborne platforms is the detection of slow-moving targets in the presence of non-stationary and heterogeneous ground clutter returns. Space-Time Adaptive Processing (STAP) techniques process the spatial signals from an antenna array and the radar pulses simultaneously to help mitigate this clutter, which has an inherent Doppler shift due to radar platform motion, as well as spreading across angle-Doppler space attributable to a variety of factors. Problems such as clutter aliasing, widening of the clutter notch, and range dependency add further complexity when the radar is bistatic, and vary significantly as the bistatic radar geometry changes with respect to the targeted location. The most difficult situation is that of a spaceborne radar system, due to its high velocity and altitude with respect to the earth. A spaceborne system does, however, offer several advantages over an airborne system, such as the ability to cover wide areas and to provide access to areas denied to airborne platforms. This dissertation examines both monostatic and bistatic radar performance based upon a computer simulation developed by the author, and explores the use of both optimal STAP and reduced-dimension STAP architectures to mitigate the modeled clutter returns. Factors such as broadband jamming, wind, and earth rotation are considered, along with their impact on the interference covariance matrix, constructed from sample training data. Calculation of the covariance matrix in near real time based upon extracted training data is processor intensive, and reduced-dimension STAP architectures relieve some of the computational burden. The problems resulting from extending both monostatic and bistatic radar systems to space are also simulated and studied.
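
    The core of the optimal STAP described above is the weight vector w = R⁻¹s built from the interference covariance matrix R and the space-time steering vector s. A minimal synthetic sketch, with an assumed side-looking geometry, element/pulse counts, and clutter powers chosen only for illustration:

```python
import numpy as np

N, M = 8, 10                      # antenna elements, coherent pulses

def steering(theta, fd):
    """Space-time steering vector: Kronecker product of the temporal
    (normalized Doppler fd) and spatial (normalized angle theta) ramps."""
    a = np.exp(2j * np.pi * theta * np.arange(N))
    b = np.exp(2j * np.pi * fd * np.arange(M))
    return np.kron(b, a)

# Side-looking geometry: clutter Doppler is coupled to angle (fd = theta),
# producing the diagonal clutter ridge in angle-Doppler space.
R = np.eye(N * M, dtype=complex)                # thermal noise floor
for th in np.linspace(-0.45, 0.45, 60):
    v = steering(th, th)
    R += 100.0 * np.outer(v, v.conj())          # clutter patch power

s = steering(0.1, -0.25)                        # slow target off the ridge
w = np.linalg.solve(R, s)                       # optimal weights R^{-1} s

def sinr(w):
    """Output SINR: signal power over interference-plus-noise power."""
    return np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w)

print(sinr(w) > sinr(s))    # adaptivity beats the non-adaptive matched filter
```

    Reduced-dimension STAP methods replace the full N·M-dimensional solve with a solve in a transformed, lower-dimensional space, which is what relieves the covariance-estimation and inversion burden mentioned above.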

  16. Flexible Description and Adaptive Processing of Earth Observation Data through the BigEarth Platform

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Bacu, Victor; Stefanut, Teodor; Nandra, Cosmin; Mihon, Danut

    2016-04-01

    Earth Observation data repositories, growing periodically by several terabytes, have become a critical issue for organizations. Managing the storage capacity of such big datasets, access policies, data protection, searching, and complex processing entails high costs, which calls for efficient solutions that balance the cost and value of data. Data can create value only when it is used, and data protection has to be oriented toward allowing innovation, which sometimes depends on creative people achieving unexpected valuable results in a flexible and adaptive manner. Users need to describe and experiment with different complex algorithms themselves, through analytics, in order to valorize data. Analytics uses descriptive and predictive models to gain valuable knowledge and information from data analysis. Possible solutions for advanced processing of big Earth Observation data are given by HPC platforms such as the cloud. With platforms becoming more complex and heterogeneous, developing applications is even harder, and the efficient mapping of these applications to a suitable and optimal platform, working on huge distributed data repositories, is challenging and complex as well, even when using specialized software services. From the user's point of view, an optimal environment gives acceptable execution times, offers a high level of usability by hiding the complexity of the computing infrastructure, and supports open accessibility and control of application entities and functionality. The BigEarth platform [1] supports the entire flow of flexible description of processing by basic operators and adaptive execution over cloud infrastructure [2]. The basic modules of the pipeline, such as the KEOPS [3] set of basic operators, the WorDeL language [4], the Planner for sequential and parallel processing, and the Executor through virtual machines, are detailed as the main components of the BigEarth platform [5].
The presentation exemplifies the development

  17. Can survival processing enhance story memory? Testing the generalizability of the adaptive memory framework.

    PubMed

    Seamon, John G; Bohn, Justin M; Coddington, Inslee E; Ebling, Maritza C; Grund, Ethan M; Haring, Catherine T; Jang, Sue-Jung; Kim, Daniel; Liong, Christopher; Paley, Frances M; Pang, Luke K; Siddique, Ashik H

    2012-07-01

    Research from the adaptive memory framework shows that thinking about words in terms of their survival value in an incidental learning task enhances their free recall relative to other semantic encoding strategies and intentional learning (Nairne, Pandeirada, & Thompson, 2008). We found similar results. When participants used incidental survival encoding for a list of words (e.g., "Will this object enhance my survival if I were stranded in the grasslands of a foreign land?"), they produced better free recall on a surprise test than did participants who intentionally tried to remember those words (Experiment 1). We also found this survival processing advantage when the words were presented within the context of a survival or neutral story (Experiment 2). However, this advantage did not extend to memory for a story's factual content, regardless of whether the participants were tested by cued recall (Experiment 3) or free recall (Experiments 4-5). Listening to a story for understanding under intentional or incidental learning conditions was just as good as survival processing for remembering story content. The functionalist approach to thinking about memory as an evolutionary adaptation designed to solve reproductive fitness problems provides a different theoretical framework for research, but it is not yet clear if survival processing has general applicability or is effective only for processing discrete stimuli in terms of fitness-relevant scenarios from our past. PMID:22288816

  18. Light absorption processes and optimization of ZnO/CdTe core-shell nanowire arrays for nanostructured solar cells.

    PubMed

    Michallon, Jérôme; Bucci, Davide; Morand, Alain; Zanuccoli, Mauro; Consonni, Vincent; Kaminski-Cachopo, Anne

    2015-02-20

    The absorption processes of extremely thin absorber solar cells based on ZnO/CdTe core-shell nanowire (NW) arrays with square, hexagonal or triangular arrangements are investigated through systematic computations of the ideal short-circuit current density using three-dimensional rigorous coupled wave analysis. The geometrical dimensions are optimized for optically designing these solar cells: the optimal NW diameter, height and array period are of 200 ± 10 nm, 1-3 μm and 350-400 nm for the square arrangement with CdTe shell thickness of 40-60 nm. The effects of the CdTe shell thickness on the absorption of ZnO/CdTe NW arrays are revealed through the study of two optical key modes: the first one is confining the light into individual NWs, the second one is strongly interacting with the NW arrangement. It is also shown that the reflectivity of the substrate can improve Fabry-Perot resonances within the NWs: the ideal short-circuit current density is increased by 10% for the ZnO/fluorine-doped tin oxide (FTO)/ideal reflector as compared to the ZnO/FTO/glass substrate. Furthermore, the optimized square arrangement absorbs light more efficiently than both optimized hexagonal and triangular arrangements. Eventually, the enhancement factor of the ideal short-circuit current density is calculated as high as 1.72 with respect to planar layers, showing the high optical potentiality of ZnO/CdTe core-shell NW arrays. PMID:25629373
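
    The ideal short-circuit current density used as the figure of merit above is the charge flux of absorbed photons, J_sc = q ∫ A(λ) Φ(λ) dλ. A toy numerical version with an assumed flat absorptance and a crude constant stand-in for the AM1.5G spectrum (the paper computes the wavelength-dependent absorptance rigorously with three-dimensional RCWA):

```python
import numpy as np

q = 1.602e-19                        # elementary charge [C]
lam = np.linspace(300, 850, 200)     # CdTe absorbs up to ~850 nm (Eg ≈ 1.5 eV)
dlam = lam[1] - lam[0]

# Assumed flat 80% absorptance and a constant stand-in for the AM1.5G
# photon flux [photons m^-2 s^-1 nm^-1]; both are illustrative only.
A = 0.8 * np.ones_like(lam)
phi = 3e18 * np.ones_like(lam)

# Ideal short-circuit current: every absorbed photon yields one carrier.
J_sc = q * np.sum(A * phi) * dlam    # [A/m^2]
print(J_sc / 10)                     # J_sc in mA/cm^2
```

    Swapping in the RCWA-computed A(λ) for each candidate geometry and maximizing J_sc over diameter, height, and period is the optimization loop the abstract describes.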

  19. Integration of a detector array with an optical waveguide structure and applications to signal processing

    NASA Astrophysics Data System (ADS)

    Boyd, J. T.; Ramey, D. A.; Chen, C. L.; Naumaan, A.; Dutta, S.

    1981-08-01

    Both planar thin film and channel optical waveguides have been integrated with charge-coupled devices (CCDs). Coupling of light from the waveguide region to the detector elements utilizes a smooth and uniformly-tapered region of SiO2 to minimize scattering. CCD transfer inefficiency of 1.0 × 10⁻⁴ is consistently obtained for a number of devices. A channel waveguide array formed in a fan-out pattern is introduced as a means of enhancing focal plane resolution in integrated optical devices using optical waveguide lenses. High spatial resolution can thus be obtained without making detector spacings too small, thus avoiding detector problems with regard to fabrication, crosstalk, linearity, and charge transfer inefficiency. Operation of an integrated optical channel waveguide array-CCD transversal filter is reported. Channel waveguides formed in V-grooves couple directly to the sensor elements of the four-phase, double-polysilicon CCD. Experimental results include a filter transfer function having good agreement with theoretical results. The voltage contrast mode of a scanning electron microscope (SEM) is utilized to observe CCDs which have been cross sectioned. A new cross-sectioning technique which uses anisotropic etching to accurately define the axis along which fracture occurs is presented.

  20. Phonon processes in vertically aligned silicon nanowire arrays produced by low-cost all-solution galvanic displacement method

    NASA Astrophysics Data System (ADS)

    Banerjee, Debika; Trudeau, Charles; Gerlein, Luis Felipe; Cloutier, Sylvain G.

    2016-03-01

    The nanoscale engineering of silicon can significantly change its bulk optoelectronic properties to make it more favorable for device integration. Phonon process engineering is one way to enhance inter-band transitions in silicon's indirect band structure alignment. This paper demonstrates phonon localization at the tip of silicon nanowires fabricated by galvanic displacement using wet electroless chemical etching of a bulk silicon wafer. High-resolution Raman micro-spectroscopy reveals that such arrayed structures of silicon nanowires display phonon localization behaviors, which could help their integration into the future generations of nano-engineered silicon nanowire-based devices such as photodetectors and solar cells.

  1. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Choi, Yong; Hong, Key Jo; Kang, Jihoon; Jung, Jin Ho; Huh, Youn Suk; Lim, Hyun Keong; Kim, Sang Su; Kim, Byung-Tae; Chung, Yonghyun

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time-to-digital converters (TDC) have been employed to detect gamma ray signal arrival times, whereas Anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. As compared to PMTs, Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMT and Anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4×4 array, pixel size: 3 mm×3 mm) coupled 1:1 with LYSO scintillators (4×4 array, pixel size: 3 mm×3 mm×20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce the GAPD channel number, and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed to detect the gamma ray signal arrival time, energy and position information together for each GAPD channel. In the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing the pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and then transferred to a host computer for coincidence sorting and image reconstruction. In order to
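
    The per-event quantities the FPGA extracts (arrival time, baseline, energy) can be sketched on a synthetic digitized pulse. The threshold-crossing and windowed-integration logic below is an assumption made for illustration, not the reported firmware:

```python
import numpy as np

def process_pulse(samples, threshold=50.0, baseline_len=8):
    """Sketch of per-event processing: estimate the baseline from the
    first samples, find the arrival index by threshold crossing, and
    integrate the baseline-corrected pulse as its energy."""
    baseline = samples[:baseline_len].mean()
    corrected = samples - baseline
    above = np.nonzero(corrected > threshold)[0]
    if above.size == 0:
        return None                      # no event in this frame
    arrival = int(above[0])
    energy = float(corrected[arrival:].sum())
    return {"arrival": arrival, "baseline": float(baseline), "energy": energy}

# Synthetic digitized gamma pulse: a flat baseline of 100 ADC counts plus
# an exponential tail starting at sample 20.
n = 64
t = np.arange(n, dtype=float)
pulse = np.where(t >= 20, 400.0 * np.exp(-(t - 20) / 6.0), 0.0)
frame = 100.0 + pulse
evt = process_pulse(frame)
print(evt["arrival"])                    # -> 20
```

    Emitting `(arrival, baseline, energy, channel_id)` tuples per event is what reduces the continuous free-running ADC stream to the compact data packages described above.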

  2. Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling.

    PubMed

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described, and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using a steady-state covariance analysis. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated by the random walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have similar performance in static conditions. In dynamic conditions, the test results showed that the first-order Markov process model could reduce the dynamic errors 20% more than the random walk model.
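
    A minimal version of the described fusion can be sketched as a scalar-state Kalman filter whose state is the angular rate, propagated with a first-order Markov (exponentially correlated) model and updated with the six-gyro measurement vector. All noise parameters below are invented for illustration and are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(4)
n_gyro, dt, tau = 6, 0.01, 1.0
phi = np.exp(-dt / tau)              # first-order Markov transition
q = 1e-4                             # process noise variance (assumed)
r = 0.05 ** 2                        # per-gyro measurement noise variance

# Simulate a slowly varying true rate and six noisy gyro outputs.
steps = 2000
omega = np.zeros(steps)
for k in range(1, steps):
    omega[k] = phi * omega[k - 1] + np.sqrt(q) * rng.standard_normal()
z = omega[:, None] + np.sqrt(r) * rng.standard_normal((steps, n_gyro))

# Scalar-state Kalman filter over the 6-gyro measurement vector.
H = np.ones(n_gyro)
x, P = 0.0, 1.0
est = np.empty(steps)
for k in range(steps):
    x, P = phi * x, phi * phi * P + q            # predict with Markov model
    S = P * np.outer(H, H) + r * np.eye(n_gyro)  # innovation covariance
    K = P * H @ np.linalg.inv(S)                 # Kalman gain (1 x 6)
    x = x + K @ (z[k] - H * x)                   # update
    P = (1 - K @ H) * P
    est[k] = x

mean_err = np.mean((est - omega) ** 2)
naive_err = np.mean((z.mean(axis=1) - omega) ** 2)
print(mean_err < naive_err)   # KF beats simple averaging of the array
```

    The gain over plain averaging comes from the prediction step: the Markov model carries rate information across samples, so each update starts from a better prior than the measurement alone.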

  3. Adapting Semantic Natural Language Processing Technology to Address Information Overload in Influenza Epidemic Management

    PubMed Central

    Keselman, Alla; Rosemblat, Graciela; Kilicoglu, Halil; Fiszman, Marcelo; Jin, Honglan; Shin, Dongwook; Rindflesch, Thomas C.

    2013-01-01

    Explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot-test in which two information specialists use the adapted application for a realistic information seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design. PMID:24311971

  4. OFDM Radar Space-Time Adaptive Processing by Exploiting Spatio-Temporal Sparsity

    SciTech Connect

    Sen, Satyabrata

    2013-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly-moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain. Hence, we exploit that sparsity to develop an efficient STAP technique that requires considerably fewer secondary data samples and produces performance equivalent to other existing STAP techniques. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves the target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both the clutter and jammer as the interfering sources. Then, we apply a residual sparse-recovery technique based on the LASSO estimator to estimate the target and interference covariance matrices, and subsequently compute the optimal STAP-filter weights. Our numerical results demonstrate a comparative performance analysis of the proposed sparse-STAP algorithm with four other existing STAP methods. Furthermore, we discover that the OFDM-STAP filter weights are adaptable to the frequency variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP performance improvement.
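
    The sparse-recovery step at the heart of such algorithms can be illustrated with a generic LASSO solver. The sketch below uses plain iterative soft-thresholding (ISTA) on a toy problem; it is a stand-in illustration, not the authors' residual sparse-recovery estimator, and the problem sizes are arbitrary:

```python
import numpy as np

def ista_lasso(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding (ISTA) for the LASSO problem
    minimize 0.5*||y - A x||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        x = x - g / L                      # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x
```

With far fewer measurements than unknowns, the l1 penalty recovers a sparse spectrum from a small number of "secondary data" samples, which is the property the abstract exploits.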

  5. Functional identification of biological neural networks using reservoir adaptation for point processes.

    PubMed

    Gürel, Tayfun; Rotter, Stefan; Egert, Ulrich

    2010-08-01

    The complexity of biological neural networks makes it difficult to relate their biophysical properties directly to the dynamics of their electrical activity. We present a reservoir computing approach for functionally identifying a biological neural network, i.e. for building an artificial system that is functionally equivalent to the reference biological network. Employing feed-forward and recurrent networks with fading memory, i.e. reservoirs, we propose a point-process-based learning algorithm to train the internal parameters of the reservoir and the connectivity between the reservoir and the memoryless readout neurons. Specifically, the model is an Echo State Network (ESN) with leaky integrator neurons, whose individual leakage time constants are also adapted. The proposed ESN algorithm learns a predictive model of stimulus-response relations in in vitro and simulated networks, i.e. it models their response dynamics. Receiver Operating Characteristic (ROC) curve analysis indicates that these ESNs can imitate the response signal of a reference biological network. Reservoir adaptation improved the performance of an ESN over readout-only training methods in many cases. This also held for adaptive feed-forward reservoirs, which had no recurrent dynamics. We demonstrate the predictive power of these ESNs on various tasks with cultured and simulated biological neural networks.
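
    A minimal echo state network with leaky-integrator neurons might look like the sketch below. It trains only the linear readout (the readout-only baseline the abstract improves on; the paper additionally adapts reservoir parameters and leak rates). The class name, reservoir size, leak rate, and spectral radius are illustrative assumptions:

```python
import numpy as np

class LeakyESN:
    """Minimal echo state network with leaky-integrator neurons.
    State update: s <- (1 - a)*s + a*tanh(W s + W_in u); linear readout."""

    def __init__(self, n_in, n_res, leak=0.3, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(n_res, n_res))
        # rescale the recurrent weights to spectral radius rho (echo state property)
        self.W = W * (rho / np.max(np.abs(np.linalg.eigvals(W))))
        self.W_in = rng.normal(scale=0.5, size=(n_res, n_in))
        self.leak = leak
        self.W_out = None

    def _run(self, U):
        s = np.zeros(self.W.shape[0])
        S = np.empty((len(U), len(s)))
        for t, u in enumerate(U):
            s = (1 - self.leak) * s + self.leak * np.tanh(self.W @ s + self.W_in @ u)
            S[t] = s
        return S

    def fit(self, U, Y, ridge=1e-6):
        S = self._run(U)
        # ridge-regression readout; the reservoir weights stay fixed
        self.W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ Y)
        return self

    def predict(self, U):
        return self._run(U) @ self.W_out
```

The fading memory of the leaky states lets the readout reconstruct recent input history, e.g. a one-step-delayed copy of the stimulus.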

  6. Serum testosterone levels and excessive erythrocytosis during the process of adaptation to high altitudes

    PubMed Central

    Gonzales, Gustavo F

    2013-01-01

    Populations living at high altitudes (HAs), particularly in the Peruvian Andes, are characterized by a mixture of subjects with erythrocytosis (haemoglobin 16-21 g dl−1) and subjects with excessive erythrocytosis (EE; haemoglobin >21 g dl−1). Elevated haemoglobin values are associated with chronic mountain sickness, a condition reflecting the lack of adaptation to HA. According to current data, native men from regions of HA are not adequately adapted to live at such altitudes if they have elevated serum testosterone levels. This seems to be due to an increased conversion of dehydroepiandrosterone sulphate (DHEAS) to testosterone. Men with erythrocytosis at HAs show higher serum androstenedione levels and a lower testosterone/androstenedione ratio than men with EE, suggesting reduced 17beta-hydroxysteroid dehydrogenase (17beta-HSD) activity. Lower 17beta-HSD activity via Δ4-steroid production in men with erythrocytosis at HA may protect against elevated serum testosterone levels, thus preventing EE. The higher conversion of DHEAS to testosterone in subjects with EE indicates increased 17beta-HSD activity via the Δ5-pathway. Currently, there are various situations in which people live (human biodiversity) with low or high haemoglobin levels at HA. Antiquity could be an important adaptation component for life at HA, and testosterone seems to participate in this process. PMID:23524530

  7. Simple process for building large homogeneous adaptable retarders made from polymeric materials.

    PubMed

    Delplancke, F; Sendrowicz, H; Bernaerd, R; Ebbeni, J

    1995-06-01

    A process for building large, homogeneous, adaptable retarders easily and at low cost is proposed and analyzed. This method is based on the property of high polymers of exhibiting variable birefringence as a function of applied stress, and on the possibility of freezing these stresses into the material by a thermal process. Various geometries for the applied forces make it possible to obtain a large range of birefringence profiles. In the process that we describe, composed bending leads to a linear birefringence profile. The superimposition of two pieces with identical but oppositely oriented profiles gives a homogeneous, constant retardation. This retardation can be adjusted by a relative displacement between the pieces. A precision of better than 1% over large areas (more than 3 cm in diameter) for a quarter-wave value has been obtained. The correct choice of material makes many applications possible over a large range of wavelengths.

  8. Adaptive Classification of Landscape Process and Function: An Integration of Geoinformatics and Self-Organizing Maps

    SciTech Connect

    Coleman, Andre M.

    2009-07-17

    The advanced geospatial information extraction and analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), provide a topology-preserving means for reducing and understanding complex data relationships in the landscape. The Adaptive Landscape Classification Procedure (ALCP) is presented as an adaptive and evolutionary capability in which varying types of data can be assimilated to address different management needs such as hydrologic response, erosion potential, habitat structure, instrumentation placement, and various forecast or what-if scenarios. This paper defines how the evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Establishing relationships among high-dimensional datasets through neurocomputing-based pattern recognition methods can help 1) resolve large volumes of data into a structured and meaningful form; 2) provide an approach for inferring landscape processes in areas that have limited data available but exhibit similar landscape characteristics; and 3) discover the value of individual variables or groups of variables that contribute to specific processes in the landscape. Classification of hydrologic patterns in the landscape is demonstrated.
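
    The topology-preserving grouping step of a SOM can be sketched in a few lines. The grid size, learning-rate schedule, and neighbourhood width below are illustrative assumptions, not the ALCP's actual configuration:

```python
import numpy as np

def train_som(data, grid=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing map: each input pulls its best-matching
    unit (and its neighbours on the grid) toward it, preserving topology."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.random((h, w, data.shape[1]))          # unit weight vectors
    yy, xx = np.mgrid[0:h, 0:w]                    # grid coordinates
    for t in range(n_iter):
        x = data[rng.integers(len(data))]          # random training sample
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                    # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5        # shrinking neighbourhood
        d = np.linalg.norm(W - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
        g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2.0 * sigma ** 2))
        W += lr * g[..., None] * (x - W)           # neighbourhood update
    return W
```

After training, nearby grid units respond to similar inputs, which is what makes the map useful for classifying landscape patterns.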

  9. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present in an image or when images vary in their characteristics due to different acquisition conditions. In such cases, parameters must be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. We propose to evaluate this comparison on a benchmark data set that contains challenging image distortions of increasing severity. This enables us to compare different standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
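
    As a minimal illustration of feedback-based parameter adaptation (not the authors' framework), consider tuning a binarization threshold until a scalar quality measure, here the foreground fraction compared against an abstract ground truth, is met. Function name and gains are assumptions:

```python
import numpy as np

def adapt_threshold(image, target_fill, t0=0.5, gain=0.5, n_iter=50):
    """Feedback loop: adjust a binarization threshold until the segmented
    foreground fraction matches `target_fill` (an abstract ground truth)."""
    t = t0
    for _ in range(n_iter):
        fill = float((image > t).mean())   # current quality measure
        err = fill - target_fill
        if abs(err) < 1e-3:                # close enough: stop adapting
            break
        t += gain * err                    # too much foreground -> raise threshold
        t = min(max(t, 0.0), 1.0)
    return t
```

The same loop structure applies to any parameter for which a cheap feedback measure of segmentation quality is available.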

  10. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  11. Adapted waveform analysis, wavelet packets, and local cosine libraries as a tool for image processing

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.; Woog, Lionel J.

    1995-09-01

    Adapted waveform analysis refers to a collection of FFT-like adapted transform algorithms. Given an image, these methods provide special matched collections of templates (orthonormal bases) enabling an efficient coding of the image. Perhaps the closest well-known example of such a coding method is provided by musical notation, where each segment of music is represented by a musical score made up of notes (templates) characterized by their duration, pitch, location, and amplitude; our method corresponds to transcribing the music in as few notes as possible. The extension to images and video is straightforward: we describe the image by collections of oscillatory patterns (paint-brush strokes) of various sizes, locations, and amplitudes using a variety of orthogonal bases. These basis functions are chosen from predefined libraries of oscillatory localized functions (trigonometric and wavelet-packet waveforms) so as to minimize the number of parameters needed to describe the object. These algorithms are of complexity N log N, opening the door to a large range of applications in signal and image processing, such as compression, feature extraction, denoising, and enhancement. In particular, we describe a class of special-purpose compressions for fingerprint images, as well as denoising tools for texture and noise extraction. We start by relating traditional Fourier methods to wavelet and wavelet-packet-based algorithms using a recent refinement of the windowed sine and cosine transforms. We then derive an adapted local sine transform, show its relation to wavelet and wavelet-packet analysis, and describe an analysis toolkit illustrating the merits of different adaptive and nonadaptive schemes.
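
    The "fewest notes" idea, representing an object by the smallest number of coefficients in an orthonormal basis, can be sketched with a plain Haar wavelet transform. This is a stand-in for the paper's wavelet-packet libraries and best-basis search, kept deliberately simple:

```python
import numpy as np

def haar(x):
    """Full orthonormal Haar decomposition of a length-2^n signal."""
    x = np.asarray(x, dtype=float)
    coeffs = []
    while len(x) > 1:
        coeffs.append((x[0::2] - x[1::2]) / np.sqrt(2))  # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2)             # running averages
    coeffs.append(x)
    return np.concatenate(coeffs[::-1])  # [approx, deepest details, ..., finest]

def ihaar(c):
    """Inverse of haar()."""
    a, pos = c[:1], 1
    while pos < len(c):
        d = c[pos:pos + len(a)]
        x = np.empty(2 * len(a))
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        a, pos = x, pos + len(d)
    return a

def compress(x, k):
    """Keep only the k largest-magnitude coefficients (the 'notes')."""
    c = haar(x)
    keep = np.argsort(np.abs(c))[-k:]
    sparse = np.zeros_like(c)
    sparse[keep] = c[keep]
    return ihaar(sparse)
```

A piecewise-constant signal aligned with the Haar tree is described exactly by a handful of coefficients, which is the compression effect the abstract describes for matched bases.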

  12. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications.

    PubMed

    Wlodarczyk, Krystian L; Bryce, Emma; Schwartz, Noah; Strachan, Mel; Hutson, David; Maier, Robert R J; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil; Kirk, Katherine; Hand, Duncan P

    2014-02-01

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  13. Primary Dendrite Array: Observations from Ground-Based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra N.; Grugel, Richard N.; Erdman, Robert G.; Poirier, David R.

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST). Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K per centimeter (MICAST6) and 28 K per centimeter (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micrometers per second) and a speed decrease (MICAST7-from 20 to 10 micrometers per second). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  14. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micron/s) and a speed decrease (MICAST7-from 20 to 10 micron/s). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  15. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications

    SciTech Connect

    Wlodarczyk, Krystian L. Maier, Robert R. J.; Hand, Duncan P.; Bryce, Emma; Hutson, David; Kirk, Katherine; Schwartz, Noah; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil; Strachan, Mel

    2014-02-15

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  16. From spin noise to systematics: stochastic processes in the first International Pulsar Timing Array data release

    NASA Astrophysics Data System (ADS)

    Lentati, L.; Shannon, R. M.; Coles, W. A.; Verbiest, J. P. W.; van Haasteren, R.; Ellis, J. A.; Caballero, R. N.; Manchester, R. N.; Arzoumanian, Z.; Babak, S.; Bassa, C. G.; Bhat, N. D. R.; Brem, P.; Burgay, M.; Burke-Spolaor, S.; Champion, D.; Chatterjee, S.; Cognard, I.; Cordes, J. M.; Dai, S.; Demorest, P.; Desvignes, G.; Dolch, T.; Ferdman, R. D.; Fonseca, E.; Gair, J. R.; Gonzalez, M. E.; Graikou, E.; Guillemot, L.; Hessels, J. W. T.; Hobbs, G.; Janssen, G. H.; Jones, G.; Karuppusamy, R.; Keith, M.; Kerr, M.; Kramer, M.; Lam, M. T.; Lasky, P. D.; Lassus, A.; Lazarus, P.; Lazio, T. J. W.; Lee, K. J.; Levin, L.; Liu, K.; Lynch, R. S.; Madison, D. R.; McKee, J.; McLaughlin, M.; McWilliams, S. T.; Mingarelli, C. M. F.; Nice, D. J.; Osłowski, S.; Pennucci, T. T.; Perera, B. B. P.; Perrodin, D.; Petiteau, A.; Possenti, A.; Ransom, S. M.; Reardon, D.; Rosado, P. A.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Siemens, X.; Smits, R.; Stairs, I.; Stappers, B.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Taylor, S. R.; Theureau, G.; Tiburzi, C.; Toomey, L.; Vallisneri, M.; van Straten, W.; Vecchio, A.; Wang, J.-B.; Wang, Y.; You, X. P.; Zhu, W. W.; Zhu, X.-J.

    2016-05-01

    We analyse the stochastic properties of the 49 pulsars that comprise the first International Pulsar Timing Array (IPTA) data release. We use Bayesian methodology, performing model selection to determine the optimal description of the stochastic signals present in each pulsar. In addition to spin-noise and dispersion-measure (DM) variations, these models can include timing noise unique to a single observing system, or frequency band. We show that the improved radio-frequency coverage and the presence of overlapping data from different observing systems in the IPTA data set enable us to separate both system- and band-dependent effects with much greater efficacy than in the individual pulsar timing array (PTA) data sets. For example, we show that PSR J1643-1224 has, in addition to DM variations, significant band-dependent noise that is coherent between PTAs, which we interpret as coming from time-variable scattering or refraction in the ionized interstellar medium. Failing to model these different contributions appropriately can dramatically alter the astrophysical interpretation of the stochastic signals observed in the residuals. In some cases, the spectral exponent of the spin-noise signal can vary from 1.6 to 4 depending upon the model, which has direct implications for the long-term sensitivity of the pulsar to a stochastic gravitational-wave (GW) background. By using a more appropriate model, however, we can greatly improve a pulsar's sensitivity to GWs. For example, including system- and band-dependent signals in the PSR J0437-4715 data set improves the upper limit on a fiducial GW background by ~60 per cent compared to a model that includes DM variations and spin-noise only.

  17. Using seismic array-processing to enhance observations of PcP waves to constrain lowermost mantle structure

    NASA Astrophysics Data System (ADS)

    Ventosa, S.; Romanowicz, B. A.

    2014-12-01

    The topography of the core-mantle boundary (CMB) and the structure and composition of the D" region are essential for understanding the interaction between the earth's mantle and core. A variety of seismic data-processing techniques have been used to detect and measure travel-times and amplitudes of the weak short-period teleseismic body-wave phases that interact with the CMB and D", which is crucial for constraining properties of the lowermost mantle at short wavelengths. The major challenges in enhancing these observations are: (1) increasing the signal-to-noise ratio of the target phases and (2) isolating them from unwanted neighboring phases. Seismic array-processing can address these problems by combining signals from groups of seismometers and exploiting information that allows the coherent signals to be separated from the noise. Here, we focus on the study of the Pacific large low-shear-velocity province (LLSVP) and surrounding areas using differential travel-times and amplitude ratios of the P and PcP phases, and their depth phases. In particular, we design scale-dependent slowness filters that do not compromise time-space resolution. This is a local delay-and-sum (i.e. slant-stack) approach implemented in the time-scale domain using the wavelet transform to enhance time-space resolution (i.e. reduce array aperture). We group stations from USArray and other nearby networks, and from Hi-Net and F-net in Japan, to define many overlapping local arrays. The aperture of each array varies mainly according to (1) the spatial-resolution target and (2) the slowness resolution required to isolate the target phases at each period. Once the target phases are well separated, we measure their differential travel-times and amplitude ratios, and we project these to the CMB. In this process, we carefully analyze and, when possible and significant, correct for the main sources of bias, i.e., mantle heterogeneities, earthquake mislocation and intrinsic attenuation. We illustrate our approach in a series of
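
    The delay-and-sum (slant-stack) operation at the core of this record can be sketched as follows; a coherent phase moving across the array at the chosen slowness stacks constructively, while other arrivals smear out. This toy version works in whole samples and omits the wavelet-domain refinement the authors describe:

```python
import numpy as np

def delay_and_sum(traces, dt, offsets, slowness):
    """Slant-stack: advance each trace by slowness * offset and average,
    enhancing a phase that moves across the array at that slowness.

    traces  : (n_stations, n_samples) array
    dt      : sample interval (s); offsets : station offsets; slowness : s per unit offset
    """
    n = len(traces)
    out = np.zeros(traces.shape[1])
    for tr, x in zip(traces, offsets):
        shift = int(round(slowness * x / dt))  # moveout in samples for this station
        out += np.roll(tr, -shift)             # align the phase, then sum
    return out / n
```

Scanning `slowness` over a range of trial values turns this into a simple beam former that separates phases by their apparent velocity.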

  18. Sequential growth of zinc oxide nanorod arrays at room temperature via a corrosion process: application in visible light photocatalysis.

    PubMed

    Iqbal, Danish; Kostka, Aleksander; Bashir, Asif; Sarfraz, Adnan; Chen, Ying; Wieck, Andreas D; Erbe, Andreas

    2014-11-12

    Many photocatalyst systems catalyze chemical reactions under ultraviolet (UV) illumination, because of its high photon energies. Activating inexpensive, widely available materials as photocatalyst using the intense visible part of the solar spectrum is more challenging. Here, nanorod arrays of the wide-band-gap semiconductor zinc oxide have been shown to act as photocatalysts for the aerobic photo-oxidation of organic dye Methyl Orange under illumination with red light, which is normally accessible only to narrow-band semiconductors. The homogeneous, 800-1000-nm-thick ZnO nanorod arrays show substantial light absorption (absorbances >1) throughout the visible spectral range. This absorption is caused by defect levels inside the band gap. Multiple scattering processes by the rods make the nanorods appear black. The dominantly crystalline ZnO nanorod structures grow in the (0001) direction, i.e., with the c-axis perpendicular to the surface of polycrystalline zinc. The room-temperature preparation route relies on controlled cathodic delamination of a weakly bound polymer coating from metallic zinc, an industrially produced and cheaply available substrate. Cathodic delamination is a sequential synthesis process, because it involves the propagation of a delamination front over the base material. Consequently, arbitrarily large sample surfaces can be nanostructured using this approach.

  19. Established and Adapted Diagnostic Tools for Investigation of a Special Twin-Wire Arc Spraying Process

    NASA Astrophysics Data System (ADS)

    König, Johannes; Lahres, Michael; Zimmermann, Stephan; Schein, Jochen

    2016-10-01

    In the LDS® ( Lichtbogendrahtspritzen) process, a twin-wire arc spraying (TWAS) process developed by Daimler AG, the gas injection and feed to the arc play a crucial role in separating the molten particles from the wire ends. This paper describes an investigation of the gas and particle behavior according to individual LDS® process parameters. Coating problems are not considered. The measurements are separated into two different parts: "cold" (without arc and particles) and "hot" (with arc and particles). The results provide the first detailed understanding of the effect of different LDS® process parameters. A correlation between the gas parameter settings and the particle beam properties was found. Using established and adapted diagnostic tools, as also applied for conventional TWAS processes, this special LDS® process was investigated and the results (gas and particle behavior) validated, thereby allowing explanation and comparison of the diagnostic methods, which is the main focus of this paper. Based on error analysis, individual instabilities, limits, and deviations during the gas determinations and particle measurements are explained in more detail. The paper concludes with presentation of the first particle-shadow diagnostic results and main statements regarding these investigations.

  20. Established and Adapted Diagnostic Tools for Investigation of a Special Twin-Wire Arc Spraying Process

    NASA Astrophysics Data System (ADS)

    König, Johannes; Lahres, Michael; Zimmermann, Stephan; Schein, Jochen

    2016-09-01

    In the LDS® (Lichtbogendrahtspritzen) process, a twin-wire arc spraying (TWAS) process developed by Daimler AG, the gas injection and feed to the arc play a crucial role in separating the molten particles from the wire ends. This paper describes an investigation of the gas and particle behavior according to individual LDS® process parameters. Coating problems are not considered. The measurements are separated into two different parts: "cold" (without arc and particles) and "hot" (with arc and particles). The results provide the first detailed understanding of the effect of different LDS® process parameters. A correlation between the gas parameter settings and the particle beam properties was found. Using established and adapted diagnostic tools, as also applied for conventional TWAS processes, this special LDS® process was investigated and the results (gas and particle behavior) validated, thereby allowing explanation and comparison of the diagnostic methods, which is the main focus of this paper. Based on error analysis, individual instabilities, limits, and deviations during the gas determinations and particle measurements are explained in more detail. The paper concludes with presentation of the first particle-shadow diagnostic results and main statements regarding these investigations.

  1. [Dynamics of adaptation processes and morbidity risk for the population of the territory of industrial cities].

    PubMed

    Prusakov, V M; Prusakova, A V

    2014-01-01

    The character of the adaptation processes was investigated in populations residing under prolonged exposure to environmental pollution in the industrial cities of the Irkutsk region, in order to identify possible periodicity in their manifestations in the formation of morbidity risk for different age groups. Under prolonged exposure to air pollution and other unfavorable factors of industrial cities, long cyclic changes of adaptation processes were established in the population of all age groups, in the form of repeated 11-15-year cycles in which a period of relative destabilization of physiological functions with lowered resistance is replaced by a period of elevated nonspecific resistance. These undulating changes in the dynamics of the relative risks of general morbidity should be taken into account when assessing the medical and environmental situation of a territory and when making management decisions based on public health monitoring data.

  2. [Super sweet corn hybrid sh2 adaptability for industrial canning process].

    PubMed

    Ortiz de Bertorelli, Ligia; De Venanzi, Frank; Alfonzo, Braunnier; Camacho, Candelario

    2002-12-01

    The super sweet corns Krispy king, Victor and 324 (sh2 hybrids) were evaluated to determine their adaptability to the industrial canning process as whole kernels. All these hybrids and Bonanza (control) were sown in San Joaquín (Carabobo, Venezuela), harvested and canned. After 110 days of storage at room temperature they were analyzed and compared physically, chemically and sensorially with the Bonanza hybrid. Results did not show significant differences among most of the physical characteristics, except for the percentage of broken kernels, which was higher in the 324 hybrid. Chemical parameters showed significant differences (P < 0.05) comparing each super sweet hybrid with Bonanza. The super sweet hybrids presented a higher sugar content and soluble solids in the brine than Bonanza, as well as a lower pH. The super sweet whole kernels presented a lower soluble solids content than Bonanza, but the differences were not significant (Krispy king and 324). Appearance, odor and overall quality were the same for the super sweet hybrids and Bonanza (su). Color, flavor and sweetness were better for 324 than for all the other hybrids. The super sweet hybrids adapted very well to the canning process, with the advantages that no sugar addition to the brine is required and that the texture is very good (firm and crispy). PMID:12868279

  3. Workload-Matched Adaptive Automation Support of Air Traffic Controller Information Processing Stages

    NASA Technical Reports Server (NTRS)

    Kaber, David B.; Prinzel, Lawrence J., III; Wright, Melanie C.; Clamann, Michael P.

    2002-01-01

    Adaptive automation (AA) has been explored as a solution to the problems associated with human-automation interaction in supervisory control environments. However, research has focused on the performance effects of dynamic control allocations of early-stage sensory and information acquisition functions. The present research compares the effects of applying AA across the entire range of information processing stages of human operators, such as air traffic controllers. The results provide evidence that the effectiveness of AA is dependent on the stage of task performance (human-machine system information processing) that is flexibly automated. The results suggest that humans are better able to adapt to AA when applied to lower-level sensory and psychomotor functions, such as information acquisition and action implementation, as compared to AA applied to cognitive (analysis and decision-making) tasks. The results also provide support for the use of AA, as compared to completely manual control. These results are discussed in terms of implications for AA design for aviation.

  4. Focal plane array with modular pixel array components for scalability

    DOEpatents

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  5. Geochemical diversity in S processes mediated by culture-adapted and environmental-enrichments of Acidithiobacillus spp.

    NASA Astrophysics Data System (ADS)

    Bernier, Luc; Warren, Lesley A.

    2007-12-01

    Coupled S speciation and acid generation resulting from S processing associated with five different microbial treatments, all primarily Acidithiobacillus spp. (i.e. autotrophic S-oxidizers), were evaluated in batch laboratory experiments. Microbial treatments included two culture-adapted strains, Acidithiobacillus ferrooxidans and Acidithiobacillus thiooxidans, their consortia and two environmental enrichments from a mine tailings lake that were determined to be >95% Acidithiobacillus spp. by whole-cell fluorescent hybridization. Using batch experiments simulating acidic mine waters with no carbon amendments, acid generation and S speciation associated with the oxidation of three S substrates (thiosulfate, tetrathionate, and elemental S) were evaluated. Aseptic controls showed no observable pH decrease over the experimental time course (1 month) for all three S compounds examined. In contrast, pH decreased in all microbial treatments from starting pH values of 4 to 2 or less for all three S substrates. Results show a non-linear relationship between the pH dynamics of the batch cultures and their corresponding sulfate concentrations, and indicate how known microbial S-processing pathways can have opposing ultimate impacts on pH dynamics. Associated geochemical modeling indicated negligible abiogenic processes contributing to the observed results, indicating strong microbial control of acid generation extending over pH ranges from 4 to less than 2. However, the observed acid generation rates and associated S speciation were both microbial-treatment- and substrate-specific.
Results reveal a number of novel insights regarding microbial catalysis of S oxidation: (1) metabolic diversity in S processing, as evidenced by the observed geochemical signatures in S chemical speciation and rates of acid generation amongst phylogenetically similar organisms (to the genus level); (2) consortial impacts differ from those of individual strain members; (3) environmental enrichments

  6. Processing of pulse oximeter signals using adaptive filtering and autocorrelation to isolate perfusion and oxygenation components

    NASA Astrophysics Data System (ADS)

    Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.

    2005-03-01

    A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. In processing in situ data, motion artifacts due to increased perfusion can create invalid oxygenation saturation values. In order to remove the unwanted artifacts from the pulsatile signal, adaptive filtering was employed using a third wavelength source centered at 810 nm as a reference signal. The 810 nm source resides approximately at the isosbestic point in the hemoglobin absorption curve, where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation-based algorithm, oxygenation saturation values can be obtained without the need for large sampling data sets, allowing for near real-time processing. This technique has been shown to be more reliable than traditional techniques and to adequately improve the measurement of oxygenation values in varying perfusion states.
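    The adaptive-filtering step described above, which uses the 810 nm channel as an artifact reference, can be illustrated with a standard LMS noise canceller. The sketch below is illustrative only, not the authors' implementation; the signal model, tap count, and step size are assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=4, mu=0.005):
    """Adaptive noise cancellation: an LMS-adapted FIR filter estimates the
    component of `primary` correlated with `reference` and subtracts it."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest reference sample first
        artifact_est = w @ x
        e = primary[n] - artifact_est              # error = cleaned signal sample
        w += 2 * mu * e * x                        # LMS weight update
        cleaned[n] = e
    return cleaned

# Demo: a pulsatile waveform corrupted by a motion artifact that also
# appears (hypothetically) in the reference channel.
rng = np.random.default_rng(0)
t = np.arange(4000) / 100.0
pulse = np.sin(2 * np.pi * 1.2 * t)        # ~72 bpm pulsatile component
artifact = rng.standard_normal(t.size)
primary = pulse + 0.8 * artifact           # measurement channel
reference = artifact                       # reference channel sees mostly artifact
cleaned = lms_cancel(primary, reference)
```

After the weights converge, the filter output tracks the pulsatile component much more closely than the raw measurement channel does.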

  7. Adaptive sparse signal processing for discrimination of satellite-based radiofrequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2015-05-01

    For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.
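    Feature extraction with sparse representations in an overcomplete dictionary can be sketched with a greedy matching-pursuit decomposition over chirp-like atoms. This is a generic illustration, not the FORTE processing chain; the dictionary design, chirp parameters, and atom count are assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy sparse decomposition: repeatedly select the unit-norm
    dictionary atom most correlated with the current residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# Overcomplete dictionary (more atoms than samples) of linear chirps.
n = 256
t = np.linspace(0.0, 1.0, n)
atoms = []
for f0 in range(2, 90):                    # start frequencies
    for rate in (0.0, 8.0, 16.0):          # chirp rates
        a = np.sin(2 * np.pi * (f0 + rate * t) * t)
        atoms.append(a / np.linalg.norm(a))
D = np.column_stack(atoms)                 # 256 samples x 264 atoms

# A noisy "event" built from a single dictionary atom
sig = 3.0 * D[:, 10] + 0.05 * np.random.default_rng(1).standard_normal(n)
coeffs, residual = matching_pursuit(sig, D, n_atoms=3)
```

The dominant coefficient identifies the generating atom, and the residual norm measures how much of the event the sparse code explains.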

  8. Rethinking infant knowledge: toward an adaptive process account of successes and failures in object permanence tasks.

    PubMed

    Munakata, Y; McClelland, J L; Johnson, M H; Siegler, R S

    1997-10-01

    Infants seem sensitive to hidden objects in habituation tasks at 3.5 months but fail to retrieve hidden objects until 8 months. The authors first consider principle-based accounts of these successes and failures, in which early successes imply knowledge of principles and failures are attributed to ancillary deficits. One account is that infants younger than 8 months have the object permanence principle but lack means-ends abilities. To test this, 7-month-olds were trained on means-ends behaviors and were tested on retrieval of visible and occluded toys. Means-ends demands were the same, yet infants made more toy-guided retrievals in the visible case. The authors offer an adaptive process account in which knowledge is graded and embedded in specific behavioral processes. Simulation models that learn gradually to represent occluded objects show how this approach can account for success and failure in object permanence tasks without assuming principles and ancillary deficits.

  9. Current Tracking Control of Voltage Source PWM Inverters Using Adaptive Digital Signal Processing

    NASA Astrophysics Data System (ADS)

    Fukuda, Shoji; Furukawa, Yuya

    An active filter (AF) must be able to track a time-varying current reference accurately. However, a steady-state current error always exists if a conventional proportional-integral (PI) regulator is used, because the current reference varies in time. This paper proposes the application of adaptive digital signal processing (ADSP) to the current control of voltage source PWM inverters. ADSP does not require any additional hardware, and it can automatically minimize the mean-square error. Since the processing time available on a computer is limited, ADSP cannot eliminate higher-order harmonics, but it can eliminate lower-order harmonics such as the 5th through the 17th. Experimental results demonstrate that ADSP is useful for improving the reference-tracking performance of voltage source inverters.
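    The idea of adaptively minimizing the mean-square error at selected harmonic orders can be sketched with an LMS update on a sin/cos reference basis, one adaptive notch per harmonic. This is a generic illustration rather than the paper's controller; the sampling rate, fundamental frequency, and step size are assumptions.

```python
import numpy as np

def adaptive_harmonic_estimator(error_signal, fundamental, fs, orders=(5, 7), mu=0.02):
    """Track selected harmonics of `error_signal` by LMS adaptation of
    sin/cos weights at each harmonic of `fundamental` (Hz)."""
    n = np.arange(len(error_signal))
    basis = []
    for h in orders:
        w_h = 2.0 * np.pi * h * fundamental / fs   # rad/sample
        basis.append(np.cos(w_h * n))
        basis.append(np.sin(w_h * n))
    X = np.stack(basis)                  # reference signals, shape (2*len(orders), N)
    w = np.zeros(X.shape[0])
    est = np.zeros(len(error_signal))
    for k in range(len(error_signal)):
        x = X[:, k]
        est[k] = w @ x                   # current harmonic estimate
        e = error_signal[k] - est[k]     # residual after cancellation
        w += 2.0 * mu * e * x            # LMS weight update
    return est, w

fs, f1 = 10000.0, 50.0
n = np.arange(5000)
# Current error containing 5th and 7th harmonics (illustrative magnitudes)
sig = 1.0 * np.sin(2 * np.pi * 5 * f1 / fs * n) + 0.5 * np.sin(2 * np.pi * 7 * f1 / fs * n)
est, w = adaptive_harmonic_estimator(sig, f1, fs)
```

Once the weights converge, subtracting the estimate from the measured error leaves only components the filter was not configured to track.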

  10. Infrared Astronomy with Arrays: The Next Generation; Sunset Village, Los Angeles, CA, Oct. 1993

    NASA Technical Reports Server (NTRS)

    Mclean, Ian S.

    1994-01-01

    Conference papers on infrared array techniques and methods for infrared astronomy are presented. Topics covered include the following: infrared telescopes; infrared spectrometers; spaceborne astronomy; astronomical observatories; infrared cameras; imaging techniques; sky surveys; infrared photography; infrared photometry; infrared spectroscopy; equipment specifications; data processing and analysis; control systems; cryogenic equipment; adaptive optics; image resolution; infrared detector materials; and focal plane arrays.

  11. Elastomeric inverse moulding and vacuum casting process characterization for the fabrication of arrays of concave refractive microlenses

    NASA Astrophysics Data System (ADS)

    Desmet, L.; Van Overmeire, S.; Van Erps, J.; Ottevaere, H.; Debaes, C.; Thienpont, H.

    2007-01-01

    We present a complete and precise quantitative characterization of the different process steps used in an elastomeric inverse moulding and vacuum casting technique. We use the latter replication technique to fabricate concave replicas from an array of convex thermal reflow microlenses. During the inverse elastomeric moulding, we obtain a secondary silicone mould of the original silicone mould in which the master component is embedded. Using vacuum casting, we are then able to cast from the second mould several optically transparent polyurethane arrays of concave refractive microlenses. We select ten representative microlenses on the original, the silicone moulds and the replica sample and quantitatively characterize and statistically compare them during the various fabrication steps. For this purpose, we use several state-of-the-art and ultra-precise characterization tools, such as a stereo microscope, a stylus surface profilometer, a non-contact optical profilometer, a Mach-Zehnder interferometer, a Twyman-Green interferometer and an atomic force microscope, to compare various microlens parameters such as the lens height, the diameter, the paraxial focal length, the radius of curvature, the Strehl ratio, the peak-to-valley and root-mean-square wave aberrations and the surface roughness. When appropriate, the microlens parameter under test is measured with several different measuring tools to check for consistency in the measurement data. Although none of the lens samples shows diffraction-limited performance, we prove that the obtained replicated arrays of concave microlenses exhibit sufficiently low surface roughness and sufficiently high lens quality for various imaging applications.

  12. Combining molecular evolution and environmental genomics to unravel adaptive processes of MHC class IIB diversity in European minnows (Phoxinus phoxinus)

    PubMed Central

    Collin, Helene; Burri, Reto; Comtesse, Fabien; Fumagalli, Luca

    2013-01-01

    Host–pathogen interactions are a major evolutionary force promoting local adaptation. Genes of the major histocompatibility complex (MHC) represent unique candidates to investigate evolutionary processes driving local adaptation to parasite communities. The present study aimed at identifying the relative roles of neutral and adaptive processes driving the evolution of MHC class IIB (MHCIIB) genes in natural populations of European minnows (Phoxinus phoxinus). To this end, we isolated and genotyped exon 2 of two MHCIIB gene duplicates (DAB1 and DAB3) and 1,665 amplified fragment length polymorphism (AFLP) markers in nine populations, and characterized local bacterial communities by 16S rDNA barcoding using 454 amplicon sequencing. Both MHCIIB loci exhibited signs of historical balancing selection. Whereas genetic differentiation exceeded that of neutral markers at both loci, the populations' genetic diversities were positively correlated with local pathogen diversities only at DAB3. Overall, our results suggest pathogen-mediated local adaptation in European minnows at both MHCIIB loci. While at DAB1 selection appears to favor different alleles among populations, this is only partially the case at DAB3, which appears to be locally adapted to pathogen communities in terms of genetic diversity. These results provide new insights into the importance of host–pathogen interactions in driving local adaptation in the European minnow, and highlight that the importance of adaptive processes driving MHCIIB gene evolution may differ among duplicates within species, presumably as a consequence of alternative selective regimes or different genomic contexts. Using next-generation sequencing, the present manuscript identifies the relative roles of neutral and adaptive processes driving the evolution of MHC class IIB (MHCIIB) genes in natural populations of a cyprinid fish: the European minnow (Phoxinus phoxinus). We highlight that the relative importance of neutral

  13. Global Arrays

    2006-02-23

    The Global Arrays (GA) toolkit provides an efficient and portable “shared-memory” programming interface for distributed-memory computers. Each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed dense multi-dimensional arrays, without need for explicit cooperation by other processes. Unlike other shared-memory environments, the GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of high performance computers and acknowledges that access to a remote portion of the shared data is slower than to the local portion. The locality information for the shared data is available, and direct access to the local portions of shared data is provided. Global Arrays have been designed to complement rather than substitute for the message-passing programming model. The programmer is free to use both the shared-memory and message-passing paradigms in the same program, and to take advantage of existing message-passing software libraries. Global Arrays are compatible with the Message Passing Interface (MPI).

  14. Human Topological Task Adapted for Rats: Spatial Information Processes of the Parietal Cortex

    PubMed Central

    Goodrich-Hunsaker, Naomi J.; Howard, Brian P.; Hunsaker, Michael R.; Kesner, Raymond P.

    2008-01-01

    Human research has shown that lesions of the parietal cortex disrupt spatial information processing, specifically topological information. Similar findings have been reported in nonhumans. It has been difficult to determine homologies between human and non-human mnemonic mechanisms for spatial information processing because methodologies and neuropathology differ. The first objective of the present study was to adapt a previously established human task for rats. The second objective was to better characterize the roles of the parietal cortex (PC) and dorsal hippocampus (dHPC) in topological spatial information processing. Rats had to distinguish whether a ball inside a ring or a ball outside a ring was the correct, rewarded object. After rats reached criterion on the task (>95%), they were randomly assigned to a lesion group (control, PC, dHPC). Animals were then re-tested. Post-surgery data show that controls were 94% correct on average, dHPC rats were 89% correct on average, and PC rats were 56% correct on average. The results from the present study suggest that the parietal cortex, but not the dHPC, processes topological spatial information. The present data are the first to support comparable topological spatial information processing by the parietal cortex in humans and rats. PMID:18571941

  15. Investigation of proposed process sequence for the array automated assembly task, phases 1 and 2

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Eskenas, K.

    1980-01-01

    Progress was made on the process sequence for module fabrication. A shift from bonding with a conformal coating to laminating with ethylene vinyl acetate and a glass superstrate is recommended for further module fabrication. The processes that were retained for the selected process sequence, spin-on diffusion, print and fire aluminum p+ back, clean, print and fire silver front contact and apply tin pad to aluminum back, were evaluated for their cost contribution.

  16. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  17. The Process of Adaptation of a Community-Level, Evidence-Based Intervention for HIV-Positive African American Men Who Have Sex with Men in Two Cities

    ERIC Educational Resources Information Center

    Robinson, Beatrice E.; Galbraith, Jennifer S.; Lund, Sharon M.; Hamilton, Autumn R.; Shankle, Michael D.

    2012-01-01

    We describe the process of adapting a community-level, evidence-based behavioral intervention (EBI), Community PROMISE, for HIV-positive African American men who have sex with men (AAMSM). The Centers for Disease Control and Prevention (CDC) Map of the Adaptation Process (MAP) guided the adaptation process for this new target population by two…

  18. Creating and maintaining dialogue on climate information - Reflections on the adaptation process in Sweden

    NASA Astrophysics Data System (ADS)

    Nilsson, C.

    2010-09-01

    Climate information can be communicated in many ways to various actors within society. Dialogue as a means to communicate is often emphasised as an important tool to deliver and receive information. However, a dialogue can be initiated and maintained in many different ways. In Sweden the Swedish Meteorological and Hydrological Institute, SMHI, can be seen as one of the actors in between the climate science experts and the society, and together with the Swedish Geotechnical Institute, SGI, they had recognised that the County Administrative Boards required climate information and knowledge in how to use and how not to use climate scenarios in order to start coordinating regional adaptation activities. At the same time the National Authorities called for a more detailed compilation on the needs in the counties, regarding climate information for decision making. Together the SMHI and SGI visited the 21 counties in Sweden with a start in the autumn of 2008, initiating a dialogue with the County Administrative Boards. The process continued with a first seminar in springtime 2009, on adaptation. In spring 2009 the County Administrative Boards were appointed as the formal foci of adaptation coordination in Sweden, and a new phase in the dialogue started, when the counties had specific goals with the interaction. Personal visits and seminars have continued to be the arena for the dialogue, and new climate information products have been initiated as a response to the interaction. Here, reflections are presented on the role of the dialogue in Sweden as a tool towards a sustainable way to communicate climate information.

  19. Neuroelectric adaptations to cognitive processing in virtual environments: an exercise-related approach.

    PubMed

    Vogt, Tobias; Herpers, Rainer; Scherfgen, David; Strüder, Heiko K; Schneider, Stefan

    2015-04-01

    Recently, virtual environments (VEs) have been suggested as a means to encourage users to exercise regularly. The benefits of chronic exercise on cognitive performance are well documented in non-VE neurophysiological and behavioural studies. Based on event-related potentials (ERP) such as the N200 and P300, cognitive processing may be interpreted on a neuronal level. However, exercise-related neuroelectric adaptation in VE remains widely unclear and thus characterizes the primary aim of the present study. Twenty-two healthy participants performed active (moderate cycling exercise) and passive (no exercise) sessions in three VEs (control, front, surround), each generating a different sense of presence. Within sessions, conditions were randomly assigned, each lasting 5 min and including a choice reaction-time task to assess cognitive performance. According to the international 10:20 system, EEG with real-time triggered stimulus onset was recorded, and peaks of N200 and P300 components (amplitude, latency) were exported for analysis. Heart rate was recorded, and sense of presence was assessed prior to and following each session and condition. Results revealed an increase in ERP amplitudes (N200: p < 0.001; P300: p < 0.001) and latencies (N200: p < 0.001) that was most pronounced over fronto-central and occipital electrode sites relative to an increased sense of presence (p < 0.001); however, ERP were not modulated by exercise (each p > 0.05). Decreases in accuracy and reaction time, hypothesized to mirror cognitive processing, did not reach significance. With respect to previous research, the present neuroelectric adaptation gives reason to believe in compensatory neuronal resources that balance demanding cognitive processing in VE to avoid behavioural inefficiency. PMID:25630906

  1. Fabrication and evaluation of a microspring contact array using a reel-to-reel continuous fiber process

    NASA Astrophysics Data System (ADS)

    Khumpuang, S.; Ohtomo, A.; Miyake, K.; Itoh, T.

    2011-10-01

    In this work, a novel patterning technique for the fabrication of a conductive microspring array as an electrical contact structure directly on a fiber substrate is introduced. Using low-temperature compression from the nanoimprinting technique to generate a gradient depth on the desired pattern, a PEDOT:PSS film, the hair-like structures are released as bimorph microspring cantilevers. The microspring is in the form of a stress-engineered cantilever arranged in rows. The microspring contact array is employed in composing the electrical circuit through a large area of woven textile, and functions as the electrical contact between weft ribbon and warp ribbon. The spring itself has a contact resistance of 480 Ω to the plain PEDOT:PSS-coated ribbon, which shows a promising electrical transfer ability within the limitations of materials employed for reel-to-reel continuous processes. The microspring contact structures enhanced the durability, flexibility and stability of electrical contact in the woven textile better than those of the ribbons without the microspring. The contact experiment was repeated over 500 times, with the contact resistance changing by only 20 Ω. Furthermore, to realize the spring structure, CYTOP is used as the releasing layer due to its low adhesive force to the fiber substrate. Moreover, the first result of patternable CYTOP using nanoimprinting lithography is included.

  2. Signal Processing of MEMS Gyroscope Arrays to Improve Accuracy Using a 1st Order Markov for Rate Signal Modeling

    PubMed Central

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described, and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and its affecting factors were analyzed using a steady-state covariance analysis. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated by the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This reveals that both models could improve the angular rate accuracy and have similar performance in static conditions. In dynamic conditions, the test results showed that the first-order Markov process model could reduce the dynamic errors 20% more than the random walk model. PMID:22438734
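    The fusion idea can be sketched with a scalar Kalman filter in which the true rate follows a first-order Gauss-Markov process and the N gyro channels are averaged into an equivalent single measurement (valid for identical, independent channels; the paper's per-gyro bias states are omitted). All parameters below are illustrative.

```python
import numpy as np

def fuse_gyros(measurements, q, r, tau, dt):
    """Kalman filter over a first-order Gauss-Markov rate model
    x_k = a * x_{k-1} + w_k, fusing an (n_steps, n_gyros) array."""
    a = np.exp(-dt / tau)                # Markov correlation coefficient
    n_steps, n_gyros = measurements.shape
    r_eff = r / n_gyros                  # variance of the averaged measurement
    x, P = 0.0, 1.0
    est = np.zeros(n_steps)
    for k in range(n_steps):
        x, P = a * x, a * a * P + q      # predict
        z = measurements[k].mean()       # equivalent scalar measurement
        K = P / (P + r_eff)              # Kalman gain
        x = x + K * (z - x)              # update
        P = (1.0 - K) * P
        est[k] = x
    return est

# Simulate a Gauss-Markov rate observed by six noisy gyroscopes
rng = np.random.default_rng(2)
n_steps, n_gyros, dt, tau = 2000, 6, 0.01, 50.0
a = np.exp(-dt / tau)
true = np.zeros(n_steps)
for k in range(1, n_steps):
    true[k] = a * true[k - 1] + 0.3 * rng.standard_normal()
meas = true[:, None] + 0.8 * rng.standard_normal((n_steps, n_gyros))
est = fuse_gyros(meas, q=0.09, r=0.64, tau=tau, dt=dt)
```

Averaging the channels alone cuts the measurement variance by 1/N; the Markov prediction step reduces the error further.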

  3. LSSA (Low-cost Silicon Solar Array) project

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Methods are explored for economically generating electrical power to meet future requirements. The Low-Cost Silicon Solar Array Project (LSSA) was established to reduce the price of solar arrays by improving manufacturing technology, adapting mass production techniques, and promoting user acceptance. The new manufacturing technology includes the consideration of new silicon refinement processes, silicon sheet growth techniques, encapsulants, and automated assembly production being developed under contract by industries and universities.

  4. A Module Experimental Process System Development Unit (MEPSDU). [development of low cost solar arrays

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The technical readiness of a cost-effective process sequence with the potential for producing flat plate photovoltaic modules meeting the 1986 price goal of $0.70 or less per peak watt was demonstrated. The proposed process sequence was reviewed and laboratory verification experiments were conducted. The preliminary process includes the following features: semicrystalline silicon (10 cm by 10 cm) as the silicon input material; spray-on dopant diffusion source; Al paste BSF formation; spray-on AR coating; electroless Ni plate solder dip metallization; laser scribe edges; K & S tabbing and stringing machine; and laminated EVA modules.

  5. An adaptive threshold based image processing technique for improved glaucoma detection and classification.

    PubMed

    Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore

    2015-11-01

    Glaucoma is an optic neuropathy which is one of the main causes of permanent blindness worldwide. This paper presents an automatic image processing based method for detection of glaucoma from digital fundus images. In this proposed work, the discriminatory parameters of glaucoma infection, such as the cup to disc ratio (CDR), neuro-retinal rim (NRR) area and blood vessels in different regions of the optic disc, have been used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which show discriminatory changes with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold derived from local features of the fundus image for segmentation of the optic cup and optic disc, making it invariant to image quality and noise content, which may lead to wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison with existing methods indicates that the proposed approach has improved accuracy in classifying glaucoma from a digital fundus image, which may be considered clinically significant.
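    The core idea of a threshold that adapts to local intensity, rather than one global cut-off, can be sketched with a local-mean threshold built from an integral image. This is a generic illustration, not the paper's optic nerve head segmentation; the block size and offset are assumptions.

```python
import numpy as np

def local_mean(img, block):
    """Mean over a block x block neighbourhood via an integral image."""
    pad = block // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    ii = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    h, w = img.shape
    s = (ii[block:block + h, block:block + w] - ii[:h, block:block + w]
         - ii[block:block + h, :w] + ii[:h, :w])
    return s / (block * block)

def adaptive_threshold(image, block=15, offset=0.0):
    """Binarize against the local mean, so the cut-off follows
    regional brightness instead of a single global level."""
    return image > local_mean(image, block) + offset

# Demo: a bright disc on a strong illumination gradient, where any
# single global threshold would fail on one side of the image.
yy, xx = np.mgrid[0:64, 0:64]
img = xx * 2.0                                         # left-to-right gradient
img += ((yy - 32) ** 2 + (xx - 48) ** 2 < 36) * 40.0   # bright disc
mask = adaptive_threshold(img, block=15, offset=10.0)
```

The disc is recovered even though its pixel values overlap the brightest background values on the right of the image.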

  6. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  7. Therapeutic adherence and competence scales for Developmentally Adapted Cognitive Processing Therapy for adolescents with PTSD

    PubMed Central

    Gutermann, Jana; Schreiber, Franziska; Matulis, Simone; Stangier, Ulrich; Rosner, Rita; Steil, Regina

    2015-01-01

    Background: The assessment of therapeutic adherence and competence is often neglected in psychotherapy research, particularly in children and adolescents; however, both variables are crucial for the interpretation of treatment effects. Objective: Our aim was to develop, adapt, and pilot two scales to assess therapeutic adherence and competence in a recent innovative program, Developmentally Adapted Cognitive Processing Therapy (D-CPT), for adolescents suffering from posttraumatic stress disorder (PTSD) after childhood abuse. Method: Two independent raters assessed 30 randomly selected sessions involving 12 D-CPT patients (age 13–20 years, M age=16.75, 91.67% female) treated by 11 therapists within the pilot phase of a multicenter study. Results: Three experts confirmed the relevance and appropriateness of each item. All items and total scores for adherence (intraclass correlation coefficients [ICC]=0.76–1.00) and competence (ICC=0.78–0.98) yielded good to excellent inter-rater reliability. Cronbach's alpha was 0.59 for the adherence scale and 0.96 for the competence scale. Conclusions: The scales reliably assess adherence and competence in D-CPT for adolescent PTSD patients. The ratings can be helpful in the interpretation of treatment effects, the assessment of mediator variables, and the identification and training of therapeutic skills that are central to achieving good treatment outcomes. Both adherence and competence will be assessed as possible predictor variables for treatment success in future D-CPT trials. PMID:25791915

  8. Planning an adaptive management process for biodiversity conservation and resource development in the Camisea River Basin.

    PubMed

    Dallmeier, Francisco; Alonso, Alfonso; Jones, Murray

    2002-05-01

    The Smithsonian Institution's Monitoring and Assessment of Biodiversity Program joined Shell Prospecting and Development Peru (SPDP) to protect biodiversity during a natural gas exploration project. Emphasis was on long-term societal and environmental benefits in addition to financial gain for the company. The systematic, cyclical adaptive management process was used to generate feedback for SPDP managers. Adaptive management enables ongoing improvement of management policies and practices based on lessons learned from operational activities. Prior to this study, very little information about the local biodiversity was available. Over a 2-year period, the team conducted biological assessments of six taxonomic groups at five sites located within 600 km2. A broad range of management options such as location, timing and technology were developed from the beginning of the project. They were considered in conjunction with emerging lessons from the biodiversity assessments. Critical decisions included location of a gas plant and the cost of helicopter access versus roads to service the full field development. Both of these decisions were evaluated to ensure that they were economically and environmentally feasible. Project design changes, addressed in the planning stage, were accepted once consensus was achieved. Stakeholders were apprised of the implications of the baseline biodiversity assessments.

  9. Does Variation in Genome Sizes Reflect Adaptive or Neutral Processes? New Clues from Passiflora

    PubMed Central

    Fonseca, Tamara C.; Salzano, Francisco M.; Bonatto, Sandro L.; Freitas, Loreta B.

    2011-01-01

    One of the long-standing paradoxes in genomic evolution is the observation that much of the genome is composed of repetitive DNA, which has typically been regarded as superfluous to the genome's function in generating phenotypes. In this work, we used comparative phylogenetic approaches to investigate whether variation in genome size (GS) reflects adaptive or neutral processes, by comparing GS with flower diameter (FD) across 50 Passiflora species, specifically within its two most species-rich subgenera, Passiflora and Decaloba. To this end, we constructed a phylogenetic tree of these species, estimated their GS and FD, and inferred the tempo and mode of evolution of these traits and their correlations, using both current values and phylogenetically independent contrasts. We found significant correlations among the traits when considering the complete set of data or only the subgenus Passiflora, whereas no correlations were observed within Decaloba. Herein, we present convincing evidence of adaptive evolution of GS, as well as clues that this pattern is limited by a minimum genome size, which could reduce both the possibility of changes in GS and the possibility of phenotypic responses to environmental changes. PMID:21464897

  10. Analysis of adaptive forward-backward diffusion flows with applications in image processing

    NASA Astrophysics Data System (ADS)

    Surya Prasath, V. B.; Urbano, José Miguel; Vorotnikov, Dmitry

    2015-10-01

    The nonlinear diffusion model introduced by Perona and Malik (1990 IEEE Trans. Pattern Anal. Mach. Intell. 12 629-39) is well suited to preserve salient edges while restoring noisy images. This model overcomes well-known edge smearing effects of the heat equation by using a gradient-dependent diffusion function. Despite providing better denoising results, the analysis of the Perona-Malik scheme is difficult due to the forward-backward nature of the diffusion flow. We study a related adaptive forward-backward diffusion equation which uses a mollified inverse gradient term engrafted in the diffusion term of a general nonlinear parabolic equation. We prove a series of existence, uniqueness and regularity results for viscosity, weak and dissipative solutions for such forward-backward diffusion flows. In particular, we introduce a novel functional framework for well-posedness of flows of total variation type. A set of synthetic and real image processing examples is used to illustrate the properties and advantages of the proposed adaptive forward-backward diffusion flows.
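    For context, the Perona-Malik model that this analysis builds on can be written as an explicit update with a gradient-dependent diffusivity. A minimal sketch follows; the step size, diffusivity constant, and periodic boundaries (via `np.roll`) are choices for the example, not parameters from the paper:

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion on a 2-D image.

    The diffusivity g(s) = 1 / (1 + (s/kappa)^2) shrinks toward zero at
    strong gradients, so smoothing is suppressed across salient edges.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four nearest neighbours
        # (np.roll gives periodic boundaries, keeping the sketch short)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # gradient-dependent diffusivity in each direction
        cn = 1.0 / (1.0 + (dn / kappa) ** 2)
        cs = 1.0 / (1.0 + (ds / kappa) ** 2)
        ce = 1.0 / (1.0 + (de / kappa) ** 2)
        cw = 1.0 / (1.0 + (dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# usage: smooth a noisy two-level image while keeping its edge
rng = np.random.default_rng(0)
noisy = np.ones((32, 32))
noisy[:, 16:] = 2.0
noisy += 0.1 * rng.standard_normal(noisy.shape)
smoothed = perona_malik(noisy, n_iter=20)
```

    The forward-backward character analysed in the paper arises because the effective flux g(s)·s is decreasing for s > kappa, which is precisely what makes the continuum analysis delicate.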

  11. Adaptive Integrated Optical Bragg Grating in Semiconductor Waveguide Suitable for Optical Signal Processing

    NASA Astrophysics Data System (ADS)

    Moniem, T. A.

    2016-05-01

    This article presents a methodology for an integrated Bragg grating using an alloy of GaAs, AlGaAs, and InGaAs with a controllable refractive index to obtain an adaptive Bragg grating suitable for many applications in optical processing and adaptive control systems, such as limiting and filtering. The refractive index of the Bragg grating is controlled by using an external electric field to modulate periodically the refractive index of the active waveguide region. The designed Bragg grating has refractive indices programmed by that external electric field. This article presents two approaches for designing the controllable-refractive-index active region of the Bragg grating. The first approach is based on the modification of a planar micro-strip structure with an InGaAs traveling wave as the active region, and the second is based on the modification of self-assembled InAs/GaAs quantum dots of an alloy of GaAs and InGaAs with a GaP traveling wave. The overall design and results are discussed through numerical simulation using the finite-difference time-domain, plane wave expansion, and opto-wave simulation methods to confirm its operation and feasibility.
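    The tuning principle behind such an adaptive grating follows from the first-order Bragg condition λ_B = 2·n_eff·Λ: shifting the effective index shifts the reflected stop band. A minimal numeric sketch; the effective index, grating period, and induced index shift are illustrative assumptions, not values from the article:

```python
def bragg_wavelength(n_eff, period_nm):
    """First-order Bragg condition: lambda_B = 2 * n_eff * Lambda."""
    return 2.0 * n_eff * period_nm

# Hypothetical numbers: a GaAs/AlGaAs-type waveguide with n_eff ~ 3.3 and
# a 235 nm grating period reflects near 1551 nm; an electro-optically
# induced index change of +0.01 retunes the stop band.
lam0 = bragg_wavelength(3.3, 235.0)          # base Bragg wavelength, nm
lam1 = bragg_wavelength(3.3 + 0.01, 235.0)   # after the index shift
shift = lam1 - lam0                          # red shift of the stop band
```

    The linearity of the condition in n_eff is what makes the grating electrically programmable: the wavelength shift is simply 2·Λ·Δn.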

  12. Intelligent Modeling Combining Adaptive Neuro Fuzzy Inference System and Genetic Algorithm for Optimizing Welding Process Parameters

    NASA Astrophysics Data System (ADS)

    Gowtham, K. N.; Vasudevan, M.; Maduraimuthu, V.; Jayakumar, T.

    2011-04-01

    Modified 9Cr-1Mo ferritic steel is used as a structural material for steam generator components of power plants. Generally, tungsten inert gas (TIG) welding is preferred for welding of these steels in which the depth of penetration achievable during autogenous welding is limited. Therefore, activated flux TIG (A-TIG) welding, a novel welding technique, has been developed in-house to increase the depth of penetration. In modified 9Cr-1Mo steel joints produced by the A-TIG welding process, weld bead width, depth of penetration, and heat-affected zone (HAZ) width play an important role in determining the mechanical properties as well as the performance of the weld joints during service. To obtain the desired weld bead geometry and HAZ width, it becomes important to set the welding process parameters. In this work, an adaptive neuro fuzzy inference system (ANFIS) is used to develop independent models correlating the welding process parameters like current, voltage, and torch speed with weld bead shape parameters like depth of penetration, bead width, and HAZ width. Then a genetic algorithm is employed to determine the optimum A-TIG welding process parameters to obtain the desired weld bead shape parameters and HAZ width.
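    The model-then-search loop can be sketched as follows. The surrogate forward model and its coefficients below are hypothetical stand-ins for the trained ANFIS models, and the parameter bounds and targets are illustrative, not the paper's values:

```python
import random

def surrogate_bead_model(current, voltage, speed):
    """Hypothetical stand-in for a trained ANFIS forward model: maps
    (current A, voltage V, torch speed mm/s) to (penetration, bead
    width, HAZ width) in mm. Coefficients are invented for illustration."""
    heat = current * voltage / speed          # heat input per unit length
    return (0.02 * heat, 0.05 * heat, 0.03 * heat)

def fitness(params, target):
    pred = surrogate_bead_model(*params)
    return -sum((p - t) ** 2 for p, t in zip(pred, target))

def genetic_search(target, bounds, pop_size=40, n_gen=60, seed=1):
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        elite = pop[: pop_size // 2]                 # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # blend crossover plus Gaussian mutation, clamped to bounds
            child = tuple(
                min(max((x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo)), lo), hi)
                for x, y, (lo, hi) in zip(a, b, bounds))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, target))

bounds = [(80, 200), (9, 14), (1.0, 4.0)]    # current, voltage, speed
best = genetic_search(target=(8.0, 20.0, 12.0), bounds=bounds)
```

    The GA inverts the forward model: it searches the parameter box for settings whose predicted bead geometry matches the target, which is exactly the role it plays downstream of the ANFIS models in the paper.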

  13. Usability of clinical decision support system as a facilitator for learning the assistive technology adaptation process.

    PubMed

    Danial-Saad, Alexandra; Kuflik, Tsvi; Weiss, Patrice L Tamar; Schreuer, Naomi

    2016-01-01

    The aim of this study was to evaluate the usability of Ontology Supported Computerized Assistive Technology Recommender (OSCAR), a Clinical Decision Support System (CDSS) for the assistive technology adaptation process, to assess its impact on learning the matching process, and to determine the relationship between its usability and learnability. Two groups of expert and novice clinicians (total, n = 26) took part in this study. Each group filled out the System Usability Scale (SUS) to evaluate OSCAR's usability. The novice group completed a learning questionnaire to assess OSCAR's effect on their ability to learn the matching process. Both groups rated OSCAR's usability as "very good" (novices: M [SUS] = 80.7, SD = 11.6, median = 83.7; experts: M [SUS] = 81.2, SD = 6.8, median = 81.2). The Mann-Whitney results indicated that no significant differences were found between the expert and novice groups in terms of OSCAR's usability. A significant positive correlation existed between the usability of OSCAR and the ability to learn the adaptation process (rs = 0.46, p = 0.04). Usability is an important factor in the acceptance of a system. The successful application of user-centered design principles during the development of OSCAR may serve as a case study that models the significant elements to be considered, theoretically and practically, in developing other systems. Implications for Rehabilitation Creating a CDSS with a focus on its usability is an important factor for its acceptance by its users. Successful usability outcomes can impact the learning process of the subject matter in general, and the AT prescription process in particular. The successful application of User-Centered Design principles during the development of OSCAR may serve as a case study that models the significant elements to be considered, theoretically and practically. The study emphasizes the importance of close collaboration between the developers and

  14. System and method for cognitive processing for data fusion

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor)

    2012-01-01

    A system and method for cognitive processing of sensor data. A processor array receiving analog sensor data and having programmable interconnects, multiplication weights, and filters provides for adaptive learning in real-time. A static random access memory contains the programmable data for the processor array and the stored data is modified to provide for adaptive learning.
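    The abstract does not specify the learning rule realized by the programmable multiplication weights. The least-mean-squares (LMS) update below is a representative on-line rule that such a weight-and-filter array can implement in real time; the filter length, step size, and demo channel are choices for the example:

```python
import numpy as np

def lms_adapt(x, d, n_taps=4, mu=0.05):
    """On-line LMS adaptive filtering: the kind of weight-update rule a
    processor array with programmable multiplication weights can realize.

    x: input samples; d: desired signal. Returns final weights and the
    filter output sequence.
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]      # most recent samples first
        y[n] = w @ u                   # weighted sum (the array's MACs)
        e = d[n] - y[n]                # error against the desired signal
        w += mu * e * u                # stochastic-gradient weight update
    return w, y

# demo: identify an unknown 2-tap channel from its input/output pair
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
d = np.zeros_like(x)
d[2:] = 0.5 * x[1:-1] - 0.3 * x[:-2]   # hidden channel to be learned
w, y = lms_adapt(x, d)
```

    Because each update touches only the current tap vector, the rule maps naturally onto parallel hardware with weights held in local memory, as the patent's SRAM-backed processor array describes.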

  15. Flat-plate solar array project process development area process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Three sets of samples were laser processed and then cell processed. The laser processing was carried out on P-type and N-type web at laser power levels from 0.5 joule/sq cm to 2.5 joule/sq cm. Six different liquid dopants were tested (3 phosphorus dopants, 2 boron dopants, 1 aluminum dopant). The laser processed web strips were fabricated into solar cells immediately after laser processing and after various annealing cycles. Spreading resistance measurements made on a number of these samples indicate that the N(+)P (phosphorus doped) junction is approx. 0.2 micrometers deep and suitable for solar cells. However, the P(+)N (or P(+)P) junction is very shallow (approx. 0.1 micrometers) with a low surface concentration and a resulting high resistance. Due to this effect, the fabricated cells are of low efficiency. The maximum efficiency attained was 9.6% on P-type web after a 700 C anneal. The main reason for the low efficiency was a high series resistance in the cell due to a high resistance back contact.

  16. Coal liquefaction process streams characterization and evaluation: High performance liquid chromatography (HPLC) of coal liquefaction process streams using normal-phase separation with uv diode array detection

    SciTech Connect

    Clifford, D.J.; McKinney, D.E.; Hou, Lei; Hatcher, P.G.

    1994-01-01

    This study demonstrated the considerable potential of using two-dimensional, high performance liquid chromatography (HPLC) with normal-phase separation and ultraviolet (UV) diode array detection for the examination of filtered process liquids and the 850 °F⁻ distillate materials derived from direct coal liquefaction process streams. A commercially available HPLC column (Hypersil Green PAH-2) provided excellent separation of the complex mixture of polynuclear aromatic hydrocarbons (PAHs) found in coal-derived process streams. Some characteristics of the samples delineated by separation could be attributed to processing parameters. Mass recovery of the process-derived samples was low (5–50 wt %). Penn State believes, however, that improved recovery can be achieved. High resolution mass spectrometry and gas chromatography/mass spectrometry (GC/MS) also were used in this study to characterize the samples and the HPLC fractions. The GC/MS technique was used to preliminarily examine the GC-elutable portion of the samples. The GC/MS data were compared with the data from the HPLC technique. The use of an ultraviolet detector in the HPLC work precludes detecting the aliphatic portion of the sample; GC/MS allowed for identification and quantification of that portion of the samples. Further development of the 2-D HPLC analytical method as a process development tool appears justified based on the results of this project.

  17. Coherent optical adaptive techniques.

    PubMed

    Bridges, W B; Brunner, P T; Lazzara, S P; Nussmeier, T A; O'Meara, T R; Sanguinet, J A; Brown, W P

    1974-02-01

    The theory of multidither adaptive optical radar phased arrays is briefly reviewed as an introduction to the experimental results obtained with seven-element linear and three-element triangular array systems operating at 0.6328 μm. Atmospheric turbulence compensation and adaptive tracking capabilities are demonstrated.

  18. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    NASA Technical Reports Server (NTRS)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuations of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ~1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≲1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  19. Regulating adaptive immune responses using small molecule modulators of aminopeptidases that process antigenic peptides.

    PubMed

    Stratikos, Efstratios

    2014-12-01

    Antigenic peptide processing by intracellular aminopeptidases has emerged recently as an important pathway that regulates adaptive immune responses. Pathogens and cancer can manipulate the activity of key enzymes of this pathway to promote immune evasion. Furthermore, the activity of these enzymes is naturally variable due to polymorphic variation, contributing to predisposition to disease, most notably autoimmunity. Here, we review recent findings that suggest that the pharmacological regulation of the activity of these aminopeptidases constitutes a valid approach for regulating human immune responses. We furthermore review the state of the art in chemical tools for inhibiting these enzymes and how these tools can be useful for the development of innovative therapeutic approaches for a variety of diseases including cancer, viral infections and autoimmunity.

  20. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts that the technology offers high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. The paper mainly focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build our image processing core hardware platform on TI's high-performance DSP chip, the TMS320DM642, and then design our image capturing board around the MT9P031, Micron's high-frame-rate, low-power-consumption CMOS sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design our video capture driver program based on TI's class-mini driver model and our network output program based on the NDK kit, for image capturing, processing, and transmission. Experiments show that the system offers high capture resolution and fast processing speed; the network transmission speed is up to 100 Mbps.

  1. Molecular Mechanisms Mediating the Adaptive Regulation of Intestinal Riboflavin Uptake Process.

    PubMed

    Subramanian, Veedamali S; Ghosal, Abhisek; Kapadia, Rubina; Nabokina, Svetlana M; Said, Hamid M

    2015-01-01

    The intestinal absorption process of vitamin B2 (riboflavin, RF) is carrier-mediated, and all three known human RF transporters, i.e., hRFVT-1, -2, and -3 (products of the SLC52A1, 2 & 3 genes, respectively) are expressed in the gut. We have previously shown that the intestinal RF uptake process is adaptively regulated by substrate level, but little is known about the molecular mechanism(s) involved. Using human intestinal epithelial NCM460 cells maintained under RF deficient and over-supplemented (OS) conditions, we now show that the induction in RF uptake in RF deficiency is associated with an increase in expression of the hRFVT-2 & -3 (but not hRFVT-1) at the protein and mRNA levels. Focusing on hRFVT-3, the predominant transporter in the intestine, we also observed an increase in the level of expression of its hnRNA and activity of its promoter in the RF deficiency state. An increase in the level of expression of the nuclear factor Sp1 (which is important for activity of the SLC52A3 promoter) was observed in RF deficiency, while mutating the Sp1/GC site in the SLC52A3 promoter drastically decreased the level of induction in SLC52A3 promoter activity in RF deficiency. We also observed specific epigenetic changes in the SLC52A3 promoter in RF deficiency. Finally, an increase in hRFVT-3 protein expression at the cell surface was observed in RF deficiency. Results of these investigations show, for the first time, that transcriptional and post-transcriptional mechanisms are involved in the adaptive regulation of intestinal RF uptake by the prevailing substrate level.

  2. Molecular Mechanisms Mediating the Adaptive Regulation of Intestinal Riboflavin Uptake Process

    PubMed Central

    Subramanian, Veedamali S.; Ghosal, Abhisek; Kapadia, Rubina; Nabokina, Svetlana M.; Said, Hamid M.

    2015-01-01

    The intestinal absorption process of vitamin B2 (riboflavin, RF) is carrier-mediated, and all three known human RF transporters, i.e., hRFVT-1, -2, and -3 (products of the SLC52A1, 2 & 3 genes, respectively) are expressed in the gut. We have previously shown that the intestinal RF uptake process is adaptively regulated by substrate level, but little is known about the molecular mechanism(s) involved. Using human intestinal epithelial NCM460 cells maintained under RF deficient and over-supplemented (OS) conditions, we now show that the induction in RF uptake in RF deficiency is associated with an increase in expression of the hRFVT-2 & -3 (but not hRFVT-1) at the protein and mRNA levels. Focusing on hRFVT-3, the predominant transporter in the intestine, we also observed an increase in the level of expression of its hnRNA and activity of its promoter in the RF deficiency state. An increase in the level of expression of the nuclear factor Sp1 (which is important for activity of the SLC52A3 promoter) was observed in RF deficiency, while mutating the Sp1/GC site in the SLC52A3 promoter drastically decreased the level of induction in SLC52A3 promoter activity in RF deficiency. We also observed specific epigenetic changes in the SLC52A3 promoter in RF deficiency. Finally, an increase in hRFVT-3 protein expression at the cell surface was observed in RF deficiency. Results of these investigations show, for the first time, that transcriptional and post-transcriptional mechanisms are involved in the adaptive regulation of intestinal RF uptake by the prevailing substrate level. PMID:26121134

  3. Improving performance of natural language processing part-of-speech tagging on clinical narratives through domain adaptation

    PubMed Central

    Ferraro, Jeffrey P; Daumé, Hal; DuVall, Scott L; Chapman, Wendy W; Harkema, Henk; Haug, Peter J

    2013-01-01

    Objective Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Methods Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. Results The evaluated POS taggers drop in accuracy by 8.5–15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3–91.0% on clinical texts. ClinAdapt reports 93.2–93.9%. Conclusions ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks. PMID:23486109
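    The Easy Adapt baseline evaluated above relies on a simple feature-augmentation trick (Daumé, 2007): every feature is duplicated into a shared copy and a domain-specific copy, so one linear model can learn which cues transfer between general English and clinical text. A minimal sketch, with feature names invented for illustration:

```python
def easy_adapt(features, domain):
    """Frustratingly-easy domain adaptation: each input feature appears
    once in a 'shared' copy and once in a domain-tagged copy. A single
    linear classifier trained on the augmented space can then weight
    transferable cues in the shared copy and domain-specific cues in the
    tagged copy.

    features: dict of feature name -> value (e.g. from a POS tagger's
    feature extractor); domain: e.g. 'general' or 'clinical'.
    """
    augmented = {}
    for name, value in features.items():
        augmented['shared:' + name] = value
        augmented[domain + ':' + name] = value
    return augmented

# hypothetical POS-tagging features for one token in a clinical note
feats = {'word=discharge': 1, 'suffix=-ing': 0}
aug = easy_adapt(feats, 'clinical')
```

    ClinAdapt, the paper's proposed method, is a different mechanism (a lexical-generation probability rule in a transformation-based learner); the sketch covers only the Easy Adapt comparison point.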

  4. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    NASA Astrophysics Data System (ADS)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads and is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  5. Automating the design of image processing pipelines for novel color filter arrays: local, linear, learned (L3) method

    NASA Astrophysics Data System (ADS)

    Tian, Qiyuan; Lansel, Steven; Farrell, Joyce E.; Wandell, Brian A.

    2014-03-01

    The high density of pixels in modern color sensors provides an opportunity to experiment with new color filter array (CFA) designs. A significant bottleneck in evaluating new designs is the need to create demosaicking, denoising and color transform algorithms tuned for the CFA. To address this issue, we developed a method (local, linear, learned, or L3) for automatically creating an image processing pipeline. In this paper we describe the L3 algorithm and illustrate how we created a pipeline for a CFA organized as a 2×2 RGB/W block containing a clear (W) pixel. Under low light conditions, the L3 pipeline developed for the RGB/W CFA produces images that are superior to those from a matched Bayer RGB sensor. We also use L3 to learn pipelines for other RGB/W CFAs with different spatial layouts. The L3 algorithm shortens the development time for producing a high quality image pipeline for novel CFA designs.
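    The "learned" step of an L3-style pipeline amounts to fitting a linear map per local pixel class from raw sensor neighbourhoods to the desired rendered output. A minimal sketch using regularized least squares on synthetic data; the shapes, regularization, and single-class simplification are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def learn_local_linear(patches, targets, reg_weight=1e-6):
    """Fit one class's linear filter bank for an L3-style pipeline by
    ridge-regularized least squares.

    patches: (n, p) flattened raw CFA neighbourhoods for this class;
    targets: (n, c) desired output values (e.g. rendered RGB).
    Returns a (p, c) weight matrix applied to each new patch.
    """
    P = np.asarray(patches, float)
    T = np.asarray(targets, float)
    reg = reg_weight * np.eye(P.shape[1])     # small ridge term
    return np.linalg.solve(P.T @ P + reg, P.T @ T)

# synthetic check: if targets truly are linear in the patch, the fit
# recovers the generating map
rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 9))       # 3x3 neighbourhoods
W_true = rng.standard_normal((9, 3))          # hidden patch -> RGB map
targets = patches @ W_true
W = learn_local_linear(patches, targets)
```

    In a full pipeline this fit is repeated per pixel class (CFA position, response level), which is how the same machinery adapts automatically to a new CFA layout.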

  6. Liquid Chromatography-diode Array Detector-electrospray Mass Spectrometry and Principal Components Analyses of Raw and Processed Moutan Cortex

    PubMed Central

    Deng, Xian-Mei; Yu, Jiang-Yong; Ding, Meng-Jin; Zhao, Ming; Xue, Xing-Yang; Che, Chun-Tao; Wang, Shu-Mei; Zhao, Bin; Meng, Jiang

    2016-01-01

    Background: Raw Moutan Cortex (RMC) is derived from the root bark of Paeonia suffruticosa; Processed Moutan Cortex (PMC) is obtained from RMC by a stir-frying process. The two are indicated for different pharmacodynamic actions in traditional Chinese medicine, and they have been used in China and other Asian countries for thousands of years. Objective: To establish a method to study RMC and PMC, revealing their different chemical compositions by fingerprint, qualitative, and quantitative analyses. Materials and Methods: High-performance liquid chromatography coupled with diode array detection and electrospray mass spectrometry (HPLC-DAD-ESIMS) was used for the analysis. The analytes were separated on an Ultimate XB-C18 analytical column (250 mm × 4.6 mm, 5.0 μm) with a gradient elution program and a mobile phase consisting of acetonitrile and 0.1% (v/v) aqueous formic acid. The flow rate, injection volume, detection wavelength, and column temperature were set at 1.0 mL/min, 10 μL, 254 nm, and 30°C, respectively. Principal components analysis and tests of significance were applied in the data analysis. Results: The results clearly showed a significant difference between RMC and PMC, indicating significant changes in their chemical compositions before and after the stir-frying process. Conclusion: HPLC-DAD-ESIMS coupled with chemometric analysis can be used for comprehensive quality evaluation of raw and processed Moutan Cortex. SUMMARY This study examined RMC and PMC by HPLC-DAD-ESIMS coupled with chemometric analysis. The fingerprint, qualitative, and quantitative results all clearly showed significant changes in chemical composition before and after the stir-frying process. Abbreviations used: HPLC-DAD-ESIMS: High-performance Liquid Chromatography-Diode Array Detector-Electrospray Mass Spectrometry, RMC: Raw Moutan Cortex, PMC: Processed Moutan Cortex, TCM: Traditional Chinese Medicine

  7. Reaction efficiency of diffusion-controlled processes on finite aperiodic planar arrays. II. Potential effects

    NASA Astrophysics Data System (ADS)

    Garza-López, Roberto A.; Brzezinski, Jack; Low, Daniel; Gomez, Ulysses; Raju, Swaroop; Ramirez, Craig; Kozak, John J.

    2009-08-01

    We continue our study of diffusion-reaction processes on finite aperiodic lattices, viz., the Penrose lattice and a Girih tiling. Focusing on bimolecular reactions, we mobilize the theory of finite Markov processes to document the effect of attractive forces on the reaction efficiency. Considering both a short-range square-well potential and a longer-range 1/r^S (S = 4, 6) potential, we find that irreversible reactive encounters between reactants on a Girih platelet are kinetically advantaged relative to processes on a Penrose platelet. This result generalizes the conclusion reached in our earlier study [Roberto A. Garza-López, Aaron Kaufman, Reena Patel, Joseph Chang, Jack Brzezinski, John J. Kozak, Chem. Phys. Lett. 459 (2008) 137] where entropic factors (only) were assessed.
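    The finite-Markov-process machinery invoked above reduces, for a single trap, to reading mean walk lengths off the fundamental matrix N = (I − Q)⁻¹ of the absorbing chain. A minimal sketch on a toy 4-site cycle rather than a Penrose or Girih platelet (the lattice here is purely illustrative):

```python
import numpy as np

def mean_walk_length(adjacency, trap):
    """Mean number of steps for an unbiased random walker to first reach
    the trap site, averaged over all non-trap starting sites. Uses the
    fundamental matrix N = (I - Q)^-1, whose row sums give the expected
    number of steps before absorption from each transient state."""
    n = len(adjacency)
    states = [i for i in range(n) if i != trap]
    Q = np.zeros((n - 1, n - 1))
    for a, i in enumerate(states):
        deg = adjacency[i].sum()              # unbiased nearest-neighbour hops
        for b, j in enumerate(states):
            Q[a, b] = adjacency[i][j] / deg
    N = np.linalg.inv(np.eye(n - 1) - Q)
    return N.sum(axis=1).mean()

# toy example: a 4-site cycle with the trap at site 0
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
avg_n = mean_walk_length(adj, trap=0)         # analytic value is 10/3
```

    Potential effects of the kind studied in the paper enter through biased hopping probabilities in Q in place of the uniform 1/deg used in this sketch.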

  8. Optimizing laser beam profiles using micro-lens arrays for efficient material processing: applications to solar cells

    NASA Astrophysics Data System (ADS)

    Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij

    2009-02-01

    High power laser sources are used in various production tools for microelectronic products and solar cells, including annealing, lithography, edge isolation, and dicing and patterning applications. Besides the right choice of laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of great importance for achieving the right processing speed, quality, and yield. Equally important for industrial applications is an adequate understanding of the physics of the light-matter interaction behind the process. Advance simulations of tool performance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own software, founded on the Maxwell equations, that takes into account all important physical aspects of the laser-based process: the light source, the beam shaping optical system, and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optics beam shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles, and performance results are presented with a special emphasis on resilience, cost reduction, and process reliability.

  9. Coherent-subspace array processing based on wavelet covariance: an application to broad-band, seismo-volcanic signals

    NASA Astrophysics Data System (ADS)

    Saccorotti, G.; Nisii, V.; Del Pezzo, E.

    2008-07-01

    Long-Period (LP) and Very-Long-Period (VLP) signals are the most characteristic seismic signature of volcano dynamics, and provide important information about the physical processes occurring in magmatic and hydrothermal systems. These events are usually characterized by sharp spectral peaks, which may span several frequency decades, by emergent onsets, and by a lack of clear S-wave arrivals. The latter two features make both signal detection and location a challenging task. In this paper, we propose a processing procedure based on the Continuous Wavelet Transform of multichannel, broad-band data to simultaneously solve the signal detection and location problems. Our method consists of two steps. First, we apply a frequency-dependent threshold to the estimates of the array-averaged wavelet coherence (WCO) in order to locate the time-frequency regions spanned by coherent arrivals. For these data, we then use the time-series of the complex wavelet coefficients to derive the elements of the spatial Cross-Spectral Matrix. From the eigenstructure of this matrix, we eventually estimate the kinematic signal parameters using the MUltiple SIgnal Characterization (MUSIC) algorithm. The whole procedure greatly facilitates the detection and location of weak, broad-band signals, in turn avoiding the time-frequency resolution trade-off and frequency-leakage effects which affect conventional covariance estimates based upon the Windowed Fourier Transform. The method is applied to explosion signals recorded at Stromboli volcano by either a short-period, small-aperture antenna, or a large-aperture, broad-band network. The LP (0.2 < T < 2 s) components of the explosive signals are analysed using data from the small-aperture array and under the plane-wave assumption. In this manner, we obtain a precise time- and frequency-localization of the directional properties for waves impinging at the array. We then extend the wavefield decomposition method using a spherical wave front model, and analyse the VLP
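    The MUSIC step described above can be sketched in its simplest setting: a half-wavelength-spaced uniform linear array with a plain sample covariance in place of the paper's wavelet-derived cross-spectral matrix. The array geometry, source angle, and noise level are illustrative assumptions:

```python
import numpy as np

def music_doa(X, n_sources, n_grid=181):
    """Narrow-band MUSIC for a half-wavelength-spaced uniform linear
    array. X: (sensors x snapshots) complex baseband data. Projects
    steering vectors onto the noise subspace of the spatial covariance
    and returns the directions of the largest pseudo-spectrum peaks."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample spatial covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
    En = vecs[:, : M - n_sources]              # noise-subspace basis
    angles = np.linspace(-90.0, 90.0, n_grid)
    p = np.empty(n_grid)
    for k, th in enumerate(angles):
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(th)))
        p[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    # pick the n_sources largest local maxima of the pseudo-spectrum
    locs = [k for k in range(1, n_grid - 1)
            if p[k] >= p[k - 1] and p[k] >= p[k + 1]]
    locs.sort(key=lambda k: p[k], reverse=True)
    return np.sort(angles[locs[:n_sources]])

# simulate one plane wave arriving from 20 degrees on an 8-element array
rng = np.random.default_rng(0)
M, T = 8, 200
a20 = np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(20.0)))
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
X = np.outer(a20, s) + 0.01 * (rng.standard_normal((M, T))
                               + 1j * rng.standard_normal((M, T)))
est = music_doa(X, n_sources=1)
```

    In the paper's procedure the covariance matrix is instead assembled from complex wavelet coefficients in each coherent time-frequency region, which is what lets MUSIC operate on emergent, broad-band volcanic signals.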

  10. Processing of translational and rotational motions of surface waves: performance analysis and applications to single sensor and to array measurements

    NASA Astrophysics Data System (ADS)

    Maranò, Stefano; Fäh, Donat

    2014-01-01

    The analysis of rotational seismic motions has received considerable attention in recent years. Advances in sensor technology now allow us to measure the rotational components of the seismic wavefield directly, with improved accuracy and at an affordable cost. The analysis and study of rotational motions are, to a certain extent, less developed than other aspects of seismology because of a historical lack of instrumental observations, which stems both from the technical challenges involved in measuring rotational motions and from the widespread belief that rotational motions are insignificant. This paper addresses the joint processing of translational and rotational motions from both theoretical and practical perspectives. Our attention focuses on the analysis of Rayleigh-wave and Love-wave motions recorded by single sensors and by arrays of sensors. From the theoretical standpoint, analysis of Fisher information (FI) allows us to understand how the different measurement types contribute to the estimation of quantities of geophysical interest. In addition, we show how rotational measurements resolve an ambiguity in parameter estimation in the single-sensor setting. We quantify the achievable estimation accuracy by means of the Cramér-Rao bound (CRB). From the practical standpoint, a method for the joint processing of rotational and translational recordings to perform maximum likelihood (ML) estimation is presented. The proposed technique estimates parameters of Love waves and Rayleigh waves from single-sensor or array recordings. We support and illustrate our findings with a comprehensive collection of numerical examples. Applications to real recordings are also shown.
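
    The Fisher-information argument, that independent measurement types add information and thereby tighten the Cramér-Rao bound, can be illustrated with a deliberately simple scalar toy problem (not the paper's wavefield model); the two Gaussian channels and their noise levels below are assumptions standing in for translational and rotational measurements.

```python
import numpy as np

# Scalar parameter theta observed through two independent Gaussian
# channels with noise std s1 and s2. Fisher informations add, so the
# joint CRB 1/(I1+I2) is below either single-channel CRB.
theta, s1, s2 = 2.0, 1.0, 0.5
I1, I2 = 1.0 / s1**2, 1.0 / s2**2
crb_joint = 1.0 / (I1 + I2)            # CRB using both measurement types

rng = np.random.default_rng(7)
trials = 200000
z1 = theta + s1 * rng.standard_normal(trials)
z2 = theta + s2 * rng.standard_normal(trials)
# ML estimate = information-weighted average of the two channels;
# its empirical variance should match the joint CRB
est = (I1 * z1 + I2 * z2) / (I1 + I2)
print(crb_joint, est.var())
```

With these numbers the joint CRB is 0.2, below the 0.25 achievable from the better channel alone, mirroring how adding rotational data improves estimation accuracy.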

  11. An array method for detection, location and characterization of multi-scale seismic energy release associated to the deformation processes of active subduction zones

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Satriano, C.; Bernard, P.; Vilotte, J.; Obara, K.

    2013-12-01

    Detection, location and characterization of the seismic energy release associated with deformation processes in active subduction zones are fundamental for understanding the dynamics of active deformation and the mechanisms of generation and rupture of large subduction earthquakes. The statistical analysis of this seismic energy release, which spans a wide range of space and time scales as well as phenomena (e.g., earthquakes, seismic repeaters, low- and very-low-frequency earthquakes, tectonic tremors), can provide original insights into the problem. We developed a new methodology exploiting the frequency-selective coherence of the wavefield at dense seismic arrays and local antennas that leads to stable and reliable detection, blind source separation, and location of distributed non-stationary sources. The methodology consists of: (1) a signal-processing scheme yielding a simplified representation of a seismic signal through an adaptive time-frequency characterization of its statistical properties; (2) a fully probabilistic detection and location algorithm based on back projection of stacked local cross-correlations of the simplified signals. This approach has been developed and tested on the Shikoku region in Japan, an exceptional field laboratory owing to its high seismic activity, comprising a wide variety of phenomena observed by the dense Hi-net seismic network operated by NIED. We evaluate the capability and potential of the proposed methodology to detect, locate and characterize the energy release associated with possibly overlapping seismic radiation from earthquakes and low-frequency tectonic tremors. As a future direction, we also discuss an application to the International Maule Aftershock Deployment (IMAD) in Chile.
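
    Step (2), back projection, can be sketched as follows. This is a hedged toy version: four stations, a constant velocity and Gaussian-pulse characteristic functions stand in for the stacked local cross-correlations of the actual methodology, and the geometry is invented for illustration.

```python
import numpy as np

v, dt = 3.0, 0.01                       # assumed velocity (km/s), sample step (s)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src, t0 = np.array([6.0, 3.0]), 1.0     # true epicentre (km) and origin time (s)

t = np.arange(0, 10, dt)
def envelope(arr_time):                 # smooth characteristic function
    return np.exp(-0.5 * ((t - arr_time) / 0.05) ** 2)
traces = [envelope(t0 + np.linalg.norm(s - src) / v) for s in stations]

xs = np.arange(0, 10.5, 0.5)            # candidate grid nodes
best, best_xy = -1.0, None
for x in xs:
    for y in xs:
        node = np.array([x, y])
        # shift each trace back by that node's travel time, then stack;
        # the stack peaks only where all arrivals align coherently
        stack = sum(np.interp(t, t - np.linalg.norm(s - node) / v, tr)
                    for s, tr in zip(stations, traces))
        if stack.max() > best:
            best, best_xy = stack.max(), (x, y)
print(best_xy)
```

The grid node that maximizes the stacked, travel-time-aligned characteristic functions recovers the synthetic source location.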

  13. Phoneme restoration and empirical coverage of Interactive Activation and Adaptive Resonance models of human speech processing.

    PubMed

    Grossberg, Stephen; Kazerounian, Sohrob

    2016-08-01

    Magnuson [J. Acoust. Soc. Am. 137, 1481-1492 (2015)] makes claims for Interactive Activation (IA) models and against Adaptive Resonance Theory (ART) models of speech perception. Magnuson also presents simulations that claim to show that the TRACE model can simulate phonemic restoration, which was an explanatory target of the cARTWORD ART model. The theoretical analysis and review herein show that these claims are incorrect. More generally, the TRACE and cARTWORD models illustrate two diametrically opposed types of neural models of speech and language. The TRACE model embodies core assumptions with no analog in known brain processes. The cARTWORD model defines a hierarchy of cortical processing regions whose networks embody cells in laminar cortical circuits as part of the paradigm of laminar computing. cARTWORD further develops ART speech and language models that were introduced in the 1970s. It builds upon Item-Order-Rank working memories, which activate learned list chunks that unitize sequences to represent phonemes, syllables, and words. Psychophysical and neurophysiological data support Item-Order-Rank mechanisms and contradict TRACE representations of time, temporal order, silence, and top-down processing that exhibit many anomalous properties, including hallucinations of non-occurring future phonemes. Computer simulations of the TRACE model are presented that demonstrate these failures.

  14. Power and Performance Trade-offs for Space Time Adaptive Processing

    SciTech Connect

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino; Tallent, Nathan R.; Kerbyson, Darren J.; Hoisie, Adolfy

    2015-07-27

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing radar processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations in CUDA and OpenMP on two computationally efficient architectures: an Intel Haswell Core i7-4770TE and an NVIDIA Kayla platform with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds, and we show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for an efficient implementation on the Haswell CPU architecture, while the GPU architecture is able to process large data sets without an increase in power requirements. The use of shared memory has a significant impact on the power requirements of the GPU; a balance between the use of shared memory and main-memory access leads to improved performance in a typical STAP application.
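
    The adaptive-weight computation at the heart of STAP can be reduced, for illustration, to its spatial-only (MVDR / sample-matrix-inversion) core: w = R⁻¹v / (vᴴR⁻¹v) keeps unit gain in the look direction while placing nulls on interference. The array size, jammer angle and power levels below are assumptions, not values from the paper.

```python
import numpy as np

def steering(n, deg):
    """Half-wavelength ULA steering vector for arrival angle deg."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(np.radians(deg)))

M = 10                                  # assumed number of array channels
look, jam = 0.0, 25.0                   # look direction and jammer angle
v = steering(M, look)
j = steering(M, jam)
# interference-plus-noise covariance: strong jammer on a unit noise floor
R = 100.0 * np.outer(j, j.conj()) + np.eye(M)

Ri_v = np.linalg.solve(R, v)            # R^{-1} v without explicit inverse
w = Ri_v / (v.conj() @ Ri_v)            # MVDR weights

gain_look = abs(w.conj() @ v)           # constrained to unity
gain_jam = abs(w.conj() @ j)            # should be deeply nulled
print(gain_look, gain_jam)
```

The weights hold the look-direction gain at exactly one while suppressing the jammer by several orders of magnitude, which is the null-steering behavior STAP generalizes to joint space-time data.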

  15. Design Process of Flight Vehicle Structures for a Common Bulkhead and an MPCV Spacecraft Adapter

    NASA Technical Reports Server (NTRS)

    Aggarwal, Pravin; Hull, Patrick V.

    2015-01-01

    Designing and manufacturing space flight vehicle structures is a skillset that has grown considerably at NASA during the last several years. Beginning with the Ares program and followed by the Space Launch System (SLS), in-house designs were produced for both the Upper Stage and the SLS Multipurpose Crew Vehicle (MPCV) spacecraft adapter. Specifically, critical design review (CDR) level analysis and flight production drawings were produced for the above-mentioned hardware. In particular, the experience of this in-house design work led to increased manufacturing infrastructure for both Marshall Space Flight Center (MSFC) and the Michoud Assembly Facility (MAF), improved skillsets in both analysis and design, and hands-on experience in building and testing full-scale (MSA) hardware. The hardware design and development processes, from initiation through CDR and finally flight, resulted in many challenges and experiences that produced valuable lessons. This paper builds on NASA's recent experiences in designing and fabricating flight hardware and examines the design/development processes used, as well as the challenges and lessons learned, from the initial design, loads estimation and mass constraints, through structural optimization and affordability, to the release of production drawings and hardware manufacturing. While there are many documented design processes that a design engineer can follow, these unique experiences can offer insight into designing hardware in current program environments and present solutions to many of the challenges experienced by the engineering team.

  16. An adaptive process-based cloud infrastructure for space situational awareness applications

    NASA Astrophysics Data System (ADS)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defensive space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate to meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines, together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction operates on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper, and the design rationale and a prototype are examined in detail. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible cloud computing resource allocation are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  18. An Array Processing Theory of Memory, Thought, and Behavior Patterning: A Radically Reconstructive View.

    ERIC Educational Resources Information Center

    Allison, Dennis J.

    A theory of memory is introduced that seeks to respond to the shortcomings of existing theories based on metaphors. Memory is presented as a mechanism: a comparison process in which information held in some form of immediate storage (whether based on perception, previous cognition, or both) is compared with information previously placed in long-term storage.…

  19. Microfluidic chemical processing with on-chip washing by deterministic lateral displacement arrays with separator walls

    PubMed Central

    Chen, Yu; D'Silva, Joseph; Austin, Robert H.; Sturm, James C.

    2015-01-01

    We describe a microfluidic device for on-chip chemical processing, such as staining, and subsequent washing of cells. The paper introduces "separator walls" to increase the on-chip incubation time and to improve the quality of washing. Cells of interest are concentrated into a treatment stream of chemical reagents at the first separator wall for extended on-chip incubation without causing excess contamination at the output due to diffusion of the unreacted treatment chemicals, and are then directed to the washing stream before final collection. The second separator wall further reduces the output contamination from diffusion into the washing stream. With this approach, we demonstrate on-chip leukocyte staining with Rhodamine 6G and subsequent washing. The results suggest that other conventional biological and analytical processes could be replaced by the proposed device. PMID:26396659

  20. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which provides a substantially improved cell packing factor. Potential shaded-cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined, and equipment design and specification work was completed. SAMICS cost analysis work accelerated, Format A's were prepared, and computer simulations were completed. Design work on the automated cell interconnect station focused on bond technique selection experiments.

  1. Research on detection method of end gap of piston rings based on area array CCD and image processing

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Wang, Zhong; Liu, Qi; Li, Lin

    2012-01-01

    The piston ring is one of the most important parts in an internal combustion engine, and the width of its end gap is an important parameter that must be inspected piece by piece. In comparison with previous measurements of the end gap, a new, efficient detection method is presented based on computer vision and image-processing theory. This paper describes the framework and measuring principle of the measurement system, with emphasis on the image-processing algorithm. First, a partial image of the piston-ring end gap is acquired by the area-array CCD; second, the single-pixel-connected end-gap edge contours are obtained by grayscale threshold segmentation, mathematical-morphology edge detection, contour tracing and other image-processing tools; finally, the distance between the two end-gap edge contour lines is calculated using least-squares straight-line fitting. Repeated experiments have shown that the measurement accuracy can reach 0.01 mm. Moreover, the detection efficiency of an automatic inspection instrument for piston-ring parameters based on this method can reach 10-12 pieces/min.
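
    The measurement chain described above (threshold segmentation, edge-contour extraction, straight-line fitting, edge-to-edge distance) can be sketched on synthetic data; the image, gap geometry and the millimetres-per-pixel calibration below are assumptions for illustration only.

```python
import numpy as np

# synthetic grayscale image: bright ring material with a dark 12-px end gap
img = np.full((100, 200), 200, dtype=np.uint8)
gap_left, gap_right = 90, 102           # true gap columns [90, 101]
img[:, gap_left:gap_right] = 20

binary = img < 128                      # grayscale threshold segmentation
# per-row edge localisation: first/last dark column in each row
left_edge = np.array([np.flatnonzero(r)[0] for r in binary])
right_edge = np.array([np.flatnonzero(r)[-1] for r in binary])

rows = np.arange(img.shape[0])
# least-squares straight-line fit col = a*row + b for each edge contour
aL, bL = np.polyfit(rows, left_edge, 1)
aR, bR = np.polyfit(rows, right_edge, 1)
# mean separation of the two fitted lines (+1 to count both edge columns)
width_px = np.mean((aR * rows + bR) - (aL * rows + bL)) + 1
mm_per_px = 0.01                        # assumed calibration factor
print(width_px, width_px * mm_per_px)
```

On this synthetic frame the fitted lines recover the 12-pixel gap exactly; in the real instrument the calibration factor comes from the optical setup rather than being assumed.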

  2. Development of a Process for a High Capacity Arc Heater Production of Silicon for Solar Arrays

    NASA Technical Reports Server (NTRS)

    Reed, W. H.

    1979-01-01

    A program was established to develop a high temperature silicon production process using existing electric arc heater technology. Silicon tetrachloride and a reductant (sodium) are injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction is expected to occur and proceed essentially to completion, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection were developed. Included in this report are: test system preparation; testing; injection techniques; kinetics; reaction demonstration; conclusions; and the project status.

  3. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated, and it was found that this design exhibited greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed toward developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. The thermal model for the heat-loss modes from the dendritic web was also examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  4. Earthquake processes and geologic structure of the San Andreas Fault at Parkfield through the SAFOD seismic array

    NASA Astrophysics Data System (ADS)

    Chavarria, Juan Andres

    The San Andreas Fault Observatory at Depth (SAFOD) has the goal of understanding earthquake processes at hypocentral depths. In July 2002, Duke University installed a vertical array of seismometers in the SAFOD Pilot Hole (PH). Seismograms recorded by the array give insights into the structure of the SAFOD site. The ratios of P- to S-wave velocities (Vp/Vs) along the array suggest the presence of two faults intersecting the PH. The Vp/Vs ratios also depend on source location, with high values for sources to the northwest along the San Andreas and lower ones to the southeast. This distribution correlates with high and low creep rates along the SAF. Since higher Vp/Vs ratios can be produced by increasing fluid saturation, this effect could be the one driving the frequent seismicity and creep along this segment of the fault. The SAFOD PH vertical seismic profiling (VSP) seismograms from nearby microearthquake and explosion sources also contain secondary signals between the P- and S-waves. These signals are shown to be P and S waves scattered by the local structure. Kirchhoff migration was applied to define the origin points of these scattered signals. Both 2D and 3D analyses of microearthquake and explosion seismograms showed that the scattering points form planar surfaces, interpreted as a vertical San Andreas Fault and four other secondary faults forming a flower structure. These structures, along with seismicity located on secondary fault strands, suggest that stresses along the San Andreas at Parkfield could be distributed in more complex ways, modifying the local earthquake cycle. Modeling of the scattered phases indicates strong geologic contrasts that have recently been drilled by SAFOD. A granite-sediment interface may constitute the boundary of a hanging block containing sedimentary materials with low electrical resistivities. Shallow earthquakes at Parkfield take place at the northeastern boundary of this block, adjacent to the San Andreas Fault.

  5. Predicting Health Care Cost Transitions Using a Multidimensional Adaptive Prediction Process.

    PubMed

    Guo, Xiaobo; Gandy, William; Coberley, Carter; Pope, James; Rula, Elizabeth; Wells, Aaron

    2015-08-01

    Managing population health requires meeting individual care needs while striving for increased efficiency and quality of care. Predictive models can integrate diverse data to provide objective assessment of individual prospective risk to identify individuals requiring more intensive health management in the present. The purpose of this research was to develop and test a predictive modeling approach, Multidimensional Adaptive Prediction Process (MAPP). MAPP is predicated on dividing the population into cost cohorts and then utilizing a collection of models and covariates to optimize future cost prediction for individuals in each cohort. MAPP was tested on 3 years of administrative health care claims starting in 2009 for health plan members (average n=25,143) with evidence of coronary heart disease. A "status quo" reference modeling methodology applied to the total annual population was established for comparative purposes. Results showed that members identified by MAPP contributed $7.9 million and $9.7 million more in 2011 health care costs than the reference model for cohorts increasing in cost or remaining high cost, respectively. Across all cohorts, the additional accurate cost capture of MAPP translated to an annual difference of $1882 per member, a 21% improvement, relative to the reference model. The results demonstrate that improved future cost prediction is achievable using a novel adaptive multiple model approach. Through accurate prospective identification of individuals whose costs are expected to increase, MAPP can help health care entities achieve efficient resource allocation while improving care quality for emergent need individuals who are intermixed among a diverse set of health care consumers.
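
    The core idea, fitting per-cohort models rather than one population-wide model, can be illustrated with synthetic data; the cost distribution, the median-based cohort rule and the simple linear models below are assumptions for illustration, not the paper's actual MAPP components.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
base = rng.lognormal(mean=7.0, sigma=1.0, size=n)     # current-year cost
# future cost follows a different linear law in low- vs high-cost cohort
high = base > np.median(base)
future = np.where(high, 1.3 * base, 0.6 * base) + rng.normal(0, 50, n)

def fit_predict(x, y, xq):
    """Fit a simple linear model y ~ a*x + b and predict at xq."""
    a, b = np.polyfit(x, y, 1)
    return a * xq + b

# "status quo" reference: one model over the whole population
global_pred = fit_predict(base, future, base)
# MAPP-style: a separate model per cost cohort
cohort_pred = np.empty(n)
for mask in (high, ~high):
    cohort_pred[mask] = fit_predict(base[mask], future[mask], base[mask])

mae_g = np.mean(np.abs(global_pred - future))
mae_c = np.mean(np.abs(cohort_pred - future))
print(mae_g, mae_c)
```

Because the cost dynamics differ between cohorts, the per-cohort models achieve a much lower mean absolute error than the single global fit, which is the mechanism behind MAPP's improved cost capture.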

  6. Adapting Rational Unified Process (RUP) approach in designing a secure e-Tendering model

    NASA Astrophysics Data System (ADS)

    Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman

    2016-08-01

    e-Tendering is the electronic processing of tender documents via the internet, allowing tenderers to publish, communicate, access, receive and submit all tender-related information and documentation online. This study aims to design an e-Tendering system using the Rational Unified Process (RUP) approach. RUP provides a disciplined approach to assigning tasks and responsibilities within the software development process. RUP has four phases that can help researchers adjust the requirements of projects with different scopes, problems and sizes. RUP is characterized as a use-case-driven, architecture-centered, iterative and incremental process model. However, the scope of this study covers only the Inception and Elaboration phases as steps to develop the model, and performs only three of the nine workflows (business modeling, requirements, and analysis and design). RUP has a strong focus on documents, and the activities in the Inception and Elaboration phases mainly concern the creation of diagrams and the writing of textual descriptions. The UML notation and the software program StarUML are used to support the design of e-Tendering. The e-Tendering design based on the RUP approach can benefit e-Tendering developers and researchers in the e-Tendering domain. In addition, this study also shows that RUP is one of the best system development methodologies that can be used as a research methodology in the Software Engineering domain for the secure design of any observed application. This methodology has been tested in various studies in certain domains, such as Simulation-based Decision Support, Security Requirement Engineering, Business Modeling and Secure System Requirement, and so forth. In conclusion, these studies showed that RUP is a good research methodology that can be adapted in any Software Engineering (SE) research domain that requires a few artifacts to be generated, such as use case modeling, misuse case modeling, activity

  7. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The use of an excimer laser and high temperature short time diffusion experiment were both more successful than the diffusion at standard temperature and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  8. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics). PMID:24806652
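
    A hedged, real-valued analogue of the collaborative approach is sketched below: two LMS filters with different step sizes are combined through a convex mixing parameter λ = σ(a) that is itself adapted by stochastic gradient, the same mechanism the paper uses to combine QLMS and WL-QLMS. The quaternion and widely linear machinery is omitted, and the plant, step sizes and signal model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2])           # unknown plant to identify (assumed)
N, L = 4000, 3
x = rng.standard_normal(N + L)

w_fast = np.zeros(L)                     # fast LMS (large step size)
w_slow = np.zeros(L)                     # slow LMS (small step size)
mu_fast, mu_slow, mu_a = 0.1, 0.01, 10.0
a = 0.0                                  # lambda = sigmoid(a), starts at 0.5
err = []
for n in range(N):
    u = x[n : n + L]
    d = h @ u                            # noise-free desired output
    y1, y2 = w_fast @ u, w_slow @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y1 + (1.0 - lam) * y2      # convex combination output
    e = d - y
    # each component filter adapts on its own error
    w_fast += mu_fast * (d - y1) * u
    w_slow += mu_slow * (d - y2) * u
    # stochastic-gradient update of the mixing parameter
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)
    a = np.clip(a, -4.0, 4.0)            # keep the sigmoid adaptable
    err.append(float(e * e))

print(np.mean(err[-200:]))
```

Monitoring λ over time (as the paper does for properness) reveals which component filter the combination currently trusts; here the combination's steady-state error falls to near zero once both filters identify the plant.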

  9. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
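
    The pixel-based coefficient selection rule, keeping at each position the detail coefficient of larger magnitude, can be shown with a single-level Haar step standing in (as an assumption) for the shift-invariant DWT; the two toy signals represent a sharp Pan edge and its blurred NIR counterpart.

```python
import numpy as np

def haar_1lvl(x):
    """One level of the orthogonal-in-spirit Haar split: average + detail."""
    a = (x[0::2] + x[1::2]) / 2.0        # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / 2.0        # detail (high-pass)
    return a, d

def inv_haar(a, d):
    out = np.empty(a.size * 2)
    out[0::2], out[1::2] = a + d, a - d  # exact reconstruction
    return out

pan = np.array([0., 0., 0., 8., 8., 8., 8., 0.])   # sharp edge (high-res Pan)
nir = np.array([2., 2., 3., 5., 6., 6., 5., 3.])   # blurred NIR counterpart

aP, dP = haar_1lvl(pan)
aN, dN = haar_1lvl(nir)
# max-absolute selection rule on details; NIR approximation kept so the
# fused band retains the multispectral radiometry
d_f = np.where(np.abs(dP) > np.abs(dN), dP, dN)
fused = inv_haar(aN, d_f)
print(fused)
```

The fused signal keeps the NIR's smooth background levels but inherits the Pan band's sharp transitions, which is the intended sharpening effect; contrast reversals between the bands are exactly the case where this simple rule fails and the paper's edge-correlation refinement helps.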

  10. Adaptive Signal Processing Testbed: VME-based DSP board market survey

    NASA Astrophysics Data System (ADS)

    Ingram, Rick E.

    1992-04-01

    The Adaptive Signal Processing Testbed (ASPT) is a real-time multiprocessor system utilizing digital signal processor technology on VMEbus-based printed circuit boards installed on a Sun workstation. The ASPT has specific requirements, particularly as regards the signal excision application, with respect to interfacing with current and planned data-generation equipment, processing of the data, storage to disk of final and intermediate results, and the development tools for applications development and integration into the overall EW/COM computing environment. A prototype ASPT was implemented using three VME-C-30 boards from Applied Silicon. Experience gained during the prototype development led to the conclusion that interprocessor communications capability is the most significant contributor to overall ASPT performance, and that host involvement should be minimized. Boards using different processors were evaluated with respect to the ASPT system requirements, pricing, and availability. Specific recommendations based on various priorities are made, as well as recommendations concerning the integration and interaction of various tools developed during the prototype implementation.

  11. Flat-plate solar array project process development area, process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1984-01-01

    The program is designed to investigate the fabrication of solar cells on N-type base material by a simultaneous diffusion of N-type and P-type dopants to form a P(+)NN(+) structure. The results of simultaneous diffusion experiments are being compared to cells fabricated using sequential diffusion of dopants into N-base material in the same resistivity range. The process used for the fabrication of the simultaneously diffused P(+)NN(+) cells follows the standard Westinghouse baseline sequence for P-base material, except that the two diffusion processes (boron and phosphorus) are replaced by a single diffusion step. All experiments are carried out on N-type dendritic web grown in the Westinghouse pre-pilot facility, with resistivities varying from 0.5 Ω-cm to 5 Ω-cm. The dopant sources used for both the simultaneous and sequential diffusion experiments are commercial metallorganic solutions with phosphorus or boron components. After these liquids are applied to the web surface, they are baked to form a hard glass that acts as a diffusion source at elevated temperatures. In experiments performed thus far, cells produced in sequential diffusion tests have properties essentially equal to the baseline N(+)PP(+) cells. However, the simultaneous diffusions have produced cells with much poorer I-V characteristics, mainly because of cross-doping of the sources at the diffusion temperature: the high-vapor-pressure phosphorus (applied as a metallorganic to the back surface) diffuses through the SiO2 mask and then acts as a diffusant source for the front surface.

  12. Direct growth of comet-like superstructures of Au-ZnO submicron rod arrays by solvothermal soft chemistry process

    SciTech Connect

    Shen, Liming; Bao, Ningzhong; Yanagisawa, Kazumichi; Zheng, Yanqing; Domen, Kazunari; Gupta, Arunava; Grimes, Craig A.

    2007-01-15

    The synthesis, characterization, and proposed growth process of a new kind of comet-like Au-ZnO superstructure are described here. The superstructure was created directly by a simple and mild solvothermal reaction, dissolving the reactants zinc acetate dihydrate and hydrogen tetrachloroaurate tetrahydrate (HAuCl4·4H2O) in ethylenediamine and taking advantage of lattice-matched growth between specific ZnO and Au planes and the natural growth habit of ZnO rods along the [001] direction in solution. For a typical comet-like Au-ZnO superstructure, the comet head consists of one hemispherical end of a central thick ZnO rod and an outer Au-ZnO thin layer, and the comet tail consists of radially standing ZnO submicron rod arrays growing on the Au-ZnO thin layer. These ZnO rods have diameters in the range 0.2-0.5 μm, an average aspect ratio of about 10, and lengths of up to about 4 μm. The morphology, size, and structure of the ZnO superstructures depend on the concentration of reactants and the reaction time. The HAuCl4·4H2O plays a key role in the solvothermal growth of the comet-like superstructure; only ZnO fibers are obtained in its absence. The UV-vis absorption spectrum shows two absorption bands at 365-390 nm and 480-600 nm, attributed respectively to the characteristic absorption of the wide-band-gap ZnO semiconductor and to the surface plasmon resonance of the Au particles. - Graphical abstract: One-step solvothermal synthesis of novel comet-like superstructures of radially standing ZnO submicron rod arrays.

  13. Development of a process for high capacity arc heater production of silicon for solar arrays

    NASA Technical Reports Server (NTRS)

    Meyer, T. N.

    1980-01-01

    A high temperature silicon production process using existing electric arc heater technology is discussed. Silicon tetrachloride and a reductant, liquid sodium, were injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction occurred, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection of the molten silicon were developed. The desired degree of separation was not achieved. The electrical, control and instrumentation, cooling water, gas, SiCl4, and sodium systems are discussed. The plasma reactor, silicon collection, effluent disposal, the gas burnoff stack, and decontamination and safety are also discussed. Procedure manuals, shakedown testing, data acquisition and analysis, product characterization, disassembly and decontamination, and component evaluation are reviewed.

  14. [Mindfulness--the presentation of psychotherapeutic methods in the process of adaptation to the disease and treatment].

    PubMed

    Syska-Bielak, Anna; Zawadzka, Barbara

    2014-01-01

    The article describes the case of a 21-year-old patient hospitalized with a diagnosis of acute myeloid leukemia. The purpose of this paper is the presentation of mindfulness training as a method supporting the process of adaptation to the disease and treatment. The results of the psychological analysis showed that at the commencement of psychotherapy the patient was in the initial stage of the process of adaptation to the disease and treatment. The ways he coped with the disease exacerbated the negative side effects of treatment; if they became established, they could have led to an abnormal course of adaptation and the emergence of dysfunctional behavior. It was therefore decided to apply psychotherapy. The training helped change the cognitive and emotional experience of the disease and was, in the subjective perception of the patient, a beneficial way to change the regulation of emotion in the healing process. PMID:25344981

  15. Adaptation of the Haloarcula hispanica CRISPR-Cas system to a purified virus strictly requires a priming process.

    PubMed

    Li, Ming; Wang, Rui; Zhao, Dahe; Xiang, Hua

    2014-02-01

    The clustered regularly interspaced short palindromic repeat (CRISPR)-Cas system mediates adaptive immunity against foreign nucleic acids in prokaryotes. However, efficient adaptation of a native CRISPR to purified viruses has only been observed for the type II-A system from an industrial Streptococcus thermophilus strain, and has rarely been reported for laboratory strains. Here, we provide a second native system showing efficient adaptation. When infected by the newly isolated virus HHPV-2, the Haloarcula hispanica type I-B CRISPR system acquired spacers discriminatively from viral sequences. Unexpectedly, in addition to Cas1, Cas2 and Cas4, this process also requires Cas3 and at least some of the Cascade proteins, which are involved in interference and/or CRISPR RNA maturation. Intriguingly, a preexisting spacer partially matching a viral sequence is also required, and spacer acquisition from sequences upstream and downstream of its target sequence (i.e., the priming protospacer) shows different strand bias. This evidence strongly indicates that adaptation in this system strictly requires a priming process. This requirement, if also validated for other CRISPR systems as implied by our bioinformatic analysis, may help to explain failures to observe efficient adaptation to purified viruses in many laboratory strains, as well as the discrimination mechanism at the adaptation level that has puzzled scientists for years.

  16. Adaptive three-dimensional range-crossrange-frequency filter processing string for sea mine classification in side scan sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernandez, Manuel F.; Dobeck, Gerald J.

    1997-07-01

    An automatic, robust, adaptive clutter suppression, predetection-level fusion, sea mine detection and classification processing string has been developed and applied to shallow-water side-scan sonar imagery data. The overall processing string includes pre-processing, adaptive clutter filtering (ACF), 2D normalization, detection, feature extraction, and classification processing blocks. The pre-processing block contains automatic gain control, data decimation, and data alignment processing. The ACF is a multi-dimensional adaptive linear FIR filter, optimal in the least-squares sense, for simultaneous background clutter suppression and preservation of an average peak target signature. After data alignment, using a 3D ACF enables simultaneous multiple-frequency data fusion and clutter suppression in the composite frequency-range-crossrange domain. Following 2D normalization, detection consists of thresholding, clustering of exceedances, and limiting their number. Finally, features are extracted and an orthogonalization transformation is applied to the data, enabling an efficient application of the optimal log-likelihood-ratio-test (LLRT) classification rule. The utility of the overall processing string was demonstrated with two side-scan sonar data sets. The ACF, feature orthogonalization, LLRT-based classification processing string provided average probability of correct mine classification and false-alarm-rate performance exceeding that obtained by an expert sonar operator. The overall processing string can be easily implemented in real time using COTS technology.
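    The ACF's stated design goal, suppressing background clutter in the least-squares sense while preserving an average peak target signature, corresponds to the standard linearly constrained minimum-variance weight solution w = R⁻¹s / (sᵀR⁻¹s). A minimal NumPy sketch under that reading (not the authors' code; `design_acf`, the snapshot data, and the toy signature are hypothetical):

```python
import numpy as np

def design_acf(clutter_snapshots, target_sig, loading=1e-3):
    """Minimum-variance FIR weights: minimize w^T R w subject to
    w^T s = 1, with R the clutter covariance estimated from
    clutter-only snapshots and s the average peak target signature.
    Diagonal loading keeps the inversion well conditioned."""
    X = np.asarray(clutter_snapshots, float)       # (n_snapshots, n_taps)
    R = X.T @ X / len(X)
    R += loading * np.trace(R) / R.shape[0] * np.eye(R.shape[0])
    w = np.linalg.solve(R, target_sig)
    return w / (target_sig @ w)                    # enforce unit target gain

rng = np.random.default_rng(0)
s = np.array([0.0, 1.0, 2.0, 1.0, 0.0])           # toy peak target signature
clutter = rng.normal(size=(500, 5)) * np.array([3.0, 1, 1, 1, 3.0])
w = design_acf(clutter, s)
print(np.isclose(w @ s, 1.0))                     # True: target preserved
```

    The 3D version described in the abstract applies the same construction with taps spanning the frequency, range, and crossrange dimensions.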

  17. Radar imaging and high-resolution array processing applied to a classical VHF-ST profiler

    NASA Astrophysics Data System (ADS)

    Hélal, D.; Crochet, M.; Luce, H.; Spano, E.

    2001-01-01

    Among the spaced-antenna methods used in the field of atmospheric studies, radar interferometry has been of great interest to many authors. A first approach is to use the phase information contained in the cross-spectra between antenna output signals to retrieve the direction of arrival (DOA) of discrete scatterers. A second introduces a phase shift between the antenna signals in order to steer the main beam of the antenna towards a desired direction. This paper deals with the latter technique and presents a variant of postset beam steering (PBS) that does not require a multi-receiver system. Instead, the data samples are taken alternately from each antenna by means of high-commutation-rate switches inserted before a single receiver. This low-cost technique is called ``sequential PBS'' (SPBS) and has been implemented on two classical VHF-ST radars. The present paper shows that the high flexibility of SPBS in angular scanning allows radar imaging to be performed. Despite a limited maximum range due to the antenna scanning, the collected data give a view of the boundary layer and the lower troposphere over a wide horizontal extent, with characteristic horizontally stratified structures in the lower troposphere. These structures are also detected by applying high-resolution imaging processing such as Capon's beamforming or the Multiple Signal Classification (MUSIC) algorithm. The proposed method can be a simple way to enhance the versatility of classical DBS radars in order to extend them for multi-sensor applications and local meteorology.

  18. Process Research On Polycrystalline Silicon Material (PROPSM). [flat plate solar array project

    NASA Technical Reports Server (NTRS)

    Culik, J. S.

    1983-01-01

    The performance-limiting mechanisms in large-grain (greater than 1 to 2 mm in diameter) polycrystalline silicon solar cells were investigated by fabricating a matrix of 4 sq cm solar cells of various thicknesses from 10 cm x 10 cm polycrystalline silicon wafers of several bulk resistivities. Analysis of the illuminated I-V characteristics of these cells suggests that bulk recombination is the dominant factor limiting the short-circuit current. The average open-circuit voltage of the polycrystalline solar cells is 30 to 70 mV lower than that of co-processed single-crystal cells; the fill factor is comparable. Both the open-circuit voltage and fill factor of the polycrystalline cells show substantial scatter that is not related to either thickness or resistivity, implying that these characteristics are sensitive to an additional mechanism that is probably spatial in nature. A damage-gettering heat-treatment improved the minority-carrier diffusion length in low-lifetime polycrystalline silicon; however, extended high-temperature heat-treatment degraded the lifetime.

  19. Parallel processing of Eulerian-Lagrangian, cell-based adaptive method for moving boundary problems

    NASA Astrophysics Data System (ADS)

    Kuan, Chih-Kuang

    In this study, issues and techniques related to the parallel processing of the Eulerian-Lagrangian method for multi-scale moving boundary computation are investigated. The scope of the study consists of the Eulerian approach for field equations, explicit interface tracking, Lagrangian interface modification and reconstruction algorithms, and a cell-based unstructured adaptive mesh refinement (AMR) in a distributed-memory computation framework. We decomposed the Eulerian domain spatially along with AMR to balance the computational load of solving the field equations, which is the primary cost of the entire solver. The Lagrangian domain is partitioned based on marker vicinities with respect to the Eulerian partitions to minimize inter-processor communication. Overall, the performance of an Eulerian task peaks at 10,000-20,000 cells per processor, and this is the upper bound on the performance of the Eulerian-Lagrangian method. Moreover, the load imbalance of the Lagrangian task is not as influential on overall performance as the communication overhead of the Eulerian-Lagrangian tasks. To assess the parallel processing capabilities, a high-Weber-number drop collision is simulated. The high convective-to-viscous length scale ratios result in disparate length scale distributions; together with the moving and topologically irregular interfaces, the computational tasks require temporally and spatially resolved treatment adaptively. The techniques presented enable us to perform original studies to meet such computational requirements. Coalescence, stretch, and break-up of satellite droplets due to the interfacial instability are observed in the current study, and the history of interface evolution is in good agreement with the experimental data. The competing mechanisms of primary and secondary droplet break-up, along with the gas-liquid interfacial dynamics, are systematically investigated. This study shows that Rayleigh-Taylor instability on the edge of an extruding sheet
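    The two partitioning steps described, balancing Eulerian cell counts across ranks and placing each Lagrangian marker on the rank that owns its neighborhood, can be illustrated with a toy 1-D decomposition. This is a simplified sketch, not the study's implementation; the function names and data are hypothetical:

```python
import numpy as np

def partition_columns(cell_counts, n_ranks):
    """Block-partition a row of (AMR-refined) columns so each rank owns
    a near-equal number of Eulerian cells: cut the cumulative cell
    count at multiples of total/n_ranks."""
    targets = cell_counts.sum() * np.arange(1, n_ranks) / n_ranks
    cuts = np.searchsorted(np.cumsum(cell_counts), targets) + 1
    return np.concatenate(([0], cuts, [len(cell_counts)]))

def marker_owner(marker_col, bounds):
    """Assign each Lagrangian interface marker to the rank owning its
    column, keeping markers next to their Eulerian cells and so
    minimizing inter-processor communication."""
    return np.searchsorted(bounds, marker_col, side='right') - 1

# columns refined near the interface carry more cells
counts = np.array([10, 1, 1, 10])
bounds = partition_columns(counts, 2)      # rank boundaries in column index
owners = marker_owner(np.array([0, 1, 2, 3]), bounds)
print(bounds.tolist(), owners.tolist())    # [0, 2, 4] [0, 0, 1, 1]
```

    A production solver would do this in 2-D/3-D over the AMR tree and rebalance as refinement moves with the interface, but the cost model is the same: cells drive the Eulerian balance, marker vicinity drives the Lagrangian placement.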

  20. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode arrays (MEAs) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity of in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence-channel images using a combination of segmentation by thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted-light-channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute quantitatively useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image-processing parameter estimation.
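    The circular Hough transform used here to locate microelectrodes can be sketched in plain NumPy for the simplest case of a single known radius: every edge pixel votes for all centers that could have produced it, and the accumulator peak marks the electrode. This is an illustrative toy, not the paper's implementation (which would typically also scan over candidate radii); the function name and toy data are hypothetical:

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape):
    """Vote-based circular Hough transform at a known radius: each
    edge pixel votes along the circle of possible centers around it;
    the accumulator peak is the detected circle (electrode) center."""
    acc = np.zeros(shape, int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)       # accumulate votes
    return np.unravel_index(acc.argmax(), acc.shape)

# toy example: edge points on a circle of radius 10 centred at (40, 60)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.column_stack([40 + 10 * np.sin(t), 60 + 10 * np.cos(t)])
centre = hough_circle_centers(pts, 10, (100, 100))
print(int(centre[0]), int(centre[1]))   # peak at (or within a pixel of) (40, 60)
```

    On real transmitted-light images the edge points would come from an edge detector, and a library routine such as scikit-image's `hough_circle` would normally replace this hand-rolled accumulator.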