Science.gov

Sample records for adaptive array processing

  1. Study Of Adaptive-Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Satorius, Edgar H.; Griffiths, Lloyd

    1990-01-01

    Report describes study of adaptive signal-processing techniques for suppression of mutual satellite interference in a mobile (on-ground)/satellite communication system. Presents analyses and numerical simulations of the performances of two approaches to interference-suppression signal processing: one known as "adaptive sidelobe canceling," the other called "adaptive temporal processing."

  2. Motion compensation for adaptive horizontal line array processing

    NASA Astrophysics Data System (ADS)

    Yang, T. C.

    2003-01-01

    Large-aperture horizontal line arrays have small resolution cells and can be used to separate a target signal from an interference signal by array beamforming. High-resolution adaptive array processing can be used to place a null at the interference signal so that the array gain can be much higher than that of conventional beamforming. But these benefits are significantly degraded by source motion, which reduces the time period over which the environment can be considered stationary from the array processing point of view. For adaptive array processing, a large number of data samples are generally required to minimize the variance of the cross-spectral density, or the covariance matrix, between the array elements. For a moving source and interference, the penalty of integrating over a large number of samples is the spread of signal and interference energy over more than one or two eigenvalues. The signal and interference are no longer clearly identified by the eigenvectors and, consequently, the ability to suppress the interference suffers. We show in this paper that the effect of source motion can be compensated for in the (signal) beam covariance matrix, thus allowing integration over a large number of data samples without loss in the signal beam power. We employ an equivalent of a rotating coordinate frame to track the signal bearing change and use waveguide invariant theory to compensate for the signal range change by frequency shifting.
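
    As a rough illustration of the snapshot averaging discussed above (not taken from the paper), the sketch below builds a sample cross-spectral/covariance matrix for a hypothetical narrowband uniform line array and prints its leading eigenvalues; with a drifting source bearing, the signal energy spreads over several eigenvalues, which is exactly the degradation that motion compensation is meant to avoid. All array and motion parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def steering(n_elem, spacing_wl, theta_deg):
    """Narrowband plane-wave steering vector for a uniform line array."""
    phase = 2 * np.pi * spacing_wl * np.arange(n_elem) * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase)

n_elem, n_snap = 16, 200
bearing0, bearing_rate = 20.0, 0.05          # source drifts 0.05 deg per snapshot (hypothetical)

# Accumulate the sample cross-spectral (covariance) matrix over many snapshots.
R = np.zeros((n_elem, n_elem), dtype=complex)
for t in range(n_snap):
    a = steering(n_elem, 0.5, bearing0 + bearing_rate * t)
    s = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)   # source amplitude
    noise = 0.1 * (rng.standard_normal(n_elem) + 1j * rng.standard_normal(n_elem))
    x = s * a + noise
    R += np.outer(x, x.conj())
R /= n_snap

# With a moving source the signal energy leaks into several eigenvalues instead of one,
# so the eigenvectors no longer cleanly separate signal from interference.
eigvals = np.linalg.eigvalsh(R)[::-1]
print("leading eigenvalues:", np.round(eigvals[:5], 2))
```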

  3. Geophysical Inversion with Adaptive Array Processing of Ambient Noise

    NASA Astrophysics Data System (ADS)

    Traer, James

    2011-12-01

    Land-based seismic observations of microseisms generated during Tropical Storms Ernesto and Florence are dominated by signals in the 0.15--0.5 Hz band. Data from seafloor hydrophones in shallow water (70 m depth, 130 km off the New Jersey coast) show dominant signals in the gravity-wave frequency band, 0.02--0.18 Hz, and low amplitudes from 0.18--0.3 Hz, suggesting that the significant opposing wave components necessary for double-frequency (DF) microseism generation were negligible at the site. Both storms produced similar spectra, despite differing sizes, suggesting near-coastal shallow water as the dominant region for observed microseism generation. A mathematical explanation for a sign inversion induced in the passive fathometer response by minimum variance distortionless response (MVDR) beamforming is presented. This shows that, in the region containing the bottom reflection, the MVDR fathometer response is identical to that obtained with conventional processing multiplied by a negative factor. A model is presented for the complete passive fathometer response to ocean surface noise, interfering discrete noise sources, and locally uncorrelated noise in an ideal waveguide. The leading-order term of the ocean surface noise produces the cross-correlation of vertical multipaths and yields the depth of sub-bottom reflectors. Discrete noise incident on the array via multipaths gives multiple peaks in the fathometer response. These peaks may obscure the sub-bottom reflections but can be attenuated with the use of MVDR steering vectors. A theory is presented for the signal-to-noise ratio (SNR) of the seabed reflection peak in the passive fathometer response as a function of seabed depth, seabed reflection coefficient, averaging time, bandwidth, and spatial directivity of the noise field. The passive fathometer algorithm was applied to data from two drifting array experiments in the Mediterranean, Boundary 2003 and 2004, with 0.34 s of averaging time. In the 2004
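
    To make the fathometer idea concrete, here is a toy, conventional-processing sketch (not the MVDR analysis above): surface noise and its seabed reflection are synthesized for a small vertical array with integer-sample delays, and the cross-correlation of a downward-steered beam with an upward-steered beam peaks at the two-way travel time to the seabed. All geometry and amplitudes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical geometry, expressed directly in samples so all delays are integers.
m_elems = 8         # vertical array elements, numbered top to bottom
delta   = 4         # one-way inter-element travel time (samples)
two_way = 300       # two-way travel time, top element -> seabed -> top element (samples)
refl    = 0.3       # seabed reflection coefficient
n       = 40000

pad = two_way + m_elems * delta + 10
s = rng.standard_normal(n + 2 * pad)             # surface-generated broadband noise

# Element m records the downgoing noise plus its delayed, attenuated seabed reflection.
x = np.zeros((m_elems, n))
for m in range(m_elems):
    down = s[pad - m * delta : pad - m * delta + n]
    up   = refl * s[pad - two_way + m * delta : pad - two_way + m * delta + n]
    x[m] = down + up + 0.05 * rng.standard_normal(n)

# Conventional delay-and-sum beams steered at the downgoing and upgoing arrivals.
start = m_elems * delta
L = n - 2 * m_elems * delta
b_down = sum(x[m, start + m * delta : start + m * delta + L] for m in range(m_elems))
b_up   = sum(x[m, start - m * delta : start - m * delta + L] for m in range(m_elems))

# Passive fathometer output: the beam cross-correlation peaks at the two-way travel time.
lags = np.arange(2 * two_way)
span = L - lags[-1]
corr = np.array([np.dot(b_down[:span], b_up[lag : lag + span]) for lag in lags])
print("estimated two-way travel time:", lags[np.argmax(corr)], "samples; true value:", two_way)
```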

  4. Adaptive passive fathometer processing using ambient noise received by vertical nested array

    NASA Astrophysics Data System (ADS)

    Kim, Junghun; Cho, Sungho; Choi, Jee Woong

    2015-07-01

    A passive fathometer technique utilizes surface-generated ambient noise received by a vertical line array as a sound source to estimate the depths of the water-sediment interface and sub-bottom layers. Ambient noise was measured using a 24-channel vertical nested line array consisting of four sub-arrays, in shallow water off the eastern coast of Korea. In this paper, nested array processing is applied to the passive fathometer technique to improve its performance. Passive fathometer processing is performed for each sub-array, and the results are then combined to form a passive fathometer output for broadband ambient noise. Three types of beamforming techniques, one conventional and two adaptive, are used in the passive fathometer processing. The results are compared to the depths of the water-sediment interface measured by an echo sounder, and it is found that the adaptive methods perform better than the conventional method.

  5. Ultrasound nondestructive evaluation (NDE) imaging with transducer arrays and adaptive processing.

    PubMed

    Li, Minghui; Hayward, Gordon

    2012-01-01

    This paper addresses the challenging problem of ultrasonic non-destructive evaluation (NDE) imaging with adaptive transducer arrays. In NDE applications, most materials used extensively in industry and civil engineering, such as concrete, stainless steel, and carbon-reinforced composites, exhibit a heterogeneous internal structure. When such materials are inspected using ultrasound, the signals from defects are significantly corrupted by echoes from randomly distributed scatterers; even defects that are much larger than these random reflectors are difficult to detect with the conventional delay-and-sum operation. We propose to apply adaptive beamforming to the received data samples to reduce the interference and clutter noise. Beamforming manipulates the array beam pattern by appropriately weighting the per-element delayed data samples prior to summing them; the adaptive weights are computed from a statistical analysis of the data samples. This delay-weight-and-sum process can be viewed as applying a lateral spatial filter to the signals across the probe aperture. Simulations show that the clutter noise is reduced by more than 30 dB and the lateral resolution is enhanced simultaneously when adaptive beamforming is applied. In experiments inspecting a steel block with side-drilled holes, good quantitative agreement with simulation results is demonstrated. PMID:22368457
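
    A minimal sketch of the delay-weight-and-sum idea (an MVDR-style weighting; the paper's specific adaptive method is not reproduced here): after focusing delays align the wanted echo across the aperture, weights computed from the data covariance suppress an off-axis clutter wavefront that a plain delay-and-sum would pass. The array size, signals, and diagonal-loading value are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def adaptive_weights(delayed, diag_load=1e-2):
    """MVDR-style weights for per-element delayed (focused) data samples.

    `delayed` has shape (n_elements, n_samples) and is assumed already delayed
    so the wanted echo is aligned across elements (steering vector of all ones).
    """
    n_elem, n_samp = delayed.shape
    r = delayed @ delayed.conj().T / n_samp                       # spatial covariance estimate
    r += diag_load * np.trace(r).real / n_elem * np.eye(n_elem)   # diagonal loading for robustness
    a = np.ones(n_elem)
    r_inv_a = np.linalg.solve(r, a)
    return r_inv_a / (a.conj() @ r_inv_a)                         # unit gain on the focused echo

# Synthetic focused data: aligned defect echo + strong off-axis clutter wavefront + noise.
n_elem, n_samp = 32, 256
t = np.arange(n_samp)
echo    = np.outer(np.ones(n_elem), np.sin(2 * np.pi * 0.05 * t))
clutter = 2.0 * np.outer(np.exp(1j * 0.15 * np.arange(n_elem)), rng.standard_normal(n_samp))
data    = echo + clutter + 0.1 * rng.standard_normal((n_elem, n_samp))

w = adaptive_weights(data)
print("clutter power, plain sum vs adaptive sum:",
      np.var(clutter.mean(axis=0)), np.var(w.conj() @ clutter))
```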

  6. Adaptive arrays for satellite communications

    NASA Technical Reports Server (NTRS)

    Gupta, I. J.; Ksienski, A. A.

    1984-01-01

    The suppression of interfering signals in a satellite communication system was studied. Adaptive arrays are used to suppress interference at the reception site. It is required that the interference be suppressed to very low levels and a modified adaptive circuit is used which accomplishes the desired objective. Techniques for the modification of the transmit patterns to minimize interference with neighboring communication links are explored.

  7. Adaptive identification by systolic arrays. Master's thesis

    SciTech Connect

    Willis, P.A.

    1987-12-01

    This thesis is concerned with the implementation of an adaptive-identification algorithm using parallel processing and systolic arrays. In particular, discrete samples of the input and output data of a system with uncertain characteristics are used to determine the parameters of its model. The identification algorithm is based on recursive least squares, QR decomposition, and block-processing techniques with covariance resetting. As in previous approaches, the identification process is based on the use of Givens rotations; this approach uses the CORDIC algorithm for improved numerical efficiency in performing the rotations. Additionally, floating-point and fixed-point arithmetic implementations are compared.
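
    The core operation of such a systolic identifier is the QR update: each new input/output sample is annihilated into an upper-triangular factor by a sweep of Givens rotations, and the model parameters follow by back-substitution. The sketch below shows that recursion in plain NumPy with ordinary floating-point rotations (the thesis computes the rotations with CORDIC and maps them onto a systolic array; neither is reproduced here), using an invented third-order example system.

```python
import numpy as np

def givens(a, b):
    """Rotation (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_update(R, d, x, y, lam=0.99):
    """Fold one regressor row x and desired output y into the exponentially
    weighted triangular factor R and right-hand side d via Givens rotations
    (the recursion a QR systolic array pipelines row by row)."""
    n = len(x)
    R = np.sqrt(lam) * R.copy()
    d = np.sqrt(lam) * d.copy()
    x = np.asarray(x, dtype=float).copy()
    y = float(y)
    for i in range(n):
        c, s = givens(R[i, i], x[i])
        Ri = R[i, :].copy()
        R[i, :] = c * Ri + s * x
        x = -s * Ri + c * x
        d[i], y = c * d[i] + s * y, -s * d[i] + c * y
    return R, d

# Identify theta in y = x @ theta from streaming samples (hypothetical true parameters).
rng = np.random.default_rng(3)
theta_true = np.array([0.5, -1.2, 2.0])
R, d = np.zeros((3, 3)), np.zeros(3)
for _ in range(200):
    x = rng.standard_normal(3)
    y = x @ theta_true + 0.01 * rng.standard_normal()
    R, d = qr_update(R, d, x, y)
print("estimated parameters:", np.round(np.linalg.solve(R, d), 3))   # triangular back-substitution
```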

  8. Optimized micromirror arrays for adaptive optics

    NASA Astrophysics Data System (ADS)

    Michalicek, M. Adrian; Comtois, John H.; Hetherington, Dale L.

    1999-01-01

    This paper describes the design, layout, fabrication, and surface characterization of highly optimized surface micromachined micromirror devices. Design considerations and fabrication capabilities are presented. These devices are fabricated in the state-of-the-art, four-level, planarized, ultra-low-stress polysilicon process available at Sandia National Laboratories known as the Sandia Ultra-planar Multi-level MEMS Technology (SUMMiT). This enabling process permits the development of micromirror devices with near-ideal characteristics that have previously been unrealizable in standard three-layer polysilicon processes. The reduced 1 μm minimum feature sizes and 0.1 μm mask resolution make it possible to produce dense wiring patterns and irregularly shaped flexures. Likewise, mirror surfaces can be uniquely distributed and segmented in advanced patterns and often irregular shapes in order to minimize wavefront error across the pupil. The ultra-low-stress polysilicon and planarized upper layer allow designers to make larger and more complex micromirrors of varying shape and surface area within an array while maintaining uniform performance of optical surfaces. Powerful layout functions of the AutoCAD editor simplify the design of advanced micromirror arrays and make it possible to optimize devices according to the capabilities of the fabrication process. Micromirrors fabricated in this process have demonstrated a surface variance across the array from only 2-3 nm to a worst case of roughly 25 nm while boasting active surface areas of 98% or better. Combining the process planarization with a "planarized-by-design" approach will produce micromirror array surfaces that are limited in flatness only by the surface deposition roughness of the structural material. Ultimately, the combination of advanced process and layout capabilities has permitted the fabrication of highly optimized micromirror arrays for adaptive optics.

  9. Array signal processing

    SciTech Connect

    Haykin, S.; Justice, J.H.; Owsley, N.L.; Yen, J.L.; Kak, A.C.

    1985-01-01

    This is the first book to be devoted completely to array signal processing, a subject that has become increasingly important in recent years. The book consists of six chapters. Chapter 1, which is introductory, reviews some basic concepts in wave propagation. The remaining five chapters deal with the theory and applications of array signal processing in (a) exploration seismology, (b) passive sonar, (c) radar, (d) radio astronomy, and (e) tomographic imaging. The various chapters of the book are self-contained. The book is written by a team of five active researchers, who are specialists in the individual fields covered by the pertinent chapters.

  10. A unified systolic array for adaptive beamforming

    SciTech Connect

    Bojanczyk, A.W.; Luk, F.T.

    1990-04-01

    The authors present a new algorithm and systolic array for adaptive beamforming. The authors' algorithm uses only orthogonal transformations and thus should have better numerical properties. The algorithm can be implemented on a single p × p triangular array of programmable processors that offers a throughput of one residual element per cycle.

  11. Adaptive array antenna for satellite cellular and direct broadcast communications

    NASA Technical Reports Server (NTRS)

    Horton, Charles R.; Abend, Kenneth

    1993-01-01

    Adaptive phased-array antennas provide cost-effective implementation of large, lightweight apertures with high directivity and precise beam-shape control. Adaptive self-calibration allows for relaxation of all mechanical tolerances across the aperture and of electrical component tolerances, providing high performance with a low-cost, lightweight array, even in the presence of large physical distortions. Beam shape is programmable and adaptable to changes in technical and operational requirements. Adaptive digital beamforming eliminates uplink contention by allowing a single electronically steerable antenna to service a large number of receivers with beams that adaptively focus on one source while eliminating interference from others. A large, adaptively calibrated, and fully programmable aperture can also provide precise beam-shape control for power-efficient direct broadcast from space. Advanced adaptive digital beamforming technologies are described for: (1) electronic compensation of aperture distortion, (2) multiple-receiver adaptive space-time processing, and (3) downlink beam-shape control. Cost considerations for space-based array applications are also discussed.

  12. Adaptive antenna arrays for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Gupta, I. J.

    1985-01-01

    The interference protection provided by adaptive antenna arrays to an Earth station or satellite receive antenna system is studied. The case considered is that in which the interference is caused by transmissions from adjacent satellites or Earth stations whose signals inadvertently enter the receiving system and interfere with the communication link; the interfering signals are therefore very weak. To increase the interference suppression, one can either decrease the thermal noise in the feedback loops or increase the gain of the auxiliary antennas in the interfering signal direction. Both methods are examined. It is shown that the noise correlation may have to be reduced to impractically low values, and that if directive auxiliary antennas are used, their size may become impractically large. One can, however, combine the two methods to achieve the specified interference suppression with reasonable requirements on noise decorrelation and auxiliary antenna size. Effects of errors in the steering vector on the adaptive array performance are also studied.

  13. Adaptive antenna arrays for satellite communication

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.

    1989-01-01

    The feasibility of using adaptive antenna arrays to provide interference protection in satellite communications was studied. Feedback loops as well as the sample matrix inversion (SMI) algorithm for weight control were studied, and appropriate modifications to both were made to achieve the required interference suppression. An experimental system was built to test the modified feedback loops and the modified SMI algorithm. The performance of the experimental system was evaluated using bench-generated signals and signals received from TVRO geosynchronous satellites. A summary of results is given, and some suggestions for future work are also presented.

  14. Research in large adaptive antenna arrays

    NASA Technical Reports Server (NTRS)

    Berkowitz, R. S.; Dzekov, T.

    1976-01-01

    The feasibility of microwave holographic imaging of targets near the earth using a large random conformal array on the earth's surface and illumination by a CW source on a geostationary satellite is investigated. A geometrical formulation for the illuminator-target-array relationship is applied to the calculation of signal levels resulting from L-band illumination supplied by a satellite similar to ATS-6. The relations between direct and reflected signals are analyzed, and the composite resultant signal seen at each antenna element is described. Processing techniques for developing directional beam formation as well as SNR enhancement are developed. The angular resolution and focusing characteristics of a large array covering an approximately circular area on the ground are determined. The necessary relations are developed between the achievable SNR and the size and number of elements in the array. Numerical results are presented for a possible air traffic surveillance system. Finally, a simple phase correlation experiment is defined that can establish how large an array may be constructed.

  15. The CHARA Array Adaptive Optics Program

    NASA Astrophysics Data System (ADS)

    Ten Brummelaar, Theo; Che, Xiao; McAlister, Harold A.; Ireland, Michael; Monnier, John D.; Mourard, Denis; Ridgway, Stephen T.; Sturmann, Judit; Sturmann, Laszlo; Turner, Nils H.; Tuthill, Peter

    2016-01-01

    The CHARA Array is an optical/near-infrared interferometer consisting of six 1-meter diameter telescopes, the longest baseline of which is 331 meters. With sub-milliarcsecond angular resolution, the CHARA Array is able to spatially resolve nearby stellar systems and reveal their detailed structures. To improve its sensitivity and scientific throughput, the CHARA Array was funded by NSF-ATI in 2011, and by NSF-MRI in 2015, to upgrade all six telescopes with adaptive optics (AO) systems. The initial grant covers Phase I of the adaptive optics system, which includes an on-telescope wavefront sensor (WFS) and non-common-path (NCP) error correction. The WFS uses a fairly standard Shack-Hartmann design and will initially close the tip-tilt servo and log wavefront errors for use in data reduction and calibration. The second grant provides the funding for deformable mirrors for each telescope, which will be used in closed loop to remove atmospheric aberrations from the beams. There are then over twenty reflections after the WFS at the telescopes that bring the light several hundred meters into the beam combining laboratory. Some of these, including the delay line and beam reducing optics, are powered elements, and some of them, in particular the delay lines and telescope Coude optics, are continually moving. This means that the NCP problems in an interferometer are much greater than those found in more standard telescope systems. A second, slow AO system is required in the laboratory to correct for these NCP errors. We briefly describe the AO system and its current status, as well as discuss the new science enabled by the system, with a focus on our YSO program.

  16. Adaptive and mobile ground sensor array.

    SciTech Connect

    Holzrichter, Michael Warren; O'Rourke, William T.; Zenner, Jennifer; Maish, Alexander B.

    2003-12-01

    The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.

  17. Adaptive Detector Arrays for Optical Communications Receivers

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Srinivasan, M.

    2000-01-01

    The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver by a hard decision on the selection of the signal detector elements also is described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers, when operating in high background environments.

  18. Protection of the main maximum in adaptive antenna arrays

    NASA Astrophysics Data System (ADS)

    Pistolkors, A. A.

    1980-12-01

    An adaptive algorithm based on the solution of the problem of minimizing the noise at the output of an array when a constraint is imposed on the main maximum direction is discussed. The suppression depth for the cases of one and two interferences and the enhancement of the direction-finding capability and resolution of an adaptive array are investigated.

  19. Applications of minimum redundancy arrays in adaptive beamforming

    NASA Astrophysics Data System (ADS)

    Fattouche, M.; Nichols, S. T.; Jorgenson, M. B.

    1991-10-01

    It is shown, through analysis and simulation, that the use of a minimum redundancy array (MRA) in conjunction with an adaptive beamformer results in performance superior to that attained by a comparable system based on an array with uniformly spaced elements, or uniform array (UA), in terms of rejecting interference located in close angular proximity to the look direction. Further, it is demonstrated that choosing the adaptive elements of a thinned adaptive array (TAA) based on a minimum spatial redundancy criterion, rather than spacing them uniformly, results in improved rejection of main-lobe interference, with negligible degradation in sidelobe interference rejection capability.
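
    The property being exploited is easy to see from the difference coarray: a classic 4-element minimum redundancy array spans every inter-element spacing up to an aperture of 6 units, whereas a 4-element uniform array only reaches 3. A tiny illustrative sketch (element positions from the standard restricted MRA table; nothing here is taken from the paper):

```python
from itertools import combinations

def coarray(positions):
    """Set of distinct positive element-pair spacings (the difference coarray)."""
    return sorted({abs(a - b) for a, b in combinations(positions, 2)})

uniform = [0, 1, 2, 3]   # 4-element uniform array, aperture 3
mra     = [0, 1, 4, 6]   # classic 4-element minimum redundancy array, aperture 6

print("uniform coarray:", coarray(uniform))   # [1, 2, 3]
print("MRA coarray:    ", coarray(mra))       # [1, 2, 3, 4, 5, 6]
```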

  20. Temperature-adaptive Circuits on Reconfigurable Analog Arrays

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo S.; Keymeulen, Didier; Ramesham, Rajeshuni; Neff, Joseph; Katkoori, Srinivas

    2006-01-01

    This paper describes a new reconfigurable analog array (RAA) architecture and integrated circuit (IC) used to map analog circuits that can adapt to extreme temperatures under programmable control. Algorithm-driven adaptation takes place on the RAA IC. The algorithms are implemented in a separate Field Programmable Gate Array (FPGA) IC, co-located with the RAA in the extreme-temperature environment. The experiments demonstrate circuit adaptation over a wide temperature range, from an extremely low temperature of -180 °C up to a high of 120 °C.

  1. Acoustic signal processing toolbox for array processing

    NASA Astrophysics Data System (ADS)

    Pham, Tien; Whipps, Gene T.

    2003-08-01

    The US Army Research Laboratory (ARL) has developed an acoustic signal processing toolbox (ASPT) for acoustic sensor array processing. The intent of this document is to describe the toolbox and its uses. The ASPT is GUI-based software developed in and running under MATLAB; the current version, ASPT 3.0, requires MATLAB 6.0 or later. ASPT contains a variety of narrowband (NB) and incoherent and coherent wideband (WB) direction-of-arrival (DOA) estimation and beamforming algorithms that have been researched and developed at ARL. Currently, ASPT contains 16 DOA and beamforming algorithms, including several different NB and WB versions of the MVDR, MUSIC, and ESPRIT algorithms. In addition, a variety of pre-processing, simulation, and analysis tools are available in the toolbox. The user can perform simulation or real-data analysis for all algorithms with user-defined signal model parameters and array geometries.
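
    As a flavour of the kind of narrowband DOA algorithm such a toolbox collects, here is a generic textbook MUSIC sketch in NumPy (not code from ASPT, which is MATLAB-based): two source bearings are estimated from synthetic uniform-line-array snapshots. All scenario parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def steering(n_elem, theta_deg, spacing_wl=0.5):
    phase = 2 * np.pi * spacing_wl * np.arange(n_elem) * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase)

# Synthetic narrowband snapshots: two sources at -10 and 25 degrees on an 8-element ULA.
n_elem, n_snap = 8, 500
doas_true = [-10.0, 25.0]
A = np.column_stack([steering(n_elem, th) for th in doas_true])
S = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap)))
X = A @ S + N

# MUSIC: scan steering vectors against the noise subspace of the sample covariance.
R = X @ X.conj().T / n_snap
eigval, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
En = eigvec[:, : n_elem - len(doas_true)]        # noise-subspace eigenvectors
grid = np.arange(-90.0, 90.0, 0.5)
spec = np.array([1.0 / np.real(steering(n_elem, th).conj() @ En @ En.conj().T
                               @ steering(n_elem, th)) for th in grid])

# Pick the two highest local maxima of the pseudospectrum as the DOA estimates.
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
peak_idx = np.where(is_peak)[0] + 1
top2 = peak_idx[np.argsort(spec[peak_idx])[-2:]]
print("estimated DOAs (deg):", np.sort(grid[top2]))
```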

  2. Effects of additional interfering signals on adaptive array performance

    NASA Technical Reports Server (NTRS)

    Moses, Randolph L.

    1989-01-01

    The effects of additional interference signals on the performance of a fully adaptive array are considered. The case where the number of interference signals exceeds the number of array degrees of freedom is addressed. It is shown how performance is affected as a function of the number of array elements, the number of interference signals, and the directivity of the array antennas. By using directive auxiliary elements, the performance of the array can be as good as the performance when the additional interference signals are not present.

  3. Optimizing Satellite Communications With Adaptive and Phased Array Antennas

    NASA Technical Reports Server (NTRS)

    Ingram, Mary Ann; Romanofsky, Robert; Lee, Richard Q.; Miranda, Felix; Popovic, Zoya; Langley, John; Barott, William C.; Ahmed, M. Usman; Mandl, Dan

    2004-01-01

    A new adaptive antenna array architecture for low-Earth-orbiting satellite ground stations is being investigated. These ground stations are intended to have no moving parts and could potentially be operated in populated areas, where terrestrial interference is likely. The architecture includes multiple, moderately directive phased arrays. The phased arrays, each steered in the approximate direction of the satellite, are adaptively combined to enhance the signal-to-noise-and-interference ratio (SNIR) of the desired satellite. The size of each phased array is to be traded off against the number of phased arrays to optimize cost while meeting a bit-error-rate threshold. Two phased array architectures are also being prototyped: a space-fed lens array and a reflectarray. If two co-channel satellites are in the field of view of the phased arrays, then multi-user detection techniques may enable simultaneous demodulation of the satellite signals, also known as Space Division Multiple Access (SDMA). We report on Phase I of the project, in which fixed directional elements are adaptively combined in a prototype to demodulate the S-band downlink of the EO-1 satellite, which is part of the New Millennium Program at NASA.

  4. Implementation of LSCMA adaptive array terminal for mobile satellite communications

    NASA Astrophysics Data System (ADS)

    Zhou, Shun; Wang, Huali; Xu, Zhijun

    2007-11-01

    This paper considers the application of an adaptive array antenna based on the least squares constant modulus algorithm (LSCMA) for interference rejection in mobile SATCOM terminals. A two-element adaptive array scheme is implemented with a combination of ADI TS201S DSP chips and an Altera Stratix II FPGA device, which together perform the computations for adaptive beamforming. Its interference-suppression performance is verified via MATLAB simulations. A digital hardware system is implemented to execute the LSCMA beamforming algorithm, which is represented by an algorithm flowchart. The results of simulations and tests indicate that this scheme can improve the anti-jamming performance of the terminals.
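
    For reference, a block LSCMA iteration is short enough to sketch directly (a generic NumPy illustration of the algorithm, not the DSP/FPGA implementation described above): the beam output is hard-limited to unit modulus and the weights are re-solved by least squares, which pulls the beam onto the constant-modulus signal and away from a noise-like jammer. Scenario values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def steering(n_elem, theta_deg, spacing_wl=0.5):
    phase = 2 * np.pi * spacing_wl * np.arange(n_elem) * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase)

# Constant-modulus desired signal at 0 deg plus a stronger noise-like jammer at 35 deg.
n_elem, n_snap = 4, 2000
sig = np.exp(1j * 2 * np.pi * rng.random(n_snap))                  # unit-modulus symbols
jam = 3.0 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)) / np.sqrt(2)
X = (np.outer(steering(n_elem, 0.0), sig) + np.outer(steering(n_elem, 35.0), jam)
     + 0.05 * (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap))))

# Block LSCMA: hard-limit the array output to unit modulus, then re-fit the weights
# by least squares so that w^H X matches the hard-limited reference.
w = np.zeros(n_elem, dtype=complex)
w[0] = 1.0
for _ in range(10):
    y = w.conj() @ X
    ref = y / np.abs(y)                                            # projected unit-modulus reference
    w, *_ = np.linalg.lstsq(X.conj().T, ref.conj(), rcond=None)

print("beam gain toward signal vs jammer:",
      np.abs(w.conj() @ steering(n_elem, 0.0)), np.abs(w.conj() @ steering(n_elem, 35.0)))
```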

  5. Shack-Hartmann wavefront sensor with adaptive holographic lenslet array

    NASA Astrophysics Data System (ADS)

    Podanchuk, Dmytro V.; Dan'ko, Volodymyr P.; Goloborodko, Andrey A.; Sutyagina, Natalia S.

    2009-10-01

    A method for expanding the dynamic range of the Shack-Hartmann wavefront sensor is discussed. It is based on the use of nonlinear, dual-focus holographic lenslet arrays with aberration precompensation. The optical setup and the technique for producing the adaptive lenslet array, based on the nonlinear holographic recording phenomenon, are described. Using spherical wavefronts as an example, it is shown that the use of three lenslet arrays with different amounts of aberration precompensation allows the dynamic range of the sensor to be expanded approximately four to five times while preserving the specified sensitivity, in comparison with the corresponding refractive lenslet array.

  6. Adaptive laser array-receivers for acoustic waves detection

    NASA Astrophysics Data System (ADS)

    Tuovinen, Hemmo; Murray, Todd W.; Krishnaswamy, Sridhar

    2000-05-01

    Interferometric detection systems typically use a single focused laser point receiver for the detection of acoustic waves. In some cases, where optical damage of the structure is of concern, it may be advantageous to distribute the detection laser energy over an area. This can be done, for example, by using a point-array or a line-array probe. Other advantages of an array receiver include directional sensitivity and frequency selectivity. It is important to note that laser-array reception is possible only with self-referential interferometers. In this paper, adaptive array interferometric detection schemes, which are based on wave mixing in a photorefractive bismuth silicate crystal, are described. An adaptive narrow-band laser array receiver of surface acoustic waves is demonstrated. The interferometer is also configured as a linearly frequency modulated (chirped) array receiver. The chirped receiver, when excited with a similarly chirped ultrasonic source, allows pulse compression of the ultrasonic signal, thus maintaining high temporal resolution. The signal-to-noise ratios for the different array detection schemes are determined and compared. Several applications of laser-array reception are presented.

  7. Unstructured Adaptive Grid Computations on an Array of SMPs

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Pramanick, Ira; Sohn, Andrew; Simon, Horst D.

    1996-01-01

    Dynamic load balancing is necessary for parallel adaptive methods to solve unsteady CFD problems on unstructured grids. In this paper, we have presented such a dynamic load balancing framework, called JOVE. Results on a four-POWERnode POWER CHALLENGEarray demonstrated that load balancing gives significant performance improvements over no load balancing for such adaptive computations. The parallel speedup of JOVE, implemented using MPI on the POWER CHALLENGEarray, was significant, being as high as 31 for 32 processors. An implementation of JOVE that exploits an 'array of SMPs' architecture was also studied; this hybrid JOVE outperformed flat JOVE by up to 28% on the meshes and adaption models tested. With large, realistic meshes and actual flow-solver and adaption phases incorporated into JOVE, hybrid JOVE can be expected to yield a significant advantage over flat JOVE, especially as the number of processors is increased, thus demonstrating the scalability of an array-of-SMPs architecture.

  8. NASA Adaptive Multibeam Phased Array (AMPA): An application study

    NASA Technical Reports Server (NTRS)

    Mittra, R.; Lee, S. W.; Gee, W.

    1982-01-01

    The proposed orbital geometry for the adaptive multibeam phased array (AMPA) communication system is reviewed and some of the system's capabilities and preliminary specifications are highlighted. Typical AMPA user link models and calculations are presented, the principal AMPA features are described, and the implementation of the system is demonstrated. System tradeoffs and requirements are discussed. Recommendations are included.

  9. Integrating Scientific Array Processing into Standard SQL

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap, and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data are almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and, in an extension, also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept, multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient, and scalable access to the heterogeneous data in them.

  10. Array algebra estimation in signal processing

    NASA Astrophysics Data System (ADS)

    Rauhala, U. A.

    A general theory of linear estimators called array algebra estimation is interpreted in some terms of multidimensional digital signal processing, mathematical statistics, and numerical analysis. The theory has emerged during the past decade from the new field of a unified vector, matrix and tensor algebra called array algebra. The broad concepts of array algebra and its estimation theory cover several modern computerized sciences and technologies converting their established notations and terminology into one common language. Some concepts of digital signal processing are adopted into this language after a review of the principles of array algebra estimation and its predecessors in mathematical surveying sciences.

  11. The Applicability of Incoherent Array Processing to IMS Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.

    2014-03-01

    The seismic arrays of the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) are highly diverse in size and configuration, with apertures ranging from under 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high-frequency phases lacking coherence between sensors. Pipeline detection algorithms often miss such phases, since they only consider frequencies low enough to allow coherent array processing, and phases that are detected are often attributed qualitatively incorrect backazimuth and slowness estimates. This can result in missed events, due to either a lack of contributing phases or by corruption of event hypotheses by spurious detections. It has been demonstrated previously that continuous spectral estimation can both detect and estimate phases on the largest aperture arrays, with arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity, as is the case for classical f-k analysis, and the ability to estimate slowness vectors requires sufficiently large inter-sensor distances to resolve time-delays between pulses with a period of the order 4-5 s. Spectrogram beampacking works well on five IMS arrays with apertures over 20 km (NOA, AKASG, YKA, WRA, and KURK) without additional post-processing. Seven arrays with 10-20 km aperture (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 s period signal. Even for medium aperture arrays which can provide high-quality coherent slowness estimates, a complementary spectrogram beampacking procedure could act as a quality control by providing non-aliased estimates when the coherent slowness grids display

  12. Techniques for radar imaging using a wideband adaptive array

    NASA Astrophysics Data System (ADS)

    Curry, Mark Andrew

    A microwave imaging approach that uses a small, wideband adaptive array is simulated and validated experimentally. The experimental 12-element linear array and microwave receiver use stepped-frequency CW signals from 2--3 GHz and receive backscattered energy from short-range objects in a +/-90° field of view. Discone antenna elements are used due to their wide temporal bandwidth, isotropic azimuth beam pattern, and fixed phase center. It is also shown that these antennas have very low mutual coupling, which significantly reduces the calibration requirements. The MUSIC spectrum is used as a calibration tool. Spatial resampling is used to correct the dispersion effects, which, if not compensated, cause severe reduction in detection and resolution for medium and large off-axis angles. Fourier processing provides range resolution, and the minimum variance spectral estimate is employed to resolve constant-range targets for improved angular resolution. Spatial smoothing techniques are used to generate signal-plus-interference covariance matrices at each range bin. Clutter affects the angular resolution of the array due to the increase in rank of the signal-plus-clutter covariance matrix, whereas at the same time the rank of this matrix is reduced for closely spaced scatterers due to signal coherence. A method is proposed to enhance angular resolution in the presence of clutter by an approximate signal subspace projection (ASSP) that maps the received signal space to a lower effective-rank approximation. This projection operator has a scalar control parameter that is a function of the signal and clutter amplitude estimates. These operations are accomplished without using eigendecomposition. The low sidelobe levels allow the imaging of the integrated backscattering from the absorber cones in the chamber, which creates a fairly large clutter signature for testing ASSP. We can easily resolve 2 dihedrals placed at about 70% of a beamwidth apart, with a signal to clutter ratio

  13. Evolutionary Adaptive Discovery of Phased Array Sensor Signal Identification

    SciTech Connect

    Timothy R. McJunkin; Milos Manic

    2011-05-01

    Tomography, used to create images of the internal properties and features of an object from phased array ultrasonics, is improved through many sophisticated methods of post-processing of data. One approach used to improve tomographic results is to prescribe the collection of more data, from different points of view, so that data fusion has a richer data set to work from. This approach can lead to a rapid increase in the data that must be stored and processed, and it does not necessarily yield the needed data. This article describes a novel approach to utilizing the acquired data as a basis for adapting the sensor's focusing parameters to locate features in the material more precisely: specifically, two evolutionary methods of autofocusing on a returned signal are coupled with derivations of the formulas for spatially locating the feature. Test results of the two novel methods of evolutionary based focusing (EBF) illustrate the improved signal strength and the corrected feature position obtained using the optimized focal timing parameters, called Focused Delay Identification (FoDI).

  14. Reduced beamset adaptive matched field processing

    NASA Astrophysics Data System (ADS)

    Tracey, Brian; Turaga, Srinivas; Lee, Nigel

    2003-04-01

    Matched field processing (MFP) offers the possibility of improved towed array performance at endfire through range/depth discrimination of contacts. One challenge is that arrays with limited vertical aperture can often resolve only a small number of multipath arrivals. This paper explores ways to capture the array resolution by re-parametrizing the set of MFP replicas. A reduced beamset can be created by performing a singular value decomposition on the MFP replica set. Alternatively, clustering techniques can be used to generate MFP cell families, or regions of similar response. These parametrizations are applied to adaptive MFP algorithms to show speed and performance gains. The use of cell families/regions instead of individual MFP cells also provides a framework for increasing the robustness of MFP by defocusing the MFP beamforming operation. The techniques are demonstrated for shallow-water towed array scenarios. [Work sponsored by DARPA-ATO under Air Force Contract No. F19628-00-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the Department of Defense. Approved for Public Release, Distribution Unlimited.]
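
    The re-parametrization step can be sketched in a few lines (a generic illustration using synthetic, low-rank replicas mimicking a few-mode waveguide; not the paper's replica set or clustering method): the SVD of the replica matrix yields a small orthogonal beamset in which every replica, and the data covariance, can be represented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic MFP replica set: one replica vector per (range, depth) cell, built from a few
# hypothetical waveguide modes so the set is genuinely low rank.
n_sensors, n_cells, n_modes = 32, 500, 6
modes  = rng.standard_normal((n_sensors, n_modes)) + 1j * rng.standard_normal((n_sensors, n_modes))
coeffs = rng.standard_normal((n_modes, n_cells)) + 1j * rng.standard_normal((n_modes, n_cells))
replicas = modes @ coeffs
replicas /= np.linalg.norm(replicas, axis=0)

# Reduced beamset: keep the dominant left singular vectors of the replica matrix.
U, sv, Vh = np.linalg.svd(replicas, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
k = int(np.searchsorted(energy, 0.99)) + 1          # rank capturing 99% of the replica energy
beamset = U[:, :k]                                   # n_sensors x k orthonormal beamset
reduced_replicas = beamset.conj().T @ replicas       # every replica expressed in beam space

# Adaptive MFP can then work with the k x k reduced covariance  beamset^H R beamset.
print("sensors:", n_sensors, "-> reduced beamset size:", k)
```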

  15. Study of large adaptive arrays for space technology applications

    NASA Technical Reports Server (NTRS)

    Berkowitz, R. S.; Steinberg, B.; Powers, E.; Lim, T.

    1977-01-01

    The research in large adaptive antenna arrays for space technology applications is reported. Specifically two tasks were considered. The first was a system design study for accurate determination of the positions and the frequencies of sources radiating from the earth's surface that could be used for the rapid location of people or vehicles in distress. This system design study led to a nonrigid array about 8 km in size with means for locating the array element positions, receiving signals from the earth and determining the source locations and frequencies of the transmitting sources. It is concluded that this system design is feasible, and satisfies the desired objectives. The second task was an experiment to determine the largest earthbound array which could simulate a spaceborne experiment. It was determined that an 800 ft array would perform indistinguishably in both locations and it is estimated that one several times larger also would serve satisfactorily. In addition the power density spectrum of the phase difference fluctuations across a large array was measured. It was found that the spectrum falls off approximately as f to the minus 5/2 power.

  16. Adaptive Injection-locking Oscillator Array for RF Spectrum Analysis

    SciTech Connect

    Leung, Daniel

    2011-04-19

    A highly parallel radio-frequency receiver using an array of injection-locking oscillators for on-chip, rapid estimation of signal amplitudes and frequencies is considered. The oscillators are tuned to different natural frequencies, and variable-gain amplifiers are used to provide negative feedback that adapts the locking bandwidth to the input signal, yielding a combined measure of input signal amplitude and frequency detuning. To further this effort, an array of 16 two-stage differential ring oscillators and 16 Gilbert-cell mixers is designed for 40-400 MHz operation. The injection-locking oscillator array is assembled on a custom printed-circuit board. Control and calibration are achieved by an on-board microcontroller.

  17. A recurrent neural network for adaptive beamforming and array correction.

    PubMed

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

    In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values within the feasible region, which is derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to find exact solutions under large-scale constraints. PMID:27203554

  18. Adaptive multibeam phased array design for a Spacelab experiment

    NASA Technical Reports Server (NTRS)

    Noji, T. T.; Fass, S.; Fuoco, A. M.; Wang, C. D.

    1977-01-01

    The parametric tradeoff analyses and design for an Adaptive Multibeam Phased Array (AMPA) for a Spacelab experiment are described. The AMPA Experiment System was designed with particular emphasis on maximizing channel capacity and minimizing implementation and cost impacts for future austere maritime and aeronautical users operating with a low-gain hemispherical-coverage antenna element, low effective radiated power, and low antenna gain-to-system noise temperature ratio.

  19. An adaptive array antenna for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Milne, Robert

    1990-01-01

    The design of an adaptive array antenna for land vehicle operation and its performance in an operational satellite system is described. Linear and circularly polarized antenna designs are presented. The acquisition and tracking operation of a satellite is described and the effect on the communications signal is discussed. A number of system requirements are examined that have a major impact on the antenna design. The results of environmental, power handling, and RFI testing are presented and potential problems are identified.

  20. Sensor array processing for random inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Ringelstein, Joerg; Gershman, Alex B.; Boehme, Johann F.

    1999-11-01

    The performance of high-resolution array processing methods is known to degrade in random inhomogeneous media because the amplitude and phase of each wavefront tend to fluctuate and to lose their coherence between array sensors. As a result, in the presence of such multiplicative noise, the conventional coherent wavefront model becomes inapplicable. This type of degradation may be especially strong for large-aperture arrays. Here, we develop new high-resolution covariance matching (CM) techniques with improved robustness against multiplicative noise and the related coherence losses. Using a few unrestrictive physics-based assumptions on the environment, we show that reliable algorithms can be developed which take possible coherence losses into account. Computer simulation results and real sonar data processing results are presented. These results demonstrate drastic improvements achieved by our approach as compared with conventional high-resolution array processing techniques.

  1. Adaptive sensor array algorithm for structural health monitoring of helmet

    NASA Astrophysics Data System (ADS)

    Zou, Xiaotian; Tian, Ye; Wu, Nan; Sun, Kai; Wang, Xingwei

    2011-04-01

    The adaptive neural network is a standard technique used in nonlinear system estimation and in learning applications for dynamic models. In this paper, we introduce an adaptive sensor fusion algorithm for a helmet structural health monitoring system, which is used to study the effects of ballistic/blast events on the helmet and the human skull. Installed inside the helmet is an optical fiber pressure sensor array. After implementing the adaptive estimation algorithm in the helmet system, a dynamic model for the sensor array was developed. The dynamic response characteristics of the sensor network are estimated from the pressure data by applying an adaptive control algorithm using an artificial neural network. With the estimated parameters and position data from the dynamic model, the pressure distribution over the whole helmet can be calculated using Bézier surface interpolation. The resulting distribution pattern inside the helmet will be very helpful for improving helmet design to provide better protection to soldiers from head injuries.

  2. Injection monitoring with seismic arrays and adaptive noise cancellation

    SciTech Connect

    Harben, P.E.; Harris, D.B.; Jarpe, S.P.

    1991-01-01

    Although the application of seismic methods, active and passive, to monitor in-situ reservoir stimulation processes is not new, seismic arrays and array processing technology coupled with a new noise cancellation method have not been attempted. Successful application of seismic arrays to passively monitor in-situ reservoir stimulation processes depends on being able to sufficiently cancel the expected large-amplitude background seismic noise typical of an oil or geothermal production environment so that small-amplitude seismic signals occurring at depth can be detected and located. This report describes the results of a short field experiment conducted to test both the application of seismic arrays to in-situ reservoir stimulation monitoring and the active noise cancellation technique in a real reservoir production environment. Although successful application of these techniques to in-situ reservoir stimulation monitoring would have the greatest payoff in the oil industry, the proof-of-concept field experiment site was chosen to be the Geysers geothermal field in northern California. This site was chosen because of known high seismicity rates, a relatively shallow production depth, cooperation and some cost sharing by the UNOCAL Oil Corporation, and the close proximity of the site to LLNL. The body of this report describes the Geysers field experimental configuration and then discusses the results of the seismic array processing and the seismic noise cancellation, followed by a brief conclusion. 2 refs., 11 figs.

  3. Howells-Applebaum adaptive superresolution array for accelerated scanning

    NASA Astrophysics Data System (ADS)

    Ohmiya, Manabu; Ogawa, Yasutaka; Itoh, Kiyohiko

    1988-12-01

    An approach is proposed that offers an accelerated scanning rate for a Howells-Applebaum adaptive superresolution array (H-A SRA). Analytical considerations clarify the causes of performance degradation of the H-A SRA at a high scanning rate. A suitable steering signal and an implementation of the H-A weight control loop (H-A loop) for accelerated scanning are then introduced. The weight solution determined by this method is shown to coincide approximately with the optimum Wiener solution under certain signal conditions and antenna parameters. Computer simulations show that the H-A SRA gives much better scanning performance than the conventional array. The system is readily implemented by improving the circuit that inserts the steering signal into the H-A loop.

  4. An adaptive array antenna for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Milne, Robert

    1988-01-01

    The adaptive array is linearly polarized and consists essentially of a driven lambda/4 monopole surrounded by an array of parasitic elements, all mounted on a ground plane of finite size. The parasitic elements are all connected to ground via PIN diodes. By applying suitable bias voltages, the desired parasitic elements can be activated and made highly reflective. The directivity and pointing of the antenna beam can be controlled in both the azimuth and elevation planes using high-speed digital switching techniques. The antenna RF losses are negligible, and the maximum gain is close to the theoretical value determined by the effective aperture size. The antenna is compact, has a low profile, is inexpensive to manufacture, and can handle high transmitter power.

  5. Applications of trimode waveguide feeds in adaptive virtual array antennas

    NASA Astrophysics Data System (ADS)

    Allahgholi Pour, Z.; Shafai, Lotfollah

    2015-03-01

    This paper presents the formation of an adaptive virtual array antenna in a symmetric parabolic reflector antenna illuminated by trimode circular waveguide feeds with different mode alignments. The modes of interest are the TE11, TE21, and TM01 type modes. The terms TE and TM stand for the transverse electric and transverse magnetic modes, respectively. By appropriately exciting these modes and varying the mode orientations inside the primary feed, the effective source of radiation is displaced on the reflector aperture, while the resulting secondary patterns remain axial. Different antenna parameters such as gain, cross polarization, and phase center locations are investigated. It is demonstrated that the extra third mode facilitates the formation of symmetric virtual array antennas with reasonable cross polarization discriminations at the diagonal plane.

  6. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, Param H.; Hunt, Arlon J.

    1986-01-01

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after the alcogels are formed, of substituting a solvent, such as CO2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40 °C instead of at about 270 °C. The improved process provides increased yields of large-scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs, as made in accordance with the improved process, can replace the air gap within a double-glazed window, for example, to provide a substantial reduction in heat transfer. The thus-formed transparent aerogel arrays may also be utilized, for example, in the windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high-energy elementary particles or cosmic rays.

  7. Process for forming transparent aerogel insulating arrays

    DOEpatents

    Tewari, P.H.; Hunt, A.J.

    1985-09-04

    An improved supercritical drying process for forming transparent silica aerogel arrays is described. The process is of the type utilizing the steps of hydrolyzing and condensing alkoxides to form alcogels. A subsequent step removes the alcohol to form aerogels. The improvement includes the additional step, after the alcogels are formed, of substituting a solvent, such as CO2, for the alcohol in the alcogels, the solvent having a critical temperature less than the critical temperature of the alcohol. The resulting gels are dried at a supercritical temperature for the selected solvent, such as CO2, to thereby provide a transparent aerogel array within a substantially reduced (days-to-hours) time period. The supercritical drying occurs at about 40 °C instead of at about 270 °C. The improved process provides increased yields of large-scale, structurally sound arrays. The transparent aerogel array, formed in sheets or slabs, as made in accordance with the improved process, can replace the air gap within a double-glazed window, for example, to provide a substantial reduction in heat transfer. The thus-formed transparent aerogel arrays may also be utilized, for example, in the windows of refrigerators and ovens, or in the walls and doors thereof, or as the active material in detectors for analyzing high-energy elementary particles or cosmic rays.

  8. Ultra wideband photonic control of an adaptive phased array antenna

    NASA Astrophysics Data System (ADS)

    Cox, Joseph L.; Zmuda, Henry; Li, Jian; Sforza, Pasquale M.

    2006-05-01

    This paper presents a new concept for a photonic implementation of a time-reversed RF antenna array beamforming system. The process does not require analog-to-digital conversion and is therefore particularly suited to high-bandwidth applications. Significantly, propagation distortion due to atmospheric effects, clutter, etc., is automatically accounted for by the time-reversal process. The approach utilizes the reflection of an initial interrogation signal off an extended target to precisely time-match the radiating elements of the array so as to re-radiate signals precisely back to the target's location. The backscattered signal(s) from the desired location is captured by each antenna and used to modulate a pulsed laser. An electro-optic switch acts as a time gate to eliminate any unwanted signals, such as those reflected from other targets whose range differs from that of the desired location, resulting in a spatial null at that location. A chromatic dispersion processor is used to extract the exact array parameters of the received signal location. Hence, other than an approximate knowledge of the steering direction, needed only to establish the time gating, no knowledge of the target position is required, and hence no knowledge of the array element time delays is required. Target motion and/or array element jitter is automatically accounted for. This paper presents a preliminary study of the photonic processor, analytical justification, and simulated results. The technology has a broad range of applications, including aerospace and defense and medical imaging.
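
    The time-reversal principle the system relies on can be shown with a few lines of synthetic delays (a generic sketch, not the photonic processor itself): each element's recording of a probe pulse is reversed and re-emitted through the same path, so all contributions realign at the original location without the array ever knowing the element delays. All values are invented.

```python
import numpy as np

# Unknown element-to-target path delays (in samples) for a hypothetical 8-element array;
# time reversal focuses without the array ever estimating these numbers explicitly.
delays = np.array([41, 35, 30, 27, 26, 27, 30, 35])
n = 4096
probe = np.zeros(n)
probe[100] = 1.0                                   # interrogation pulse scattered by the target

# Step 1: each element records the backscattered probe through its own path delay.
recorded = np.array([np.roll(probe, d) for d in delays])

# Step 2: time-reverse every record and re-emit it through the same path; at the target
# the field is the sum of the doubly delayed, reversed records.
retransmitted = recorded[:, ::-1]
at_target = np.sum([np.roll(retransmitted[i], d) for i, d in enumerate(delays)], axis=0)

# All eight contributions land on the same sample, producing a sharp focus at the target.
print("peak field at target:", at_target.max(), "from", len(delays), "elements")
```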

  9. Analysis of modified SMI method for adaptive array weight control

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Moses, R. L.

    1989-01-01

    An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
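
    The core of the weight computation described above can be sketched numerically. The snippet below is not the authors' implementation; it assumes a uniform linear array, a known desired-signal steering vector, and a known noise power, and the function name, the value of the fraction F, and the toy scenario are illustrative only.

```python
import numpy as np

def modified_smi_weights(snapshots, steer, noise_power, F):
    """Modified SMI sketch: subtract a fraction F of the noise power from the
    diagonal of the sample covariance matrix before inversion, deepening the
    nulls placed on interference that is weaker than the thermal noise."""
    K, N = snapshots.shape                        # K snapshots, N array elements
    R_hat = snapshots.conj().T @ snapshots / K    # estimated covariance matrix
    R_mod = R_hat - F * noise_power * np.eye(N)   # modified (noise-reduced) covariance
    w = np.linalg.solve(R_mod, steer)             # w proportional to R_mod^-1 s
    return w / (steer.conj() @ w)                 # unit response toward the desired signal

# Toy scenario: desired signal at broadside, one interferer 10 dB below the noise at 30 degrees.
N, K, noise_power = 8, 1000, 1.0
sv = lambda deg: np.exp(1j * np.pi * np.arange(N) * np.sin(np.radians(deg)))
rng = np.random.default_rng(0)
X = (np.sqrt(0.1) * rng.standard_normal((K, 1)) * sv(30.0)[None, :]
     + np.sqrt(noise_power / 2) * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))))
w = modified_smi_weights(X, sv(0.0), noise_power, F=0.7)
print("array response toward the weak interferer (dB):", 20 * np.log10(abs(w.conj() @ sv(30.0))))
```

    Note that subtracting too large a fraction F for a given number of snapshots can drive the modified covariance matrix toward indefiniteness, which is one face of the trade-off between the desired suppression level and the required number of snapshots discussed in the abstract.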

  10. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
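
    The generic time-domain LMS noise canceller that the ANC technique builds on can be sketched as follows. This is not the paper's processing chain; the tap count, step size, and toy signals are assumptions for illustration.

```python
import numpy as np

def lms_noise_canceller(primary, reference, n_taps=32, mu=0.01):
    """Time-domain LMS adaptive noise cancellation: the reference channel sees
    (mostly) background noise, an adaptive FIR filter predicts the noise that
    reaches the primary channel, and the prediction is subtracted."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                           # estimate of the noise in the primary channel
        e = primary[n] - y                  # error signal = cleaned output
        w += 2 * mu * e * x                 # LMS weight update
        out[n] = e
    return out

# Toy example: a weak tone (the "source") buried in propagated broadband noise.
rng = np.random.default_rng(1)
fs = 8000
t = np.arange(2 * fs) / fs
source = 0.1 * np.sin(2 * np.pi * 440 * t)
noise_ref = rng.standard_normal(t.size)                                # reference microphone
noise_primary = np.convolve(noise_ref, [0.6, 0.3, 0.1], mode="same")   # noise as seen at the array
cleaned = lms_noise_canceller(source + noise_primary, noise_ref)
print("residual noise power after cancellation:", np.var(cleaned[fs:] - source[fs:]))
```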

  11. Adaptive Transthoracic Refocusing of Dual-Mode Ultrasound Arrays

    PubMed Central

    Casper, Andrew J.; Wan, Yayun; Ebbini, Emad S.

    2010-01-01

    We present experimental validation results of an adaptive, image-based refocusing algorithm for dual-mode ultrasound arrays (DMUAs) in the presence of strongly scattering objects. This study is motivated by the need to develop noninvasive techniques for therapeutic targeting of tumors seated in organs where the therapeutic beam is partially obstructed by the ribcage, e.g., liver and kidney. We have developed an algorithm that takes advantage of the imaging capabilities of DMUAs to identify the ribs and the intercostals within the path of the therapeutic beam to produce a specified power deposition at the target while minimizing the exposure at the rib locations. This image-based refocusing algorithm takes advantage of the inherent registration between the imaging and therapeutic coordinate systems of DMUAs in the estimation of array directivity vectors at the target and rib locations. These directivity vectors are then used in solving a constrained optimization problem allowing for adaptive refocusing, directing the acoustical energy through the intercostals, and avoiding the rib locations. The experimental validation study utilized a 1-MHz, 64-element DMUA in focusing through a block of tissue-mimicking phantom [0.5 dB/(cm·MHz)] with embedded Plexiglas ribs. Single transmit focus (STF) images obtained with the DMUA were used for image-guided selection of the critical and target points to be used for adaptive refocusing. Experimental results show that the echogenicity of the ribs in STF images provides feedback on the reduction of power deposition at rib locations. This was confirmed by direct comparison of measured temperature rise and integrated backscatter at the rib locations. Direct temperature measurements also confirm the improved power deposition at the target and the reduction in power deposition at the rib locations. Finally, we have compared the quality of the image-based adaptive refocusing algorithm with a phase-conjugation solution obtained by direct

  12. Cylindrical Antenna With Partly Adaptive Phased-Array Feed

    NASA Technical Reports Server (NTRS)

    Hussein, Ziad; Hilland, Jeff

    2003-01-01

    A proposed design for a phased-array-fed cylindrical-reflector microwave antenna would enable enhancement of the radiation pattern through partially adaptive amplitude and phase control of its edge radiating feed elements. Antennas based on this design concept would be attractive for use in radar (especially synthetic-aperture radar) and other systems that could exploit electronic directional scanning and in which there are requirements for specially shaped radiation patterns, including ones with low side lobes. One notable advantage of this design concept is that the transmitter/receiver modules feeding all the elements except the edge ones could be identical and, as a result, the antenna would cost less than in the cases of prior design concepts in which these elements may not be identical.

  13. Adaptive Arrays for Weak Interfering Signals: An Experimental System. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ward, James

    1987-01-01

    An experimental adaptive antenna system was implemented to study the performance of adaptive arrays in the presence of weak interfering signals. It is a sidelobe canceler with two auxiliary elements. Modified feedback loops, which decorrelate the noise components of the two inputs to the loop correlators, control the array weights. Digital processing is used for algorithm implementation and performance evaluation. The results show that the system can suppress interfering signals which are 0 to 10 dB below the thermal noise level in the main channel by 20 to 30 dB. When the desired signal is strong in the auxiliary elements the amount of interference suppression decreases. The amount of degradation depends on the number of interfering signals incident on the communication system. A modified steering vector which overcomes this problem is proposed.

  14. Plenoptic processing methods for distributed camera arrays

    NASA Astrophysics Data System (ADS)

    Boyle, Frank A.; Yancey, Jerry W.; Maleh, Ray; Deignan, Paul

    2011-05-01

    Recent advances in digital photography have enabled the development and demonstration of plenoptic cameras with impressive capabilities. They function by recording sub-aperture images that can be combined to re-focus images or to generate stereoscopic pairs. Plenoptic methods are being explored for fusing images from distributed arrays of cameras, with a view toward applications in which hardware resources are limited (e.g. size, weight, power constraints). Through computer simulation and experimental studies, the influences of non-idealities such as camera position uncertainty are being considered. Component image rescaling and balancing methods are being explored to compensate. Of interest is the impact on precision passive ranging and super-resolution. In a preliminary experiment, a set of images from a camera array was recorded and merged to form a 3D representation of a scene. Conventional plenoptic refocusing was demonstrated and techniques were explored for balancing the images. Nonlinear methods for combining the images were explored to limit the ghosting caused by sub-sampling. Plenoptic processing was explored as a means for determining 3D information from airborne video. Successive frames were processed as camera array elements to extract the heights of structures. Practical means were considered for rendering the 3D information in color.

  15. Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration

    SciTech Connect

    Ringdal, F; Harris, D B; Dodge, D; Gibbons, S J

    2009-07-23

    extend detection to lower magnitudes. This year we addressed a problem long known to limit the acceptance of correlation detectors in practice: the labor intensive development of templates. For example, existing design methods cannot keep pace with rapidly unfolding aftershock sequences. We successfully built and tested an object-oriented framework (as described in our 2005 proposal) for autonomous calibration of waveform correlation detectors for an array. The framework contains a dynamic list of detectors of several types operating on a continuous array data stream. The list has permanent detectors: beam forming power (STA/LTA) detectors which serve the purpose of detecting signals not yet characterized with a waveform template. The framework also contains an arbitrary number of subspace detectors which are launched automatically using the waveforms from validated power detections as templates. The implementation is very efficient such that the computational cost of adding subspace detectors was low. The framework contains a supervisor that oversees the validation of power detections, and periodically halts the processing to revise the portfolio of detectors. The process of revision consists of collecting the waveforms from all detections, performing cross-correlations pairwise among all waveforms, clustering the detections using correlations as a distance measure, then creating a new subspace detector from each cluster. The collection of new subspace detectors replaces the existing portfolio and processing of the data stream resumes. This elaborate scheme was implemented to prevent proliferation of closely-related subspace detectors. The method performed very well on several simple sequences: 2005 'drumbeat' events observed locally at Mt. St. Helens, and the 2003 Orinda, CA aftershock sequence. Our principal test entailed detection of the aftershocks of the San Simeon earthquake using the NVAR array; in this case, the system automatically detected and categorized
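
    Two ingredients of the autonomous-calibration framework described above, grouping validated detections by waveform correlation and forming a low-rank subspace template from each cluster, can be sketched as follows. The helper names, the greedy clustering rule, and the toy data are illustrative assumptions, not the framework's actual implementation.

```python
import numpy as np

def cluster_by_correlation(waveforms, cc_min=0.8):
    """Greedily group detected waveforms whose normalized correlation exceeds
    cc_min (waveforms are assumed pre-aligned on their detection times)."""
    W = waveforms / np.linalg.norm(waveforms, axis=1, keepdims=True)
    C = W @ W.T                                   # pairwise zero-lag correlations
    labels = -np.ones(len(waveforms), dtype=int)
    next_label = 0
    for i in range(len(waveforms)):
        if labels[i] < 0:
            labels[i] = next_label
            next_label += 1
        for j in range(i + 1, len(waveforms)):
            if labels[j] < 0 and C[i, j] >= cc_min:
                labels[j] = labels[i]
    return labels

def subspace_template(cluster_waveforms, rank=2):
    """Orthonormal basis spanning a cluster of repeating-event waveforms --
    the template used by a subspace detector."""
    U, _, _ = np.linalg.svd(np.asarray(cluster_waveforms).T, full_matrices=False)
    return U[:, :rank]

# Toy usage: five noisy repeats of one "event" plus three unrelated detections.
rng = np.random.default_rng(2)
master = rng.standard_normal(200)
dets = np.array([master + 0.2 * rng.standard_normal(200) for _ in range(5)]
                + [rng.standard_normal(200) for _ in range(3)])
labels = cluster_by_correlation(dets)
print("cluster labels:", labels, "| subspace basis shape:", subspace_template(dets[labels == 0]).shape)
```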

  16. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  17. Color filter array demosaicing: an adaptive progressive interpolation based on the edge type

    NASA Astrophysics Data System (ADS)

    Dong, Qiqi; Liu, Zhaohui

    2015-10-01

    Color filter array (CFA) is one of the key points for single-sensor digital cameras to produce color images. Bayer CFA is the most commonly used pattern. In this array structure, the sampling frequency of green is twice that of red or blue, which is consistent with the sensitivity of human eyes to colors. However, each sensor pixel only samples one of three primary color values. To render a full-color image, an interpolation process, commonly referred to as CFA demosaicing, is required to estimate the other two missing color values at each pixel. In this paper, we explore an adaptive progressive interpolation algorithm based on the edge type. The proposed demosaicing method consists of two successive steps: an interpolation step that estimates missing color values according to various edges and a post-processing step by iterative interpolation.
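
    The edge-type idea behind the interpolation step can be illustrated with a minimal sketch for the green channel of an RGGB Bayer mosaic; the function name and toy image are assumptions, and the paper's full method additionally includes the iterative post-processing step.

```python
import numpy as np

def interpolate_green(cfa):
    """Edge-adaptive green interpolation at the red/blue sites of an RGGB Bayer
    mosaic: interpolate along the direction with the smaller gradient."""
    H, W = cfa.shape
    green = cfa.astype(float)
    for r in range(2, H - 2):
        for c in range(2, W - 2):
            if (r + c) % 2 == 0:                          # green is missing at R and B sites
                dh = abs(cfa[r, c - 1] - cfa[r, c + 1])   # horizontal gradient
                dv = abs(cfa[r - 1, c] - cfa[r + 1, c])   # vertical gradient
                if dh < dv:                               # edge runs horizontally
                    green[r, c] = (cfa[r, c - 1] + cfa[r, c + 1]) / 2
                elif dv < dh:                             # edge runs vertically
                    green[r, c] = (cfa[r - 1, c] + cfa[r + 1, c]) / 2
                else:                                     # flat region: average all four neighbours
                    green[r, c] = (cfa[r, c - 1] + cfa[r, c + 1]
                                   + cfa[r - 1, c] + cfa[r + 1, c]) / 4
    return green

cfa = np.tile(np.arange(16.0), (16, 1))   # toy horizontal-ramp "mosaic"
print(interpolate_green(cfa)[4:7, 4:7])
```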

  18. Gallium arsenide processing for gate array logic

    NASA Technical Reports Server (NTRS)

    Cole, Eric D.

    1989-01-01

    The development of a reliable and reproducible GaAs process was initiated for applications in gate array logic. Gallium arsenide is an extremely important material for high speed electronic applications in both digital and analog circuits, since its electron mobility is 3 to 5 times that of silicon; this allows for faster switching times in devices fabricated with it. Unfortunately, GaAs is an extremely difficult material to process compared with silicon, and because of its arsenic component it can be quite dangerous (toxic), especially during some heating steps. The first stage of the research was directed at developing a simple process to produce GaAs MESFETs. The MESFET (MEtal Semiconductor Field Effect Transistor) is the most useful, practical and simple active device which can be fabricated in GaAs. It utilizes an ohmic source and drain contact separated by a Schottky gate. The gate width is typically a few microns. Several process steps were required to produce a good working device, including ion implantation, photolithography, thermal annealing, and metal deposition. A process was designed to reduce the total number of steps to a minimum so as to reduce possible errors. The first run produced no good devices. The problem occurred during an aluminum etch step while defining the gate contacts. It was found that the chemical etchant attacked the GaAs, causing trenching and subsequent severing of the active gate region from the rest of the device. Thus all devices appeared as open circuits. This problem is being corrected, and since it was the last step in the process the correction should be successful. The second planned stage involves the circuit assembly of the discrete MESFETs into logic gates for test and analysis. Finally, the third stage is to incorporate the designed process with the tested circuit in a layout that would produce the gate array as a GaAs integrated circuit.

  19. Optical implementation of systolic array processing

    NASA Technical Reports Server (NTRS)

    Caulfield, H. J.; Rhodes, W. T.; Foster, M. J.; Horvitz, S.

    1981-01-01

    Algorithms for matrix-vector multiplication are implemented using acousto-optic cells for multiplication and input data transfer and using charge-coupled-device (CCD) detector arrays for accumulation and output of the results. No two-dimensional matrix mask is required; matrix changes are implemented electronically. A system for multiplying a 50-component nonnegative real vector by a 50 by 50 nonnegative real matrix is described. Modifications for bipolar real and complex-valued processing are possible, as are extensions to matrix-matrix multiplication and multiplication of a vector by multiple matrices.

  20. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    SciTech Connect

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify, with 98.2% accuracy, the origins of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.
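
    A bare-bones version of the empirical matched field classifier, projecting narrow-band array spectra onto calibrated wavefield templates and averaging the matched power over frequency bands, might look as follows; the calibration dictionary, normalization, and toy data are illustrative assumptions rather than the authors' processing.

```python
import numpy as np

def empirical_mfp_scores(obs_spectra, calibrations):
    """Empirical matched field power: project the observed narrow-band array
    spectra onto calibrated wavefield templates (one per candidate source) and
    average the matched power over the frequency bands."""
    scores = {}
    for source_id, cal in calibrations.items():
        p = 0.0
        for f in range(obs_spectra.shape[0]):
            w = cal[f] / np.linalg.norm(cal[f])              # unit-norm replica for this band
            d = obs_spectra[f] / np.linalg.norm(obs_spectra[f])
            p += abs(np.vdot(w, d)) ** 2                     # matched-field power in this band
        scores[source_id] = p / obs_spectra.shape[0]
    return scores                                            # pick the source with the largest score

# Toy calibration: two "mines" with distinct (random) wavefield structure over 20 bands, 9 channels.
rng = np.random.default_rng(3)
n_freq, n_chan = 20, 9
cal = {m: rng.standard_normal((n_freq, n_chan)) + 1j * rng.standard_normal((n_freq, n_chan))
       for m in ("mine_A", "mine_B")}
obs = cal["mine_A"] + 0.3 * (rng.standard_normal((n_freq, n_chan))
                             + 1j * rng.standard_normal((n_freq, n_chan)))
print(empirical_mfp_scores(obs, cal))
```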

  1. Adaptive beamforming of a towed array during maneuvering

    NASA Astrophysics Data System (ADS)

    Gong, Zaixiao; Lin, Peng; Guo, Yonggang; Zhang, Renhe; Li, Fenghua

    2012-11-01

    During maneuvering, the performance of Minimum Variance Distortionless Response (MVDR) beamforming for a towed hydrophone array degrades greatly due to array shape error. Under the assumption that the shape of a towed array changes in a known way during the observation interval, an improved MVDR method is proposed. A static array with the average shape over the observation interval is taken as a reference array shape. The phase difference of the cross-spectral density matrix (CSDM) between the time-varying array and the reference array is compensated for each azimuth. A coherent CSDM accumulation can then be achieved. Experimental results show that the improved MVDR method yields better performance than conventional MVDR with a time-varying array. This helps to resolve the problems of left-right target ambiguity and weak-signal detection for time-varying arrays.
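
    The compensation idea, correcting each snapshot's phase from the instantaneous array shape to a reference (average) shape, azimuth by azimuth, before the CSDM is accumulated, can be sketched as below, assuming the time-varying element positions are known; all names and the toy scenario are illustrative, not the authors' processing chain.

```python
import numpy as np

def mvdr_motion_compensated(snapshots, positions, ref_positions, k, look_dirs, eps=1e-3):
    """Compensate the phase of each snapshot from its instantaneous array shape
    to a fixed reference shape, azimuth by azimuth, accumulate an
    azimuth-dependent CSDM, and evaluate the MVDR spectrum on the reference shape."""
    K, N = snapshots.shape
    R = np.zeros((len(look_dirs), N, N), dtype=complex)
    for kk in range(K):
        for j, th in enumerate(look_dirs):
            u = np.array([np.cos(th), np.sin(th)])
            dphi = k * (positions[kk] - ref_positions) @ u    # per-element phase error toward this azimuth
            x = snapshots[kk] * np.exp(-1j * dphi)            # snapshot referred to the reference shape
            R[j] += np.outer(x, x.conj()) / K
    power = np.empty(len(look_dirs))
    for j, th in enumerate(look_dirs):
        u = np.array([np.cos(th), np.sin(th)])
        s = np.exp(1j * k * ref_positions @ u)                # steering vector on the reference shape
        Rl = R[j] + eps * np.real(np.trace(R[j])) / N * np.eye(N)
        power[j] = 1.0 / np.real(s.conj() @ np.linalg.solve(Rl, s))
    return power

# Toy usage: a 16-element line array whose transverse bow changes from snapshot to snapshot.
rng = np.random.default_rng(4)
N, K, k = 16, 50, 2 * np.pi / 1.5
ref = np.stack([0.75 * np.arange(N), np.zeros(N)], axis=1)
bow = 0.2 * rng.standard_normal((K, 1)) * np.sin(np.linspace(0, np.pi, N))[None, :]
pos = np.stack([np.tile(ref[:, 0], (K, 1)), bow], axis=2)
u_src = np.array([np.cos(0.5), np.sin(0.5)])
X = np.array([rng.standard_normal() * np.exp(1j * k * pos[kk] @ u_src) for kk in range(K)])
X = X + 0.1 * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N)))
angles = np.radians(np.arange(0.0, 181.0))
best = angles[np.argmax(mvdr_motion_compensated(X, pos, ref, k, angles))]
print("estimated source bearing (deg):", np.degrees(best))
```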

  2. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term 'mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
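
    For reference, the P-vector iteration mentioned above replaces the unavailable term d(n)X(n) of the plain LMS update with the known cross-correlation vector P, i.e. W(n+1) = W(n) + mu [P - X(n) X(n)^T W(n)], so that W(n) tends toward Wopt = R^-1 P. A minimal real-valued sketch, with illustrative variable names and toy data, checked against the direct Wiener solution:

```python
import numpy as np

def p_vector_lms(data, p, mu=0.01):
    """Griffiths' P-vector iteration: an LMS-like stochastic-gradient update
    toward Wopt = R^-1 p that needs only the data snapshots and the known
    cross-correlation vector p, not the desired signal itself."""
    w = np.zeros_like(p)
    for x in data:                        # x is one L-dimensional observation vector
        w = w + mu * (p - x * (x @ w))    # d(n)x(n) of plain LMS is replaced by p
    return w

# Toy check against the direct Wiener solution R^-1 p.
rng = np.random.default_rng(5)
L, n = 4, 20000
A = np.eye(L) + 0.3 * rng.standard_normal((L, L))
X = rng.standard_normal((n, L)) @ A.T                     # correlated observation vectors
d = X @ np.array([1.0, -0.5, 0.25, 0.0]) + 0.1 * rng.standard_normal(n)
R = X.T @ X / n
p = X.T @ d / n                                           # assumed known a priori
print("P-vector algorithm:", np.round(p_vector_lms(X, p), 3))
print("Wiener solution   :", np.round(np.linalg.solve(R, p), 3))
```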

  3. Array signal processing in the NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we will describe the benefits of arraying and past as well as expected future use of this application. The signal processing aspects of the array system are described. Field measurements from actual spacecraft tracking are also presented.

  4. Optical Profilometers Using Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Hall, Gregory A.; Youngquist, Robert; Mikhael, Wasfy

    2006-01-01

    A method of adaptive signal processing has been proposed as the basis of a new generation of interferometric optical profilometers for measuring surfaces. The proposed profilometers would be portable, hand-held units. Sizes could be thus reduced because the adaptive-signal-processing method would make it possible to substitute lower-power coherent light sources (e.g., laser diodes) for white light sources and would eliminate the need for most of the optical components of current white-light profilometers. The adaptive-signal-processing method would make it possible to attain scanning ranges of the order of decimeters in the proposed profilometers.

  5. Kalman filter-based microphone array signal processing using the equivalent source model

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Chen, Ching-Cheng

    2012-10-01

    This paper demonstrates that microphone array signal processing can be implemented by using adaptive model-based filtering approaches. Nearfield and farfield sound propagation models are formulated into state-space forms in light of the Equivalent Source Method (ESM). In the model, the unknown source amplitudes of the virtual sources are adaptively estimated by using Kalman filters (KFs). The nearfield array aimed at noise source identification is based on a Multiple-Input-Multiple-Output (MIMO) state-space model with minimal realization, whereas the farfield array technique aimed at speech quality enhancement is based on a Single-Input-Multiple-Output (SIMO) state-space model. Performance of the nearfield array is evaluated in terms of relative error of the velocity reconstructed on the actual source surface. Numerical simulations for the nearfield array were conducted with a baffled planar piston source. From the error metric, the proposed KF algorithm proved effective in identifying noise sources. Objective simulations and subjective experiments are undertaken to validate the proposed farfield arrays in comparison with two conventional methods. The results of objective tests indicated that the farfield arrays significantly enhanced the speech quality and word recognition rate. The results of subjective tests post-processed with the analysis of variance (ANOVA) and a post-hoc Fisher's least significant difference (LSD) test have shown great promise in the KF-based microphone array signal processing techniques.
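
    The adaptive estimation step can be illustrated with a reduced sketch: a random-walk state model for the equivalent-source amplitudes and a Kalman-filter update per array snapshot. The free-field Green's-function propagation matrix, noise levels, and geometry below are illustrative assumptions; the paper's MIMO and SIMO state-space formulations with minimal realization are more involved.

```python
import numpy as np

def kf_source_amplitudes(pressure_snaps, G, q_var=1e-2, meas_var=1e-4):
    """Kalman-filter tracking of equivalent-source amplitudes: random-walk state
    model q(k+1) = q(k) + w, measurement p(k) = G q(k) + v."""
    n_mic, n_src = G.shape
    q = np.zeros(n_src, dtype=complex)           # current amplitude estimates
    P = np.eye(n_src, dtype=complex)             # state covariance
    Q = q_var * np.eye(n_src)
    R = meas_var * np.eye(n_mic)
    for p_meas in pressure_snaps:                # one array snapshot per step
        P = P + Q                                # predict (state transition is the identity)
        S = G @ P @ G.conj().T + R               # innovation covariance
        K = P @ G.conj().T @ np.linalg.inv(S)    # Kalman gain
        q = q + K @ (p_meas - G @ q)             # measurement update
        P = (np.eye(n_src) - K @ G) @ P
    return q

# Toy usage: 12 microphones, 4 equivalent sources, hypothetical free-field Green's functions.
rng = np.random.default_rng(6)
k = 2 * np.pi / 0.34                             # wavenumber for ~1 kHz in air
mics = np.stack([np.linspace(0, 0.5, 12), np.zeros(12), np.full(12, 0.2)], axis=1)
srcs = np.stack([np.linspace(0.1, 0.4, 4), np.zeros(4), np.zeros(4)], axis=1)
r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)
q_true = np.array([1.0, 0.0, 0.5j, 0.0])
snaps = [G @ q_true + 1e-2 * (rng.standard_normal(12) + 1j * rng.standard_normal(12)) for _ in range(50)]
print("estimated source amplitudes:", np.round(kf_source_amplitudes(snaps, G), 3))
```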

  6. Issues critical to the application of adaptive array antennas to missile seekers

    NASA Astrophysics Data System (ADS)

    Trapp, R. L.; Ronnenburg, C. H.

    1983-09-01

    Missile seekers will confront complex and hostile signal environments that can inhibit severely their ability to intercept threatening targets. Dramatic target detection and homing performance improvement in main beam and sidelobe jamming is realizable with a seeker antenna that can optimally adapt, in real time, its response to the signal environment. Adaptive array antennas can be designed to optimize the signal-to-interference-plus-noise ratio by forming pattern nulls directed toward sources of interference while simultaneously maximizing gain in the desired signal direction. Physical and operational missile constraints place severe requirements on an adaptive array. Nevertheless, there are several array configurations and adaptive processors that can satisfy these constraints in the next decade. Technology is a dominant limitation to adaptive array performance in a missile seeker. Signal processors and array implementations using state-of-the-art technology are required. Critical experimentation and representative simulations are needed to establish error effects, preferred adaptive array implementations, detailed requirements, and relative cost estimates. Although an adaptive missile seeker antenna is physically realizable in the next decade, the tradeoffs between cost, complexity, and performance will determine its utility and practicality.

  7. Adaptive-array Electron Cyclotron Emission diagnostics using data streaming in a Software Defined Radio system

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Hamasaki, M.; Fujisawa, A.; Nagashima, Y.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; the QUEST team

    2016-04-01

    Measurement of the Electron Cyclotron Emission (ECE) spectrum is one of the most popular electron temperature diagnostics in nuclear fusion plasma research. A 2-dimensional ECE imaging system was developed with an adaptive-array approach. A radio-frequency (RF) heterodyne detection system with Software Defined Radio (SDR) devices and a phased-array receiver antenna was used to measure the phase and amplitude of the ECE wave. The SDR heterodyne system could continuously measure the phase and amplitude with sufficient accuracy and time resolution while the previous digitizer system could only acquire data at specific times. Robust streaming phase measurements for adaptive-arrayed continuous ECE diagnostics were demonstrated using Fast Fourier Transform (FFT) analysis with the SDR system. The emission field pattern was reconstructed using adaptive-array analysis. The reconstructed profiles were discussed using profiles calculated from coherent single-frequency radiation from the phase array antenna.

  8. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.

    PubMed

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-01-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched method in programming is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as serving as a real product in certain applications. PMID:26978363
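
    The space-time processing kernel of such a testbed can be sketched in a few lines. The snippet below shows a power-inversion (nulling-mode) STAP weight computation under assumed parameters; it is not the testbed's GPU implementation or its batched programming method.

```python
import numpy as np

def space_time_snapshots(x, n_taps):
    """Stack n_taps delayed copies of the (T, n_ant) element data into
    space-time snapshots of dimension n_ant * n_taps."""
    T, n_ant = x.shape
    cols = [x[n_taps - 1 - d:T - d] for d in range(n_taps)]
    return np.concatenate(cols, axis=1)

def stap_nulling_weights(x_st, diag_load=1e-2):
    """Power-inversion STAP: since GNSS signals sit below the noise floor,
    minimizing total output power subject to a unit response on a reference
    antenna/tap nulls the much stronger jammers."""
    K, D = x_st.shape
    R = x_st.conj().T @ x_st / K + diag_load * np.eye(D)   # space-time covariance
    e = np.zeros(D)
    e[0] = 1.0                                             # constraint: reference element, first tap
    w = np.linalg.solve(R, e)
    return w / (e @ w)                                     # unity gain on the reference channel

# Toy scenario: 4 antennas, 3 taps, one strong broadband jammer arriving from 40 degrees.
rng = np.random.default_rng(7)
n_ant, n_taps, T = 4, 3, 4000
jam_sv = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(np.radians(40.0)))
jam = 30.0 * rng.standard_normal((T, 1)) * jam_sv[None, :]
noise = (rng.standard_normal((T, n_ant)) + 1j * rng.standard_normal((T, n_ant))) / np.sqrt(2)
X = space_time_snapshots(jam + noise, n_taps)
w = stap_nulling_weights(X)
print("output power with nulling:", np.mean(np.abs(X @ w.conj()) ** 2))
print("single-antenna input power:", np.mean(np.abs(X[:, 0]) ** 2))
```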

  9. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    PubMed Central

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-01-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched method in programming is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as serving as a real product in certain applications. PMID:26978363

  10. Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Ianculescu, G. D.; Klop, J. J.

    1992-01-01

    Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom are designed using a continuous rigid body model of the solar array gimbal assembly containing both linear and nonlinear dynamics due to various friction components. The robustness of the design solution is examined by performing a series of sensitivity analysis studies. Adaptive control strategies are examined in order to compensate for the unfavorable effect of static nonlinearities, such as dead-zone uncertainties.

  11. Optoelectronic signal processing for phased-array antennas; Proceedings of the Meeting, Los Angeles, CA, Jan. 12, 13, 1988

    NASA Astrophysics Data System (ADS)

    Bhasin, Kul B.; Hendrickson, Brian M.

    1988-01-01

    Papers are presented on fiber optic links for airborne satellite applications, optoelectronic techniques for broadband switching, and GaAs circuits for a monolithic optical controller. Other topics include the optical processing of covariance matrices for adaptive processors, an optical linear heterodyne matrix-vector processor, and an EHF fiber optic-based array. An adaptive optical signal processing architecture using a signed-digit number system is considered along with microwave fiber optic links for phased arrays.

  12. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real-time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean-square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed. PMID:2180633

  13. NORSAR Final Scientific Report Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration

    SciTech Connect

    Gibbons, S J; Ringdal, F; Harris, D B

    2009-04-16

    Correlation detection is a relatively new approach in seismology that offers significant advantages in increased sensitivity and event screening over standard energy detection algorithms. The basic concept is that a representative event waveform is used as a template (i.e. matched filter) that is correlated against a continuous, possibly multichannel, data stream to detect new occurrences of that same signal. These algorithms are therefore effective at detecting repeating events, such as explosions and aftershocks at a specific location. This final report summarizes the results of a three-year cooperative project undertaken by NORSAR and Lawrence Livermore National Laboratory. The overall objective has been to develop and test a new advanced, automatic approach to seismic detection using waveform correlation. The principal goal is to develop an adaptive processing algorithm. By this we mean that the detector is initiated using a basic set of reference ('master') events to be used in the correlation process, and then an automatic algorithm is applied successively to provide improved performance by extending the set of master events selectively and strategically. These additional master events are generated by an independent, conventional detection system. A periodic analyst review will then be applied to verify the performance and, if necessary, adjust and consolidate the master event set. A primary focus of this project has been the application of waveform correlation techniques to seismic arrays. The basic procedure is to perform correlation on the individual channels, and then stack the correlation traces using zero-delay beam forming. Array methods such as frequency-wavenumber analysis can be applied to this set of correlation traces to help guarantee the validity of detections and lower the detection threshold. In principle, the deployment of correlation detectors against seismically active regions could involve very large numbers of very specific detectors. To

  14. Array model interpolation and subband iterative adaptive filters applied to beamforming-based acoustic echo cancellation.

    PubMed

    Bai, Mingsian R; Chi, Li-Wen; Liang, Li-Huang; Lo, Yi-Yang

    2016-02-01

    In this paper, an evolutionary exposition is given in regard to the enhancing strategies for acoustic echo cancellers (AECs). A fixed beamformer (FBF) is utilized to focus on the near-end speaker while suppressing the echo from the far end. In reality, the array steering vector could differ considerably from the ideal freefield plane wave model. Therefore, an experimental procedure is developed to interpolate a practical array model from the measured frequency responses. Subband (SB) filtering with polyphase implementation is exploited to accelerate the cancellation process. Generalized sidelobe canceller (GSC) composed of an FBF and an adaptive blocking module is combined with AEC to maximize cancellation performance. Another enhancement is an internal iteration (IIT) procedure that enables efficient convergence in the adaptive SB filters within a sample time. Objective tests in terms of echo return loss enhancement (ERLE), perceptual evaluation of speech quality (PESQ), word recognition rate for automatic speech recognition (ASR), and subjective listening tests are conducted to validate the proposed AEC approaches. The results show that the GSC-SB-AEC-IIT approach has attained the highest ERLE without speech quality degradation, even in double-talk scenarios. PMID:26936567

  15. Implementation and use of systolic array processes

    SciTech Connect

    Kung, H.T.

    1983-01-01

    Major efforts are now underway to use systolic array processors in large, real-life applications. The author examines various implementation issues and alternatives, the latter from the viewpoints of flexibility and interconnection topologies. He then identifies some work that is essential to the eventual wide use of systolic array processors, such as the development of building blocks, system support and suitable algorithms. 24 references.

  16. MSAT-X phased array antenna adaptions to airborne applications

    NASA Technical Reports Server (NTRS)

    Sparks, C.; Chung, H. H.; Peng, S. Y.

    1988-01-01

    The Mobile Satellite Experiment (MSAT-X) phased array antenna is being modified to meet future requirements. The proposed system consists of two high gain antennas mounted on each side of a fuselage, and a low gain antenna mounted on top of the fuselage. Each antenna is an electronically steered phased array based on the design of the MSAT-X antenna. A beamforming network is connected to the array elements via coaxial cables. It is essential that the proposed antenna system be able to provide an adequate communication link over the required space coverage, which is 360 degrees in azimuth and from 20 degrees below the horizon to the zenith in elevation. Alternative design concepts are suggested. Both open loop and closed loop backup capabilities are discussed. Typical antenna performance data are also included.

  17. Array Processing in the Cloud: the rasdaman Approach

    NASA Astrophysics Data System (ADS)

    Merticariu, Vlad; Dumitru, Alex

    2015-04-01

    The multi-dimensional array data model is gaining more and more attention when dealing with Big Data challenges in a variety of domains such as climate simulations, geographic information systems, medical imaging or astronomical observations. Solutions provided by classical Big Data tools such as Key-Value Stores and MapReduce, as well as traditional relational databases, proved to be limited in domains associated with multi-dimensional data. This problem has been addressed by the field of array databases, in which systems provide database services for raster data, without imposing limitations on the number of dimensions that a dataset can have. Examples of datasets commonly handled by array databases include 1-dimensional sensor data, 2-D satellite imagery, 3-D x/y/t image time series as well as x/y/z geophysical voxel data, and 4-D x/y/z/t weather data. In astrophysics, this can grow as large as simulations of the whole universe. rasdaman is a well-established array database, which implements many optimizations for dealing with large data volumes and operation complexity. Among those, the latest one is intra-query parallelization support: a network of machines collaborate for answering a single array database query, by dividing it into independent sub-queries sent to different servers. This enables massive processing speed-ups, which promise solutions to research challenges on multi-Petabyte data cubes. There are several correlated factors which influence the speedup that intra-query parallelisation brings: the number of servers, the capabilities of each server, the quality of the network, the availability of the data to the server that needs it in order to compute the result, and many more. In the effort of adapting the engine to cloud processing patterns, two main components have been identified: one that handles communication and gathers information about the arrays sitting on every server, and a processing unit responsible for dividing work

  18. Multi-microphone adaptive array augmented with visual cueing.

    PubMed

    Gibson, Paul L; Hedin, Dan S; Davies-Venn, Evelyn E; Nelson, Peggy; Kramer, Kevin

    2012-01-01

    We present the development of an audiovisual array that enables hearing aid users to converse with multiple speakers in reverberant environments with significant speech babble noise where their hearing aids do not function well. The system concept consists of a smartphone, a smartphone accessory, and a smartphone software application. The smartphone accessory concept is a multi-microphone audiovisual array in a form factor that allows attachment to the back of the smartphone. The accessory will also contain a lower power radio by which it can transmit audio signals to compatible hearing aids. The smartphone software application concept will use the smartphone's built in camera to acquire images and perform real-time face detection using the built-in face detection support of the smartphone. The audiovisual beamforming algorithm uses the location of talking targets to improve the signal to noise ratio and consequently improve the user's speech intelligibility. Since the proposed array system leverages a handheld consumer electronic device, it will be portable and low cost. A PC based experimental system was developed to demonstrate the feasibility of an audiovisual multi-microphone array and these results are presented. PMID:23366063

  19. Neural Adaptation Effects in Conceptual Processing

    PubMed Central

    Marino, Barbara F. M.; Borghi, Anna M.; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia

    2015-01-01

    We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view. PMID:26264031

  20. An experimental SMI adaptive antenna array for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Gupta, I. J.

    1989-01-01

    A modified sample matrix inversion (SMI) algorithm designed to increase the suppression of weak interference is implemented on an existing experimental array system. The algorithm itself is fully described, as are a number of issues concerning its implementation and evaluation, such as sample scaling, snapshot formation, weight normalization, power calculation, and system calibration. Several experiments show that the steady state performance (i.e., many snapshots are used to calculate the array weights) of the experimental system compares favorably with its theoretical performance. It is demonstrated that standard SMI does not yield adequate suppression of weak interference. Modified SMI is then used to experimentally increase this suppression by as much as 13 dB.

  1. Adaptive array technique for differential-phase reflectometry in QUEST

    SciTech Connect

    Idei, H.; Hanada, K.; Zushi, H.; Nagata, K.; Mishra, K.; Itado, T.; Akimoto, R.; Yamamoto, M. K.

    2014-11-15

    A Phased Array Antenna (PAA) was considered as launching and receiving antennas in reflectometry to attain good directivity in the applied microwave range. A well-focused beam was obtained in a launching antenna application, and differential-phase evolution was properly measured by using a metal reflector plate in the proof-of-principle experiment at low-power test facilities. Differential-phase evolution was also evaluated by using the PAA in the Q-shu University Experiment with Steady State Spherical Tokamak (QUEST). A beam-forming technique was applied in receiving phased-array antenna measurements. In the QUEST device, which should be considered a large oversized cavity, a significant standing-wave effect was observed as perturbed phase evolution. A new approach using the derivative of the measured field with respect to the propagating wavenumber was proposed to eliminate the standing-wave effect.

  2. Adaptive array technique for differential-phase reflectometry in QUEST.

    PubMed

    Idei, H; Nagata, K; Mishra, K; Yamamoto, M K; Itado, T; Akimoto, R; Hanada, K; Zushi, H

    2014-11-01

    A Phased Array Antenna (PAA) was considered as launching and receiving antennas in reflectometry to attain good directivity in the applied microwave range. A well-focused beam was obtained in a launching antenna application, and differential-phase evolution was properly measured by using a metal reflector plate in the proof-of-principle experiment at low-power test facilities. Differential-phase evolution was also evaluated by using the PAA in the Q-shu University Experiment with Steady State Spherical Tokamak (QUEST). A beam-forming technique was applied in receiving phased-array antenna measurements. In the QUEST device, which should be considered a large oversized cavity, a significant standing-wave effect was observed as perturbed phase evolution. A new approach using the derivative of the measured field with respect to the propagating wavenumber was proposed to eliminate the standing-wave effect. PMID:25430255

  3. Adaptive array technique for differential-phase reflectometry in QUEST

    NASA Astrophysics Data System (ADS)

    Idei, H.; Nagata, K.; Mishra, K.; Yamamoto, M. K.; Itado, T.; Akimoto, R.; Hanada, K.; Zushi, H.

    2014-11-01

    A Phased Array Antenna (PAA) was considered as launching and receiving antennas in reflectometry to attain good directivity in the applied microwave range. A well-focused beam was obtained in a launching antenna application, and differential-phase evolution was properly measured by using a metal reflector plate in the proof-of-principle experiment at low-power test facilities. Differential-phase evolution was also evaluated by using the PAA in the Q-shu University Experiment with Steady State Spherical Tokamak (QUEST). A beam-forming technique was applied in receiving phased-array antenna measurements. In the QUEST device, which should be considered a large oversized cavity, a significant standing-wave effect was observed as perturbed phase evolution. A new approach using the derivative of the measured field with respect to the propagating wavenumber was proposed to eliminate the standing-wave effect.

  4. Array enhanced stochastic resonance: Implications for signal processing

    SciTech Connect

    Inchiosa, M.E.; Bulsara, A.R.; Lindner, J.F.; Meadows, B.K.; Ditto, W.L.

    1996-06-01

    In computer simulations, we enhance the response of a "stochastic resonator" by coupling it into an array of identical resonators. We relate this array enhanced stochastic resonance (AESR) to the global spatiotemporal dynamics of the array and show how noise and coupling cooperate to organize spatial order, temporal periodicity, and peak output signal-to-noise ratio. We consider the application of AESR to signal processing. © 1996 American Institute of Physics.
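
    A minimal simulation in the spirit of the abstract, a ring of coupled overdamped bistable elements driven by a weak periodic signal plus independent noise, is sketched below; the parameters, coupling topology, and crude SNR estimate are illustrative assumptions only, not the authors' model.

```python
import numpy as np

def coupled_bistable_array(n_elem=10, coupling=1.0, noise_d=0.3,
                           amp=0.3, omega=0.1, dt=0.01, steps=120_000, seed=8):
    """Euler-Maruyama simulation of a ring of coupled overdamped bistable
    elements driven by a weak periodic signal plus independent noise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_elem)
    out = np.empty(steps)
    for n in range(steps):
        lap = np.roll(x, 1) + np.roll(x, -1) - 2 * x                  # nearest-neighbour coupling
        drift = x - x**3 + coupling * lap + amp * np.sin(omega * n * dt)
        x = x + drift * dt + np.sqrt(2 * noise_d * dt) * rng.standard_normal(n_elem)
        out[n] = x[n_elem // 2]                                       # observe one element
    return out

def output_snr(signal, omega, dt):
    """Crude output SNR: power in the driving-frequency bin over the local noise floor."""
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = 2 * np.pi * np.fft.rfftfreq(len(signal), dt)
    kk = np.argmin(np.abs(freqs - omega))
    floor = np.median(spec[max(kk - 40, 1):kk + 40])
    return spec[kk] / floor

for c in (0.0, 1.0):     # compare an effectively uncoupled element with the coupled array
    y = coupled_bistable_array(coupling=c)
    print(f"coupling = {c}: output SNR ~ {output_snr(y, 0.1, 0.01):.1f}")
```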

  5. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on the classical or fast projection algorithms is analyzed, which estimates the background using either median filtering or the method of bilateral spatial contrast.
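
    For reference, the classical Capon (MVDR) and MUSIC spatial spectra mentioned above can be sketched as follows for a uniform linear array; the Johnson and fast projection algorithms discussed in the review are not shown, and all parameters are illustrative.

```python
import numpy as np

def bearing_spectra(snapshots, n_sources, spacing_wavelengths, angles_deg):
    """Capon (MVDR) and MUSIC bearing spectra for a uniform linear array."""
    K, N = snapshots.shape
    R = snapshots.conj().T @ snapshots / K
    Rinv = np.linalg.inv(R + 1e-6 * np.real(np.trace(R)) / N * np.eye(N))
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, : N - n_sources]               # noise subspace (smallest eigenvalues)
    capon, music = [], []
    for th in np.radians(angles_deg):
        a = np.exp(2j * np.pi * spacing_wavelengths * np.arange(N) * np.sin(th))
        capon.append(1.0 / np.real(a.conj() @ Rinv @ a))
        music.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(capon), np.array(music)

# Toy scenario: two closely spaced sources at 10 and 15 degrees, half-wavelength 12-element array.
rng = np.random.default_rng(9)
N, K = 12, 400
a = lambda deg: np.exp(2j * np.pi * 0.5 * np.arange(N) * np.sin(np.radians(deg)))
S = rng.standard_normal((K, 2)) + 1j * rng.standard_normal((K, 2))
X = S @ np.stack([a(10.0), a(15.0)]) + 0.3 * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N)))
angles = np.arange(-30.0, 31.0)
capon, music = bearing_spectra(X, 2, 0.5, angles)
print("Capon peak (deg):", angles[np.argmax(capon)], "| MUSIC peak (deg):", angles[np.argmax(music)])
```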

  6. Speech intelligibility enhancement using hearing-aid array processing.

    PubMed

    Saunders, G H; Kates, J M

    1997-09-01

    Microphone arrays can improve speech recognition in noise for hearing-impaired listeners by suppressing interference arriving from directions other than that of the desired signal. In a previous paper [J. M. Kates and M. R. Weiss, J. Acoust. Soc. Am. 99, 3138-3148 (1996)], several array-processing techniques were evaluated in two rooms using the AI-weighted array gain as the performance metric. The array consisted of five omnidirectional microphones having uniform 2.5-cm spacing, oriented in the endfire direction. In this paper, the speech intelligibility for two of the array processing techniques, delay-and-sum beamforming and superdirective processing, is evaluated for a group of hearing-impaired subjects. Speech intelligibility was measured using the speech reception threshold (SRT) for spondees and the speech intelligibility rating (SIR) for sentence materials. The array performance is compared with that for a single omnidirectional microphone and a single directional microphone having a cardioid response pattern. The SRT and SIR results show that the superdirective array processing was the most effective, followed by the cardioid microphone, the array using delay-and-sum beamforming, and the single omnidirectional microphone. The relative processing ratings do not appear to be strongly affected by the size of the room, and the SRT values determined using isolated spondees are similar to the SIR values produced from continuous discourse. PMID:9301060
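
    The two evaluated techniques differ only in their weight rule: delay-and-sum aligns and averages the microphones, while superdirective weights can be viewed as MVDR weights computed against the diffuse-field noise coherence matrix. A sketch for a 5-element, 2.5-cm endfire array follows; the regularization constant and helper names are assumptions, not the paper's processing.

```python
import numpy as np

def endfire_weights(freq, n_mics=5, spacing=0.025, c=343.0, mu=1e-2):
    """Delay-and-sum and superdirective weights for a small endfire line array.
    Superdirective = MVDR against the diffuse-field noise coherence matrix."""
    k = 2 * np.pi * freq / c
    pos = np.arange(n_mics) * spacing
    d = np.exp(-1j * k * pos)                                         # steering vector, endfire look direction
    w_das = d / n_mics                                                # delay-and-sum
    gamma = np.sinc(k * np.abs(pos[:, None] - pos[None, :]) / np.pi)  # sin(kr)/(kr) diffuse coherence
    w_sd = np.linalg.solve(gamma + mu * np.eye(n_mics), d)
    w_sd = w_sd / (d.conj() @ w_sd)                                   # distortionless in the look direction
    return w_das, w_sd

def directivity_index(w, freq, n_mics=5, spacing=0.025, c=343.0):
    """Array gain against diffuse noise, in dB."""
    k = 2 * np.pi * freq / c
    pos = np.arange(n_mics) * spacing
    d = np.exp(-1j * k * pos)
    gamma = np.sinc(k * np.abs(pos[:, None] - pos[None, :]) / np.pi)
    return 10 * np.log10(np.abs(w.conj() @ d) ** 2 / np.real(w.conj() @ gamma @ w))

for f in (500.0, 2000.0):
    w_das, w_sd = endfire_weights(f)
    print(f"{f:6.0f} Hz   delay-and-sum DI = {directivity_index(w_das, f):5.2f} dB"
          f"   superdirective DI = {directivity_index(w_sd, f):5.2f} dB")
```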

  7. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
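
    The constant-modulus criterion that the blind approach builds on can be illustrated with a standard CMA sketch, shown here as a blind equalizer rather than the authors' predictor; the tap count, step size, and toy QPSK channel are assumptions for illustration.

```python
import numpy as np

def cma_filter(received, n_taps=11, mu=1e-3, modulus=1.0):
    """Constant Modulus Algorithm: a blind stochastic-gradient update that
    penalizes deviation of the output envelope from a constant modulus, so no
    reference (desired) signal is required."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    out = np.zeros(len(received), dtype=complex)
    for n in range(n_taps, len(received)):
        x = received[n - n_taps:n][::-1]
        y = np.conj(w) @ x
        err = y * (np.abs(y) ** 2 - modulus)   # CMA(2,2) error term
        w = w - mu * np.conj(err) * x          # gradient step on the blind cost
        out[n] = y
    return out

# Toy usage: unit-modulus QPSK through a short unknown channel, recovered blindly.
rng = np.random.default_rng(10)
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 20000)))
chan = np.array([1.0, 0.4 + 0.3j, 0.2])
rx = np.convolve(sym, chan)[:len(sym)]
rx = rx + 0.02 * (rng.standard_normal(len(sym)) + 1j * rng.standard_normal(len(sym)))
y = cma_filter(rx)
print("mean | |y| - 1 | over the last 2000 outputs:", np.mean(np.abs(np.abs(y[-2000:]) - 1.0)))
```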

  8. The application of systolic arrays to radar signal processing

    NASA Astrophysics Data System (ADS)

    Spearman, R.; Spracklen, C. T.; Miles, J. H.

    The design of a systolic array processor radar system is examined, and its performance is compared to that of a conventional radar processor. It is shown how systolic arrays can be used to replace the boards of high speed logic normally associated with a high performance radar and to implement all of the normal processing functions associated with such a system. Multifunctional systolic arrays are presented that have the flexibility associated with a general purpose digital processor but the speed associated with fixed function logic arrays.

  9. LEO Download Capacity Analysis for a Network of Adaptive Array Ground Stations

    NASA Technical Reports Server (NTRS)

    Ingram, Mary Ann; Barott, William C.; Popovic, Zoya; Rondineau, Sebastien; Langley, John; Romanofsky, Robert; Lee, Richard Q.; Miranda, Felix; Steffes, Paul; Mandl, Dan

    2005-01-01

    To lower costs and reduce latency, a network of adaptive array ground stations, distributed across the United States, is considered for the downlink of a polar-orbiting low earth orbiting (LEO) satellite. Assuming the X-band 105 Mbps transmitter of NASA's Earth Observing 1 (EO-1) satellite with a simple line-of-sight propagation model, the average daily download capacity in bits for a network of adaptive array ground stations is compared to that of a single 11 m dish in Poker Flats, Alaska. Each adaptive array ground station is assumed to have multiple steerable antennas, either mechanically steered dishes or phased arrays that are mechanically steered in azimuth and electronically steered in elevation. Phased array technologies that are being developed for this application are the space-fed lens (SFL) and the reflectarray. Optimization of the different boresight directions of the phased arrays within a ground station is shown to significantly increase capacity; for example, this optimization quadruples the capacity for a ground station with eight SFLs. Several networks comprising only two to three ground stations are shown to meet or exceed the capacity of the big dish. Cutting the data rate by half, which saves modem costs and increases the coverage area of each ground station, is shown to increase the average daily capacity of the network for some configurations.

  10. Contrast Adaptation Implies Two Spatiotemporal Channels but Three Adapting Processes

    ERIC Educational Resources Information Center

    Langley, Keith; Bex, Peter J.

    2007-01-01

    The contrast gain control model of adaptation predicts that the effects of contrast adaptation correlate with contrast sensitivity. This article reports that the effects of high contrast spatiotemporal adaptors are maximum when adapting around 19 Hz, which is a factor of two or more greater than the peak in contrast sensitivity. To explain the…

  11. A simulation study of jammer nulling trade-offs in a reactively steered adaptive array

    NASA Astrophysics Data System (ADS)

    Dinger, R. J.

    1985-02-01

    Antenna arrays that operate at frequencies up to 6 GHz or so on air-launched guided missiles are necessarily compact because of limited space. Research has been in progress since FY82 on a compact adaptive array that uses reactively loaded parasitic elements for pattern control. This report describes a study of this class of array whose purpose is to examine the trade-offs available among the number of elements, element spacing, and the number of nullable jammers. The research is part of a continuing effort to explore novel radio frequency radiating and receiving structures for application to airborne communications and radar systems.

  12. Fabrication of Nanohole Array via Nanodot Array Using Simple Self-Assembly Process of Diblock Copolymer

    NASA Astrophysics Data System (ADS)

    Matsuyama, Tsuyoshi; Kawata, Yoshimasa

    2007-06-01

    We present a simple self-assembly process for fabricating a nanohole array via a nanodot array on a glass substrate by dripping ethanol onto the nanodot array. It is found that well-aligned arrays of nanoholes as well as nanodots are formed on the whole surface of the glass. A dot is transformed into a hole, and the alignment of the nanodots strongly reflects that of the nanoholes. We find that the change in the depth of holes agrees well with the change in the surface energy with the ethanol concentration in the aqueous solution. We believe that the interfacial energy between the nanodots and the dripped ethanol causes the transformation from nanodots into nanoholes. The nanohole arrays are directly applicable to molds for nanopatterned media used in high-density near-field optical data storage. The bit data can be stored and read out using probes with small apertures.

  13. Self-Adaptive System based on Field Programmable Gate Array for Extreme Temperature Electronics

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Zebulum, Ricardo; Rajeshuni, Ramesham; Stoica, Adrian; Katkoori, Srinivas; Graves, Sharon; Novak, Frank; Antill, Charles

    2006-01-01

    In this work, we report the implementation of a self-adaptive system using a field programmable gate array (FPGA) and data converters. The self-adaptive system can autonomously recover the lost functionality of a reconfigurable analog array (RAA) integrated circuit (IC) [3]. Both the RAA IC and the self-adaptive system operate at extreme temperatures (from 120 °C down to -180 °C). The RAA IC consists of reconfigurable analog blocks interconnected by several switches and programmable by bias voltages. It implements filters/amplifiers with bandwidth up to 20 MHz. The self-adaptive system controls the RAA IC and is realized on Commercial-Off-The-Shelf (COTS) parts. It implements a basic compensation algorithm that corrects an RAA IC in less than a few milliseconds. Experimental results for the cold temperature environment (down to -180 °C) demonstrate the feasibility of this approach.

  14. Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T; Gibbons, S J; Ringdal, F; Harris, D B

    2007-02-09

    The principal objective of this two-year study is to develop and test a new advanced, automatic approach to seismic detection/location using array processing. We address a strategy to obtain significantly improved precision in the location of low-magnitude events compared with current fully-automatic approaches, combined with a low false alarm rate. We have developed and evaluated a prototype automatic system which uses as a basis regional array processing with fixed, carefully calibrated, site-specific parameters in conjunction with improved automatic phase onset time estimation. We have in parallel developed tools for Matched Field Processing for optimized detection and source-region identification of seismic signals. This narrow-band procedure aims to mitigate some of the causes of difficulty encountered using the standard array processing system, specifically complicated source-time histories of seismic events and shortcomings in the plane-wave approximation for seismic phase arrivals at regional arrays.

  15. Adaptive silver films toward bio-array applications

    NASA Astrophysics Data System (ADS)

    Drachev, Vladimir P.; Narasimhan, Meena L.; Yuan, Hsiao-Kuan; Thoreson, Mark D.; Xie, Yong; Davisson, V. J.; Shalaev, Vladimir M.

    2005-03-01

    Adaptive silver films (ASFs) have been studied as a substrate for protein microarrays. Vacuum-evaporated silver films fabricated within a certain range of evaporation parameters allow fine rearrangement of the silver nanostructure under protein deposition in buffer solution. Proteins restructure and stabilize the ASF to increase the surface-enhanced Raman scattering (SERS) signal from a monolayer of molecules. Preliminary evidence indicates that the adaptive property of the substrates makes them appropriate for protein microarray assays. Head-to-head comparisons with two commercial substrates have been performed. Protein binding was quantified on the microarray using the streptavidin-Cy3/biotinylated goat IgG protein pair. With fluorescence detection, the performance of ASF substrates was comparable with SuperAldehyde and SuperEpoxy substrates. The ASF is also a SERS substrate, which provides an additional tool for analysis. It is found that the SERS spectra of the streptavidin-Cy5 fluorescence reporter bound to true sites and to false sites show a distinct difference.

  16. Coping and adaptation process during puerperium

    PubMed Central

    Muñoz de Rodríguez, Lucy; Ruiz de Cárdenas, Carmen Helena

    2012-01-01

    Introduction: The puerperium is a stage that produces changes and adaptations in women, couples, and families. Effective coping during this stage depends on the relationship between the demands of stressful or difficult situations and the resources that the puerperal woman has. Roy (2004), in her Middle Range Theory of the Coping and Adaptation Processing, defines coping as the ''behavioral and cognitive efforts that a person makes to meet the environment demands''. For the puerperal woman, correct coping is necessary to maintain her physical and mental well-being, especially in situations that can be stressful, such as breastfeeding and the return to work. According to Lazarus and Folkman (1986), one resource for coping is to have someone from whom to receive emotional, informative and/or tangible support. Objective: To review the issue of women's coping and adaptation during the puerperium stage and the strategies that enhance this adaptation. Methods: search and selection of database articles: Cochrane, Medline, Ovid, ProQuest, Scielo, and Blackwell Synergy. Other sources: unpublished documents by Roy, published books on Roy's Model, and websites of international health organizations. Results: the need to recognize the puerperium as a stage that requires comprehensive care is evident; nurses must be protagonists in the care offered to women and their families, considering the specific demands of this situation and the resources that promote effective coping, including the family, education, and health services. PMID:24893059

  17. Ghost artifact cancellation using phased array processing.

    PubMed

    Kellman, P; McVeigh, E R

    2001-08-01

    In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space-variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom and cardiac imaging examples. PMID:11477638
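
    As a rough illustration of the constrained combining described above (not the authors' implementation), the sketch below computes coil-combining weights that keep unit gain on the desired pixel while nulling the response at assumed ghost locations and minimizing noise for a given coil noise covariance; all names and shapes are illustrative assumptions.

    ```python
    import numpy as np

    def ghost_nulling_weights(S, Psi):
        """Constrained coil-combining weights.

        S   : (n_coils, n_sources) coil sensitivities at the desired pixel
              (column 0) and at each ghost location (columns 1..).
        Psi : (n_coils, n_coils) noise covariance between coils.

        Returns weights w (n_coils,) giving unit gain on the desired pixel,
        zero response at the ghost locations, and minimum noise otherwise.
        """
        Psi_inv = np.linalg.inv(Psi)
        # SENSE-style unmixing matrix: (S^H Psi^-1 S)^-1 S^H Psi^-1
        U = np.linalg.solve(S.conj().T @ Psi_inv @ S, S.conj().T @ Psi_inv)
        return U[0, :]  # row that selects the desired (non-ghost) source

    # toy example: 4 coils, one desired pixel plus one ghost location
    rng = np.random.default_rng(0)
    S = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
    w = ghost_nulling_weights(S, np.eye(4))
    print(np.abs(w @ S))  # approximately [1, 0]: signal kept, ghost nulled
    ```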

  18. Array Processing for Radar Clutter Reduction and Imaging of Ice-Bed Interface

    NASA Astrophysics Data System (ADS)

    Gogineni, P.; Leuschen, C.; Li, J.; Hoch, A.; Rodriguez-Morales, F.; Ledford, J.; Jezek, K.

    2007-12-01

    A major challenge in sounding of fast-flowing glaciers in Greenland and Antarctica is surface clutter, which masks weak returns from the ice-bed interface. The surface clutter is also a major problem in sounding and imaging sub-surface interfaces on Mars and other planets. We successfully applied array-processing techniques to reduce clutter and image ice-bed interfaces of polar ice sheets. These techniques and tools have potential applications to planetary observations. We developed a radar with array-processing capability to measure the thickness of fast-flowing outlet glaciers and image the ice-bed interface. The radar operates over the frequency range from 140 to 160 MHz with about an 800-watt peak transmit power and transmit and receive antenna arrays. The radar is designed such that pulse width and duration are programmable. The transmit-antenna array is fed with a beamshaping network to obtain low sidelobes. We designed the receiver such that it can process and digitize signals for each element of an eight-channel array. We collected data over several fast-flowing glaciers using a five-element antenna array, limited by available hardpoints to mount antennas, on a Twin Otter aircraft during the 2006 field season and a four-element array on a NASA P-3 aircraft during the 2007 field season. We used both adaptive and non-adaptive signal-processing algorithms to reduce clutter. We collected data over the Jacobshavn Isbrae and other fast-flowing outlet glaciers, and successfully measured the ice thickness and imaged the ice-bed interface. In this paper, we will provide a brief description of the radar, discuss clutter-reduction algorithms, present sample results, and discuss the application of these techniques to planetary observations.

  19. Multiple wall-reflection effect in adaptive-array differential-phase reflectometry on QUEST

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Fujisawa, A.; Nagashima, Y.; Hamasaki, M.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; QUEST Team

    2016-01-01

    A phased array antenna and Software-Defined Radio (SDR) heterodyne-detection systems have been developed for adaptive array approaches in reflectometry on the QUEST. In the QUEST device, which can be considered a large oversized cavity, a standing-wave (multiple wall-reflection) effect was clearly observed, with distorted amplitude and phase evolution even when the adaptive array analyses were applied. The distorted fields were analyzed by Fast Fourier Transform (FFT) in the wavenumber domain to treat separately the components with and without wall reflections. The differential phase evolution was properly obtained from the distorted field evolution by the FFT procedures. A frequency derivative method has been proposed to overcome the multiple wall-reflection effect, and the SDR super-heterodyned components with the small frequency difference required for the derivative method were correctly obtained using the FFT analysis.

  20. Expanding Coherent Array Processing to Larger Apertures Using Empirical Matched Field Processing

    SciTech Connect

    Ringdal, F; Harris, D B; Kvaerna, T; Gibbons, S J

    2009-07-23

    We have adapted matched field processing, a method developed in underwater acoustics to detect and locate targets, to classify transient seismic signals arising from mining explosions. Matched field processing, as we apply it, is an empirical technique, using observations of historic events to calibrate the amplitude and phase structure of wavefields incident upon an array aperture for particular repeating sources. The objective of this project is to determine how broadly applicable the method is and to understand the phenomena that control its performance. We obtained our original results in distinguishing events from ten mines in the Khibiny and Olenegorsk mining districts of the Kola Peninsula, for which we had exceptional ground truth information. In a cross-validation test, some 98.2% of 549 explosions were correctly classified by originating mine using just the Pn observations (2.5-12.5 Hz) on the ARCES array at ranges from 350-410 kilometers. These results were achieved despite the fact that the mines are as closely spaced as 3 kilometers. Such classification performance is significantly better than predicted by the Rayleigh limit. Scattering phenomena account for the increased resolution, as we make clear in an analysis of the information carrying capacity of Pn under two alternative propagation scenarios: free-space propagation and propagation with realistic (actually measured) spatial covariance structure. The increase in information capacity over a wide band is captured by the matched field calibrations and used to separate explosions from very closely-spaced sources. In part, the improvement occurs because the calibrations enable coherent processing at frequencies above those normally considered coherent. We are investigating whether similar results can be expected in different regions, with apertures of increasing scale and for diffuse seismicity. We verified similar performance with the closely-spaced Zapolyarni mines, though discovered that it may be
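
    The following sketch shows one simple way the empirical matched field idea can be expressed: each source region is calibrated by averaging normalized array spectra from historic events, and a new event is scored against each calibration with a frequency-averaged Bartlett match. The data layout and function names are assumptions for illustration, not the project's code.

    ```python
    import numpy as np

    def calibrate(training_ffts):
        """Average normalized array spectra for one source region.
        training_ffts: (n_events, n_freqs, n_chans) complex spectra."""
        v = training_ffts / np.linalg.norm(training_ffts, axis=2, keepdims=True)
        w = v.mean(axis=0)
        return w / np.linalg.norm(w, axis=1, keepdims=True)  # (n_freqs, n_chans)

    def bartlett_score(event_fft, calib):
        """Frequency-averaged matched field (Bartlett) power for one calibration."""
        d = event_fft / np.linalg.norm(event_fft, axis=1, keepdims=True)
        return np.mean(np.abs(np.sum(np.conj(calib) * d, axis=1)) ** 2)

    def classify(event_fft, calibrations):
        """Pick the source region whose calibration best matches the event."""
        scores = {name: bartlett_score(event_fft, c) for name, c in calibrations.items()}
        return max(scores, key=scores.get), scores
    ```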

  1. Iterative Robust Capon Beamforming with Adaptively Updated Array Steering Vector Mismatch Levels

    PubMed Central

    Sun, Liguo

    2014-01-01

    The performance of the conventional adaptive beamformer is sensitive to array steering vector (ASV) mismatch, and the output signal-to-interference-plus-noise ratio (SINR) suffers deterioration, especially in the presence of large direction of arrival (DOA) error. To improve the robustness of the traditional approach, we propose a new approach that iteratively searches for the ASV of the desired signal based on the robust Capon beamformer (RCB) with adaptively updated uncertainty levels, which are derived in the form of a quadratically constrained quadratic programming (QCQP) problem based on subspace projection theory. The estimated levels in this iterative beamformer show a decreasing trend. Additionally, other array imperfections also degrade the performance of the beamformer in practice. To cover several kinds of mismatch together, adaptive flat ellipsoid models are introduced in our method and made as tight as possible. In the simulations, our beamformer is compared with other methods and its excellent performance is demonstrated via numerical examples. PMID:27355008
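
    The iterative RCB described above updates its uncertainty levels by solving a QCQP; as a much simpler point of reference, the sketch below shows a standard diagonally loaded Capon (MVDR) beamformer, which is the kind of baseline robustification such methods refine. The loading factor is an assumed illustrative parameter.

    ```python
    import numpy as np

    def loaded_capon_weights(R, a_nominal, loading=0.1):
        """Diagonally loaded Capon weights: a simple guard against steering
        vector mismatch (not the paper's adaptively updated RCB)."""
        n = R.shape[0]
        Rl = R + loading * np.trace(R).real / n * np.eye(n)
        w = np.linalg.solve(Rl, a_nominal)
        return w / (a_nominal.conj() @ w)  # distortionless toward a_nominal
    ```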

  2. Real time speech recognition on a distributed digital processing array

    NASA Astrophysics Data System (ADS)

    Simpson, P.; Roberts, J. B. G.

    1983-08-01

    A compact digital signal processor based on the architecture of the ICL Distributed Array Processor (DAP) is under development for MOD applications in Radar, ESM, Image Processing, etc. This Memorandum examines its applicability to speech recognition. In such a distributed processor, optimum mapping of the problem onto the array of processors is vital for efficiency. Three mappings of a dynamic time warping algorithm for isolated word recognition are examined, leading to a feasible real-time capability for continuous speech processing. The compatibility found between dynamic programming methods and this class of machine enlarges the scope of signal processing algorithms foreseen as amenable to parallel processing.
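
    For reference, a plain serial version of the dynamic time warping comparison that the Memorandum maps onto the DAP is sketched below; the three parallel mappings studied in the report distribute this computation across the processor array and are not shown. Names are illustrative.

    ```python
    import numpy as np

    def dtw_distance(ref, test):
        """Dynamic time warping distance between two feature sequences
        (frames x features), as used for isolated-word template matching."""
        n, m = len(ref), len(test)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(ref[i - 1] - test[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m] / (n + m)

    def recognize(test, templates):
        """Choose the vocabulary word whose template warps most cheaply."""
        return min(templates, key=lambda word: dtw_distance(templates[word], test))
    ```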

  3. Analysis of Modified SMI Method for Adaptive Array Weight Control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald Louis

    1989-01-01

    An adaptive array is used to receive a desired signal in the presence of weak interference signals which need to be suppressed. A modified sample matrix inversion (SMI) algorithm controls the array weights. The modification leads to increased interference suppression by subtracting a fraction of the noise power from the diagonal elements of the covariance matrix. The modified algorithm maximizes an intuitive power ratio criterion. The expected values and variances of the array weights, output powers, and power ratios as functions of the fraction and the number of snapshots are found and compared to computer simulation and real experimental array performance. Reduced-rank covariance approximations and errors in the estimated covariance are also described.
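
    A minimal sketch of the modified SMI weight computation described above, assuming the noise power is known or estimated separately; the normalization and variable names are illustrative rather than taken from the thesis.

    ```python
    import numpy as np

    def modified_smi_weights(snapshots, steering, noise_power, fraction=0.9):
        """Modified SMI: form the sample covariance from K snapshots, subtract
        a fraction of the noise power from its diagonal, then solve for the
        weights. Keeping fraction < 1 leaves the matrix positive definite."""
        X = np.asarray(snapshots)                  # (K, n_elements)
        R = (X.conj().T @ X) / X.shape[0]          # sample covariance estimate
        R_mod = R - fraction * noise_power * np.eye(R.shape[0])
        w = np.linalg.solve(R_mod, steering)
        return w / (steering.conj() @ w)           # unit gain toward the signal
    ```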

  4. Sonar array processing borrows from geophysics

    SciTech Connect

    Chen, K.

    1989-09-01

    The author reports a recent advance in sonar signal processing that has potential military application. It improves signal extraction by modifying a technique devised by a geophysicist. Sonar signal processing is used to track submarine and surface targets, such as aircraft carriers, oil tankers, and, in commercial applications, schools of fish or sunken treasure. Similar signal-processing techniques help radio astronomers track galaxies, physicians see images of the body interior, and geophysicists map the ocean floor or find oil. This hybrid technique, applied in an experimental system, can help resolve strong signals as well as weak ones in the same step.

  5. Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.

    2006-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of the carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.

  6. Application of adaptive optics to scintillation correction in phased array high-frequency radar

    NASA Astrophysics Data System (ADS)

    Theurer, Timothy E.; Bristow, William A.

    2015-06-01

    At high frequency, diffraction during ionospheric propagation can yield wavefronts whose amplitude and phase fluctuate over the physical dimensions of phased array radars such as those of the Super Dual Auroral Radar Network (SuperDARN). Distortion in the wavefront introduces amplitude and phase scintillation into the geometric beamformed signal while reducing radar performance in terms of angular resolution and achieved array gain. A scintillation correction algorithm based on adaptive optics techniques is presented. An experiment conducted using two SuperDARN radars is presented that quantifies the effect of wavefront distortion and demonstrates a reduction in observed scintillation and improvement in radar performance post scintillation correction.

  7. Experimental investigation of the ribbon-array ablation process

    SciTech Connect

    Li Zhenghong; Xu Rongkun; Chu Yanyun; Yang Jianlun; Xu Zeping; Ye Fan; Chen Faxin; Xue Feibiao; Ning Jiamin; Qin Yi; Meng Shijian; Hu Qingyuan; Si Fenni; Feng Jinghua; Zhang Faqiang; Chen Jinchuan; Li Linbo; Chen Dingyang; Ding Ning; Zhou Xiuwen

    2013-03-15

    Ablation processes of ribbon-array loads, as well as wire-array loads for comparison, were investigated on Qiangguang-1 accelerator. The ultraviolet framing images indicate that the ribbon-array loads have stable passages of currents, which produce axially uniform ablated plasma. The end-on x-ray framing camera observed the azimuthally modulated distribution of the early ablated ribbon-array plasma and the shrink process of the x-ray radiation region. Magnetic probes measured the total and precursor currents of ribbon-array and wire-array loads, and there exists no evident difference between the precursor currents of the two types of loads. The proportion of the precursor current to the total current is 15% to 20%, and the start time of the precursor current is about 25 ns later than that of the total current. The melting time of the load material is about 16 ns, when the inward drift velocity of the ablated plasma is taken to be 1.5 × 10^7 cm/s.

  8. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  9. Removing Background Noise with Phased Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross-spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
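
    The test used Functional Beamforming together with cross-spectral matrix subtraction; the sketch below illustrates only the simpler half of that chain, conventional beamforming applied after subtracting a background cross-spectral matrix measured without the drivers. All names, shapes, and the single-frequency framing are assumptions for illustration.

    ```python
    import numpy as np

    def csm(snapshots):
        """Cross-spectral matrix from FFT snapshots (K, n_mics) at one frequency."""
        X = np.asarray(snapshots)
        return (X.conj().T @ X) / X.shape[0]

    def beam_map_with_subtraction(C_total, C_background, steering_vectors):
        """Conventional beamform power after background CSM subtraction.
        steering_vectors: (n_grid_points, n_mics) array responses."""
        C = C_total - C_background
        V = steering_vectors
        return np.real(np.einsum('pm,mn,pn->p', V.conj(), C, V))
    ```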

  10. Adaptation of photosynthetic processes to stress.

    PubMed

    Berry, J A

    1975-05-01

    I have focused on examples of plant adaptations to environmental conditions that range from adjustments in the allocation of metabolic resources and modification of structural components to entirely separate mechanisms. The result of these modifications is more efficient performance under the stresses typically encountered in the plants' native habitats. Such adaptations, for reasons which are not entirely clear, often lead to poorer performance in other environmental conditions. This situation may be a fundamental basis for the tendency toward specialization among plants native to specific niches or habitats. The evolutionary mechanisms that have resulted in these specializations are very large-scale processes. It seems reasonable to suppose that the plants native to particular habitats are relatively efficient in terms of the limitations imposed by those habitats, and that the adaptive mechanisms these plants possess are, compared to those which have evolved in competing organisms, the most successful biological means of coping with the environmental stresses encountered. I believe that we can learn from nature and utilize the adaptive mechanisms of these plants in agriculture to replace in part our present reliance on resources and energy to modify the environment for plant growth. By analogy with natural systems, improved resource utilization will require specialization and greater knowledge of the limitations of a particular environment and plant genotype. For example, the cultural conditions, plant architecture, and physiological responses necessary to achieve high water use efficiency from our crop species with C(4) photosynthesis probably differ from those required to achieve maximum total growth. Also, efforts to control water application to eliminate waste carry with them the risk that the crop could be injured by inadequate water. Thus, greater demands would be placed on the crop physiologist, the plant breeder, and the farmer. Planting and appropriate

  11. Dimpled ball grid array process development for space flight applications

    NASA Technical Reports Server (NTRS)

    Barr, S. L.; Mehta, A.

    2000-01-01

    A 472 dimpled ball grid array (D-BGA) package had not been used in past space flight environments, so it was necessary to develop a process that would yield robust and reliable solder joints. The process, including the assembly, inspection, and rework techniques developed, was verified by conducting environmental tests. Since the 472 D-BGA packages passed the above environmental tests within the specifications, the process was successfully developed for space flight electronics.

  12. Parallel Processing of Large Scale Microphone Arrays for Sound Capture

    NASA Astrophysics Data System (ADS)

    Jan, Ea-Ee.

    1995-01-01

    Performance of microphone sound pick up is degraded by deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise. The degradation becomes more prominent in a teleconferencing environment in which the microphone is positioned far away from the speaker. Besides, the ideal teleconference should feel as easy and natural as face-to-face communication with another person. This suggests hands-free sound capture with no tether or encumbrance by hand-held or body-worn sound equipment. Microphone arrays for this application represent an appropriate approach. This research develops new microphone array and signal processing techniques for high quality hands-free sound capture in noisy, reverberant enclosures. The new techniques combine matched-filtering of individual sensors and parallel processing to provide acute spatial volume selectivity which is capable of mitigating the deleterious effects of noise interference and multipath distortion. The new method outperforms traditional delay-and-sum beamformers which provide only directional spatial selectivity. The research additionally explores truncated matched-filtering and random distribution of transducers to reduce complexity and improve sound capture quality. All designs are first established by computer simulation of array performance in reverberant enclosures. The simulation is achieved by a room model which can efficiently calculate the acoustic multipath in a rectangular enclosure up to a prescribed order of images. It also calculates the incident angle of the arriving signal. Experimental arrays were constructed and their performance was measured in real rooms. Real room data were collected in a hard-walled laboratory and a controllable variable acoustics enclosure of similar size, approximately 6 x 6 x 3 m. An extensive speech database was also collected in these two enclosures for future research on microphone arrays. The simulation results are shown to be
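
    As a point of reference for the matched-filter processing described above, the sketch below is a plain delay-and-sum focuser, the baseline the thesis improves upon; the matched-filter approach replaces the single delay per channel with a filter derived from the measured room impulse response. The names and the sound-speed default are assumptions.

    ```python
    import numpy as np

    def delay_and_sum(signals, mic_positions, focus_point, fs, c=343.0):
        """Align each microphone signal to the focus point and average.
        signals: (n_mics, n_samples); mic_positions: (n_mics, 3); fs in Hz."""
        signals = np.asarray(signals)
        dists = np.linalg.norm(mic_positions - focus_point, axis=1)
        delays = (dists - dists.min()) / c          # relative delays, seconds
        out = np.zeros(signals.shape[1])
        for x, d in zip(signals, delays):
            shift = int(round(d * fs))
            out[:signals.shape[1] - shift] += x[shift:]   # advance later arrivals
        return out / len(signals)
    ```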

  13. The Urban Adaptation and Adaptation Process of Urban Migrant Children: A Qualitative Study

    ERIC Educational Resources Information Center

    Liu, Yang; Fang, Xiaoyi; Cai, Rong; Wu, Yang; Zhang, Yaofang

    2009-01-01

    This article employs qualitative research methods to explore the urban adaptation and adaptation processes of Chinese migrant children. Through twenty-one in-depth interviews with migrant children, the researchers discovered: The participant migrant children showed a fairly high level of adaptation to the city; their process of urban adaptation…

  14. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  15. Frequency-wavenumber processing for infrasound distributed arrays.

    PubMed

    Costley, R Daniel; Frazier, W Garth; Dillion, Kevin; Picucci, Jennifer R; Williams, Jay E; McKenna, Mihan H

    2013-10-01

    The work described herein discusses the application of a frequency-wavenumber signal processing technique to signals from rectangular infrasound arrays for detection and estimation of the direction of travel of infrasound. Arrays of 100 sensors were arranged in square configurations with sensor spacing of 2 m. Wind noise data were collected at one site. Synthetic infrasound signals were superposed on top of the wind noise to determine the accuracy and sensitivity of the technique with respect to signal-to-noise ratio. The technique was then applied to an impulsive event recorded at a different site. Preliminary results demonstrated the feasibility of this approach. PMID:24116535
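
    A minimal sketch of frequency-wavenumber processing for a uniformly spaced line of sensors, using a 2-D FFT over space and time; the rectangular 100-element arrays in the paper would use the same idea with a 3-D FFT over both spatial dimensions and time. All names are illustrative.

    ```python
    import numpy as np

    def fk_spectrum(data, dx, dt):
        """Frequency-wavenumber power for a line array.
        data: (n_sensors, n_samples) with sensor spacing dx and sample step dt."""
        P = np.fft.fftshift(np.abs(np.fft.fft2(data)) ** 2)
        k = np.fft.fftshift(np.fft.fftfreq(data.shape[0], d=dx))  # cycles per metre
        f = np.fft.fftshift(np.fft.fftfreq(data.shape[1], d=dt))  # Hz
        return P, f, k

    # A peak at (f0, k0) corresponds to a trace velocity of f0 / k0 across the
    # array; with a 2-D array the azimuth follows from the (kx, ky) peak location.
    ```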

  16. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea

  17. Design and programming of systolic array cells for signal processing

    SciTech Connect

    Smith, R.A.W.

    1989-01-01

    This thesis presents a new methodology for the design, simulation, and programming of systolic arrays in which the algorithms and architecture are simultaneously optimized. The algorithms determine the initial architecture, and simulation is used to optimize the architecture. The simulator provides a register-transfer level model of a complete systolic array computation. To establish the validity of this design methodology two novel programmable systolic array cells were designed and programmed. The cells were targeted for applications in high-speed signal processing and associated matrix computations. A two-chip programmable systolic array cell using a 16-bit multiplier-accumulator chip and a semi-custom VLSI controller chip was designed and fabricated. A low chip count allows large arrays to be constructed, but the cell is flexible enough to be a building-block for either one- or two-dimensional systolic arrays. Another more flexible and powerful cell using a 32-bit floating-point processor and a second VLSI controller chip was also designed. It contains several architectural features that are unique in a systolic array cell: (1) each instruction is 32 bits, yet all resources can be updated every cycle, (2) two on-chip interchangeable memories are used, and (3) one input port can be used as either a global or local port. The key issues involved in programming the cells are analyzed in detail. A set of modules is developed which can be used to construct large programs in an effective manner. The utility of this programming approach is demonstrated with several important examples.

  18. Large-Array Signal Processing for Deep-Space Applications

    NASA Astrophysics Data System (ADS)

    Lee, C. H.; Vilnrotter, V.; Satorius, E.; Ye, Z.; Fort, D.; Cheung, K.-M.

    2002-04-01

    This article develops the mathematical models needed to describe the key issues in using an array of antennas for receiving spacecraft signals for DSN applications. The detrimental effects on signal-to-noise ratio (SNR) of nearby interfering sources, such as other spacecraft transmissions or natural radio sources within the array's field of view, are determined; atmospheric effects relevant to the arraying problem are developed; and two classes of algorithms (multiple signal classification (MUSIC) plus beamforming, and an eigen-based solution) capable of phasing up the array with maximized SNR in the presence of realistic disturbances are evaluated. It is shown that, when convolutionally encoded binary phase-shift keying (BPSK) data modulation is employed on the spacecraft signal, previously developed data pre-processing techniques that partially reconstruct the carrier can be of great benefit to array performance, particularly when strong interfering sources are present. Since this article is concerned mainly with demonstrating the required capabilities for operation under realistic conditions, no attempt has been made to reduce algorithm complexity; the design and evaluation of less complex algorithms with similar capabilities will be addressed in a future article. The performances of the candidate algorithms discussed in this article have been evaluated in terms of the number of symbols needed to achieve a given level of combining loss for different numbers of array elements, and compared on this common basis. It is shown that even the best algorithm requires approximately 25,000 symbols to achieve a combining loss of less than 0.5 dB when 128 antenna elements are employed, but generally 50,000 or more symbols are needed. This is not a serious impediment to successful arraying with high data-rate transmission, but may be of some concern with missions exploring near the edge of our solar system or beyond, where lower data rates may be required.
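
    Since MUSIC-plus-beamforming is one of the two classes of algorithms evaluated, a generic MUSIC pseudospectrum sketch is given below for orientation; it is not the article's implementation, and the covariance and candidate steering vectors are assumed inputs.

    ```python
    import numpy as np

    def music_spectrum(R, steering_vectors, n_sources):
        """MUSIC pseudospectrum; peaks indicate candidate source directions.
        R: (m, m) sample covariance; steering_vectors: (n_angles, m)."""
        m = R.shape[0]
        _, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
        En = eigvecs[:, :m - n_sources]          # noise-subspace eigenvectors
        V = steering_vectors
        denom = np.sum(np.abs(V @ np.conj(En)) ** 2, axis=1)  # ||En^H a(theta)||^2
        return np.sum(np.abs(V) ** 2, axis=1) / denom
    ```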

  19. Application of Seismic Array Processing to Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Meng, L.; Allen, R. M.; Ampuero, J. P.

    2013-12-01

    Earthquake early warning (EEW) systems that can issue warnings prior to the arrival of strong ground shaking during an earthquake are essential in mitigating seismic hazard. Many of the currently operating EEW systems work on the basis of empirical magnitude-amplitude/frequency scaling relations for a point source. This approach is of limited effectiveness for large events, such as the 2011 Tohoku-Oki earthquake, for which ignoring finite source effects may result in underestimation of the magnitude. Here, we explore the concept of characterizing rupture dimensions in real time for EEW using clusters of dense low-cost accelerometers located near active faults. Back tracing the waveforms recorded by such arrays allows the estimation of the earthquake rupture size, duration and directivity in real-time, which enables the EEW of M > 7 earthquakes. The concept is demonstrated with the 2004 Parkfield earthquake, one of the few big events (M>6) that have been recorded by a local small-scale seismic array (UPSAR array, Fletcher et al, 2006). We first test the approach against synthetic rupture scenarios constructed by superposition of empirical Green's functions. We find it important to correct for the bias in back azimuth induced by dipping structures beneath the array. We implemented the proposed methodology to the mainshock in a simulated real-time environment. After calibrating the dipping-layer effect with data from smaller events, we obtained an estimated rupture length of 9 km, consistent with the distance between the two main high frequency subevents identified by back-projection using all local stations (Allman and Shearer, 2007). We proposed to deploy small-scale arrays every 30 km along the San Andreas Fault. The array processing is performed in local processing centers at each array. The output is compared with finite fault solutions based on real-time GPS system and then incorporated into the standard ElarmS system. The optimal aperture and array geometry is

  20. Analysis and design of a high power laser adaptive phased array transmitter

    NASA Technical Reports Server (NTRS)

    Mevers, G. E.; Soohoo, J. F.; Winocur, J.; Massie, N. A.; Southwell, W. H.; Brandewie, R. A.; Hayes, C. L.

    1977-01-01

    The feasibility of delivering substantial quantities of optical power to a satellite in low earth orbit from a ground based high energy laser (HEL) coupled to an adaptive antenna was investigated. Diffraction effects, atmospheric transmission efficiency, adaptive compensation for atmospheric turbulence effects, including the servo bandwidth requirements for this correction, and the adaptive compensation for thermal blooming were examined. To evaluate possible HEL sources, atmospheric investigations were performed for the CO2, (C-12)(O-18)2 isotope, CO and DF wavelengths using output antenna locations of both sea level and mountain top. Results indicate that both excellent atmospheric and adaption efficiency can be obtained for mountain top operation with a (C-12)(O-18)2 isotope laser operating at 9.1 um, or a CO laser operating single line (P10) at about 5.0 um, which was a close second in the evaluation. Four adaptive power transmitter system concepts were generated and evaluated, based on overall system efficiency, reliability, size and weight, advanced technology requirements and potential cost. A multiple source phased array was selected for detailed conceptual design. The system uses a unique adaption technique of phase locking independent laser oscillators which allows it to be both relatively inexpensive and most reliable with a predicted overall power transfer efficiency of 53%.

  1. Processing difficulties and instability of carbohydrate microneedle arrays

    PubMed Central

    Donnelly, Ryan F.; Morrow, Desmond I.J.; Singh, Thakur R.R.; Migalska, Katarzyna; McCarron, Paul A.; O’Mahony, Conor; Woolfson, A. David

    2010-01-01

    Background A number of reports have suggested that many of the problems currently associated with the use of microneedle (MN) arrays for transdermal drug delivery could be addressed by using drug-loaded MN arrays prepared by moulding hot melts of carbohydrate materials. Methods In this study, we explored the processing, handling, and storage of MN arrays prepared from galactose with a view to clinical application. Results Galactose required a high processing temperature (160°C), and molten galactose was difficult to work with. Substantial losses of the model drugs 5-aminolevulinic acid (ALA) and bovine serum albumin were incurred during processing. While relatively small forces caused significant reductions in MN height when applied to an aluminium block, this was not observed during their relatively facile insertion into heat-stripped epidermis. Drug release experiments using ALA-loaded MN arrays revealed that less than 0.05% of the total drug loading was released across a model silicone membrane. Similarly, only low amounts of ALA (approximately 0.13%) and undetectable amounts of bovine serum albumin were delivered when galactose arrays were combined with aqueous vehicles. Microscopic inspection of the membrane following release studies revealed that no holes could be observed in the membrane, indicating that the partially dissolved galactose sealed the MN-induced holes, thus limiting drug delivery. Indeed, depth penetration studies into excised porcine skin revealed that there was no significant increase in ALA delivery using galactose MN arrays, compared to control (P value < 0.05). Galactose MNs were unstable at ambient relative humidities and became adhesive. Conclusion The processing difficulties and instability encountered in this study are likely to preclude successful clinical application of carbohydrate MNs. The findings of this study are of particular importance to those in the pharmaceutical industry involved in the design and formulation of

  2. Flood adaptive traits and processes: an overview.

    PubMed

    Voesenek, Laurentius A C J; Bailey-Serres, Julia

    2015-04-01

    Unanticipated flooding challenges plant growth and fitness in natural and agricultural ecosystems. Here we describe mechanisms of developmental plasticity and metabolic modulation that underpin adaptive traits and acclimation responses to waterlogging of root systems and submergence of aerial tissues. This includes insights into processes that enhance ventilation of submerged organs. At the intersection between metabolism and growth, submergence survival strategies have evolved involving an ethylene-driven and gibberellin-enhanced module that regulates growth of submerged organs. Opposing regulation of this pathway is facilitated by a subgroup of ethylene-response transcription factors (ERFs), which include members that require low O₂ or low nitric oxide (NO) conditions for their stabilization. These transcription factors control genes encoding enzymes required for anaerobic metabolism as well as proteins that fine-tune their function in transcription and turnover. Other mechanisms that control metabolism and growth at seed, seedling and mature stages under flooding conditions are reviewed, as well as findings demonstrating that true endurance of submergence includes an ability to restore growth following the deluge. Finally, we highlight molecular insights obtained from natural variation of domesticated and wild species that occupy different hydrological niches, emphasizing the value of understanding natural flooding survival strategies in efforts to stabilize crop yields in flood-prone environments. PMID:25580769

  3. Adaptive non-uniformity correction method based on temperature for infrared detector array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijie; Yue, Song; Hong, Pu; Jia, Guowei; Lei, Bo

    2013-09-01

    The existence of non-uniformities in the responsivity of the element array is a severe problem typical of common infrared detectors. These non-uniformities result in a "curtain"-like fixed pattern noise (FPN) that appears in the image. Some random noise can be suppressed by equalization-type methods, but the fixed pattern noise can only be removed by a non-uniformity correction method. The non-uniformities of the detector array arise from the combined action of the infrared detector array, the readout circuit, semiconductor device performance, the amplifier circuit, and the optical system. Conventional linear correction techniques require costly recalibration due to drift of the detector or changes in temperature. Therefore, an adaptive non-uniformity correction method is needed to solve this problem. Many factors, including detector characteristics and varying environmental conditions, are considered in analyzing the cause of detector drift, and several experiments are designed to verify this analysis. Based on these experiments, an adaptive non-uniformity correction method is put forward in this paper. The strength of this method lies in its simplicity and low computational complexity. Extensive experimental results demonstrate that the drawbacks of the traditional non-uniformity correction method are overcome by the proposed scheme.
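
    For context, the sketch below is the classical two-point non-uniformity correction whose repeated recalibration the proposed temperature-adaptive method aims to avoid; the calibration frames and variable names are illustrative.

    ```python
    import numpy as np

    def two_point_nuc(raw, low_frame, high_frame, low_level, high_level):
        """Per-pixel gain/offset correction from two blackbody calibration frames.
        The adaptive method in the paper instead updates gain and offset with
        detector temperature so these frames need not be re-acquired."""
        gain = (high_level - low_level) / (high_frame - low_frame)
        offset = low_level - gain * low_frame
        return gain * raw + offset
    ```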

  4. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable. PMID:26737215

  5. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

    One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First principles are almost always incomplete and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network [2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC, using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns [3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed, as well as its possible use in several of the difficult control problems that face this community.
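
    The speed of the training comes from fitting only the output weights of the radial basis function network, which reduces to linear least squares. The sketch below shows that generic linear RBF fit (not CNLSnet itself); the Gaussian basis, fixed centers, and ridge term are assumptions for illustration.

    ```python
    import numpy as np

    def rbf_design_matrix(X, centers, width):
        """Gaussian radial basis functions evaluated at the inputs X."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def fit_rbf_weights(X, y, centers, width, ridge=1e-6):
        """Linear least-squares fit of the output weights (ridge-stabilized),
        which is what gives guaranteed convergence and fast training."""
        Phi = rbf_design_matrix(X, centers, width)
        A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
        return np.linalg.solve(A, Phi.T @ y)

    def rbf_predict(X, centers, width, weights):
        return rbf_design_matrix(X, centers, width) @ weights
    ```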

  6. Signal Processing for a Lunar Array: Minimizing Power Consumption

    NASA Technical Reports Server (NTRS)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study: (1) a Lunar Radio Array for low-frequency, high-redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high-precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the Intergalactic Medium and the evolution of large-scale structures; and (4) determining whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array is (1) a radio interferometer based on the far side of the moon, which is (1a) necessary for precision measurements, (1b) shielded from earth-based and solar RFI, and (1c) free of a permanent ionosphere; (2) it requires a minimum collecting area of approximately 1 square km and a brightness sensitivity of 10 mK; and (3) several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  7. Performance of redundant disk array organizations in transaction processing environments

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    A performance evaluation is conducted for two redundant disk-array organizations in a transaction-processing environment, relative to the performance of both mirrored disk organizations and organizations using neither striping nor redundancy. The proposed parity-striping alternative to striping with rotated parity is shown to furnish rapid recovery from failure at the same low storage cost without interleaving the data over multiple disks. Both noncached systems and systems using a nonvolatile cache in the controller are considered.

  8. Physics-based signal processing algorithms for micromachined cantilever arrays

    DOEpatents

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  9. TRIGA: Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Pingree, Paula J.; Torgerson, J. Leigh

    2006-01-01

    We present the Telecommunications protocol processing subsystem using Reconfigurable Interoperable Gate Arrays (TRIGA), a novel approach that unifies fault tolerance, error correction coding and interplanetary communication protocol off-loading to implement CCSDS File Delivery Protocol and Datalink layers. The new reconfigurable architecture offers more than one order of magnitude throughput increase while reducing footprint requirements in memory, command and data handling processor utilization, communication system interconnects and power consumption.

  10. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics are considered in the low-level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrate this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  11. Multiplexed optical operation of nanoelectromechanical systems (NEMS) arrays for sensing and signal-processing applications

    NASA Astrophysics Data System (ADS)

    Sampathkumar, Ashwin

    2014-06-01

    NEMS are rapidly being developed for a variety of sensing applications as well as for exploring interesting regimes in fundamental physics. In most of these endeavors, operation of a NEMS device involves actuating the device harmonically around its fundamental resonance and detecting subsequent motion while the device interacts with its environment. Even though a single NEMS resonator is exceptionally sensitive, a typical application, such as sensing or signal processing, requires the detection of signals from many resonators distributed over the surface of a chip. Therefore, one of the key technological challenges in the field of NEMS is the development of multiplexed measurement techniques to detect the motion of a large number of NEMS resonators simultaneously. In this work, we address the important and difficult problem of interfacing with a large number of NEMS devices and facilitating the use of such arrays in, for example, sensing and signal processing applications. We report a versatile, all-optical technique to excite and read out a distributed NEMS array. The NEMS array is driven by a distributed, intensity-modulated optical pump through the photothermal effect. The ensuing vibrational response of the array is multiplexed onto a single probe beam as a high-frequency phase modulation. The phase modulation is optically down-converted to a low-frequency intensity modulation using an adaptive full-field interferometer, and is subsequently detected using a charge-coupled device (CCD) array. Rapid and single-step mechanical characterization of approximately 60 nominally identical, high-frequency resonators is demonstrated. The technique may enable sensitivity improvements over single NEMS resonators by averaging signals coming from a multitude of devices in the array. In addition, the diffraction-limited spatial resolution may allow for position-dependent read-out of NEMS sensor chips for sensing multiple analytes or spatially inhomogeneous forces.

  12. Adaptive optics for array telescopes using piston-and-tilt wave-front sensing

    NASA Technical Reports Server (NTRS)

    Wizinowich, P.; Mcleod, B.; Lloyd-Yhart, M.; Angel, J. R. P.; Colucci, D.; Dekany, R.; Mccarthy, D.; Wittman, D.; Scott-Fleming, I.

    1992-01-01

    A near-infrared adaptive optics system operating at about 50 Hz has been used to control phase errors adaptively between two mirrors of the Multiple Mirror Telescope by stabilizing the position of the interference fringe in the combined unresolved far-field image. The resultant integrated images have angular resolutions of better than 0.1 arcsec and fringe contrasts of more than 0.6. Measurements of wave-front tilt have confirmed the wavelength independence of image motion. These results show that interferometric sensing of phase errors, when combined with a system for sensing the wave-front tilt of the individual telescopes, will provide a means of achieving a stable diffraction-limited focus with segmented telescopes or arrays of telescopes.

  13. Adaptation Processes in Chinese: Word Formation.

    ERIC Educational Resources Information Center

    Pasierbsky, Fritz

    The typical pattern of Chinese word formation is to have native material adapt to changed circumstances. The Chinese language neither borrows nor lends words, but it does occasionally borrow concepts. The larger cultural pattern in which this occurs is that the Chinese culture borrows, if necessary, but ensures that the act of borrowing does not…

  14. Flat-plate solar array project. Volume 5: Process development

    NASA Astrophysics Data System (ADS)

    Gallagher, B.; Alexander, P.; Burger, D.

    1986-10-01

    The goal of the Process Development Area, as part of the Flat-Plate Solar Array (FSA) Project, was to develop and demonstrate solar cell fabrication and module assembly process technologies required to meet the cost, lifetime, production capacity, and performance goals of the FSA Project. R&D efforts expended by Government, Industry, and Universities in developing processes capable of meeting the project's goals during volume production conditions are summarized. The cost goals allocated for processing were demonstrated by small volume quantities that were extrapolated by cost analysis to large volume production. To provide proper focus and coverage of the process development effort, four separate technology sections are discussed: surface preparation, junction formation, metallization, and module assembly.

  15. Flat-plate solar array project. Volume 5: Process development

    NASA Technical Reports Server (NTRS)

    Gallagher, B.; Alexander, P.; Burger, D.

    1986-01-01

    The goal of the Process Development Area, as part of the Flat-Plate Solar Array (FSA) Project, was to develop and demonstrate solar cell fabrication and module assembly process technologies required to meet the cost, lifetime, production capacity, and performance goals of the FSA Project. R&D efforts expended by Government, Industry, and Universities in developing processes capable of meeting the project's goals during volume production conditions are summarized. The cost goals allocated for processing were demonstrated by small volume quantities that were extrapolated by cost analysis to large volume production. To provide proper focus and coverage of the process development effort, four separate technology sections are discussed: surface preparation, junction formation, metallization, and module assembly.

  16. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.

  17. A systematic process for adaptive concept exploration

    NASA Astrophysics Data System (ADS)

    Nixon, Janel Nicole

    several common challenges to the creation of quantitative modeling and simulation environments. Namely, a greater number of alternative solutions implies a greater number of design variables as well as larger ranges on those variables. This translates to a high-dimension combinatorial problem. As the size and dimensionality of the solution space grow larger, the number of physically impossible solutions within that space greatly increases. Thus, the ratio of feasible design space to infeasible space decreases, making it much harder not only to obtain a good quantitative sample of the space, but also to make sense of that data. This is especially the case in the early stages of design, where it is not practical to dedicate a great deal of resources to performing thorough, high-fidelity analyses on all the potential solutions. To make quantitative analyses feasible in these early stages of design, a method is needed that allows a relatively sparse set of information to be collected quickly and efficiently, and yet that information needs to be meaningful enough to base a decision on. The method developed to address this need is the Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The SPACE method uses a four-part sampling scheme to efficiently uncover the parametric relationships between the design variables and responses. Step 1 aims to identify the location of infeasible space within the region of interest using an initial

  18. Electrowetting-based adaptive vari-focal liquid lens array for 3D display

    NASA Astrophysics Data System (ADS)

    Won, Yong Hyub

    2014-10-01

    Electrowetting is a phenomenon in which the surface tension of a liquid can be controlled by an applied voltage. This paper introduces a fabrication method for a liquid lens array based on the electrowetting phenomenon. The fabricated 23 by 23 lens array has lenses of 1 mm diameter with a 1.6 mm spacing between adjacent lenses. The diopter of each lens ranged from -24 to 27 as the drive voltage was varied from 0 V to 50 V. The lens array chamber, fabricated by deep reactive-ion etching (DRIE), is coated with IZO, parylene C and tantalum oxide. Parylene C prevents water penetration, and tantalum oxide (ɛ = 23 ~ 25) provides a high dielectric constant. A hydrophobic coating, which enables contact angles ranging from 60 to 160 degrees, is applied to maximize the electrowetting effect and thereby obtain a wide range of dioptric power. Liquid is injected into each lens chamber in two different ways: the first is self water-oil dosing, which uses a cosolvent and the diffusion effect, while the second is micro-syringe dosing exploiting the hydrophobic surface properties. To complete the lens array fabrication, underwater sealing was performed using a UV adhesive that does not dissolve in water. The transient time for switching from a concave to a convex lens was measured to be <33 ms (at an AC drive frequency of 1 kHz). The liquid lens array was then tested, for the first time, in an integral imaging configuration to obtain improved depth information in a 3D image.

  19. Electronic Processing And Advantages Of CMT Focal Plane Arrays

    NASA Astrophysics Data System (ADS)

    Murphy, Kevin S.; Dennis, Peter N.; Bradley, Derek J.

    1990-04-01

    There have been many advances in thermal imaging systems and components in recent years such that an infrared capability is now readily available and accepted in a variety of military and civilian applications. Conventional thermal imagers such as the UK common module imager use a mechanical scanning system to sweep a small array of detectors across the thermal scene to generate a high definition TV compatible output. Although excellent imagery can be obtained from this type of system, there are some inherent disadvantages, amongst which are the need for a high speed line scan mechanism and the fundamental limit in thermal resolution due to the low stare efficiency of the system. With the advent of two dimensional focal plane array detectors, staring array imagers can now be designed and constructed in which the scanning mechanism is removed. Excellent thermal resolution can be obtained from such imagers due to the relatively long stare times. The recent progress in this technology will be discussed in this paper together with a description of the signal processing requirements of this type of imaging system.

  20. ArrayPipe: a flexible processing pipeline for microarray data.

    PubMed

    Hokamp, Karsten; Roche, Fiona M; Acab, Michael; Rousseau, Marc-Etienne; Kuo, Byron; Goode, David; Aeschliman, Dana; Bryan, Jenny; Babiuk, Lorne A; Hancock, Robert E W; Brinkman, Fiona S L

    2004-07-01

    A number of microarray analysis software packages exist already; however, none combines the user-friendly features of a web-based interface with the ability to analyse multiple arrays at once using flexible analysis steps. The ArrayPipe web server (freely available at www.pathogenomics.ca/arraypipe) allows the automated application of complex analyses to microarray data which can range from single slides to large data sets including replicates and dye-swaps. It handles output from most commonly used quantification software packages for dual-labelled arrays. Application features range from quality assessment of slides through various data visualizations to multi-step analyses including normalization, detection of differentially expressed genes, and comparison and highlighting of gene lists. A highly customizable action set-up facilitates unrestricted arrangement of functions, which can be stored as action profiles. A unique combination of web-based and command-line functionality enables comfortable configuration of processes that can be repeatedly applied to large data sets in high throughput. The output consists of reports formatted as standard web pages and tab-delimited lists of calculated values that can be inserted into other analysis programs. Additional features, such as web-based spreadsheet functionality, auto-parallelization and password protection, make this a powerful tool in microarray research for individuals and large groups alike. PMID:15215429

  1. Enhanced Processing for a Towed Array Using an Optimal Noise Canceling Approach

    SciTech Connect

    Sullivan, E J; Candy, J V

    2005-07-21

    Noise self-generated by a surface ship towing an array in search of a weak target presents a major problem for the signal processing, especially if broadband techniques are being employed. In this paper we discuss the development and application of an adaptive noise canceling processor capable of extracting the weak far-field acoustic target in a noisy ocean acoustic environment. The fundamental idea for this processor is to use a model-based approach incorporating both target and ship noise. Here we briefly describe the underlying theory and then demonstrate through simulation how effectively the canceller and target enhancer perform. The adaptivity of the processor not only enables the ''tracking'' of the canceller coefficients, but also the estimation of target parameters for localization. This approach, termed ''joint'' cancellation and enhancement, produces the optimal estimate of both in a minimum (error) variance sense.

  2. Welding Process Feedback and Inspection Optimization Using Ultrasonic Phased Arrays

    NASA Astrophysics Data System (ADS)

    Hopkins, D. L.; Neau, G. N.; Davis, W. B.

    2009-03-01

    Measurements performed on friction-stir butt welds in aluminum and resistance spot welds in galvanized steel are used to illustrate how ultrasonic phased arrays can be used to provide high-resolution images of welds. Examples are presented that demonstrate how information extracted from the ultrasonic signals can be used to provide reliable feedback to welding processes. Modeling results are used to demonstrate how weld inspections can be optimized using beam-forming strategies that help overcome the influence of surface conditions and part distortion.

  3. Superconducting infrared detector arrays with integrated processing circuitry

    SciTech Connect

    Osterman, D.P.; Marr, P.; Dang, H.; Yao, C.T.; Radparvar, M.

    1991-03-01

    This paper reports on thin film Josephson junctions used as infrared detectors, which function by a thermal sensing mechanism. In addition to the potential for high sensitivity to a broad range of optical wavelengths, they are ideally suited for integration with superconducting electronics on a single wafer. A project at HYPRES to develop these arrays is directed along two avenues: maximizing the sensitivity of individual Josephson junction detector/SQUID amplifier units and developing superconducting on-chip processing circuitry - multiplexers and A-to-D converters.

  4. Microbubble array for on-chip worm processing

    NASA Astrophysics Data System (ADS)

    Xu, Yuhao; Hashmi, Ali; Yu, Gan; Lu, Xiaonan; Kwon, Hyuck-Jin; Chen, Xiaolin; Xu, Jie

    2013-01-01

    We present an acoustic non-contact technique for achieving trapping, enrichment, and manipulation of Caenorhabditis elegans using an array of oscillating microbubbles. We characterize the trapping efficiency and enrichment ratio under various flow conditions, and demonstrate a single-worm manipulation mechanism through temporal actuation of bubbles. Oscillating bubbles are versatile tools for processing worms in a microfluidic environment because of the complex interactions among the acoustic field, microbubbles, fluid flow, and live animals. We explain the operating mechanisms used in our device by the interplay among the secondary acoustic radiation force, the drag force, and the propulsive force of C. elegans.

  5. SAR processing with stepped chirps and phased array antennas.

    SciTech Connect

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from a series or group of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with its own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.
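
    A minimal numerical sketch of this segment-assembly idea follows (illustrative sample rate, bandwidth, and segment count; not taken from the report): an LFM chirp evaluated over the i-th segment equals a lower-bandwidth chirp with a per-segment frequency offset and phase correction, so concatenating such segments reproduces the full wideband chirp.

        # Sketch: assemble a wideband LFM chirp from narrow-band chirp segments
        import numpy as np

        fs = 100e6            # sample rate (Hz) - illustrative values only
        T = 100e-6            # total chirp duration (s)
        B = 20e6              # total swept bandwidth (Hz)
        k = B / T             # chirp rate (Hz/s)
        n_seg = 4             # number of narrow-band segments

        t = np.arange(0, T, 1 / fs)
        full_chirp = np.exp(1j * np.pi * k * t**2)          # reference wideband LFM

        segments = []
        seg_len = len(t) // n_seg
        for i in range(n_seg):
            tau = t[:seg_len]                               # local (fast) time within the segment
            t0 = i * seg_len / fs                           # segment start time
            f_off = k * t0                                  # per-segment frequency offset
            phi0 = np.pi * k * t0**2                        # per-segment phase correction
            segments.append(np.exp(1j * (np.pi * k * tau**2 + 2 * np.pi * f_off * tau + phi0)))

        assembled = np.concatenate(segments)
        print("max reconstruction error:", np.max(np.abs(assembled - full_chirp[:len(assembled)])))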

  6. Solution processed semiconductor alloy nanowire arrays for optoelectronic applications

    NASA Astrophysics Data System (ADS)

    Shimpi, Paresh R.

    In this dissertation, we use ZnO nanowires as a model system to investigate the potential of solution routes for bandgap engineering in semiconductor nanowires. Excitingly, successful Mg-alloying into ZnO nanowire arrays has been achieved using a two-step sequential hydrothermal method at low temperature (<155°C) without a post-annealing process. Evidently, both room temperature and 40 K photoluminescence (PL) spectroscopy revealed enhanced and blue-shifted near-band-edge ultraviolet (NBE UV) emission in the Mg-alloyed ZnO (ZnMgO) nanowire arrays, compared with ZnO nanowires. The specific template of densely packed ZnO nanowires is found to be instrumental in achieving the Mg alloying in the low-temperature solution process. By optimizing the density of ZnO nanowires and the precursor concentration, 8-10 at.% of Mg content has been achieved in ZnMgO nanowires. Post-annealing treatment is conducted in oxygen-rich and oxygen-deficient environments at different temperatures and time durations on silicon and quartz substrates in order to study the structural and optical property evolution in ZnMgO nanowire arrays. Vacuum-annealed ZnMgO nanowires on both substrates retained their hexagonal structures, and PL results showed enhanced but red-shifted NBE UV emission compared to ZnO nanowires, with visible emission nearly suppressed, suggesting a reduced defect concentration and improved crystallinity of the nanowires. In contrast, for ambient-annealed ZnMgO nanowires on a silicon substrate, as the annealing temperature increased from 400°C to 900°C, the intensity of the visible emission peak across the blue-green-yellow-red band (˜400-660 nm) increased whereas the intensity of the NBE UV peak decreased and was eventually completely quenched. This might be due to interface diffusion of oxidized Si (SiOx) and the formation of (Zn,Mg)1.7SiO4 epitaxially overcoating individual ZnMgO nanowires. On the other hand, ambient-annealed ZnMgO nanowires grown on quartz showed a ˜6-10 nm blue-shift in

  7. Signal processing of microbolometer infrared focal-plane arrays

    NASA Astrophysics Data System (ADS)

    Zhang, Junju; Qian, Yunsheng; Chang, Benkang; Xing, Suxia; Sun, Lianjun

    2005-01-01

    A 320×240-uncooled-microbolometer-based signal processing circuit for infrared focal-plane arrays is presented, and the software design of this circuit system is also discussed in detail. The signal processing circuit comprises devices such as an FPGA, D/A, A/D, SRAM, Flash, and DSP, among which the FPGA is the crucial part, realizing the generation of drive signals for the infrared focal plane, nonuniformity correction, image enhancement, and video composition. The DSP, which mainly offers auxiliary functions, carries out communication with the PC and loads data at power-up. Phase-locked loops (PLLs) are used to generate high-quality clocks with low phase jitter, and multiple clocks are used to satisfy the demands of the focal-plane array, A/D, D/A, and FPGA. An alternating access structure is used to read and write the SRAM in order to avoid conflicts between different modules. A FIFO embedded in the FPGA not only makes full use of the FPGA resources but also acts as the channel between modules driven by clocks of different speeds. In addition, the working conditions, operation, physical design, and management of the circuit are discussed. On the software side, all of the function modules realized by the FPGA and DSP devices mentioned above are discussed explicitly. In particular, for the nonuniformity correction module, a pipeline structure is designed to increase the operating frequency and the ability to realize more complex algorithms.

  8. Adaptive Processes in Thalamus and Cortex Revealed by Silencing of Primary Visual Cortex during Contrast Adaptation.

    PubMed

    King, Jillian L; Lowe, Matthew P; Stover, Kurt R; Wong, Aimee A; Crowder, Nathan A

    2016-05-23

    Visual adaptation illusions indicate that our perception is influenced not only by the current stimulus but also by what we have seen in the recent past. Adaptation to stimulus contrast (the relative luminance created by edges or contours in a scene) induces the perception of the stimulus fading away and increases the contrast detection threshold in psychophysical tests [1, 2]. Neural correlates of contrast adaptation have been described throughout the visual system including the retina [3], dorsal lateral geniculate nucleus (dLGN) [4, 5], primary visual cortex (V1) [6], and parietal cortex [7]. The apparent ubiquity of adaptation at all stages raises the question of how this process cascades across brain regions [8]. Focusing on V1, adaptation could be inherited from pre-cortical stages, arise from synaptic depression at the thalamo-cortical synapse [9], or develop locally, but what is the weighting of these contributions? Because contrast adaptation in mouse V1 is similar to classical animal models [10, 11], we took advantage of the optogenetic tools available in mice to disentangle the processes contributing to adaptation in V1. We disrupted cortical adaptation by optogenetically silencing V1 and found that adaptation measured in V1 now resembled that observed in dLGN. Thus, the majority of adaptation seen in V1 neurons arises through local activity-dependent processes, with smaller contributions from dLGN inheritance and synaptic depression at the thalamo-cortical synapse. Furthermore, modeling indicates that divisive scaling of the weakly adapted dLGN input can predict some of the emerging features of V1 adaptation. PMID:27112300
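
    As a toy illustration of the divisive-scaling idea mentioned in the final sentence (the Naka-Rushton contrast-response function, gain value, and all parameters below are illustrative assumptions, not the authors' fitted model), a downstream stage whose input is divisively scaled shows the rightward shift of its contrast-response curve that is characteristic of contrast adaptation:

        # Sketch: divisive scaling of a weakly adapted input stage shifts the
        # downstream contrast-response function toward higher contrasts
        import numpy as np

        def crf(c, rmax=1.0, c50=0.2, n=2.0):
            # Naka-Rushton contrast-response function
            c = np.asarray(c, dtype=float)
            return rmax * c**n / (c**n + c50**n)

        contrast = np.linspace(0.0, 1.0, 101)

        def apparent_c50(response):
            # contrast at which the response reaches half of its maximum
            return contrast[np.argmin(np.abs(response - response.max() / 2.0))]

        # Downstream (V1-like) stage driven by the input (dLGN-like) stage
        v1_before = crf(crf(contrast), c50=0.25)
        # After adaptation: the input stage adapts only weakly, but its output is
        # divisively scaled before reaching the downstream stage
        gain = 2.0
        v1_after = crf(crf(contrast, rmax=0.9) / gain, c50=0.25)

        print("apparent c50 before adaptation:", apparent_c50(v1_before))
        print("apparent c50 after adaptation :", apparent_c50(v1_after))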

  9. Adaptive Constructive Processes and the Future of Memory

    ERIC Educational Resources Information Center

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…

  10. Adaptation to Work: An Exploration of Processes and Outcomes.

    ERIC Educational Resources Information Center

    Ashley, William L.; And Others

    A study of adaptation to work as both a process and an outcome was conducted. The study was conducted by personal interview that probed adaptation with respect to work's organizational, performance, interpersonal, responsibility, and affective aspects; and by questionnaire using the same aspects. The population studied consisted of persons without…

  11. Room geometry inference based on spherical microphone array eigenbeam processing.

    PubMed

    Mabande, Edwin; Kowalczyk, Konrad; Sun, Haohai; Kellermann, Walter

    2013-10-01

    The knowledge of parameters characterizing an acoustic environment, such as the geometric information about a room, can be used to enhance the performance of several audio applications. In this paper, a novel method for three-dimensional room geometry inference based on robust and high-resolution beamforming techniques for spherical microphone arrays is presented. Unlike other approaches that are based on the measurement and processing of multiple room impulse responses, here, microphone array signal processing techniques for uncontrolled broadband acoustic signals are applied. First, the directions of arrival (DOAs) and time differences of arrival (TDOAs) of the direct signal and room reflections are estimated using high-resolution robust broadband beamforming techniques and cross-correlation analysis. In this context, the main challenges include the low reflected-signal to background-noise power ratio, the low energy of reflected signals relative to the direct signal, and their strong correlation with the direct signal and among each other. Second, the DOA and TDOA information is combined to infer the room geometry using geometric relations. The high accuracy of the proposed room geometry inference technique is confirmed by experimental evaluations based on both simulated and measured data for moderately reverberant rooms. PMID:24116416

  12. Adaptive Noise Suppression Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Kozel, David; Nelson, Richard

    1996-01-01

    A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
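
    A minimal sketch of this kind of SNR-dependent spectral subtraction is shown below (frame length, thresholds, and the subtraction schedule are illustrative assumptions, not the NASA implementation): the noise spectrum is averaged during low-energy frames, the subtraction proportion grows as the estimated SNR falls, and weak residuals are floored (squelched).

        # Sketch: SNR-dependent spectral subtraction with a running noise average
        import numpy as np

        def spectral_subtract(x, frame_len=256, alpha_max=3.0, floor=0.02, n_noise_frames=10):
            hop = frame_len // 2
            window = np.hanning(frame_len)
            noise_psd = None
            out = np.zeros(len(x))
            for idx, start in enumerate(range(0, len(x) - frame_len, hop)):
                frame = x[start:start + frame_len] * window
                spec = np.fft.rfft(frame)
                mag, phase = np.abs(spec), np.angle(spec)
                if noise_psd is None:
                    noise_psd = mag**2
                snr_db = 10 * np.log10(np.mean(mag**2) / (np.mean(noise_psd) + 1e-12) + 1e-12)
                if idx < n_noise_frames or snr_db < 3.0:        # noise-only / unvoiced frame
                    noise_psd = 0.9 * noise_psd + 0.1 * mag**2  # running average of the noise
                alpha = np.clip(alpha_max - snr_db / 10.0, 1.0, alpha_max)  # SNR-dependent proportion
                clean_mag = np.sqrt(np.maximum(mag**2 - alpha * noise_psd, (floor * mag)**2))
                out[start:start + frame_len] += np.fft.irfft(clean_mag * np.exp(1j * phase), frame_len) * window
            return out

        # Example: a tone that starts after 0.5 s of noise-only signal
        rng = np.random.default_rng(0)
        t = np.arange(16000) / 8000.0
        noisy = np.sin(2 * np.pi * 440.0 * t) * (t > 0.5) + 0.3 * rng.standard_normal(len(t))
        cleaned = spectral_subtract(noisy)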

  13. Adaptive Array for Weak Interfering Signals: Geostationary Satellite Experiments. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Steadman, Karl

    1989-01-01

    The performance of an experimental adaptive array is evaluated using signals from an existing geostationary satellite interference environment. To do this, an earth station antenna was built to receive signals from various geostationary satellites. In these experiments the received signals have a frequency of approximately 4 GHz (C-band) and have a bandwidth of over 35 MHz. These signals are downconverted to a 69 MHz intermediate frequency in the experimental system. Using the downconverted signals, the performance of the experimental system for various signal scenarios is evaluated. In this situation, due to the inherent thermal noise, qualitative instead of quantitative test results are presented. It is shown that the experimental system can null up to two interfering signals well below the noise level. However, to avoid the cancellation of the desired signal, the use of a steering vector is needed. Various methods to obtain an estimate of the steering vector are proposed.

  14. High-resolution optical coherence tomography using self-adaptive FFT and array detection

    NASA Astrophysics Data System (ADS)

    Zhao, Yonghua; Chen, Zhongping; Xiang, Shaohua; Ding, Zhihua; Ren, Hongwu; Nelson, J. Stuart; Ranka, Jinendra K.; Windeler, Robert S.; Stentz, Andrew J.

    2001-05-01

    We developed a novel optical coherence tomographic (OCT) system which utilized broadband continuum generation for high axial resolution and a high-numerical-aperture (NA) objective for high lateral resolution (<5 μm). The optimal focusing point was dynamically compensated during axial scanning so that it was kept at the position whose optical path length equals that of the reference arm. This gives a uniform focal spot size (<5 μm) at different depths. A new self-adaptive fast Fourier transform (FFT) algorithm was developed to digitally demodulate the interference fringes. The system employed a four-channel detector array for speckle reduction that significantly improved the image's signal-to-noise ratio.

  15. Dependence of magnetization process on thickness of Permalloy antidot arrays

    SciTech Connect

    Merazzo, K. J.; Real, R. P. del; Asenjo, A.; Vazquez, M.

    2011-04-01

    Nanohole films or antidot arrays of Permalloy have been prepared by the sputtering of Ni{sub 80}Fe{sub 20} onto anodic alumina membrane templates. The film thickness varies from 5 to 47 nm and the antidot diameters range from 42 to 61 nm, for a hexagonal lattice parameter of 105 nm. For the thinner antidot films (5 and 10 nm thick), magnetic moments locally distribute in a complex manner to reduce the magnetostatic energy, and their mostly reversible magnetization process is ascribed to spin rotations. In the case of the thicker (20 and 47 nm) antidot films, pseudodomain walls appear and the magnetization process is mostly irreversible, with the hysteresis reflecting the pinning of wall motion by the nanoholes.

  16. Adaptive Memory: Is Survival Processing Special?

    ERIC Educational Resources Information Center

    Nairne, James S.; Pandeirada, Josefa N. S.

    2008-01-01

    Do the operating characteristics of memory continue to bear the imprints of ancestral selection pressures? Previous work in our laboratory has shown that human memory may be specially tuned to retain information processed in terms of its survival relevance. A few seconds of survival processing in an incidental learning context can produce recall…

  17. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data. PMID:23938797

  18. Analysis of Wide-Band Signals Using Wavelet Array Processing

    NASA Astrophysics Data System (ADS)

    Nisii, V.; Saccorotti, G.

    2005-12-01

    Wavelet transforms allow for precise time-frequency localization in the analysis of non-stationary signals. In wavelet analysis the trade-off between frequency bandwidth and time duration, also known as the Heisenberg inequality, is by-passed using a fully scalable modulated window which solves the signal-cutting problem of the Windowed Fourier Transform. We propose a new seismic array data processing procedure capable of displaying the localized spatial coherence of the signal in both the time- and frequency-domain, in turn deriving the propagation parameters of the most coherent signals crossing the array. The procedure consists of: a) Wavelet coherence analysis for each station pair of the instrument array, aimed at retrieving the frequency- and time-localisation of coherent signals. To this purpose, we use the normalised wavelet cross-power spectrum, smoothed along the time and scale domains. We calculate different coherence spectra adopting smoothing windows of increasing lengths; a final, robust estimate of the time-frequency localisation of spatially-coherent signals is eventually retrieved from the stack of the individual coherence distributions. This step allows for a quick and reliable signal discrimination: wave groups propagating across the network will manifest as high-coherence patches spanning the corresponding time-scale region. b) Once the signals have been localised in the time and frequency domain, their propagation parameters are estimated using a modified MUSIC (MUltiple SIgnal Classification) algorithm. We select the MUSIC approach as it demonstrated superior performance in the case of low-SNR signals, multiple plane waves simultaneously impinging on the array, and closely separated sources. The narrow-band Coherent Signal Subspace technique is applied to the complex Continuous Wavelet Transform of multichannel data for improving the singularity of the estimated cross-covariance matrix and the accuracy of the estimated signal eigenvectors. Using
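
    For reference, the narrow-band MUSIC step at the core of part (b) can be sketched as follows (plain MUSIC on a synthetic five-station array with made-up geometry, frequency, and noise; the authors' modified wavelet-domain version is not reproduced here): the noise subspace of the array covariance is scanned over a velocity/azimuth grid and the pseudo-spectrum peaks at the propagation parameters of the coherent signal.

        # Sketch: narrow-band MUSIC scan over slowness (velocity) and azimuth
        import numpy as np

        def music_spectrum(R, coords, freq, c_grid, az_grid, n_sources=1):
            w, v = np.linalg.eigh(R)
            En = v[:, :-n_sources]                 # noise subspace (smallest eigenvalues)
            P = np.zeros((len(c_grid), len(az_grid)))
            for i, c in enumerate(c_grid):
                for j, az in enumerate(az_grid):
                    k = 2 * np.pi * freq / c * np.array([np.cos(az), np.sin(az)])
                    a = np.exp(-1j * coords @ k)   # plane-wave steering vector
                    P[i, j] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return P

        # Synthetic example: one 2 Hz plane wave crossing a 5-station array
        rng = np.random.default_rng(0)
        coords = rng.uniform(-200, 200, size=(5, 2))            # station positions (m)
        freq, c_true, az_true = 2.0, 1500.0, np.deg2rad(60.0)
        k_true = 2 * np.pi * freq / c_true * np.array([np.cos(az_true), np.sin(az_true)])
        snapshots = (np.exp(-1j * coords @ k_true)[:, None] * np.exp(1j * rng.uniform(0, 2 * np.pi, 50))
                     + 0.1 * (rng.standard_normal((5, 50)) + 1j * rng.standard_normal((5, 50))))
        R = snapshots @ snapshots.conj().T / 50
        c_grid, az_grid = np.linspace(1000, 3000, 41), np.linspace(0, 2 * np.pi, 181)
        P = music_spectrum(R, coords, freq, c_grid, az_grid)
        i, j = np.unravel_index(np.argmax(P), P.shape)
        print("estimated velocity (m/s) and azimuth (deg):", c_grid[i], np.rad2deg(az_grid[j]))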

  19. Sensory Processing Subtypes in Autism: Association with Adaptive Behavior

    ERIC Educational Resources Information Center

    Lane, Alison E.; Young, Robyn L.; Baker, Amy E. Z.; Angley, Manya T.

    2010-01-01

    Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes…

  20. Metabolic Adaptation Processes That Converge to Optimal Biomass Flux Distributions

    PubMed Central

    Altafini, Claudio; Facchetti, Giuseppe

    2015-01-01

    In simple organisms like E.coli, the metabolic response to an external perturbation passes through a transient phase in which the activation of a number of latent pathways can guarantee survival at the expense of growth. Growth is gradually recovered as the organism adapts to the new condition. This adaptation can be modeled as a process of repeated metabolic adjustments obtained through repeated silencing of non-essential metabolic reactions, using the growth rate as the selection probability for the phenotypes obtained. The resulting metabolic adaptation process tends naturally to steer the metabolic fluxes towards high-growth phenotypes. Quite remarkably, when applied to the central carbon metabolism of E.coli, it follows that nearly all flux distributions converge to the flux vector representing optimal growth, i.e., the solution of the biomass optimization problem turns out to be the dominant attractor of the metabolic adaptation process. PMID:26340476

  1. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals, named adaptive robustness, is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  2. Self-adapting root-MUSIC algorithm and its real-valued formulation for acoustic vector sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Guo-jun; Xue, Chen-yang; Zhang, Wen-dong; Xiong, Ji-jun

    2012-12-01

    In this paper, starting from the root-MUSIC algorithm for acoustic pressure sensor arrays, a new self-adapting root-MUSIC algorithm for acoustic vector sensor arrays is proposed, in which the lead orientation vector is selected self-adaptively; a real-valued formulation based on forward-backward (FB) smoothing and a real-valued inverse covariance matrix is also proposed, which reduces the computational complexity and can distinguish coherent signals. Simulation results show that the two new algorithms outperform the traditional MUSIC algorithm in direction-of-arrival (DOA) estimation at low signal-to-noise ratio (SNR), and experimental results using a MEMS vector hydrophone array in lake trials demonstrate the engineering practicability of the two new algorithms.
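
    A generic sketch of root-MUSIC with forward-backward averaging is given below for a uniform linear array of scalar sensors (the paper's vector-sensor lead-orientation selection and real-valued transformation are not reproduced; array size, spacing, and noise level are illustrative): the noise-subspace polynomial is rooted and the roots closest to the unit circle give the DOAs.

        # Sketch: root-MUSIC for a uniform linear array with forward-backward averaging
        import numpy as np

        def root_music_ula(R, n_sources, d_over_lambda=0.5):
            M = R.shape[0]
            J = np.eye(M)[::-1]                               # exchange matrix
            R_fb = 0.5 * (R + J @ R.conj() @ J)               # forward-backward averaging
            w, v = np.linalg.eigh(R_fb)
            En = v[:, :M - n_sources]                         # noise subspace
            C = En @ En.conj().T
            # Root-MUSIC polynomial coefficients are the diagonal sums of C
            coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
            roots = np.roots(coeffs)
            roots = roots[np.abs(roots) < 1.0]                # keep roots inside the unit circle
            roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:n_sources]  # closest to the circle
            return np.rad2deg(np.arcsin(np.angle(roots) / (2 * np.pi * d_over_lambda)))

        # Synthetic example: two sources at -10 and 25 degrees on an 8-element ULA
        M, N, doas = 8, 200, np.deg2rad([-10.0, 25.0])
        rng = np.random.default_rng(1)
        A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
        S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
        X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
        R = X @ X.conj().T / N
        print("estimated DOAs (deg):", np.sort(root_music_ula(R, 2)))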

  3. Lithography process of micropore array pattern in Si microchannel plates

    NASA Astrophysics Data System (ADS)

    Fan, Linlin; Han, Jun; Liu, Huan; Wang, Yawei

    2015-02-01

    Microchannel plates (MCPs) are the key component of image intensifiers. Compared with traditional MCPs, Si MCPs fabricated by micro/nanofabrication technologies have high gain, low noise, high resolution, etc. In this paper, the lithography process is studied for fabricating a periodic micropore array with 10 um pores and a 5 um pitch on Si. The focus is on the effects of exposure time, reversal bake temperature and development time on the lithography quality. A series of experiments yielded the following optimized parameters: the photoresist film is spun at a low speed of 500/15 (rpm/s) and a high speed of 4500/50 (rpm/s); the soft bake time is 10 min at 100°; the exposure time is 10 s; the reversal bake time is 80 s at 115°; the development time is 55 s. Microscope observation and measurement show that the pattern is complete and its dimensions are accurate, meeting the requirements of the lithography process for fabricating Si MCPs.

  4. Adaptive constructive processes and the future of memory

    PubMed Central

    Schacter, Daniel L.

    2013-01-01

    Memory serves critical functions in everyday life, but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, or illusions. The article describes several types of memory errors that are produced by adaptive constructive processes, and focuses in particular on the process of imagining or simulating events that might occur in one’s personal future. Simulating future events relies on many of the same cognitive and neural processes as remembering past events, which may help to explain why imagination and memory can be easily confused. The article considers both pitfalls and adaptive aspects of future event simulation in the context of research on planning, prediction, problem solving, mind-wandering, prospective and retrospective memory, coping and positivity bias, and the interconnected set of brain regions known as the default network. PMID:23163437

  5. ADAPT: A knowledge-based synthesis tool for digital signal processing system design

    SciTech Connect

    Cooley, E.S.

    1988-01-01

    A computer aided synthesis tool for expansion, compression, and filtration of digital images is described. ADAPT, the Autonomous Digital Array Programming Tool, uses an extensive design knowledge base to synthesize a digital signal processing (DSP) system. Input to ADAPT can be either a behavioral description in English, or a block level specification via Petri Nets. The output from ADAPT comprises code to implement the DSP system on an array of processors. ADAPT is constructed using C, Prolog, and X Windows on a SUN 3/280 workstation. ADAPT knowledge encompasses DSP component information and the design algorithms and heuristics of a competent DSP designer. The knowledge is used to form queries for design capture, to generate design constraints from the user's responses, and to examine the design constraints. These constraints direct the search for possible DSP components and target architectures. Constraints are also used for partitioning the target systems into less complex subsystems. The subsystems correspond to architectural building blocks of the DSP design. These subsystems inherit design constraints and DSP characteristics from their parent blocks. Thus, a DSP subsystem or parent block, as designed by ADAPT, must meet the user's design constraints. Design solutions are sought by searching the Components section of the design knowledge base. Component behavior which matches or is similar to that required by the DSP subsystems is sought. Each match, which corresponds to a design alternative, is evaluated in terms of its behavior. When a design is sufficiently close to the behavior required by the user, detailed mathematical simulations may be performed to accurately determine exact behavior.

  6. Dynamic analysis of neural encoding by point process adaptive filtering.

    PubMed

    Eden, Uri T; Frank, Loren M; Barbieri, Riccardo; Solo, Victor; Brown, Emery N

    2004-05-01

    Neural receptive fields are dynamic in that with experience, neurons change their spiking responses to relevant stimuli. To understand how neural systems adapt their representations of biological information, analyses of receptive field plasticity from experimental measurements are crucial. Adaptive signal processing, the well-established engineering discipline for characterizing the temporal evolution of system parameters, suggests a framework for studying the plasticity of receptive fields. We use the Bayes' rule Chapman-Kolmogorov paradigm with a linear state equation and point process observation models to derive adaptive filters appropriate for estimation from neural spike trains. We derive point process filter analogues of the Kalman filter, recursive least squares, and steepest-descent algorithms and describe the properties of these new filters. We illustrate our algorithms in two simulated data examples. The first is a study of slow and rapid evolution of spatial receptive fields in hippocampal neurons. The second is an adaptive decoding study in which a signal is decoded from ensemble neural spiking activity as the receptive fields of the neurons in the ensemble evolve. Our results provide a paradigm for adaptive estimation for point process observations and suggest a practical approach for constructing filtering algorithms to track neural receptive field dynamics on a millisecond timescale. PMID:15070506
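
    The scalar special case of such a point-process filter is easy to sketch (a random-walk log-rate model with made-up bin width and noise variance; not the paper's receptive-field examples): a Kalman-like prediction step is followed by a posterior update driven by the innovation between the observed spike count and the predicted intensity.

        # Sketch: point-process analogue of the Kalman filter for a drifting log firing rate
        import numpy as np

        rng = np.random.default_rng(2)
        dt, n_bins, q = 0.001, 5000, 1e-5                # bin width (s), bins, state noise variance
        theta_true = np.cumsum(np.sqrt(q) * rng.standard_normal(n_bins)) + np.log(20.0)
        spikes = rng.poisson(np.exp(theta_true) * dt)    # simulated spike counts per bin

        theta, W = np.log(20.0), 1.0                     # initial state estimate and variance
        theta_hat = np.zeros(n_bins)
        for k in range(n_bins):
            # one-step prediction (random-walk state model)
            theta_pred, W_pred = theta, W + q
            lam = np.exp(theta_pred)                     # predicted conditional intensity
            # posterior update from the point-process observation dN_k
            W = 1.0 / (1.0 / W_pred + lam * dt)
            theta = theta_pred + W * (spikes[k] - lam * dt)
            theta_hat[k] = theta

        print("RMS error in log-rate:", np.sqrt(np.mean((theta_hat - theta_true) ** 2)))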

  7. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  8. Study of the ICP etching process on InGaAs/InP array devices

    NASA Astrophysics Data System (ADS)

    Niu, Xiaochen; Deng, Jun; Shi, Yanli; Tian, Ying; Zou, Deshu

    2014-11-01

    The etching rates of large patterns and narrow grooves on InGaAs/InP materials by inductively coupled plasma (ICP) technology are very different. Aiming at a high etching rate, good morphology, smooth interfaces and fewer defects, the ICP etching mechanisms were analyzed by varying the gas flow rate, chamber pressure and RF power. Recipes were found that achieve a narrow stripe and deep groove with good uniformity, interface and morphology at a high etching rate and with good selectivity. The different phenomena observed when etching large patterns and narrow grooves are explained, and the sets of parameters adapted to array devices on InGaAs/InP materials during the ICP process are summarized.

  9. A self-adaptive thermal switch array for rapid temperature stabilization under various thermal power inputs

    NASA Astrophysics Data System (ADS)

    Geng, Xiaobao; Patel, Pragnesh; Narain, Amitabh; Desheng Meng, Dennis

    2011-08-01

    A self-adaptive thermal switch array (TSA) based on actuation by low-melting-point alloy droplets is reported to stabilize the temperature of a heat-generating microelectromechanical system (MEMS) device at a predetermined range (i.e. the optimal working temperature of the device) with neither a control circuit nor electrical power consumption. When the temperature is below this range, the TSA stays off and works as a thermal insulator. Therefore, the MEMS device can quickly heat itself up to its optimal working temperature during startup. Once this temperature is reached, TSA is automatically turned on to increase the thermal conductance, working as an effective thermal spreader. As a result, the MEMS device tends to stay at its optimal working temperature without complex thermal management components and the associated parasitic power loss. A prototype TSA was fabricated and characterized to prove the concept. The stabilization temperatures under various power inputs have been studied both experimentally and theoretically. Under the increment of power input from 3.8 to 5.8 W, the temperature of the device increased only by 2.5 °C due to the stabilization effect of TSA.

  10. Applying statistical process control to the adaptive rate control problem

    NASA Astrophysics Data System (ADS)

    Manohar, Nelson R.; Willebeek-LeMair, Marc H.; Prakash, Atul

    1997-12-01

    Due to the heterogeneity and shared resource nature of today's computer network environments, the end-to-end delivery of multimedia requires adaptive mechanisms to be effective. We present a framework for the adaptive streaming of heterogeneous media. We introduce the application of online statistical process control (SPC) to the problem of dynamic rate control. In SPC, the goal is to establish (and preserve) a state of statistical quality control (i.e., controlled variability around a target mean) over a process. We consider the end-to-end streaming of multimedia content over the internet as the process to be controlled. First, at each client, we measure process performance and apply statistical quality control (SQC) with respect to application-level requirements. Then, we guide an adaptive rate control (ARC) problem at the server based on the statistical significance of trends and departures on these measurements. We show this scheme facilitates handling of heterogeneous media. Last, because SPC is designed to monitor long-term process performance, we show that our online SPC scheme could be used to adapt to various degrees of long-term (network) variability (i.e., statistically significant process shifts as opposed to short-term random fluctuations). We develop several examples and analyze its statistical behavior and guarantees.
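
    A minimal sketch of the idea (hypothetical quality metric, window, and gains; not the authors' framework) is to hold the sending rate while a client-side measurement stays within control limits around its target, and to adapt the rate only on statistically significant shifts rather than on short-term random fluctuations.

        # Sketch: SPC-style control limits driving an adaptive rate controller
        import numpy as np

        def spc_rate_controller(measurements, target, rate0, window=20, k=3.0, step=0.1):
            """Adjust the sending rate only when the measured quality metric
            (e.g., client-side frame delay) drifts outside target +/- k*sigma limits."""
            rate, history = rate0, []
            for m in measurements:
                history.append(m)
                if len(history) < window:
                    continue
                recent = np.array(history[-window:])
                mean, sigma = recent.mean(), recent.std(ddof=1)
                ucl, lcl = target + k * sigma, target - k * sigma
                if mean > ucl:            # statistically significant degradation: back off
                    rate *= (1.0 - step)
                elif mean < lcl:          # sustained headroom: probe a higher rate
                    rate *= (1.0 + step)
            return rate

        # Example: simulated per-frame delays (ms) drifting upward mid-stream
        rng = np.random.default_rng(3)
        delays = np.concatenate([rng.normal(40, 3, 200), rng.normal(55, 3, 200)])
        print("final rate multiplier:", spc_rate_controller(delays, target=40.0, rate0=1.0))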

  11. Prism adaptation in virtual and natural contexts: Evidence for a flexible adaptive process.

    PubMed

    Veilleux, Louis-Nicolas; Proteau, Luc

    2015-01-01

    Prism exposure when aiming at a visual target in a virtual condition (e.g., when the hand is represented by a video representation) produces no or only small adaptations (after-effects), whereas prism exposure in a natural condition produces large after-effects. Some researchers suggested that this difference may arise from distinct adaptive processes, but other studies suggested a unique process. The present study reconciled these conflicting interpretations. Forty participants were divided into two groups: One group used visual feedback of their hand (natural context), and the other group used computer-generated representational feedback (virtual context). Visual feedback during adaptation was concurrent or terminal. All participants underwent laterally displacing prism perturbation. The results showed that the after-effects were twice as large in the "natural context" than in the "virtual context". No significant differences were observed between the concurrent and terminal feedback conditions. The after-effects generalized to untested targets and workspace. These results suggest that prism adaptation in virtual and natural contexts involves the same process. The smaller after-effects in the virtual context suggest that the depth of adaptation is a function of the degree of convergence between the proprioceptive and visual information that arises from the hand. PMID:25338188

  12. Damage Detection in Composite Structures with Wavenumber Array Data Processing

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Guided ultrasonic waves (GUW) have the potential to be an efficient and cost-effective method for rapid damage detection and quantification of large structures. Attractive features include sensitivity to a variety of damage types and the capability of traveling relatively long distances. They have proven to be an efficient approach for crack detection and localization in isotropic materials. However, techniques must be pushed beyond isotropic materials in order to be valid for composite aircraft components. This paper presents our study on GUW propagation and interaction with delamination damage in composite structures using wavenumber array data processing, together with advanced wave propagation simulations. Parallel elastodynamic finite integration technique (EFIT) is used for the example simulations. Multi-dimensional Fourier transform is used to convert time-space wavefield data into frequency-wavenumber domain. Wave propagation in the wavenumber-frequency domain shows clear distinction among the guided wave modes that are present. This allows for extracting a guided wave mode through filtering and reconstruction techniques. Presence of delamination causes spectral change accordingly. Results from 3D CFRP guided wave simulations with delamination damage in flat-plate specimens are used for wave interaction with structural defect study.
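
    The frequency-wavenumber filtering step can be sketched as follows (a synthetic two-mode wavefield with made-up wavenumbers, not the EFIT simulation data): a multi-dimensional FFT maps the time-space wavefield into the frequency-wavenumber domain, a mask keeps one wavenumber band, and an inverse transform reconstructs the corresponding single-mode wavefield.

        # Sketch: wavenumber-domain filtering of a time-space wavefield
        import numpy as np

        dt, dx, nt, nx = 1e-7, 1e-3, 1024, 256
        t = np.arange(nt) * dt
        x = np.arange(nx) * dx
        f0, k1, k2 = 200e3, 150.0, 400.0      # excitation frequency (Hz), two mode wavenumbers (1/m)

        # Synthetic wavefield: two guided-wave modes with different wavenumbers
        wavefield = (np.sin(2 * np.pi * (f0 * t[:, None] - k1 * x[None, :]))
                     + 0.5 * np.sin(2 * np.pi * (f0 * t[:, None] - k2 * x[None, :])))

        # Multi-dimensional Fourier transform: time-space -> frequency-wavenumber domain
        FK = np.fft.fft2(wavefield)
        ks = np.fft.fftfreq(nx, dx)           # spatial frequencies (cycles per metre)

        # Keep only wavenumbers near mode 1 (|k| within 100 1/m of k1), then reconstruct
        mask = (np.abs(np.abs(ks) - k1) < 100.0)[None, :].astype(float)
        mode1 = np.real(np.fft.ifft2(FK * mask))

        kept = np.sum(np.abs(FK * mask) ** 2) / np.sum(np.abs(FK) ** 2)
        print("fraction of wavefield energy kept in the mode-1 band:", kept)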

  13. Model-based Processing of Micro-cantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Lee, C L; Rudd, R E; Burnham, A K

    2004-11-17

    We develop a model-based processor (MBP) for a micro-cantilever array sensor to detect target species in solution. After discussing the generalized framework for this problem, we develop the specific model used in this study. We perform a proof-of-concept experiment, fit the model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: (1) averaged deflection data, and (2) multi-channel data. In both cases the evaluation proceeds by first performing a model-based parameter estimation to extract the model parameters, next performing a Gauss-Markov simulation, designing the optimal MBP and finally applying it to measured experimental data. The simulation is used to evaluate the performance of the MBP in the multi-channel case and compare it to a ''smoother'' (''averager'') typically used in this application. It was shown that the MBP not only provides a significant gain ({approx} 80dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, though it includes a correctable systematic bias error. The project's primary accomplishment was the successful application of model-based processing to signals from micro-cantilever arrays: 40-60 dB improvement vs. the smoother algorithm was demonstrated. This result was achieved through the development of appropriate mathematical descriptions for the chemical and mechanical phenomena, and incorporation of these descriptions directly into the model-based signal processor. A significant challenge was the development of the framework which would maximize the usefulness of the signal processing algorithms while ensuring the accuracy of the mathematical description of the chemical-mechanical signal. Experimentally, the difficulty was to identify and characterize the non

  14. A bit-serial VLSI array processing chip for image processing

    NASA Technical Reports Server (NTRS)

    Heaton, Robert; Blevins, Donald; Davis, Edward

    1990-01-01

    An array processing chip integrating 128 bit-serial processing elements (PEs) on a single die is discussed. Each PE has a 16-function logic unit, a single-bit adder, a 32-b variable-length shift register, and 1 kb of local RAM. Logic in each PE provides the capability to mask PEs individually. A modified grid interconnection scheme allows each PE to communicate with each of its eight nearest neighbors. A 32-b bus is used to transfer data to and from the array in a single cycle. Instruction execution is pipelined, enabling all instructions to be executed in a single cycle. The 1-micron CMOS design contains over 1.1 x 10 to the 6th transistors on an 11.0 x 11.7-mm die.

  15. A High-Speed Adaptively-Biased Current-to-Current Front-End for SSPM Arrays

    NASA Astrophysics Data System (ADS)

    Zheng, Bob; Walder, Jean-Pierre; Lippe, Henrik vonder; Moses, William; Janecek, Martin

    Solid-state photomultiplier (SSPM) arrays are an interesting technology for use in PET detector modules due to their low cost, high compactness, insensitivity to magnetic fields, and sub-nanosecond timing resolution. However, the large intrinsic capacitance of SSPM arrays results in RC time constants that can severely degrade the response time, which leads to a trade-off between array size and speed. Instead, we propose a front-end that utilizes an adaptively biased current-to-current converter that minimizes the resistance seen by the SSPM array, thus preserving the timing resolution for both large and small arrays. This enables the use of large SSPM arrays with resistive networks, which creates position information and minimizes the number of outputs for compatibility with general PET multiplexing schemes. By tuning the bias of the feedback amplifier, the chip allows for precise control of the closed-loop gain, ensuring stability and fast operation from loads as small as 50pF to loads as large as 1nF. The chip has 16 input channels and 4 outputs capable of driving 100 n loads. The power consumption is 12mW per channel and 360mW for the entire chip. The chip has been designed and fabricated in an AMS 0.35um high-voltage technology, and demonstrates a fast rise-time response and low noise performance.

  16. Behavioral training promotes multiple adaptive processes following acute hearing loss

    PubMed Central

    Keating, Peter; Rosenior-Patten, Onayomi; Dahmen, Johannes C; Bell, Olivia; King, Andrew J

    2016-01-01

    The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders. DOI: http://dx.doi.org/10.7554/eLife.12264.001 PMID:27008181

  17. Assessing the Process of Marital Adaptation: The Marital Coping Inventory.

    ERIC Educational Resources Information Center

    Zborowski, Lydia L.; Berman, William H.

    Studies on coping with life events identify marriage as a distinct situational stressor, in which a wide range of coping strategies specific to the marital relationship are employed. This study examined the process of marital adaptation, identified as a style of coping, in 116 married volunteers. Subjects completed a demographic questionnaire, the…

  18. Real-time processing for Fourier domain optical coherence tomography using a field programmable gate array

    PubMed Central

    Ustun, Teoman E.; Iftimia, Nicusor V.; Ferguson, R. Daniel; Hammer, Daniel X.

    2008-01-01

    Real-time display of processed Fourier domain optical coherence tomography (FDOCT) images is important for applications that require instant feedback of image information, for example, systems developed for rapid screening or image-guided surgery. However, the computational requirements for high-speed FDOCT image processing usually exceed the capabilities of most computers and therefore display rates rarely match acquisition rates for most devices. We have designed and developed an image processing system, including hardware based upon a field-programmable gate array, firmware, and software, that enables real-time display of processed images at rapid line rates. The system was designed to be extremely flexible and inserted in-line between any FDOCT detector and any Camera Link frame grabber. Two versions were developed for spectrometer-based and swept source-based FDOCT systems, the latter having an additional custom high-speed digitizer on the front end but using all the capabilities and features of the former. The system was tested in humans and monkeys using an adaptive optics retinal imager, in zebrafish using a dual-beam Doppler instrument, and in human tissue using a swept source microscope. A display frame rate of 27 fps for fully processed FDOCT images (1024 axial pixels×512 lateral A-scans) was achieved in the spectrometer-based systems. PMID:19045902

  19. Adaptive control of surface finish in automated turning processes

    NASA Astrophysics Data System (ADS)

    García-Plaza, E.; Núñez, P. J.; Martín, A. R.; Sanz, A.

    2012-04-01

    The primary aim of this study was to design and develop an on-line control system for surface finish in automated CNC turning processes. The control system consisted of two basic phases: during the first phase, surface roughness was monitored through cutting force signals; the second phase involved a closed-loop adaptive control system based on data obtained during the monitoring of the cutting process. The system ensures that surface roughness is maintained at optimum values by adjusting the feed rate through communication with the PLC of the CNC machine. A monitoring and adaptive control system has been developed that enables the real-time monitoring of surface roughness during CNC turning operations. The system detects and prevents faults in automated turning processes and applies corrective measures during the cutting process that raise quality and reliability, reducing the need for quality control.
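
    A minimal closed-loop sketch (a textbook roughness model Ra ≈ f²/(32·r) and made-up gains stand in for the roughness estimate derived from cutting-force signals; this is not the authors' controller) illustrates adjusting the feed rate to hold roughness at a target value:

        # Sketch: feed-rate correction loop that holds surface roughness near a target
        import numpy as np

        def simulated_roughness(feed, nose_radius=0.8e-3, noise=0.05e-6, rng=np.random.default_rng(4)):
            """Stand-in for the roughness estimate obtained from cutting-force signals (m)."""
            return feed**2 / (32.0 * nose_radius) + rng.normal(0.0, noise)

        def adaptive_feed_control(ra_target, feed0, n_passes=30, gain=0.4):
            feed = feed0
            for _ in range(n_passes):
                ra = simulated_roughness(feed)                 # monitored roughness
                error = (ra_target - ra) / ra_target
                feed *= 1.0 + gain * error                     # feed-rate correction sent to the PLC
                feed = float(np.clip(feed, 0.02e-3, 0.5e-3))   # respect machine feed limits (m/rev)
            return feed, simulated_roughness(feed)

        feed, ra = adaptive_feed_control(ra_target=1.6e-6, feed0=0.3e-3)
        print(f"converged feed: {feed*1e3:.3f} mm/rev, roughness: {ra*1e6:.2f} um")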

  20. Adaption of the Magnetometer Towed Array geophysical system to meet Department of Energy needs for hazardous waste site characterization

    SciTech Connect

    Cochran, J.R.; McDonald, J.R.; Russell, R.J.; Robertson, R.; Hensel, E.

    1995-10-01

    This report documents US Department of Energy (DOE)-funded activities that have adapted the US Navy's Surface Towed Ordnance Locator System (STOLS) to meet DOE needs for a "... better, faster, safer and cheaper ..." system for characterizing inactive hazardous waste sites. These activities were undertaken by Sandia National Laboratories (Sandia), the Naval Research Laboratory, Geo-Centers Inc., New Mexico State University and others under the title of the Magnetometer Towed Array (MTA).

  1. Two subroutines used in processing of arrayed data files

    NASA Astrophysics Data System (ADS)

    Wu, Guang-Jie

    Arrayed (tabular) data files are commonly used in astronomy. Such a file may be a plain text file prepared with the widely used editor "EDIT", a table compiled in Microsoft WORD or Excel, a FITS table, and so on. The CDS database (Centre de Données astronomiques de Strasbourg) alone holds thousands of star catalogues. A catalogue obtained from a colleague may have been compiled years ago with any of a variety of software packages and may therefore have its own peculiarities. Working with such multi-column data files, and building new ones, is a routine task. Row-oriented processing of this kind, such as removing or adding rows, is easy with software like "EDIT", but column-oriented operations can be troublesome. A particular source of trouble is the Tab character: different software packages, and even different printers from the same manufacturer, interpret it differently, because a single Tab may stand for anything from one to eight spaces. A ready-made program is not always at hand. For any data file that can be opened with "EDIT", the two programs presented in this paper help to diagnose such problems and to resolve them conveniently and easily: they convert all Tab characters into the corresponding spaces, and they can extract columns, delete columns, insert blanks, or link two data files side by side as columns of a single file.
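
    A minimal Python sketch of the two kinds of operation described above (assumed rough equivalents of the paper's subroutines, not the original code) is shown below: expanding Tab characters into the corresponding spaces, and linking two files side by side as columns of one file.

```python
# Minimal sketches of the column-oriented operations described above:
# expanding Tab characters to a fixed column grid, and pasting two
# catalogue files together as side-by-side columns.
def expand_tabs(path_in, path_out, tabsize=8):
    with open(path_in) as src, open(path_out, "w") as dst:
        for line in src:
            dst.write(line.expandtabs(tabsize))    # every Tab becomes 1..tabsize spaces

def paste_columns(path_a, path_b, path_out, sep="  "):
    with open(path_a) as fa, open(path_b) as fb, open(path_out, "w") as dst:
        for la, lb in zip(fa, fb):                 # link two files side by side
            dst.write(la.rstrip("\n") + sep + lb)
```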

  2. Frequency Adaptability and Waveform Design for OFDM Radar Space-Time Adaptive Processing

    SciTech Connect

    Sen, Satyabrata; Glover, Charles Wayne

    2012-01-01

    We propose an adaptive waveform design technique for an orthogonal frequency division multiplexing (OFDM) radar signal employing a space-time adaptive processing (STAP) technique. We observe that there are inherent variabilities of the target and interference responses in the frequency domain. Therefore, the use of an OFDM signal can not only increase the frequency diversity of our system, but also improve the target detectability by adaptively modifying the OFDM coefficients in order to exploit the frequency-variabilities of the scenario. First, we formulate a realistic OFDM-STAP measurement model considering the sparse nature of the target and interference spectra in the spatio-temporal domain. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. With numerical examples we demonstrate that the resultant OFDM-STAP filter-weights are adaptable to the frequency-variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP-performance improvement.
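
    The weight-vector computation stated above can be illustrated with a short generalized-eigenproblem sketch. The covariance matrices below are random placeholders rather than an OFDM-STAP measurement model; only the final two lines reflect the stated result.

```python
# Sketch of the stated weight computation: the STAP weights are taken as the
# generalized eigenvector of (interference covariance, target covariance)
# associated with the smallest generalized eigenvalue.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 16                                             # space-time degrees of freedom
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R_int = A @ A.conj().T + n * np.eye(n)             # interference-plus-noise covariance (placeholder)
R_tgt = B @ B.conj().T + np.eye(n)                 # target covariance (placeholder)

vals, vecs = eigh(R_int, R_tgt)                    # solves R_int w = lambda R_tgt w
w_stap = vecs[:, 0]                                # eigenvector of the minimum eigenvalue
```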

  3. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  4. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
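
    A toy sketch of the GA-plus-fuzzy-controller idea appears below. The one-line pH "plant", the two singleton fuzzy rules, and the mutation-only GA are all illustrative assumptions, not the Bureau of Mines system; the point is only how a GA can tune fuzzy-controller parameters against a closed-loop cost.

```python
# Toy GA-tuned fuzzy controller for a crude one-tank pH mixing model.
# Everything here is an illustrative assumption.
import random

def fuzzy_controller(error, params):
    width, dose = params                           # GA-tuned membership width and dose size
    mu_acid = max(0.0, min(1.0,  error / width))   # "pH too low"  -> add base
    mu_base = max(0.0, min(1.0, -error / width))   # "pH too high" -> add acid
    return dose * mu_acid - dose * mu_base         # defuzzified control action

def run_episode(params, target=7.0, steps=50):
    ph, cost = 4.0, 0.0
    for _ in range(steps):
        ph += 0.1 * fuzzy_controller(target - ph, params)   # plant response to dosing
        cost += abs(target - ph)
    return cost

# Minimal mutation-only GA: keep the best candidates and mutate them.
population = [[random.uniform(0.5, 5.0), random.uniform(0.5, 5.0)] for _ in range(20)]
for generation in range(30):
    population.sort(key=run_episode)               # lower closed-loop cost is better
    parents = population[:5]
    population = parents + [
        [max(0.1, p + random.gauss(0, 0.2)) for p in random.choice(parents)]
        for _ in range(15)
    ]
best = min(population, key=run_episode)
```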

  5. Reconfigurable mask for adaptive coded aperture imaging (ACAI) based on an addressable MOEMS microshutter array

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley

    2007-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-Ray or gamma ray bands. More recent applications have emerged in the visible and infrared bands for low cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5 μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100kHz) - allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.

  6. Epidemic processes over adaptive state-dependent networks

    NASA Astrophysics Data System (ADS)

    Ogura, Masaki; Preciado, Victor M.

    2016-06-01

    In this paper we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. In this paper we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures.
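
    For orientation, the sketch below computes the reference quantity mentioned above: the standard SIS epidemic threshold of a static network, 1/λmax(A). The ASIS lower bound of the paper rescales this threshold by a factor that depends on the adaptation rates; that factor is not reproduced here.

```python
# Standard SIS epidemic threshold over a static network:
# the critical infection-to-recovery ratio is 1 / lambda_max(adjacency).
import numpy as np

def sis_threshold(adjacency):
    eigvals = np.linalg.eigvalsh(adjacency)        # adjacency assumed symmetric
    return 1.0 / eigvals.max()                     # critical beta/delta ratio

# Example: a ring of 10 nodes, lambda_max = 2, so the threshold is 0.5
A = np.zeros((10, 10))
for i in range(10):
    A[i, (i + 1) % 10] = A[(i + 1) % 10, i] = 1
print(sis_threshold(A))
```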

  7. Epidemic processes over adaptive state-dependent networks.

    PubMed

    Ogura, Masaki; Preciado, Victor M

    2016-06-01

    In this paper we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. In this paper we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures. PMID:27415289

  8. Structure and Process of Infrared Hot Electron Transistor Arrays

    PubMed Central

    Fu, Richard

    2012-01-01

    An infrared hot-electron transistor (IHET) 5 × 8 array with a common base configuration that allows two-terminal readout integration was investigated and fabricated for the first time. The IHET structure provides a maximum factor of six in improvement in the photocurrent to dark current ratio compared to the basic quantum well infrared photodetector (QWIP), and hence it improved the array S/N ratio by the same factor. The study also showed for the first time that there is no electrical cross-talk among individual detectors, even though they share the same emitter and base contacts. Thus, the IHET structure is compatible with existing electronic readout circuits for photoconductors in producing sensitive focal plane arrays. PMID:22778655

  9. Thermodynamic Costs of Information Processing in Sensory Adaptation

    PubMed Central

    Sartori, Pablo; Granger, Léo; Lee, Chiu Fan; Horowitz, Jordan M.

    2014-01-01

    Biological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for small organisms, which, unlike everyday computers, operate at very low energies. In this paper, we establish a general framework for the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing and frequency response. PMID:25503948

  10. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    PubMed

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those produced by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results showed negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. PMID:26302858

  11. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays

    PubMed Central

    Lutton, Rebecca E.M.; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A.David; Donnelly, Ryan F.

    2015-01-01

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14 × 14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those produced by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results showed negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. PMID:26302858

  12. Micromachined Thermoelectric Sensors and Arrays and Process for Producing

    NASA Technical Reports Server (NTRS)

    Foote, Marc C. (Inventor); Jones, Eric W. (Inventor); Caillat, Thierry (Inventor)

    2000-01-01

    Linear arrays with up to 63 micromachined thermopile infrared detectors on silicon substrates have been constructed and tested. Each detector consists of a suspended silicon nitride membrane with 11 thermocouples of sputtered Bi-Te and Bi-Sb-Te thermoelectric films. At room temperature and under vacuum these detectors exhibit response times of 99 ms, zero-frequency D* values of 1.4 × 10^9 cm·Hz^(1/2)/W, and responsivity values of 1100 V/W when viewing a 1000 K blackbody source. The only measured source of noise above 20 mHz is Johnson noise from the detector resistance. These results represent the best performance reported to date for an array of thermopile detectors. The arrays are well suited for uncooled dispersive point spectrometers. In another embodiment, also with Bi-Te and Bi-Sb-Te thermoelectric materials on micromachined silicon nitride membranes, detector arrays have been produced with D* values as high as 2.2 × 10^9 cm·Hz^(1/2)/W for 83 ms response times.

  13. Dimpled Ball Grid Array process development for space flight applications

    NASA Technical Reports Server (NTRS)

    Barr, S. L.; Mehta, A.

    2000-01-01

    The 472 Dimpled Ball Grid Array (D-BGA) package has not been used in past space flight environments; therefore, it is necessary to determine the robustness and reliability of its solder joints. The 472 D-BGA packages passed the environmental tests within the specifications and are now qualified for use on space flight electronics.

  14. Orbital Processing of Eutectic Rod-Like Arrays

    NASA Technical Reports Server (NTRS)

    Larson, David J., Jr.

    1998-01-01

    The eutectic is one of only three solidification classes that exist. The others are isostructural and peritectic-class reactions, respectively. Simplistically, in a binary eutectic phase diagram, a single liquid phase isothermally decomposes to two solid phases in a cooperative manner. The melting point minimum at the eutectic composition, isothermal solidification temperature, near-isocompositional solidification and refined solidification microstructure lend themselves naturally to such applications as brazing and soldering, industries that eutectic alloys dominate. Interest in direct process control of microstructures has led, more recently, to in-situ eutectic directional solidification with applications in electro-magnetics and electro-optics. In these cases, controlled structural refinement and the high aspect ratio and regularity of the distributed eutectic phases is highly significant to the fabrication and application of these in-situ natural composites. The natural pattern formation and scaling of the dispersed phase on a sub-micron scale has enormous potential application, since fabricating bulk materials on this scale mechanically has proven to be particularly difficult. It is thus of obvious importance to understand the solidification of eutectic materials since they are of great commercial significance. The dominant theory that describes eutectic solidification was derived for diffusion-controlled growth of alloys where both solid eutectic phases solidify metallically, i.e. without faceting at the solidification interface. Both high volume fraction (lamellar) and low volume fraction (rod-like) regular metallic arrays are treated by this theory. Many of the useful solders and brazements, however, and most of the regular in-situ composites are characterized by solidification reactions that are faceted/non-faceted in nature, rather than doubly non-faceted (metallic). Further, diffusion-controlled growth conditions are atypical terrestrially since

  15. Adapting physically complete models to vehicle-based EMI array sensor data: data inversion and discrimination studies

    NASA Astrophysics Data System (ADS)

    Shubitidze, Fridon; Miller, Jonathan S.; Schultz, Gregory M.; Marble, Jay A.

    2010-04-01

    This paper reports vehicle-based electromagnetic induction (EMI) array sensor data inversion and discrimination results. Recent field studies show that EMI arrays, such as the Minelab Single Transmitter Multiple Receiver (STMR) and the Geophex GEM-5 EMI array, provide a fast and safe way to detect subsurface metallic targets such as landmines, unexploded ordnance (UXO) and buried explosives. The array sensors are flexible and easily adaptable to a variety of ground vehicles and mobile platforms, which makes them very attractive for safe and cost-effective detection operations in many applications, including but not limited to explosive ordnance disposal and humanitarian UXO and demining missions. Most state-of-the-art EMI arrays measure the vertical or full vector field, or gradient tensor fields, and utilize them for real-time threat detection based on threshold analysis. Field practice shows that threshold-level detection produces a high false-alarm rate. One way to reduce these false alarms is to use EMI numerical techniques that are capable of inverting EMI array data in real time. In this work a physically complete model, known as the normalized volume/surface magnetic sources (NV/SMS) model, is adapted to vehicle-based EMI array data, such as STMR and GEM-5 measurements. The NV/SMS model can be considered a generalized volume or surface dipole model, which in a special limiting case coincides with an infinitesimal dipole model. According to the NV/SMS model, an object's response to a sensor's primary field is modeled mathematically by a set of equivalent magnetic dipoles, distributed inside the object (i.e. NVMS) or over a surface surrounding the object (i.e. NSMS). The scattered magnetic field of the NSMS is identical to that produced by a set of interacting magnetic dipoles. The amplitudes of the magnetic dipoles are normalized to the primary magnetic field, relating the induced magnetic dipole polarizability and the primary magnetic field. The magnitudes of

  16. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
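
    The two stages described above (covariance estimation followed by prediction) can be sketched for a one-dimensional field as below. The isotropic squared-exponential kernel and the grid search over a single length-scale, used here in place of the paper's anisotropic spatio-temporal covariance and MAP estimator, are simplifying assumptions.

```python
# GP sketch: fit a covariance length-scale by maximizing the log marginal
# likelihood (stand-in for a MAP estimate), then predict the field.
import numpy as np

def se_kernel(x1, x2, ell, sigma_f):
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def log_marginal_likelihood(x, y, ell, sigma_f, sigma_n):
    K = se_kernel(x, x, ell, sigma_f) + sigma_n**2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(x) * np.log(2 * np.pi)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 5, 25))                 # noisy samples of an unknown field
y = np.sin(x) + 0.1 * rng.standard_normal(25)

ells = np.linspace(0.2, 2.0, 10)                   # small grid over the length-scale
ell_hat = max(ells, key=lambda e: log_marginal_likelihood(x, y, e, 1.0, 0.1))

K = se_kernel(x, x, ell_hat, 1.0) + 0.01 * np.eye(len(x))
x_star = np.linspace(0, 5, 100)
k_star = se_kernel(x, x_star, ell_hat, 1.0)
mean_prediction = k_star.T @ np.linalg.solve(K, y)  # GP posterior mean on a grid
```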

  17. Adoption: biological and social processes linked to adaptation.

    PubMed

    Grotevant, Harold D; McDermott, Jennifer M

    2014-01-01

    Children join adoptive families through domestic adoption from the public child welfare system, infant adoption through private agencies, and international adoption. Each pathway presents distinctive developmental opportunities and challenges. Adopted children are at higher risk than the general population for problems with adaptation, especially externalizing, internalizing, and attention problems. This review moves beyond the field's emphasis on adoptee-nonadoptee differences to highlight biological and social processes that affect adaptation of adoptees across time. The experience of stress, whether prenatal, postnatal/preadoption, or during the adoption transition, can have significant impacts on the developing neuroendocrine system. These effects can contribute to problems with physical growth, brain development, and sleep, activating cascading effects on social, emotional, and cognitive development. Family processes involving contact between adoptive and birth family members, co-parenting in gay and lesbian adoptive families, and racial socialization in transracially adoptive families affect social development of adopted children into adulthood. PMID:24016275

  18. Acoustic analysis by spherical microphone array processing of room impulse responses.

    PubMed

    Khaykin, Dima; Rafaely, Boaz

    2012-07-01

    Spherical microphone arrays have been recently used for room acoustics analysis, to detect the direction-of-arrival of early room reflections, and compute directional room impulse responses and other spatial room acoustics parameters. Previous works presented methods for room acoustics analysis using spherical arrays that are based on beamforming, e.g., delay-and-sum, regular beamforming, and Dolph-Chebyshev beamforming. Although beamforming methods provide useful directional selectivity, optimal array processing methods can provide enhanced performance. However, these algorithms require an array cross-spectrum matrix with a full rank, while array data based on room impulse responses may not satisfy this condition due to the single frame data. This paper presents a smoothing technique for the cross-spectrum matrix in the frequency domain, designed for spherical microphone arrays, that can solve the problem of low rank when using room impulse response data, therefore facilitating the use of optimal array processing methods. Frequency smoothing is shown to be performed effectively using spherical arrays, due to the decoupling of frequency and angular components in the spherical harmonics domain. Experimental study with data measured in a real auditorium illustrates the performance of optimal array processing methods such as MUSIC and MVDR compared to beamforming. PMID:22779475
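
    A generic sketch of the smoothing idea is given below: single-snapshot cross-spectrum matrices are averaged over frequency bins to restore rank, after which MVDR weights can be formed. In the paper the averaging is performed in the spherical-harmonics domain, where steering decouples from frequency; here a frequency-flat steering vector is simply assumed for illustration.

```python
# Frequency smoothing of a single-frame cross-spectrum matrix followed by
# MVDR weight computation (generic illustration, not the spherical-harmonics
# formulation of the paper).
import numpy as np

def smoothed_covariance(snapshots):
    """snapshots: (n_freq, n_channels) single-frame spectra of the impulse response."""
    R = np.zeros((snapshots.shape[1],) * 2, dtype=complex)
    for x in snapshots:                            # one rank-1 term per frequency bin
        R += np.outer(x, x.conj())
    return R / len(snapshots)

def mvdr_weights(R, steering, loading=1e-3):
    R_loaded = R + loading * np.trace(R).real / len(R) * np.eye(len(R))
    w = np.linalg.solve(R_loaded, steering)
    return w / (steering.conj() @ w)               # distortionless constraint

rng = np.random.default_rng(2)
snaps = rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))
R = smoothed_covariance(snaps)
w = mvdr_weights(R, steering=np.ones(8, dtype=complex))
```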

  19. On the design of systolic-array architectures with applications to signal processing

    SciTech Connect

    Niamat, M.Y.

    1989-01-01

    Systolic arrays are networks of processors that rhythmically compute and pass data through the system. These arrays feature the important properties of modularity, regularity, local interconnections, and a high degree of pipelining and multiprocessing. In this dissertation, several systolic arrays are proposed with applications to real-time signal processing. Specifically, these arrays are designed for the rapid computation of position velocities, accelerations, and jerks associated with motion. Real-time computations of these parameters arise in many applications, notably in the areas of robotics, image processing, remote signal processing, and computer-controlled machines. The systolic arrays proposed in this dissertation can be classified into the linear, the triangular, and the mesh-connected types. In the linear category, six different systolic designs are presented. The relative merits of these designs are discussed in detail. The analysis of these designs shows that each of the arrays achieves a proportional increase in throughput. Also, by interleaving the input data items in some of these designs, the throughput rate is further doubled. This also increases the processor utilization rate to 100%. The triangular type of systolic array is found to be useful when all three parameters are to be computed simultaneously, and the mesh type when the number of signals to be processed is extremely large. The effect of direct broadcasting of data to the processing cells is also investigated. Finally, the utility of the proposed systolic arrays is illustrated by a practical design example.

  20. Addressing the need for adaptable decision processes within healthcare software.

    PubMed

    Miseldine, P; Taleb-Bendiab, A; England, D; Randles, M

    2007-03-01

    In the healthcare sector, where the decisions made by software aid in the direct treatment of patients, software requires high levels of assurance to ensure the correct interpretation of the tasks it is automating. This paper argues that introducing adaptable decision processes within eHealthcare initiatives can reduce software-maintenance complexity and, due to the instantaneous, distributed deployment of decision models, allow for quicker updates of current best practice, thereby improving patient care. The paper provides a description of a collection of technologies and tools that can be used to provide the required adaptation in a decision process. These tools are evaluated against two case studies that individually highlight different requirements in eHealthcare: a breast-cancer decision-support system developed in partnership with several of the UK's leading cancer hospitals, and a dental triage system developed in partnership with the Royal Liverpool Hospital. Both show how the complete process flow of software can be abstracted and adapted, and the benefits that arise as a result. PMID:17365643

  1. Algorithms and architectures for adaptive least squares signal processing, with applications in magnetoencephalography

    SciTech Connect

    Lewis, P.S.

    1988-10-01

    Least squares techniques are widely used in adaptive signal processing. While algorithms based on least squares are robust and offer rapid convergence properties, they also tend to be complex and computationally intensive. To enable the use of least squares techniques in real-time applications, it is necessary to develop adaptive algorithms that are efficient and numerically stable, and can be readily implemented in hardware. The first part of this work presents a uniform development of general recursive least squares (RLS) algorithms, and multichannel least squares lattice (LSL) algorithms. RLS algorithms are developed for both direct estimators, in which a desired signal is present, and for mixed estimators, in which no desired signal is available, but the signal-to-data cross-correlation is known. In the second part of this work, new and more flexible techniques of mapping algorithms to array architectures are presented. These techniques, based on the synthesis and manipulation of locally recursive algorithms (LRAs), have evolved from existing data dependence graph-based approaches, but offer the increased flexibility needed to deal with the structural complexities of the RLS and LSL algorithms. Using these techniques, various array architectures are developed for each of the RLS and LSL algorithms and the associated space/time tradeoffs presented. In the final part of this work, the application of these algorithms is demonstrated by their employment in the enhancement of single-trial auditory evoked responses in magnetoencephalography. 118 refs., 49 figs., 36 tabs.
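
    For reference, a minimal exponentially weighted recursive least squares (RLS) filter of the direct-estimation kind discussed above is sketched below in textbook form; the lattice (LSL) and array-mapped variants developed in the report are not shown.

```python
# Textbook exponentially weighted RLS identifying an unknown FIR system.
import numpy as np

def rls(x, d, order=4, lam=0.99, delta=100.0):
    w = np.zeros(order)
    P = delta * np.eye(order)                      # inverse correlation matrix estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]           # regressor: current and past inputs
        k = P @ u / (lam + u @ P @ u)              # gain vector
        e = d[n] - w @ u                           # a priori error
        w = w + k * e                              # weight update
        P = (P - np.outer(k, u @ P)) / lam         # Riccati-style update of P
    return w

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.5, -0.3, 0.2, 0.1])[:len(x)]  # unknown FIR system to identify
w_hat = rls(x, d)                                   # converges toward [0.5, -0.3, 0.2, 0.1]
```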

  2. Signal processing and compensation electronics for junction field-effect transistor /JFET/ focal plane arrays

    NASA Astrophysics Data System (ADS)

    Wittig, K. R.

    1982-06-01

    A signal processing system has been designed and constructed for a pyroelectric infrared area detector which uses a matrix-addressable JFET array for readout and for on-focal plane preamplification. The system compensates for all offset and gain nonuniformities in and after the array. Both compensations are performed in real time at standard television rates, so that changes in the response characteristics of the array are automatically corrected for. Two-point compensation is achieved without the need for two separate temperature references. The focal plane circuitry used to read out the array, the offset and gain compensation algorithms, the architecture of the signal processor, and the system hardware are described.
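
    The offset and gain compensation described above amounts, per detector element, to a two-point correction. The sketch below shows the conventional software version using two reference exposures; the reported system performs the correction in real-time hardware and, notably, without two separate temperature references.

```python
# Conventional per-pixel two-point (offset/gain) nonuniformity correction,
# shown in software for illustration only.
import numpy as np

def calibrate(frames_low, frames_high, level_low, level_high):
    """Per-pixel gain and offset from responses to two known reference levels."""
    mean_low = frames_low.mean(axis=0)
    mean_high = frames_high.mean(axis=0)
    gain = (level_high - level_low) / (mean_high - mean_low)
    offset = level_low - gain * mean_low
    return gain, offset

def compensate(raw_frame, gain, offset):
    return gain * raw_frame + offset               # uniform response across the array

rng = np.random.default_rng(4)
low = 10 + rng.standard_normal((16, 64, 64))       # stack of low-reference frames
high = 50 + rng.standard_normal((16, 64, 64))      # stack of high-reference frames
gain, offset = calibrate(low, high, level_low=10.0, level_high=50.0)
corrected = compensate(55 + rng.standard_normal((64, 64)), gain, offset)
```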

  3. Application of Seismic Array Processing to Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for the near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 min, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800

  4. Batch-processed close-track array heads (abstract)

    NASA Astrophysics Data System (ADS)

    Tang, D. D.; Santini, H.; Lee, R. E.; Ju, K.; Krounbi, M.

    1997-04-01

    This article describes novel array heads for close packed track recording. The heads are batch fabricated on wafers in a linear fashion (Fig. 1). These 60-turn thin-film inductive heads are designed with 6 μm pitch helical coils and planar side-by-side P1/G/P2 yoke structures. The linear head array is placed along the upstream-to-downstream direction of the track. By skewing the array slightly off the track direction, each head of the array aligns to an individual track (Fig. 2). In this case, the track pitch is about 5 μm, which is the yoke height. With this head arrangement, even though thermal expansion causes the head-to-head distance to increase along the upstream-downstream direction, it does not cause a thermally induced track misregistration problem. The increased head-to-head distance only affects the timing of signals between tracks, which can be compensated by the channel electronics. Thus, the thermally induced track misregistration problem is eliminated using this design. The guardbands between tracks are not necessary and a close-packed track recording is possible. A state of the art head impedance of the 60-turn head is obtained: 11 Ω and 0.40 μH. The gap-to-gap pitch is 100 μm. The overall head-to-head isolation is greater than 50 dB at 10 MHz. Such a large isolation is realized by suppressing the capacitive coupling between lead wires using a ground plane and grounded wall structures. The tight winding of the helical coils reduces the magnetic coupling between the heads.

  5. Studying the star formation process with adaptive optics

    NASA Astrophysics Data System (ADS)

    Menard, Francois; Dougados, Catherine; Duchene, Gaspard; Bouvier, Jerome; Duvert, Gilles; Lavalley, Claudia; Monin, Jean-Louis; Beuzit, Jean-Luc

    2000-07-01

    Young Stellar Objects (YSOs) are the builders of worlds. During its infancy, a star transforms ordinary interstellar dust particles into astronomical gold: planets. Needless to say, the process is complex and largely unknown to date. Yet violent and spectacular events of mass ejection are witnessed, disks in Keplerian rotation are detected, and multiple stars dancing around each other are found. All of these are traces of the stellar and planet formation process. The high angular resolution provided by adaptive optics, and the related gain in sensitivity, have allowed major breakthrough discoveries to be made in each of these specific fields, and our understanding of the various physical processes involved in the formation of a star has leaped forward tremendously over the last few years. In what follows, meant as a report of the recent progress in star formation studies made possible by adaptive optics, we describe new results obtained at optical and near-infrared wavelengths, in imaging and spectroscopic modes. Our images of accretion disks and ionized stellar jets permit direct measurements of many physical parameters and shed light on the physics of the accretion and ejection processes. Although the accretion/ejection process so fundamental to star formation is usually studied around single objects, most young stars form as part of multiple systems. We also present our findings on how the fraction of stars in binary systems evolves with age. The implications of these results for the conditions under which these stars must have formed are discussed.

  6. Adaptive smart simulator for characterization and MPPT construction of PV array

    NASA Astrophysics Data System (ADS)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-07-01

    Partial shading is among the most important problems in large photovoltaic arrays. Much of the literature addresses the modeling, control and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern for the proposed photovoltaic array, so that the resulting information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; the tool is simple, easy to use, and responsive, and it supports large array simulations that can be interfaced with MPPT algorithms and power electronic converters.
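
    One MPPT algorithm such a simulator can be used to construct and test is perturb-and-observe, sketched below. The single-peak power curve is a toy placeholder; under the partial shading the simulator is designed to reproduce, the P-V curve has multiple peaks and plain perturb-and-observe can lock onto a local one, which is exactly why a shading-aware simulator is useful.

```python
# Perturb-and-observe MPPT on a toy single-peak P(V) curve.
def pv_power(v):
    return max(0.0, v * (8.0 - 0.05 * v**2))       # placeholder P(V) with one maximum

def perturb_and_observe(v=10.0, step=0.5, iterations=100):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step                      # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                             # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()               # oscillates near the true maximum
```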

  7. Automatic ultrasonic imaging system with adaptive-learning-network signal-processing techniques

    SciTech Connect

    O'Brien, L.J.; Aravanis, N.A.; Gouge, J.R. Jr.; Mucciardi, A.N.; Lemon, D.K.; Skorpik, J.R.

    1982-04-01

    A conventional pulse-echo imaging system has been modified to operate with a linear ultrasonic array and associated digital electronics to collect data from a series of defects fabricated in aircraft quality steel blocks. A thorough analysis of the defect responses recorded with this modified system has shown that considerable improvements over conventional imaging approaches can be obtained in the crucial areas of defect detection and characterization. A combination of advanced signal processing concepts with the Adaptive Learning Network (ALN) methodology forms the basis for these improvements. Use of established signal processing algorithms such as temporal and spatial beam-forming in concert with a sophisticated detector has provided a reliable defect detection scheme which can be implemented in a microprocessor-based system to operate in an automatic mode.

  8. Steerable Space Fed Lens Array for Low-Cost Adaptive Ground Station Applications

    NASA Technical Reports Server (NTRS)

    Lee, Richard Q.; Popovic, Zoya; Rondineau, Sebastien; Miranda, Felix A.

    2007-01-01

    The Space Fed Lens Array (SFLA) is an alternative to a phased array antenna that replaces large numbers of expensive solid-state phase shifters with a single spatial feed network. SFLA can be used for multi-beam application where multiple independent beams can be generated simultaneously with a single antenna aperture. Unlike phased array antennas where feed loss increases with array size, feed loss in a lens array with more than 50 elements is nearly independent of the number of elements, a desirable feature for large apertures. In addition, SFLA has lower cost as compared to a phased array at the expense of total volume and complete beam continuity. For ground station applications, both of these tradeoff parameters are not important and can thus be exploited in order to lower the cost of the ground station. In this paper, we report the development and demonstration of a 952-element beam-steerable SFLA intended for use as a low cost ground station for communicating and tracking of a low Earth orbiting satellite. The dynamic beam steering is achieved through switching to different feed-positions of the SFLA via a beam controller.

  9. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than that under PLUM by overlapping processing and data migration.

  10. Adaptive neural information processing with dynamical electrical synapses

    PubMed Central

    Xiao, Lei; Zhang, Dan-ke; Li, Yuan-qing; Liang, Pei-ji; Wu, Si

    2013-01-01

    The present study investigates a potential computational role of dynamical electrical synapses in neural information processing. Compared with chemical synapses, electrical synapses are more efficient in modulating the concerted activity of neurons. Based on the experimental data, we propose a phenomenological model for short-term facilitation of electrical synapses. The model satisfactorily reproduces the phenomenon that the neuronal correlation increases although the neuronal firing rates attenuate during luminance adaptation. We explore how the stimulus information is encoded in parallel by firing rates and correlated activity of neurons, and find that dynamical electrical synapses mediate a transition from the firing rate code to the correlation one during luminance adaptation. The latter encodes the stimulus information by using the concerted, but lower, neuronal firing rate, and hence is economically more efficient. PMID:23596413

  11. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super-high-resolution technology closer to clinical viability. PMID:25570838

  12. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at European longitude, each with 400 12-m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitor and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.

  13. A solar array module fabrication process for HALE solar electric UAVs

    SciTech Connect

    Carey, P.G.; Aceves, R.C.; Colella, N.J.; Thompson, J.B.; Williams, K.A.

    1993-12-01

    We describe a fabrication process to manufacture high power to weight ratio flexible solar array modules for use on high altitude long endurance (HALE) solar electric unmanned air vehicles (UAVs). A span-loaded flying wing vehicle, known as the RAPTOR Pathfinder, is being employed as a flying test bed to expand the envelope of solar powered flight to high altitudes. It requires multiple light weight flexible solar array modules able to endure adverse environmental conditions. At high altitudes the solar UV flux is significantly enhanced relative to sea level, and extreme thermal variations occur. Our process involves first electrically interconnecting solar cells into an array followed by laminating them between top and bottom laminated layers into a solar array module. After careful evaluation of candidate polymers, fluoropolymer materials have been selected as the array laminate layers because of their inherent abilities to withstand the hostile conditions imposed by the environment.

  14. Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine Nagib

    1994-01-01

    We address various issues dealing with the use of disk arrays in transaction processing environments. We look at the problem of transaction undo recovery and propose a scheme for using the redundancy in disk arrays to support undo recovery. The scheme uses twin page storage for the parity information in the array. It speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters as well as temporary site failures and disk crashes is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.

  15. Design, processing and testing of LSI arrays, hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.; Rothrock, C. W.

    1979-01-01

    Mathematical cost models previously developed for hybrid microelectronic subsystems were refined and expanded. Rework terms related to substrate fabrication, nonrecurring developmental and manufacturing operations, and prototype production are included. Sample computer programs were written to demonstrate hybrid microelectronic applications of these cost models. Computer programs were generated to calculate and analyze values for the total microelectronics costs. Large scale integrated (LSI) chips utilizing tape chip carrier technology were studied. The feasibility of interconnecting arrays of LSI chips utilizing tape chip carrier and semiautomatic wire bonding technology was demonstrated.

  16. Experimental results for a photonic time reversal processor for the adaptive control of an ultra wideband phased array antenna

    NASA Astrophysics Data System (ADS)

    Zmuda, Henry; Fanto, Michael; McEwen, Thomas

    2008-04-01

    This paper describes a new concept for a photonic implementation of a time reversed RF antenna array beamforming system. The process does not require analog to digital conversion to implement and is therefore particularly suited for high bandwidth applications. Significantly, propagation distortion due to atmospheric effects, clutter, etc. is automatically accounted for with the time reversal process. The approach utilizes the reflection of an initial interrogation signal off an extended target to precisely time-match the radiating elements of the array so as to re-radiate signals precisely back to the target's location. The backscattered signal(s) from the desired location is captured by each antenna and used to modulate a pulsed laser. An electrooptic switch acts as a time gate to eliminate any unwanted signals, such as those reflected from other targets whose range is different from that of the desired location, resulting in a spatial null at that location. A chromatic dispersion processor is used to extract the exact array parameters of the received signal location. Hence, other than approximate knowledge of the steering direction, needed only to establish the time gating, no knowledge of the target position is required, and no knowledge of the array element time delays is required. Target motion and/or array element jitter is automatically accounted for. Presented here are experimental results that demonstrate the ability of a photonic processor to perform the time-reversal operation on ultra-short electronic pulses.

  17. On Cognition, Structured Sequence Processing, and Adaptive Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Petersson, Karl Magnus

    2008-11-01

    Cognitive neuroscience approaches the brain as a cognitive system: a system that functionally is conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.

  18. Adaptive ocean acoustic processing for a shallow ocean experiment

    SciTech Connect

    Candy, J.V.; Sullivan, E.J.

    1995-07-19

    A model-based approach is developed to solve an adaptive ocean acoustic signal processing problem. Here we investigate the design of a model-based identifier (MBID) for a normal-mode model developed from a shallow water ocean experiment and then apply it to a set of experimental data, demonstrating the feasibility of this approach. In this problem we show how the processor can be structured to estimate the horizontal wave numbers directly from measured pressure and sound speed, thereby eliminating the need for synthetic aperture processing or a propagation model solution. Ocean acoustic signal processing has made great strides over the past decade, driven by the development of quieter submarines and the recent proliferation of diesel-powered vessels.

  19. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  20. [Super sweet corn hybrids adaptability for industrial processing. I freezing].

    PubMed

    Alfonzo, Braunnier; Camacho, Candelario; Ortiz de Bertorelli, Ligia; De Venanzi, Frank

    2002-09-01

    With the purpose of evaluating the adaptability of the super sweet corn sh2 hybrids Krispy King, Victor and 324 to the freezing process, 100 cobs of each type were frozen at -18 degrees C. After 120 days of storage, their chemical, microbiological and sensory characteristics were compared with those of a sweet su corn. Industrial quality of the freezing process and the length and number of rows of the cobs were also determined. Results revealed yields above 60% for the frozen corn. Length and number of rows of the cobs were acceptable. Most of the chemical characteristics of the super sweet hybrids were not different from those of the sweet corn assayed, at the 5% significance level. Moisture content and soluble solids of hybrid Victor, as well as total sugars of hybrid 324, were statistically different. All sh2 corns had higher pH values. During freezing, soluble solids concentration, sugars and acids decreased whereas pH increased. Frozen cobs exhibited acceptable microbiological quality, with low counts of mesophiles and total coliforms, absence of psychrophiles and fecal coliforms, and an appreciable amount of molds. In conclusion, the sh2 hybrids adapted without problems to the freezing process; they had lower contents of soluble solids and higher contents of total sugars, almost double that of the su corn; flavor, texture, sweetness and appearance of kernels were also better. Hybrid Victor was preferred by the evaluating panel and performed outstandingly owing to its yield and sensory characteristics. PMID:12448345

  1. Prediction and control of chaotic processes using nonlinear adaptive networks

    SciTech Connect

    Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  2. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.

  3. Faraday-effect light-valve arrays for adaptive optical instruments

    SciTech Connect

    Hirleman, E.D.; Dellenback, P.A.

    1987-01-01

    The ability to adapt to a range of measurement conditions by autonomously configuring software or hardware on-line will be an important attribute of next-generation intelligent sensors. This paper reviews the characteristics of spatial light modulators (SLM) with an emphasis on potential integration into adaptive optical instruments. The paper focuses on one type of SLM, a magneto-optic device based on the Faraday effect. Finally, the integration of the Faraday-effect SLM into a laser-diffraction particle-sizing instrument giving it some ability to adapt to the measurement context is discussed.

  4. Simulation of dynamic processes with adaptive neural networks.

    SciTech Connect

    Tzanos, C. P.

    1998-02-03

    Many industrial processes are highly non-linear and complex. Their simulation with first-principle or conventional input-output correlation models is not satisfactory, either because the process physics is not well understood, or it is so complex that direct simulation is either not adequately accurate, or it requires excessive computation time, especially for on-line applications. Artificial intelligence techniques (neural networks, expert systems, fuzzy logic) or their combination with simple process-physics models can be effectively used for the simulation of such processes. Feedforward (static) neural networks (FNNs) can be used effectively to model steady-state processes. They have also been used to model dynamic (time-varying) processes by adding to the network input layer input nodes that represent values of input variables at previous time steps. The number of previous time steps is problem dependent and, in general, can be determined after extensive testing. This work demonstrates that for dynamic processes that do not vary fast with respect to the retraining time of the neural network, an adaptive feedforward neural network can be an effective simulator that is free of the complexities introduced by the use of input values at previous time steps.
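
    As a rough illustration of the adaptive alternative described above, periodically retraining a feedforward network on recent data instead of feeding it lagged inputs, the sketch below retrains a tiny network on a sliding window of a slowly drifting first-order process. The process model, network size, window length, and retraining interval are all arbitrary assumptions, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in, n_hidden):
    # small one-hidden-layer network, tanh activation
    return {"W1": 0.1 * rng.standard_normal((n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": 0.1 * rng.standard_normal(n_hidden),
            "b2": 0.0}

def forward(net, X):
    H = np.tanh(X @ net["W1"] + net["b1"])
    return H @ net["W2"] + net["b2"], H

def train(net, X, y, epochs=200, lr=0.05):
    # plain batch gradient descent on squared error
    for _ in range(epochs):
        pred, H = forward(net, X)
        err = pred - y
        gW2 = H.T @ err / len(y)
        gb2 = err.mean()
        gH = np.outer(err, net["W2"]) * (1 - H**2)
        net["W2"] -= lr * gW2
        net["b2"] -= lr * gb2
        net["W1"] -= lr * (X.T @ gH) / len(y)
        net["b1"] -= lr * gH.mean(axis=0)
    return net

# slowly drifting first-order plant: y[k+1] = a(k)*y[k] + 0.5*u[k]
N, window, retrain_every = 2000, 200, 100
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(N - 1):
    a = 0.8 + 0.1 * np.sin(2 * np.pi * k / N)   # slow drift of the plant
    y[k + 1] = a * y[k] + 0.5 * u[k]

net = make_net(2, 8)
preds = np.zeros(N)
for k in range(window, N - 1):
    if k % retrain_every == 0:                  # periodic retraining = adaptation
        X = np.column_stack([y[k - window:k], u[k - window:k]])
        t = y[k - window + 1:k + 1]
        net = train(net, X, t)
    p, _ = forward(net, np.array([[y[k], u[k]]]))
    preds[k + 1] = p[0]

print("RMS one-step prediction error:",
      np.sqrt(np.mean((preds[window + 1:] - y[window + 1:])**2)))
```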

  5. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
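
    The work-farm pattern described above (a parallel set of worker objects with one input stream and one output stream) can be loosely illustrated in ordinary software terms. The sketch below uses a Python process pool as a stand-in; it is not the MPPA's structural object programming model or its self-synchronizing channels.

```python
from multiprocessing import Pool

def worker(frame):
    # stand-in for a per-object compute kernel (e.g., a block transform)
    return sum(frame) / len(frame)

if __name__ == "__main__":
    # one input stream of data items, a farm of workers, one output stream
    input_stream = [[i, i + 1, i + 2, i + 3] for i in range(16)]
    with Pool(processes=4) as farm:
        output_stream = farm.map(worker, input_stream)
    print(output_stream)
```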

  6. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly, to 1/3 of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  7. Polymer Solidification and Stabilization: Adaptable Processes for Atypical Wastes

    SciTech Connect

    Jensen, C.

    2007-07-01

    Vinyl Ester Styrene (VES) and Advanced Polymer Solidification (APS™) processes are used to solidify, stabilize, and immobilize radioactive, pyrophoric and hazardous wastes at US Department of Energy (DOE) and Department of Defense (DOD) sites, and commercial nuclear facilities. A wide range of projects has been accomplished, including in situ immobilization of ion exchange resin and carbon filter media in decommissioned submarines; underwater solidification of zirconium and hafnium machining swarf; solidification of uranium chips; impregnation of depth filters; immobilization of mercury, lead and other hazardous wastes (including paint chips and blasting media); and in situ solidification of submerged demineralizers. Discussion of the adaptability of the VES and APS™ processes is timely, given the decommissioning work at government sites, and efforts by commercial nuclear plants to reduce inventories of one-of-a-kind wastes. The VES and APS™ media and processes are highly adaptable to a wide range of waste forms, including liquids, slurries, bead and granular media, as well as metal fines, particles and larger pieces. With the ability to solidify/stabilize liquid wastes using high-speed mixing, wet sludges and solids by low-speed mixing, or bead and granular materials through in situ processing, these polymers produce a stable, rock-hard product that has the ability to sequester many hazardous waste components and create Class B and C stabilized waste forms for disposal. Technical assessment and approval of these solidification processes and final waste forms have been greatly simplified by exhaustive waste form testing, as well as multiple NRC and CRCPD waste form approvals. (authors)

  8. Precise calibration of a GNSS antenna array for adaptive beamforming applications.

    PubMed

    Daneshmand, Saeed; Sokhandan, Negin; Zaeri-Amirani, Mohammad; Lachapelle, Gérard

    2014-01-01

    The use of global navigation satellite system (GNSS) antenna arrays for applications such as interference counter-measure, attitude determination and signal-to-noise ratio (SNR) enhancement is attracting significant attention. However, precise antenna array calibration remains a major challenge. This paper proposes a new method for calibrating a GNSS antenna array using live signals and an inertial measurement unit (IMU). Moreover, a second method that employs the calibration results for the estimation of steering vectors is also proposed. These two methods are applied to the receiver in two modes, namely calibration and operation. In the calibration mode, a two-stage optimization for precise calibration is used; in the first stage, constant uncertainties are estimated while in the second stage, the dependency of each antenna element gain and phase patterns to the received signal direction of arrival (DOA) is considered for refined calibration. In the operation mode, a low-complexity iterative and fast-converging method is applied to estimate the satellite signal steering vectors using the calibration results. This makes the technique suitable for real-time applications employing a precisely calibrated antenna array. The proposed calibration method is applied to GPS signals to verify its applicability and assess its performance. Furthermore, the data set is used to evaluate the proposed iterative method in the receiver operation mode for two different applications, namely attitude determination and SNR enhancement. PMID:24887043
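
    For context, the sketch below shows what a satellite-signal steering vector looks like for an idealized array and how previously estimated per-element gain/phase corrections could be folded in. The array geometry, calibration values, and direction of arrival are made-up illustrations; this is not the paper's two-stage calibration or its iterative estimator.

```python
import numpy as np

def ideal_steering_vector(element_xyz, az, el, wavelength):
    """Plane-wave steering vector for a GNSS antenna array.

    element_xyz : (N, 3) antenna element positions in metres (local ENU frame)
    az, el      : satellite azimuth/elevation in radians
    wavelength  : carrier wavelength in metres (~0.1903 m for GPS L1)
    """
    # unit vector pointing from the array toward the satellite
    k = np.array([np.cos(el) * np.sin(az),
                  np.cos(el) * np.cos(az),
                  np.sin(el)])
    phase = 2 * np.pi / wavelength * (element_xyz @ k)
    return np.exp(1j * phase)

def apply_calibration(a_ideal, gains, phases):
    # fold previously estimated per-element gain/phase corrections into the model
    return gains * np.exp(1j * phases) * a_ideal

# 4-element square array, half-wavelength spacing, hypothetical calibration values
lam = 0.1903
d = lam / 2
xyz = np.array([[0, 0, 0], [d, 0, 0], [0, d, 0], [d, d, 0]], float)
a = ideal_steering_vector(xyz, az=np.radians(120.0), el=np.radians(45.0), wavelength=lam)
a_cal = apply_calibration(a, gains=np.array([1.0, 0.97, 1.02, 0.99]),
                          phases=np.radians([0.0, 3.0, -2.0, 1.5]))
print(np.round(a_cal, 3))
```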

  9. Array measurements adapted to the number of available sensors: Theoretical and practical approach for ESAC method

    NASA Astrophysics Data System (ADS)

    Galiana-Merino, J. J.; Rosa-Cintas, S.; Rosa-Herranz, J.; Garrido, J.; Peláez, J. A.; Martino, S.; Delgado, J.

    2016-05-01

    Array measurements of ambient noise have become a useful technique to estimate surface wave dispersion curves and subsequently the subsurface elastic parameters that characterize the studied soil. One of the logistical handicaps associated with this kind of measurement is the requirement that several stations record at the same time, which limits its applicability for research groups without sufficient infrastructure resources. In this paper, we describe the theoretical basis of the ESAC method and we deduce how the number of stations needed to implement any array layout can be reduced to only two. In this way, we propose a new methodology to implement an N-station array layout by using only M stations (M < N), which record at different positions of the original prearranged N-station geometry at different times. We also provide some practical guidelines to implement the proposed approach and we show different examples where the obtained results confirm the theoretical foundations. Thus, the study reflects that a minimum of 2 stations can be used to deploy any array layout originally designed for a higher number of sensors.

  10. Precise Calibration of a GNSS Antenna Array for Adaptive Beamforming Applications

    PubMed Central

    Daneshmand, Saeed; Sokhandan, Negin; Zaeri-Amirani, Mohammad; Lachapelle, Gérard

    2014-01-01

    The use of global navigation satellite system (GNSS) antenna arrays for applications such as interference counter-measure, attitude determination and signal-to-noise ratio (SNR) enhancement is attracting significant attention. However, precise antenna array calibration remains a major challenge. This paper proposes a new method for calibrating a GNSS antenna array using live signals and an inertial measurement unit (IMU). Moreover, a second method that employs the calibration results for the estimation of steering vectors is also proposed. These two methods are applied to the receiver in two modes, namely calibration and operation. In the calibration mode, a two-stage optimization for precise calibration is used; in the first stage, constant uncertainties are estimated while in the second stage, the dependency of each antenna element gain and phase patterns to the received signal direction of arrival (DOA) is considered for refined calibration. In the operation mode, a low-complexity iterative and fast-converging method is applied to estimate the satellite signal steering vectors using the calibration results. This makes the technique suitable for real-time applications employing a precisely calibrated antenna array. The proposed calibration method is applied to GPS signals to verify its applicability and assess its performance. Furthermore, the data set is used to evaluate the proposed iterative method in the receiver operation mode for two different applications, namely attitude determination and SNR enhancement. PMID:24887043

  11. Adaptation of the Biolog Phenotype MicroArrayTM Technology to Profile the Obligate Anaerobe Geobacter metallireducens

    SciTech Connect

    Joyner, Dominique; Fortney, Julian; Chakraborty, Romy; Hazen, Terry

    2010-05-17

    The Biolog OmniLog® Phenotype MicroArray (PM) plate technology was successfully adapted to generate a select phenotypic profile of the strict anaerobe Geobacter metallireducens (G.m.). The profile generated for G.m. provides insight into the chemical sensitivity of the organism as well as some of its metabolic capabilities when grown with a basal medium containing acetate and Fe(III). The PM technology was developed for aerobic organisms. The reduction of a tetrazolium dye by the test organism represents metabolic activity on the array, which is detected and measured by the OmniLog® system. We have previously adapted the technology for the anaerobic sulfate-reducing bacterium Desulfovibrio vulgaris. In this work, we have taken the technology a step further by adapting it for the iron-reducing obligate anaerobe Geobacter metallireducens. In an osmotic stress microarray it was determined that the organism has higher sensitivity to the impermeable solutes 3-6% KCl and 2-5% NaNO3, which stress the cell osmotically, than to permeable non-ionic solutes represented by 5-20% ethylene glycol and 2-3% urea. The osmotic stress microarray also includes an array of osmoprotectants and precursor molecules that were screened to identify substrates that would provide osmotic protection against NaCl stress. None of the substrates tested conferred resistance to elevated concentrations of salt. Verification studies in which G.m. was grown in defined medium amended with 100 mM NaCl (MIC) and the common osmoprotectants betaine, glycine and proline supported the PM findings. Further verification was done by analysis of transcriptomic profiles of G.m. grown under 100 mM NaCl stress, which revealed up-regulation of genes related to degradation rather than accumulation of the above-mentioned osmoprotectants. The phenotypic profile, supported by additional analysis, indicates that the accumulation of these osmoprotectants as a response to salt stress does not

  12. Adaptive memory: enhanced location memory after survival processing.

    PubMed

    Nairne, James S; Vanarsdall, Joshua E; Pandeirada, Josefa N S; Blunt, Janell R

    2012-03-01

    Two experiments investigated whether survival processing enhances memory for location. From an adaptive perspective, remembering that food has been located in a particular area, or that potential predators are likely to be found in a given territory, should increase the chances of subsequent survival. Participants were shown pictures of food or animals located at various positions on a computer screen. The task was to rate the ease of collecting the food or capturing the animals relative to a central fixation point. Surprise retention tests revealed that people remembered the locations of the items better when the collection or capturing task was described as relevant to survival. These data extend the generality of survival processing advantages to a new domain (location memory) by means of a task that does not involve rating the relevance of words to a scenario. PMID:22004268

  13. Signal and array processing techniques for RFID readers

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

    Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement directions of the tagged RFID items through a portal are of interest and must be determined. Doppler estimation may prove complicated or impractical to perform by RFID readers. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. Particularly, we examine the use of a pair of RFID antennas to track moving RFID tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way of identifying the direction of the tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments were conducted in the Radar Imaging and RFID Labs at Villanova University to validate the proposed technique for target movement direction estimation.
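
    The two-antenna idea above, estimating the tag direction from the spatial cross-correlation of the signals received at the two reader antennas, can be sketched as follows. The carrier wavelength, antenna spacing, noise level, and tag angle are assumed values for illustration, and the sketch ignores the antenna radiation-pattern effects the paper highlights.

```python
import numpy as np

rng = np.random.default_rng(1)

# two reader antennas separated by d, narrowband backscatter near 915 MHz
lam, d = 0.328, 0.15           # wavelength and spacing in metres (assumed values)
theta_true = np.radians(25.0)  # tag direction relative to broadside

n = 4000
s = np.exp(1j * 2 * np.pi * rng.random(n))          # random-phase narrowband signal
phase_shift = 2 * np.pi * d * np.sin(theta_true) / lam
x1 = s + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x2 = s * np.exp(1j * phase_shift) \
     + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# spatial cross-correlation between the two channels yields the phase difference
r12 = np.mean(x2 * np.conj(x1))
theta_est = np.arcsin(np.angle(r12) * lam / (2 * np.pi * d))
print(f"estimated angle: {np.degrees(theta_est):.1f} deg")
```

    Tracking the sign of the angle over successive estimates then indicates whether the tag is moving into or out of the portal.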

  14. Programmable hyperspectral image mapper with on-array processing

    NASA Technical Reports Server (NTRS)

    Cutts, James A. (Inventor)

    1995-01-01

    A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction, an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time, an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time and an electronic controller connected to the focal plane, to the electronic shutter and to the electronic spectral filter for controlling (1) the controllable rate at which the recording is transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band so as to record a selected portion of the scene through M spectral bands with a respective exposure time t(sub q) for each respective spectral band q.

  15. Model-based Processing of Microcantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Sinensky, A K; Lee, C L; Rudd, R E; Burnham, A K

    2005-04-27

    We have developed a model-based processor (MBP) for a microcantilever-array sensor to detect target species in solution. We perform a proof-of-concept experiment, fit model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest, averaged deflection data and multi-channel data. For this evaluation we extract model parameters via a model-based estimation, perform a Gauss-Markov simulation, design the optimal MBP and apply it to measured experimental data. The performance of the MBP in the multi-channel case is evaluated by comparison to a "smoother" (averager) typically used for microcantilever signal analysis. It is shown that the MBP not only provides a significant gain (≈80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, apart from a correctable systematic bias error.

  16. A basic experimental study of ultrasonic assisted hot embossing process for rapid fabrication of microlens arrays

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Yuan; Yu, Che-Hao

    2015-02-01

    This paper reports a highly effective technique for rapid fabrication of microlens arrays based on an ultrasonic assisted hot embossing process. In this method, a thin stainless steel mold with micro-holes array is fabricated by a photolithography and wet etching process. Then, the thin stainless steel mold with micro-holes array is placed on top of a plastic substrate (PMMA plate) and the stack is placed in an ultrasonic vibration embossing machine. During ultrasonic assisted hot embossing operation, the surface of the stainless steel mold with micro-holes array presses against the thermoplastic PMMA substrate. Under proper ultrasonic vibration time, embossing pressure and hold time, the softened polymer will just partially fill the circular holes and due to surface tension, form a convex lens surface. After the stainless steel mold is removed, the microlens array patterns on the surface of plastic substrate can be obtained. The total cycle time is less than 10 s. Finally, geometrical and optical properties of the fabricated plastic microlens arrays were measured and proved satisfactory. This technique shows great potential for fabricating microlens array on plastic substrates with high productivity and low cost.

  17. Hybridization process for back-illuminated silicon Geiger-mode avalanche photodiode arrays

    NASA Astrophysics Data System (ADS)

    Schuette, Daniel R.; Westhoff, Richard C.; Loomis, Andrew H.; Young, Douglas J.; Ciampi, Joseph S.; Aull, Brian F.; Reich, Robert K.

    2010-04-01

    We present a unique hybridization process that permits high-performance back-illuminated silicon Geiger-mode avalanche photodiodes (GM-APDs) to be bonded to custom CMOS readout integrated circuits (ROICs) - a hybridization approach that enables independent optimization of the GM-APD arrays and the ROICs. The process includes oxide bonding of silicon GM-APD arrays to a transparent support substrate followed by indium bump bonding of this layer to a signal-processing ROIC. This hybrid detector approach can be used to fabricate imagers with high-fill-factor pixels and enhanced quantum efficiency in the near infrared as well as large-pixel-count, small-pixel-pitch arrays with pixel-level signal processing. In addition, the oxide bonding is compatible with high-temperature processing steps that can be used to lower dark current and improve optical response in the ultraviolet.

  18. Outline of a multiple-access communication network based on adaptive arrays

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1982-01-01

    Attention is given to a narrow-band communication system consisting of a central station trying to receive signals simultaneously from K spatially distinct mobile users sharing the same frequencies. One example of such a system is a group of aircraft and ships transmitting messages to a communication satellite. A reasonable approach to such a multiple access system may be based on equipping the central station with an n-element antenna array where n is equal to or greater than K. The array employs K sets of n weights to segregate the signals received from the K users. The weights are determined by direct computation based on position information transmitted by the users. A description is presented of an improved technique which makes it possible to reduce significantly the number of required computer operations in comparison to currently known techniques.
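
    One simple way to form K weight sets that segregate K users whose directions are known from their position reports (not necessarily the reduced-operation technique of the report) is zero-forcing from the steering matrix, as sketched below for a uniform linear array with assumed user angles.

```python
import numpy as np

rng = np.random.default_rng(2)

n, K = 8, 3                      # n-element array at the central station, K users
d_over_lam = 0.5                 # half-wavelength element spacing
angles = np.radians([-30.0, 5.0, 40.0])   # user directions (assumed known)

# steering matrix for a uniform linear array (columns = users)
m = np.arange(n)[:, None]
A = np.exp(1j * 2 * np.pi * d_over_lam * m * np.sin(angles)[None, :])

# zero-forcing weight sets: W[:, k] passes user k and nulls the other K-1 users
W = A @ np.linalg.inv(A.conj().T @ A)

# verify segregation on a snapshot containing all K user signals plus noise
s = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = A @ s + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(np.round(W.conj().T @ x, 3))   # ≈ s, one output per user
print(np.round(s, 3))
```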

  19. An array microscope for ultrarapid virtual slide processing and telepathology. Design, fabrication, and validation study.

    PubMed

    Weinstein, Ronald S; Descour, Michael R; Liang, Chen; Barker, Gail; Scott, Katherine M; Richter, Lynne; Krupinski, Elizabeth A; Bhattacharyya, Achyut K; Davis, John R; Graham, Anna R; Rennels, Margaret; Russum, William C; Goodall, James F; Zhou, Pixuan; Olszak, Artur G; Williams, Bruce H; Wyant, James C; Bartels, Peter H

    2004-11-01

    This paper describes the design and fabrication of a novel array microscope for the first ultrarapid virtual slide processor (DMetrix DX-40 digital slide scanner). The array microscope optics consists of a stack of three 80-element 10 x 8-lenslet arrays, constituting a "lenslet array ensemble." The lenslet array ensemble is positioned over a glass slide. Uniquely shaped lenses in each of the lenslet arrays, arranged perpendicular to the glass slide constitute a single "miniaturized microscope." A high-pixel-density image sensor is attached to the top of the lenslet array ensemble. In operation, the lenslet array ensemble is transported by a motorized mechanism relative to the long axis of a glass slide. Each of the 80 miniaturized microscopes has a lateral field of view of 250 microns. The microscopes of each row of the array are offset from the microscopes in other rows. Scanning a glass slide with the array microscope produces seamless two-dimensional image data of the entire slide, that is, a virtual slide. The optical system has a numerical aperture of N.A.= 0.65, scans slides at a rate of 3 mm per second, and accrues up to 3,000 images per second from each of the 80 miniaturized microscopes. In the ultrarapid virtual slide processing cycle, the time for image acquisition takes 58 seconds for a 2.25 cm2 tissue section. An automatic slide loader enables the scanner to process up to 40 slides per hour without operator intervention. Slide scanning and image processing are done concurrently so that post-scan processing is eliminated. A virtual slide can be viewed over the Internet immediately after the scanning is complete. A validation study compared the diagnostic accuracy of pathologist case readers using array microscopy (with images viewed as virtual slides) and conventional light microscopy. Four senior pathologists diagnosed 30 breast surgical pathology cases each using both imaging modes, but on separate occasions. Of 120 case reads by array microscopy

  20. Adapting the transtheoretical model of change to the bereavement process.

    PubMed

    Calderwood, Kimberly A

    2011-04-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals experience. This theory is unique because it addresses attitudes, intentions, and behavioral processes at each stage; it allows for a focus on a broader range of emotions than just anger and depression; it allows for the recognition of two periods of regression during the bereavement process; and it adds a maintenance stage, which other theories lack. This theory can benefit bereaved individuals directly and through the increased awareness among counselors, family, friends, employers, and society at large. This theory may also be used as a tool for bereavement programs to consider whether they are meeting clients' needs throughout the transformative change process of bereavement rather than only focusing on the initial stages characterized by intense emotion. PMID:21553574

  1. Guided filter and adaptive learning rate based non-uniformity correction algorithm for infrared focal plane array

    NASA Astrophysics Data System (ADS)

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    Imaging non-uniformity of an infrared focal plane array (IRFPA) behaves as fixed-pattern noise superimposed on the image, which seriously degrades the imaging quality of the infrared system. In scene-based non-uniformity correction methods, ghosting artifacts and image blurring seriously affect the sensitivity of the IRFPA imaging system and visibly decrease image quality. This paper proposes an improved neural network non-uniformity correction method with an adaptive learning rate. On the one hand, by using a guided filter, the proposed algorithm decreases the effect of ghosting artifacts. On the other hand, because an inappropriate learning rate is the main cause of image blurring, the proposed algorithm utilizes an adaptive learning rate with a temporal-domain factor to eliminate the effect of image blurring. In short, the proposed algorithm combines the merits of the guided filter and the adaptive learning rate. Several real and simulated infrared image sequences are used to verify the performance of the proposed algorithm. The experimental results indicate that the proposed algorithm can not only reduce the non-uniformity with fewer ghosting artifacts but also overcome the problem of image blurring in static areas.
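
    A heavily simplified sketch of the kind of update involved is given below: a Scribner-style LMS correction of per-pixel gain and offset, with a local spatial mean standing in for the guided filter and a crude temporal factor standing in for the paper's adaptive learning rate. All parameter values and the synthetic scene are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nuc_lms_step(frame, gain, offset, lr_map):
    """One LMS update of per-pixel gain/offset (Scribner-style NN-NUC sketch).

    The 'desired' image is a local spatial mean; the paper uses a guided
    filter here, which this sketch replaces for simplicity.
    """
    corrected = gain * frame + offset
    desired = uniform_filter(corrected, size=5)   # stand-in for the guided filter
    error = desired - corrected
    gain += lr_map * error * frame
    offset += lr_map * error
    return corrected, gain, offset, error

def adapt_learning_rate(error, prev_error, base_lr=0.05, decay=0.5):
    # crude adaptive rate: slow down where the error changes quickly in time,
    # which is where motion (and hence blurring risk) is largest
    temporal_change = np.abs(error - prev_error)
    return base_lr / (1.0 + decay * temporal_change)

# demo on synthetic frames with fixed-pattern noise and a drifting scene
rng = np.random.default_rng(3)
H, W = 64, 64
true_gain = 1 + 0.1 * rng.standard_normal((H, W))
true_offset = 0.05 * rng.standard_normal((H, W))

gain = np.ones((H, W)); offset = np.zeros((H, W))
lr_map = np.full((H, W), 0.05); prev_error = np.zeros((H, W))
for t in range(200):
    scene = 1.0 + 0.2 * np.sin(0.1 * (np.arange(W) + 2 * t))[None, :] * np.ones((H, 1))
    raw = true_gain * scene + true_offset
    corrected, gain, offset, error = nuc_lms_step(raw, gain, offset, lr_map)
    lr_map = adapt_learning_rate(error, prev_error)
    prev_error = error

print("residual RMS error vs. scene:", np.sqrt(np.mean((corrected - scene) ** 2)))
```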

  2. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  3. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  4. Electro-optical processing of phased array data

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1973-01-01

    An on-line spatial light modulator for application as the input transducer for a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with real time radar data with the input data returns recorded on the input crystal, processed by the optical system, and the output plane pattern digitized, thresholded, and outputted to a display and storage in the computer memory. The correlation of theoretical and experimental results is discussed.

  5. Fast iterative adaptive nonuniformity correction with gradient minimization for infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Jufeng; Gao, Xiumin; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi

    2014-07-01

    A fast scene-based nonuniformity correction algorithm is proposed for fixed-pattern noise removal in infrared focal plane array imagery. Based on minimization of the L0 gradient of the estimated irradiance, the correction function is optimized by estimating the correction parameters via an iterative optimization strategy. When applied to different real IR data, the proposed method provides enhanced results with good visual effect, achieving a good balance between nonuniformity correction and detail preservation. Compared with other state-of-the-art approaches, this algorithm estimates the irradiance rapidly and accurately with fewer ghosting artifacts.

  6. Mathematical Modeling of a Solar Arrays Deploying Process at Ground Tests

    NASA Astrophysics Data System (ADS)

    Tomilin, A.; Shpyakin, I.

    2016-04-01

    This paper focuses on creating a mathematical model of the solar array deployment process during ground tests. The Lagrange equation was used to obtain the model. The distinctive feature of this mathematical model is that it can take into account the influence of the gravity compensation system on the structure during deployment, as well as the aerodynamic resistance encountered during ground tests.
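
    A generic form of the equations being appealed to, with the gravity-compensation and aerodynamic effects written as generalized non-conservative forces, is sketched below; the abstract does not give the model explicitly, so the choice of symbols and the way the two effects enter are assumptions.

```latex
% Lagrange equations of the second kind for generalized coordinates q_i
% (e.g., panel hinge angles); T is kinetic energy, \Pi is potential energy,
% and the gravity-compensation system and aerodynamic resistance enter as
% generalized non-conservative forces (symbols assumed for illustration).
\frac{d}{dt}\!\left(\frac{\partial T}{\partial \dot{q}_i}\right)
  - \frac{\partial T}{\partial q_i}
  + \frac{\partial \Pi}{\partial q_i}
  = Q_i^{\mathrm{grav\,comp}} + Q_i^{\mathrm{aero}},
  \qquad i = 1,\dots,n .
```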

  7. Similarities in error processing establish a link between saccade prediction at baseline and adaptation performance

    PubMed Central

    Shelhamer, Mark

    2014-01-01

    Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. PMID:24598520

  8. Asymmetric magnetization reversal process in Co nanohill arrays

    SciTech Connect

    Rosa, W. O.; Martinez, L.; Jaafar, M.; Asenjo, A.; Vazquez, M.

    2009-11-15

    Co thin films deposited by sputtering onto nanostructured polymer [poly(methyl methacrylate)] were prepared following replica-antireplica process based on porous alumina membrane. In addition, different capping layers were deposited onto Co nanohills. Morphological and compositional analysis was performed by atomic force microscopy and x-ray photoemission spectroscopy techniques to obtain information about the surface characteristics. The observed asymmetry in the magnetization reversal process at low temperatures is ascribed to the exchange bias generated by the ferromagnetic-antiferromagnetic interface promoted by the presence of Co oxide detected in all the samples. Especially relevant is the case of the Cr capping, where an enhanced magnetic anisotropy in the Co/Cr interface is deduced.

  9. Design, processing and testing of LSI arrays: Hybrid microelectronics task

    NASA Technical Reports Server (NTRS)

    Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.

    1979-01-01

    Mathematical cost factors were generated for both hybrid microcircuit and printed wiring board packaging methods. A mathematical cost model was created for analysis of microcircuit fabrication costs. The costing factors were refined and reduced to formulae for computerization. Efficient methods were investigated for low cost packaging of LSI devices as a function of density and reliability. Technical problem areas such as wafer bumping, inner/outer lead bonding, testing on tape, and tape processing were investigated.

  10. A comparison of deghosting techniques in adaptive nonuniformity correction for IR focal-plane array systems

    NASA Astrophysics Data System (ADS)

    Rossi, Alessandro; Diani, Marco; Corsini, Giovanni

    2010-10-01

    Focal-plane array (FPA) IR systems are affected by fixed-pattern noise (FPN), which is caused by the nonuniformity of the responses of the detectors that compose the array. Because FPN drifts slowly in time, several scene-based nonuniformity correction (NUC) techniques have been developed that perform calibration during acquisition using only the collected data. Unfortunately, such algorithms are affected by a collateral problem: ghosting-like artifacts are generated by edges in the scene and appear as a reverse image in the original position. In this paper, we compare the performance of representative methods for reducing ghosting. Such methods relate to the least mean square (LMS)-based NUC algorithm proposed by D.A. Scribner. In particular, attention is focused on a recently proposed technique based on the computation of the temporal statistics of the error signal in the aforementioned LMS-NUC algorithm. In this work, the performance of the deghosting techniques has been investigated using IR data corrupted with simulated nonuniformity noise over the detectors of the FPA. Finally, we discuss the computational cost, which is a key challenge for employing such techniques in real-time systems.

  11. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n/log m) time on a 2-D PARBS of size mn × n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 × n.

  12. Computationally Efficient Locally Adaptive Demosaicing of Color Filter Array Images Using the Dual-Tree Complex Wavelet Packet Transform

    PubMed Central

    Aelterman, Jan; Goossens, Bart; De Vylder, Jonas; Pižurica, Aleksandra; Philips, Wilfried

    2013-01-01

    Most digital cameras use an array of alternating color filters to capture the varied colors in a scene with a single sensor chip. Reconstruction of a full color image from such a color mosaic is what constitutes demosaicing. In this paper, a technique is proposed that performs this demosaicing in a way that incurs a very low computational cost. This is done through a (dual-tree complex) wavelet interpretation of the demosaicing problem. By using a novel locally adaptive approach for demosaicing (complex) wavelet coefficients, we show that many of the common demosaicing artifacts can be avoided in an efficient way. Results demonstrate that the proposed method is competitive with respect to the current state of the art, but incurs a lower computational cost. The wavelet approach also allows for computationally effective denoising or deblurring approaches. PMID:23671575

  13. Recasting Hope: a process of adaptation following fetal anomaly diagnosis.

    PubMed

    Lalor, Joan; Begley, Cecily M; Galavan, Eoin

    2009-02-01

    Recent decades have seen ultrasound revolutionise the management of pregnancy and its possible complications. However, somewhat less consideration has been given to the psychosocial consequences of mass screening resulting in fetal anomaly detection in low-risk populations, particularly in contexts where termination of pregnancy services are not readily accessible. A grounded theory study was conducted exploring forty-one women's experiences of ultrasound diagnosis of fetal abnormality up to and beyond the birth in the Republic of Ireland. Thirty-one women chose to continue the pregnancy and ten women accessed termination of pregnancy services outside the state. Data were collected using repeated in-depth individual interviews pre- and post-birth and analysed using the constant comparative method. Recasting Hope, the process of adaptation following diagnosis is represented temporally as four phases: 'Assume Normal', 'Shock', 'Gaining Meaning' and 'Rebuilding'. Some mothers expressed a sense of incredulity when informed of the anomaly and the 'Assume Normal' phase provides an improved understanding as to why women remain unprepared for an adverse diagnosis. Transition to phase 2, 'Shock,' is characterised by receiving the diagnosis and makes explicit women's initial reactions. Once the diagnosis is confirmed, a process of 'Gaining Meaning' commences, whereby an attempt to make sense of this ostensibly negative event begins. 'Rebuilding', the final stage in the process, is concerned with the extent to which women recover from the loss and resolve the inconsistency between their experience and their previous expectations of pregnancy in particular and beliefs in the world in general. This theory contributes to the theoretical field of thanatology as applied to the process of grieving associated with the loss of an ideal child. The framework of Recasting Hope is intended for use as a tool to assist health professionals through offering simple yet effective

  14. Design, processing, and testing of LSI arrays for space station

    NASA Technical Reports Server (NTRS)

    Lile, W. R.; Hollingsworth, R. J.

    1972-01-01

    The design of a MOS 256-bit Random Access Memory (RAM) is discussed. Technological achievements comprise computer simulations that accurately predict performance; aluminum-gate COS/MOS devices including a 256-bit RAM with current sensing; and a silicon-gate process that is being used in the construction of a 256-bit RAM with voltage sensing. The Si-gate process increases speed by reducing the overlap capacitance between gate and source-drain, thus reducing the crossover capacitance and allowing shorter interconnections. The design of a Si-gate RAM, which is pin-for-pin compatible with an RCA bulk silicon COS/MOS memory (type TA 5974), is discussed in full. The Integrated Circuit Tester (ICT) is limited to dc evaluation, but the diagnostics and data collecting are under computer control. The Silicon-on-Sapphire Memory Evaluator (SOS-ME, previously called SOS Memory Exerciser) measures power supply drain and performs a minimum number of tests to establish operation of the memory devices. The Macrodata MD-100 is a microprogrammable tester which has capabilities of extensive testing at speeds up to 5 MHz. Beam-lead technology was successfully integrated with SOS technology to make a simple device with beam leads. This device and the scribing are discussed.

  15. Augmenting synthetic aperture radar with space time adaptive processing

    NASA Astrophysics Data System (ADS)

    Riedl, Michael; Potter, Lee C.; Ertin, Emre

    2013-05-01

    Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when Doppler shift places moving target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space time adaptive processing (STAP) while constraining the down-link data rate to that of a single antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.
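
    A minimal sketch of a frequency-division transmit-waveform design of the kind described, with each of the Nt transmitters occupying its own disjoint sub-band so that the returns can be separated at the single receiver, is shown below. The bandwidths, pulse length, and chirp waveforms are assumptions, not the parameters of the airborne X-band system simulated in the paper.

```python
import numpy as np

fs = 200e6          # complex sampling rate at the receiver (assumed)
B_total = 60e6      # total bandwidth shared by the transmitters (assumed)
Nt = 3              # number of transmit antennas
Tp = 10e-6          # pulse length
t = np.arange(0, Tp, 1 / fs)

def lfm(f0, bw, t, Tp):
    # linear FM pulse sweeping the band [f0, f0 + bw] over the pulse length
    return np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * (bw / Tp) * t**2))

# frequency-division design: each transmitter gets its own disjoint sub-band
sub_bw = B_total / Nt
waveforms = [lfm(-B_total / 2 + k * sub_bw, sub_bw, t, Tp) for k in range(Nt)]

# check (near-)orthogonality: normalized cross-correlation at zero lag
for i in range(Nt):
    for j in range(i + 1, Nt):
        rho = abs(np.vdot(waveforms[i], waveforms[j])) / len(t)
        print(f"|<s{i}, s{j}>| = {rho:.3e}")
```

    A real design would also have to check cross-correlation across the relevant delays and Doppler shifts, which this zero-lag check ignores.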

  16. Two step process for the fabrication of diffraction limited concave microlens arrays.

    PubMed

    Ruffieux, Patrick; Scharf, Toralf; Philipoussis, Irène; Herzig, Hans Peter; Voelkel, Reinhard; Weible, Kenneth J

    2008-11-24

    A two step process has been developed for the fabrication of diffraction limited concave microlens arrays. The process is based on the photoresist filling of melted holes obtained by a preliminary photolithography step. The quality of these microlenses has been tested in a Mach-Zehnder interferometer. The method allows the fabrication of concave microlens arrays with diffraction limited optical performance. Concave microlenses with diameters ranging from 30 μm to 230 μm and numerical apertures up to 0.25 have been demonstrated. As an example, we present the realization of diffusers obtained with random sizes and locations of concave shapes. PMID:19030040

  17. Adaptive lenticular microlens array based on voltage-induced waves at the surface of polyvinyl chloride/dibutyl phthalate gels.

    PubMed

    Xu, Miao; Jin, Boya; He, Rui; Ren, Hongwen

    2016-04-18

    We report a new approach to preparing a lenticular microlens array (LMA) using polyvinyl chloride (PVC)/dibutyl phthalate (DBP) gels. The PVC/DBP gels coated on a glass substrate form a membrane. With the aid of electrostatic repulsive force, the surface of the membrane can be reconfigured with sinusoidal waves by a DC voltage. The membrane with wavy surface functions as a LMA. By switching over the anode and cathode, the convex shape of each lenticular microlens in the array can be converted to the concave shape. Therefore, the LMA can present a large dynamic range. The response time is relatively fast and the driving voltage is low. With the advantages of compact structure, optical isotropy, and good mechanical stability, our LMA has potential applications in imaging, information processing, biometrics, and displays. PMID:27137253

  18. Adaptive memory: evaluating alternative forms of fitness-relevant processing in the survival processing paradigm.

    PubMed

    Sandry, Joshua; Trafimow, David; Marks, Michael J; Rice, Stephen

    2013-01-01

    Memory may have evolved to preserve information processed in terms of its fitness-relevance. Based on the assumption that the human mind comprises different fitness-relevant adaptive mechanisms contributing to survival and reproductive success, we compared alternative fitness-relevant processing scenarios with survival processing. Participants rated words for relevancy to fitness-relevant and control conditions followed by a delay and surprise recall test (Experiment 1a). Participants recalled more words processed for their relevance to a survival situation. We replicated these findings in an online study (Experiment 2) and a study using revised fitness-relevant scenarios (Experiment 3). Across all experiments, we did not find a mnemonic benefit for alternative fitness-relevant processing scenarios, questioning assumptions associated with an evolutionary account of remembering. Based on these results, fitness-relevance seems to be too wide-ranging of a construct to account for the memory findings associated with survival processing. We propose that memory may be hierarchically sensitive to fitness-relevant processing instructions. We encourage future researchers to investigate the underlying mechanisms responsible for survival processing effects and work toward developing a taxonomy of adaptive memory. PMID:23585858

  19. Adaptive Memory: Evaluating Alternative Forms of Fitness-Relevant Processing in the Survival Processing Paradigm

    PubMed Central

    Sandry, Joshua; Trafimow, David; Marks, Michael J.; Rice, Stephen

    2013-01-01

    Memory may have evolved to preserve information processed in terms of its fitness-relevance. Based on the assumption that the human mind comprises different fitness-relevant adaptive mechanisms contributing to survival and reproductive success, we compared alternative fitness-relevant processing scenarios with survival processing. Participants rated words for relevancy to fitness-relevant and control conditions followed by a delay and surprise recall test (Experiment 1a). Participants recalled more words processed for their relevance to a survival situation. We replicated these findings in an online study (Experiment 2) and a study using revised fitness-relevant scenarios (Experiment 3). Across all experiments, we did not find a mnemonic benefit for alternative fitness-relevant processing scenarios, questioning assumptions associated with an evolutionary account of remembering. Based on these results, fitness-relevance seems to be too wide-ranging of a construct to account for the memory findings associated with survival processing. We propose that memory may be hierarchically sensitive to fitness-relevant processing instructions. We encourage future researchers to investigate the underlying mechanisms responsible for survival processing effects and work toward developing a taxonomy of adaptive memory. PMID:23585858

  20. Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge

    1999-01-01

    A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.
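
    For orientation, the sketch below shows the core operation behind a phased-array noise map at a selected frequency: conventional frequency-domain beamforming of a cross-spectral matrix over a grid of candidate source positions. It is a generic textbook beamformer with made-up geometry, not the MAPPS processing chain or its parallel implementation.

```python
import numpy as np

def csm(X):
    # cross-spectral matrix from M x K snapshots at one frequency bin
    return X @ X.conj().T / X.shape[1]

def noise_map(mic_xyz, grid_xyz, R, freq, c=343.0):
    """Conventional frequency-domain beamforming map at one frequency.

    mic_xyz  : (M, 3) microphone positions
    grid_xyz : (G, 3) candidate source positions
    R        : (M, M) cross-spectral matrix at `freq`
    """
    k = 2 * np.pi * freq / c
    out = np.empty(len(grid_xyz))
    for g, p in enumerate(grid_xyz):
        r = np.linalg.norm(mic_xyz - p, axis=1)
        e = np.exp(-1j * k * r) / r            # spherical-wave steering vector
        e /= np.linalg.norm(e)
        out[g] = np.real(e.conj() @ R @ e)     # beamformer output power
    return out

# tiny synthetic example: 8 mics, one source, map over a small grid
rng = np.random.default_rng(4)
mics = np.column_stack([rng.uniform(-0.5, 0.5, (8, 2)), np.zeros(8)])
src = np.array([0.1, -0.05, 1.0])
f, c = 4000.0, 343.0
r_src = np.linalg.norm(mics - src, axis=1)
snap = (np.exp(-1j * 2 * np.pi * f / c * r_src) / r_src)[:, None] * \
       (rng.standard_normal(64) + 1j * rng.standard_normal(64))[None, :]
R = csm(snap + 0.01 * (rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))))
grid = np.array([[x, y, 1.0] for x in np.linspace(-0.3, 0.3, 7)
                             for y in np.linspace(-0.3, 0.3, 7)])
bmap = noise_map(mics, grid, R, f)
print("peak at grid point:", grid[np.argmax(bmap)])
```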

  1. Intermarriages between Western Women and Palestinian Men: Multidirectional Adaptation Processes

    ERIC Educational Resources Information Center

    Roer-Strier, Dorit; Ezra, Dina Ben

    2006-01-01

    This article addresses cultural adaptation of Western-Palestinian intermarried couples. Using in-depth interviews, information was gathered from 16 participants, 7 Western women and 9 Palestinian men, living in Palestinian cities in the West Bank. Adaptation strategies are typified by the extent to which each spouse embraces the partner's culture.…

  2. DAMAS Processing for a Phased Array Study in the NASA Langley Jet Noise Laboratory

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.; Plassman, Gerald E.

    2010-01-01

    A jet noise measurement study was conducted using a phased microphone array system for a range of jet nozzle configurations and flow conditions. The test effort included convergent and convergent/divergent single flow nozzles, as well as conventional and chevron dual-flow core and fan configurations. Cold jets were tested with and without wind tunnel co-flow, whereas hot jets were tested only with co-flow. The intent of the measurement effort was to allow evaluation of new phased array technologies for their ability to separate and quantify distributions of jet noise sources. In the present paper, the array post-processing method focused upon is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) for the quantitative determination of spatial distributions of noise sources. Jet noise is highly complex, with stationary and convecting noise sources, convecting flows that are the sources themselves, and shock-related and screech noise for supersonic flow. The analysis presented in this paper addresses some processing details with DAMAS for the array positioned at 90° (normal) to the jet. The paper demonstrates the applicability of DAMAS and how it indicates when strong coherence is present. Also, a new approach to calibrating the array focus and position is introduced and demonstrated.
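
    The DAMAS step referred to above can be sketched as an iterative, non-negativity-constrained Gauss-Seidel solution of the linear system relating the true source distribution to the conventional beamforming map. The point-spread matrix and source layout below are synthetic stand-ins; the real algorithm builds the matrix from the array's steering vectors.

```python
import numpy as np

def damas(A, Y, n_iter=100):
    """Gauss-Seidel DAMAS-style deconvolution with a non-negativity constraint.

    A : (G, G) array point-spread matrix, A[i, j] = beamformer output at grid
        point i due to a unit-strength source at grid point j (diagonal ~ 1)
    Y : (G,) conventional beamforming map
    Returns X >= 0 such that A @ X approximates Y.
    """
    G = len(Y)
    X = np.zeros(G)
    for _ in range(n_iter):
        for i in range(G):            # forward sweep
            r = Y[i] - A[i] @ X + A[i, i] * X[i]
            X[i] = max(r / A[i, i], 0.0)
        for i in reversed(range(G)):  # backward sweep (DAMAS alternates directions)
            r = Y[i] - A[i] @ X + A[i, i] * X[i]
            X[i] = max(r / A[i, i], 0.0)
    return X

# toy example: two point sources smeared by a synthetic point-spread function
G = 40
idx = np.arange(G)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)  # assumed PSF shape
X_true = np.zeros(G); X_true[10] = 1.0; X_true[25] = 0.5
Y = A @ X_true
X_hat = damas(A, Y)
print("recovered peaks near grid points:", np.argsort(X_hat)[-2:])
```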

  3. Effects of process parameters on the molding quality of the micro-needle array

    NASA Astrophysics Data System (ADS)

    Qiu, Z. J.; Ma, Z.; Gao, S.

    2016-07-01

    Micro-needle array, which is used in medical applications, is a typical injection molded product with microstructures. Due to its tiny micro-feature size and high aspect ratios, it is prone to short-shot defects, leading to poor molding quality. The injection molding process of the micro-needle array was studied in this paper to find the effects of the process parameters on the molding quality of the micro-needle array and to provide theoretical guidance for practical production of high-quality products. With the shrinkage ratio and warpage of the micro needles as the evaluation indices of molding quality, an orthogonal experiment was conducted and an analysis of variance was carried out. From the results, contribution rates were calculated to determine the influence of the various process parameters on molding quality. The single-parameter method was used to analyse the main process parameter. It was found that the contribution rate of the holding pressure to shrinkage ratio and warpage reached 83.55% and 94.71% respectively, far higher than that of the other parameters. The study revealed that the holding pressure is the main factor affecting the molding quality of the micro-needle array and should therefore be the focus in order to obtain high-quality plastic parts in practical production.

  4. An Undergraduate Course and Laboratory in Digital Signal Processing with Field Programmable Gate Arrays

    ERIC Educational Resources Information Center

    Meyer-Base, U.; Vera, A.; Meyer-Base, A.; Pattichis, M. S.; Perry, R. J.

    2010-01-01

    In this paper, an innovative educational approach to introducing undergraduates to both digital signal processing (DSP) and field programmable gate array (FPGA)-based design in a one-semester course and laboratory is described. While both DSP and FPGA-based courses are currently present in different curricula, this integrated approach reduces the…

  5. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  6. Systolic arrays for binary image processing by using Boolean differential operators

    NASA Astrophysics Data System (ADS)

    Shmerko, V. P.; Yanushkevich, S. N.; Kochergov, E. G.

    1993-11-01

    A matrix form of the Boolean differential temporal (parametric) operators is proposed. Procedures for preliminary binary image processing (logic filtering, contour finding) are constructed on this basis. This representation of the operators allows algorithms to be synthesized that map onto a systolic array architecture.

  7. Assembly, integration, and verification (AIV) in ALMA: series processing of array elements

    NASA Astrophysics Data System (ADS)

    Lopez, Bernhard; Jager, Rieks; Whyborn, Nicholas D.; Knee, Lewis B. G.; McMullin, Joseph P.

    2012-09-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and East Asia, in collaboration with the Republic of Chile. ALMA will consist of at least 54 twelve-meter antennas and 12 seven-meter antennas operating as an aperture synthesis array in the (sub)millimeter wavelength range. It is the responsibility of ALMA AIV to deliver the fully assembled, integrated, and verified antennas (array elements) to the telescope array. After an initial phase of infrastructure setup AIV activities began when the first ALMA antenna and subsystems became available in mid 2008. During the second semester of 2009 a project-wide effort was made to put in operation a first 3- antenna interferometer at the Array Operations Site (AOS). In 2010 the AIV focus was the transition from event-driven activities towards routine series production. Also, due to the ramp-up of operations activities, AIV underwent an organizational change from an autonomous department into a project within a strong matrix management structure. When the subsystem deliveries stabilized in early 2011, steady-state series processing could be achieved in an efficient and reliable manner. The challenge today is to maintain this production pace until completion towards the end of 2013. This paper describes the way ALMA AIV evolved successfully from the initial phase to the present steady-state of array element series processing. It elaborates on the different project phases and their relationships, presents processing statistics, illustrates the lessons learned and relevant best practices, and concludes with an outlook of the path towards completion.

  8. MagicPlate-512: A 2D silicon detector array for quality assurance of stereotactic motion adaptive radiotherapy

    SciTech Connect

    Petasecca, M. Newall, M. K.; Aldosari, A. H.; Fuduli, I.; Espinoza, A. A.; Porumb, C. S.; Guatelli, S.; Metcalfe, P.; Lerch, M. L. F.; Rosenfeld, A. B.; Booth, J. T.; Colvill, E.; Duncan, M.; Cammarano, D.; Carolan, M.; Oborn, B.; Perevertaylo, V.; Keall, P. J.

    2015-06-15

    Purpose: Spatial and temporal resolutions are two of the most important features for quality assurance instrumentation of motion adaptive radiotherapy modalities. The goal of this work is to characterize the performance of the 2D high spatial resolution monolithic silicon diode array named “MagicPlate-512” for quality assurance of stereotactic body radiation therapy (SBRT) and stereotactic radiosurgery (SRS) combined with a dynamic multileaf collimator (MLC) tracking technique for motion compensation. Methods: MagicPlate-512 is used in combination with the movable platform HexaMotion and a research version of radiofrequency tracking system Calypso driving MLC tracking software. The authors reconstruct 2D dose distributions of small field square beams in three modalities: in static conditions, mimicking the temporal movement pattern of a lung tumor and tracking the moving target while the MLC compensates almost instantaneously for the tumor displacement. Use of Calypso in combination with MagicPlate-512 requires a proper radiofrequency interference shielding. Impact of the shielding on dosimetry has been simulated by GEANT4 and verified experimentally. Temporal and spatial resolutions of the dosimetry system allow also for accurate verification of segments of complex stereotactic radiotherapy plans with identification of the instant and location where a certain dose is delivered. This feature allows for retrospective temporal reconstruction of the delivery process and easy identification of error in the tracking or the multileaf collimator driving systems. A sliding MLC wedge combined with the lung motion pattern has been measured. The ability of the MagicPlate-512 (MP512) in 2D dose mapping in all three modes of operation was benchmarked by EBT3 film. Results: Full width at half maximum and penumbra of the moving and stationary dose profiles measured by EBT3 film and MagicPlate-512 confirm that motion has a significant impact on the dose distribution. Motion

  9. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first class citizens. SciQL lowers the entrance fee of adopting relational DBMS (RDBMS) in scientific domains, because it includes functionality often only found in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope. In particular, how the LOFAR light-curve database of all detected sources can be constructed, by correlating sources across the spatial, frequency, time and polarisation domains.

  10. High Density Crossbar Arrays with Sub-15 nm Single Cells via Liftoff Process Only

    PubMed Central

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing the fabrication boundaries to their limits, leveraging an even higher density of nano-devices towards reaching a 4F2/cell footprint in 3D arrays. Here, we study the liftoff process limits for achieving extremely dense nanowires while ensuring preservation of thin film quality. The proposed method is optimized for multiple-layer fabrication, to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells with sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer. PMID:27585643

  11. High Density Crossbar Arrays with Sub-15 nm Single Cells via Liftoff Process Only.

    PubMed

    Khiat, Ali; Ayliffe, Peter; Prodromakis, Themistoklis

    2016-01-01

    Emerging nano-scale technologies are pushing the fabrication boundaries to their limits, leveraging an even higher density of nano-devices towards reaching a 4F(2)/cell footprint in 3D arrays. Here, we study the liftoff process limits for achieving extremely dense nanowires while ensuring preservation of thin film quality. The proposed method is optimized for multiple-layer fabrication, to reliably achieve 3D nano-device stacks of 32 × 32 nanowire arrays across a 6-inch wafer, using electron beam lithography at 100 kV and polymethyl methacrylate (PMMA) resist at different thicknesses. The resist thickness and its geometric profile after development were identified to be the major limiting factors, and suggestions for addressing these issues are provided. Multiple layers were successfully achieved to fabricate arrays of 1 Ki cells with sub-15 nm nanowires spaced 28 nm apart across a 6-inch wafer. PMID:27585643

  12. Applying Convolution-Based Processing Methods To A Dual-Channel, Large Array Artificial Olfactory Mucosa

    NASA Astrophysics Data System (ADS)

    Taylor, J. E.; Che Harun, F. K.; Covington, J. A.; Gardner, J. W.

    2009-05-01

    Our understanding of the human olfactory system, particularly with respect to the phenomenon of nasal chromatography, has led us to develop a new generation of novel odour-sensitive instruments (or electronic noses). This novel instrument is in need of new approaches to data processing so that the information rich signals can be fully exploited; here, we apply a novel time-series based technique for processing such data. The dual-channel, large array artificial olfactory mucosa consists of 3 arrays of 300 sensors each. The sensors are divided into 24 groups, with each group made from a particular type of polymer. The first array is connected to the other two arrays by a pair of retentive columns. One channel is coated with Carbowax 20 M, and the other with OV-1. This configuration partly mimics the nasal chromatography effect, and partly augments it by utilizing not only polar (mucus layer) but also non-polar (artificial) coatings. Such a device presents several challenges to multi-variate data processing: a large, redundant dataset, spatio-temporal output, and small sample space. By applying a novel convolution approach to this problem, it has been demonstrated that these problems can be overcome. The artificial mucosa signals have been classified using a probabilistic neural network and gave an accuracy of 85%. Even better results should be possible through the selection of other sensors with lower correlation.
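    As a point of reference for the probabilistic neural network (PNN) classifier mentioned in this record, the sketch below is a minimal, generic Parzen-window PNN in NumPy. The toy feature vectors, labels, and smoothing parameter sigma are illustrative assumptions; this is not the authors' pipeline or dataset.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Minimal probabilistic neural network (Parzen-window classifier):
    each class score is the mean Gaussian-kernel response of a test point
    to that class's training points; the predicted label maximizes it."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernel responses
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[np.argmax(scores)])
    return np.array(preds)

# Illustrative toy data (not the olfactory-mucosa dataset).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.array([[0.2, -0.1, 0.3], [2.8, 3.1, 2.9]])
print(pnn_predict(X_train, y_train, X_test))           # expected: [0 1]
```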

  13. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    NASA Astrophysics Data System (ADS)

    Warburton, W. K.; Harris, J. T.; Friedrich, S.

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100-2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays - currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I-V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.

  14. Fabrication of microlens arrays by a rolling process with soft polydimethylsiloxane molds

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Nying; Hsieh, Hsin-Ta; Su, Guo-Dung John

    2011-06-01

    In this paper, we present a new roll-to-roll method to fabricate visible light transparent microlens arrays on a glass substrate by using soft and cost-effective polydimethylsiloxane (PDMS) molds. First, we fabricated microlens array master molds by photoresist thermal reflow processes on silicon substrates. We then transferred the pattern to PDMS molds by a spin coater. After making the PDMS molds, we used a two-wheel roll-to-roll printing machine to replicate ultraviolet resin microlens arrays on glass substrates. The PDMS molds can be made easily at a low cost compared with traditional electroplating metal molds. We studied the quality of microlens arrays that were replicated by different rolling pressures of 20, 200 and 500 N cm-2. We also identified the relation between the pressure and the shape of the microlens arrays. The results showed that the best yield rate and replication performance were achieved with a pressure of approximately 200 N cm-2 and 4 min of ultraviolet light exposure.

  15. Analysis of dynamic deformation processes with adaptive KALMAN-filtering

    NASA Astrophysics Data System (ADS)

    Eichhorn, Andreas

    2007-05-01

    In this paper, the approach of a full system analysis is shown, quantifying a dynamic structural ("white-box") model for the calculation of thermal deformations of bar-shaped machine elements. The task was motivated by mechanical engineering's search for new methods for the precise prediction and computational compensation of thermal influences in the heating and cooling phases of machine tools (i.e. robot arms, etc.). The quantification of thermal deformations under variable dynamic loads requires the modelling of the non-stationary spatial temperature distribution inside the object. Based upon FOURIER's law of heat flow, the highly non-linear temperature gradient is represented by a system of partial differential equations within the framework of a dynamic Finite Element topology. It is shown that adaptive KALMAN-filtering is suitable for quantifying relevant disturbance influences and for identifying thermal parameters (i.e. thermal diffusivity) with a deviation of only 0.2%. As a result, an identified (and verified) parametric model for the realistic prediction and simulation of dynamic temperature processes is presented. Classifying the thermal bend as the main deformation quantity of bar-shaped machine tools, the temperature model is extended to a temperature deformation model. In lab tests, thermal load steps are applied to an aluminum column. Independent control measurements show that the identified model can be used to predict the column's bend with a mean deviation (r.m.s.) smaller than 10 mgon. These results show that the deformation model is a precise predictor and suitable for realistic simulations of thermal deformations. Experiments with modified heat sources will be necessary to verify the model in further frequency spectra of dynamic thermal loads.
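    For readers unfamiliar with the adaptive Kalman filtering named in this record, the sketch below shows one common innovation-based adaptation scheme (the measurement-noise variance R is re-estimated from the innovations) on a toy scalar state. It is a generic illustration under assumed parameters, not the authors' finite-element thermal model.

```python
import numpy as np

def adaptive_kalman(z, F=1.0, H=1.0, Q=1e-4, R0=1.0, alpha=0.05):
    """Scalar Kalman filter whose measurement-noise variance R is adapted
    from the innovation sequence (R ~ E[v^2] - H*P*H, exponentially averaged)."""
    x, P, R = 0.0, 1.0, R0
    estimates = []
    for zk in z:
        # predict
        x = F * x
        P = F * P * F + Q
        # innovation and adaptive update of R
        v = zk - H * x
        R = max((1.0 - alpha) * R + alpha * (v * v - H * P * H), 1e-6)
        # measurement update
        K = P * H / (H * P * H + R)
        x = x + K * v
        P = (1.0 - K * H) * P
        estimates.append(x)
    return np.array(estimates), R

# Toy use: noisy measurements of a constant quantity (true value 2.0, noise std 0.3).
rng = np.random.default_rng(1)
z = 2.0 + rng.normal(0.0, 0.3, 500)
xhat, R_est = adaptive_kalman(z)
print(xhat[-1], R_est)   # roughly 2.0 and roughly 0.09 (= 0.3**2)
```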

  16. High resolution beamforming on large aperture vertical line arrays: Processing synthetic data

    NASA Astrophysics Data System (ADS)

    Tran, Jean-Marie Q.; Hodgkiss, William S.

    1990-09-01

    This technical memorandum studies the beamforming of large aperture line arrays deployed vertically in the water column. The work concentrates on the use of high resolution techniques. Two processing strategies are envisioned: (1) full-aperture coherent processing, which in theory offers the best processing gain; and (2) subaperture processing, which consists of extracting subapertures from the array and recombining the angular spectra estimated from these subarrays. The conventional beamformer, the minimum variance distortionless response (MVDR) processor, the multiple signal classification (MUSIC) algorithm and the minimum norm method are used in this study. To validate the various processing techniques, the ATLAS normal mode program is used to generate synthetic data which constitute a realistic signal environment. A deep-water, range-independent sound velocity profile environment, characteristic of the North-East Pacific, is studied for two different 128-sensor arrays: a very long one cut for 30 Hz and operating at 20 Hz, and a shorter one cut for 107 Hz and operating at 100 Hz. The simulated sound source is 5 m deep. The full-aperture and subaperture processing are implemented with curved and plane wavefront replica vectors. The beamforming results are examined and compared to the ray-theory results produced by the generic sonar model.
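    As an illustration of the MVDR (Capon) processor listed among the techniques in this record, the following is a minimal plane-wave MVDR sketch in NumPy for a line array. The element spacing, frequency, diagonal loading, and simulated snapshots are assumptions for the example; the memorandum itself uses curved and plane wavefront replicas derived from a normal-mode model.

```python
import numpy as np

def mvdr_spectrum(X, d, f, c=1500.0, loading=1e-3):
    """Plane-wave MVDR (Capon) angular power spectrum for an N-element line array.
    X: N x K complex snapshot matrix; d: element spacing [m]; f: frequency [Hz]."""
    N, K = X.shape
    R = X @ X.conj().T / K                                  # sample covariance
    R += loading * np.trace(R).real / N * np.eye(N)          # diagonal loading
    Rinv = np.linalg.inv(R)
    k = 2 * np.pi * f / c
    n = np.arange(N)[:, None]
    angles = np.linspace(-90.0, 90.0, 361)
    P = []
    for th in np.deg2rad(angles):
        a = np.exp(-1j * k * d * n * np.sin(th))             # steering vector
        P.append(1.0 / np.real(a.conj().T @ Rinv @ a).item())
    return angles, np.array(P)

# Toy example: one plane wave from 20 degrees plus noise on a 32-element array.
rng = np.random.default_rng(2)
N, K, d, f = 32, 200, 7.5, 100.0                             # d = half wavelength at 100 Hz
k0 = 2 * np.pi * f / 1500.0
a_src = np.exp(-1j * k0 * d * np.arange(N) * np.sin(np.deg2rad(20.0)))
s = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K)))
X = np.outer(a_src, s) + noise
angles, P = mvdr_spectrum(X, d, f)
print(angles[np.argmax(P)])                                  # expected near 20 degrees
```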

  17. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel.

    PubMed

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate's pre-patterning process. By modifying the substrate's wettability, the conducting polymer's contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics. PMID:27378163

  18. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    PubMed Central

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-01-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate’s pre-patterning process. By modifying the substrate’s wettability, the conducting polymer’s contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics. PMID:27378163

  19. Flexible All-organic, All-solution Processed Thin Film Transistor Array with Ultrashort Channel

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Hu, Zhanhao; Liu, Huimin; Lan, Linfeng; Peng, Junbiao; Wang, Jian; Cao, Yong

    2016-07-01

    Shrinking the device dimension has long been the pursuit of the semiconductor industry to increase the device density and operation speed. In the application of thin film transistors (TFTs), all-organic TFT arrays made by all-solution process are desired for low cost and flexible electronics. One of the greatest challenges is how to achieve ultrashort channel through a cost-effective method. In our study, ultrashort-channel devices are demonstrated by direct inkjet printing conducting polymer as source/drain and gate electrodes without any complicated substrate’s pre-patterning process. By modifying the substrate’s wettability, the conducting polymer’s contact line is pinned during drying process which makes the channel length well-controlled. An organic TFT array of 200 devices with 2 μm channel length is fabricated on flexible substrate through all-solution process. The simple and scalable process to fabricate high resolution organic transistor array offers a low cost approach in the development of flexible and wearable electronics.

  20. Comprehensive exon array data processing method for quantitative analysis of alternative spliced variants

    PubMed Central

    Chen, Ping; Lepikhova, Tatiana; Hu, Yizhou; Monni, Outi; Hautaniemi, Sampsa

    2011-01-01

    Alternative splicing of pre-mRNA generates protein diversity. Dysfunction of splicing machinery and expression of specific transcripts has been linked to cancer progression and drug response. Exon microarray technology enables genome-wide quantification of expression levels of the majority of exons and facilitates the discovery of alternative splicing events. Analysis of exon array data is more challenging than the analysis of gene expression data and there is a need for reliable quantification of exons and alternatively spliced variants. We introduce a novel, computationally efficient methodology, Multiple Exon Array Preprocessing (MEAP), for exon array data pre-processing, analysis and visualization. We compared MEAP with existing pre-processing methods, and validation of six exons and two alternatively spliced variants with qPCR corroborated MEAP expression estimates. Analysis of exon array data from head and neck squamous cell carcinoma (HNSCC) cell lines revealed several transcripts associated with 11q13 amplification, which is related with decreased survival and metastasis in HNSCC patients. Our results demonstrate that MEAP produces reliable expression values at exon, alternatively spliced variant and gene levels, which allows generating novel experimentally testable predictions. PMID:21745820

  1. Extension of DAMAS Phased Array Processing for Spatial Coherence Determination (DAMAS-C)

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M., Jr.

    2006-01-01

    The present study reports a new development of the DAMAS microphone phased array processing methodology that allows the determination and separation of coherent and incoherent noise source distributions. In 2004, a Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) was developed which decoupled the array design and processing influence from the noise being measured, using a simple and robust algorithm. In 2005, three-dimensional applications of DAMAS were examined. DAMAS has been shown to render an unambiguous quantitative determination of acoustic source position and strength. However, an underlying premise of DAMAS, as well as that of classical array beamforming methodology, is that the noise regions under study are distributions of statistically independent sources. The present development, called DAMAS-C, extends the basic approach to include coherence definition between noise sources. The solutions incorporate cross-beamforming array measurements over the survey region. While the resulting inverse problem can be large and the iteration solution computationally demanding, it solves problems no other technique can approach. DAMAS-C is validated using noise source simulations and is applied to airframe flap noise test results.
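    For context on the deconvolution family this record extends, the sketch below shows the basic (incoherent) DAMAS iteration: solve A x = Y for non-negative source strengths x by Gauss-Seidel sweeps with clipping. The synthetic point-spread-function matrix and 1-D grid are illustrative assumptions; the DAMAS-C coherence extension described in the record is not implemented here.

```python
import numpy as np

def damas(Y, A, n_iter=500):
    """Basic DAMAS deconvolution: solve A x = Y for x >= 0 by Gauss-Seidel
    sweeps with clipping to zero. Y is the beamformed map (length M) and
    A is the M x M array point-spread-function matrix (diagonal ~ 1)."""
    M = Y.size
    x = np.zeros(M)
    for _ in range(n_iter):
        for m in range(M):
            r = Y[m] - A[m, :] @ x + A[m, m] * x[m]   # residual excluding x[m]
            x[m] = max(r / A[m, m], 0.0)
    return x

# Toy 1-D example: two incoherent sources blurred by a synthetic Gaussian PSF.
M = 50
grid = np.arange(M)
A = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 1.0) ** 2)   # fake PSF matrix
x_true = np.zeros(M)
x_true[15], x_true[35] = 1.0, 0.5
Y = A @ x_true                                                    # "beamformed" map
x_hat = damas(Y, A)
print(np.argsort(x_hat)[-2:])    # largest entries expected at (or adjacent to) 35 and 15
```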

  2. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    PubMed Central

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-01-01

    Printing fully solution-processed organic electronic devices may potentially revolutionize production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm2 V−1 s−1 in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm2 V−1 s−1 and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays. PMID:24492785

  3. Fabrication of hybrid nanostructured arrays using a PDMS/PDMS replication process.

    PubMed

    Hassanin, H; Mohammadkhani, A; Jiang, K

    2012-10-21

    In the study, a novel and low cost nanofabrication process is proposed for producing hybrid polydimethylsiloxane (PDMS) nanostructured arrays. The proposed process involves monolayer self-assembly of polystyrene (PS) spheres, PDMS nanoreplication, thin film coating, and PDMS to PDMS (PDMS/PDMS) replication. A self-assembled monolayer of PS spheres is used as the first template. Second, a PDMS template is achieved by replica moulding. Third, the PDMS template is coated with a platinum or gold layer. Finally, a PDMS nanostructured array is developed by casting PDMS slurry on top of the coated PDMS. The cured PDMS is peeled off and used as a replica surface. In this study, the influences of the coating on the PDMS topography, contact angle of the PDMS slurry and the peeling off ability are discussed in detail. From experimental evaluation, a thickness of at least 20 nm gold layer or 40 nm platinum layer on the surface of the PDMS template improves the contact angle and eases peeling off. The coated PDMS surface is successfully used as a template to achieve the replica with a uniform array via PDMS/PDMS replication process. Both the PDMS template and the replica are free of defects and also undistorted after demoulding with a highly ordered hexagonal arrangement. In addition, the geometry of the nanostructured PDMS can be controlled by changing the thickness of the deposited layer. The simplicity and the controllability of the process show great promise as a robust nanoreplication method for functional applications. PMID:22868401

  4. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems including dual color affordable focal planes, on-focal plane array biologically inspired image and signal processing techniques and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor Midwave IR/Longwave IR radiometric response with on-focal plane 'smart' neuromorphic analog image processing. The readout and processing integrated circuit very large scale integration chip which was developed under this effort will be hybridized to a dual color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  5. A flexible implementation for Doppler radar to verify various base-band array signal processing algorithms

    NASA Astrophysics Data System (ADS)

    Yang, Eunjung; Lee, Jonghyun; Jung, Byungwook; Chun, Joohwan

    2005-09-01

    We describe a flexible hardware system for Doppler radar designed to verify various baseband array signal processing algorithms. In this work we design the Doppler radar system simulator for baseband signal processing at the laboratory level. Based on this baseband signal processor, a PN-code pulse Doppler radar simulator is developed. More specifically, this simulator consists of an echo signal generation part and a signal processing part. For the echo signal generation part, we use an active array structure with 4 elements, and adopt a Barker-coded PCM signal in transmission and reception for digital pulse compression. In the signal processing part, we first transform the RF radar pulse to a baseband signal because we use baseband algorithms with IF sampling. Various digital beamforming algorithms can be adopted as the baseband algorithm in our simulator. We mainly use a Multiple Sidelobe Canceller (MSC), with main array antenna elements and auxiliary antenna elements, as the beamforming and sidelobe cancellation algorithm. For the Doppler filtering algorithm, we use the FFT. A control set is necessary to control the overall system and to manage the timing schedule for the operation.
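    As a minimal illustration of the Barker-coded pulse compression and FFT Doppler filtering described in this record, the sketch below matched-filters each pulse with a Barker-13 code and FFTs across pulses to form a range-Doppler map. The pulse counts, noise level, and target parameters are toy assumptions, not the simulator's configuration.

```python
import numpy as np

BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(rx, code=BARKER13):
    """Matched filter: correlate the received samples with the code."""
    return np.convolve(rx, code[::-1].conj(), mode="valid")

def range_doppler_map(pulses, code=BARKER13):
    """Compress each pulse, then FFT across pulses to resolve Doppler."""
    compressed = np.array([pulse_compress(p, code) for p in pulses])
    return np.fft.fftshift(np.fft.fft(compressed, axis=0), axes=0)

# Toy example: a target echo at range bin 40 with normalized Doppler 0.2 cycles/pulse.
rng = np.random.default_rng(3)
n_pulses, n_samples, rng_bin, fd = 32, 128, 40, 0.2
pulses = []
for k in range(n_pulses):
    p = 0.05 * (rng.normal(size=n_samples) + 1j * rng.normal(size=n_samples))
    p[rng_bin:rng_bin + 13] += BARKER13 * np.exp(1j * 2 * np.pi * fd * k)
    pulses.append(p)
rd = np.abs(range_doppler_map(np.array(pulses)))
dop_bin, range_bin = np.unravel_index(np.argmax(rd), rd.shape)
print(dop_bin, range_bin)   # expected near (16 + round(0.2 * 32), 40) = (22, 40)
```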

  6. Fully Solution-Processed Flexible Organic Thin Film Transistor Arrays with High Mobility and Exceptional Uniformity

    NASA Astrophysics Data System (ADS)

    Fukuda, Kenjiro; Takeda, Yasunori; Mizukami, Makoto; Kumaki, Daisuke; Tokito, Shizuo

    2014-02-01

    Printing fully solution-processed organic electronic devices may potentially revolutionize production of flexible electronics for various applications. However, difficulties in forming thin, flat, uniform films through printing techniques have been responsible for poor device performance and low yields. Here, we report on fully solution-processed organic thin-film transistor (TFT) arrays with greatly improved performance and yields, achieved by layering solution-processable materials such as silver nanoparticle inks, organic semiconductors, and insulating polymers on thin plastic films. A treatment layer improves carrier injection between the source/drain electrodes and the semiconducting layer and dramatically reduces contact resistance. Furthermore, an organic semiconductor with large-crystal grains results in TFT devices with shorter channel lengths and higher field-effect mobilities. We obtained mobilities of over 1.2 cm2 V-1 s-1 in TFT devices with channel lengths shorter than 20 μm. By combining these fabrication techniques, we built highly uniform organic TFT arrays with average mobility levels as high as 0.80 cm2 V-1 s-1 and ideal threshold voltages of 0 V. These results represent major progress in the fabrication of fully solution-processed organic TFT device arrays.

  7. Adaptivity and Autonomy Development in a Learning Personalization Process

    ERIC Educational Resources Information Center

    Verpoorten, D.

    2009-01-01

    Within the iClass (Integrated Project 507922) and Enhanced Learning Experience and Knowledge Transfer (ELEKTRA; Specific Targeted Research or Innovation Project 027986) European projects, the author was requested to harness his pedagogical knowledge to the production of educational adaptive systems. The article identifies and documents the…

  8. Process development for automated solar cell and module production. Task 4: Automated array assembly

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use was developed. The process sequence was then critically analyzed from a technical and economic standpoint to determine the technological readiness of certain process steps for implementation. The steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect, both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development.

  9. Implementation of a Digital Signal Processing Subsystem for a Long Wavelength Array Station

    NASA Technical Reports Server (NTRS)

    Soriano, Melissa; Navarro, Robert; D'Addario, Larry; Sigman, Elliott; Wang, Douglas

    2011-01-01

    This paper describes the implementation of a Digital Signal Processing (DP) subsystem for a single Long Wavelength Array (LWA) station. The LWA is a radio telescope that will consist of many phased array stations. Each LWA station consists of 256 pairs of dipole-like antennas operating over the 10-88 MHz frequency range. The Digital Signal Processing subsystem digitizes up to 260 dual-polarization signals at 196 MHz from the LWA Analog Receiver, adjusts the delay and amplitude of each signal, and forms four independent beams. Coarse delay is implemented using a first-in-first-out buffer and fine delay is implemented using a finite impulse response filter. Amplitude adjustment and polarization corrections are implemented using a 2x2 matrix multiplication.
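    The delay scheme described here (integer-sample coarse delay via a FIFO plus a fractional fine delay via an FIR filter) can be sketched in NumPy as below. The tap count, Hamming window, and test tone are assumptions for illustration; the actual LWA implementation is FPGA logic, not floating-point Python.

```python
import numpy as np

def fractional_delay_fir(frac, n_taps=17):
    """Windowed-sinc FIR approximating a delay of `frac` samples (0 <= frac < 1),
    plus an integer group delay of (n_taps - 1) / 2 that the caller removes."""
    n = np.arange(n_taps)
    center = (n_taps - 1) // 2
    h = np.sinc(n - center - frac) * np.hamming(n_taps)
    return h / h.sum()

def delay_signal(x, delay_samples, n_taps=17):
    """Integer-sample (FIFO-style) delay plus FIR fractional delay."""
    k = int(np.floor(delay_samples))                 # coarse delay (FIFO in hardware)
    frac = delay_samples - k                         # fine delay (FIR in hardware)
    coarse = np.concatenate([np.zeros(k), x])[: x.size]
    h = fractional_delay_fir(frac, n_taps)
    y = np.convolve(coarse, h, mode="full")
    gd = (n_taps - 1) // 2                           # remove the FIR's own group delay
    return y[gd: gd + x.size]

# Toy check at LWA-like rates: delay a 20 MHz tone sampled at 196 MHz by 3.25 samples.
fs, f0 = 196e6, 20e6
t = np.arange(256) / fs
x = np.cos(2 * np.pi * f0 * t)
y = delay_signal(x, 3.25)
print(y[50], np.cos(2 * np.pi * f0 * (t[50] - 3.25 / fs)))   # the two values should nearly agree
```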

  10. Optimization of lithography process for the fabrication of Micro-Faraday cup array

    NASA Astrophysics Data System (ADS)

    Arab, J. M.; Brahmankar, P. K.; Pawade, R. S.; Srivastava, A. K.

    2016-05-01

    The Micro-Faraday cup array detector (MFCAD) is used for the detection of the charge of incoming ions in mass spectrometry. The optimization of the complete lithography process for the fabrication of the Micro-Faraday cup array detector structure in photoresist (AZ4903) on a silicon substrate is reported in this work. A UV-LED-based exposure system is designed for the transfer of the Micro-Faraday cup structure onto the photoresist. The assembly consists of the exposure system, a collimating lens and a mask/substrate holder. The fabrication process consists of coating the photoresist on the silicon substrate, designing and printing the photo mask, and finally the UV lithography. The fabricated structures are characterized using an optical microscope. The dimensions achieved are found to be similar to those of the photo mask.

  11. Microlens array production in a microtechnological dry etch and reflow process for display applications

    NASA Astrophysics Data System (ADS)

    Knieling, T.; Shafi, M.; Lang, W.; Benecke, W.

    2012-03-01

    The fabrication of arrays consisting of densely ordered circular convex microlenses with diameters of 126 μm made of quartz glass in a photoresist reflow and dry etch structure transition process is demonstrated. The rectangular lens arrays with dimensions of 6 mm x 9 mm were designed for focussing collimated light on the pixel center regions of a translucent interference display, which was also produced in microtechnological process steps. The lenses focus light on pixel centers and thus serve to increase display brightness and contrast, since incoming collimated light is partially blocked by opaque metallic ring contacts at the display pixel edges. The focal lengths of the lenses lie between 0.46 mm and 2.53 mm and were adjusted by varying the ratio of the selective dry etch rates of photoresist and quartz glass. Due to volume shrinkage and edge-line pinning of the photoresist structures, the lens curvatures emerge hyperbolic, leading to improved focussing performance.

  12. Microcavity array plasma system for remote chemical processing at atmospheric pressure

    NASA Astrophysics Data System (ADS)

    Lee, Dae-Sung; Hamaguchi, Satoshi; Sakai, Osamu; Park, Sung-Jin; Eden, J. Gary

    2012-06-01

    A microplasma system designed for chemical processing at atmospheric pressure is fabricated and characterized with flowing He/O2 gas mixtures. At the heart of this microcavity dielectric barrier discharge (MDBD) system are two arrays of half-ellipsoidal microcavities engraved by micropowder blasting into dielectric surfaces facing a flowing, low-temperature plasma. Experiments demonstrate that the ignition voltage is reduced, and the spatially averaged optical emission is doubled, for an MDBD flowing plasma array relative to an equivalent system having no microcavities. As an example of the potential of flowing atmospheric microplasma systems for chemical processing, the decomposition of methylene blue (as evidenced by decoloration at 650.2 nm) is shown to proceed at a rate as much as a factor of two greater than that for a non-microcavity equivalent.

  13. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    NASA Astrophysics Data System (ADS)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals, but instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
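    A toy version of the idea described in this record (tracking the effective dimensionality of the array-wide time series to flag misbehaving channels) can be sketched with an SVD as below. The energy threshold and the simulated dead channel are illustrative assumptions, not the authors' QC pipeline.

```python
import numpy as np

def effective_dimension(X, energy=0.95):
    """Number of singular values needed to capture `energy` of the power in a
    channels x samples data matrix (each channel zero-meaned first)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    s = np.linalg.svd(Xc, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(frac, energy) + 1)

# Toy array: 16 channels sharing one coherent arrival plus weak sensor noise.
rng = np.random.default_rng(4)
n_ch, n_samp = 16, 2000
common = rng.normal(size=n_samp)                        # coherent wavefield component
X = 0.2 * rng.normal(size=(n_ch, n_samp)) + common
print(effective_dimension(X))                           # typically 1: one coherent mode

X_bad = X.copy()
X_bad[5] = 4.0 * rng.normal(size=n_samp)                # channel 5 malfunctioning (independent noise)
print(effective_dimension(X_bad))                       # typically 2: the bad channel adds a mode
```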

  14. Lightweight solar array blanket tooling, laser welding and cover process technology

    NASA Technical Reports Server (NTRS)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  15. Neural Adaptation and Behavioral Measures of Temporal Processing and Speech Perception in Cochlear Implant Recipients

    PubMed Central

    Zhang, Fawen; Benson, Chelsea; Murphy, Dora; Boian, Melissa; Scott, Michael; Keith, Robert; Xiang, Jing; Abbas, Paul

    2013-01-01

    The objective was to determine if one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are the strongest at the beginning of the stimulus and decline following stimulus repetition (e.g., stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or speech perception. The adaptation of the electrical compound action potential (ECAP) was obtained using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. The adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically, through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant nucleus consonant (CNC) word and AzBio sentences were also tested. The results showed that both ECAP and LAEP display adaptive patterns, with a substantial across-subject variability in the amount of adaptation. No correlations between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores were found. The correlations between the degree of neural adaptation and demographic factors showed that CI users having more LAEP adaptation were likely to be those implanted at a younger age than CI users with less LAEP adaptation. The results suggested that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in the CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group compared to the normal hearing group may suggest the important role of normal adaptation pattern at the

  16. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    PubMed Central

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts, during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained either for pure or contaminated samples with controlled concentrations of standard ethanol solutions of impurities. Results of data processing with Principal Component Analysis (PCA), demonstrate that this sensing system could discriminate between a normal and a potential spoiled grape must fermentation process, so this gas sensing system could be potentially applied during wine production as an auxiliary qualitative control instrument. PMID:25184490

  17. In Experts, underlying processes that drive visuomotor adaptation are different than in Novices

    PubMed Central

    Leukel, Christian; Gollhofer, Albert; Taube, Wolfgang

    2015-01-01

    Processes responsible for improvements in motor performance are often contrasted in an explicit and an implicit part. Explicit learning enables task success by using strategic (declarative) knowledge. Implicit learning refers to a change in motor performance without conscious effort. In this study, we tested the contribution of explicit and implicit processes in a visuomotor adaptation task in subjects with different expertise in the task they were asked to adapt. Thirty handball players (Experts) and 30 subjects without handball experience (Novices) participated. Three experiments tested visuomotor adaptation of a free throw in team handball using prismatic glasses. The difference between experiments was that in Experiment 2 and 3, contribution of explicit processes was prevented, whereas Experiment 1 allowed contribution of explicit and implicit processes. Retention was assessed in Experiment 3. There were three main findings: (i) contribution of explicit processes to adaptation was stronger in Experts than Novices (Experiment 1); (ii) adaptation took longer in Experts when preventing contribution of explicit processes (Experiment 2); and (iii) retention was stronger in Experts (Experiment 3). This study shows that learning processes involved in visuomotor adaptation change by expertise, with more involvement of explicit processes and most likely other implicit processes to adaptation in Experts. PMID:25713526

  18. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described

  19. A Dry-Etch Process for Low Temperature Superconducting Transition Edge Sensors for Far Infrared Bolometer Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Christine A.; Chervenak, James A.; Hsieh, Wen-Ting; McClanahan, Richard A.; Miller, Timothy M.; Mitchell, Robert; Moseley, S. Harvey; Staguhn, Johannes; Stevenson, Thomas R.

    2003-01-01

    The next generation of ultra-low power bolometer arrays, with applications in far infrared imaging, spectroscopy and polarimetry, utilizes a superconducting bilayer as the sensing element to enable SQUID multiplexed readout. Superconducting transition edge sensors (TESs) are being produced with dual-metal systems of superconducting/normal bilayers. The transition temperature (Tc) is tuned by altering the relative thickness of the superconductor with respect to the normal layer. We are currently investigating MoAu and MoCu bilayers. We have developed a dry-etching process for MoAu TESs with integrated molybdenum leads, and are working on adapting the process to MoCu. Dry etching has the advantage over wet etching in the MoAu system in that one can achieve a high degree of selectivity, greater than 10, using argon RIE, or argon ion milling, for patterning gold on molybdenum. Molybdenum leads are subsequently patterned using fluorine plasma. The dry-etch technique results in a smooth, featureless TES with sharp sidewalls, no undercutting of the Mo beneath the normal metal, and Mo leads with high critical current. The effects of individual processing parameters on the characteristics of the transition will be reported.

  20. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  1. Improving risk understanding across ability levels: Encouraging active processing with dynamic icon arrays.

    PubMed

    Okan, Yasmina; Garcia-Retamero, Rocio; Cokely, Edward T; Maldonado, Antonio

    2015-06-01

    Icon arrays have been found to improve risk understanding and reduce judgment biases across a wide range of studies. Unfortunately, individuals with low graph literacy experience only limited benefits from such displays. To enhance the efficacy and reach of these decision aids, the authors developed and tested 3 types of dynamic design features--that is, computerized display features that unfold over time. Specifically, the authors manipulated the sequential presentation of the different elements of icon arrays, the presence of explanatory labels indicating what was depicted in the different regions of the arrays, and the use of a reflective question followed by accuracy feedback. The first 2 features were designed to promote specific cognitive processes involved in graph comprehension, whereas the 3rd feature was designed to promote a more active, elaborative processing of risk information. Explanatory labels were effective in improving risk understanding among less graph-literate participants, whereas reflective questions resulted in large and robust performance benefits among participants with both low and high graph literacy. Theoretical and prescriptive implications are discussed. PMID:25938975

  2. Phased Arrays Techniques and Split Spectrum Processing for Inspection of Thick Titanium Casting Components

    NASA Astrophysics Data System (ADS)

    Banchet, J.; Sicard, R.; Zellouf, D. E.; Chahbaz, A.

    2003-03-01

    In aircraft structures, titanium parts and engine members are critical structural components, and their inspection is crucial. However, these structures are very difficult to inspect ultrasonically because of their large grain structure, which increases noise drastically. In this work, phased array inspection setups were developed to detect small defects such as simulated inclusions and porosity contained in thick titanium casting blocks, which are frequently used in the aerospace industry. A Split Spectrum Processing (SSP)-based algorithm was then applied to the acquired data by employing a set of parallel bandpass filters with different center frequencies. This process led to a substantial improvement in the signal-to-noise ratio and, thus, in detectability.
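    As an illustration of Split Spectrum Processing, the sketch below filters a waveform through a bank of Gaussian bandpass filters and recombines the band envelopes with the classic minimization rule. The band centers, bandwidth, and the minimization recombination are textbook SSP choices assumed for the example, not necessarily the parameters used in this work.

```python
import numpy as np

def split_spectrum_minimize(x, fs, centers, bw):
    """Split Spectrum Processing sketch: pass x through a bank of Gaussian
    bandpass filters (analytic, positive frequencies only) and recombine the
    band envelopes with the minimization rule min_k |y_k(t)|."""
    n = x.size
    f = np.fft.fftfreq(n, d=1.0 / fs)
    X = np.fft.fft(x)
    envelopes = []
    for fc in centers:
        H = np.exp(-0.5 * ((f - fc) / bw) ** 2) * (f > 0)   # analytic Gaussian band
        envelopes.append(np.abs(np.fft.ifft(X * H)))         # band envelope
    return np.array(envelopes).min(axis=0)

# Toy example: a short 5 MHz echo at sample 500 in white "grain-like" noise.
rng = np.random.default_rng(5)
fs, n = 100e6, 2048
t = np.arange(n) / fs
echo = np.exp(-0.5 * ((np.arange(n) - 500) / 6.0) ** 2) * np.cos(2 * np.pi * 5e6 * t)
x = echo + 0.3 * rng.normal(size=n)
centers = np.linspace(4e6, 6e6, 8)     # band centers spread across the pulse spectrum
y = split_spectrum_minimize(x, fs, centers, bw=0.5e6)
print(np.argmax(y))                    # expected near sample 500
```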

  3. Media processing with field-programmable gate arrays on a microprocessor's local bus

    NASA Astrophysics Data System (ADS)

    Bove, V. Michael, Jr.; Lee, Mark; Liu, Yuan-Min; McEniry, Christopher; Nwodoh, Thomas A.; Watlington, John A.

    1998-12-01

    The Chidi system is a PCI-bus media processor card which performs its processing tasks on a large field-programmable gate array (Altera 10K100) in conjunction with a general purpose CPU (PowerPC 604e). Special address-generation and buffering logic (also implemented on FPGAs) allows the reconfigurable processor to share a local bus with the CPU, turning burst accesses to memory into continuous streams and converting between the memory's 64-bit words and the media data types. In this paper we present the design requirements for the Chidi system, describe the hardware architecture, and discuss the software model for its use in media processing.

  4. The Magellan Adaptive Secondary VisAO Camera: diffraction-limited broadband visible imaging and 20mas fiber array IFU

    NASA Astrophysics Data System (ADS)

    Kopon, Derek; Close, Laird M.; Males, Jared; Gasho, Victor; Follette, Katherine

    2010-07-01

    The Magellan Adaptive Secondary AO system, scheduled for first light in the fall of 2011, will be able to simultaneously perform diffraction limited AO science in both the mid-IR, using the BLINC/MIRAC4 10μm camera, and in the visible using our novel VisAO camera. The VisAO camera will be able to operate as either an imager, using a CCD47 with 8.5 mas pixels, or as an IFS, using a custom fiber array at the focal plane with 20 mas elements in its highest resolution mode. In imaging mode, the VisAO camera will have a full suite of filters, coronagraphic focal plane occulting spots, and SDI prism/filters. The imaging mode should provide ~20% mean Strehl diffraction-limited images over the band 0.5-1.0 μm. In IFS mode, the VisAO instrument will provide R~1,800 spectra over the band 0.6-1.05 μm. Our unprecedented 20 mas spatially resolved visible spectra would be the highest spatial resolution achieved to date, either from the ground or in space. We also present lab results from our recently fabricated advanced triplet Atmospheric Dispersion Corrector (ADC) and the design of our novel wide-field acquisition and active optics lens. The advanced ADC is designed to perform 58% better than conventional doublet ADCs and is one of the enabling technologies that will allow us to achieve broadband (0.5-1.0μm) diffraction limited imaging and wavefront sensing in the visible.

  5. Avoiding sensor blindness in Geiger mode avalanche photodiode arrays fabricated in a conventional CMOS process

    NASA Astrophysics Data System (ADS)

    Vilella, E.; Diéguez, A.

    2011-12-01

    The need to move forward in the knowledge of the subatomic world has stimulated the development of new particle colliders. However, the objectives of the next generation of colliders set unprecedented challenges for detector performance. The purpose of this contribution is to present a bidimensional array based on avalanche photodiodes operated in the Geiger mode to track high energy particles in future linear colliders. The bidimensional array can function in a gated mode to reduce the probability of detecting noise counts that interfere with real events. Low reverse overvoltages are used to lessen the dark count rate. Experimental results demonstrate that the prototype, fabricated with a standard HV-CMOS process, presents an increased efficiency and avoids sensor blindness by applying the proposed techniques.

  6. Adaptation as a Political Process: Adjusting to Drought and Conflict in Kenya's Drylands

    NASA Astrophysics Data System (ADS)

    Eriksen, Siri; Lind, Jeremy

    2009-05-01

    In this article, we argue that people’s adjustments to multiple shocks and changes, such as conflict and drought, are intrinsically political processes that have uneven outcomes. Strengthening local adaptive capacity is a critical component of adapting to climate change. Based on fieldwork in two areas in Kenya, we investigate how people seek to access livelihood adjustment options and promote particular adaptation interests through forming social relations and political alliances to influence collective decision-making. First, we find that, in the face of drought and conflict, relations are formed among individuals, politicians, customary institutions, and government administration aimed at retaining or strengthening power bases in addition to securing material means of survival. Second, national economic and political structures and processes affect local adaptive capacity in fundamental ways, such as through the unequal allocation of resources across regions, development policy biased against pastoralism, and competition for elected political positions. Third, conflict is part and parcel of the adaptation process, not just an external factor inhibiting local adaptation strategies. Fourth, there are relative winners and losers of adaptation, but whether or not local adjustments to drought and conflict compound existing inequalities depends on power relations at multiple geographic scales that shape how conflicting interests are negotiated locally. Climate change adaptation policies are unlikely to be successful or minimize inequity unless the political dimensions of local adaptation are considered; however, existing power structures and conflicts of interests represent political obstacles to developing such policies.

  7. Process development for automated solar cell and module production. Task 4: automated array assembly

    SciTech Connect

    Hagerty, J.J.

    1980-06-30

    The scope of work under this contract involves specifying a process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use. This process sequence is then critically analyzed from a technical and economic standpoint to determine the technological readiness of each process step for implementation. The process steps are ranked according to the degree of development effort required and according to their significance to the overall process. Under this contract the steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development. Economic analysis using the SAMICS system has been performed during these studies to assure that development efforts have been directed towards the ultimate goal of price reduction. Details are given. (WHK)

  8. An FPGA-based High Speed Parallel Signal Processing System for Adaptive Optics Testbed

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, Y.; Yang, Y.

    In this paper, a state-of-the-art FPGA (Field Programmable Gate Array)-based high-speed parallel signal processing system (SPS) for an adaptive optics (AO) testbed with a 1 kHz wavefront error (WFE) correction frequency is reported. The AO system consists of a Shack-Hartmann sensor (SHS), a deformable mirror (DM), a tip-tilt sensor (TTS), a tip-tilt mirror (TTM) and an FPGA-based high-performance SPS to correct wavefront aberrations. The SHS is composed of 400 subapertures and the DM of 277 actuators in a Fried geometry, requiring an SPS with high-speed parallel computing capability. In this study, the target WFE correction speed is 1 kHz; therefore, it requires massive parallel computing capabilities as well as strict hard real-time constraints on measurements from sensors, matrix computation latency for correction algorithms, and output of control signals for actuators. In order to meet them, an FPGA-based real-time SPS with parallel computing capabilities is proposed. In particular, the SPS is made up of a National Instruments (NI) real-time computer and five FPGA boards based on the state-of-the-art Xilinx Kintex-7 FPGA. Programming is done with NI's LabVIEW environment, providing flexibility when applying different algorithms for WFE correction. It also facilitates a faster programming and debugging environment compared to conventional ones. One of the five FPGAs is assigned to measure the TTS and calculate control signals for the TTM, while the other four are used to receive the SHS signal and calculate slopes for each subaperture and the correction signal for the DM. With these parallel processing capabilities of the SPS, the overall closed-loop WFE correction speed of 1 kHz has been achieved. System requirements, architecture and implementation issues are described; furthermore, experimental results are also given.
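    The per-frame computation implied by this record (subaperture slope estimation from the Shack-Hartmann sensor followed by a matrix-vector reconstruction to actuator commands) can be sketched in NumPy as below. The centroid slope estimator, the placeholder random frame, and the random reconstructor matrix are assumptions for illustration only; the real system uses a calibrated reconstructor and runs on FPGAs.

```python
import numpy as np

def subaperture_slopes(frame, n_sub, sub_px):
    """Centroid-based x/y slopes for an n_sub x n_sub Shack-Hartmann grid.
    `frame` is (n_sub*sub_px) x (n_sub*sub_px); returns a slope vector of
    length 2*n_sub*n_sub (x slopes then y slopes), in pixels from subaperture center."""
    c = (sub_px - 1) / 2.0
    idx = np.arange(sub_px)
    xs, ys = [], []
    for i in range(n_sub):
        for j in range(n_sub):
            spot = frame[i * sub_px:(i + 1) * sub_px, j * sub_px:(j + 1) * sub_px]
            tot = spot.sum() + 1e-12
            xs.append((spot.sum(axis=0) @ idx) / tot - c)   # x centroid offset
            ys.append((spot.sum(axis=1) @ idx) / tot - c)   # y centroid offset
    return np.concatenate([xs, ys])

# Toy closed-loop step: slopes -> actuator commands via a precomputed reconstructor.
n_sub, sub_px, n_act = 20, 8, 277            # 400 subapertures, 277 actuators (as in the record)
rng = np.random.default_rng(6)
frame = rng.random((n_sub * sub_px, n_sub * sub_px))         # placeholder SHS frame
R = rng.normal(size=(n_act, 2 * n_sub * n_sub)) * 1e-3       # placeholder reconstructor matrix
s = subaperture_slopes(frame, n_sub, sub_px)
dm_command = R @ s                                           # per-frame matrix-vector product
print(s.shape, dm_command.shape)                             # (800,) (277,)
```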

  9. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    NASA Astrophysics Data System (ADS)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used to fabricate a high-aspect-ratio microneedle master mold instead of the conventional time-consuming and expensive photolithography and etching processes. The use of flexible polydimethylsiloxane (PDMS) is crucial for detaching the PLGA. However, direct CO2 laser ablation of PDMS can generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined polymethyl methacrylate (PMMA) ablation with a two-step PDMS casting process to form a PDMS female microneedle mold and thereby eliminate the problems of direct ablation. A self-assembled monolayer of polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step of the PDMS-to-PDMS replication. The PLGA microneedle array was then successfully released by bending the flexible, hydrophobic second-cast PDMS mold. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters; it is linked to the PMMA pattern profile and can be adjusted via CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  10. Context-Aware Design for Process Flexibility and Adaptation

    ERIC Educational Resources Information Center

    Yao, Wen

    2012-01-01

    Today's organizations face continuous and unprecedented changes in their business environment. Traditional process design tools tend to be inflexible and can only support rigidly defined processes (e.g., order processing in the supply chain). This considerably restricts their real-world application value, especially in the dynamic and…

  11. Adaptive information processing in auditory cortex. Annual report, 1 June 1987-31 May 1988

    SciTech Connect

    Weinberger, N.M.

    1988-05-31

    The fact that learning induces frequency-specific modification of receptive fields in auditory cortex implies that the functional organization of auditory (and perhaps other sensory) cortex comprises an adaptively-constituted information base. This project initiates the first systematic investigation of adaptive information processing in cerebral cortex. A major goal is to determine the circumstances under which adaptive information processing is induced by experience. This project also addresses central hypotheses about rules that govern adaptive information processing, at three levels of spatial scale: (a) parallel processing in different auditory fields; (b) modular processing in different cortical laminae within fields; (c) local processing in different neurons within the same locus within a lamina. The author emphasized determining the learning circumstances under which adaptive information processing is invoked by the brain. Current studies reveal that the frequency receptive fields of neurons in the auditory cortex, and the physiologically plastic magnocellular medial geniculate nucleus, develop frequency-specific modification such that maximal shifts in tuning are at or adjacent to the signal frequency. Further, this adaptive re-tuning of neurons develops rapidly during habituation, classical conditioning, and instrumental avoidance conditioning. The generality of re-tuning has established that AIP during learning represents a general brain strategy for the acquisition and subsequent processing of information.

  12. Concurrent processing adaptation of aeroelastic analysis of propfans

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1990-01-01

    Discussed here is a study involving the adaptation of an advanced aeroelastic analysis program to run concurrently on a shared memory multiple processor computer. The program uses a three-dimensional compressible unsteady aerodynamic model and blade normal modes to calculate aeroelastic stability and response of propfan blades. The identification of the computational parallelism within the sequential code and the scheduling of the concurrent subtasks to minimize processor idle time are discussed. Processor idle time in the calculation of the unsteady aerodynamic coefficients was reduced by the simple strategy of appropriately ordering the computations. Speedup and efficiency results are presented for the calculation of the matched flutter point of an experimental propfan model. The results show that efficiencies above 70 percent can be obtained using the present implementation with 7 processors. The parallel computational strategy described here is also applicable to other aeroelastic analysis procedures based on panel methods.

  13. A single-rate context-dependent learning process underlies rapid adaptation to familiar object dynamics.

    PubMed

    Ingram, James N; Howard, Ian S; Flanagan, J Randall; Wolpert, Daniel M

    2011-09-01

    Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics
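
    The class of model described here can be illustrated with a minimal single-rate, multiple-context state-space sketch. The retention factor, learning rate, tuning width and orientation set below are assumptions for illustration, not the authors' fitted parameters; each trained object orientation keeps its own adaptation state, and an orientation-tuned generalization function spreads each error-driven update across neighbouring contexts.

    ```python
    import numpy as np

    A, B, sigma = 0.995, 0.25, 30.0                 # retention, learning rate, tuning width (deg) - assumed
    contexts = np.array([0.0, 90.0, 180.0, 270.0])  # object orientations with separate states
    x = np.zeros_like(contexts)                     # one adaptation state per context

    def g(theta):
        """Orientation-tuned generalization weights for a trial at orientation theta."""
        d = np.abs((theta - contexts + 180.0) % 360.0 - 180.0)   # angular distance to each context
        return np.exp(-0.5 * (d / sigma) ** 2)

    def trial(x, theta, dynamics=1.0):
        """One trial at orientation theta (dynamics=0.0 would model a de-adaptation trial)."""
        w = g(theta)
        prediction = np.dot(w / w.sum(), x)          # current compensation for the object
        error = dynamics - prediction                # familiar object dynamics normalized to 1
        return A * x + B * error * w                 # single-rate update, spread by generalization

    for _ in range(30):                              # expose the object at 0 degrees only
        x = trial(x, 0.0)
    print(np.round(x, 2))   # largest state at 0 deg, limited generalization to the other orientations
    ```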

  14. Process Development for Automated Solar Cell and Module Production. Task 4: Automated Array Assembly

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A baseline sequence for the manufacture of solar cell modules was specified. Starting with silicon wafers, the process goes through damage etching, texture etching, junction formation, plasma edge etch, aluminum back surface field formation, and screen-printed metallization to produce finished solar cells. The cells were then series-connected on a ribbon and bonded into a finished glass-Tedlar module. A number of steps required additional developmental effort to verify technical and economic feasibility. These steps include texture etching, plasma edge etch, aluminum back surface field formation, array layup and interconnect, and module edge sealing and framing.

  15. Evaluation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Array

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Liddicoat, Albert; Ralston, Jesse; Pingree, Paula

    2006-01-01

    The current implementation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays (TRIGA) is equipped with CFDP protocol and CCSDS Telemetry and Telecommand framing schemes to replace the CPU intensive software counterpart implementation for reliable deep space communication. We present the hardware/software co-design methodology used to accomplish high data rate throughput. The hardware CFDP protocol stack implementation is then compared against the two recent flight implementations. The results from our experiments show that TRIGA offers more than 3 orders of magnitude throughput improvement with less than one-tenth of the power consumption.

  16. The Role of Water Vapor and Dissociative Recombination Processes in Solar Array Arc Initiation

    NASA Technical Reports Server (NTRS)

    Galofar, J.; Vayner, B.; Degroot, W.; Ferguson, D.

    2002-01-01

    Experimental plasma arc investigations involving the onset of arc initiation for a negatively biased solar array immersed in low-density plasma have been performed. Previous studies into the arc initiation process have shown that the most probable arcing sites tend to occur at the triple junction involving the conductor, dielectric and plasma. More recently our own experiments have led us to believe that water vapor is the main causal factor behind the arc initiation process. Assuming the main component of the expelled plasma cloud by weight is water, the fastest process available is dissociative recombination (H2O(+) + e(-) → H* + OH*). A model that agrees with the observed dependency of arc current pulse width on the square root of capacitance is presented. A 400 MHz digital storage scope and current probe were used to detect arcs at the triple junction of a solar array. Simultaneous measurements of the arc trigger pulse, the gate pulse, the arc current and the arc voltage were then obtained. Finally, a large number of measurements of individual arc spectra were obtained in very short time intervals, ranging from 10 to 30 microseconds, using a 1/4-m spectrometer coupled with a gated intensified CCD. The spectrometer was systematically tuned to obtain optical arc spectra over the entire wavelength range of 260 to 680 nanometers. All relevant atomic lines and molecular bands were then identified.

  17. Automatic defect detection for TFT-LCD array process using quasiconformal kernel support vector data description.

    PubMed

    Liu, Yi-Hung; Chen, Yan-Jen

    2011-01-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves generalization performance of SVDD. The improvement was shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms. PMID:22016625
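
    The quasiconformal transformation referred to here rescales a base kernel as K~(x, y) = D(x)D(y)K(x, y), which stretches the induced metric near chosen centers. The sketch below illustrates that generic idea only; the paper's exact scaling function, SVDD solver and image features are not reproduced, a one-class SVM stands in for SVDD, and all data and parameters are placeholders.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                 # stand-in "defect-free" training feature vectors

    def rbf(A, B, gamma=0.2):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    # First pass: plain one-class SVM (SVDD stand-in) to locate boundary samples.
    base = OneClassSVM(kernel="precomputed", nu=0.1).fit(rbf(X, X))
    sv = X[base.support_]

    def D(A, tau=1.0):
        """Conformal scaling built from Gaussians centered on the boundary samples (assumed form)."""
        return rbf(A, sv, gamma=1.0 / (2 * tau ** 2)).sum(1) + 1e-3

    # Second pass: retrain with the quasiconformally transformed kernel K~ = D(x) D(y) K(x, y).
    K_tilde = D(X)[:, None] * D(X)[None, :] * rbf(X, X)
    qk = OneClassSVM(kernel="precomputed", nu=0.1).fit(K_tilde)

    X_test = rng.normal(size=(10, 5))
    K_test = D(X_test)[:, None] * D(X)[None, :] * rbf(X_test, X)
    print(qk.predict(K_test))                     # +1 = normal, -1 = defect candidate
    ```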

  18. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
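
    Of the three reconstruction options named above, the Center-of-Gravity (Anger logic) estimate is the simplest; a minimal sketch for a single event is given below. The PMT geometry and light-response values are illustrative only, not detector data from the paper.

    ```python
    import numpy as np

    pmt_xy = np.array([[x, y] for x in range(-2, 3) for y in range(-2, 3)], float)   # 5x5 PMT grid (assumed)
    signals = np.exp(-0.5 * ((pmt_xy - [0.7, -0.3]) ** 2).sum(1) / 0.8)              # fake light response to one event

    def cog(pmt_xy, signals, threshold=0.05):
        """Energy = summed signal; position = signal-weighted mean of PMT centres."""
        s = np.where(signals > threshold * signals.max(), signals, 0.0)   # suppress noise-only PMTs
        energy = s.sum()
        position = (s[:, None] * pmt_xy).sum(0) / energy
        return position, energy

    print(cog(pmt_xy, signals))   # roughly (0.7, -0.3), up to the usual CoG bias toward the array centre
    ```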

  19. Adaptable Particle-in-Cell Algorithms for Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor; Singh, Tajendra

    2010-11-01

    Emerging computer architectures consist of an increasing number of shared memory computing cores in a chip, often with vector (SIMD) co-processors. Future exascale high performance systems will consist of a hierarchy of such nodes, which will require different algorithms at different levels. Since no one knows exactly how the future will evolve, we have begun development of an adaptable Particle-in-Cell (PIC) code, whose parameters can match different hardware configurations. The data structures reflect three levels of parallelism: contiguous vectors, non-contiguous blocks of vectors which can share memory, and groups of blocks which do not. Particles are kept ordered at each time step, and the size of a sorting cell is an adjustable parameter. We have implemented a simple 2D electrostatic skeleton code whose inner loop (containing 6 subroutines) runs entirely on the NVIDIA Tesla C1060. We obtained speedups of about 16-25 compared to a 2.66 GHz Intel i7 (Nehalem), depending on the plasma temperature, with an asymptotic limit of 40 for a frozen plasma. We expect speedups of about 70 for a 2D electromagnetic code and about 100 for a 3D electromagnetic code, which have higher computational intensities (more flops/memory access).
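
    The particle-ordering idea mentioned above can be sketched in a few lines: particles are binned by a sorting cell of adjustable size so that the deposit and gather loops touch contiguous memory. Grid size, sorting-cell size and particle count below are arbitrary, not the paper's configuration, and the host-side argsort stands in for the parallel sort a GPU implementation would use.

    ```python
    import numpy as np

    nx, ny = 64, 64
    sort_cell = 4                                  # adjustable sorting-cell size (in grid cells)
    pos = np.random.rand(100_000, 2) * [nx, ny]    # fake particle positions

    def order_particles(pos):
        cx = (pos[:, 0] // sort_cell).astype(int)
        cy = (pos[:, 1] // sort_cell).astype(int)
        keys = cy * (nx // sort_cell) + cx         # linear sorting-cell index
        perm = np.argsort(keys, kind="stable")     # on a GPU this would be a parallel counting sort
        return pos[perm], keys[perm]

    pos, keys = order_particles(pos)               # particles in the same cell are now adjacent in memory
    ```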

  20. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    SciTech Connect

    Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.

  1. Local adaptation in Trinidadian guppies alters ecosystem processes.

    PubMed

    Bassar, Ronald D; Marshall, Michael C; López-Sepulcre, Andrés; Zandonà, Eugenia; Auer, Sonya K; Travis, Joseph; Pringle, Catherine M; Flecker, Alexander S; Thomas, Steven A; Fraser, Douglas F; Reznick, David N

    2010-02-23

    Theory suggests evolutionary change can significantly influence and act in tandem with ecological forces via ecological-evolutionary feedbacks. This theory assumes that significant evolutionary change occurs over ecologically relevant timescales and that phenotypes have differential effects on the environment. Here we test the hypothesis that local adaptation causes ecosystem structure and function to diverge. We demonstrate that populations of Trinidadian guppies (Poecilia reticulata), characterized by differences in phenotypic and population-level traits, differ in their impact on ecosystem properties. We report results from a replicated, common garden mesocosm experiment and show that differences between guppy phenotypes result in the divergence of ecosystem structure (algal, invertebrate, and detrital standing stocks) and function (gross primary productivity, leaf decomposition rates, and nutrient flux). These phenotypic effects are further modified by effects of guppy density. We evaluated the generality of these effects by replicating the experiment using guppies derived from two independent origins of the phenotype. Finally, we tested the ability of multiple guppy traits to explain observed differences in the mesocosms. Our findings demonstrate that evolution can significantly affect both ecosystem structure and function. The ecosystem differences reported here are consistent with patterns observed across natural streams and argue that guppies play a significant role in shaping these ecosystems. PMID:20133670

  2. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    NASA Astrophysics Data System (ADS)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  3. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    NASA Astrophysics Data System (ADS)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-03-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  4. Process- and controller-adaptations determine the physiological effects of cold acclimation.

    PubMed

    Werner, Jürgen

    2008-09-01

    Experimental results on physiological effects of cold adaptation seem confusing and apparently incompatible with one another. This paper will explain that a substantial part of such a variety of results may be deduced from a common functional concept. A core/shell treatment ("model") of the thermoregulatory system is used with mean body temperature as the controlled variable. Adaptation, as a higher control level, is introduced into the system. Due to persistent stressors, either the (heat transfer) process or the controller properties (parameters) are adjusted (or both). It is convenient to call the one "process adaptation" and the other "controller adaptation". The most commonly demonstrated effect of autonomic cold acclimation is a change in the controller threshold. The analysis shows that this necessarily means a lowering of body temperature because of a lowered metabolic rate. This explains experimental results on both Europeans in the climatic chamber and Australian Aborigines in a natural environment. Exclusive autonomic process adaptation occurs in the form of a better insulation. The analysis explains why the post-adaptive steady state can only be achieved if the controller system reduces metabolism, and why, in spite of this, the new state is inevitably characterized by a rise in body temperature. If both process and controller adaptations are simultaneously present, there may be no change in body temperature at all, e.g., as demonstrated in animal experiments. Whether this kind of adaptation delivers a decrease, an increase or no change of mean body temperature depends on the proportion of process and controller adaptation. PMID:18026979
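
    The distinction between the two adaptation routes can be reproduced with a toy steady-state balance, shown below only as an illustration of the reasoning (all constants are invented, and the model is far simpler than the core/shell treatment in the paper): heat production follows a proportional controller around a set point, heat loss is conductance times the body-ambient difference, and the steady state is where the two are equal.

    ```python
    # M0 + g*(T_set - T_body) = K*(T_body - T_amb)  ->  solve for T_body
    def steady_state(T_set, K, T_amb=5.0, M0=75.0, g=100.0):
        """Return steady-state mean body temperature (deg C) and metabolic rate (arbitrary W)."""
        T_body = (M0 + g * T_set + K * T_amb) / (g + K)
        M = K * (T_body - T_amb)
        return round(T_body, 2), round(M, 1)

    print("pre-adaptive          :", steady_state(T_set=37.0, K=3.0))
    print("controller adaptation :", steady_state(T_set=36.0, K=3.0))   # lower set point -> lower T_body and lower M
    print("process adaptation    :", steady_state(T_set=37.0, K=2.0))   # better insulation -> lower M but higher T_body
    ```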

  5. Effects of Crowding and Attention on High-Levels of Motion Processing and Motion Adaptation

    PubMed Central

    Pavan, Andrea; Greenlee, Mark W.

    2015-01-01

    The motion after-effect (MAE) persists in crowding conditions, i.e., when the adaptation direction cannot be reliably perceived. The MAE originating from complex moving patterns spreads into non-adapted sectors of a multi-sector adapting display (i.e., phantom MAE). In the present study we used global rotating patterns to measure the strength of the conventional and phantom MAEs in crowded and non-crowded conditions, and when attention was directed to the adapting stimulus and when it was diverted away from the adapting stimulus. The results show that: (i) the phantom MAE is weaker than the conventional MAE, for both non-crowded and crowded conditions, and when attention was focused on the adapting stimulus and when it was diverted from it, (ii) conventional and phantom MAEs in the crowded condition are weaker than in the non-crowded condition. Analysis conducted to assess the effect of crowding on high-level motion adaptation suggests that crowding is likely to affect the awareness of the adapting stimulus rather than degrading its sensory representation, (iii) for high-level motion processing the attentional manipulation does not affect the strength of either conventional or phantom MAEs, in either the non-crowded or the crowded conditions. These results suggest that high-level MAEs do not depend on attention and that at high levels of motion adaptation the effects of crowding are not modulated by attention. PMID:25615577

  6. Adaptive Memory: The Evolutionary Significance of Survival Processing.

    PubMed

    Nairne, James S; Pandeirada, Josefa N S

    2016-07-01

    A few seconds of survival processing, during which people assess the relevance of information to a survival situation, produces particularly good retention. One interpretation of this benefit is that our memory systems are optimized to process and retain fitness-relevant information. Such a "tuning" may exist, in part, because our memory systems were shaped by natural selection, using a fitness-based criterion. However, recent research suggests that traditional mnemonic processes, such as elaborative processing, may play an important role in producing the empirical benefit. Boundary conditions have been demonstrated as well, leading some to dismiss evolutionary interpretations of the phenomenon. In this article, we discuss the current state of the evolutionary account and provide a general framework for evaluating evolutionary and purportedly nonevolutionary interpretations of mnemonic phenomena. We suggest that survival processing effects are best viewed within the context of a general survival optimization system, designed by nature to help organisms deal with survival challenges. An important component of survival optimization is the ability to simulate activities that help to prevent or escape from future threats which, in turn, depends in an important way on accurate retrospective remembering of survival-relevant information. PMID:27474137

  7. Fabrication of microlens arrays on a glass substrate by roll-to-roll process with PDMS mold

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Nying; Su, Guo-Dung J.

    2009-08-01

    This paper presents a roll-to-roll method to fabricate microlens arrays on a glass substrate by using a cost-effective PDMS (polydimethylsiloxane) mold. We fabricated a microlens array mold in photoresist (AZ4620) on a silicon substrate by a thermal reflow process and transferred the pattern to a PDMS film. The roll-to-roll system is a standard printing process whose roller is an acrylic cylinder wrapped with the PDMS mold. UV resin was chosen as the material for forming the microlenses in the rolling process with UV light curing. We investigated the quality of the microlens arrays while varying parameters such as embossing pressure and rolling speed to ensure good quality.

  8. A multilevel examination of the relationships among training outcomes, mediating regulatory processes, and adaptive performance.

    PubMed

    Chen, Gilad; Thomas, Brian; Wallace, J Craig

    2005-09-01

    This study examined whether cognitive, affective-motivational, and behavioral training outcomes relate to posttraining regulatory processes and adaptive performance similarly at the individual and team levels of analysis. Longitudinal data were collected from 156 individuals composing 78 teams who were trained on and then performed a simulated flight task. Results showed that posttraining regulation processes related similarly to adaptive performance across levels. Also, regulation processes fully mediated the influences of self- and collective efficacy beliefs on individual and team adaptive performance. Finally, knowledge and skill more strongly and directly related to adaptive performance at the individual than the team level of analysis. Implications to theory and practice, limitations, and future directions are discussed. PMID:16162057

  9. Sub-threshold signal processing in arrays of non-identical nanostructures.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafé, Salvador

    2011-10-28

    Weak input signals are routinely processed by molecular-scaled biological networks composed of non-identical units that operate correctly in a noisy environment. In order to show that artificial nanostructures can mimic this behavior, we explore theoretically noise-assisted signal processing in arrays of metallic nanoparticles functionalized with organic ligands that act as tunneling junctions connecting the nanoparticle to the external electrodes. The electronic transfer through the nanostructure is based on the Coulomb blockade and tunneling effects. Because of the fabrication uncertainties, these nanostructures are expected to show a high variability in their physical characteristics and a diversity-induced static noise should be considered together with the dynamic noise caused by thermal fluctuations. This static noise originates from the hardware variability and produces fluctuations in the threshold potential of the individual nanoparticles arranged in a parallel array. The correlation between different input (potential) and output (current) signals in the array is analyzed as a function of temperature, applied voltage, and the variability in the electrical properties of the nanostructures. Extensive kinetic Monte Carlo simulations with nanostructures whose basic properties have been demonstrated experimentally show that variability can enhance the correlation, even for the case of weak signals and high variability, provided that the signal is processed by a sufficiently high number of nanostructures. Moderate redundancy permits us not only to minimize the adverse effects of the hardware variability but also to take advantage of the nanoparticles' threshold fluctuations to increase the detection range at low temperatures. This conclusion holds for the average behavior of a moderately large statistical ensemble of non-identical nanostructures processing different types of input signals and suggests that variability could be beneficial for signal processing
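
    The generic picture of static threshold dispersion plus dynamic noise in a parallel array can be illustrated with the toy model below. It is only a stand-in for the behavior described above: the paper itself uses kinetic Monte Carlo simulations of single-electron (Coulomb blockade) transport, and all thresholds, noise levels and signal amplitudes here are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 64                                     # number of non-identical nanostructures in parallel
    static_spread = 0.3                        # fabrication ("static") variability of the thresholds
    thermal = 0.2                              # dynamic (thermal) noise amplitude
    thresholds = 0.5 + static_spread * rng.normal(size=N)

    t = np.linspace(0.0, 1.0, 2000)
    signal = 0.3 * np.sin(2 * np.pi * 5 * t)   # weak, sub-threshold input

    # Each unit conducts when input plus its own noise exceeds its own threshold;
    # the array output is the fraction of conducting units at each instant.
    noise = thermal * rng.normal(size=(N, t.size))
    output = ((signal + noise) > thresholds[:, None]).mean(axis=0)

    corr = np.corrcoef(signal, output)[0, 1]
    print(f"input-output correlation with {N} non-identical units: {corr:.2f}")
    ```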

  10. Post-Processing of the Full Matrix of Ultrasonic Transmit-Receive Array Data for Guided Wave Pipe Inspection

    NASA Astrophysics Data System (ADS)

    Velichko, A.; Wilcox, P. D.

    2009-03-01

    The paper describes a method for processing data from a guided wave transducer array on a pipe. The raw data set from such an array contains the full matrix of time-domain signals from each transmitter-receiver combination. It is shown that for certain configurations of an array the total focusing method can be applied which allows the array to be focused at every point on a pipe surface in both transmission and reception. The effect of array configuration parameters on the sensitivity of the proposed method to the random and coherent noise is discussed. Experimental results are presented using electromagnetic acoustic transducers (EMAT) for exciting and detecting the S0 Lamb wave mode in a 12 inch steel pipe at 200 kHz excitation frequency. The results show that using the imaging algorithm a 2-mm-diameter (0.08 wavelength) half-thickness hole can be detected.
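
    The total focusing method applied to such full-matrix data is, in essence, a delay-and-sum over every transmit-receive pair for each image point. The sketch below assumes straight rays at a single group velocity on a flat plate; the paper's guided-wave (S0) implementation on a pipe surface handles the geometry and dispersion more carefully, and the array layout, velocity and data here are placeholders.

    ```python
    import numpy as np

    c = 5300.0                                    # assumed S0 group velocity, m/s
    fs = 5e6                                      # sampling rate, Hz
    elems = np.column_stack([np.linspace(-0.05, 0.05, 16), np.zeros(16)])   # 16-element array positions, m

    def tfm(fmc, grid_pts):
        """fmc[tx, rx, t]: full matrix capture -> image value at each grid point (npts, 2)."""
        img = np.zeros(len(grid_pts))
        d = np.linalg.norm(grid_pts[:, None, :] - elems[None, :, :], axis=2)   # (npts, nelem) distances
        for tx in range(len(elems)):
            for rx in range(len(elems)):
                idx = np.round((d[:, tx] + d[:, rx]) / c * fs).astype(int)     # source -> point -> receiver delay
                idx = np.clip(idx, 0, fmc.shape[2] - 1)
                img += fmc[tx, rx, idx]
        return np.abs(img)

    fmc = np.random.randn(16, 16, 2000) * 0.01    # fake full-matrix data
    grid = np.array([[0.0, 0.03], [0.01, 0.04]])  # two image points, m
    print(tfm(fmc, grid))
    ```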

  11. The Adapted Dance Process: Planning, Partnering, and Performing

    ERIC Educational Resources Information Center

    Block, Betty A.; Johnson, Peggy V.

    2011-01-01

    This article contains specific planning, partnering, and performing techniques for fully integrating dancers with special needs into a dance pedagogy program. Each aspect is discussed within the context of the domains of learning. Fundamental partnering strategies are related to each domain as part of the integration process. The authors recommend…

  12. Adaptive healthcare processes for personalized emergency clinical pathways.

    PubMed

    Poulymenopoulou, M; Papakonstantinou, D; Malamateniou, F; Vassilacopoulos, G

    2014-01-01

    Pre-hospital and in-hospital emergency healthcare delivery involves a variety of activities and people that should be coordinated effectively in order to create an emergency care plan. Emergency care provided by emergency healthcare professionals can be improved by personalized emergency clinical pathways that are instances of relevant emergency clinical guidelines based on emergency case needs as well as on ambulance and hospital resource availability, while also enabling better resource use. Business Process Management Systems (BPMSs) in conjunction with semantic technologies can be used to support personalized emergency clinical pathways by incorporating clinical guidelines logic into the emergency healthcare processes at run-time according to emergency care context information (current emergency case and resource information). On these grounds, a framework is proposed that uses an ontology to model knowledge on emergency case medical history, on healthcare resource availability, on relevant clinical guidelines and on process logic; inference over this knowledge yields the most suitable process model for the case, in line with relevant clinical guidelines. PMID:25160219

  13. Computer simulation program is adaptable to industrial processes

    NASA Technical Reports Server (NTRS)

    Schultz, F. E.

    1966-01-01

    The Reaction kinetics ablation program /REKAP/, developed to simulate ablation of various materials, provides mathematical formulations for computer programs which can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.

  14. Phase velocity tomography of surface waves using ambient noise cross correlation and array processing

    NASA Astrophysics Data System (ADS)

    Boué, Pierre; Roux, Philippe; Campillo, Michel; Briand, Xavier

    2014-01-01

    Continuous recordings of ambient seismic noise across large seismic arrays allow a new type of processing using the cross-correlation technique on broadband data. We propose to apply double beamforming (DBF) to cross correlations to extract a particular wave component of the reconstructed signals. We focus here on the extraction of the surface waves to measure phase velocity variations with great accuracy. DBF acts as a spatial filter between two distant subarrays after cross correlation of the wavefield between each single receiver pair. During the DBF process, horizontal slowness and azimuth are used to select the wavefront on both subarray sides. DBF increases the signal-to-noise ratio, which improves the extraction of the dispersive wave packets. This combination of cross correlation and DBF is used on the Transportable Array (USArray), for the central U.S. region. A standard model of surface wave propagation is constructed from a combination of the DBF and cross correlations at different offsets and for different frequency bands. The perturbation (phase shift) between each beam and the standard model is inverted. High-resolution maps of the phase velocity of Rayleigh and Love waves are then constructed. Finally, the addition of azimuthal information provided by DBF is discussed, to construct curved rays that replace the classical great-circle path assumption.
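
    The double-beamforming step can be sketched as a delay-and-sum over both subarrays of the cross-correlation matrix, using plane-wave delays for a chosen slowness and azimuth on each side. The geometry, sampling rate, sign convention and data below are placeholders, not the USArray configuration used in the paper.

    ```python
    import numpy as np

    fs = 1.0                                          # samples per second (assumed)

    def plane_delays(xy_km, slowness_s_per_km, azimuth_rad):
        u = slowness_s_per_km * np.array([np.sin(azimuth_rad), np.cos(azimuth_rad)])   # slowness vector
        return xy_km @ u                              # seconds, relative to the subarray origin

    def double_beam(C, xy_a, xy_b, s_a, az_a, s_b, az_b):
        """C[i, j, t]: correlation between station i of subarray A and station j of subarray B."""
        na, nb, nt = C.shape
        ta = plane_delays(xy_a, s_a, az_a)
        tb = plane_delays(xy_b, s_b, az_b)
        beam = np.zeros(nt)
        for i in range(na):
            for j in range(nb):
                shift = int(round((tb[j] - ta[i]) * fs))   # align each pair to the origin-to-origin travel time
                beam += np.roll(C[i, j], -shift)
        return beam / (na * nb)

    xy_a, xy_b = 50.0 * np.random.rand(5, 2), 50.0 * np.random.rand(5, 2)   # station coordinates, km
    C = np.random.randn(5, 5, 1024)                                         # fake correlation traces
    beam = double_beam(C, xy_a, xy_b, s_a=0.30, az_a=0.0, s_b=0.30, az_b=np.pi)   # slowness in s/km
    ```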

  15. Adaptive control technique for accelerators using digital signal processing

    SciTech Connect

    Eaton, L.; Jachim, S.; Natter, E.

    1987-01-01

    The use of present Digital Signal Processing (DSP) techniques can drastically reduce the residual rf amplitude and phase error in an accelerating rf cavity. Accelerator beam loading contributes greatly to this residual error, and the low-level rf field control loops cannot completely absorb the fast transient of the error. A feedforward technique using DSP is required to maintain the very stringent rf field amplitude and phase specifications. 7 refs.
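
    Because beam loading is repetitive from pulse to pulse, a feedforward correction can be built up by adding a scaled copy of the residual error measured on one pulse into the correction table for the next. The iterative-learning form and gain below are assumptions used only to illustrate that idea, not the implementation in the report.

    ```python
    import numpy as np

    n = 512                                  # samples per rf pulse (assumed)
    g = 0.5                                  # feedforward update gain (assumed)
    ff = np.zeros(n, dtype=complex)          # complex (I/Q) feedforward table applied to the drive

    def next_pulse(ff, residual_error):
        """Update the feedforward table from the residual amplitude/phase error of the last pulse."""
        return ff + g * residual_error

    residual = np.full(n, 0.02 + 0.10j)      # stand-in measured residual (I/Q) over one pulse
    ff = next_pulse(ff, residual)            # repeated pulse to pulse, the residual shrinks toward zero
    ```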

  16. The influence of negative stimulus features on conflict adaption: evidence from fluency of processing.

    PubMed

    Fritz, Julia; Fischer, Rico; Dreisbach, Gesine

    2015-01-01

    Cognitive control enables adaptive behavior in a dynamically changing environment. In this context, one prominent adaptation effect is the sequential conflict adjustment, i.e., the observation of reduced response interference on trials following conflict trials. Increasing evidence suggests that such response conflicts are registered as aversive signals. So far, however, the functional role of this aversive signal for conflict adaptation to occur has not been put to test directly. In two experiments, the affective valence of conflict stimuli was manipulated by fluency of processing (stimulus contrast). Experiment 1 used a flanker interference task, Experiment 2 a color-word Stroop task. In both experiments, conflict adaptation effects were only present in fluent, but absent in disfluent trials. Results thus speak against the simple idea that any aversive stimulus feature is suited to promote specific conflict adjustments. Two alternative but not mutually exclusive accounts, namely resource competition and adaptation-by-motivation, will be discussed. PMID:25767453

  17. The influence of negative stimulus features on conflict adaption: evidence from fluency of processing

    PubMed Central

    Fritz, Julia; Fischer, Rico; Dreisbach, Gesine

    2015-01-01

    Cognitive control enables adaptive behavior in a dynamically changing environment. In this context, one prominent adaptation effect is the sequential conflict adjustment, i.e., the observation of reduced response interference on trials following conflict trials. Increasing evidence suggests that such response conflicts are registered as aversive signals. So far, however, the functional role of this aversive signal for conflict adaptation to occur has not been put to test directly. In two experiments, the affective valence of conflict stimuli was manipulated by fluency of processing (stimulus contrast). Experiment 1 used a flanker interference task, Experiment 2 a color-word Stroop task. In both experiments, conflict adaptation effects were only present in fluent, but absent in disfluent trials. Results thus speak against the simple idea that any aversive stimulus feature is suited to promote specific conflict adjustments. Two alternative but not mutually exclusive accounts, namely resource competition and adaptation-by-motivation, will be discussed. PMID:25767453

  18. Correlation of lattice defects and thermal processing in the crystallization of titania nanotube arrays

    NASA Astrophysics Data System (ADS)

    Hosseinpour, Pegah M.; Yung, Daniel; Panaitescu, Eugen; Heiman, Don; Menon, Latika; Budil, David; Lewis, Laura H.

    2014-12-01

    Titania nanotubes have the potential to be employed in a wide range of energy-related applications such as solar energy-harvesting devices and hydrogen production. As the functionality of titania nanostructures is critically affected by their morphology and crystallinity, it is necessary to understand and control these factors in order to engineer useful materials for green applications. In this study, electrochemically-synthesized titania nanotube arrays were thermally processed in inert and reducing environments to isolate the role of post-synthesis processing conditions on the crystallization behavior, electronic structure and morphology development in titania nanotubes, correlated with the nanotube functionality. Structural and calorimetric studies revealed that as-synthesized amorphous nanotubes crystallize to form the anatase structure in a three-stage process that is facilitated by the creation of structural defects. It is concluded that processing in a reducing gas atmosphere versus in an inert environment provides a larger unit cell volume and a higher concentration of Ti3+ associated with oxygen vacancies, thereby reducing the activation energy of crystallization. Further, post-synthesis annealing in either reducing or inert atmospheres produces pronounced morphological changes, confirming that the nanotube arrays thermally transform into a porous morphology consisting of a fragmented tubular architecture surrounded by a network of connected nanoparticles. This study links explicit data concerning morphology, crystallization and defects, and shows that the annealing gas environment determines the details of the crystal structure, the electronic structure and the morphology of titania nanotubes. These factors, in turn, impact the charge transport and consequently the functionality of these nanotubes as photocatalysts.

  19. [Peculiarities of adaptive hemodynamic processes during high-frequency jet lung ventilation].

    PubMed

    Zislin, B D; Astakhov, A A; Pankov, N E; Kontorovich, M B

    2009-01-01

    This study concerns poorly known features of adaptive hemodynamic reactions of the heart pump function during traditional and high-frequency jet lung ventilation. Spectral analysis of slow-wave oscillations of stroke volume and left ventricular diastolic filling wave in 36 patients with craniocerebral injury and acute cerebral insufficiency showed that beneficial adaptive reactions were realized through a rise in the general spectrum power and entropy. High-frequency jet lung ventilation ensured better effect on the adaptive processes than the traditional technique. PMID:19642544

  20. A New Blind Adaptive Array Antenna Based on CMA Criteria for M-Ary/SS Signals Suitable for Software Defined Radio Architecture

    NASA Astrophysics Data System (ADS)

    Kozuma, Miho; Sasaki, Atsushi; Kamiya, Yukihiro; Fujii, Takeo; Umebayashi, Kenta; Suzuki, Yasuo

    M-ary/SS is a version of Direct Sequence/Spread Spectrum (DS/SS) aiming to improve spectral efficiency by employing orthogonal codes. However, due to the auto-correlation property of the orthogonal codes, it is impossible to detect the symbol timing by observing correlator outputs. Therefore, conventionally, a preamble has been inserted in M-ary/SS signals. In this paper, we propose a new blind adaptive array antenna for M-ary/SS systems that combines signals over the space axis without any preambles. It is surely an innovative approach for M-ary/SS. The performance is investigated through computer simulations.
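
    For reference, the generic constant modulus algorithm (CMA) weight update that such blind beamformers build on is sketched below; the proposed M-ary/SS-specific criterion and timing recovery differ in detail, and the array size, step size and simulated data here are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_elem, n_snap, mu = 4, 2000, 1e-3
    steering = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(np.deg2rad(20)))   # desired signal from 20 deg
    s = np.sign(rng.standard_normal(n_snap)) + 0j                                # unit-modulus symbols
    X = np.outer(steering, s) + 0.1 * (rng.standard_normal((n_elem, n_snap))
                                       + 1j * rng.standard_normal((n_elem, n_snap)))

    w = np.zeros(n_elem, dtype=complex)
    w[0] = 1.0                                   # initialize on the first element
    for k in range(n_snap):
        x = X[:, k]
        y = np.vdot(w, x)                        # array output w^H x
        e = y * (np.abs(y) ** 2 - 1.0)           # CMA(2,2) error term
        w = w - mu * np.conj(e) * x              # stochastic-gradient update toward constant modulus
    ```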

  1. Improving GPR Surveys Productivity by Array Technology and Fully Automated Processing

    NASA Astrophysics Data System (ADS)

    Morello, Marco; Ercoli, Emanuele; Mazzucchelli, Paolo; Cottino, Edoardo

    2016-04-01

    The realization of network infrastructures with lower environmental impact and the tendency to use digging technologies less invasive in terms of time and space of road occupation and restoration play a key role in the development of communication networks. However, pre-existing buried utilities must be detected and located in the subsurface, to exploit the high productivity of modern digging apparatus. According to SUE quality level B+, both position and depth of subsurface utilities must be accurately estimated, demanding 3D GPR surveys. In fact, the advantages of 3D GPR acquisitions (obtained either by multiple 2D recordings or by an antenna array) versus 2D acquisitions are well-known. Nonetheless, the amount of acquired data for such 3D acquisitions does not usually allow processing and interpretation to be completed directly in the field and in real time, thus limiting the overall efficiency of the GPR acquisition. As an example, the "low impact mini-trench" technique (addressed in ITU - International Telecommunication Union - L.83 recommendation) requires that non-destructive mapping of buried services enhances its productivity to match the improvements of new digging equipment. Nowadays multi-antenna and multi-pass GPR acquisitions demand new processing techniques that can obtain high quality subsurface images, taking full advantage of 3D data: the development of a fully automated and real-time 3D GPR processing system plays a key role in overall optical network deployment profitability. Furthermore, currently available computing power suggests the feasibility of processing schemes that incorporate better focusing algorithms. A novel processing scheme, whose goal is the automated processing and detection of buried targets that can be applied in real-time to 3D GPR array systems, has been developed and fruitfully tested with two different GPR arrays (16 antennas, 900 MHz central frequency, and 34 antennas, 600 MHz central frequency). The proposed processing

  2. [Adaptability of sweet corn ears to a frozen process].

    PubMed

    Ramírez Matheus, Alejandra O; Martínez, Norelkys Maribel; de Bertorelli, Ligia O; De Venanzi, Frank

    2004-12-01

    The effects of freezing on the quality of three sweet corn hybrids (2038, 2010, 2004) and the control (Bonanza) were evaluated. Biometric characteristics such as ear size, ear diameter, row number and kernel depth were measured, as well as chemical and physical measurements in the fresh and frozen states. The corn ears were frozen at -95 degrees C for 7 minutes. The yield and stability of the frozen ears were evaluated at 45 and 90 days of frozen storage (-18 degrees C). The average commercial yield as frozen corn ear for all the hybrids was 54.2%. The industry has a similar value range of 48% to 54%. The average ear size was 21.57 cm, row number 15, ear diameter 45.54 mm, and kernel depth 8.57 mm. None of these measurements differed from commercial values found for the industry. All corn samples evaluated showed good stability despite the frozen processing and storage. Hybrid 2038 ranked highest in quality. PMID:15969270

  3. Interacting Adaptive Processes with Different Timescales Underlie Short-Term Motor Learning

    PubMed Central

    Ghazizadeh, Ali; Shadmehr, Reza

    2006-01-01

    Multiple processes may contribute to motor skill acquisition, but it is thought that many of these processes require sleep or the passage of long periods of time ranging from several hours to many days or weeks. Here we demonstrate that within a timescale of minutes, two distinct fast-acting processes drive motor adaptation. One process responds weakly to error but retains information well, whereas the other responds strongly but has poor retention. This two-state learning system makes the surprising prediction of spontaneous recovery (or adaptation rebound) if error feedback is clamped at zero following an adaptation-extinction training episode. We used a novel paradigm to experimentally confirm this prediction in human motor learning of reaching, and we show that the interaction between the learning processes in this simple two-state system provides a unifying explanation for several different, apparently unrelated, phenomena in motor adaptation including savings, anterograde interference, spontaneous recovery, and rapid unlearning. Our results suggest that motor adaptation depends on at least two distinct neural systems that have different sensitivity to error and retain information at different rates. PMID:16700627
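
    The two-state system and the error-clamp prediction described above can be sketched in a few lines; the retention and learning-rate values below are typical illustrative choices, not the parameters fitted in the paper, and the trial schedule is arbitrary.

    ```python
    # Illustrative two-state model of an adaptation / extinction / error-clamp protocol.
    # The slow process learns weakly but retains well; the fast process learns strongly but forgets quickly.
    A_slow, B_slow = 0.996, 0.02
    A_fast, B_fast = 0.60, 0.20

    x_slow = x_fast = 0.0
    net = []
    schedule = [(+1.0, 200), (-1.0, 20), (None, 60)]   # adaptation, extinction, error-clamp trials
    for perturbation, n in schedule:
        for _ in range(n):
            output = x_slow + x_fast
            error = 0.0 if perturbation is None else perturbation - output   # the clamp forces zero error
            x_slow = A_slow * x_slow + B_slow * error
            x_fast = A_fast * x_fast + B_fast * error
            net.append(output)

    # During the clamp the fast state decays while the still-adapted slow state re-emerges,
    # so the net output rebounds toward the first perturbation (spontaneous recovery).
    print(round(net[219], 2), round(net[240], 2))   # end of extinction vs. during the clamp
    ```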

  4. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
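
    One simple realisation of the preprocessing stage described above is to subtract, channel by channel, a running (leaky-integrator) estimate of the recent mean level from the dB spectrogram and half-wave rectify the result, with a different time constant per frequency channel. The time constants, frame rate and spectrogram below are placeholders, not the fitted values in the paper.

    ```python
    import numpy as np

    def ic_adaptation(spec_db, dt=0.005, tau=None):
        """spec_db: (n_freq, n_time) spectrogram in dB; tau: per-channel time constants in seconds."""
        n_freq, n_time = spec_db.shape
        if tau is None:
            tau = np.linspace(0.4, 0.1, n_freq)           # assumed frequency-dependent time constants
        alpha = dt / tau                                   # leaky-integrator update per frame
        mean_level = spec_db[:, 0].copy()
        out = np.zeros_like(spec_db)
        for t in range(n_time):
            mean_level += alpha * (spec_db[:, t] - mean_level)        # low-pass estimate of the mean level
            out[:, t] = np.maximum(spec_db[:, t] - mean_level, 0.0)   # high-pass + half-wave rectification
        return out                                                     # passed on to a standard LN stage

    spec = np.random.randn(32, 400) * 5 + 60               # fake 32-channel dB spectrogram
    adapted = ic_adaptation(spec)
    ```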

  5. A laser-assisted process to produce patterned growth of vertically aligned nanowire arrays for monolithic microwave integrated devices

    NASA Astrophysics Data System (ADS)

    Van Kerckhoven, Vivien; Piraux, Luc; Huynen, Isabelle

    2016-06-01

    An experimental process for the fabrication of microwave devices made of nanowire arrays embedded in a dielectric template is presented. A pulse laser process is used to produce a patterned surface mask on alumina templates, defining precisely the wire growing areas during electroplating. This technique makes it possible to finely position multiple nanowire arrays in the template, as well as produce large areas and complex structures, combining transmission line sections with various nanowire heights. The efficiency of this process is demonstrated through the realisation of a microstrip electromagnetic band-gap filter and a substrate-integrated waveguide.

  6. A laser-assisted process to produce patterned growth of vertically aligned nanowire arrays for monolithic microwave integrated devices.

    PubMed

    Kerckhoven, Vivien Van; Piraux, Luc; Huynen, Isabelle

    2016-06-10

    An experimental process for the fabrication of microwave devices made of nanowire arrays embedded in a dielectric template is presented. A pulse laser process is used to produce a patterned surface mask on alumina templates, defining precisely the wire growing areas during electroplating. This technique makes it possible to finely position multiple nanowire arrays in the template, as well as produce large areas and complex structures, combining transmission line sections with various nanowire heights. The efficiency of this process is demonstrated through the realisation of a microstrip electromagnetic band-gap filter and a substrate-integrated waveguide. PMID:27138863

  7. Real-time atmospheric imaging and processing with hybrid adaptive optics and hardware accelerated lucky-region fusion (LRF) algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jony Jiang; Carhart, Gary W.; Beresnev, Leonid A.; Aubailly, Mathieu; Jackson, Christopher R.; Ejzak, Garrett; Kiamilev, Fouad E.

    2014-09-01

    Atmospheric turbulence can significantly deteriorate the performance of long-range conventional imaging systems and create difficulties for target identification and recognition. Our in-house developed adaptive optics (AO) system, which contains high-performance deformable mirrors (DMs) and the fast stochastic parallel gradient descent (SPGD) control mechanism, allows effective compensation of such turbulence-induced wavefront aberrations and results in a significant improvement in image quality. In addition, we developed an advanced digital synthetic imaging and processing technique, "lucky-region" fusion (LRF), to mitigate the image degradation over a large field-of-view (FOV). The LRF algorithm extracts sharp regions from each image obtained from a series of short exposure frames and fuses them into a final improved image. We further implemented this algorithm on a VIRTEX-7 field programmable gate array (FPGA) and achieved real-time video processing. Experiments were performed by combining both AO and the hardware-implemented LRF processing technique over a near-horizontal 2.3 km atmospheric propagation path. Our approach can also generate a universal real-time imaging and processing system with a general camera link input, a user controller interface, and a DVI video output.
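
    The core of lucky-region fusion can be sketched as: rate the local sharpness of every short-exposure frame and build the output, pixel by pixel, from the sharpest frames. The smoothed-squared-Laplacian metric and the weighted blend below are assumptions for illustration; the paper's FPGA kernel and exact metric may differ, and the frame stack is synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def lucky_region_fusion(frames, sigma=5.0):
        """frames: (n_frames, H, W) short-exposure stack -> fused (H, W) image."""
        frames = np.asarray(frames, dtype=float)
        sharp = np.stack([gaussian_filter(laplace(f) ** 2, sigma) for f in frames])   # local sharpness maps
        weights = sharp / (sharp.sum(axis=0, keepdims=True) + 1e-12)                  # per-pixel weights
        return (weights * frames).sum(axis=0)                                          # sharpness-weighted blend

    stack = np.random.rand(16, 128, 128)      # stand-in for 16 short-exposure frames
    fused = lucky_region_fusion(stack)
    ```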

  8. A Field-Programmable Analog Array Development Platform for Vestibular Prosthesis Signal Processing

    PubMed Central

    Töreyin, Hakan; Bhatti, Pamela

    2015-01-01

    We report on a vestibular prosthesis signal processor realized using an experimental field programmable analog array (FPAA). Completing signal processing functions in the analog domain, the processor is designed to help replace a malfunctioning inner ear sensory organ, a semicircular canal. Relying on angular head motion detected by an inertial sensor, the signal processor maps angular velocity into meaningful control signals to drive a current stimulator. To demonstrate biphasic pulse control, a 1 kΩ resistive load was placed across an H-bridge circuit. When connected to a 2.4 V supply, a biphasic current of 100 μA was maintained at stimulation frequencies from 50–350 Hz, pulsewidths from 25–400 μsec, and interphase gaps ranging from 25–250 μsec. PMID:23853331

  9. Statistical Analysis of the Performance of MDL Enumeration for Multiple-Missed Detection in Array Processing

    PubMed Central

    Du, Fei; Li, Yibo; Jin, Shijiu

    2015-01-01

    An accurate performance analysis on the MDL criterion for source enumeration in array processing is presented in this paper. The enumeration results of MDL can be predicted precisely by the proposed procedure via the statistical analysis of the sample eigenvalues, whose distributive properties are investigated with the consideration of their interactions. A novel approach is also developed for the performance evaluation when the source number is underestimated by a number greater than one, which is denoted as “multiple-missed detection”, and the probability of a specific underestimated source number can be estimated by ratio distribution analysis. Simulation results are included to demonstrate the superiority of the presented method over available results and confirm the ability of the proposed approach to perform multiple-missed detection analysis. PMID:26295232
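
    For context, the standard MDL criterion (Wax and Kailath) whose enumeration statistics are analysed above is computed from the eigenvalues of the sample covariance matrix as shown below; the array size, source directions and SNR in the example are arbitrary.

    ```python
    import numpy as np

    def mdl_enumerate(eigvals, n_snapshots):
        """eigvals: descending eigenvalues of the p x p sample covariance; returns (k_hat, MDL values)."""
        p = len(eigvals)
        mdl = np.empty(p)
        for k in range(p):
            tail = eigvals[k:]
            geo = np.exp(np.mean(np.log(tail)))          # geometric mean of the presumed noise eigenvalues
            ari = np.mean(tail)                          # arithmetic mean
            mdl[k] = -n_snapshots * (p - k) * np.log(geo / ari) + 0.5 * k * (2 * p - k) * np.log(n_snapshots)
        return int(np.argmin(mdl)), mdl

    # Example: two sources at 10 and 25 degrees on an 8-element half-wavelength ULA, 200 snapshots.
    p, N, rng = 8, 200, np.random.default_rng(0)
    a = lambda deg: np.exp(1j * np.pi * np.arange(p) * np.sin(np.deg2rad(deg)))
    S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
    X = np.stack([a(10), a(25)], axis=1) @ S + 0.3 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
    R = X @ X.conj().T / N
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]
    print(mdl_enumerate(eig, N)[0])    # expected to return 2 at this SNR
    ```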

  10. An Eye-adapted Beamforming for Axial B-scans Free from Crystalline Lens Aberration: In vitro and ex vivo Results with a 20 MHz Linear Array

    NASA Astrophysics Data System (ADS)

    Matéo, Tony; Mofid, Yassine; Grégoire, Jean-Marc; Ossant, Frédéric

    In ophthalmic ultrasonography, axial B-scans are seriously deteriorated owing to the presence of the crystalline lens. This strongly aberrating medium affects both spatial and contrast resolution and causes important distortions. To deal with this issue, an adapted beamforming (BF) has been developed and tested with a 20 MHz linear array operating with a custom US research scanner. The adapted BF computes focusing delays that compensate for crystalline lens phase aberration, including refraction effects. This BF was tested in vitro by imaging a wire phantom through an eye phantom consisting of a synthetic gelatin lens, shaped according to the unaccommodated state of an adult human crystalline lens, anatomically set up in an appropriate liquid (turpentine) to approach the in vivo velocity ratio. Both image quality and fidelity from the adapted BF were assessed and compared with conventional delay-and-sum BF over the aberrating medium. Results showed a 2-fold improvement of the lateral resolution, greater sensitivity and a 90% reduction of the spatial error (from 758 μm to 76 μm) with adapted BF compared to conventional BF. Finally, promising first ex vivo axial B-scans of a human eye are presented.

  11. Alternative Post-Processing on a CMOS Chip to Fabricate a Planar Microelectrode Array

    PubMed Central

    López-Huerta, Francisco; Herrera-May, Agustín L.; Estrada-López, Johan J.; Zuñiga-Islas, Carlos; Cervantes-Sanchez, Blanca; Soto, Enrique; Soto-Cruz, Blanca S.

    2011-01-01

    We present an alternative post-processing on a CMOS chip to release a planar microelectrode array (pMEA) integrated with its signal readout circuit, which can be used for monitoring the neuronal activity of vestibular ganglion neurons in newborn Wistar strain rats. This chip is fabricated through a 0.6 μm CMOS standard process and has a pMEA of 12 electrodes arranged in a 4 × 3 matrix. The alternative CMOS post-process includes the development of masks to protect the readout circuit and the power supply pads. A wet etching process eliminates the aluminum located on the surface of the p+-type silicon. This silicon is used as a transducer for recording the neuronal activity and as an interface between the readout circuit and the neurons. The readout circuit is composed of an amplifier and a tunable bandpass filter, which is placed on a 0.015 mm2 silicon area. The tunable bandpass filter has a bandwidth of 98 kHz and a common mode rejection ratio (CMRR) of 87 dB. These characteristics of the readout circuit are appropriate for neuronal recording applications. PMID:22346681

  12. Remote online process measurements by a fiber optic diode array spectrometer

    SciTech Connect

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1986-01-01

    The development of remote online monitors for radioactive process streams is an active research area at the Savannah River Laboratory (SRL). A remote offline spectrophotometric measurement system has been developed and used at the Savannah River Plant (SRP) for the past year to determine the plutonium concentration of process solution samples. The system consists of a commercial diode array spectrophotometer modified with fiber optic cables that allow the instrument to be located remotely from the measurement cell. Recently, a fiber optic multiplexer has been developed for this instrument, which allows online monitoring of five locations sequentially. The multiplexer uses a motorized micrometer to drive one of five sets of optical fibers into the optical path of the instrument. A sixth optical fiber is used as an external reference and eliminates the need to flush out process lines to re-reference the spectrophotometer. The fiber optic multiplexer has been installed in a process prototype facility to monitor uranium loading and breakthrough of ion exchange columns. The design of the fiber optic multiplexer is discussed and data from the prototype facility are presented to demonstrate the capabilities of the measurement system.

  13. The process of adapting a universal dating abuse prevention program to adolescents exposed to domestic violence.

    PubMed

    Foshee, Vangie A; Dixon, Kimberly S; Ennett, Susan T; Moracco, Kathryn E; Bowling, J Michael; Chang, Ling-Yin; Moss, Jennifer L

    2015-07-01

    Adolescents exposed to domestic violence are at increased risk of dating abuse, yet no evaluated dating abuse prevention programs have been designed specifically for this high-risk population. This article describes the process of adapting Families for Safe Dates (FSD), an evidence-based universal dating abuse prevention program, to this high-risk population, including conducting 12 focus groups and 107 interviews with the target audience. FSD includes six booklets of dating abuse prevention information and activities for parents and adolescents to do together at home. We adapted FSD for mothers who were victims of domestic violence, but who no longer lived with the abuser, to do with their adolescents who had been exposed to the violence. Through the adaptation process, we learned that families liked the program structure and valued being offered the program, and that some of our initial assumptions about this population were incorrect. We identified practices and beliefs of mother victims and attributes of these adolescents that might increase their risk of dating abuse that we had not previously considered. In addition, we learned that some of the content of the original program generated negative family interactions for some families. The findings demonstrate the utility of using a careful process to adapt evidence-based interventions (EBIs) to cultural sub-groups, particularly the importance of obtaining feedback on the program from the target audience. Others can follow this process to adapt EBIs to groups other than the ones for which the original EBI was designed. PMID:25287405

  14. Prism adaptation reverses the local processing bias in patients with right temporo-parietal junction lesions

    PubMed Central

    Rafal, Robert D.; List, Alexandra

    2009-01-01

    Lesions to the right temporo-parietal cortex commonly result in hemispatial neglect. Lesions to the same area are also associated with hyperattention to local details of a scene and difficulty perceiving the global structure. This local processing bias is an important factor contributing to neglect and may contribute to the higher prevalence of the disorder following right compared with left hemisphere strokes. In recent years, visuomotor adaptation to rightward-shifting prisms has been introduced as a promising treatment for hemispatial neglect. Explanations for these improvements have generally described a leftward realignment of attention; however, the present investigation provides evidence that prism adaptation reduces the local processing bias. Five patients with right temporal-parietal junction lesions were asked to identify the global or local levels of hierarchical figures before and after visuomotor adaptation to rightward-shifting prisms. Prior to prism adaptation the patients had difficulty ignoring the local elements when identifying the global component. Following prism adaptation, however, this pattern was reversed, with greater global interference during local level identification. The results suggest that prism adaptation may improve non-spatially lateralized deficits that contribute to the neglect syndrome. PMID:19416951

  15. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  16. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter
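
    As a rough illustration of spike-event-based decoding, the sketch below implements one prediction/update step of a generic point-process filter with log-linear tuning curves; it is not the authors' OFC-based CLDA architecture, and all model parameters are assumed.

      import numpy as np

      def point_process_update(x, P, spikes, A, W, alpha, beta, dt):
          """One prediction + update step of a simple point-process filter.

          x, P   : state mean/covariance (e.g., cursor velocity)
          spikes : spike counts per neuron in this small bin (length C)
          A, W   : linear state dynamics and process-noise covariance
          alpha, beta : log-linear tuning parameters, lambda_c = exp(alpha_c + beta_c @ x)
          dt     : bin width in seconds
          """
          # predict
          x = A @ x
          P = A @ P @ A.T + W
          # update: with log-linear rates the Hessian of log-lambda is zero, so the
          # information added by each neuron is beta_c beta_c^T * lambda_c * dt
          lam = np.exp(alpha + beta @ x)                      # (C,)
          info = (beta.T * (lam * dt)) @ beta                  # (n, n)
          P = np.linalg.inv(np.linalg.inv(P) + info)
          x = x + P @ (beta.T @ (spikes - lam * dt))
          return x, P

      # Illustrative use: 2-D state, 20 neurons firing near 20 Hz, 5 ms bins (all assumed).
      rng = np.random.default_rng(2)
      n, C, dt = 2, 20, 0.005
      A, W = 0.98 * np.eye(n), 1e-3 * np.eye(n)
      alpha, beta = np.log(20.0) * np.ones(C), rng.standard_normal((C, n))
      x, P = np.zeros(n), np.eye(n)
      spikes = rng.poisson(20.0 * dt, size=C)
      x, P = point_process_update(x, P, spikes, A, W, alpha, beta, dt)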

  17. Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.

    PubMed

    Candy, J V

    2015-09-01

    The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements. PMID:26428765
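
    The sketch below shows the generic bootstrap particle-filter machinery (predict, weight, resample) underlying such sequential Bayesian processors; the scalar random-walk state and Gaussian likelihood are stand-ins, not the paper's modal/ocean model.

      import numpy as np

      def bootstrap_pf_step(particles, weights, y, h, meas_std, proc_std, rng):
          """One predict/update/resample step of a bootstrap particle filter.

          particles : (N, d) state samples      weights : (N,) normalized weights
          y         : current measurement        h       : measurement function h(x)
          """
          N = particles.shape[0]
          # predict: random-walk state evolution (stand-in for modal/environment dynamics)
          particles = particles + proc_std * rng.standard_normal(particles.shape)
          # update: Gaussian likelihood of the measurement
          resid = y - np.apply_along_axis(h, 1, particles)
          weights = weights * np.exp(-0.5 * (resid / meas_std) ** 2).ravel()
          weights = weights / np.sum(weights)
          # resample when the effective sample size collapses
          if 1.0 / np.sum(weights ** 2) < N / 2:
              idx = rng.choice(N, size=N, p=weights)
              particles, weights = particles[idx], np.full(N, 1.0 / N)
          return particles, weights

      # Illustrative use: track a scalar parameter (all noise levels assumed).
      rng = np.random.default_rng(3)
      particles = rng.normal(0.0, 1.0, size=(500, 1))
      weights = np.full(500, 1.0 / 500)
      for y in [0.9, 1.1, 1.0, 1.2]:
          particles, weights = bootstrap_pf_step(
              particles, weights, y, h=lambda x: x[0], meas_std=0.2, proc_std=0.05, rng=rng)
      estimate = np.sum(weights * particles[:, 0])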

  18. Investigation on fabrication process of dissolving microneedle arrays to improve effective needle drug distribution.

    PubMed

    Wang, Qingqing; Yao, Gangtao; Dong, Pin; Gong, Zihua; Li, Ge; Zhang, Kejian; Wu, Chuanbin

    2015-01-23

    The dissolving microneedle array (DMNA) offers a novel potential approach for transdermal delivery of biological macromolecular drugs and vaccines, because it can be as efficient as hypodermic injection and as safe and patient compliant as conventional transdermal delivery. However, effective needle drug distribution is the main challenge for clinical application of DMNA. This study focused on the mechanism and control of drug diffusion inside DMNA during the fabrication process in order to improve the drug delivery efficiency. The needle drug loading proportion (NDP) in DMNAs was measured to determine the influences of drug concentration gradient, needle drying step, excipients, and solvent of the base solution on drug diffusion and distribution. The results showed that the evaporation of base solvent was the key factor determining NDP. Slow evaporation of water from the base led to gradual increase of viscosity, and an approximate drug concentration equilibrium was built between the needle and base portions, resulting in NDP as low as about 6%. When highly volatile ethanol was used as the base solvent, the viscosity in the base rose quickly, resulting in NDP more than 90%. Ethanol as base solvent did not impact the insertion capability of DMNAs, but greatly increased the in vitro drug release and transdermal delivery from DMNAs. Furthermore, the drug diffusion process during DMNA fabrication was thoroughly investigated for the first time, and the outcomes can be applied to most two-step molding processes and optimization of the DMNA fabrication. PMID:25446513

  19. Optical characteristics of a PbS detector array spectrograph for online process monitoring

    NASA Astrophysics Data System (ADS)

    Kansakoski, Markku; Malinen, Jouko

    1999-02-01

    The use of optical spectroscopic methods for quantitative composition measurements in the field of process control is increasing rapidly. Various optical configurations are already in use or are being developed, with the aim of accomplishing the wavelength selectivity needed in spectroscopic measurement. The development of compact and rugged spectrometers for process monitoring applications has been one of the major tasks for the optical measurements research team at VTT Electronics. A new PbS detector array-based spectrometer unit has now been developed for use in process analyzers, providing 24 wavelengths ranging from 1350 to 2400 nm. Extensive testing has been carried out to examine the performance of the developed units, covering performance in normal operating conditions, characteristics vs. temperature, unit-to-unit variation, and preliminary environmental testing. The main performance characteristics of the developed spectrometer unit include stable output, band center wavelength (CW) unit-to-unit tracking better than ±1 nm, a band CW drift vs. operating temperature of less than 1.8 nm over the temperature range +10 °C to +50 °C, and optical stray light below 0.1 percent. The combination of technical performance, small size, rugged construction, and potential for moderate manufacturing cost ($4000–5000 in quantities) makes the developed unit a promising alternative for developing competitive high-performance analyzers for various NIR applications.

  20. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    NASA Astrophysics Data System (ADS)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells (NWSCs) is expected to reach as high as 44% with 1% N in the core and 9% N in the shell. By developing such high-efficiency NWSCs on silicon substrates, the cost of solar photovoltaics could be reduced to $61/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. Therefore, a suitable NWSC structure and fabrication process need to be developed to achieve this promising NWSC. This thesis is devoted to the development of the fabrication process of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of gallium nitride phosphide nanowire solar cells. The design of the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer, and metal contacts is discussed and fabricated. The properties of these NWSCs are also characterized to guide the future development of gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective area growth of nanowires to improve the structure of the core-shell NWSC. The fabrication process of the nano-hole templates and the results are presented. To obtain consistent nano-hole template features, the Taguchi method is used to optimize the fabrication process of the nano-hole templates.

  1. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    SciTech Connect

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-20

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
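
    For reference, a minimal single-compartment Gillespie SSA is sketched below; the paper's hybrid method additionally partitions the domain and switches adaptively between macroscopic diffusion, tau-leaping, and SSA per subvolume, which is omitted here.

      import numpy as np

      def gillespie_ssa(x0, stoich, propensity, t_end, rng):
          """Exact SSA for a well-mixed compartment.

          x0        : initial copy numbers (length n_species)
          stoich    : (n_reactions, n_species) state-change vectors
          propensity: function x -> array of reaction propensities
          """
          t, x = 0.0, np.array(x0, dtype=float)
          times, states = [t], [x.copy()]
          while t < t_end:
              a = propensity(x)
              a0 = a.sum()
              if a0 <= 0:
                  break
              t += rng.exponential(1.0 / a0)            # time to next reaction
              r = rng.choice(len(a), p=a / a0)           # which reaction fires
              x += stoich[r]
              times.append(t); states.append(x.copy())
          return np.array(times), np.array(states)

      # Illustrative birth-death process: 0 -> X (rate 10), X -> 0 (rate 0.1 per molecule).
      stoich = np.array([[1], [-1]])
      prop = lambda x: np.array([10.0, 0.1 * x[0]])
      times, states = gillespie_ssa([0], stoich, prop, t_end=50.0, rng=np.random.default_rng(4))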

  2. Knowledge-Aided Multichannel Adaptive SAR/GMTI Processing: Algorithm and Experimental Results

    NASA Astrophysics Data System (ADS)

    Wu, Di; Zhu, Daiyin; Zhu, Zhaoda

    2010-12-01

    The multichannel synthetic aperture radar ground moving target indication (SAR/GMTI) technique is a simplified implementation of space-time adaptive processing (STAP), which has been proved to be feasible over the past decades. However, its detection performance is degraded in heterogeneous environments due to rapidly varying clutter characteristics. Knowledge-aided (KA) STAP provides an effective way to deal with the nonstationarity problem in real-world clutter environments. Based on KA STAP methods, this paper proposes a KA algorithm for adaptive SAR/GMTI processing in heterogeneous environments. It reduces the required sample support through its fast convergence properties and is robust to non-stationary clutter distributions relative to the traditional adaptive SAR/GMTI scheme. Experimental clutter suppression results are employed to verify the virtue of this algorithm.
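
    One common way to inject prior knowledge is colored loading, i.e., blending an a-priori covariance with the sample covariance before computing adaptive weights. The sketch below illustrates that generic idea (with an assumed blending factor and identity prior), not the paper's specific SAR/GMTI algorithm.

      import numpy as np

      def ka_adaptive_weights(snapshots, R_prior, steering, alpha=0.5):
          """Knowledge-aided adaptive weights via colored loading.

          snapshots : (n_channels, n_samples) secondary (training) data
          R_prior   : (n_channels, n_channels) a-priori clutter covariance
          steering  : (n_channels,) target space(-time) steering vector
          alpha     : blending factor between prior and sample covariance (assumed value)
          """
          n = snapshots.shape[1]
          R_sample = snapshots @ snapshots.conj().T / n
          R = alpha * R_prior + (1.0 - alpha) * R_sample     # knowledge-aided estimate
          w = np.linalg.solve(R, steering)
          return w / (steering.conj() @ w)                   # MVDR-style normalization

      # Illustrative use with random data (8 channels, 16 training snapshots, all assumed).
      rng = np.random.default_rng(5)
      X = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
      w = ka_adaptive_weights(X, R_prior=np.eye(8), steering=np.ones(8, dtype=complex))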

  3. A mixed signal ECG processing platform with an adaptive sampling ADC for portable monitoring applications.

    PubMed

    Kim, Hyejung; Van Hoof, Chris; Yazicioglu, Refet Firat

    2011-01-01

    This paper describes a mixed-signal ECG processing platform with a 12-bit ADC architecture that can adapt its sampling rate according to the input signal's rate of change. This enables the sampling of ECG signals with a significantly reduced data rate without loss of information. The presented adaptive sampling scheme reduces the ADC power consumption, enables the processing of ECG signals with lower power consumption, and reduces the power consumption of the radio while streaming the ECG signals. The test results show that running a CWT-based R peak detection algorithm on the adaptively sampled ECG signals consumes only 45.6 μW and leads to 36% less overall system power consumption. PMID:22254775
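
    A software analogue of activity-dependent sampling is sketched below: a sample is kept only when the signal has moved by more than a threshold since the last kept sample, so flat baseline segments are decimated while fast QRS edges are retained. This mimics only the data-rate-reduction idea; it is not the chip's mixed-signal ADC, and the threshold and test waveform are arbitrary.

      import numpy as np

      def adaptive_sample(signal, fs, threshold):
          """Keep a sample only when it moves more than `threshold` from the last kept one.

          Returns the kept sample times and values; slowly varying segments (e.g., the
          isoelectric ECG baseline) are decimated heavily, fast edges are kept.
          """
          kept_t, kept_v = [0.0], [signal[0]]
          for i in range(1, len(signal)):
              if abs(signal[i] - kept_v[-1]) >= threshold:
                  kept_t.append(i / fs)
                  kept_v.append(signal[i])
          return np.array(kept_t), np.array(kept_v)

      # Illustrative use: a synthetic spiky waveform sampled at 1 kHz (all values assumed).
      fs = 1000.0
      t = np.arange(0, 2.0, 1.0 / fs)
      ecg_like = 0.05 * np.sin(2 * np.pi * 1.0 * t) + (np.abs((t % 1.0) - 0.5) < 0.01)
      kept_t, kept_v = adaptive_sample(ecg_like, fs, threshold=0.05)
      compression = 1.0 - len(kept_v) / len(ecg_like)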

  4. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state-of-health data are recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data are transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  5. Investigation of Proposed Process Sequence for the Array Automated Assembly Task, Phase 2. [low cost silicon solar array fabrication

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Bunyan, S.; Pepe, A.

    1979-01-01

    The technological readiness of the proposed process sequence was reviewed. Process steps evaluated include: (1) plasma etching to establish a standard surface; (2) forming junctions by diffusion from an N-type polymeric spray-on source; (3) forming a p+ back contact by firing a screen printed aluminum paste; (4) forming screen printed front contacts after cleaning the back aluminum and removing the diffusion oxide; (5) cleaning the junction by a laser scribe operation; (6) forming an antireflection coating by baking a polymeric spray-on film; (7) ultrasonically tin padding the cells; and (8) assembling cell strings into solar circuits using ethylene vinyl acetate as an encapsulant and laminating medium.

  6. Biologically inspired large scale chemical sensor arrays and embedded data processing

    NASA Astrophysics Data System (ADS)

    Marco, S.; Gutiérrez-Gálvez, A.; Lansner, A.; Martinez, D.; Rospars, J. P.; Beccherelli, R.; Perera, A.; Pearce, T.; Vershure, P.; Persaud, K.

    2013-05-01

    Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy and efficient combinatorial coding with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. The EU-funded project NEUROCHEM (Bio-ICT-FET-216916) has developed novel computing paradigms and biologically motivated artefacts for chemical sensing, taking inspiration from the biological olfactory pathway. To demonstrate this approach, a biomimetic demonstrator has been built featuring a large-scale sensor array (65K elements) in conducting polymer technology mimicking the olfactory receptor neuron layer, and abstracted biomimetic algorithms have been implemented in an embedded system that interfaces the chemical sensors. The embedded system integrates computational models of the main anatomic building blocks in the olfactory pathway: the olfactory bulb and olfactory cortex in vertebrates (alternatively, the antennal lobe and mushroom bodies in insects). For implementation in the embedded processor, an abstraction phase has been carried out in which their processing capabilities are captured by algorithmic solutions. Finally, the algorithmic models are tested with an odour robot with navigation capabilities in mixed chemical plumes.

  7. Advancements in fabrication process of microelectrode array for a retinal prosthesis using Liquid Crystal Polymer (LCP).

    PubMed

    Jeong, Joonsoo; Shin, Soowon; Lee, Geun Jae; Gwon, Tae Mok; Park, Jeong Hoan; Kim, Sung June

    2013-01-01

    Liquid Crystal Polymer (LCP) has been considered as an alternative biomaterial for implantable biomedical devices, primarily for its low moisture absorption rate compared with conventional polymers such as polyimide, parylene, and silicone elastomers. A novel retinal prosthetic device based on monolithic encapsulation in LCP is being developed in which the entire neural stimulation circuitry is integrated into a thin and eye-conformable structure. Micromachining techniques for fabrication of an LCP retinal electrode array have been previously reported. In this research, however, for use as part of the LCP-based retinal implant, we developed an advanced fabrication process for the LCP retinal electrode through new approaches such as electroplating and laser machining in order to achieve higher mechanical robustness, long-term reliability, and flexibility. Thickened metal tracks could contribute to higher mechanical strength as well as higher long-term reliability when combined with the laser-ablation process by allowing high-pressure lamination. The laser-thinning technique could improve the flexibility of the LCP electrode. PMID:24110931

  8. Access to Learning for Handicapped Children: A Handbook on the Instructional Adaptation Process. Field Test Version.

    ERIC Educational Resources Information Center

    Changar, Jerilynn; And Others

    The manual describes the results of a 36 month project to determine ways to modify existing curricula to meet the needs of special needs students in the mainstream. The handbook is designed in the main for administrators and facilitators as well as for teacher-adaptors. Each of eight steps in the adaptation process is broken down according to…

  9. Chinese Students and Scholars in the U.S.: An Intercultural Adaptation Process.

    ERIC Educational Resources Information Center

    Zhong, Mei

    An ethnographic study examined the culture of the Chinese students and scholars in America with a specific focus on their experiences in the cultural adaptation process. Subjects were three Chinese nationals (one female and two males) living in the area of a large midwestern university. Subjects were interviewed for about an hour each, with…

  10. Cognitive Process Development as Measured by an Adapted Version of Wechsler's Similarities Test

    ERIC Educational Resources Information Center

    Rozencwajg, Paulette

    2007-01-01

    This paper studies the development of taxonomic processing as measured by an adapted version of the Wechsler Similarities subtest, which distinguishes between categorization of concrete and abstract words. Two factors--age and concreteness--are also tested by a recall task. The results show an age-related increase in taxonomic categorization,…

  11. Factors associated with the process of adaptation among Pakistani adolescent females living in United States.

    PubMed

    Khuwaja, Salma A; Selwyn, Beatrice J; Mgbere, Osaro; Khuwaja, Alam; Kapadia, Asha; McCurdy, Sheryl; Hsu, Chiehwen E

    2013-04-01

    This study explored post-migration experiences of recently migrated Pakistani Muslim adolescent females residing in the United States. In-depth, semi-structured interviews were conducted with thirty Pakistani Muslim adolescent females between the ages of 15 and 18 years living with their families in Houston, Texas. Data obtained from the interviews were evaluated using discourse analysis to identify major recurring themes. Participants discussed factors associated with the process of adaptation to American culture. The results revealed that the main factors associated with the adaptation process included positive motivation for migration, family bonding, social support networks, inter-familial communication, aspiration of adolescents to learn about other cultures, availability of English-as-second-language programs, participation in community rebuilding activities, faith practices, English proficiency, peer pressure, and inter-generational conflicts. This study provided much-needed information on factors associated with the adaptation process of Pakistani Muslim adolescent females in the United States. The results have important implications for improving the adaptation process of this group and offer potential directions for intervention and counseling services. PMID:22940911

  12. A fuzzy model based adaptive PID controller design for nonlinear and uncertain processes.

    PubMed

    Savran, Aydogan; Kahraman, Gokalp

    2014-03-01

    We develop a novel adaptive tuning method that adjusts the gains of a classical proportional-integral-derivative (PID) controller for the control of nonlinear processes, a problem which is very difficult to overcome with classical PID controllers. By incorporating classical PID control, which is well known in industry, into the control of nonlinear processes, we introduce a method which can readily be used by industry. In this method, controller design does not require a first-principles model of the process, which is usually very difficult to obtain. Instead, it depends on a fuzzy process model which is constructed from the measured input-output data of the process. A soft limiter is used to impose industrial limits on the control input. The performance of the system is successfully tested on a bioreactor, a highly nonlinear process involving instabilities. Several tests showed the method's success in tracking, robustness to noise, and adaptation properties. We also compared our system's performance on a plant with altered parameters and measurement noise, and obtained less ringing and better tracking. To conclude, we present a novel adaptive control method, built upon the well-known PID architecture, that successfully controls highly nonlinear industrial processes, even under conditions such as strong parameter variations, noise, and instabilities. PMID:24140160
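
    The sketch below shows a discrete PID loop with a soft (tanh) output limiter of the kind mentioned above; the fuzzy process model and the online gain-adaptation law are not reproduced, so the gains and the toy plant are fixed placeholders.

      import numpy as np

      def soft_limit(u, u_max):
          """Smoothly saturate the control input to +/- u_max (soft limiter)."""
          return u_max * np.tanh(u / u_max)

      def pid_step(error, state, kp, ki, kd, dt, u_max):
          """One step of a discrete PID controller; `state` holds (integral, last_error).

          In the paper the gains would be retuned online from a fuzzy model of the
          process; here they are kept constant for illustration.
          """
          integral, last_error = state
          integral += error * dt
          derivative = (error - last_error) / dt
          u = soft_limit(kp * error + ki * integral + kd * derivative, u_max)
          return u, (integral, error)

      # Illustrative closed loop on a toy first-order nonlinear plant (all values assumed).
      dt, y, state = 0.01, 0.0, (0.0, 0.0)
      for _ in range(500):
          u, state = pid_step(1.0 - y, state, kp=2.0, ki=1.0, kd=0.05, dt=dt, u_max=5.0)
          y += dt * (-y + np.tanh(u))         # stand-in nonlinear process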

  13. Subspace array processing using spatial time-frequency distributions: applications for denoising structural echoes of elastic targets.

    PubMed

    Sabra, Karim G; Anderson, Shaun D

    2014-05-01

    Structural echoes of underwater elastic targets, used for detection and classification purposes, can be highly localized in the time-frequency domain and can be aspect-dependent. Hence, such structural echoes recorded along a distributed (synthetic) aperture, e.g., using a moving receiver platform, do not meet the stationarity and multiple-snapshot requirements of common subspace array processing methods used for denoising array data based on their estimated covariance matrix. To address this issue, this article introduces a subspace array processing method based on the space-time-frequency distribution (STFD) of single snapshots of non-stationary signals. This STFD is obtained by computing Cohen's class time-frequency distributions between all pairwise combinations of the recorded signals along an arbitrary aperture array. The STFD is interpreted as a generalized array covariance matrix which automatically accounts for the inherent coherence across the time-frequency plane of the received nonstationary echoes emanating from the same target. Hence, identifying the signal subspace from the eigenstructure of this STFD provides a means for denoising these non-stationary structural echoes by spreading the clutter and noise power in the time-frequency domain, as demonstrated here numerically and experimentally using the structural echoes of a thin steel spherical shell measured along a synthetic aperture. PMID:24815264
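
    A simplified sketch of the STFD idea is given below using the spectrogram (one member of Cohen's class): array STFT snapshots at high-energy time-frequency points are averaged into a generalized covariance matrix, whose dominant eigenvectors define the signal subspace used for denoising. The window length, energy threshold, and assumed single signal component are arbitrary choices, not the authors' settings.

      import numpy as np
      from scipy.signal import stft, istft

      def stfd_denoise(x, fs, n_signal=1, nperseg=128, energy_quantile=0.9):
          """Denoise multichannel nonstationary echoes via a spectrogram-based STFD.

          x : (n_sensors, n_samples) array data from a (synthetic) aperture
          """
          f, t, X = stft(x, fs=fs, nperseg=nperseg)          # X: (n_sensors, n_f, n_t)
          energy = np.sum(np.abs(X) ** 2, axis=0)
          mask = energy >= np.quantile(energy, energy_quantile)
          # STFD as a generalized covariance: average sensor outer products over
          # the selected high-energy time-frequency points
          snaps = X[:, mask]                                  # (n_sensors, n_points)
          D = snaps @ snaps.conj().T / snaps.shape[1]
          vals, vecs = np.linalg.eigh(D)
          Us = vecs[:, -n_signal:]                            # dominant (signal) subspace
          X_clean = (Us @ Us.conj().T) @ X.reshape(x.shape[0], -1)
          _, x_clean = istft(X_clean.reshape(X.shape), fs=fs, nperseg=nperseg)
          return x_clean

      # Illustrative use on random data (16 "aperture positions", all parameters assumed).
      rng = np.random.default_rng(6)
      x = rng.standard_normal((16, 4096))
      x_clean = stfd_denoise(x, fs=1.0)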

  14. Simpler Adaptive Optics using a Single Device for Processing and Control

    NASA Astrophysics Data System (ADS)

    Zovaro, A.; Bennet, F.; Rye, D.; D'Orgeville, C.; Rigaut, F.; Price, I.; Ritchie, I.; Smith, C.

    The management of low Earth orbit is becoming more urgent as satellite and debris densities climb and the risk of a Kessler syndrome grows. A key part of this management is to precisely measure the orbits of both active satellites and debris. The Research School of Astronomy and Astrophysics at the Australian National University has been developing an adaptive optics (AO) system to image and range orbiting objects. The AO system provides atmospheric correction for imaging and laser ranging, allowing for the detection of smaller angular targets and drastically increasing the number of detectable objects. AO systems are by nature very complex and high-cost systems, often costing millions of dollars and taking years to design. It is not unusual for AO systems to comprise multiple servers, digital signal processors (DSP), and field programmable gate arrays (FPGA), with dedicated tasks such as wavefront sensor data processing or wavefront reconstruction. While this multi-platform approach has been necessary in AO systems to date due to computation and latency requirements, this may no longer be the case for those with less demanding processing needs. In recent years, large strides have been made in FPGA and microcontroller technology, with today's devices having clock speeds in excess of 200 MHz whilst using a < 5 V power supply. AO systems using a single such device for all data processing and control may present a far simpler, cheaper, smaller, and more efficient solution than existing systems. A novel AO system design based around a single, low-cost controller is presented. The objective is to determine the performance which can be achieved in terms of bandwidth and correction order, with a focus on optimisation and parallelisation of AO algorithms such as wavefront measurement and reconstruction. The AO system consists of a Shack-Hartmann wavefront sensor and a deformable mirror to correct light from a 1.8 m telescope for the purpose of imaging orbiting satellites. The

  15. An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft

    PubMed Central

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high-frequency noise in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated by static tests, hovering flight, and autonomous landing flight tests. PMID:23201993
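
    The sketch below shows the innovation-based flavor of this idea in one dimension: a random-walk Kalman filter whose measurement-noise variance is re-estimated from a sliding window of innovations. It omits the wavelet pre-filter and the full EKF formulation, and all noise values are assumed.

      import numpy as np

      def adaptive_kf_altitude(z, q=0.01, r0=1.0, window=20):
          """1-D random-walk Kalman filter with adaptive measurement-noise variance.

          z : noisy altitude measurements; r is re-estimated from recent innovations.
          """
          x, p, r = z[0], 1.0, r0
          innovations, estimates = [], []
          for zk in z:
              p = p + q                          # predict (random-walk altitude model)
              nu = zk - x                        # innovation
              innovations.append(nu)
              if len(innovations) >= window:
                  # innovation covariance approx: E[nu^2] = H P H' + R  (H = 1 here)
                  r = max(np.mean(np.square(innovations[-window:])) - p, 1e-6)
              k = p / (p + r)                    # Kalman gain
              x = x + k * nu
              p = (1.0 - k) * p
              estimates.append(x)
          return np.array(estimates)

      # Illustrative use: descending altitude with noise that grows near the ground (assumed).
      rng = np.random.default_rng(7)
      truth = np.linspace(50.0, 0.0, 400)
      meas = truth + rng.normal(0.0, 0.5 + 1.5 * (truth < 10), size=truth.size)
      est = adaptive_kf_altitude(meas)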

  16. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the MUSIC, CS, minimum-variance distortionless response (MVDR) Beamforming, and conventional Beamforming methods in order to better understand their advantages and features for studying earthquake rupture processes. We use the Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular, the artifacts that are caused by the sliding-window scheme. Based on our tests, we find that CS, which is developed from the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR Beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake.
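
    For reference, a minimal narrowband MUSIC sketch for a linear array is given below; it only illustrates the noise-subspace projection behind the high-resolution methods discussed, not the authors' teleseismic implementation or the CS formulation.

      import numpy as np

      def music_spectrum(R, n_sources, positions, wavelength, angles_deg):
          """Narrowband MUSIC pseudospectrum over candidate arrival angles."""
          vals, vecs = np.linalg.eigh(R)
          En = vecs[:, :-n_sources]                       # noise subspace
          spectrum = []
          for ang in np.deg2rad(angles_deg):
              a = np.exp(2j * np.pi * positions * np.sin(ang) / wavelength)
              denom = np.linalg.norm(En.conj().T @ a) ** 2
              spectrum.append(1.0 / denom)
          return np.array(spectrum)

      # Illustrative use: 12-element array, one plane wave at 20 degrees (all values assumed).
      rng = np.random.default_rng(8)
      pos = np.arange(12) * 0.5                           # element positions in wavelengths
      wavelength = 1.0
      a_true = np.exp(2j * np.pi * pos * np.sin(np.deg2rad(20.0)) / wavelength)
      snapshots = np.outer(a_true, rng.standard_normal(200)) \
                  + 0.1 * (rng.standard_normal((12, 200)) + 1j * rng.standard_normal((12, 200)))
      R = snapshots @ snapshots.conj().T / 200
      P = music_spectrum(R, 1, pos, wavelength, np.linspace(-90, 90, 361))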

  17. Planarized process for resonant leaky-wave coupled phase-locked arrays of mid-IR quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Chang, C.-C.; Kirch, J. D.; Boyle, C.; Sigler, C.; Mawst, L. J.; Botez, D.; Zutter, B.; Buelow, P.; Schulte, K.; Kuech, T.; Earles, T.

    2015-03-01

    On-chip resonant leaky-wave coupling of quantum cascade lasers (QCLs) emitting at 8.36 μm has been realized by selective regrowth of interelement layers in curved trenches, defined by dry and wet etching. The fabricated structure provides large index steps (Δn = 0.10) between the antiguided-array element and interelement regions. In-phase-mode operation to 5.5 W front-facet emitted power in a near-diffraction-limited far-field beam pattern, with 4.5 W in the main lobe, is demonstrated. A refined fabrication process has been developed to produce phase-locked antiguided arrays of QCLs with planar geometry. The main fabrication steps in this process include non-selective regrowth of Fe:InP in interelement trenches, defined by inductively coupled plasma (ICP) etching, a chemical polishing (CP) step to planarize the surface, non-selective regrowth of interelement layers, ICP selective etching of interelement layers, and non-selective regrowth of an InP cladding layer followed by another CP step to form the element regions. This new process results in planar InGaAs/InP interelement regions, which allows for significantly improved control over the array geometry and the dimensions of the element and interelement regions. Such a planar process is highly desirable for realizing shorter-emitting-wavelength (4.6 μm) arrays, where fabrication tolerances for single-mode operation are tighter compared to 8 μm-emitting devices.

  18. CMOS array of photodiodes with electronic processing for 3D optical reconstruction

    NASA Astrophysics Data System (ADS)

    Hornero, Gemma; Montane, Enric; Chapinal, Genis; Moreno, Mauricio; Herms, Atila

    2001-04-01

    It is well known that laser time-of-flight (TOF) and optical triangulation are the most useful optical techniques for distance measurements. The first is more suitable for large distances, since for short distance ranges high modulation frequencies of the laser diodes (~200-500 MHz) are needed. For these ranges, optical triangulation is simpler, as it is only necessary to read the projection of the laser point over a linear optical sensor without any laser modulation. Laser triangulation is based on the rotation of the object. This motion shifts the projected point over the linear sensor, yielding 3D information by means of a full readout of the linear sensor at each angular position. On the other hand, a hybrid method of triangulation and TOF can be implemented. In this case, a synchronized scanning of a laser beam over the object results in different arrival times of light at each pixel. The 3D information is carried by these delays. Only a single readout of the linear sensor is needed. In this work we present the design of two different linear arrays of photodiodes in CMOS technology, the first based on the optical triangulation measurement and the second based on this hybrid method (TFO). In contrast to PSDs (Position Sensitive Devices) and CCDs, CMOS technology can include, on the same chip, photodiodes and control and processing electronics, which in the other cases must be implemented with external microcontrollers.
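
    The triangulation geometry reduces to similar triangles; the short sketch below shows the range formula for the simple case of a laser beam parallel to the optical axis, with illustrative (assumed) baseline and focal-length values.

      import numpy as np

      def triangulation_range(spot_position, baseline, focal_length):
          """Range from laser triangulation with the laser beam parallel to the optical axis.

          spot_position : lateral position of the imaged laser spot on the linear sensor [m]
          baseline      : laser-to-lens offset [m]
          Similar triangles give  range = focal_length * baseline / spot_position.
          """
          return focal_length * baseline / spot_position

      # Illustrative numbers (assumed): 20 mm lens, 50 mm baseline, spot imaged 0.5 mm off-axis.
      print(triangulation_range(0.5e-3, 50e-3, 20e-3))   # -> 2.0 m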

  19. Array processing for RFID tag localization exploiting multi-frequency signals

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Li, Xin; Amin, Moeness G.

    2009-05-01

    RFID is an increasingly valuable business and technology tool for electronically identifying, locating, and tracking products, assets, and personnel. As a result, precise positioning and tracking of RFID tags and readers have received considerable attention from both academic and industrial communities. Finding the position of RFID tags is considered an important task in various real-time locating systems (RTLS). As such, numerous RFID localization products have been developed for various applications. The majority of RFID positioning systems are based on the fusion of pieces of relevant information, such as the range and the direction-of-arrival (DOA). For example, trilateration can determine the tag position by using the range information of the tag estimated from three or more spatially separated reader antennas. Triangulation is another method to locate RFID tags that uses the DOA information estimated at multiple spatially separated locations. The RFID tag positions can also be determined through hybrid techniques that combine the range and DOA information. The focus of this paper is to study the design and performance of the localization of passive RFID tags using array processing techniques in a multipath environment, exploiting multi-frequency CW signals. The latter are used to decorrelate the coherent multipath signals for effective DOA estimation and for the purpose of accurate range estimation. Accordingly, the spatial and frequency dimensionalities are fully utilized for robust and accurate positioning of RFID tags.
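
    One way multi-frequency CW signals support ranging is through the slope of the backscatter phase versus frequency; the sketch below illustrates that relationship with assumed frequencies and tag range, and is not necessarily the estimator used in the paper.

      import numpy as np

      C = 3e8  # speed of light [m/s]

      def range_from_phases(freqs, phases):
          """Estimate tag range from backscatter phase measured at several CW frequencies.

          For a round-trip path the phase is phi(f) = -4*pi*f*r/c (mod 2*pi), so the
          slope of the unwrapped phase versus frequency gives the range.
          """
          phi = np.unwrap(phases)
          slope = np.polyfit(freqs, phi, 1)[0]          # d(phi)/d(f)
          return -slope * C / (4.0 * np.pi)

      # Illustrative use: tag at 4.2 m, four CW frequencies near 915 MHz (all values assumed).
      r_true = 4.2
      freqs = 902e6 + np.arange(4) * 2e6
      phases = np.angle(np.exp(-1j * 4 * np.pi * freqs * r_true / C))
      print(range_from_phases(freqs, phases))           # ~4.2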

  20. Solution-Processed Organic Thin-Film Transistor Array for Active-Matrix Organic Light-Emitting Diode

    NASA Astrophysics Data System (ADS)

    Harada, Chihiro; Hata, Takuya; Chuman, Takashi; Ishizuka, Shinichi; Yoshizawa, Atsushi

    2013-05-01

    We developed a 3-in. organic thin-film transistor (OTFT) array with an ink-jetted organic semiconductor. All layers except electrodes were fabricated by solution processes. The OTFT performed well without hysteresis, and the field-effect mobility in the saturation region was 0.45 cm² V⁻¹ s⁻¹, the threshold voltage was 3.3 V, and the on/off current ratio was more than 10⁶. We demonstrated a 3-in. active-matrix organic light-emitting diode (AMOLED) display driven by the OTFT array. The display could provide clear moving images. The peak luminance of the display was 170 cd/m².

  1. Signal subspace analysis for decoherent processes during interferometric fiber-optic gyroscopes using synchronous adaptive filters.

    PubMed

    Li, Yongxiao; Wang, Zinan; Peng, Chao; Li, Zhengbin

    2014-10-10

    Conventional signal processing methods for improving the random walk coefficient and bias stability of interferometric fiber-optic gyroscopes are usually implemented on one-dimensional sequences. In this paper, as a comparison, we combine synchronous adaptive filters with calculations of the correlations of multidimensional signals from the perspective of the signal subspace. First, two synchronous independent channels are obtained through quadrature demodulation. Next, synchronous adaptive filtering was carried out to project the original channels onto highly correlated error channels and approximation channels. The error channel signals were then processed by principal component analysis to suppress coherent noise. Finally, an optimal state estimation of these error channels and approximation channels based on the Kalman gain coefficient was performed. Experimental results show that this signal processing method improved the variance of the raw measurements from 0.0630 (°/h)² to 0.0103 (°/h)². PMID:25322393
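
    A minimal LMS sketch of the channel-splitting step is given below: one demodulated channel is predicted from the other, producing an approximation channel and an error channel that could then feed the PCA and Kalman stages. The filter length, step size, and synthetic data are assumptions.

      import numpy as np

      def lms_adaptive_filter(primary, reference, n_taps=8, mu=0.01):
          """LMS adaptive filter: predict `primary` from `reference`.

          Returns the approximation channel (filter output) and the error channel,
          analogous to splitting each demodulated channel into a correlated part and
          a residual before further subspace/Kalman processing.
          """
          w = np.zeros(n_taps)
          approx = np.zeros_like(primary)
          error = np.zeros_like(primary)
          for k in range(n_taps, len(primary)):
              u = reference[k - n_taps:k][::-1]          # most recent reference samples
              approx[k] = w @ u
              error[k] = primary[k] - approx[k]
              w = w + 2.0 * mu * error[k] * u            # LMS weight update
          return approx, error

      # Illustrative use: two synchronized channels sharing a common component (assumed data).
      rng = np.random.default_rng(9)
      common = np.sin(2 * np.pi * 0.01 * np.arange(5000))
      ch1 = common + 0.1 * rng.standard_normal(5000)
      ch2 = 0.8 * common + 0.1 * rng.standard_normal(5000)
      approx, error = lms_adaptive_filter(ch1, ch2)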

  2. A new hypothesis: some metastases are the result of inflammatory processes by adapted cells, especially adapted immune cells at sites of inflammation

    PubMed Central

    Shahriyari, Leili

    2016-01-01

    There is an old hypothesis that metastasis is the result of migration of tumor cells from the tumor to a distant site. In this article, we propose another mechanism for metastasis, for cancers that are initiated at the site of chronic inflammation. We suggest that cells at the site of chronic inflammation might become adapted to the inflammatory process, and these adaptations may lead to the initiation of an inflammatory tumor. For example, in an inflammatory tumor immune cells might be adapted to send signals of proliferation or angiogenesis, and epithelial cells might be adapted to proliferation (like inactivation of tumor suppressor genes). Therefore, we hypothesize that metastasis could be the result of an inflammatory process by adapted cells, especially adapted immune cells at the site of inflammation, as well as the migration of tumor cells with the help of activated platelets, which travel between sites of inflammation. If this hypothesis is correct, then any treatment causing necrotic cell death may not be a good solution, because necrotic cells in the tumor micro-environment or anywhere in the body activate the immune system to initiate the inflammatory process, and the involvement of adapted immune cells in the inflammatory processes leads to the formation and progression of tumors. Adapted activated immune cells send more signals of proliferation and/or angiogenesis than normal cells. Moreover, if there were adapted epithelial cells, they would divide at a much higher rate in response to the proliferation signals than normal cells. Thus, not only would the tumor come back after the treatment, but it would also grow more aggressively. PMID:27158448

  3. Implementation of Joint Pre-FFT Adaptive Array Antenna and Post-FFT Space Diversity Combining for Mobile ISDB-T Receiver

    NASA Astrophysics Data System (ADS)

    Pham, Dang Hai; Gao, Jing; Tabata, Takanobu; Asato, Hirokazu; Hori, Satoshi; Wada, Tomohisha

    In the application targeted here, four on-glass antenna elements are set in an automobile to improve the reception quality of a mobile ISDB-T receiver. With regard to the directional characteristics of each antenna, we propose and implement a joint Pre-FFT adaptive array antenna and Post-FFT space diversity combining (AAA-SDC) scheme for the mobile ISDB-T receiver. By applying a joint hardware and software approach, a flexible platform is realized in which several system configuration schemes can be supported; the receiver can be reconfigured on the fly. Simulation results show that the AAA-SDC scheme drastically improves the performance of the mobile ISDB-T receiver, especially in the region of large Doppler shift. The experimental results from a field test also confirm that the proposed AAA-SDC scheme successfully achieves an outstanding reception rate of up to 100% while moving at a speed of 80 km/h.

  4. Digital pixel CMOS focal plane array with on-chip multiply accumulate units for low-latency image processing

    NASA Astrophysics Data System (ADS)

    Little, Jeffrey W.; Tyrrell, Brian M.; D'Onofrio, Richard; Berger, Paul J.; Fernandez-Cull, Christy

    2014-06-01

    A digital-pixel CMOS focal plane array has been developed to enable low-latency implementations of image processing systems such as centroid trackers, Shack-Hartmann wavefront sensors, and Fitts correlation trackers through the use of in-pixel digital signal processing (DSP) and generic parallel pipelined multiply-accumulate (MAC) units. Light intensity digitization occurs at the pixel level, enabling in-pixel DSP and noiseless data transfer from the pixel array to the peripheral processing units. The pipelined processing of row and column image data prior to off-chip readout reduces the required output bandwidth of the image sensor, thus reducing the latency of the computations necessary to implement various image processing systems. Data volume reductions of over 80% lead to sub-10 μs latency for completing various tracking and sensor algorithms. This paper details the architecture of the pixel-processing imager (PPI) and presents initial results from a prototype device fabricated in a standard 65 nm CMOS process hybridized to a commercial off-the-shelf short-wave infrared (SWIR) detector array.

  5. Small Sample Properties of an Adaptive Filter with Application to Low Volume Statistical Process Control

    SciTech Connect

    CROWDER, STEPHEN V.

    1999-09-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper we address the issue of low volume statistical process control. We investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. We develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, we study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. We show that far fewer data values are needed than is typically recommended for process control applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
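
    A simplified stand-in for such an adaptive filter is sketched below: an AR(1) model of the autocorrelated series is re-estimated recursively as observations arrive, starting from unknown parameters, and the one-step-ahead prediction error is tracked. The forgetting factor and simulated series are assumptions, and this is not the authors' exact filter.

      import numpy as np

      def recursive_ar1_monitor(x, forget=0.95):
          """Recursively fit an AR(1) model and track one-step prediction errors.

          Parameters start unknown and are updated as data arrive; the one-step-ahead
          prediction error (rather than a control chart) is the monitored quantity.
          """
          mean = x[0]
          s_xy, s_xx, phi = 0.0, 1e-6, 0.0
          errors = []
          for k in range(1, len(x)):
              pred = mean + phi * (x[k - 1] - mean)       # one-step-ahead prediction
              errors.append(x[k] - pred)
              # exponentially weighted updates of the mean and the AR(1) coefficient
              mean = forget * mean + (1 - forget) * x[k]
              s_xy = forget * s_xy + (x[k] - mean) * (x[k - 1] - mean)
              s_xx = forget * s_xx + (x[k - 1] - mean) ** 2
              phi = np.clip(s_xy / s_xx, -0.99, 0.99)
          return np.array(errors)

      # Illustrative use on a short autocorrelated series (all values assumed).
      rng = np.random.default_rng(10)
      x = np.zeros(40)
      for k in range(1, 40):
          x[k] = 0.6 * x[k - 1] + rng.normal(0, 0.5)
      errors = recursive_ar1_monitor(x + 10.0)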

  6. Small sample properties of an adaptive filter with application to low volume statistical process control

    SciTech Connect

    Crowder, S.V.; Eshleman, L.

    1998-08-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper the authors address the issue of low volume statistical process control. They investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. The authors develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, they study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. They show that far fewer data values are needed than is typically recommended for process control applications. They also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.

  7. Sub-band processing for grating lobe disambiguation in sparse arrays

    NASA Astrophysics Data System (ADS)

    Hersey, Ryan K.; Culpepper, Edwin

    2014-06-01

    Combined synthetic aperture radar (SAR) and ground moving target indication (GMTI) radar modes simultaneously generate SAR and GMTI products from the same radar data. This hybrid mode provides the benefit of combined imaging and moving target displays as well as improved target recognition. However, the differing system, antenna, and waveform requirements between SAR and GMTI modes make implementing the hybrid mode challenging. The Air Force Research Laboratory (AFRL) Gotcha radar has collected wide-bandwidth, multi-channel data that can be used for both SAR and GMTI applications. The spatial channels on the Gotcha array are sparsely separated, which causes spatial grating lobes during the digital beamforming process. Grating lobes have little impact on SAR, which typically uses a single spatial channel. However, grating lobes have a large impact on GMTI, where spatial channels are used to mitigate clutter and estimate the target angle of arrival (AOA). The AOA ambiguity has a significant impact in the Gotcha data, where detections from the sidelobes and skirts of the mainlobe wrap back into the main scene causing a significant number of false alarms. This paper presents a sub-banding method to disambiguate grating lobes in the GMTI processing. This method divides the wideband SAR data into multiple frequency sub-bands. Since each sub-band has a different center frequency, the grating lobes for each sub-band appear at different angles. The method uses this variation to disambiguate target returns and places them at the correct angle of arrival (AOA). Results are presented using AFRL Gotcha radar data.
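
    The disambiguation rests on the grating-lobe angles depending on wavelength while the true AOA does not; the short sketch below computes the lobe angles of a sparse array at two assumed sub-band center frequencies to illustrate the shift (the element spacing and frequencies are placeholders, not Gotcha parameters).

      import numpy as np

      C = 3e8  # speed of light [m/s]

      def lobe_angles(theta0_deg, spacing, freq, orders=(-1, 0, 1)):
          """Angles [deg] of the mainlobe and grating lobes of a sparse array with element
          `spacing` [m] for a target at theta0, at a given sub-band center frequency."""
          lam = C / freq
          s = np.sin(np.deg2rad(theta0_deg)) + np.array(orders) * lam / spacing
          return np.rad2deg(np.arcsin(s[np.abs(s) <= 1.0]))

      # Illustrative sparse X-band array: 0.24 m channel spacing, target at 10 degrees (assumed).
      # The true AOA (10 deg) is common to both sub-bands, while the grating lobes move with
      # frequency, so intersecting the per-sub-band detections disambiguates the target angle.
      print(lobe_angles(10.0, spacing=0.24, freq=9.5e9))
      print(lobe_angles(10.0, spacing=0.24, freq=10.5e9))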

  8. Signal processing through a generalized module of adaptation and spatial sensing.

    PubMed

    Krishnan, J

    2009-07-01

    Signal transduction in many cellular processes is accompanied by the feature of adaptation, which allows certain key signalling components to respond to temporal and/or spatial variation of external signals, independent of the absolute value of the signal. We extend and formulate a more general module which accounts for robust temporal adaptation and spatial response. In this setting, we examine various aspects of spatial and temporal signalling, as well as the signalling consequences and restrictions imposed by virtue of adaptation. This module is able to exhibit a variety of behaviour in response to temporal, spatial and spatio-temporal inputs. We carefully examine the roles of various parameters in this module and how they affect signal processing and propagation. Overall, we demonstrate how a simple module can account for a range of downstream responses to a variety of input signals, and how elucidating the downstream response of many cellular components in systems with such adaptive signalling can consequently be very non-trivial. PMID:19254728

  9. Three-dimensional region-based adaptive image processing techniques for volume visualization applications

    NASA Astrophysics Data System (ADS)

    de Deus Lopes, Roseli; Zuffo, Marcelo K.; Rangayyan, Rangaraj M.

    1996-04-01

    Recent advances in three-dimensional (3D) imaging techniques have expanded the scope of applications of volume visualization to many areas such as medical imaging, scientific visualization, robotic vision, and virtual reality. Advanced image filtering, enhancement, and analysis techniques are being developed in parallel in the field of digital image processing. Although the fields cited have many aspects in common, it appears that many of the latest developments in image processing are not being applied to the fullest extent possible in visualization. It is common to encounter rather simple and elementary image pre-processing operations in visualization and 3D imaging applications. The purpose of this paper is to present an overview of selected topics from recent developments in adaptive image processing and demonstrate or suggest their applications in volume visualization. The techniques include adaptive noise removal; improvement of contrast and visibility of objects; space-variant deblurring and restoration; segmentation-based lossless coding for data compression; and perception-based measures for analysis, enhancement, and rendering. The techniques share the common base of identification of adaptive regions by region growing, which lends them a perceptual basis related to the human visual system. Preliminary results obtained with some of the techniques implemented so far are used to illustrate the concepts involved, and to indicate potential performance capabilities of the methods.

  10. Adapting School-Based Substance Use Prevention Curriculum Through Cultural Grounding: A Review and Exemplar of Adaptation Processes for Rural Schools

    PubMed Central

    Colby, Margaret; Hecht, Michael L.; Miller-Day, Michelle; Krieger, Janice L.; Syvertsen, Amy K.; Graham, John W.; Pettigrew, Jonathan

    2014-01-01

    A central challenge facing twenty-first century community-based researchers and prevention scientists is curriculum adaptation processes. While early prevention efforts sought to develop effective programs, taking programs to scale implies that they will be adapted, especially as programs are implemented with populations other than those with whom they were developed or tested. The principle of cultural grounding, which argues that health message adaptation should be informed by knowledge of the target population and by cultural insiders, provides a theoretical rationale for cultural regrounding. The article presents an illustrative case of methods used to reground the keepin’ it REAL substance use prevention curriculum for a rural adolescent population. We argue that adaptation processes like those presented should be incorporated into the design and dissemination of prevention interventions. PMID:22961604

  11. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T.; Gibbons, S.J.; Ringdal, F.; Harris, D.B.

    2007-01-30

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst-reviewed contexts:
    • one or more fixed frequency bands should be chosen for each phase tested for;
    • the frequency band providing the most stable parameter estimates varies from site to site, and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site;
    • slowness corrections applied depend strongly on the frequency band chosen;
    • the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain.
    For this reason, the automatic template location estimates provided here are frequently far better than those obtained by
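
    A minimal sketch of the band-selection step described above might look as follows, assuming hypothetical calibration measurements of azimuth and apparent velocity in three candidate frequency bands; the band with the smallest normalized spread would be retained for subsequent diagnostic tests. The band labels and numerical values are invented for illustration and are not taken from the report.

    ```python
    import numpy as np

    # Hypothetical calibration measurements for one phase: azimuth (deg) and apparent
    # velocity (km/s) estimates from previous events, per candidate frequency band.
    calibration = {
        "2-4 Hz":  {"azimuth": [78.1, 80.3, 77.5, 81.0], "velocity": [8.1, 7.6, 8.4, 7.9]},
        "4-8 Hz":  {"azimuth": [79.0, 79.4, 78.8, 79.1], "velocity": [8.0, 8.1, 7.9, 8.0]},
        "8-16 Hz": {"azimuth": [75.0, 84.0, 79.0, 88.0], "velocity": [7.0, 9.2, 8.6, 6.9]},
    }

    def spread(band):
        # combine normalized standard deviations of the two measured parameters
        az = np.std(band["azimuth"]) / 360.0
        vel = np.std(band["velocity"]) / np.mean(band["velocity"])
        return az + vel

    best = min(calibration, key=lambda k: spread(calibration[k]))
    print("most stable band:", best)   # band used for the diagnostic test on later events
    ```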

  12. A Low Processing Cost Adaptive Algorithm Identifying Nonlinear Unknown System with Piecewise Linear Curve

    NASA Astrophysics Data System (ADS)

    Fujii, Kensaku; Aoki, Ryo; Muneyasu, Mitsuji

    This paper proposes an adaptive algorithm for identifying unknown systems containing nonlinear amplitude characteristics. Usually, the nonlinearity is so small as to be negligible. However, in low cost systems, such as an acoustic echo canceller using a small loudspeaker, the nonlinearity deteriorates the performance of the identification. Several methods for preventing this deterioration, such as polynomial or Volterra series approximations, have hence been proposed and studied. However, the conventional methods require a high processing cost. In this paper, we propose a method approximating the nonlinear characteristics with a piecewise linear curve and show, using computer simulations, that the performance can be greatly improved. The proposed method can also reduce the processing cost to only about twice that of the linear adaptive filter system.
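
    The paper's adaptive update itself is not reproduced here; as a sketch of the underlying idea, the code below fits a piecewise linear curve (a hat-function basis on fixed breakpoints) to a saturating amplitude characteristic by least squares. The tanh nonlinearity, the number of knots, and the function names are assumptions made for the example.

    ```python
    import numpy as np

    # Saturating amplitude characteristic standing in for a small loudspeaker
    f = lambda x: np.tanh(2.0 * x)

    x = np.linspace(-1, 1, 400)
    y = f(x)

    # Fixed breakpoints; the curve values at the breakpoints are the free parameters.
    knots = np.linspace(-1, 1, 7)

    def hat_basis(x, knots):
        """Design matrix of piecewise-linear ("hat") interpolation basis functions."""
        B = np.zeros((len(x), len(knots)))
        for j in range(len(knots)):
            e = np.zeros(len(knots))
            e[j] = 1.0
            B[:, j] = np.interp(x, knots, e)
        return B

    B = hat_basis(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # least-squares fit of the breakpoint ordinates
    approx = B @ coef
    print("max approximation error:", np.max(np.abs(approx - y)))
    ```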

  13. The Information Adaptive System - A demonstration of real-time onboard image processing

    NASA Technical Reports Server (NTRS)

    Thomas, G. L.; Carney, P. C.; Meredith, B. D.

    1983-01-01

    The Information Adaptive System (IAS) program has the objective of developing and demonstrating, at the brassboard level, an architecture which can be used to perform advanced signal processing functions on board the spacecraft. Particular attention is given to the processing of high-speed multispectral imaging data in real time, and the development of advanced technology which could be employed for future space applications. An IAS functional description is provided, and questions of radiometric correction are examined. Problems of data packetization are considered along with data selection, a distortion coefficient processor, an adaptive system controller, an image processing demonstration system, a sensor simulator and output data buffer, a test support and demonstration controller, and IAS demonstration operating modes.

  14. Adapting existing natural language processing resources for cardiovascular risk factors identification in clinical notes.

    PubMed

    Khalifa, Abdulrahman; Meystre, Stéphane

    2015-12-01

    The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status among other factors found in health records of diabetic patients. In addition, the task involved detecting medications and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application's main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our attempt was mostly based on existing tools adapted with minimal changes and allowed for satisfactory performance with limited development effort. PMID:26318122

  15. Developing Smart Seismic Arrays: A Simulation Environment, Observational Database, and Advanced Signal Processing

    SciTech Connect

    Harben, P E; Harris, D; Myers, S; Larsen, S; Wagoner, J; Trebes, J; Nelson, K

    2003-09-15

    Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization and in full 3D finite difference modeling as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic

  16. Mapping acoustic emissions from hydraulic fracture treatments using coherent array processing: Concept

    SciTech Connect

    Harris, D.B.; Sherwood, R.J.; Jarpe, S.P.; Harben, P.E.

    1991-09-01

    Hydraulic fracturing is a widely-used well completion technique for enhancing the recovery of gas and oil in low-permeability formations. Hydraulic fracturing consists of pumping fluids into a well under high pressure (1000--5000 psi) to wedge-open and extend a fracture into the producing formation. The fracture acts as a conduit for gas and oil to flow back to the well, significantly increasing communication with larger volumes of the producing formation. A considerable amount of research has been conducted on the use of acoustic (microseismic) emission to delineate fracture growth. The use of transient signals to map the location of discrete sites of emission along fractures has been the focus of most research on methods for delineating fractures. These methods depend upon timing the arrival of compressional (P) or shear (S) waves from discrete fracturing events at one or more clamped geophones in the treatment well or in adjacent monitoring wells. Using a propagation model, the arrival times are used to estimate the distance from each sensor to the fracturing event. Coherent processing methods appear to have sufficient resolution in the 75 to 200 Hz band to delineate the extent of fractures induced by hydraulic fracturing. The medium velocity structure must be known with a 10% accuracy or better and no major discontinuities should be undetected. For best results, the receiving array must be positioned directly opposite the perforations (same depths) at a horizontal range of 200 to 400 feet from the region to be imaged. Sources of acoustic emission may be detectable down to a single-sensor SNR of 0.25 or somewhat less. These conclusions are limited by the assumptions of this study: good coupling to the formation, acoustic propagation, and accurate knowledge of the velocity structure.
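
    One simple way to illustrate the timing step described above is a single-sensor range estimate from the S-P arrival-time difference (the report's own location method uses a propagation model and may combine several sensors). With compressional and shear velocities Vp and Vs, a delay Δt between the S and P arrivals gives the range r = Δt·Vp·Vs/(Vp − Vs). The velocity and delay values below are illustrative, not taken from the report.

    ```python
    # Distance to an emission event from the S-P arrival-time difference at one sensor.
    # Velocities are illustrative values for a sedimentary formation, not from the report.
    Vp = 4000.0      # compressional (P) velocity, m/s
    Vs = 2300.0      # shear (S) velocity, m/s
    dt = 0.012       # measured S-P delay, s

    r = dt * Vp * Vs / (Vp - Vs)
    print(f"range to emission source: {r:.1f} m")   # about 65 m for these numbers
    ```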

  17. Spectroscopic analyses of chemical adaptation processes within microalgal biomass in response to changing environments.

    PubMed

    Vogt, Frank; White, Lauren

    2015-03-31

    Via photosynthesis, marine phytoplankton transforms large quantities of inorganic compounds into biomass. This has considerable environmental impacts as microalgae contribute, for instance, to counter-balancing anthropogenic releases of the greenhouse gas CO2. On the other hand, high concentrations of nitrogen compounds in an ecosystem can lead to harmful algae blooms. In previous investigations it was found that the chemical composition of microalgal biomass is strongly dependent on the nutrient availability. Therefore, it is expected that algae's sequestration capabilities and productivity are also determined by the cells' chemical environments. For investigating this hypothesis, novel analytical methodologies are required which are capable of monitoring live cells exposed to chemically shifting environments followed by chemometric modeling of their chemical adaptation dynamics. FTIR-ATR experiments have been developed for acquiring spectroscopic time series of live Dunaliella parva cultures adapting to different nutrient situations. Comparing experimental data from acclimated cultures to those exposed to a chemically shifted nutrient situation reveals insights into which analyte groups participate in modifications of microalgal biomass and on what time scales. For a chemometric description of these processes, a data model has been deduced which explains the chemical adaptation dynamics explicitly rather than empirically. First results show that this approach is feasible and derives information about the chemical biomass adaptations. Future investigations will utilize these instrumental and chemometric methodologies for quantitative investigations of the relation between chemical environments and microalgal sequestration capabilities. PMID:25813024

  18. The Contextualized Technology Adaptation Process (CTAP): Optimizing Health Information Technology to Improve Mental Health Systems.

    PubMed

    Lyon, Aaron R; Wasse, Jessica Knaster; Ludwig, Kristy; Zachry, Mark; Bruns, Eric J; Unützer, Jürgen; McCauley, Elizabeth

    2016-05-01

    Health information technologies have become a central fixture in the mental healthcare landscape, but few frameworks exist to guide their adaptation to novel settings. This paper introduces the contextualized technology adaptation process (CTAP) and presents data collected during Phase 1 of its application to measurement feedback system development in school mental health. The CTAP is built on models of human-centered design and implementation science and incorporates repeated mixed methods assessments to guide the design of technologies to ensure high compatibility with a destination setting. CTAP phases include: (1) Contextual evaluation, (2) Evaluation of the unadapted technology, (3) Trialing and evaluation of the adapted technology, (4) Refinement and larger-scale implementation, and (5) Sustainment through ongoing evaluation and system revision. Qualitative findings from school-based practitioner focus groups are presented, which provided information for CTAP Phase 1, contextual evaluation, surrounding education sector clinicians' workflows, types of technologies currently available, and influences on technology use. Discussion focuses on how findings will inform subsequent CTAP phases, as well as their implications for future technology adaptation across content domains and service sectors. PMID:25677251

  19. Low Temperature Adaptation Is Not the Opposite Process of High Temperature Adaptation in Terms of Changes in Amino Acid Composition

    PubMed Central

    Yang, Ling-Ling; Tang, Shu-Kun; Huang, Ying; Zhi, Xiao-Yang

    2015-01-01

    Previous studies focused on psychrophilic adaptation generally have demonstrated that multiple mechanisms work together to increase protein flexibility and activity, as well as to decrease the thermostability of proteins. However, the relationship between high and low temperature adaptations remains unclear. To investigate this issue, we collected the available predicted whole proteome sequences of species with different optimal growth temperatures, and analyzed amino acid variations and substitutional asymmetry in pairs of homologous proteins from related species. We found that changes in amino acid composition associated with low temperature adaptation did not exhibit a coherent opposite trend when compared with changes in amino acid composition associated with high temperature adaptation. This result indicates that during their evolutionary histories the proteome-scale evolutionary patterns associated with prokaryotes exposed to low temperature environments were distinct from the proteome-scale evolutionary patterns associated with prokaryotes exposed to high temperature environments in terms of changes in amino acid composition of the proteins. PMID:26614525

  20. Two-Dimensional Systolic Array For Kalman-Filter Computing

    NASA Technical Reports Server (NTRS)

    Chang, Jaw John; Yeh, Hen-Geul

    1988-01-01

    Two-dimensional, systolic-array, parallel data processor performs Kalman filtering in real time. Algorithm rearranged to be Faddeev algorithm for generalized signal processing. Algorithm mapped onto very-large-scale integrated-circuit (VLSI) chip in two-dimensional, regular, simple, expandable array of concurrent processing cells. Processor does matrix/vector-based algebraic computations. Applications include adaptive control of robots, remote manipulators and flexible structures and processing radar signals to track targets.
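
    The Faddeev algorithm evaluates an expression of the form D + C·A⁻¹·B by eliminating the lower-left block of the compound matrix [[A, B], [−C, D]], which maps naturally onto a regular array of processing cells. The sketch below is a plain (non-systolic) illustration of that algebra applied to the Kalman gain K = P·Hᵀ·(H·P·Hᵀ + R)⁻¹; the matrices are invented example values, not from the brief.

    ```python
    import numpy as np

    def faddeev(A, B, C, D):
        """Evaluate D + C @ inv(A) @ B by eliminating the lower-left block of
        [[A, B], [-C, D]]; a systolic array performs this elimination cell by cell."""
        X = np.linalg.solve(A, B)        # A^{-1} B without forming the inverse explicitly
        return D + C @ X

    # Kalman gain K = P H^T (H P H^T + R)^{-1}, written in Faddeev form.
    # P, H, R below are illustrative values, not tied to any particular application.
    P = np.diag([2.0, 1.0])              # state covariance
    H = np.array([[1.0, 0.0]])           # observation matrix
    R = np.array([[0.5]])                # measurement noise covariance

    S = H @ P @ H.T + R                  # innovation covariance
    K = faddeev(A=S, B=np.eye(1), C=P @ H.T, D=np.zeros((2, 1)))
    print(K)                             # equals P H^T S^{-1}
    ```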

  1. A review of culturally adapted versions of the Oswestry Disability Index: the adaptation process, construct validity, test-retest reliability and internal consistency.

    PubMed

    Sheahan, Peter J; Nelson-Wong, Erika J; Fischer, Steven L

    2015-12-01

    The Oswestry Disability Index (ODI) is a self-report-based outcome measure used to quantify the extent of disability related to low back pain (LBP), a substantial contributor to workplace absenteeism. The ODI tool has been adapted for use by patients in several non-English speaking nations. It is unclear, however, if these adapted versions of the ODI are as credible as the original ODI developed for English-speaking nations. The objective of this study was to conduct a review of the literature to identify culturally adapted versions of the ODI and to report on the adaptation process, construct validity, test-retest reliability and internal consistency of these ODIs. Following a pragmatic review process, data were extracted from each study with regard to these four outcomes. While most studies applied adaptation processes in accordance with best-practice guidelines, there were some deviations. However, all studies reported high-quality psychometric properties: group mean construct validity was 0.734 ± 0.094 (indicated via a correlation coefficient), test-retest reliability was 0.937 ± 0.032 (indicated via an intraclass correlation coefficient) and internal consistency was 0.876 ± 0.047 (indicated via Cronbach's alpha). Researchers can be confident when using any of these culturally adapted ODIs, or when comparing and contrasting results between cultures where these versions were employed. Implications for Rehabilitation: Low back pain is the second leading cause of disability in the world, behind only cancer. The Oswestry Disability Index (ODI) has been developed as a self-report outcome measure of low back pain for administration to patients. An understanding of the various cross-cultural adaptations of the ODI is important for more concerted multi-national research efforts. This review examines 16 cross-cultural adaptations of the ODI and should inform the work of health care and rehabilitation professionals. PMID:25738913

  2. Processing of chemical sensor arrays with a biologically inspired model of olfactory coding.

    PubMed

    Raman, Baranidharan; Sun, Ping A; Gutierrez-Galvez, Agustin; Gutierrez-Osuna, Ricardo

    2006-07-01

    This paper presents a computational model for chemical sensor arrays inspired by the first two stages in the olfactory pathway: distributed coding with olfactory receptor neurons and chemotopic convergence onto glomerular units. We propose a monotonic concentration-response model that maps conventional sensor-array inputs into a distributed activation pattern across a large population of neuroreceptors. Projection onto glomerular units in the olfactory bulb is then simulated with a self-organizing model of chemotopic convergence. The pattern recognition performance of the model is characterized using a database of odor patterns from an array of temperature modulated chemical sensors. The chemotopic code achieved by the proposed model is shown to improve the signal-to-noise ratio available at the sensor inputs while being consistent with results from neurobiology. PMID:16856663

  3. Facile and flexible fabrication of gapless microlens arrays using a femtosecond laser microfabrication and replication process

    NASA Astrophysics Data System (ADS)

    Liu, Hewei; Chen, Feng; Yang, Qing; Hu, Yang; Shan, Chao; He, Shengguan; Si, Jinhai; Hou, Xun

    2012-03-01

    We demonstrate a facile and flexible method to fabricate close-packed microlens arrays (MLAs). Glass molding templates with concave structures are produced by femtosecond (fs)-laser point-by-point exposures followed by a chemical treatment, and convex MLAs are subsequently replicated on poly(methyl methacrylate) [PMMA] using a hot embossing system. As an example, a microlens array (MLA) with 60-μm rectangular-shaped spherical microlenses is fabricated. Optical performance of the MLAs, such as focusing and imaging properties, is tested, and the results demonstrate the uniformity and smooth surfaces of the MLA. We also demonstrated that the shape and alignment of the arrays could be controlled by different parameters.

  4. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    PubMed Central

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  5. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong

    2015-04-01

    Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of the general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs, together with a singular perturbation technique, are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further reduced to solving a Hamilton-Jacobi-Bellman (HJB) equation. The HJB equation is a nonlinear PDE that is generally impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online using a neural network (NN) to approximate the value function; and an online NN weight tuning law is proposed without requiring an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of a high-speed aerospace vehicle, and the achieved results show its effectiveness. PMID:25794375
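
    The empirical-eigenfunction step can be illustrated with a small method-of-snapshots computation: collect snapshots of the field, eigen-decompose the (small) temporal correlation matrix, and form the dominant spatial modes from the snapshot combinations. The synthetic field, grid size, and number of retained modes below are assumptions for illustration only, not the paper's test process.

    ```python
    import numpy as np

    # Snapshots of a 1-D field on a spatial grid (rows: grid points, columns: snapshots).
    # The field here is synthetic, for illustration only.
    rng = np.random.default_rng(1)
    z = np.linspace(0, 1, 200)
    snapshots = np.column_stack([
        np.sin(np.pi * z) * (1 + 0.1 * rng.standard_normal())
        + 0.3 * np.sin(2 * np.pi * z) * rng.standard_normal()
        for _ in range(50)
    ])

    # Method of snapshots: eigen-decompose the small temporal correlation matrix.
    m = snapshots.shape[1]
    Ctime = snapshots.T @ snapshots / m
    vals, vecs = np.linalg.eigh(Ctime)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]

    # Empirical eigenfunctions (EEFs) spanning the dominant slow dynamics
    eefs = snapshots @ vecs[:, :3]
    eefs /= np.linalg.norm(eefs, axis=0)
    print("captured energy fraction:", vals[:3].sum() / vals.sum())
    ```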

  6. Coevolution of information processing and topology in hierarchical adaptive random Boolean networks

    NASA Astrophysics Data System (ADS)

    Górski, Piotr J.; Czaplicka, Agnieszka; Hołyst, Janusz A.

    2016-02-01

    Random Boolean Networks (RBNs) are frequently used for modeling complex systems driven by information processing, e.g. for gene regulatory networks (GRNs). Here we propose a hierarchical adaptive random Boolean Network (HARBN) as a system consisting of distinct adaptive RBNs (ARBNs) - subnetworks - connected by a set of permanent interlinks. We investigate mean node information, mean edge information, as well as mean node degree. Information measures and the internal subnetwork topology of HARBN coevolve and reach steady-states that are specific for a given network structure. The main natural feature of ARBNs, i.e. their adaptability, is preserved in HARBNs and they evolve towards critical configurations, as documented by power-law distributions of network attractor lengths. The mean information processed by a single node or a single link increases with the number of interlinks added to the system. The mean length of network attractors and the mean steady-state connectivity possess minima for certain specific values of the quotient between the density of interlinks and the density of all links in networks. This means that the modular network displays extremal values of its observables when subnetworks are connected with a density a few times lower than the mean density of all links.

  7. ADAPT: building conceptual models of the physical and biological processes across permafrost landscapes

    NASA Astrophysics Data System (ADS)

    Allard, M.; Vincent, W. F.; Lemay, M.

    2012-12-01

    Fundamental and applied permafrost research is called upon in Canada in support of environmental protection, economic development and for contributing to the international efforts in understanding climatic and ecological feedbacks of permafrost thawing under a warming climate. The five year "Arctic Development and Adaptation to Permafrost in Transition" program (ADAPT) funded by NSERC brings together 14 scientists from 10 Canadian universities and involves numerous collaborators from academia, territorial and provincial governments, Inuit communities and industry. The geographical coverage of the program encompasses all of the permafrost regions of Canada. Field research at a series of sites across the country is being coordinated. A common protocol for measuring ground thermal and moisture regime, characterizing terrain conditions (vegetation, topography, surface water regime and soil organic matter contents) is being applied in order to provide inputs for designing a general model to provide an understanding of transfers of energy and matter in permafrost terrain, and the implications for biological and human systems. The ADAPT mission is to produce an 'Integrated Permafrost Systems Science' framework that will be used to help generate sustainable development and adaptation strategies for the North in the context of rapid socio-economic and climate change. ADAPT has three major objectives: to examine how changing precipitation and warming temperatures affect permafrost geosystems and ecosystems, specifically by testing hypotheses concerning the influence of the snowpack, the effects of water as a conveyor of heat, sediments, and carbon in warming permafrost terrain and the processes of permafrost decay; to interact directly with Inuit communities, the public sector and the private sector for development and adaptation to changes in permafrost environments; and to train the new generation of experts and scientists in this critical domain of research in Canada

  8. Adaptive phase-locked fiber array with wavefront phase tip-tilt compensation using piezoelectric fiber positioners

    NASA Astrophysics Data System (ADS)

    Liu, Ling; Vorontsov, Mikhail A.; Polnau, Ernst; Weyrauch, Thomas; Beresnev, Leonid A.

    2007-09-01

    In this paper, we present the recent development of a conformal optical system with three adaptive phase-locked fiber elements. Coherent beam combining based on the stochastic parallel gradient descent (SPGD) algorithm is investigated. We implement both phase-locking control and wavefront phase tip-tilt control in our conformal optical system. The phase-locking control is performed with fiber-coupled lithium niobate phase shifters which are modulated by an AVR microprocessor-based SPGD controller. The perturbation rate of this SPGD controller is ~95,000 iterations per second. The phase-locking compensation bandwidth for a phase distortion amplitude of 2π radians is >100 Hz. The tip-tilt control is realized with piezoelectric fiber positioners which are modulated by a computer-based software SPGD controller. The perturbation rate of the tip-tilt SPGD controller is up to ~950 iterations per second. The tip-tilt compensation bandwidth using fiber positioners is ~10 Hz at a 60-μrad jitter swing angle.
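
    A minimal sketch of the SPGD phase-locking loop follows, assuming three unit-amplitude channels and the on-axis combined intensity as the quality metric; the gain, perturbation amplitude, and iteration count are invented values, not the hardware controller settings reported above.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 3                                    # three fiber elements
    phi = rng.uniform(0, 2 * np.pi, N)       # unknown piston phases to be compensated
    u = np.zeros(N)                          # control phases applied by the phase shifters

    def metric(u):
        # on-axis combined intensity of N unit-amplitude beams (the SPGD quality metric)
        return np.abs(np.exp(1j * (phi + u)).sum()) ** 2

    gain, delta = 0.15, 0.1
    for _ in range(2000):
        du = delta * rng.choice([-1.0, 1.0], N)     # random parallel perturbation
        dJ = metric(u + du) - metric(u - du)        # two-sided metric difference
        u += gain * dJ * du                         # gradient-estimate update
    print("normalized combined power:", metric(u) / N ** 2)   # approaches 1 when phase-locked
    ```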

  9. Performance-Based Adaptive Fuzzy Tracking Control for Networked Industrial Processes.

    PubMed

    Wang, Tong; Qiu, Jianbin; Yin, Shen; Gao, Huijun; Fan, Jialu; Chai, Tianyou

    2016-08-01

    In this paper, the performance-based control design problem for double-layer networked industrial processes is investigated. At the device layer, the prescribed performance functions are first given to describe the output tracking performance, and then, by using the backstepping technique, new adaptive fuzzy controllers are designed to guarantee the tracking performance under the effects of input dead-zone and the constraint of prescribed tracking performance functions. At the operation layer, by considering the stochastic disturbance, actual index value, target index value, and index prediction simultaneously, an adaptive inverse optimal controller in discrete-time form is designed to optimize the overall performance and stabilize the overall nonlinear system. Finally, a simulation example of a continuous stirred tank reactor system is presented to show the effectiveness of the proposed control method. PMID:27168605

  10. Two Adaptation Processes in Auditory Hair Cells Together Can Provide an Active Amplifier

    PubMed Central

    Vilfan, Andrej; Duke, Thomas

    2003-01-01

    The hair cells of the vertebrate inner ear convert mechanical stimuli to electrical signals. Two adaptation mechanisms are known to modify the ionic current flowing through the transduction channels of the hair bundles: a rapid process involves Ca2+ ions binding to the channels; and a slower adaptation is associated with the movement of myosin motors. We present a mathematical model of the hair cell which demonstrates that the combination of these two mechanisms can produce “self-tuned critical oscillations”, i.e., maintain the hair bundle at the threshold of an oscillatory instability. The characteristic frequency depends on the geometry of the bundle and on the Ca2+ dynamics, but is independent of channel kinetics. Poised on the verge of vibrating, the hair bundle acts as an active amplifier. However, if the hair cell is sufficiently perturbed, other dynamical regimes can occur. These include slow relaxation oscillations which resemble the hair bundle motion observed in some experimental preparations. PMID:12829475

  11. Improved electromagnetic induction processing with novel adaptive matched filter and matched subspace detection

    NASA Astrophysics Data System (ADS)

    Hayes, Charles E.; McClellan, James H.; Scott, Waymond R.; Kerr, Andrew J.

    2016-05-01

    This work introduces two advances in wide-band electromagnetic induction (EMI) processing: a novel adaptive matched filter (AMF) and matched subspace detection methods. Both advances make use of recent work with a subspace SVD approach to separating the signal, soil, and noise subspaces of the frequency measurements. The proposed AMF provides a direct approach to removing the EMI self-response while improving the signal-to-noise ratio of the data. Unlike previous EMI adaptive downtrack filters, this new filter will not erroneously optimize the EMI soil response instead of the EMI target response because these two responses are projected into separate frequency subspaces. The EMI detection methods in this work elaborate on how the signal and noise subspaces in the frequency measurements are ideal for creating the matched subspace detection (MSD) and constant false alarm rate matched subspace detection (CFAR) metrics developed by Scharf. The CFAR detection metric has been shown to be the uniformly most powerful invariant detector.
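
    A rough sketch of the subspace idea follows (not the authors' exact AMF or CFAR formulation): learn a low-rank signal subspace from training target responses by SVD, then score a new frequency-domain measurement by the fraction of its energy lying in that subspace, a simple matched-subspace statistic. The training curves, rank, and noise levels are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    nfreq = 21                                    # number of EMI measurement frequencies

    # Training target responses (columns) used to learn a low-rank signal subspace
    training = np.column_stack([
        np.exp(-np.linspace(0, 3, nfreq) * s) for s in (0.5, 1.0, 2.0, 4.0)
    ])
    U, _, _ = np.linalg.svd(training, full_matrices=False)
    Us = U[:, :2]                                 # signal subspace (rank chosen by inspection)
    Ps = Us @ Us.T                                # projector onto the signal subspace

    def msd_statistic(y):
        # matched-subspace detector: fraction of measurement energy in the signal subspace
        return np.linalg.norm(Ps @ y) ** 2 / np.linalg.norm(y) ** 2

    target = training[:, 1] + 0.05 * rng.standard_normal(nfreq)
    clutter = rng.standard_normal(nfreq)
    print("target:", round(msd_statistic(target), 3),
          "clutter:", round(msd_statistic(clutter), 3))
    ```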

  12. PROCESSING TECHNIQUES FOR DISCRIMINATION BETWEEN BURIED UXO AND CLUTTER USING MULTISENSOR ARRAY DATA

    EPA Science Inventory

    The overall objective of this project is to develop reliable techniques for discriminating between buried UXO and clutter using multisensor electromagnetic induction sensor array data. The basic idea is to build on existing research which exploits differences in shape between or...

  13. Processing And Display Of Medical Three Dimensional Arrays Of Numerical Data Using Octree Encoding

    NASA Astrophysics Data System (ADS)

    Amans, Jean-Louis; Darier, Pierre

    1986-05-01

    Imaging modalities such as X-ray computerized tomography (CT), nuclear medicine and nuclear magnetic resonance can produce three-dimensional (3-D) arrays of numerical data of the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging.

  14. Recognition Time for Letters and Nonletters: Effects of Serial Position, Array Size, and Processing Order.

    ERIC Educational Resources Information Center

    Mason, Mildred

    1982-01-01

    Three experiments report additional evidence that it is a mistake to account for all interletter effects solely in terms of sensory variables. These experiments attest to the importance of structural variables such as retinal location, array size, and ordinal position. (Author/PN)

  15. ERP and Adaptive Autoregressive identification with spectral power decomposition to study rapid auditory processing in infants.

    PubMed

    Piazza, C; Cantiani, C; Tacchino, G; Molteni, M; Reni, G; Bianchi, A M

    2014-01-01

    The ability to process rapidly-occurring auditory stimuli plays an important role in the mechanisms of language acquisition. For this reason, the research community has begun to investigate infant auditory processing, particularly using the Event Related Potentials (ERP) technique. In this paper we approach this issue by means of time domain and time-frequency domain analysis. For the latter, we propose the use of Adaptive Autoregressive (AAR) identification with spectral power decomposition. Results show EEG delta-theta oscillation enhancement related to the processing of acoustic frequency and duration changes, suggesting that, as expected, power modulation encodes rapid auditory processing (RAP) in infants and that the time-frequency analysis method proposed is able to identify this modulation. PMID:25571014
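
    The AAR idea can be sketched as a time-varying AR model whose coefficients are updated recursively, with the instantaneous spectrum then read off from the current coefficients. The sketch below uses a normalized LMS update rather than the paper's specific AAR estimator; the sampling rate, model order, and test signal are assumptions.

    ```python
    import numpy as np

    def adaptive_ar(x, p=4, mu=0.01):
        """Track time-varying AR(p) coefficients with a normalized LMS update."""
        a = np.zeros(p)
        coeffs = []
        for t in range(p, len(x)):
            past = x[t - p:t][::-1]              # most recent p samples, newest first
            e = x[t] - a @ past                  # one-step prediction error
            a = a + mu * e * past / (past @ past + 1e-9)
            coeffs.append(a.copy())
        return np.array(coeffs)

    def ar_spectrum(a, freqs, fs):
        """Power spectrum implied by AR coefficients a at the given frequencies (Hz)."""
        z = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(1, len(a) + 1)))
        return 1.0 / np.abs(1.0 - z @ a) ** 2

    fs = 250.0                                   # EEG-like sampling rate (assumed)
    t = np.arange(0, 4, 1 / fs)
    x = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.default_rng(4).standard_normal(len(t))
    coeffs = adaptive_ar(x)
    theta_band = np.linspace(4, 8, 20)           # delta-theta range of interest
    print("final theta-band power:", ar_spectrum(coeffs[-1], theta_band, fs).mean())
    ```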

  16. Maternal migration and child health: An analysis of disruption and adaptation processes in Benin

    PubMed Central

    Smith-Greenaway, Emily; Madhavan, Sangeetha

    2016-01-01

    Children of migrant mothers have lower vaccination rates compared to their peers with non-migrant mothers in low-income countries. Explanations for this finding are typically grounded in the disruption and adaptation perspectives of migration. Researchers argue that migration is a disruptive process that interferes with women’s economic well-being and social networks, and ultimately their health-seeking behaviors. With time, however, migrant women adapt to their new settings, and their health behaviors improve. Despite prominence in the literature, no research tests the salience of these perspectives to the relationship between maternal migration and child vaccination. We innovatively leverage Demographic and Health Survey data to test the extent to which disruption and adaptation processes underlie the relationship between maternal migration and child vaccination in the context of Benin—a West African country where migration is common and child vaccination rates have declined in recent years. By disaggregating children of migrants according to whether they were born before or after their mother’s migration, we confirm that migration does not lower children’s vaccination rates in Benin. In fact, children born after migration enjoy a higher likelihood of vaccination, whereas their peers born in the community from which their mother eventually migrates are less likely to be vaccinated. Although we find no support for the disruption perspective of migration, we do find evidence of adaptation: children born after migration have an increased likelihood of vaccination the longer their mother resides in the destination community prior to their birth. PMID:26463540

  17. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    PubMed

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  18. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system

    PubMed Central

    Schrode, Katrina M.; Bee, Mark A.

    2015-01-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male–male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  19. Dynamic templating: a large area processing route for the assembly of periodic arrays of sub-micrometer and nanoscale structures

    NASA Astrophysics Data System (ADS)

    Farzinpour, Pouyan; Sundar, Aarthi; Gilroy, Kyle D.; Eskin, Zachary E.; Hughes, Robert A.; Neretina, Svetlana

    2013-02-01

    A substrate-based templated assembly route has been devised which offers large-area, high-throughput capabilities for the fabrication of periodic arrays of sub-micrometer and nanometer-scale structures. The approach overcomes a significant technological barrier to the widespread use of substrate-based templated assembly by eliminating the need for periodic templates having nanoscale features. Instead, it relies upon the use of a dynamic template with dimensions that evolve in time from easily fabricated micrometer dimensions to those on the nanoscale as the assembly process proceeds. The dynamic template consists of a pedestal of a sacrificial material, typically antimony, upon which an ultrathin layer of a second material is deposited. When heated, antimony sublimation results in a continuous reduction in template size where the motion of the sublimation fronts direct the diffusion of atoms of the second material to a predetermined location. The route has broad applicability, having already produced periodic arrays of gold, silver, copper, platinum, nickel, cobalt, germanium and Au-Ag alloys on substrates as diverse as silicon, sapphire, silicon-carbide, graphene and glass. Requiring only modest levels of instrumentation, the process provides an enabling route for any reasonably equipped researcher to fabricate periodic arrays that would otherwise require advanced fabrication facilities.

  20. Flexible Description and Adaptive Processing of Earth Observation Data through the BigEarth Platform

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Bacu, Victor; Stefanut, Teodor; Nandra, Cosmin; Mihon, Danut

    2016-04-01

    The Earth Observation data repositories, extending periodically by several terabytes, become a critical issue for organizations. Managing the storage capacity of such big datasets, together with access policy, data protection, searching, and complex processing, incurs high costs and calls for efficient solutions that balance the cost and value of data. Data create value only when they are used, and data protection has to be oriented toward allowing innovation that sometimes depends on creative people, who achieve unexpected valuable results through a flexible and adaptive manner. The users need to describe and experiment themselves with different complex algorithms through analytics in order to valorize data. The analytics uses descriptive and predictive models to gain valuable knowledge and information from data analysis. Possible solutions for advanced processing of big Earth Observation data are given by HPC platforms such as the cloud. With platforms becoming more complex and heterogeneous, developing applications is even harder, and the efficient mapping of these applications to a suitable and optimum platform, working on huge distributed data repositories, is challenging and complex as well, even when using specialized software services. From the user point of view, an optimum environment gives acceptable execution times, offers a high level of usability by hiding the complexity of the computing infrastructure, and supports open accessibility and control of application entities and functionality. The BigEarth platform [1] supports the entire flow of flexible description of processing by basic operators and adaptive execution over cloud infrastructure [2]. The basic modules of the pipeline, such as the KEOPS [3] set of basic operators, the WorDeL language [4], the Planner for sequential and parallel processing, and the Executor through virtual machines, are detailed as the main components of the BigEarth platform [5]. The presentation exemplifies the development

  1. Can survival processing enhance story memory? Testing the generalizability of the adaptive memory framework.

    PubMed

    Seamon, John G; Bohn, Justin M; Coddington, Inslee E; Ebling, Maritza C; Grund, Ethan M; Haring, Catherine T; Jang, Sue-Jung; Kim, Daniel; Liong, Christopher; Paley, Frances M; Pang, Luke K; Siddique, Ashik H

    2012-07-01

    Research from the adaptive memory framework shows that thinking about words in terms of their survival value in an incidental learning task enhances their free recall relative to other semantic encoding strategies and intentional learning (Nairne, Pandeirada, & Thompson, 2008). We found similar results. When participants used incidental survival encoding for a list of words (e.g., "Will this object enhance my survival if I were stranded in the grasslands of a foreign land?"), they produced better free recall on a surprise test than did participants who intentionally tried to remember those words (Experiment 1). We also found this survival processing advantage when the words were presented within the context of a survival or neutral story (Experiment 2). However, this advantage did not extend to memory for a story's factual content, regardless of whether the participants were tested by cued recall (Experiment 3) or free recall (Experiments 4-5). Listening to a story for understanding under intentional or incidental learning conditions was just as good as survival processing for remembering story content. The functionalist approach to thinking about memory as an evolutionary adaptation designed to solve reproductive fitness problems provides a different theoretical framework for research, but it is not yet clear if survival processing has general applicability or is effective only for processing discrete stimuli in terms of fitness-relevant scenarios from our past. PMID:22288816

  2. An application of space-time adaptive processing to airborne and spaceborne monostatic and bistatic radar systems

    NASA Astrophysics Data System (ADS)

    Czernik, Richard James

    A challenging problem faced by Ground Moving Target Indicator (GMTI) radars on both airborne and spaceborne platforms is the ability to detect slow-moving targets due to the presence of non-stationary and heterogeneous ground clutter returns. Space-Time Adaptive Processing techniques process both the spatial signals from an antenna array and the radar pulses simultaneously to aid in mitigating this clutter, which has an inherent Doppler shift due to radar platform motion, as well as spreading across Angle-Doppler space attributable to a variety of factors. Additional problems such as clutter aliasing, widening of the clutter notch, and range dependency add further complexity when the radar is bistatic in nature, and vary significantly as the bistatic radar geometry changes with respect to the targeted location. The most difficult situation is that of a spaceborne radar system due to its high velocity and altitude with respect to the earth. A spaceborne system does, however, offer several advantages over an airborne system, such as the ability to cover wide areas and to provide access to areas denied to airborne platforms. This dissertation examines both monostatic and bistatic radar performance based upon a computer simulation developed by the author, and explores the use of both optimal STAP and reduced-dimension STAP architectures to mitigate the modeled clutter returns. Factors such as broadband jamming, wind, and earth rotation are considered, along with their impact on the interference covariance matrix, constructed from sample training data. Calculation of the covariance matrix in near real time based upon extracted training data is computer processor intensive, and reduced-dimension STAP architectures relieve some of the computational burden. The problems resulting from extending both monostatic and bistatic radar systems to space are also simulated and studied.
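
    The fully adaptive STAP weight vector referred to above is commonly written as w = R⁻¹v/(vᴴR⁻¹v), with R the interference covariance matrix estimated from secondary training data and v the space-time steering vector for the hypothesized target. The sketch below builds a toy side-looking clutter covariance, estimates it from simulated training snapshots, and forms the adapted weights; the array sizes, clutter model, and target parameters are invented and are not taken from the dissertation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, M = 4, 8                                  # spatial channels, pulses per CPI
    NM = N * M

    def steering(fs, fd):
        # space-time steering vector for normalized spatial (fs) and Doppler (fd) frequencies
        a = np.exp(2j * np.pi * fs * np.arange(N))
        b = np.exp(2j * np.pi * fd * np.arange(M))
        return np.kron(b, a)

    # Interference: clutter ridge samples plus receiver noise (toy side-looking geometry,
    # where clutter Doppler is tied to angle)
    R = np.eye(NM, dtype=complex)
    for f in np.linspace(-0.4, 0.4, 41):
        c = steering(f, f)
        R += 50.0 * np.outer(c, c.conj())

    # Sample covariance from secondary training snapshots drawn from R
    L_chol = np.linalg.cholesky(R)
    noise = rng.standard_normal((NM, 3 * NM)) + 1j * rng.standard_normal((NM, 3 * NM))
    snapshots = L_chol @ noise / np.sqrt(2)
    R_hat = snapshots @ snapshots.conj().T / snapshots.shape[1]

    v = steering(0.2, -0.3)                      # hypothesized target angle/Doppler
    w = np.linalg.solve(R_hat, v)
    w /= (v.conj() @ w)                          # MVDR-style normalization
    sinr = np.abs(v.conj() @ w) ** 2 / np.real(w.conj() @ R @ w)
    print("adapted output SINR (relative):", float(sinr))
    ```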

  3. Speech Enhancement Using Microphone Arrays.

    NASA Astrophysics Data System (ADS)

    Adugna, Eneyew

    Arrays of sensors have been employed effectively in communication systems for the directional transmission and reception of electromagnetic waves. Among the numerous benefits, this helps improve the signal-to-interference ratio (SIR) of the signal at the receiver. Arrays have since been used in related areas that employ propagating waves for the transmission of information. Several investigators have successfully adopted array principles to acoustics, sonar, seismic, and medical imaging. In speech applications the microphone is used as the sensor for acoustic data acquisition. The performance of subsequent speech processing algorithms--such as speech recognition or speaker recognition--relies heavily on the level of interference within the transduced or recorded speech signal. The normal practice is to use a single, hand-held or head-mounted, microphone. Under most environmental conditions, i.e., environments where other acoustic sources are also active, the speech signal from a single microphone is a superposition of acoustic signals present in the environment. Such cases represent a lower SIR value. To alleviate this problem an array of microphones--linear array, planar array, and 3-dimensional arrays--have been suggested and implemented. This work focuses on microphone arrays in room environments where reverberation is the main source of interference. The acoustic wave incident on the array from a point source is sampled and recorded by a linear array of sensors along with reflected waves. Array signal processing algorithms are developed and used to remove reverberations from the signal received by the array. Signals from other positions are considered as interference. Unlike most studies that deal with plane waves, we base our algorithm on spherical waves originating at a source point. This is especially true for room environments. The algorithm consists of two stages--a first stage to locate the source and a second stage to focus on the source. The first part
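
    One way to sketch the second (focusing) stage is near-field delay-and-sum: compute the spherical-wave propagation delay from the estimated source position to each microphone, undo those delays, and average across the array so that uncorrelated room noise is attenuated. The geometry, sampling rate, and signals below are invented, and the localization stage is assumed already done; this is an illustration of the focusing idea, not the dissertation's algorithm.

    ```python
    import numpy as np

    c = 343.0                                     # speed of sound, m/s
    fs = 16000                                    # sampling rate, Hz
    mics = np.array([[i * 0.3, 0.0, 0.0] for i in range(8)])   # linear array, 30 cm spacing
    src = np.array([0.5, 1.0, 0.0])               # source position from the localization stage (assumed)

    rng = np.random.default_rng(6)
    s = rng.standard_normal(4000)                 # stand-in for the source waveform

    # Simulate spherical-wave arrivals with integer-sample relative delays plus room noise
    dists = np.linalg.norm(mics - src, axis=1)
    delays = np.round((dists - dists.min()) / c * fs).astype(int)
    x = np.stack([np.roll(s, d) + 0.5 * rng.standard_normal(len(s)) for d in delays])

    # Focus on the source: undo each propagation delay and average (delay-and-sum)
    y = np.mean([np.roll(x[m], -delays[m]) for m in range(len(mics))], axis=0)
    print("per-channel noise variance:", round(float(np.var(x[0] - np.roll(s, delays[0]))), 3))
    print("residual variance after focusing:", round(float(np.var(y - s)), 3))
    ```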

  4. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    NASA Astrophysics Data System (ADS)

    Due-Hansen, J.; Midtbø, K.; Poppe, E.; Summanwar, A.; Jensen, G. U.; Breivik, L.; Wang, D. T.; Schjølberg-Henriksen, K.

    2012-07-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing that resulted in a film with sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms has been developed. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications in for instance intravascular cardiology and gastrointestinal imaging.

  5. Adaptive phase estimation and its application in EEG analysis of word processing.

    PubMed

    Schack, B; Rappelsberger, P; Weiss, S; Möller, E

    1999-10-30

    Oscillations are a general phenomenon of neuronal activity during information processing. Mostly, widespread networks are involved in brain functioning. In order to investigate network activity, coherence analysis has turned out to be a useful tool for examining the functional relationship between different cortical areas. This parameter allows the investigation of synchronisation phenomena with regard to defined frequencies or frequency bands. Coherence and cross phase are closely connected spectral parameters. Coherence may be understood as a measure of phase stability. Whereas coherence describes the amount of information transfer, the corresponding phase, from which time delays can be computed, hints at the direction of information transfer. Mental processes can be very brief and coupling between different areas may be highly dynamic. For this reason, a two-dimensional approach of adaptive filtering was developed to estimate coherence and phase continuously in time. Statistical and dynamic properties of instantaneous phase are discussed. In order to demonstrate the value of this method for studying higher cognitive processes, the method was applied to EEG recorded during word processing. During visual presentation of abstract nouns, an information transfer from visual areas to frontal association areas in the Alpha1 frequency band could be verified within the first 400 ms. The Alpha1 band predominantly seems to reflect sensory processing and attentional processes. In addition to conventional coherence analyses during word processing, phase estimations may yield valuable new insights into the underlying physiological mechanisms. PMID:10598864

  6. Adapting Semantic Natural Language Processing Technology to Address Information Overload in Influenza Epidemic Management

    PubMed Central

    Keselman, Alla; Rosemblat, Graciela; Kilicoglu, Halil; Fiszman, Marcelo; Jin, Honglan; Shin, Dongwook; Rindflesch, Thomas C.

    2013-01-01

    Explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot-test in which two information specialists use the adapted application for a realistic information seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design. PMID:24311971

  7. Adapting Semantic Natural Language Processing Technology to Address Information Overload in Influenza Epidemic Management.

    PubMed

    Keselman, Alla; Rosemblat, Graciela; Kilicoglu, Halil; Fiszman, Marcelo; Jin, Honglan; Shin, Dongwook; Rindflesch, Thomas C

    2010-12-01

    Explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot-test in which two information specialists use the adapted application for a realistic information seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design. PMID:24311971

  8. OFDM Radar Space-Time Adaptive Processing by Exploiting Spatio-Temporal Sparsity

    SciTech Connect

    Sen, Satyabrata

    2013-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain. Hence, we exploit that sparsity to develop an efficient STAP technique that uses a considerably smaller number of secondary data samples while matching the performance of existing STAP techniques. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both the clutter and jammer as the interfering sources. Then, we apply a residual sparse-recovery technique based on the LASSO estimator to estimate the target and interference covariance matrices, and subsequently compute the optimal STAP-filter weights. Our numerical results provide a comparative performance analysis of the proposed sparse-STAP algorithm with four other existing STAP methods. Furthermore, we discover that the OFDM-STAP filter weights are adaptable to the frequency variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently obtain a significant improvement in STAP performance.
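
    For readers unfamiliar with STAP, the final weight computation referred to above is the standard minimum-variance solution w = R⁻¹s / (sᴴR⁻¹s), where R is the interference-plus-noise covariance and s is the space-time steering vector. The sketch below shows only this generic step, not the paper's sparse-recovery estimation of R; the dimensions and names are illustrative.

    # Generic minimum-variance (distortionless-response) STAP weight computation.
    import numpy as np

    def stap_weights(R, s):
        """Space-time weights w = R^{-1} s / (s^H R^{-1} s)."""
        Ri_s = np.linalg.solve(R, s)                 # R^{-1} s without an explicit inverse
        return Ri_s / (s.conj().T @ Ri_s)

    def space_time_steering(N, M, spatial_freq, doppler_freq):
        """Kronecker product of temporal (Doppler) and spatial steering vectors."""
        a = np.exp(2j * np.pi * spatial_freq * np.arange(N))   # N array elements
        b = np.exp(2j * np.pi * doppler_freq * np.arange(M))   # M pulses
        return np.kron(b, a)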

  9. Serum testosterone levels and excessive erythrocytosis during the process of adaptation to high altitudes

    PubMed Central

    Gonzales, Gustavo F

    2013-01-01

    Populations living at high altitudes (HA), particularly in the Peruvian Andes, are characterized by a mixture of subjects with erythrocytosis (16 g dl−1 < Hb < 21 g dl−1) and subjects with excessive erythrocytosis (EE; Hb > 21 g dl−1). Excessively elevated haemoglobin values are associated with chronic mountain sickness, a condition reflecting the lack of adaptation to HA. According to current data, native men from regions of HA are not adequately adapted to live at such altitudes if they have elevated serum testosterone levels. This seems to be due to an increased conversion of dehydroepiandrosterone sulphate (DHEAS) to testosterone. Men with erythrocytosis at HA show higher serum androstenedione levels and a lower testosterone/androstenedione ratio than men with EE, suggesting reduced 17beta-hydroxysteroid dehydrogenase (17beta-HSD) activity. Lower 17beta-HSD activity via Δ4-steroid production in men with erythrocytosis at HA may protect against elevated serum testosterone levels, thus preventing EE. The higher conversion of DHEAS to testosterone in subjects with EE indicates increased 17beta-HSD activity via the Δ5-pathway. Currently, there are various situations in which people live (human biodiversity) with low or high haemoglobin levels at HA. Antiquity could be an important adaptation component for life at HA, and testosterone seems to participate in this process. PMID:23524530

  10. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    SciTech Connect

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-06-15

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on the Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. A lumped-element circuit model of the wire arrays was employed to analyze the electromagnetic energy conversion. The inductance as well as the resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that the electromagnetic energy is mainly converted to magnetic and kinetic energy, while ohmic heating can be neglected before the final stagnation. The kinetic energy can account for the x-ray radiation before the peak power. After the stagnation, the electromagnetic energy coupled into the load continues to increase, and the resistance of the load reaches its maximum of 0.6–1.0 Ω in about 10–20 ns.

  11. Dynamic templating: a large area processing route for the assembly of periodic arrays of sub-micrometer and nanoscale structures.

    PubMed

    Farzinpour, Pouyan; Sundar, Aarthi; Gilroy, Kyle D; Eskin, Zachary E; Hughes, Robert A; Neretina, Svetlana

    2013-03-01

    A substrate-based templated assembly route has been devised which offers large-area, high-throughput capabilities for the fabrication of periodic arrays of sub-micrometer and nanometer-scale structures. The approach overcomes a significant technological barrier to the widespread use of substrate-based templated assembly by eliminating the need for periodic templates having nanoscale features. Instead, it relies upon the use of a dynamic template with dimensions that evolve in time from easily fabricated micrometer dimensions to those on the nanoscale as the assembly process proceeds. The dynamic template consists of a pedestal of a sacrificial material, typically antimony, upon which an ultrathin layer of a second material is deposited. When heated, antimony sublimation results in a continuous reduction in template size where the motion of the sublimation fronts direct the diffusion of atoms of the second material to a predetermined location. The route has broad applicability, having already produced periodic arrays of gold, silver, copper, platinum, nickel, cobalt, germanium and Au-Ag alloys on substrates as diverse as silicon, sapphire, silicon-carbide, graphene and glass. Requiring only modest levels of instrumentation, the process provides an enabling route for any reasonably equipped researcher to fabricate periodic arrays that would otherwise require advanced fabrication facilities. PMID:23354129

  12. CMOS Geiger photodiode array with integrated signal processing for imaging of 2D objects using quantum dots

    NASA Astrophysics Data System (ADS)

    Stapels, Christopher J.; Lawrence, William G.; Gurjar, Rajan S.; Johnson, Erik B.; Christian, James F.

    2008-08-01

    Geiger-mode photodiodes (GPDs) act as binary photon detectors that convert analog light intensity into digital pulses. Fabrication of GPD arrays in a CMOS environment simplifies the integration of signal-processing electronics to enhance performance and provide a low-cost detector-on-a-chip platform. Such an instrument facilitates imaging applications with extremely low light levels and confined volumes. High-sensitivity reading of small samples enables two-dimensional imaging of DNA arrays, tracking of single molecules, and observation of their dynamic behavior. In this work, we describe the performance of a prototype imaging detector of GPD pixels with integrated active quenching, for use in imaging 2D objects using fluorescent labels. We demonstrate the integration of on-chip memory and a parallel readout interface for an array of CMOS GPD pixels as progress toward an all-digital detector on a chip. We also describe advances in pixel-level signal processing and solid-state photomultiplier developments.

  13. Adaptive Sparse Signal Processing for Discrimination of Satellite-based Radiofrequency (RF) Recordings of Lightning Events

    NASA Astrophysics Data System (ADS)

    Moody, D. I.; Smith, D. A.; Heavner, M.; Hamlin, T.

    2014-12-01

    Ongoing research at Los Alamos National Laboratory studies the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite, launched in 1997, provided a rich RF lightning database. Application of modern pattern recognition techniques to this dataset may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We extend sparse signal processing techniques to radiofrequency (RF) transient signals, and specifically focus on improved signature extraction using sparse representations in data-adaptive dictionaries. We present various processing options and classification results for on-board discharges, and discuss robustness and potential for capability development.

  14. Adaptation of the IBM ECR (electric cantilever robot) robot to plutonium processing applications

    SciTech Connect

    Armantrout, G.A.; Pedrotti, L.R. ); Halter, E.A.; Crossfield, M. )

    1990-12-01

    The changing regulatory climate in the US is adding increasing incentive to reduce operator dose and TRU waste for DOE plutonium processing operations. To help achieve that goal the authors have begun adapting a small commercial overhead gantry robot, the IBM electric cantilever robot (ECR), to plutonium processing applications. Steps are being taken to harden this robot to withstand the dry, often abrasive, environment within a plutonium glove box and to protect the electronic components against alpha radiation. A mock-up processing system for the reduction of the oxide to a metal was prepared and successfully demonstrated. Design of a working prototype is now underway using the results of this mock-up study. 7 figs., 4 tabs.

  15. Simultaneous processing of photographic and accelerator array data from sled impact experiment

    NASA Astrophysics Data System (ADS)

    Ash, M. E.

    1982-12-01

    A Quaternion-Kalman filter model is derived to simultaneously analyze accelerometer array and photographic data from sled impact experiments. Formulas are given for the quaternion representation of rotations, the propagation of dynamical states and their partial derivatives, the observables and their partial derivatives, and the Kalman filter update of the state given the observables. The observables are accelerometer and tachometer velocity data of the sled relative to the track, linear accelerometer array and photographic data of the subject relative to the sled, and ideal angular accelerometer data. The quaternion constraints enter through perfect constraint observations and normalization after a state update. Lateral and fore-aft impact tests are analyzed with FORTRAN IV software written using the formulas of this report.
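
    Two of the ingredients named above, quaternion propagation of the orientation state and re-normalisation after an update, can be sketched compactly. The full Kalman filter, observation models and accelerometer geometry of the report are not reproduced, and the function names are invented for this illustration.

    # Sketch: quaternion propagation from body angular rate and unit-norm enforcement.
    import numpy as np

    def quat_multiply(q, r):
        """Hamilton product of two quaternions given as (w, x, y, z)."""
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return np.array([
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        ])

    def propagate(q, omega, dt):
        """Advance orientation quaternion q by body rate omega (rad/s) over dt."""
        dq = np.concatenate(([1.0], 0.5 * dt * np.asarray(omega)))  # first-order increment
        q_new = quat_multiply(q, dq)
        return q_new / np.linalg.norm(q_new)        # enforce the unit-norm constraint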

  16. High-quality laser cutting of ceramics through adapted process techniques

    NASA Astrophysics Data System (ADS)

    Toenshoff, Hans K.; Gonschior, Martin

    1994-02-01

    Laser cutting of ceramics is a promising alternative to conventional machining methods. In this paper, processing results using several lasers and beam guidance optics to cut different oxide and non-oxide ceramics are presented. Adapted process parameters in pulsed-mode operation provide high-quality cut surfaces at acceptable feed rates. Nd:YAG lasers in particular can be used for cutting extremely brittle ceramics. The use of fiber optics for beam guidance, however, is limited to certain ceramics with high fracture toughness, due to a loss in beam quality. In laser cutting of ceramics, thermally induced crack damage is one of the main problems preventing a wider use of this method in industry. Several methods were investigated in order to reduce crack formation. Adapted pulse parameters, calculated by a theoretical model, and a newly developed process control system lead to a remarkable reduction of crack damage. Crack-free cutting can be obtained by preheating the workpiece above a temperature of 1100 °C. Based on these investigations, requirements for laser systems for ceramic cutting are worked out.

  17. A miniature electronic nose system based on an MWNT-polymer microsensor array and a low-power signal-processing chip.

    PubMed

    Chiu, Shih-Wen; Wu, Hsiang-Chiu; Chou, Ting-I; Chen, Hsin; Tang, Kea-Tiong

    2014-06-01

    This article introduces a power-efficient, miniature electronic nose (e-nose) system. The e-nose system primarily comprises two self-developed chips: a multi-walled carbon nanotube (MWNT)-polymer based microsensor array and a low-power signal-processing chip. The microsensor array was fabricated on a silicon wafer by using standard photolithography technology. The microsensor array comprised eight interdigitated electrodes surrounded by SU-8 "walls," which confined the material-solvent liquid to a defined area of 650 × 760 μm². To achieve a reliable sensor-manufacturing process, we used a two-layer deposition method, coating the MWNTs and polymer film as the first and second layers, respectively. The low-power signal-processing chip included array data acquisition circuits and a signal-processing core. The MWNT-polymer microsensor array can directly connect to the array data acquisition circuits, which comprise sensor interface circuitry and an analog-to-digital converter; the signal-processing core consists of memory and a microprocessor. The core executes the program, classifying the odor data received from the array data acquisition circuits. The low-power signal-processing chip was designed and fabricated using the Taiwan Semiconductor Manufacturing Company 0.18-μm 1P6M standard complementary metal oxide semiconductor process. The chip consumes only 1.05 mW of power at supply voltages of 1 and 1.8 V for the array data acquisition circuits and the signal-processing core, respectively. The miniature e-nose system, which used a microsensor array, a low-power signal-processing chip, and an embedded k-nearest-neighbor-based pattern recognition algorithm, was developed as a prototype that successfully recognized the complex odors of tincture, sorghum wine, sake, whisky, and vodka. PMID:24385138
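
    The embedded k-nearest-neighbour recognition step can be sketched as follows, assuming each odor presentation has already been reduced to an eight-element feature vector (one value per sensor in the array). The training data, labels and choice of k below are fabricated for illustration and are not taken from the article.

    # Sketch of k-nearest-neighbour odor classification on 8-channel sensor responses.
    import numpy as np

    def knn_classify(train_x, train_y, sample, k=3):
        """Classify one 8-channel response by majority vote of its k nearest
        training samples (Euclidean distance)."""
        d = np.linalg.norm(train_x - sample, axis=1)
        nearest = np.argsort(d)[:k]
        labels, counts = np.unique(train_y[nearest], return_counts=True)
        return labels[np.argmax(counts)]

    # Example with made-up responses for two odors:
    rng = np.random.default_rng(0)
    train_x = np.vstack([rng.normal(0.2, 0.02, (10, 8)),
                         rng.normal(0.8, 0.02, (10, 8))])
    train_y = np.array(["sake"] * 10 + ["whisky"] * 10)
    print(knn_classify(train_x, train_y, np.full(8, 0.78)))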

  18. Solution-Processed Large-Area Nanocrystal Arrays of Metal-Organic Frameworks as Wearable, Ultrasensitive, Electronic Skin for Health Monitoring.

    PubMed

    Fu, Xiaolong; Dong, Huanli; Zhen, Yonggang; Hu, Wenping

    2015-07-15

    Pressure sensors based on solution-processed metal-organic framework nanowire arrays are fabricated with very low cost, flexibility, high sensitivity, and ease of integration into sensor arrays. Furthermore, the pressure sensors are suitable for monitoring and diagnosing biomedical signals such as radial artery pressure waveforms in real time. PMID:25760306

  19. Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex

    PubMed Central

    Baker, Pamela M.; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth

    2015-01-01

    A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. SIGNIFICANCE STATEMENT Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. For example, neurons that respond selectively to

  20. Novel human-robot interface integrating real-time visual tracking and microphone-array signal processing

    NASA Astrophysics Data System (ADS)

    Mizoguchi, Hiroshi; Shigehara, Takaomi; Goto, Yoshiyasu; Hidai, Ken-ichi; Mishima, Taketoshi

    1998-10-01

    This paper proposes a novel human-robot interface that integrates real-time visual tracking and microphone-array signal processing. The proposed interface is intended to be used as a speech signal input method for a human-collaborative robot. Using it, the robot can clearly hear the human master's voice remotely, as if a wireless microphone were placed just in front of the master. A novel technique to form an 'acoustic focus' at the human face is developed. To track and locate the face dynamically, real-time face tracking and stereo vision are utilized. To form the acoustic focus at the face, a microphone array is utilized. Properly setting the gain and delay of each microphone forms an acoustic focus at the desired location; the gain and delay are determined based upon the location of the face. Results of preliminary experiments and simulations demonstrate the feasibility of the proposed idea.

  1. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  2. Motion adaptive signal integration-high dynamic range (MASI-HDR) video processing for dynamic platforms

    NASA Astrophysics Data System (ADS)

    Piacentino, Michael R.; Berends, David C.; Zhang, David C.; Gudis, Eduardo

    2013-05-01

    Two of the biggest challenges in designing U×V vision systems are properly representing high dynamic range scene content using low dynamic range components and reducing camera motion blur. SRI's MASI-HDR (Motion Adaptive Signal Integration-High Dynamic Range) is a novel technique for generating blur-reduced video using multiple captures for each displayed frame while increasing the effective camera dynamic range by four bits or more. MASI-HDR processing thus provides high-performance, low-latency, real-time video from rapidly moving platforms in real-world conditions, enabling even the most demanding applications in the air, on the ground and on the water.

  3. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
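
    As a point of reference for the conventional ML baseline mentioned above: for a single narrowband source in spatially white noise, the ML direction estimate reduces to maximising the beamformer output power a(θ)ᴴR̂a(θ) over a grid of angles. The sketch below shows only this reduced case and does not include the time-frequency weighting proposed in the paper; the array spacing and angle grid are assumptions.

    # Single-source narrowband ML DOA estimate by grid search (uniform linear array).
    import numpy as np

    def ml_doa_single_source(R, n_sensors, d_over_lambda=0.5):
        grid = np.linspace(-90, 90, 361)
        powers = []
        for theta in grid:
            a = np.exp(2j * np.pi * d_over_lambda * np.arange(n_sensors)
                       * np.sin(np.deg2rad(theta)))
            powers.append(np.real(a.conj() @ R @ a))   # beamformer output power
        return grid[int(np.argmax(powers))]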

  4. Array tomography: imaging stained arrays.

    PubMed

    Micheva, Kristina D; O'Rourke, Nancy; Busse, Brad; Smith, Stephen J

    2010-11-01

    Array tomography is a volumetric microscopy method based on physical serial sectioning. Ultrathin sections of a plastic-embedded tissue are cut using an ultramicrotome, bonded in an ordered array to a glass coverslip, stained as desired, and imaged. The resulting two-dimensional image tiles can then be reconstructed computationally into three-dimensional volume images for visualization and quantitative analysis. The minimal thickness of individual sections permits high-quality rapid staining and imaging, whereas the array format allows reliable and convenient section handling, staining, and automated imaging. Also, the physical stability of the arrays permits images to be acquired and registered from repeated cycles of staining, imaging, and stain elution, as well as from imaging using multiple modalities (e.g., fluorescence and electron microscopy). Array tomography makes it possible to visualize and quantify previously inaccessible features of tissue structure and molecular architecture. However, careful preparation of the tissue is essential for successful array tomography; these steps can be time-consuming and require some practice to perfect. In this protocol, tissue arrays are imaged using conventional wide-field fluorescence microscopy. Images can be captured manually or, with the appropriate software and hardware, the process can be automated. PMID:21041399

  5. Workload-Matched Adaptive Automation Support of Air Traffic Controller Information Processing Stages

    NASA Technical Reports Server (NTRS)

    Kaber, David B.; Prinzel, Lawrence J., III; Wright, Melanie C.; Clamann, Michael P.

    2002-01-01

    Adaptive automation (AA) has been explored as a solution to the problems associated with human-automation interaction in supervisory control environments. However, research has focused on the performance effects of dynamic control allocations of early stage sensory and information acquisition functions. The present research compares the effects of AA to the entire range of information processing stages of human operators, such as air traffic controllers. The results provide evidence that the effectiveness of AA is dependent on the stage of task performance (human-machine system information processing) that is flexibly automated. The results suggest that humans are better able to adapt to AA when applied to lower-level sensory and psychomotor functions, such as information acquisition and action implementation, as compared to AA applied to cognitive (analysis and decision-making) tasks. The results also provide support for the use of AA, as compared to completely manual control. These results are discussed in terms of implications for AA design for aviation.

  6. Effect of Margin Design and Processing Steps on Marginal Adaptation of Captek Restorations

    PubMed Central

    Shih, Amy; Flinton, Robert; Vaidyanathan, Jayalakshmi; Vaidyanathan, Tritala

    2011-01-01

    This study examined the effect of four margin designs on the marginal adaptation of Captek crowns during selected processing steps. Twenty-four Captek crowns were fabricated, six each of four margin designs: shoulder (Group A), chamfer (Group B), chamfer with bevel (Group C), and shoulder with bevel (Group D). Marginal discrepancies between crowns and matching dies were measured at selected points for each sample at the coping stage (Stage 1), following porcelain application (Stage 2) and cementation (Stage 3). Digital imaging methods were used to measure the marginal gap. The results indicate a decreasing trend of margin gap as a function of margin design in the order A>B>C>D. Between processing steps, the trend was in the order Stage 3 < Stage 1 < Stage 2. Porcelain firing had no significant effect on marginal adaptation, but cementation decreased the marginal gap. The margin gaps in Captek restorations were in all cases less than the reported acceptable range of margin gaps for ceramometal restorations. These results are clinically favorable outcomes and may be associated with the ductility and burnishability of the matrix phase in Captek metal coping margins. PMID:21991488

  7. [Super sweet corn hybrid sh2 adaptability for industrial canning process].

    PubMed

    Ortiz de Bertorelli, Ligia; De Venanzi, Frank; Alfonzo, Braunnier; Camacho, Candelario

    2002-12-01

    The super sweet corns Krispy king, Victor and 324 (sh2 hybrids) were evaluated to determine their adaptability to the industrial canning process as whole kernels. All these hybrids and Bonanza (control) were sown in San Joaquín (Carabobo, Venezuela), harvested and canned. After 110 days of storage at room temperature they were analyzed and compared physically, chemically and sensorially with the Bonanza hybrid. Results did not show significant differences among most of the physical characteristics, except for the percentage of broken kernels, which was higher in the 324 hybrid. Chemical parameters showed significant differences (P < 0.05) when comparing each super sweet hybrid with Bonanza. The super sweet hybrids presented a higher sugar content and higher soluble solids in the brine than Bonanza, as well as a lower pH. The super sweet whole kernels presented a lower soluble solids content than Bonanza, but the differences were not significant (Krispy king and 324). Appearance, odor and overall quality were the same for the super sweet hybrids and Bonanza (su). Color, flavor and sweetness were better for 324 than for all the other hybrids. The super sweet hybrids presented a very good adaptation to the canning process, with the advantage of not requiring sugar addition to the brine, and a very good texture (firm and crispy). PMID:12868279

  8. A self-adaptive parameter optimization algorithm in a real-time parallel image processing system.

    PubMed

    Li, Ge; Zhang, Xuehe; Zhao, Jie; Zhang, Hongli; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    To address the situation in which precision, speed, robustness, and other parameters constrain one another in a parallel-processed vision servo system, this paper proposes an adaptive load-capacity-balance strategy for the servo parameter optimization algorithm (ALBPO) to improve computing precision and to achieve a high detection ratio without lengthening the servo cycle. We use load capacity (LC) functions to estimate the load on each processor and then continuously self-adapt towards a balanced status based on the fluctuating LC results; meanwhile, we select a proper set of target detection and location parameters according to the LC results. Compared with current load-balancing algorithms, the algorithm proposed in this paper operates without knowledge of the maximum or current load of the processors, which gives it great extensibility. Simulation results showed that the ALBPO algorithm performs well in load balancing, realizing the optimization of QoS for each processor and fulfilling the servo-cycle, precision, and robustness requirements of the parallel-processed vision servo system. PMID:24174920
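
    The idea of continuously re-weighting work shares from per-processor load-capacity estimates can be sketched as below. This is not the published ALBPO code; the capacity model (inverse of the last frame's processing time) and the adaptation gain are assumptions made for the illustration.

    # Sketch: self-adaptive re-balancing of work shares across processors.
    import numpy as np

    def rebalance(shares, frame_times, gain=0.5):
        """Adapt work shares so that faster processors receive more work.

        shares      : current fraction of the image assigned to each processor
        frame_times : measured processing time of the last frame per processor
        gain        : step size of the self-adaptation
        """
        capacity = 1.0 / np.asarray(frame_times)          # load-capacity estimate
        target = capacity / capacity.sum()                # balanced allocation
        new = shares + gain * (target - shares)           # move part-way toward it
        return new / new.sum()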

  9. Design and development of a 329-segment tip-tilt piston mirror array for space-based adaptive optics

    NASA Astrophysics Data System (ADS)

    Stewart, Jason B.; Bifano, Thomas G.; Bierden, Paul; Cornelissen, Steven; Cook, Timothy; Levine, B. Martin

    2006-01-01

    We report on the development of a new MEMS deformable mirror (DM) system for the hyper-contrast visible nulling coronagraph architecture designed by the Jet Propulsion Laboratory for NASA's Terrestrial Planet Finder (TPF) mission. The new DM is based largely upon existing lightweight, low-power MEMS DM technology at Boston University (BU), tailored to the rigorous optical and mechanical requirements of the nulling coronagraph. It consists of 329 hexagonal segments on a 600 μm pitch, each with tip/tilt and piston degrees of freedom. The mirror segments have 1 μm of stroke, a tip/tilt range of 600 arc-seconds, and maintain their figure to within 2 nm RMS under actuation. The polished polycrystalline silicon mirror segments have a surface roughness of 5 nm RMS and an average radius of curvature of 270 mm. Designing a mirror segment that maintains its figure during actuation was a very significant challenge faced during DM development. Two design concepts were pursued in parallel to address this challenge. The first design uses a thick, epitaxially grown polysilicon mirror layer to add rigidity to the mirror segment. The second design reduces mirror surface bending by decoupling actuator diaphragm motion from the mirror surface motion. This is done using flexure cuts around the mirror post in the actuator diaphragm. Both DM architectures and their polysilicon microfabrication process are presented. Recent optical and electromechanical characterization results are also discussed, in addition to plans for further improvement of the DM figure to satisfy the nulling coronagraph optical requirements.

  10. Adaptive sparse signal processing for discrimination of satellite-based radiofrequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2015-05-01

    For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.
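
    One concrete way to obtain sparse representations in an overcomplete analytical dictionary, as referred to above, is greedy orthogonal matching pursuit against a Gabor-style dictionary. The sketch below is illustrative only; the dictionary parameterisation and sparsity level are assumptions and do not reproduce the authors' chirp-adapted dictionaries.

    # Sketch: sparse representation of a transient by orthogonal matching pursuit.
    import numpy as np

    def gabor_dictionary(n, centers, widths, freqs):
        """Overcomplete dictionary of Gaussian-windowed cosines, one column per atom."""
        t = np.arange(n)
        atoms = []
        for c in centers:
            for w in widths:
                for f in freqs:
                    g = np.exp(-0.5 * ((t - c) / w) ** 2) * np.cos(2 * np.pi * f * t)
                    atoms.append(g / np.linalg.norm(g))
        return np.column_stack(atoms)                    # shape (n, n_atoms)

    def omp(D, x, n_nonzero=5):
        """Return sparse coefficients a with x ≈ D a (orthogonal matching pursuit)."""
        residual, support = x.copy(), []
        a = np.zeros(D.shape[1])
        for _ in range(n_nonzero):
            support.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)  # re-fit on support
            residual = x - D[:, support] @ coef
        a[support] = coef
        return a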

  11. Geochemical diversity in S processes mediated by culture-adapted and environmental-enrichments of Acidithiobacillus spp.

    NASA Astrophysics Data System (ADS)

    Bernier, Luc; Warren, Lesley A.

    2007-12-01

    Coupled S speciation and acid generation resulting from S processing associated with five different microbial treatments, all primarily Acidithiobacillus spp. (i.e. autotrophic S-oxidizers) were evaluated in batch laboratory experiments. Microbial treatments included two culture-adapted strains, Acidithiobacillus ferrooxidans and Acidithiobacillus thiooxidans, their consortia and two environmental enrichments from a mine tailings lake that were determined to be >95% Acidithiobacillus spp., by whole-cell fluorescent hybridization. Using batch experiments simulating acidic mine waters with no carbon amendments, acid generation, and S speciation associated with the oxidation of three S substrates (thiosulfate, tetrathionate, and elemental S) were evaluated. Aseptic controls showed no observable pH decrease over the experimental time course (1 month) for all three S compounds examined. In contrast, pH decreased in all microbial treatments from starting pH values of 4 to 2 or less for all three S substrates. Results show a non-linear relationship between the pH dynamics of the batch cultures and their corresponding sulfate concentrations, and indicate how known microbial S processing pathways have opposite impacts, ultimately on pH dynamics. Associated geochemical modeling indicated negligible abiogenic processes contributing to the observed results, indicating strong microbial control of acid generation extending over pH ranges from 4 to less than 2. However, the observed acid generation rates and associated S speciation were both microbial treatment and substrate-specific. Results reveal a number of novel insights regarding microbial catalysis of S oxidation: (1) metabolic diversity in S processing, as evidenced by the observed geochemical signatures in S chemical speciation and rates of acid generation amongst phylogenetically similar organisms (to the genus level); (2) consortial impacts differ from those of individual strain members; (3) environmental enrichments

  12. Combining molecular evolution and environmental genomics to unravel adaptive processes of MHC class IIB diversity in European minnows (Phoxinus phoxinus)

    PubMed Central

    Collin, Helene; Burri, Reto; Comtesse, Fabien; Fumagalli, Luca

    2013-01-01

    Abstract Host–pathogen interactions are a major evolutionary force promoting local adaptation. Genes of the major histocompatibility complex (MHC) represent unique candidates to investigate evolutionary processes driving local adaptation to parasite communities. The present study aimed at identifying the relative roles of neutral and adaptive processes driving the evolution of MHC class IIB (MHCIIB) genes in natural populations of European minnows (Phoxinus phoxinus). To this end, we isolated and genotyped exon 2 of two MHCIIB gene duplicates (DAB1 and DAB3) and 1′665 amplified fragment length polymorphism (AFLP) markers in nine populations, and characterized local bacterial communities by 16S rDNA barcoding using 454 amplicon sequencing. Both MHCIIB loci exhibited signs of historical balancing selection. Whereas genetic differentiation exceeded that of neutral markers at both loci, the populations' genetic diversities were positively correlated with local pathogen diversities only at DAB3. Overall, our results suggest pathogen-mediated local adaptation in European minnows at both MHCIIB loci. While at DAB1 selection appears to favor different alleles among populations, this is only partially the case in DAB3, which appears to be locally adapted to pathogen communities in terms of genetic diversity. These results provide new insights into the importance of host–pathogen interactions in driving local adaptation in the European minnow, and highlight that the importance of adaptive processes driving MHCIIB gene evolution may differ among duplicates within species, presumably as a consequence of alternative selective regimes or different genomic context. Using next-generation sequencing, the present manuscript identifies the relative roles of neutral and adaptive processes driving the evolution of MHC class IIB (MHCIIB) genes in natural populations of a cyprinid fish: the European minnow (Phoxinus phoxinus). We highlight that the relative importance of neutral

  13. Light absorption processes and optimization of ZnO/CdTe core-shell nanowire arrays for nanostructured solar cells

    NASA Astrophysics Data System (ADS)

    Michallon, Jérôme; Bucci, Davide; Morand, Alain; Zanuccoli, Mauro; Consonni, Vincent; Kaminski-Cachopo, Anne

    2015-02-01

    The absorption processes of extremely thin absorber solar cells based on ZnO/CdTe core-shell nanowire (NW) arrays with square, hexagonal or triangular arrangements are investigated through systematic computations of the ideal short-circuit current density using three-dimensional rigorous coupled wave analysis. The geometrical dimensions are optimized for the optical design of these solar cells: the optimal NW diameter, height and array period are 200 ± 10 nm, 1-3 μm and 350-400 nm, respectively, for the square arrangement with a CdTe shell thickness of 40-60 nm. The effects of the CdTe shell thickness on the absorption of ZnO/CdTe NW arrays are revealed through the study of two key optical modes: the first confines the light within individual NWs, while the second interacts strongly with the NW arrangement. It is also shown that the reflectivity of the substrate can improve Fabry-Perot resonances within the NWs: the ideal short-circuit current density is increased by 10% for the ZnO/fluorine-doped tin oxide (FTO)/ideal reflector as compared to the ZnO/FTO/glass substrate. Furthermore, the optimized square arrangement absorbs light more efficiently than both the optimized hexagonal and triangular arrangements. Finally, the enhancement factor of the ideal short-circuit current density is calculated to be as high as 1.72 with respect to planar layers, showing the high optical potential of ZnO/CdTe core-shell NW arrays.
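
    The figure of merit used throughout the study, the ideal short-circuit current density, is the integral of the simulated absorptance against the AM1.5G photon flux assuming unity carrier collection, J_sc = q ∫ A(λ) Φ(λ) dλ. The sketch below shows the bookkeeping only; the absorptance curve and the spectral irradiance array are placeholders, not data from the paper.

    # Sketch: ideal short-circuit current density from an absorptance spectrum.
    import numpy as np

    q = 1.602176634e-19          # elementary charge (C)
    h = 6.62607015e-34           # Planck constant (J s)
    c = 2.99792458e8             # speed of light (m/s)

    wavelengths = np.linspace(300e-9, 850e-9, 200)        # CdTe absorbs up to ~850 nm
    absorptance = np.clip(0.9 - 0.5 * (wavelengths - 300e-9) / 550e-9, 0, 1)  # placeholder A(lambda)
    spectral_irradiance = np.full_like(wavelengths, 1.5)   # placeholder, W m^-2 nm^-1

    # photon flux per metre of wavelength: irradiance / (h c / lambda)
    photon_flux = spectral_irradiance * 1e9 * wavelengths / (h * c)
    jsc = q * np.trapz(absorptance * photon_flux, wavelengths)        # A m^-2
    print(jsc / 10.0, "mA/cm^2")                                      # 1 A/m^2 = 0.1 mA/cm^2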

  14. Results of an analysis of pre-collapse NTS seismic data using split array cross-correlator processing

    SciTech Connect

    Doll, W.E. . Dept. of Geology)

    1990-07-31

    In this study, the authors applied the split array cross-correlation method to a set of pre-collapse data from the Nevada Test Site. The motivation for the study came from preliminary tests of the method on data from an Imperial Valley flow test, in which the location of the event fell close to the location of the injection well, implying that the method might be effective for noisy or emergent signal detection. This study, using NTS data, is the first detailed analysis of the SACC technique for location of seismic events. This study demonstrates that cross-correlation must be used very carefully, if it can be used at all, for locating primary seismic events. Radiation patterns and local structure which cause significant variations in the waveform can make cross-correlation techniques unreliable. Further study is required to determine whether such methods can be used effectively on enveloped traces. At a minimum, a large array or a set of dense arrays would be needed to locate events. When it is reasonable to assume similar waveforms at all stations in an array, the evidence in this report indicates that the SACC method is robust over a wide range of values of the control parameters. Because it provides an estimate of the likelihood for each point in a grid, the SACC method would be useful in noisy data where the approximate location of the epicenter is known. The images formed by SACC processing could be treated as a type of probability contour map for such data. 5 refs., 12 figs.

  15. The Process of Adaptation of a Community-Level, Evidence-Based Intervention for HIV-Positive African American Men Who Have Sex with Men in Two Cities

    ERIC Educational Resources Information Center

    Robinson, Beatrice E.; Galbraith, Jennifer S.; Lund, Sharon M.; Hamilton, Autumn R.; Shankle, Michael D.

    2012-01-01

    We describe the process of adapting a community-level, evidence-based behavioral intervention (EBI), Community PROMISE, for HIV-positive African American men who have sex with men (AAMSM). The Centers for Disease Control and Prevention (CDC) Map of the Adaptation Process (MAP) guided the adaptation process for this new target population by two…

  16. Application Of The Time-Frequency Polarization Analysis Of The Wavefield For Seismic Noise Array Processing

    NASA Astrophysics Data System (ADS)

    Galiana-Merino, J. J.; Rosa-Cintas, S.; Rosa-Herranz, J. L.; Molina-Palacios, S.; Martinez-Espla, J. J.

    2011-12-01

    Microzonation studies using ambient noise measurements constitute an extended and useful procedure for determining the local soil characteristics and their response to an earthquake. Several methods exist for analyzing the noise measurements, the most popular being the horizontal-to-vertical spectral ratio (H/V) and the array techniques, e.g. the frequency-wavenumber (F-K) transform. Many works exist on this topic, and there is still an ongoing debate about the composition of ambient noise, i.e. whether body or surface waves constitute most of it, showing the importance of identifying the different kinds of waves present in a seismic record. In this work we utilize a new method of time-frequency polarization analysis, based on the stationary wavelet packet transform, to investigate how the polarization characteristics of the wavefield influence the application of ambient noise techniques. The signals are divided into different bands according to their reciprocal ellipticity values, and the H/V method and the F-K array analysis are then computed for each band. The qualitative and quantitative comparison between the original curve and those obtained for the analyzed intervals provides information about the composition of the signals, showing that the major components of the seismic noise present reciprocal ellipticity values lower than 0.5. The efficient application of the studied techniques using just the main part of the entire signal, [0 - 0.5], is also evaluated, showing favorable results.
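
    The classical H/V step referred to above can be sketched as follows from three ambient-noise components; the wavelet-packet polarization filtering that precedes it in the paper is not reproduced, and the spectral segment length is an assumption.

    # Sketch: horizontal-to-vertical spectral ratio from three-component ambient noise.
    import numpy as np
    from scipy.signal import welch

    def hv_ratio(ns, ew, vert, fs, nperseg=4096):
        """H/V curve from north-south, east-west and vertical components."""
        f, p_ns = welch(ns, fs, nperseg=nperseg)
        _, p_ew = welch(ew, fs, nperseg=nperseg)
        _, p_v = welch(vert, fs, nperseg=nperseg)
        horizontal = np.sqrt((p_ns + p_ew) / 2.0)      # quadratic mean of horizontals
        return f, horizontal / np.sqrt(p_v)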

  17. A new process for fabricating nanodot arrays on selective regions with diblock copolymer thin film

    NASA Astrophysics Data System (ADS)

    Park, Dae-Ho

    2007-09-01

    A procedure for micropatterning a single layer of nanodot arrays in selective regions is demonstrated by using thin films of polystyrene-b-poly(t-butyl acrylate) (PS-b-PtBA) diblock copolymer. The thin film self-assembled into hexagonally arranged PtBA nanodomains in a PS matrix on a substrate by solvent annealing with 1,4-dioxane. The PtBA nanodomains were converted into poly(acrylic acid) (PAA) having carboxylic-acid-functionalized nanodomains by exposure to hydrochloric acid vapor, or were removed by ultraviolet (UV) irradiation to generate vacant sites without any functional groups due to the elimination of PtBA domains. By sequential treatment with aqueous sodium bicarbonate and aqueous zinc acetate solution, zinc cations were selectively loaded only on the carboxylic-acid-functionalized nanodomains prepared via hydrolysis. Macroscopic patterning through a photomask via UV irradiation, hydrolysis, sequential zinc cation loading and calcination left a nanodot array of zinc oxide on a selectively UV-shaded region.

  18. Human Topological Task Adapted for Rats: Spatial Information Processes of the Parietal Cortex

    PubMed Central

    Goodrich-Hunsaker, Naomi J.; Howard, Brian P.; Hunsaker, Michael R.; Kesner, Raymond P.

    2008-01-01

    Human research has shown that lesions of the parietal cortex disrupt spatial information processing, specifically topological information. Similar findings have been found in nonhumans. It has been difficult to determine homologies between human and non-human mnemonic mechanisms for spatial information processing because methodologies and neuropathology differ. The first objective of the present study was to adapt a previously established human task for rats. The second objective was to better characterize the role of parietal cortex (PC) and dorsal hippocampus (dHPC) for topological spatial information processing. Rats had to distinguish whether a ball inside a ring or a ball outside a ring was the correct, rewarded object. After rats reached criterion on the task (>95%) they were randomly assigned to a lesion group (control, PC, dHPC). Animals were then re-tested. Post-surgery data show that controls were 94% correct on average, dHPC rats were 89% correct on average, and PC rats were 56% correct on average. The results from the present study suggest that the parietal cortex, but not the dHPC processes topological spatial information. The present data are the first to support comparable topological spatial information processes of the parietal cortex in humans and rats. PMID:18571941

  19. Phonon processes in vertically aligned silicon nanowire arrays produced by low-cost all-solution galvanic displacement method

    NASA Astrophysics Data System (ADS)

    Banerjee, Debika; Trudeau, Charles; Gerlein, Luis Felipe; Cloutier, Sylvain G.

    2016-03-01

    The nanoscale engineering of silicon can significantly change its bulk optoelectronic properties to make it more favorable for device integration. Phonon process engineering is one way to enhance inter-band transitions in silicon's indirect band structure alignment. This paper demonstrates phonon localization at the tip of silicon nanowires fabricated by galvanic displacement using wet electroless chemical etching of a bulk silicon wafer. High-resolution Raman micro-spectroscopy reveals that such arrayed structures of silicon nanowires display phonon localization behaviors, which could help their integration into the future generations of nano-engineered silicon nanowire-based devices such as photodetectors and solar cells.

  20. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Choi, Yong; Hong, Key Jo; Kang, Jihoon; Jung, Jin Ho; Huh, Youn Suk; Lim, Hyun Keong; Kim, Sang Su; Kim, Byung-Tae; Chung, Yonghyun

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time-to-digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas Anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. Compared to PMTs, Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with a scintillator array can provide better image uniformity than can be achieved using PMT and Anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4×4 array, pixel size: 3 mm×3 mm) coupled 1:1 with LYSO scintillators (4×4 array, pixel size: 3 mm×3 mm×20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce the number of GAPD channels, and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information all together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and then transferred to a host computer for coincidence sorting and image reconstruction. In order to
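
    The per-channel quantities listed above (arrival time, baseline and energy) can be illustrated with a simple software model of what an FPGA-style pipeline might compute from one digitised pulse; the threshold, pre-trigger window length and leading-edge time pickoff below are assumptions, not the implemented firmware.

    # Sketch: arrival time, baseline and energy from one free-running ADC trace.
    import numpy as np

    def process_pulse(samples, fs, baseline_len=16, threshold=50.0):
        """Return (arrival_time_s, baseline, energy) for a single digitised pulse."""
        baseline = samples[:baseline_len].mean()        # pre-trigger baseline estimate
        pulse = samples - baseline
        above = np.nonzero(pulse > threshold)[0]
        if above.size == 0:
            return None                                 # no event in this trace
        arrival = above[0] / fs                         # leading-edge time pickoff
        energy = pulse[above[0]:].sum()                 # integrated charge proxy
        return arrival, baseline, energy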

  1. Spectral Doppler estimation utilizing 2-D spatial information and adaptive signal processing.

    PubMed

    Ekroll, Ingvild K; Torp, Hans; Løvstakken, Lasse

    2012-06-01

    The trade-off between temporal and spectral resolution in conventional pulsed wave (PW) Doppler may limit duplex/triplex quality and the depiction of rapid flow events. It is therefore desirable to reduce the required observation window (OW) of the Doppler signal while preserving the frequency resolution. This work investigates how the required observation time can be reduced by adaptive spectral estimation utilizing 2-D spatial information obtained by parallel receive beamforming. Four adaptive estimation techniques were investigated, the power spectral Capon (PSC) method, the amplitude and phase estimation (APES) technique, multiple signal classification (MUSIC), and a projection-based version of the Capon technique. By averaging radially and laterally, the required covariance matrix could successfully be estimated without temporal averaging. Useful PW spectra of high resolution and contrast could be generated from ensembles corresponding to those used in color flow imaging (CFI; OW = 10). For a given OW, the frequency resolution could be increased compared with the Welch approach, in cases in which the transit time was higher or comparable to the observation time. In such cases, using short or long pulses with unfocused or focused transmit, an increase in temporal resolution of up to 4 to 6 times could be obtained in in vivo examples. It was further shown that by using adaptive signal processing, velocity spectra may be generated without high-pass filtering the Doppler signal. With the proposed approach, spectra retrospectively calculated from CFI may become useful for unfocused as well as focused imaging. This application may provide new clinical information by inspection of velocity spectra simultaneously from several spatial locations. PMID:22711413
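
    Of the adaptive estimators compared above, the power spectral Capon method is the simplest to sketch: the covariance matrix is averaged over neighbouring range samples (standing in for the 2-D spatial averaging) and over slow-time subapertures, and the power at each Doppler frequency is 1 / (aᴴR⁻¹a). The subaperture length and diagonal loading in the sketch are assumptions, not the paper's settings.

    # Sketch: power spectral Capon estimate of a Doppler spectrum from a short ensemble.
    import numpy as np

    def capon_spectrum(ensemble, subspace_len=6, n_freq=128, loading=1e-3):
        """ensemble: (n_range, ow) complex slow-time samples from parallel ranges/beams."""
        n_range, ow = ensemble.shape
        R = np.zeros((subspace_len, subspace_len), dtype=complex)
        count = 0
        for r in range(n_range):                       # spatial averaging over range
            for start in range(ow - subspace_len + 1):  # subaperture averaging in slow time
                snap = ensemble[r, start:start + subspace_len]
                R += np.outer(snap, snap.conj())
                count += 1
        R /= count
        R += loading * np.real(np.trace(R)) / subspace_len * np.eye(subspace_len)
        Rinv = np.linalg.inv(R)
        freqs = np.linspace(-0.5, 0.5, n_freq, endpoint=False)
        spectrum = np.empty(n_freq)
        for i, f in enumerate(freqs):
            a = np.exp(2j * np.pi * f * np.arange(subspace_len))
            spectrum[i] = 1.0 / np.real(a.conj() @ Rinv @ a)     # Capon power estimate
        return freqs, spectrum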

  2. Neuroelectric adaptations to cognitive processing in virtual environments: an exercise-related approach.

    PubMed

    Vogt, Tobias; Herpers, Rainer; Scherfgen, David; Strüder, Heiko K; Schneider, Stefan

    2015-04-01

    Recently, virtual environments (VEs) have been suggested as a means to encourage users to exercise regularly. The benefits of chronic exercise for cognitive performance are well documented in non-VE neurophysiological and behavioural studies. Based on event-related potentials (ERPs) such as the N200 and P300, cognitive processing may be interpreted on a neuronal level. However, exercise-related neuroelectric adaptation in VEs remains widely unclear and thus characterizes the primary aim of the present study. Twenty-two healthy participants performed active (moderate cycling exercise) and passive (no exercise) sessions in three VEs (control, front, surround), each generating a different sense of presence. Within sessions, conditions were randomly assigned, each lasting 5 min and including a choice reaction-time task to assess cognitive performance. According to the international 10:20 system, EEG with real-time triggered stimulus onset was recorded, and peaks of the N200 and P300 components (amplitude, latency) were exported for analysis. Heart rate was recorded, and sense of presence was assessed prior to and following each session and condition. Results revealed an increase in ERP amplitudes (N200: p < 0.001; P300: p < 0.001) and latencies (N200: p < 0.001) that were most pronounced over fronto-central and occipital electrode sites relative to an increased sense of presence (p < 0.001); however, ERPs were not modulated by exercise (each p > 0.05). Decreases in the accuracy and reaction time of cognitive performance, hypothesized to mirror cognitive processing, failed to reach significance. With respect to previous research, the present neuroelectric adaptations give reason to believe in compensatory neuronal resources that balance demanding cognitive processing in VEs to avoid behavioural inefficiency. PMID:25630906

  3. An adaptive threshold based image processing technique for improved glaucoma detection and classification.

    PubMed

    Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore

    2015-11-01

    Glaucoma is an optic neuropathy which is one of the main causes of permanent blindness worldwide. This paper presents an automatic image processing based method for the detection of glaucoma from digital fundus images. In the proposed work, discriminatory parameters of glaucoma, such as the cup to disc ratio (CDR), neuro-retinal rim (NRR) area and blood vessels in different regions of the optic disc, have been used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which show discriminatory changes with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold derived from local features of the fundus image for segmentation of the optic cup and optic disc, making it invariant to image quality and noise content, which may find wider acceptability. The experimental results indicate that such features are more significant in comparison to the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison of the proposed work with existing methods indicates that the proposed approach has improved accuracy of glaucoma classification from digital fundus images, which may be considered clinically significant. PMID:26321351
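
    In the spirit of the method described above, a locally derived (image-dependent) threshold over the optic nerve head region can be used to delineate disc and cup and report a cup-to-disc area ratio. The percentile thresholds in the sketch below are assumptions, not the paper's values.

    # Simplified sketch: adaptive (local-intensity) thresholding and cup-to-disc ratio.
    import numpy as np

    def cup_to_disc_ratio(green_channel, roi):
        """green_channel: 2-D array; roi: (row_slice, col_slice) around the optic nerve head."""
        patch = green_channel[roi].astype(float)
        disc_thr = np.percentile(patch, 90)              # adaptive, from local intensities
        cup_thr = np.percentile(patch, 98)               # cup is the brightest core
        disc_area = np.count_nonzero(patch >= disc_thr)
        cup_area = np.count_nonzero(patch >= cup_thr)
        return cup_area / max(disc_area, 1)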

  4. Analysis of adaptive forward-backward diffusion flows with applications in image processing

    NASA Astrophysics Data System (ADS)

    Surya Prasath, V. B.; Urbano, José Miguel; Vorotnikov, Dmitry

    2015-10-01

    The nonlinear diffusion model introduced by Perona and Malik (1990 IEEE Trans. Pattern Anal. Mach. Intell. 12 629-39) is well suited to preserving salient edges while restoring noisy images. This model overcomes the well-known edge-smearing effects of the heat equation by using a gradient-dependent diffusion function. Despite providing better denoising results, the analysis of the Perona-Malik scheme is difficult due to the forward-backward nature of the diffusion flow. We study a related adaptive forward-backward diffusion equation which uses a mollified inverse gradient term engrafted in the diffusion term of a general nonlinear parabolic equation. We prove a series of existence, uniqueness and regularity results for viscosity, weak and dissipative solutions of such forward-backward diffusion flows. In particular, we introduce a novel functional framework for the well-posedness of flows of total variation type. A set of synthetic and real image processing examples is used to illustrate the properties and advantages of the proposed adaptive forward-backward diffusion flows.
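    As a concrete illustration of the kind of flow being analysed, the sketch below implements an explicit Perona-Malik-type step in which the diffusivity is evaluated on a mollified (Gaussian-smoothed) gradient, in the spirit of the regularized forward-backward flows discussed above. The time step, diffusivity and parameters are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal regularized Perona-Malik diffusion sketch (explicit scheme).
import numpy as np
from scipy.ndimage import gaussian_filter

def pm_diffusion(u, n_steps=50, dt=0.15, K=0.1, sigma=1.0):
    u = u.astype(float).copy()
    for _ in range(n_steps):
        # Mollified gradient magnitude used inside the diffusivity.
        us = gaussian_filter(u, sigma)
        gx, gy = np.gradient(us)
        g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / K ** 2)   # Perona-Malik diffusivity
        # Divergence of g * grad(u) with the (unsmoothed) image gradient.
        ux, uy = np.gradient(u)
        div = np.gradient(g * ux, axis=0) + np.gradient(g * uy, axis=1)
        u += dt * div
    return u

# Noisy step edge: diffusion smooths the flat regions but preserves the edge.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + np.random.default_rng(1).normal(0, 0.1, img.shape)
denoised = pm_diffusion(noisy)
```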

  5. Does Variation in Genome Sizes Reflect Adaptive or Neutral Processes? New Clues from Passiflora

    PubMed Central

    Fonseca, Tamara C.; Salzano, Francisco M.; Bonatto, Sandro L.; Freitas, Loreta B.

    2011-01-01

    One of the long-standing paradoxes in genomic evolution is the observation that much of the genome is composed of repetitive DNA, which has typically been regarded as superfluous to the genome's function in generating phenotypes. In this work, we used comparative phylogenetic approaches to investigate whether variations in genome size (GS) should be considered adaptive or neutral processes, by comparing GS and flower diameters (FD) of 50 Passiflora species, more specifically within its two most species-rich subgenera, Passiflora and Decaloba. For this, we constructed a phylogenetic tree of these species, estimated their GS and FD, and inferred the tempo and mode of evolution of these traits and their correlations, using both current and phylogenetically independent contrasted values. We found significant correlations among the traits when considering the complete set of data or only the subgenus Passiflora, whereas no correlations were observed within Decaloba. Herein, we present convincing evidence of adaptive evolution of GS, as well as clues that this pattern is limited by a minimum genome size, which could reduce both the possibility of changes in GS and the possibility of phenotypic responses to environmental changes. PMID:21464897

  6. Therapeutic adherence and competence scales for Developmentally Adapted Cognitive Processing Therapy for adolescents with PTSD

    PubMed Central

    Gutermann, Jana; Schreiber, Franziska; Matulis, Simone; Stangier, Ulrich; Rosner, Rita; Steil, Regina

    2015-01-01

    Background The assessment of therapeutic adherence and competence is often neglected in psychotherapy research, particularly in children and adolescents; however, both variables are crucial for the interpretation of treatment effects. Objective Our aim was to develop, adapt, and pilot two scales to assess therapeutic adherence and competence in a recent innovative program, Developmentally Adapted Cognitive Processing Therapy (D-CPT), for adolescents suffering from posttraumatic stress disorder (PTSD) after childhood abuse. Method Two independent raters assessed 30 randomly selected sessions involving 12 D-CPT patients (age 13–20 years, M age=16.75, 91.67% female) treated by 11 therapists within the pilot phase of a multicenter study. Results Three experts confirmed the relevance and appropriateness of each item. All items and total scores for adherence (intraclass correlation coefficients [ICC]=0.76–1.00) and competence (ICC=0.78–0.98) yielded good to excellent inter-rater reliability. Cronbach's alpha was 0.59 for the adherence scale and 0.96 for the competence scale. Conclusions The scales reliably assess adherence and competence in D-CPT for adolescent PTSD patients. The ratings can be helpful in the interpretation of treatment effects, the assessment of mediator variables, and the identification and training of therapeutic skills that are central to achieving good treatment outcomes. Both adherence and competence will be assessed as possible predictor variables for treatment success in future D-CPT trials. PMID:25791915

  7. Application of an adaptive plan to the configuration of nonlinear image-processing algorithms

    NASA Astrophysics Data System (ADS)

    Chu, Chee-Hung H.

    1990-07-01

    The application of an adaptive plan to the design of a class of nonlinear digital image processing operators known as stack filters is presented in this paper. The adaptive plan is based on the mechanics found in genetics and natural selection; such learning mechanisms have become known as genetic algorithms. A stack filter is characterized by the coefficients of its underlying positive Boolean function. This set of coefficients constitutes a binary string, referred to as a chromosome in a genetic algorithm, that represents that particular filter configuration. A fitness value for each chromosome is computed based on the performance of the associated filter in specific tasks such as noise suppression. A population of chromosomes is maintained by the genetic algorithm, and new generations are formed by selecting mating pairs based on their fitness values. Genetic operators such as crossover and mutation are applied to the mating pairs to form offspring. By exchanging substrings of the two parent chromosomes, the crossover operator can bring together, in one chromosome, different blocks of genes that individually contribute to good performance, yielding the best-performing configuration. Empirical results show that this method is capable of configuring stack filters that are effective in impulsive noise suppression.
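    The genetic-algorithm loop itself is straightforward to sketch. In the sketch below, binary chromosomes stand for filter configurations, fitness drives mating-pair selection, and single-point crossover plus mutation form the offspring. The fitness function is a stand-in: in the paper it would evaluate the stack filter defined by the chromosome's positive Boolean function on a noise-corrupted image, which is not reproduced here.

```python
# Sketch of the GA loop: binary chromosomes, fitness-proportional selection,
# single-point crossover, bit-flip mutation, with simple elitism.
import numpy as np

rng = np.random.default_rng(42)
CHROM_LEN, POP, GENS = 32, 40, 100
TARGET = rng.integers(0, 2, CHROM_LEN)          # hypothetical "ideal" configuration

def fitness(chrom):
    # Placeholder: fraction of bits matching a known-good configuration.
    # The real fitness would measure noise suppression by the stack filter.
    return (chrom == TARGET).mean()

def select_pair(pop, fit):
    p = fit / fit.sum()
    i, j = rng.choice(len(pop), size=2, replace=False, p=p)
    return pop[i], pop[j]

pop = rng.integers(0, 2, (POP, CHROM_LEN))
for _ in range(GENS):
    fit = np.array([fitness(c) for c in pop])
    new_pop = [pop[fit.argmax()].copy()]         # elitism: keep the best filter
    while len(new_pop) < POP:
        a, b = select_pair(pop, fit)
        cut = rng.integers(1, CHROM_LEN)         # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(CHROM_LEN) < 0.01      # mutation
        child[flip] ^= 1
        new_pop.append(child)
    pop = np.array(new_pop)
print("best fitness:", max(fitness(c) for c in pop))
```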

  8. Tug-of-war between driver and passenger mutations in cancer and other adaptive processes.

    PubMed

    McFarland, Christopher D; Mirny, Leonid A; Korolev, Kirill S

    2014-10-21

    Cancer progression is an example of a rapid adaptive process where evolving new traits is essential for survival and requires a high mutation rate. Precancerous cells acquire a few key mutations that drive rapid population growth and carcinogenesis. Cancer genomics demonstrates that these few driver mutations occur alongside thousands of random passenger mutations--a natural consequence of cancer's elevated mutation rate. Some passengers are deleterious to cancer cells, yet have been largely ignored in cancer research. In population genetics, however, the accumulation of mildly deleterious mutations has been shown to cause population meltdown. Here we develop a stochastic population model where beneficial drivers engage in a tug-of-war with frequent mildly deleterious passengers. These passengers present a barrier to cancer progression describable by a critical population size, below which most lesions fail to progress, and a critical mutation rate, above which cancers melt down. We find support for this model in cancer age-incidence and cancer genomics data that also allow us to estimate the fitness advantage of drivers and fitness costs of passengers. We identify two regimes of adaptive evolutionary dynamics and use these regimes to understand successes and failures of different treatment strategies. A tumor's load of deleterious passengers can explain previously paradoxical treatment outcomes and suggest that it could potentially serve as a biomarker of response to mutagenic therapies. The collective deleterious effect of passengers is currently an unexploited therapeutic target. We discuss how their effects might be exacerbated by current and future therapies. PMID:25277973

  9. Adaptive Integrated Optical Bragg Grating in Semiconductor Waveguide Suitable for Optical Signal Processing

    NASA Astrophysics Data System (ADS)

    Moniem, T. A.

    2016-05-01

    This article presents a methodology for an integrated Bragg grating using an alloy of GaAs, AlGaAs, and InGaAs with a controllable refractive index, to obtain an adaptive Bragg grating suitable for many applications in optical processing and adaptive control systems, such as limiting and filtering. The refractive index of the Bragg grating is controlled by an external electric field that adjusts the periodic modulation of the refractive index of the active waveguide region. The designed Bragg grating thus has refractive indices programmed by that external electric field. This article presents two approaches for designing the controllable-refractive-index active region of the Bragg grating. The first approach is based on the modification of a planar micro-strip structure of the iGaAs traveling wave as the active region, and the second is based on the modification of self-assembled InAs/GaAs quantum dots of an alloy of GaAs and InGaAs with a GaP traveling wave. The overall design and results are discussed through numerical simulation using the finite-difference time-domain, plane wave expansion, and opto-wave simulation methods to confirm its operation and feasibility.

  10. Intelligent Modeling Combining Adaptive Neuro Fuzzy Inference System and Genetic Algorithm for Optimizing Welding Process Parameters

    NASA Astrophysics Data System (ADS)

    Gowtham, K. N.; Vasudevan, M.; Maduraimuthu, V.; Jayakumar, T.

    2011-04-01

    Modified 9Cr-1Mo ferritic steel is used as a structural material for steam generator components of power plants. Generally, tungsten inert gas (TIG) welding is preferred for welding these steels, but the depth of penetration achievable during autogenous welding is limited. Therefore, activated-flux TIG (A-TIG) welding, a novel welding technique, has been developed in-house to increase the depth of penetration. In modified 9Cr-1Mo steel joints produced by the A-TIG welding process, weld bead width, depth of penetration, and heat-affected zone (HAZ) width play an important role in determining the mechanical properties as well as the performance of the weld joints during service. To obtain the desired weld bead geometry and HAZ width, it is important to set the welding process parameters appropriately. In this work, an adaptive neuro-fuzzy inference system is used to develop independent models correlating the welding process parameters (current, voltage, and torch speed) with the weld bead shape parameters (depth of penetration, bead width, and HAZ width). A genetic algorithm is then employed to determine the optimum A-TIG welding process parameters to obtain the desired weld bead shape parameters and HAZ width.

  11. Primary Dendrite Array: Observations from Ground-Based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra N.; Grugel, Richard N.; Erdman, Robert G.; Poirier, David R.

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST). Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K per centimeter (MICAST6) and 28 K per centimeter (MICAST7). Directional solidification involved a growth speed step increase (MICAST6, from 5 to 50 micrometers per second) and a speed decrease (MICAST7, from 20 to 10 micrometers per second). Distribution and morphology of primary dendrites are currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  12. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6, from 5 to 50 micron/s) and a speed decrease (MICAST7, from 20 to 10 micron/s). Distribution and morphology of primary dendrites are currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  13. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications

    SciTech Connect

    Wlodarczyk, Krystian L. Maier, Robert R. J.; Hand, Duncan P.; Bryce, Emma; Hutson, David; Kirk, Katherine; Schwartz, Noah; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil; Strachan, Mel

    2014-02-15

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  14. Usability of clinical decision support system as a facilitator for learning the assistive technology adaptation process.

    PubMed

    Danial-Saad, Alexandra; Kuflik, Tsvi; Weiss, Patrice L Tamar; Schreuer, Naomi

    2016-01-01

    The aim of this study was to evaluate the usability of the Ontology Supported Computerized Assistive Technology Recommender (OSCAR), a Clinical Decision Support System (CDSS) for the assistive technology adaptation process, and its impact on learning the matching process, and to determine the relationship between its usability and learnability. Two groups of expert and novice clinicians (total, n = 26) took part in this study. Each group filled out the system usability scale (SUS) to evaluate OSCAR's usability. The novice group completed a learning questionnaire to assess OSCAR's effect on their ability to learn the matching process. Both groups rated OSCAR's usability as "very good": M(SUS) = 80.7 (SD = 11.6, median = 83.7) for the novices and M(SUS) = 81.2 (SD = 6.8, median = 81.2) for the experts. The Mann-Whitney results indicated no significant differences between the expert and novice groups in terms of OSCAR's usability. A significant positive correlation existed between the usability of OSCAR and the ability to learn the adaptation process (rs = 0.46, p = 0.04). Usability is an important factor in the acceptance of a system. The successful application of user-centered design principles during the development of OSCAR may serve as a case study that models the significant elements to be considered, theoretically and practically, in developing other systems. Implications for Rehabilitation: Creating a CDSS with a focus on its usability is an important factor for its acceptance by its users. Successful usability outcomes can impact the learning process of the subject matter in general, and the AT prescription process in particular. The successful application of User-Centered Design principles during the development of OSCAR may serve as a case study that models the significant elements to be considered, theoretically and practically. The study emphasizes the importance of close collaboration between the developers and

  15. Using seismic array-processing to enhance observations of PcP waves to constrain lowermost mantle structure

    NASA Astrophysics Data System (ADS)

    Ventosa, S.; Romanowicz, B. A.

    2014-12-01

    The topography of the core-mantle boundary (CMB) and the structure and composition of the D" region are essential to understanding the interaction between the earth's mantle and core. A variety of seismic data-processing techniques have been used to detect and measure travel-times and amplitudes of weak short-period teleseismic body-wave phases that interact with the CMB and D", which is crucial to constrain properties of the lowermost mantle at short wavelengths. The major challenges in enhancing these observations are (1) increasing the signal-to-noise ratio of the target phases and (2) isolating them from unwanted neighboring phases. Seismic array-processing can address these problems by combining signals from groups of seismometers and exploiting information that allows the coherent signals to be separated from the noise. Here, we focus on the study of the Pacific large low shear-velocity province (LLSVP) and surrounding areas using differential travel-times and amplitude ratios of the P and PcP phases, and their depth phases. We particularly design scale-dependent slowness filters that do not compromise time-space resolution. This is a local delay-and-sum (i.e. slant-stack) approach implemented in the time-scale domain using the wavelet transform to enhance time-space resolution (i.e. reduce array aperture). We group stations from USArray and other nearby networks, and from Hi-Net and F-net in Japan, to define many overlapping local arrays. The aperture of each array varies mainly according to (1) the spatial resolution target and (2) the slowness resolution required to isolate the target phases at each period. Once the target phases are well separated, we measure their differential travel-times and amplitude ratios, and we project these to the CMB. In this process, we carefully analyze and, when possible and significant, correct for the main sources of bias, i.e., mantle heterogeneities, earthquake mislocation and intrinsic attenuation. We illustrate our approach in a series of
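    The core operation here is the local delay-and-sum (slant-stack) beam. The sketch below shows only that operation for a small sub-array: each trace is time-shifted according to a trial horizontal slowness and the shifted traces are summed, which enhances arrivals whose moveout matches that slowness. The scale-dependent, wavelet-domain filtering of the paper is not reproduced; the array geometry and signal are synthetic assumptions.

```python
# Minimal delay-and-sum (slant-stack) beam over a local sub-array.
import numpy as np

def slant_stack(traces, offsets_km, slowness_s_per_km, dt):
    """traces: (n_stations, n_samples); offsets_km: station offsets from the
    sub-array centre; returns the beam trace for the trial slowness."""
    n_sta, n_samp = traces.shape
    beam = np.zeros(n_samp)
    for tr, x in zip(traces, offsets_km):
        shift = int(round(x * slowness_s_per_km / dt))  # moveout in samples
        beam += np.roll(tr, -shift)
    return beam / n_sta

# Synthetic example: a plane wave with 0.08 s/km slowness buried in noise.
dt, n = 0.05, 400
offsets = np.linspace(-20, 20, 9)               # 9 stations spanning ~40 km
rng = np.random.default_rng(3)
t0 = 10.0
traces = np.array([
    np.exp(-((np.arange(n) * dt - (t0 + 0.08 * x)) ** 2) / 0.1)
    + 0.5 * rng.normal(size=n)
    for x in offsets])
beam = slant_stack(traces, offsets, 0.08, dt)   # SNR improves roughly by sqrt(9)
```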

  16. From spin noise to systematics: stochastic processes in the first International Pulsar Timing Array data release

    NASA Astrophysics Data System (ADS)

    Lentati, L.; Shannon, R. M.; Coles, W. A.; Verbiest, J. P. W.; van Haasteren, R.; Ellis, J. A.; Caballero, R. N.; Manchester, R. N.; Arzoumanian, Z.; Babak, S.; Bassa, C. G.; Bhat, N. D. R.; Brem, P.; Burgay, M.; Burke-Spolaor, S.; Champion, D.; Chatterjee, S.; Cognard, I.; Cordes, J. M.; Dai, S.; Demorest, P.; Desvignes, G.; Dolch, T.; Ferdman, R. D.; Fonseca, E.; Gair, J. R.; Gonzalez, M. E.; Graikou, E.; Guillemot, L.; Hessels, J. W. T.; Hobbs, G.; Janssen, G. H.; Jones, G.; Karuppusamy, R.; Keith, M.; Kerr, M.; Kramer, M.; Lam, M. T.; Lasky, P. D.; Lassus, A.; Lazarus, P.; Lazio, T. J. W.; Lee, K. J.; Levin, L.; Liu, K.; Lynch, R. S.; Madison, D. R.; McKee, J.; McLaughlin, M.; McWilliams, S. T.; Mingarelli, C. M. F.; Nice, D. J.; Osłowski, S.; Pennucci, T. T.; Perera, B. B. P.; Perrodin, D.; Petiteau, A.; Possenti, A.; Ransom, S. M.; Reardon, D.; Rosado, P. A.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Siemens, X.; Smits, R.; Stairs, I.; Stappers, B.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Taylor, S. R.; Theureau, G.; Tiburzi, C.; Toomey, L.; Vallisneri, M.; van Straten, W.; Vecchio, A.; Wang, J.-B.; Wang, Y.; You, X. P.; Zhu, W. W.; Zhu, X.-J.

    2016-05-01

    We analyse the stochastic properties of the 49 pulsars that comprise the first International Pulsar Timing Array (IPTA) data release. We use Bayesian methodology, performing model selection to determine the optimal description of the stochastic signals present in each pulsar. In addition to spin-noise and dispersion-measure (DM) variations, these models can include timing noise unique to a single observing system, or frequency band. We show the improved radio-frequency coverage and presence of overlapping data from different observing systems in the IPTA data set enables us to separate both system and band-dependent effects with much greater efficacy than in the individual pulsar timing array (PTA) data sets. For example, we show that PSR J1643-1224 has, in addition to DM variations, significant band-dependent noise that is coherent between PTAs which we interpret as coming from time-variable scattering or refraction in the ionized interstellar medium. Failing to model these different contributions appropriately can dramatically alter the astrophysical interpretation of the stochastic signals observed in the residuals. In some cases, the spectral exponent of the spin-noise signal can vary from 1.6 to 4 depending upon the model, which has direct implications for the long-term sensitivity of the pulsar to a stochastic gravitational-wave (GW) background. By using a more appropriate model, however, we can greatly improve a pulsar's sensitivity to GWs. For example, including system and band-dependent signals in the PSR J0437-4715 data set improves the upper limit on a fiducial GW background by ˜60 per cent compared to a model that includes DM variations and spin-noise only.

  17. Improving performance of natural language processing part-of-speech tagging on clinical narratives through domain adaptation

    PubMed Central

    Ferraro, Jeffrey P; Daumé, Hal; DuVall, Scott L; Chapman, Wendy W; Harkema, Henk; Haug, Peter J

    2013-01-01

    Objective Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Methods Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. Results The evaluated POS taggers drop in accuracy by 8.5–15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3–91.0% on clinical texts. ClinAdapt reports 93.2–93.9%. Conclusions ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks. PMID:23486109
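    The Easy Adapt baseline mentioned above is a feature-augmentation scheme: every feature vector is expanded into a shared copy plus a domain-specific copy, so a single linear model can learn which feature weights transfer from general English to clinical text and which must be re-learned. The sketch below illustrates only that augmentation step; the POS tagger itself and ClinAdapt's lexical-generation probability rule are not reproduced, and the feature values are placeholders.

```python
# Sketch of Easy Adapt feature augmentation for domain adaptation.
import numpy as np

def easy_adapt(x, domain, n_domains=2):
    """x: 1-D feature vector; domain: integer id of the originating corpus.
    Returns [shared | domain_0 | domain_1 | ...] with zeros outside the
    originating domain's block."""
    aug = np.zeros((1 + n_domains) * x.size)
    aug[:x.size] = x                                   # shared block
    start = (1 + domain) * x.size
    aug[start:start + x.size] = x                      # domain-specific block
    return aug

# A token feature vector from newswire (domain 0) vs. a clinical note (domain 1).
x = np.array([1.0, 0.0, 1.0])
print(easy_adapt(x, domain=0))   # [1 0 1 | 1 0 1 | 0 0 0]
print(easy_adapt(x, domain=1))   # [1 0 1 | 0 0 0 | 1 0 1]
```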

  18. Infrared Astronomy with Arrays: The Next Generation; Sunset Village, Los Angeles, CA, Oct. 1993

    NASA Technical Reports Server (NTRS)

    Mclean, Ian S.

    1994-01-01

    Conference papers on infrared array techniques and methods for infrared astronomy are presented. Topics covered include the following: infrared telescopes; infrared spectrometers; spaceborne astronomy; astronomical observatories; infrared cameras; imaging techniques; sky surveys; infrared photography; infrared photometry; infrared spectroscopy; equipment specifications; data processing and analysis; control systems; cryogenic equipment; adaptive optics; image resolution; infrared detector materials; and focal plane arrays.

  19. Modeling soil processes for adapting agricultural systems to climate variability and change

    NASA Astrophysics Data System (ADS)

    Basso, B.

    2014-12-01

    Climate change, drought, and agricultural intensification are increasing the demand for enhanced resource use efficiency (water, nitrogen and radiation). There is a global consensus among climate and agricultural scientists about the need to quantify the likely impacts of climate change on crop yields, given their significant consequences for food prices as well as the global economy. Crop models have been extensively tested for yields, but their validation for soil water balance and carbon and nitrogen cycling in agricultural systems has been limited. The objective of this research is to illustrate the importance of modeling soil processes correctly in order to identify management strategies that allow cropping systems to adapt to climate variability and change. Results from the first phase of the AgMIP soil and crop rotation initiative will also be discussed.

  20. Nonlinear structural response using adaptive dynamic relaxation on a massively-parallel-processing system

    NASA Technical Reports Server (NTRS)

    Oakley, David R.; Knight, Norman F., Jr.

    1994-01-01

    A parallel adaptive dynamic relaxation (ADR) algorithm has been developed for nonlinear structural analysis. This algorithm has minimal memory requirements, is easily parallelizable and scalable to many processors, and is generally very reliable and efficient for highly nonlinear problems. Performance evaluations on single-processor computers have shown that the ADR algorithm is reliable and highly vectorizable, and that it is competitive with direct solution methods for the highly nonlinear problems considered. The present algorithm is implemented on the 512-processor Intel Touchstone DELTA system at Caltech, and it is designed to minimize the extent and frequency of interprocessor communication. The algorithm has been used to solve for the nonlinear static response of two- and three-dimensional hyperelastic systems involving contact. Impressive relative speedups have been achieved, demonstrating the high scalability of the ADR algorithm. For the class of problems addressed, the ADR algorithm represents a very promising approach for parallel-vector processing.
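    Dynamic relaxation reaches the static solution of K u = f as the steady state of damped pseudo-dynamics. The sketch below shows the idea on a small linear system, with the damping coefficient re-estimated each iteration from a Rayleigh quotient as one common "adaptive" ingredient; the fictitious masses, time step and damping rule are illustrative assumptions, and the paper's parallel, nonlinear, contact-capable machinery is of course not reproduced.

```python
# Minimal dynamic-relaxation sketch with adaptive (Rayleigh-quotient) damping.
import numpy as np

def dynamic_relaxation(K, f, n_iter=2000, dt=1.0, tol=1e-8):
    n = len(f)
    m = 1.1 * dt ** 2 / 2 * np.abs(K).sum(axis=1)      # fictitious diagonal mass (Gershgorin-type bound)
    u, v = np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        r = f - K @ u                                   # residual (out-of-balance force)
        if np.linalg.norm(r) < tol:
            break
        # Adaptive damping: near-critical damping from a Rayleigh-quotient estimate.
        denom = u @ (m * u)
        c = 2 * np.sqrt((u @ (K @ u)) / denom) if denom > 0 else 0.0
        v = ((1 - c * dt / 2) * v + dt * r / m) / (1 + c * dt / 2)
        u = u + dt * v
    return u

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
f = np.array([0.0, 0.0, 1.0])
u = dynamic_relaxation(K, f)
print(u, np.linalg.solve(K, f))    # the two should agree closely
```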

  1. Riemannian mean and space-time adaptive processing using projection and inversion algorithms

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Barbaresco, Frédéric

    2013-05-01

    The estimation of the covariance matrix from real data is required in the application of space-time adaptive processing (STAP) to an airborne ground moving target indication (GMTI) radar. A natural approach to estimation of the covariance matrix that is based on the information geometry has been proposed. In this paper, the output of the Riemannian mean is used in inversion and projection algorithms. It is found that the projection class of algorithms can yield very significant gains, even when the gains due to inversion-based algorithms are marginal over standard algorithms. The performance of the projection class of algorithms does not appear to be overly sensitive to the projected subspace dimension.
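    A common way to compute the Riemannian (Karcher/Fréchet) mean of positive-definite covariance matrices is the fixed-point iteration under the affine-invariant metric shown below. In a STAP context the input matrices would be covariance estimates from secondary range cells; here they are random symmetric positive-definite matrices, and the iteration counts and tolerances are illustrative assumptions rather than values from the paper.

```python
# Sketch: Riemannian (Karcher) mean of SPD covariance matrices.
import numpy as np
from scipy.linalg import sqrtm, logm, expm

def riemannian_mean(covs, n_iter=50, tol=1e-10):
    X = sum(covs) / len(covs)                # arithmetic mean as starting point
    for _ in range(n_iter):
        Xh = sqrtm(X)
        Xih = np.linalg.inv(Xh)
        # Mean of the matrix logarithms in the tangent space at X.
        T = sum(logm(Xih @ C @ Xih) for C in covs) / len(covs)
        X = (Xh @ expm(T) @ Xh).real         # discard tiny imaginary round-off
        if np.linalg.norm(T) < tol:
            break
    return X

rng = np.random.default_rng(0)
covs = []
for _ in range(5):
    A = rng.normal(size=(4, 4))
    covs.append(A @ A.T + 4 * np.eye(4))     # well-conditioned SPD samples
M = riemannian_mean(covs)
print(np.allclose(M, M.T, atol=1e-8))        # the mean is again symmetric positive definite
```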

  2. Multi-objective optimization of gear forging process based on adaptive surrogate meta-models

    NASA Astrophysics Data System (ADS)

    Meng, Fanjuan; Labergere, Carl; Lafon, Pascal; Daniel, Laurent

    2013-05-01

    In the forging industry, net shape or near-net-shape forging of gears has been the subject of considerable research effort in the last few decades. In this paper, a multi-objective optimization methodology for net shape gear forging process design is discussed. The study is done in four main parts: building a parametric CAD geometry model, simulating the forging process, fitting surrogate meta-models and optimizing the process using an advanced algorithm. In order to make the meta-models approximate the real response as closely as possible, an adaptive meta-model-based design strategy has been applied. This is an iterative process: first, build a preliminary version of the meta-models after the initial simulated calculations; second, improve the accuracy and update the meta-models by adding new representative sample points. By using this iterative strategy, the number of initial sample points for real numerical simulations is greatly decreased and the time for the forged gear design is significantly shortened. Finally, an optimal design for an industrial application of a 27-tooth gear forging process is introduced, which includes three optimization variables and two objective functions. A 3D FE numerical simulation model is used to simulate the process, and an advanced thermo-elasto-visco-plastic constitutive equation is considered to represent the material behavior. The meta-model applied in this example is kriging and the optimization algorithm is NSGA-II. In the end, a relatively good Pareto optimal front (POF) is obtained while gradually improving the surrogate meta-models.
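    The adaptive enrichment loop can be sketched on a toy one-dimensional "process response": fit a kriging (Gaussian-process) surrogate on a few expensive simulations, then repeatedly add the sample where the surrogate is least certain and refit. The response function, kernel length scale and enrichment criterion below are stand-in assumptions; NSGA-II and the 3D FE forging model are not reproduced.

```python
# Sketch of adaptive surrogate (kriging) enrichment on a toy response.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):              # stand-in for a 3D FE forging run
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.0], [0.5], [1.0]])       # initial design of experiments
y = expensive_simulation(X).ravel()
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(5):                        # enrichment iterations
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-10).fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(std)]          # most uncertain candidate point
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_simulation(x_new[0]))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-10).fit(X, y)
_, std = gp.predict(grid, return_std=True)
print(f"{len(X)} simulations used; max predictive std = {std.max():.3f}")
```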

  3. Molecular Mechanisms Mediating the Adaptive Regulation of Intestinal Riboflavin Uptake Process

    PubMed Central

    Subramanian, Veedamali S.; Ghosal, Abhisek; Kapadia, Rubina; Nabokina, Svetlana M.; Said, Hamid M.

    2015-01-01

    The intestinal absorption process of vitamin B2 (riboflavin, RF) is carrier-mediated, and all three known human RF transporters, i.e., hRFVT-1, -2, and -3 (products of the SLC52A1, 2 & 3 genes, respectively) are expressed in the gut. We have previously shown that the intestinal RF uptake process is adaptively regulated by substrate level, but little is known about the molecular mechanism(s) involved. Using human intestinal epithelial NCM460 cells maintained under RF deficient and over-supplemented (OS) conditions, we now show that the induction in RF uptake in RF deficiency is associated with an increase in expression of the hRFVT-2 & -3 (but not hRFVT-1) at the protein and mRNA levels. Focusing on hRFVT-3, the predominant transporter in the intestine, we also observed an increase in the level of expression of its hnRNA and activity of its promoter in the RF deficiency state. An increase in the level of expression of the nuclear factor Sp1 (which is important for activity of the SLC52A3 promoter) was observed in RF deficiency, while mutating the Sp1/GC site in the SLC52A3 promoter drastically decreased the level of induction in SLC52A3 promoter activity in RF deficiency. We also observed specific epigenetic changes in the SLC52A3 promoter in RF deficiency. Finally, an increase in hRFVT-3 protein expression at the cell surface was observed in RF deficiency. Results of these investigations show, for the first time, that transcriptional and post-transcriptional mechanisms are involved in the adaptive regulation of intestinal RF uptake by the prevailing substrate level. PMID:26121134

  4. Source Depth Estimation Using a Horizontal Array by Matched-Mode Processing in the Frequency-Wavenumber Domain

    NASA Astrophysics Data System (ADS)

    Nicolas, Barbara; Mars, Jérôme I.; Lacoume, Jean-Louis

    2006-12-01

    In shallow water environments, matched-field processing (MFP) and matched-mode processing (MMP) are proven techniques for source localization. In these environments, the acoustic field propagates at long range as depth-dependent modes. Given a knowledge of the modes, it is possible to estimate source depth. In MMP, the pressure field is typically sampled over depth with a vertical line array (VLA) in order to extract the mode amplitudes. In this paper, we focus on horizontal line arrays (HLA), as they are generally more practical for at-sea applications. Considering an impulsive low-frequency source (1-100 Hz) in a shallow water environment (100-400 m), we propose an efficient method to estimate source depth by modal decomposition of the pressure field recorded on an HLA of sensors. Mode amplitudes are estimated using the frequency-wavenumber transform, which is the 2D Fourier transform of a time-distance section. We first study the robustness of the presented method against noise and against environmental mismatches on simulated data. The method is then applied to both at-sea and laboratory data. We also show that the source depth estimation is drastically improved by incorporating the sign of the mode amplitudes.
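    The frequency-wavenumber transform itself is just a 2D Fourier transform of the time-distance section recorded on the horizontal array; propagating modes appear as distinct ridges in the f-k plane whose amplitudes feed the matched-mode depth estimation. The sketch below builds a synthetic section with two "modes" of different phase speed and computes its f-k spectrum; the array geometry, frequencies and speeds are illustrative assumptions.

```python
# Minimal frequency-wavenumber (f-k) transform sketch.
import numpy as np

dt, dx = 0.01, 10.0                 # sample interval (s), sensor spacing (m)
n_t, n_x = 512, 128                 # samples per trace, number of hydrophones
t = np.arange(n_t) * dt
x = np.arange(n_x) * dx

# Two synthetic "modes": the same 20 Hz tone travelling at different phase speeds.
section = (np.sin(2 * np.pi * 20 * (t[None, :] - x[:, None] / 1500.0))
           + 0.5 * np.sin(2 * np.pi * 20 * (t[None, :] - x[:, None] / 1600.0)))

# f-k spectrum: FFT over distance (axis 0 -> wavenumber) and time (axis 1 -> frequency).
fk = np.fft.fftshift(np.fft.fft2(section))
freqs = np.fft.fftshift(np.fft.fftfreq(n_t, dt))          # Hz, along axis 1
wavenumbers = np.fft.fftshift(np.fft.fftfreq(n_x, dx))    # cycles/m, along axis 0
# Each mode shows up as a peak near (k = f / c_mode, f = 20 Hz).
```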

  5. Adaptive step ODE algorithms for the 3D simulation of electric heart activity with graphics processing units.

    PubMed

    Garcia-Molla, V M; Liberos, A; Vidal, A; Guillem, M S; Millet, J; Gonzalez, A; Martinez-Zaldivar, F J; Climent, A M

    2014-01-01

    In this paper we studied the implementation and performance of adaptive step methods for large systems of ordinary differential equations on graphics processing units, focusing on the simulation of three-dimensional electric cardiac activity. The Rush-Larsen method was applied in all the implemented solvers to improve efficiency. We compared the adaptive step methods with the fixed step methods, and we found that the fixed step methods can be faster while the adaptive step methods are better in terms of accuracy and robustness. PMID:24377685
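    The two ingredients can be sketched together on a toy cell model: a Rush-Larsen exponential update for a gating variable (exact for dy/dt = (y_inf - y)/tau with frozen coefficients) and an adaptive step size chosen from an embedded Euler/Heun error estimate. The membrane model, tolerances and step bounds below are illustrative assumptions, not a physiological model, and the GPU parallelisation is omitted.

```python
# Sketch: adaptive step control plus a Rush-Larsen gating-variable update.
import numpy as np

def y_inf(v):  return 1.0 / (1.0 + np.exp(-(v + 40.0) / 5.0))
def tau(v):    return 1.0 + 4.0 * np.exp(-((v + 40.0) / 30.0) ** 2)
def dv_dt(v, y, t):  return -0.5 * (v + 65.0) + 20.0 * y + (10.0 if 5 < t < 6 else 0.0)

v, y, t, dt = -65.0, 0.0, 0.0, 0.01
tol = 1e-4
while t < 20.0:
    # Embedded Euler (1st order) vs Heun (2nd order) estimate for the voltage.
    k1 = dv_dt(v, y, t)
    v_euler = v + dt * k1
    k2 = dv_dt(v_euler, y, t + dt)
    v_heun = v + dt * 0.5 * (k1 + k2)
    err = abs(v_heun - v_euler)
    if err > tol and dt > 1e-5:
        dt *= 0.5                                  # reject the step, retry smaller
        continue
    # Rush-Larsen update of the gating variable over the accepted step.
    y = y_inf(v) + (y - y_inf(v)) * np.exp(-dt / tau(v))
    v, t = v_heun, t + dt
    if err < 0.25 * tol:
        dt = min(dt * 2.0, 0.5)                    # grow the step when safe
print(f"final state: V = {v:.2f}, gate = {y:.3f}")
```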

  6. System and method for cognitive processing for data fusion

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor)

    2012-01-01

    A system and method for cognitive processing of sensor data. A processor array receiving analog sensor data and having programmable interconnects, multiplication weights, and filters provides for adaptive learning in real-time. A static random access memory contains the programmable data for the processor array and the stored data is modified to provide for adaptive learning.

  7. Power and Performance Trade-offs for Space Time Adaptive Processing

    SciTech Connect

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino; Tallent, Nathan R.; Kerbyson, Darren J.; Hoisie, Adolfy

    2015-07-27

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, an Intel Haswell Core i7-4770TE and an NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to improved performance in a typical STAP application.

  8. An adaptive process-based cloud infrastructure for space situational awareness applications

    NASA Astrophysics Data System (ADS)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increasing demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate that can meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale and a prototype are examined in detail. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of a more granular and flexible allocation of cloud computing resources are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  9. Design Process of Flight Vehicle Structures for a Common Bulkhead and an MPCV Spacecraft Adapter

    NASA Technical Reports Server (NTRS)

    Aggarwal, Pravin; Hull, Patrick V.

    2015-01-01

    Designing and manufacturing space flight vehicle structures is a skillset that has grown considerably at NASA during the last several years. Beginning with the Ares program and followed by the Space Launch System (SLS), in-house designs were produced for both the Upper Stage and the SLS Multipurpose Crew Vehicle (MPCV) spacecraft adapter. Specifically, critical design review (CDR) level analysis and flight production drawings were produced for the above-mentioned hardware. In particular, the experience of this in-house design work led to increased manufacturing infrastructure for both Marshall Space Flight Center (MSFC) and the Michoud Assembly Facility (MAF), improved skillsets in both analysis and design, and hands-on experience in building and testing full-scale (MSA) hardware. The hardware design and development processes from initiation to CDR and finally flight resulted in many challenges and experiences that produced valuable lessons. This paper builds on these experiences of NASA in recent years in designing and fabricating flight hardware and examines the design/development processes used, as well as the challenges and lessons learned, i.e. from the initial design, loads estimation and mass constraints to structural optimization/affordability to release of production drawings to hardware manufacturing. While there are many documented design processes which a design engineer can follow, these unique experiences can offer insight into designing hardware in current program environments and present solutions to many of the challenges experienced by the engineering team.

  10. Phoneme restoration and empirical coverage of Interactive Activation and Adaptive Resonance models of human speech processing.

    PubMed

    Grossberg, Stephen; Kazerounian, Sohrob

    2016-08-01

    Magnuson [J. Acoust. Soc. Am. 137, 1481-1492 (2015)] makes claims for Interactive Activation (IA) models and against Adaptive Resonance Theory (ART) models of speech perception. Magnuson also presents simulations that claim to show that the TRACE model can simulate phonemic restoration, which was an explanatory target of the cARTWORD ART model. The theoretical analysis and review herein show that these claims are incorrect. More generally, the TRACE and cARTWORD models illustrate two diametrically opposed types of neural models of speech and language. The TRACE model embodies core assumptions with no analog in known brain processes. The cARTWORD model defines a hierarchy of cortical processing regions whose networks embody cells in laminar cortical circuits as part of the paradigm of laminar computing. cARTWORD further develops ART speech and language models that were introduced in the 1970s. It builds upon Item-Order-Rank working memories, which activate learned list chunks that unitize sequences to represent phonemes, syllables, and words. Psychophysical and neurophysiological data support Item-Order-Rank mechanisms and contradict TRACE representations of time, temporal order, silence, and top-down processing that exhibit many anomalous properties, including hallucinations of non-occurring future phonemes. Computer simulations of the TRACE model are presented that demonstrate these failures. PMID:27586743

  11. Adaptive-weighted bilateral filtering and other pre-processing techniques for optical coherence tomography.

    PubMed

    Anantrasirichai, N; Nicholson, Lindsay; Morgan, James E; Erchova, Irina; Mortlock, Katie; North, Rachel V; Albon, Julie; Achim, Alin

    2014-09-01

    This paper presents novel pre-processing image enhancement algorithms for retinal optical coherence tomography (OCT). These images contain a large amount of speckle causing them to be grainy and of very low contrast. To make these images valuable for clinical interpretation, we propose a novel method to remove speckle, while preserving useful information contained in each retinal layer. The process starts with multi-scale despeckling based on a dual-tree complex wavelet transform (DT-CWT). We further enhance the OCT image through a smoothing process that uses a novel adaptive-weighted bilateral filter (AWBF). This offers the desirable property of preserving texture within the OCT image layers. The enhanced OCT image is then segmented to extract inner retinal layers that contain useful information for eye research. Our layer segmentation technique is also performed in the DT-CWT domain. Finally we describe an OCT/fundus image registration algorithm which is helpful when two modalities are used together for diagnosis and for information fusion. PMID:25034317
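    A bilateral filter weights neighbouring pixels by both spatial distance and intensity similarity; the sketch below adds one simple adaptive twist, scaling the range-kernel width by a local noise estimate so smoothing is stronger in speckle-dominated regions and weaker across layer boundaries. This adaptive rule and the synthetic "layered" OCT image are assumptions for illustration only; the paper's exact AWBF weighting and the DT-CWT despeckling stage are not reproduced.

```python
# Sketch of a bilateral filter with a locally adapted range kernel.
import numpy as np

def adaptive_bilateral(img, radius=3, sigma_s=2.0, k_range=1.5):
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img, radius, mode='reflect')
    # Spatial (domain) kernel, fixed.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            sigma_r = k_range * patch.std() + 1e-6      # adaptive range width (assumed rule)
            w_range = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = w_spatial * w_range
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# Speckled synthetic "layered" image: two horizontal bands of different intensity.
rng = np.random.default_rng(5)
layers = np.vstack([np.full((32, 64), 0.3), np.full((32, 64), 0.7)])
speckled = layers * rng.gamma(4.0, 0.25, layers.shape)    # multiplicative speckle
smoothed = adaptive_bilateral(speckled)
```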

  12. VLSI processor with a configurable processing element array for balanced feature extraction in high-resolution images

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbo; Shibata, Tadashi

    2014-01-01

    A VLSI processor employing a configurable processing element array (PEA) is developed for a newly proposed balanced feature extraction algorithm. In the algorithm, the input image is divided into square regions and the number of features is determined by noise effect analysis in each region. Regions of different sizes are used according to the resolutions and contents of input images. Therefore, inside the PEA, processing elements are hierarchically grouped for feature extraction in regions of different sizes. A proof-of-concept chip is fabricated using a 0.18 µm CMOS technology with a 32 × 32 PEA. From measurement results, a speed of 7.5 kfps is achieved for feature extraction in 128 × 128 pixel regions when operating the chip at 45 MHz, and a speed of 55 fps is also achieved for feature extraction in 1920 × 1080 pixel images.

  13. Apparatus for measuring local stress of metallic films, using an array of parallel laser beams during rapid thermal processing

    NASA Astrophysics Data System (ADS)

    Huang, R.; Taylor, C. A.; Himmelsbach, S.; Ceric, H.; Detzel, T.

    2010-05-01

    The novel apparatus described here was developed to investigate the thermo-mechanical behavior of metallic films on a substrate by acquiring the wafer curvature. It comprises an optical module producing and measuring an array of parallel laser beams, a high resolution scanning stage, a rapid thermal processing (RTP) chamber and several accessorial gas control modules. Unlike most traditional systems which only calculate the average wafer curvature, this system has the capability to measure the curvature locally in 30 ms. Consequently, the real-time development of biaxial stress involved in thin films can be fully captured during any thermal treatments such as temperature cycling or annealing processes. In addition, the multiple parallel laser beam technique cancels electrical, vibrational and other random noise sources that would otherwise make an in situ measurement very difficult. Furthermore, other advanced features such as the in situ acid treatment and active cooling extend the experimental conditions to provide new insights into thin film properties and material behavior.
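    The conversion from measured wafer curvature to biaxial film stress is conventionally made with Stoney's equation; a short sketch of that conversion is given below. The substrate and film values are illustrative assumptions, not numbers from the paper.

```python
# Stoney's equation: biaxial film stress from substrate curvature.
def stoney_stress(curvature_per_m, t_substrate, t_film, E_substrate, nu_substrate):
    """sigma_f = E_s * t_s**2 * kappa / (6 * (1 - nu_s) * t_f), with kappa the
    curvature change (1/m), t_s and t_f the substrate and film thicknesses (m),
    and E_s, nu_s the substrate's Young's modulus (Pa) and Poisson ratio."""
    return (E_substrate * t_substrate ** 2 * curvature_per_m
            / (6.0 * (1.0 - nu_substrate) * t_film))

# Example: 1 um metal film on a 725 um silicon wafer, 0.02 1/m curvature change.
sigma = stoney_stress(curvature_per_m=0.02, t_substrate=725e-6, t_film=1e-6,
                      E_substrate=130e9, nu_substrate=0.28)
print(f"film stress ~ {sigma / 1e6:.0f} MPa")
```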

  14. Focal plane array with modular pixel array components for scalability

    SciTech Connect

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  15. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  16. Matched Bearing Processing for Airborne Source Localization by an Underwater Horizontal Line Array

    NASA Astrophysics Data System (ADS)

    Peng, Zhao-Hui; Li, Zheng-Lin; Wang, Guang-Xu

    2010-11-01

    The location of an airborne source is estimated from signals measured by a horizontal line array (HLA), based on the fact that a signal transmitted by an airborne source will reach an underwater hydrophone in different ways: via a direct refracted path, via one or more bottom and surface reflections, and via the so-called lateral wave. As a result, when an HLA near the airborne source is used for beamforming, several peaks at different bearing angles will appear. By matching the experimental beamforming outputs with the predicted outputs for all candidate source locations, the most likely location is the one which gives the minimum difference. An experiment on airborne source localization was conducted in the Yellow Sea in October 2008. An HLA was laid on the sea bottom at a depth of 30 m. A high-power loudspeaker was hung from a research ship floating near the HLA and sent out LFM pulses. The estimated location of the loudspeaker agrees well with the GPS measurements.
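    The matching step amounts to comparing the measured beam-power-versus-bearing curve against predicted curves for a grid of candidate source positions and keeping the candidate with the smallest mismatch. The sketch below shows only that grid-search step; the replica generator is a toy stand-in (a two-arrival pattern with an assumed image source) rather than the multipath propagation model described in the paper, and all geometry values are illustrative.

```python
# Sketch of matched bearing processing: grid search over candidate locations.
import numpy as np

angles = np.linspace(-90, 90, 181)             # beam steering angles (degrees)

def replica(range_m, height_m):
    """Toy predicted beam pattern: two arrival angles whose separation shrinks
    with range; a real implementation would use the propagation model
    (refracted path, bottom/surface reflections, lateral wave)."""
    a1 = np.degrees(np.arctan2(height_m, range_m))
    a2 = np.degrees(np.arctan2(height_m + 30.0, range_m))   # assumed image source
    return (np.exp(-0.5 * ((angles - a1) / 3.0) ** 2)
            + 0.6 * np.exp(-0.5 * ((angles - a2) / 3.0) ** 2))

# "Measured" beam output: pattern for a source at 800 m range, 50 m height, plus noise.
rng = np.random.default_rng(11)
measured = replica(800.0, 50.0) + 0.05 * rng.normal(size=angles.size)

# Grid search over candidate locations; the smallest mean-square mismatch wins.
candidates = [(r, h) for r in np.arange(200.0, 2001.0, 50.0)
                     for h in np.arange(10.0, 201.0, 10.0)]
best = min(candidates, key=lambda c: np.mean((measured - replica(*c)) ** 2))
print(f"estimated source: range = {best[0]:.0f} m, height = {best[1]:.0f} m")
```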

  17. Elastomeric inverse moulding and vacuum casting process characterization for the fabrication of arrays of concave refractive microlenses

    NASA Astrophysics Data System (ADS)

    Desmet, L.; Van Overmeire, S.; Van Erps, J.; Ottevaere, H.; Debaes, C.; Thienpont, H.

    2007-01-01

    We present a complete and precise quantitative characterization of the different process steps used in an elastomeric inverse moulding and vacuum casting technique. We use the latter replication technique to fabricate concave replicas from an array of convex thermal reflow microlenses. During the inverse elastomeric moulding we obtain a secondary silicone mould of the original silicone mould in which the master component is embedded. Using vacuum casting, we are then able to cast out of the second mould several optical transparent poly-urethane arrays of concave refractive microlenses. We select ten particular representative microlenses on the original, the silicone moulds and replica sample and quantitatively characterize and statistically compare them during the various fabrication steps. For this purpose, we use several state-of-the-art and ultra-precise characterization tools such as a stereo microscope, a stylus surface profilometer, a non-contact optical profilometer, a Mach-Zehnder interferometer, a Twyman-Green interferometer and an atomic force microscope to compare various microlens parameters such as the lens height, the diameter, the paraxial focal length, the radius of curvature, the Strehl ratio, the peak-to-valley and the root-mean-square wave aberrations and the surface roughness. When appropriate, the microlens parameter under test is measured with several different measuring tools to check for consistency in the measurement data. Although none of the lens samples shows diffraction-limited performance, we prove that the obtained replicated arrays of concave microlenses exhibit sufficiently low surface roughness and sufficiently high lens quality for various imaging applications.

  18. Investigation of proposed process sequence for the array automated assembly task, phases 1 and 2

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Garcia, A.; Eskenas, K.

    1980-01-01

    Progress was made on the process sequence for module fabrication. A shift from bonding with a conformal coating to laminating with ethylene vinyl acetate and a glass superstrate is recommended for further module fabrication. The processes retained for the selected process sequence (spin-on diffusion; print and fire aluminum p+ back; clean; print and fire silver front contact; apply tin pad to aluminum back) were evaluated for their cost contribution.

  19. LSSA (Low-cost Silicon Solar Array) project

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Methods are explored for economically generating electrical power to meet future requirements. The Low-Cost Silicon Solar Array Project (LSSA) was established to reduce the price of solar arrays by improving manufacturing technology, adapting mass production techniques, and promoting user acceptance. The new manufacturing technology includes the consideration of new silicon refinement processes, silicon sheet growth techniques, encapsulants, and automated assembly production being developed under contract by industries and universities.

  20. Motor learning and cross-limb transfer rely upon distinct neural adaptation processes.

    PubMed

    Stöckel, Tino; Carroll, Timothy J; Summers, Jeffery J; Hinder, Mark R

    2016-08-01

    Performance benefits conferred in the untrained limb after unilateral motor practice are termed cross-limb transfer. Although the effect is robust, the neural mechanisms remain incompletely understood. In this study we used noninvasive brain stimulation to reveal that the neural adaptations that mediate motor learning in the trained limb are distinct from those that underlie cross-limb transfer to the opposite limb. Thirty-six participants practiced a ballistic motor task with their right index finger (150 trials), followed by intermittent theta-burst stimulation (iTBS) applied to the trained (contralateral) primary motor cortex (cM1 group), the untrained (ipsilateral) M1 (iM1 group), or the vertex (sham group). After stimulation, another 150 training trials were undertaken. Motor performance and corticospinal excitability were assessed before motor training, pre- and post-iTBS, and after the second training bout. For all groups, training significantly increased performance and excitability of the trained hand, and performance, but not excitability, of the untrained hand, indicating transfer at the level of task performance. The typical facilitatory effect of iTBS on MEPs was reversed for cM1, suggesting homeostatic metaplasticity, and prior performance gains in the trained hand were degraded, suggesting that iTBS interfered with learning. In stark contrast, iM1 iTBS facilitated both performance and excitability for the untrained hand. Importantly, the effects of cM1 and iM1 iTBS on behavior were exclusive to the hand contralateral to stimulation, suggesting that adaptations within the untrained M1 contribute to cross-limb transfer. However, the neural processes that mediate learning in the trained hemisphere vs. transfer in the untrained hemisphere appear distinct. PMID:27169508