Science.gov

Sample records for adaptive fft architecture

  1. A High-Throughput, Adaptive FFT Architecture for FPGA-Based Space-Borne Data Processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Kayla; Zheng, Jason; He, Yutao; Shah, Biren

    2010-01-01

    Historically, computationally intensive data processing for space-borne instruments has relied heavily on ground-based computing resources. But with recent advances in the functional densities of Field-Programmable Gate Arrays (FPGAs), there has been an increasing desire to shift more processing on-board, thereby relaxing the downlink data bandwidth requirements. Fast Fourier Transforms (FFTs) are commonly used building blocks for data processing applications, with a growing need to increase the FFT block size. Many existing FFT architectures have mainly emphasized low power consumption or resource usage; but as the block size of the FFT grows, the throughput is often compromised first. In addition to power and resource constraints, space-borne digital systems are also limited to a small set of space-qualified memory elements, which typically lag behind their commercially available counterparts in capacity and bandwidth. The bandwidth limitation of the external memory creates a bottleneck for a high-throughput FFT design with a large block size. In this paper, we present the Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture for a moderately large block size (32K) with consideration of power consumption and resource usage as well as throughput. We also show that the architecture can be easily adapted to different FFT block sizes with different throughput and power requirements. The resulting design is completely contained within an FPGA, without relying on external memories. Implementation results are summarized.

  2. High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors

    NASA Technical Reports Server (NTRS)

    Nguyen-Kobayashi, Kayla; Zheng, Jason X.; He, Yutao; Shah, Biren N.

    2011-01-01

    Exponential growth in microelectronics technology such as field-programmable gate arrays (FPGAs) has enabled high-performance spaceborne instruments with increasing onboard data processing capabilities. As a commonly used digital signal processing (DSP) building block, the fast Fourier transform (FFT) has been of great interest in onboard data processing applications, which need to strike a reasonable balance between high performance (throughput, block size, etc.) and low resource usage (power, silicon footprint, etc.). It is also desirable for a single design to be reusable and adaptable to instruments with different requirements. The Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture was developed, in which the high-throughput benefits of the parallel FFT structure and the low resource usage of Singleton's single-butterfly method are exploited. The result is a wide-kernel, multi-pass, adaptive FFT architecture. The 32K-point MPWK-FFT architecture includes 32 radix-2 butterflies, 64 FIFOs to store the real inputs, 64 FIFOs to store the imaginary inputs, complex twiddle factor storage, and FIFO logic to route the outputs to the correct FIFO. The inputs are stored in sequential fashion into the FIFOs, and the outputs of each butterfly are sequentially written first into the even FIFO, then the odd FIFO. Because of the order in which the outputs are written into the FIFOs, the depth of the even FIFOs (768 entries each) is 1.5 times that of the odd FIFOs (512 entries each). The total memory needed for data storage, assuming that each sample is 36 bits, is 2.95 Mbits. The twiddle factors are stored in internal ROM inside the FPGA for fast access time. The total memory size to store the twiddle factors is 589.9 Kbits. This FFT structure combines the benefits of high throughput from the parallel FFT kernels and low resource usage from the multi-pass FFT kernels with the desired adaptability. Space instrument missions that need onboard FFT capabilities such as the
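
    The memory figures quoted in this record follow from the stated FIFO counts, depths and 36-bit sample width. The short check below is an illustrative calculation, not part of the record; it assumes 64 even FIFOs of depth 768 and 64 odd FIFOs of depth 512 (real and imaginary banks combined) and one 36-bit twiddle word per input pair (16K words), which reproduces the quoted totals to within rounding.

    ```python
    # Sanity check of the MPWK-FFT memory budget quoted above.
    # Assumptions (not stated explicitly in the record): 64 even FIFOs of depth 768,
    # 64 odd FIFOs of depth 512, 36-bit entries, and 16K twiddle words of 36 bits each.
    SAMPLE_BITS = 36

    even_fifos, even_depth = 64, 768
    odd_fifos, odd_depth = 64, 512

    data_bits = (even_fifos * even_depth + odd_fifos * odd_depth) * SAMPLE_BITS
    print(f"data storage:    {data_bits / 1e6:.2f} Mbits")     # ~2.95 Mbits

    twiddle_words = 32 * 1024 // 2        # one twiddle factor per butterfly input pair
    twiddle_bits = twiddle_words * SAMPLE_BITS
    print(f"twiddle storage: {twiddle_bits / 1e3:.1f} Kbits")  # ~589.8 Kbits (quoted: 589.9)
    ```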

  3. FFT Computation with Systolic Arrays, A New Architecture

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The use of the Cooley-Tukey algorithm for computing the 1-D FFT lends itself to a particular matrix factorization which suggests direct implementation by linearly connected systolic arrays. Here we present a new systolic architecture that embodies this algorithm. This implementation requires a smaller number of processors and a smaller number of memory cells than other recent implementations, as well as having all the advantages of systolic arrays. For the implementation of the decimation-in-frequency case, word-serial data input allows continuous real-time operation without the need for a serial-to-parallel conversion device. No control or data stream switching is necessary. Computer simulation of this architecture was done in the context of a 1024-point DFT with a fixed-point processor, and CMOS processor implementation has started.
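
    The record describes a systolic implementation of the Cooley-Tukey factorization; as a software point of reference only, the sketch below spells out the radix-2 decimation-in-frequency recursion that such an array embodies. It is an illustrative NumPy version, not the systolic architecture itself, and the 1024-point test size simply mirrors the simulation mentioned above.

    ```python
    import numpy as np

    def dif_fft(x):
        """Minimal radix-2 decimation-in-frequency FFT (software illustration of the
        Cooley-Tukey factorization; not the systolic array described in the record)."""
        x = np.asarray(x, dtype=complex)
        n = len(x)
        if n == 1:
            return x
        half = n // 2
        w = np.exp(-2j * np.pi * np.arange(half) / n)    # stage twiddle factors
        top = x[:half] + x[half:]                        # butterfly sums
        bot = (x[:half] - x[half:]) * w                  # butterfly differences, twiddled
        out = np.empty(n, dtype=complex)
        out[0::2] = dif_fft(top)                         # even-indexed outputs
        out[1::2] = dif_fft(bot)                         # odd-indexed outputs
        return out

    # Agrees with a library FFT on a 1024-point example, the size simulated in the record.
    x = np.random.randn(1024) + 1j * np.random.randn(1024)
    assert np.allclose(dif_fft(x), np.fft.fft(x))
    ```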

  4. A Novel Adaptive Frequency Estimation Algorithm Based on Interpolation FFT and Improved Adaptive Notch Filter

    NASA Astrophysics Data System (ADS)

    Shen, Ting-ao; Li, Hua-nan; Zhang, Qi-xin; Li, Ming

    2017-02-01

    The convergence rate and the continuous tracking precision are two main problems of existing adaptive notch filters (ANFs) for frequency tracking. To address these problems, the frequency is first detected by interpolation FFT, which overcomes the slow convergence of the ANF. Then, drawing on the idea of negative feedback, an evaluation factor is designed to monitor the ANF parameters and maintain high frequency-tracking accuracy continuously. On this basis, a novel adaptive frequency estimation algorithm based on interpolation FFT and an improved ANF is put forward. Its basic idea, specific measures and implementation steps are described in detail. The proposed algorithm obtains a fast estimate of the signal frequency with higher accuracy and better universality. Simulation results verify the superiority and validity of the proposed algorithm when compared with the original algorithms.
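
    The coarse stage described above (an FFT followed by interpolation around the peak bin) can be illustrated with a few lines of NumPy. The sketch below uses parabolic interpolation of the log magnitude as one common choice of interpolation rule; it is not the paper's exact estimator, and the adaptive-notch-filter refinement and evaluation factor are omitted.

    ```python
    import numpy as np

    def interp_fft_freq(x, fs):
        """Coarse frequency estimate: FFT peak search plus parabolic interpolation
        over the log magnitude of the neighbouring bins.  Illustrative sketch only;
        the ANF refinement stage of the paper is omitted."""
        n = len(x)
        spec = np.abs(np.fft.rfft(x * np.hanning(n)))
        k = int(np.argmax(spec[1:-1])) + 1            # peak bin, avoiding the edges
        a, b, c = np.log(spec[k - 1:k + 2])           # three bins around the peak
        delta = 0.5 * (a - c) / (a - 2 * b + c)       # fractional-bin offset of the vertex
        return (k + delta) * fs / n

    fs = 1000.0
    t = np.arange(4096) / fs
    x = np.sin(2 * np.pi * 50.3 * t) + 0.05 * np.random.randn(t.size)
    print(interp_fft_freq(x, fs))                     # close to 50.3 Hz
    ```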

  5. A Study on Adapting the Zoom FFT Algorithm to Automotive Millimetre Wave Radar

    NASA Astrophysics Data System (ADS)

    Kuroda, Hiroshi; Takano, Kazuaki

    The millimetre wave radar has been developed for automotive applications such as ACC (Adaptive Cruise Control) and CWS (Collision Warning System). The radar uses MMIC (Monolithic Microwave Integrated Circuit) devices for transmitting and receiving 76 GHz millimetre wave signals. The radar is of the FSK (Frequency Shift Keying) monopulse type. It transmits two frequencies in a time-duplex manner and measures the distance and relative speed of targets. The monopulse feature detects the azimuth angle of targets without a scanning mechanism. The Zoom FFT (Fast Fourier Transform) algorithm, which analyses the frequency domain precisely, has been adapted to the radar for discriminating multiple stationary targets. The Zoom FFT algorithm was evaluated in a test truck. The evaluation results show good performance in discriminating two stationary vehicles in the host lane and the adjacent lane.
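
    For readers unfamiliar with the zoom FFT idea used above, the sketch below shows its usual form: mix the band of interest down to baseband, low-pass filter and decimate, then take a short FFT whose bins now cover only the narrowed band. The parameters and the windowed-sinc filter are illustrative and are not taken from the radar described in the record.

    ```python
    import numpy as np

    def zoom_fft(x, fs, f_center, zoom=16, nfft=1024, numtaps=129):
        """Zoom FFT: complex mix-down, low-pass filter, decimate, then a short FFT.
        Frequency resolution improves by the decimation factor for the same FFT length.
        Illustrative sketch; parameters are not those of the radar in the record."""
        n = np.arange(len(x))
        baseband = x * np.exp(-2j * np.pi * f_center * n / fs)   # shift band of interest to 0 Hz
        taps = np.arange(numtaps) - (numtaps - 1) / 2            # windowed-sinc low-pass,
        h = np.sinc(taps / zoom) * np.hamming(numtaps)           # cutoff at fs / (2 * zoom)
        h /= h.sum()
        narrow = np.convolve(baseband, h, mode="same")[::zoom]   # filter and downsample
        spec = np.fft.fftshift(np.fft.fft(narrow[:nfft]))
        freqs = f_center + np.fft.fftshift(np.fft.fftfreq(nfft, d=zoom / fs))
        return freqs, np.abs(spec)

    fs = 100e3
    t = np.arange(1 << 16) / fs
    x = np.cos(2 * np.pi * 20.01e3 * t) + 0.5 * np.cos(2 * np.pi * 20.05e3 * t)
    freqs, mag = zoom_fft(x, fs, f_center=20e3)
    print(freqs[np.argmax(mag)] / 1e3, "kHz")   # near 20.01 kHz; the two tones are resolved
    ```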

  6. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  7. [The realization of a tracking power-line interference adaptive coherent model based on part FFT].

    PubMed

    Yu, Chao; Li, Gang; Lin, Ling

    2007-08-01

    The adaptive coherent model can be easily implemented and can simultaneously reject power-line interference and baseline wander. However, the coupling between the filter's low-frequency bandwidth and the power-line interference limits its application to ECG filtering. A part fast Fourier transform (FFT) algorithm is presented. It is used to track the power-line frequency and to adjust the sampling frequency. Experiments show that the method still rejects the interference efficiently despite slow fluctuations of the power-line frequency.
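
    The "part FFT" idea, evaluating only a handful of spectral points around the nominal mains frequency instead of a full transform, can be sketched as below. This is an illustrative reading of the idea, not the paper's algorithm: the coherent-model filter and the sample-rate adjustment are omitted, and the frame length, grid spacing and 50 Hz nominal frequency are assumed values.

    ```python
    import numpy as np

    def track_mains_freq(frame, fs, f_nominal=50.0, span_hz=1.0, n_bins=41):
        """Estimate the power-line frequency by evaluating the DTFT only on a narrow
        grid around the nominal mains frequency (a 'partial FFT') and picking the
        strongest point.  Illustrative sketch; the adaptive coherent model and the
        sampling-frequency adjustment described in the record are omitted."""
        n = np.arange(len(frame))
        freqs = np.linspace(f_nominal - span_hz, f_nominal + span_hz, n_bins)
        # Direct evaluation of the selected bins: e^{-j 2 pi f n / fs} for each f.
        kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
        mags = np.abs(kernel @ (frame * np.hanning(len(frame))))
        return freqs[np.argmax(mags)]

    fs = 1000.0
    t = np.arange(2000) / fs
    ecg_like = 0.3 * np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 50.07 * t)
    print(track_mains_freq(ecg_like, fs))   # near 50.05-50.10 Hz on this 0.05 Hz grid
    ```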

  8. Performance Comparison Between Adaptive Line Enhancers and FFT for Fast Carrier Acquisition

    NASA Technical Reports Server (NTRS)

    Yeh, H-G.; Nguyen, T.

    1995-01-01

    Three adaptive line enhancer (ALE) algorithms and architectures, namely the conventional ALE, the ALE with Double Filtering (ALEDF), and the ALE with Coherent Accumulation (ALECA), are investigated for fast carrier acquisition in the time domain.

  9. Architecture for Adaptive Intelligent Systems

    NASA Technical Reports Server (NTRS)

    Hayes-Roth, Barbara

    1993-01-01

    We identify a class of niches to be occupied by 'adaptive intelligent systems (AISs)'. In contrast with niches occupied by typical AI agents, AIS niches present situations that vary dynamically along several key dimensions: different combinations of required tasks, different configurations of available resources, contextual conditions ranging from benign to stressful, and different performance criteria. We present a small class hierarchy of AIS niches that exhibit these dimensions of variability and describe a particular AIS niche, ICU (intensive care unit) patient monitoring, which we use for illustration throughout the paper. We have designed and implemented an agent architecture that supports all of these different kinds of adaptation by exploiting a single underlying theoretical concept: an agent dynamically constructs explicit control plans to guide its choices among situation-triggered behaviors. We illustrate the architecture and its support for adaptation with examples from Guardian, an experimental agent for ICU monitoring.

  10. Adaptive reconfigurable distributed sensor architecture

    NASA Astrophysics Data System (ADS)

    Akey, Mark L.

    1997-07-01

    The infancy of unattended ground-based sensors is quickly coming to an end with the arrival of on-board GPS, networking, and multiple sensing capabilities. Unfortunately, their use is only first-order at best: GPS assists with sensor report registration; networks push sensor reports back to the warfighter and forward control information to the sensors; multispectral sensing is a preset, pre-deployment consideration; and the scalability of large sensor networks is questionable. Current architectures provide little synergy among or within the sensors either before or after deployment, and do not map well to the tactical user's organizational structures and constraints. A new distributed sensor architecture is defined which moves well beyond single-sensor, single-task architectures. Advantages include: (1) automatic mapping of tactical direction to multiple sensors' tasks; (2) decentralized, distributed management of sensor resources and tasks; (3) software reconfiguration of deployed sensors; (4) network scalability and flexibility to meet the constraints of tactical deployments, and traditional combat organizations and hierarchies; and (5) adaptability to new battlefield communication paradigms such as BADD (Battlefield Analysis and Data Dissemination). The architecture is supported in two areas: a recursive, structural definition of resource configuration and management via loose associations; and a hybridization of intelligent software agents with tele-programming capabilities. The distributed sensor architecture is examined within the context of air-deployed ground sensors with acoustic, communication direction finding, and infra-red capabilities. Advantages and disadvantages of the architecture are examined. Consideration is given to extended sensor life (up to 6 months), post-deployment sensor reconfiguration, limited on-board sensor resources (processor and memory), and bandwidth. It is shown that technical tasking of the sensor suite can be automatically

  11. Efficient architecture for a multichannel array subbanding system with adaptive processing

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Nguyen, Huy T.

    2000-11-01

    An architecture is presented for front-end processing in a wideband array system which samples real signals. Such a system may be encountered in cellular telephony, radar, or low SNR digital communications receivers. The subbanding of data enables system data rate reduction, and creates a narrowband condition for adaptive processing within the subbands. The front-end performs passband filtering, equalization, subband decomposition and adaptive beamforming. The subbanding operation is efficiently implemented using a prototype lowpass finite impulse response (FIR) filter, decomposed into polyphase form, combined with a Fast Fourier Transform (FFT) block and a bank of modulating postmultipliers. If the system acquires real inputs, a single FFT may be used to operate on two channels, but a channel separation network is then required for recovery of individual channel data. A sequence of steps is described based on data transformation techniques that enables a maximally efficient implementation of the processing stages and eliminates the need for channel separation. Operation count is reduced, and several layers of processing are eliminated.
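
    The subbanding stage described above (prototype low-pass FIR in polyphase form followed by an FFT across the branches) is sketched below in its simplest, critically sampled form. This is an illustrative channelizer only: the equalization, adaptive beamforming and the two-channels-per-FFT trick with channel separation discussed in the record are not reproduced, and the prototype filter and sizes are assumed values.

    ```python
    import numpy as np

    def polyphase_channelizer(x, num_ch, taps_per_branch=8):
        """Critically sampled polyphase channelizer: a prototype low-pass FIR is split
        into polyphase branches, each decimated input stream is filtered by its branch,
        and an FFT across the branches yields the subband samples.  Illustrative sketch
        of the subbanding stage only."""
        L = num_ch * taps_per_branch
        n = np.arange(L) - (L - 1) / 2
        proto = np.sinc(n / num_ch) * np.hamming(L)     # prototype low-pass, cutoff fs/(2*num_ch)
        proto /= proto.sum()
        poly = proto.reshape(taps_per_branch, num_ch)   # poly[:, r] is the r-th polyphase branch

        x = x[: (len(x) // num_ch) * num_ch]
        blocks = x.reshape(-1, num_ch)                  # blocks[m, r] = x[m*num_ch + r]
        branch_out = np.zeros_like(blocks)
        for r in range(num_ch):
            branch_out[:, r] = np.convolve(blocks[:, r], poly[:, r], mode="same")
        return np.fft.fft(branch_out, axis=1)           # one column per subband

    fs, M = 64000, 16
    t = np.arange(1 << 14) / fs
    x = np.cos(2 * np.pi * 12000 * t)                   # tone in subband 12 kHz / (fs/M) = 3
    sub = polyphase_channelizer(x, M)
    print(np.argmax(np.mean(np.abs(sub) ** 2, axis=0))) # strongest subband: 3 (mirror at 13)
    ```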

  12. Self-Powered Adaptive Switched Architecture Storage

    NASA Astrophysics Data System (ADS)

    El Mahboubi, F.; Bafleur, M.; Boitier, V.; Alvarez, A.; Colomer, J.; Miribel, P.; Dilhac, J.-M.

    2016-11-01

    Ambient energy harvesting coupled to storage is a way to improve the autonomy of wireless sensor networks. Moreover, in some applications with a harsh environment or when a long service lifetime is required, the use of batteries is prohibited. Ultra-capacitors provide in this case a good alternative for energy storage. Such storage must comply with the following requirements: a sufficient voltage must be reached rapidly during the initial charge, a significant amount of energy should be stored, and the unused residual energy must be minimised at discharge. To meet these apparently contradictory criteria, we propose a self-adaptive switched architecture consisting of a matrix of switched ultra-capacitors. We present the results of a self-powered adaptive prototype that shows the improvement in terms of charge time constant, energy utilization rate and, in turn, energy autonomy.

  13. Generalized FFT Beamsteering

    DTIC Science & Technology

    2008-01-01

    ...on a 2D lattice, offers an electronically controlled, agile beam or even multiple beams associated with multiple antenna outputs, a technology...versions both derive fundamentally from the same ideas in elementary group theory. Likewise, classic FFT algorithms and the generalized FFT

  14. RAINBOW: Architecture-Based Adaptation of Complex Systems

    DTIC Science & Technology

    2005-04-01

    ...architectural level. The second problem is to translate architectural repairs into actual system changes. To do this we write a simple table-driven...bandwidth, regardless of the adaptation. Similarly, it is possible to use general probe technology to ameliorate the task of writing probes for particular...it is possible to use existing technologies like ProbeMeister [21] to generate the actual probes, without writing any additional code.

  15. Recursive architecture for large-scale adaptive system

    NASA Astrophysics Data System (ADS)

    Hanahara, Kazuyuki; Sugiyama, Yoshihiko

    1994-09-01

    'Large scale' is one of major trends in the research and development of recent engineering, especially in the field of aerospace structural system. This term expresses the large scale of an artifact in general, however, it also implies the large number of the components which make up the artifact in usual. Considering a large scale system which is especially used in remote space or deep-sea, such a system should be adaptive as well as robust by itself, because its control as well as maintenance by human operators are not easy due to the remoteness. An approach to realizing this large scale, adaptive and robust system is to build the system as an assemblage of components which are respectively adaptive by themselves. In this case, the robustness of the system can be achieved by using a large number of such components and suitable adaptation as well as maintenance strategies. Such a system gathers many research's interest and their studies such as decentralized motion control, configurating algorithm and characteristics of structural elements are reported. In this article, a recursive architecture concept is developed and discussed towards the realization of large scale system which consists of a number of uniform adaptive components. We propose an adaptation strategy based on the architecture and its implementation by means of hierarchically connected processing units. The robustness and the restoration from degeneration of the processing unit are also discussed. Two- and three-dimensional adaptive truss structures are conceptually designed based on the recursive architecture.

  16. Interacting Brain Modules for Memory: An Adaptive Representations Architecture

    DTIC Science & Technology

    2008-06-01

    ...acquired memories for autobiographical events, sometimes collectively called episodic memory (e.g. Squire, 1987; Squire et al., 2004), as well as...

  17. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  18. An Extensible, Lightweight Architecture for Adaptive J2EE Applications

    SciTech Connect

    Gorton, Ian; Liu, Yan; Trivedi, Nihar

    2006-11-01

    Server applications with adaptive behaviors can adapt their functionality in response to environmental changes, and significantly reduce the on-going costs of system deployment and administration. However, developing adaptive server applications is challenging due to the complexity of server technologies and highly dynamic application environments. This paper presents an architecture framework, known as the Adaptive Server Framework (ASF). ASF provides a clear separation between the implementation of adaptive behaviors and the server application business logic. This means a server application can be cost-effectively extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality at which to scale an image based on the load of the server and the network connection speed. The experimental evaluation demonstrates the performance gains possible with adaptive behaviors and the low overhead introduced by ASF.

  19. A Holistic Management Architecture for Large-Scale Adaptive Networks

    DTIC Science & Technology

    2007-09-01


  20. Efficient architecture for adaptive directional lifting-based wavelet transform

    NASA Astrophysics Data System (ADS)

    Yin, Zan; Zhang, Li; Shi, Guangming

    2010-07-01

    Adaptive directional lifting-based wavelet transform (ADL) performs better than conventional lifting in both image compression and de-noising. However, no architecture has been proposed to implement it in hardware because of its high computational complexity and huge internal memory requirements. In this paper, we propose a four-stage pipelined architecture for two-dimensional (2D) ADL with fast computation and high data throughput. The proposed architecture comprises column direction estimation, column lifting, row direction estimation and row lifting, which are performed in parallel in a pipeline mode. Since the column-processed data are transposed, the row processor can reuse the column processor, which decreases the design complexity. In the lifting step, predict and update are also performed in parallel. For an 8×8 image sub-block, the proposed architecture can finish the ADL forward transform within 78 clock cycles. The architecture is implemented on a Xilinx Virtex5 device, on which the clock frequency reaches 367 MHz. The processing time is 212.5 ns, which meets the requirements of real-time systems.
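
    The predict and update passes that the pipeline above parallelizes are the two lifting steps of a wavelet transform. The sketch below shows them for the plain (non-directional) LeGall 5/3 wavelet on image rows, with periodic boundary handling; ADL as described in the record would additionally choose the prediction direction adaptively per sample, which is not reproduced here.

    ```python
    import numpy as np

    def lifting_53_rows(img):
        """One horizontal level of the LeGall 5/3 lifting wavelet: a predict pass then
        an update pass on each row (periodic boundaries via np.roll).  Non-directional
        baseline only; the adaptive direction estimation of ADL is omitted."""
        x = img.astype(float)
        even, odd = x[:, 0::2], x[:, 1::2].copy()
        # Predict: estimate each odd sample from the average of its two even neighbours.
        odd -= 0.5 * (even + np.roll(even, -1, axis=1))
        # Update: correct the even samples so the low band preserves the local mean.
        low = even + 0.25 * (odd + np.roll(odd, 1, axis=1))
        high = odd
        return low, high

    img = np.arange(64, dtype=float).reshape(8, 8)      # each row is a linear ramp
    low, high = lifting_53_rows(img)
    print(high)   # zero except at the wrap-around column: 5/3 predicts linear signals exactly
    ```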

  1. Chandra Automatic Processing Task Interface: An Adaptable System Architecture

    NASA Astrophysics Data System (ADS)

    Grier, J. D., Jr.; Plummer, D.

    2007-10-01

    The Chandra Automatic Processing Task Interface (CAPTAIN) is an operations interface to Chandra Automatic Processing (AP) that provides detailed management and execution of the AP pipelines. In particular, this kind of management is used in Special Automatic Processing (SAP), where there is a need to select specific pipelines that require non-standard handling for reprocessing of a given data set. Standard AP currently contains approximately 200 pipelines with complex interactions between them. As AP has evolved over the life of the mission, so have the number and attributes of these pipelines. As a result, CAPTAIN provides a system architecture capable of managing and adapting to this evolving system. This adaptability has allowed CAPTAIN to also be used to initiate Chandra Source Catalog Automatic Processing (Level 3 AP) and positions it for use with future automatic processing systems. This paper describes the approach to the development of the CAPTAIN system architecture and the maintainable, extensible and reusable software architecture by which it is implemented.

  2. Adaptive intelligent systems for pHealth - an architectural approach.

    PubMed

    González, Carolina; Blobel, Bernd; López, Diego M

    2012-01-01

    Health systems around the globe, especially in developing countries, are facing the challenge of delivering effective, safe, and high-quality public health and individualized health services independent of time and location, and with a minimum of allocated resources (pHealth). In this context, health promotion and health education services are very important, especially in primary care settings. The objective of this paper is to describe the architecture of an adaptive intelligent system developed mainly to support education and training of citizens, but also of health professionals. The proposed architecture describes a system consisting of several agents that cooperatively interact to find and process tutoring materials and disseminate them to users (multi-agent system). A prototype is being implemented which includes medical students from the Medical Faculty at the University of Cauca (Colombia). In the experimental process, the student's learning style - detected with the Bayesian model - is compared against the learning style obtained from a questionnaire (manual approach).

  3. The Genetic Architecture of Climatic Adaptation of Tropical Cattle

    PubMed Central

    Porto-Neto, Laercio R.; Reverter, Antonio; Prayaga, Kishore C.; Chan, Eva K. F.; Johnston, David J.; Hawken, Rachel J.; Fordyce, Geoffry; Garcia, Jose Fernando; Sonstegard, Tad S.; Bolormaa, Sunduimijid; Goddard, Michael E.; Burrow, Heather M.; Henshall, John M.; Lehnert, Sigrid A.; Barendse, William

    2014-01-01

    Adaptation of global food systems to climate change is essential to feed the world. Tropical cattle production, a mainstay of profitability for farmers in the developing world, is dominated by heat, lack of water, poor quality feedstuffs, parasites, and tropical diseases. In these systems European cattle suffer significant stock loss, and the cross breeding of taurine x indicine cattle is unpredictable due to the dilution of adaptation to heat and tropical diseases. We explored the genetic architecture of ten traits of tropical cattle production using genome wide association studies of 4,662 animals varying from 0% to 100% indicine. We show that nine of the ten have genetic architectures that include genes of major effect, and in one case, a single location that accounted for more than 71% of the genetic variation. One genetic region in particular had effects on parasite resistance, yearling weight, body condition score, coat colour and penile sheath score. This region, extending 20 Mb on BTA5, appeared to be under genetic selection possibly through maintenance of haplotypes by breeders. We found that the amount of genetic variation and the genetic correlations between traits did not depend upon the degree of indicine content in the animals. Climate change is expected to expand some conditions of the tropics to more temperate environments, which may impact negatively on global livestock health and production. Our results point to several important genes that have large effects on adaptation that could be introduced into more temperate cattle without detrimental effects on productivity. PMID:25419663

  4. L1 adaptive output-feedback control architectures

    NASA Astrophysics Data System (ADS)

    Kharisov, Evgeny

    This research focuses on the development of L1 adaptive output-feedback control. The objective is to extend the L1 adaptive control framework to a wider class of systems, as well as to obtain architectures that afford more straightforward tuning. We start by considering an existing L1 adaptive output-feedback controller for non-strictly positive real systems based on a piecewise constant adaptation law. It is shown that L1 adaptive control architectures achieve decoupling of adaptation from control, which leads to time-delay and gain margins that are bounded away from zero in the presence of arbitrarily fast adaptation. Computed performance bounds provide quantifiable performance guarantees both for the system output and the control signal in transient and steady state. A noticeable feature of the L1 adaptive controller is that its output behavior can be made close to the behavior of a linear time-invariant system. In particular, proper design of the lowpass filter can achieve an output response which almost scales for different step reference commands. This property is relevant to applications with a human operator in the loop (for example: control augmentation systems of piloted aircraft), since predictability of the system response is necessary for adequate performance of the operator. Next we present applications of the L1 adaptive output-feedback controller in two different fields of engineering: feedback control of human anesthesia, and ascent control of a NASA crew launch vehicle (CLV). The purpose of the feedback controller for anesthesia is to ensure that the patient's level of sedation during surgery follows a prespecified profile. The L1 controller is enabled by the anesthesiologist after he/she achieves a sufficient patient sedation level by introducing sedatives manually. This problem formulation requires a safe switching mechanism, which avoids controller initialization transients. For this purpose, we used an L1 adaptive controller with a special output predictor initialization routine

  5. Architectures for parallel DSP-based adaptive optics feedback control

    NASA Astrophysics Data System (ADS)

    McCarthy, Daniel F.

    1999-11-01

    We have developed a digital image processing system for real-time digital image processing feedback control of adaptive optics systems and simulation of optical image processing algorithms. The system uses multi-computer architecture to capture data from an imaging device such as a charge coupled device camera, process the image data, and control a spatial light-modulator, typically a liquid crystal modulator or a micro-electro mechanical system. The system is a Windows NT Pentium-based system combined with a commercial off-the-shelf peripheral component interconnect bus multi-processor system. The multi-processor is based on the Analog Devices super Harvard architecture computer (SHARC) processor, and field programmable gate arrays (FPGAs). The SHARCs provide a scalable reconfigurable C language-based digital signal processing (DSP) development environment. The FPGAs are typically used as reprogrammable interface controllers designed to integrate several off-the- shelf and custom imagers and light modulators into the system. The FPGAs can also be used in concert with the SHARCs for implementation of application-specific high-speed DSP algorithms.

  6. Context adaptive binary arithmetic decoding on transport triggered architectures

    NASA Astrophysics Data System (ADS)

    Rouvinen, Joona; Jääskeläinen, Pekka; Rintaluoma, Tero; Silvén, Olli; Takala, Jarmo

    2008-02-01

    Video coding standards, such as MPEG-4, H.264, and VC1, define hybrid transform-based, block motion-compensated techniques that employ almost the same coding tools. This observation has been a foundation for defining the MPEG Reconfigurable Multimedia Coding framework, which aims to facilitate multi-format codec design. The idea is to send a description of the codec with the bit stream, and to reconfigure the coding tools accordingly on the fly. This kind of approach favors software solutions, and is a substantial challenge for the implementers of mobile multimedia devices that aim at high energy efficiency. In particular, as high-definition formats are about to be required of mobile multimedia devices, variable-length decoders are becoming a serious bottleneck. Even at current moderate mobile video bitrates, software-based variable-length decoders consume a major portion of the resources of a mobile processor. In this paper we present a Transport Triggered Architecture (TTA) based programmable implementation of Context Adaptive Binary Arithmetic Coding (CABAC) decoding, which is used, e.g., in the main profile of H.264 and in JPEG2000. The solution can also be used for other variable-length codes.

  7. Moho Modeling Using FFT Technique

    NASA Astrophysics Data System (ADS)

    Chen, Wenjin; Tenzer, Robert

    2017-03-01

    To improve numerical efficiency, the Fast Fourier Transform (FFT) technique is employed in Parker-Oldenburg's method for regional gravimetric Moho recovery, which assumes a planar approximation of the Earth. In this study, we extend this approach to global applications under a spherical approximation of the Earth. In particular, we utilize the FFT technique for a global Moho recovery, which is practically realized in two numerical steps. Gravimetric forward modeling is first applied, based on methods for spherical harmonic analysis and synthesis of global gravity and lithospheric structure models, to compute the refined gravity field, which comprises mainly the gravitational signature of the Moho geometry. The gravimetric inverse problem is then solved iteratively in order to determine the Moho depth. The application of the FFT technique to both numerical steps reduces the computation time to a fraction of that required without this fast algorithm. The developed numerical procedures are used to estimate the Moho depth globally, and the gravimetric result is validated using the global (CRUST1.0) and regional (ESC) seismic Moho models. The comparison reveals a relatively good agreement between the gravimetric and seismic models, with an RMS of differences (4-5 km) at the level of the expected uncertainties of the input datasets, and without significant systematic bias.
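
    The planar Parker-Oldenburg scheme that this record extends to the sphere evaluates the gravity effect of an undulating interface with FFTs of successive powers of the interface undulation. The sketch below is an illustrative planar forward step only (not the spherical extension or the iterative inversion of the paper); the density contrast, depth and grid are toy values, and sign and unit conventions differ between implementations.

    ```python
    import numpy as np
    from math import factorial

    G = 6.674e-11  # gravitational constant, SI units

    def parker_forward(h, dx, rho, z0, nterms=5):
        """Planar Parker-type forward gravity of an interface undulation h (m) at mean
        depth z0 (m) with density contrast rho (kg/m^3), via FFTs of powers of h.
        Illustrative planar sketch of the scheme the paper extends to a spherical Earth."""
        kx = 2 * np.pi * np.fft.fftfreq(h.shape[1], d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(h.shape[0], d=dx)
        k = np.hypot(*np.meshgrid(kx, ky))                  # radial wavenumber |k|
        total = np.zeros(h.shape, dtype=complex)
        for n in range(1, nterms + 1):
            total += k ** (n - 1) / factorial(n) * np.fft.fft2(h ** n)
        dg_hat = 2 * np.pi * G * rho * np.exp(-k * z0) * total
        return np.real(np.fft.ifft2(dg_hat))                # gravity effect in m/s^2

    # Toy example: a 5 km Moho undulation of ~50 km width at 30 km mean depth.
    x = np.linspace(-200e3, 200e3, 128)
    X, Y = np.meshgrid(x, x)
    h = 5e3 * np.exp(-(X ** 2 + Y ** 2) / (2 * (50e3) ** 2))
    dg = parker_forward(h, dx=x[1] - x[0], rho=480.0, z0=30e3)
    print(dg.max() * 1e5, "mGal")                           # on the order of tens of mGal
    ```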

  8. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C-based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.

  9. Maize canopy architecture and adaptation to high plant density in long term selection programs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Grain yield since the 1930s has increased more than five-fold, in large part due to improvements in adaptation to high plant density. Changes to plant architecture that are associated with improved light interception have made a major contribution to improved adaptation to high plant density. Improved ...

  10. A reconfigurable ASIP for high-throughput and flexible FFT processing in SDR environment

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao

    2014-04-01

    This paper presents a high-throughput, reconfigurable processor for fast Fourier transform (FFT) processing based on the SDR methodology. It adopts an application-specific instruction-set processor (ASIP) with a single-instruction, multiple-data (SIMD) architecture to exploit the parallelism of the butterfly operations in the FFT algorithm. Moreover, a novel three-dimensional multi-bank memory is proposed for parallel conflict-free accesses. The overall throughput and power efficiency are greatly enhanced by parallel and pipelined processing. A test chip supporting 64- to 2048-point FFTs was set up for experiments. Logic synthesis reveals a maximum clock frequency of 500 MHz and an area of 0.49 mm2 for the processor's logic using a low-power 45-nm technology, and the dynamic power estimate is about 96.6 mW. Compared with previous works, our FFT ASIP achieves higher energy efficiency at relatively low area cost.

  11. Rice Root Architectural Plasticity Traits and Genetic Regions for Adaptability to Variable Cultivation and Stress Conditions.

    PubMed

    Sandhu, Nitika; Raman, K Anitha; Torres, Rolando O; Audebert, Alain; Dardou, Audrey; Kumar, Arvind; Henry, Amelia

    2016-08-01

    Future rice (Oryza sativa) crops will likely experience a range of growth conditions, and root architectural plasticity will be an important characteristic to confer adaptability across variable environments. In this study, the relationship between root architectural plasticity and adaptability (i.e. yield stability) was evaluated in two traditional × improved rice populations (Aus 276 × MTU1010 and Kali Aus × MTU1010). Forty contrasting genotypes were grown in direct-seeded upland and transplanted lowland conditions with drought and drought + rewatered stress treatments in lysimeter and field studies and a low-phosphorus stress treatment in a Rhizoscope study. Relationships among root architectural plasticity for root dry weight, root length density, and percentage lateral roots with yield stability were identified. Selected genotypes that showed high yield stability also showed a high degree of root plasticity in response to both drought and low phosphorus. The two populations varied in the soil depth effect on root architectural plasticity traits, none of which resulted in reduced grain yield. Root architectural plasticity traits were related to 13 (Aus 276 population) and 21 (Kali Aus population) genetic loci, which were contributed by both the traditional donor parents and MTU1010. Three genomic loci were identified as hot spots with multiple root architectural plasticity traits in both populations, and one locus for both root architectural plasticity and grain yield was detected. These results suggest an important role of root architectural plasticity across future rice crop conditions and provide a starting point for marker-assisted selection for plasticity.

  12. A Software Architecture for Adaptive Modular Sensing Systems

    PubMed Central

    Lyle, Andrew C.; Naish, Michael D.

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614

  13. Novel L1 neural network adaptive control architecture with guaranteed transient performance.

    PubMed

    Cao, Chengyu; Hovakimyan, Naira

    2007-07-01

    In this paper, we present a novel neural network (NN) adaptive control architecture with guaranteed transient performance. With this new architecture, both the input and output signals of an uncertain nonlinear system follow a desired linear system during the transient phase, in addition to stable tracking. The new architecture uses a low-pass filter in the feedback loop, which makes it possible to enforce the desired transient performance by increasing the adaptation gain. For the guaranteed transient performance of both input and output signals of the uncertain nonlinear system, the L1 gain of a cascaded system, comprised of the low-pass filter and the closed-loop desired reference model, is required to be less than the inverse of the Lipschitz constant of the unknown nonlinearities in the system. The tools from this paper can be used to develop a theoretically justified verification and validation framework for NN adaptive controllers. Simulation results illustrate the theoretical findings.
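
    The key condition quoted above can be written compactly. The notation below is illustrative (the paper's own symbols may differ): C(s) denotes the low-pass filter, M(s) the desired closed-loop reference model, and L the Lipschitz constant of the unknown nonlinearities.

    ```latex
    % Transcription of the stability condition stated in the abstract above;
    % the symbols C(s), M(s) and L are our shorthand, not necessarily the paper's.
    \[
      \bigl\lVert C(s)\, M(s) \bigr\rVert_{\mathcal{L}_1} \;<\; \frac{1}{L},
      \qquad
      \lVert G \rVert_{\mathcal{L}_1} \;=\; \int_{0}^{\infty} \lvert g(t) \rvert \,\mathrm{d}t
      \quad \text{for a SISO system $G$ with impulse response $g$.}
    \]
    ```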

  14. Adaptive kinetic-fluid solvers for heterogeneous computing architectures

    NASA Astrophysics Data System (ADS)

    Zabelok, Sergey; Arslanbekov, Robert; Kolobov, Vladimir

    2015-12-01

    We show the feasibility and benefits of porting an adaptive multi-scale kinetic-fluid code to CPU-GPU systems. Challenges are due to the irregular data access for the adaptive Cartesian mesh, the vast difference in computational cost between kinetic and fluid cells, and the desire to evenly load all CPUs and GPUs during grid adaptation and algorithm refinement. Our Unified Flow Solver (UFS) combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. Using GPUs enables hybrid simulations of mixed rarefied-continuum flows with a million Boltzmann cells, each having a 24 × 24 × 24 velocity mesh. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using the discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) solver, and a mesoscopic solver based on the Lattice Boltzmann Method (LBM), all using the adaptive Cartesian mesh. Double-digit speedups on a single GPU and good scaling on multiple GPUs have been demonstrated.

  15. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    SciTech Connect

    Disney, Adam; Reynolds, John

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  16. The genetic architecture of climatic adaptation in tropical cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adaptation of global food systems to climate change is essential to feed the world in the future. Tropical cattle production, an important mainstay of profitability for farmers in the developing world, is dominated by conditions of heat, lack of water, poor quality feedstuffs, parasites, and tropica...

  17. FFT and cone-beam CT reconstruction on graphics hardware

    NASA Astrophysics Data System (ADS)

    Després, Philippe; Sun, Mingshan; Hasegawa, Bruce H.; Prevrhal, Sven

    2007-03-01

    Graphics processing units (GPUs) are increasingly used for general-purpose calculations. Their pipelined architecture can be exploited to accelerate various parallelizable algorithms. Medical imaging applications are inherently well suited to benefit from the development of GPU-based computational platforms. We evaluate in this work the potential of GPUs to improve the execution speed of two common medical imaging tasks, namely Fourier transforms and tomographic reconstructions. A two-dimensional fast Fourier transform (FFT) algorithm was implemented on the GPU and compared, in terms of execution speed, to two popular CPU-based FFT routines. Similarly, the Feldkamp, Davis and Kress (FDK) algorithm for cone-beam tomographic reconstruction was implemented on the GPU and its performance compared to a CPU version. Different reconstruction strategies were employed to assess the performance of various GPU memory layouts. For the specific hardware used, GPU implementations of the FFT were up to 20 times faster than their CPU counterparts, but slower than highly optimized CPU versions of the algorithm. Tomographic reconstructions were faster on the GPU by a factor of up to 30, allowing 256³-voxel reconstructions from 256 projections in about 20 seconds. Overall, GPUs are an attractive alternative to other imaging-dedicated computing hardware like application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) in terms of cost, simplicity and versatility. With the development of simpler language extensions and programming interfaces, GPUs are likely to become essential tools in medical imaging.

  18. Wavelet-Based Adaptive Solvers on Multi-core Architectures for the Simulation of Complex Systems

    NASA Astrophysics Data System (ADS)

    Rossinelli, Diego; Bergdorf, Michael; Hejazialhosseini, Babak; Koumoutsakos, Petros

    We build wavelet-based adaptive numerical methods for the simulation of advection dominated flows that develop multiple spatial scales, with an emphasis on fluid mechanics problems. Wavelet based adaptivity is inherently sequential and in this work we demonstrate that these numerical methods can be implemented in software that is capable of harnessing the capabilities of multi-core architectures while maintaining their computational efficiency. Recent designs in frameworks for multi-core software development allow us to rethink parallelism as task-based, where parallel tasks are specified and automatically mapped into physical threads. This way of exposing parallelism enables the parallelization of algorithms that were considered inherently sequential, such as wavelet-based adaptive simulations. In this paper we present a framework that combines wavelet-based adaptivity with the task-based parallelism. We demonstrate good scaling performance obtained by simulating diverse physical systems on different multi-core and SMP architectures using up to 16 cores.

  19. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.

  20. Adaptive optics at Lick Observatory: System architecture and operations

    SciTech Connect

    Brase, J.M.; An, J.; Avicola, K.

    1994-03-01

    We will describe an adaptive optics system developed for the 1 meter Nickel and 3 meter Shane telescopes at Lick Observatory. Observing wavelengths will be in the visible for the 1 meter telescope and in the near IR on the 3 meter. The adaptive optics system design is based on a 69 actuator continuous surface deformable mirror and a Hartmann wavefront sensor equipped with an intensified CCD framing camera. The system has been tested at the Cassegrain focus of the 1 meter telescope where the subaperture size is 12.5 cm. The wavefront control calculations are performed on a four processor single board computer controlled by a Unix-based system. We will describe the optical system and give details of the wavefront control system design. We will present predictions of the system performance and initial test results.

  1. Adaptive optics at Lick Observatory: system architecture and operations

    NASA Astrophysics Data System (ADS)

    Brase, James M.; An, Jong; Avicola, Kenneth; Bissinger, Horst D.; Friedman, Herbert W.; Gavel, Donald T.; Johnston, Brooks; Max, Claire E.; Olivier, Scot S.; Presta, Robert W.; Rapp, David A.; Salmon, J. Thaddeus; Waltjen, Kenneth E.; Fisher, William A.

    1994-05-01

    We will describe an adaptive optics system developed for the 1 meter Nickel and 3 meter Shane telescopes at Lick Observatory. Observing wavelengths will be in the visible for the 1 meter telescope and in the near IR on the 3 meter. The adaptive optics system design is based on a 69 actuator continuous surface deformable mirror and a Hartmann wavefront sensor equipped with an intensified CCD framing camera. The system has been tested at the Cassegrain focus of the 1 meter telescope where the subaperture size is 12.5 cm. The wavefront control calculations are performed on a four processor single board computer controlled by a Unix-based system. We will describe the optical system and give details of the wavefront control system design. We will present predictions of the system performance and initial test results.

  2. Review of pre-FFT equalization techniques and their application to 4G

    NASA Astrophysics Data System (ADS)

    Armour, Simon; Doufexi, Angela; Nix, Andrew; Beach, Mark; McGeehan, J.

    2001-11-01

    In this paper a review of the Pre-FFT Equalization technique is presented, with a particular focus on 4G applications. The essential concepts and motivations for the use of this technique are first presented. Subsequently, previous research on the topic, both by the authors and by others, is reviewed. In particular, methods for implementing the Pre-FFT Equalizer itself and for adapting it are reviewed in detail. The issue of noise amplification and the use of Channel State Information in the COFDM system to mitigate this phenomenon are also discussed. The application of a Pre-FFT Equalizer to a possible COFDM-based 4G standard is then discussed, and software simulations are used to demonstrate the benefits that a Pre-FFT Equalizer can achieve in a 4G system.
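
    The basic arrangement reviewed above, a short time-domain filter applied to the received samples before the OFDM FFT, can be sketched as follows. This is an illustrative least-squares design on a known training symbol, not one of the specific adaptation schemes surveyed in the paper; the cyclic prefix is assumed to have been removed, so the channel acts circularly.

    ```python
    import numpy as np

    def cconv(a, b, n):
        """Circular convolution (models the channel after cyclic-prefix removal)."""
        return np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n))

    def design_pre_fft_eq(rx_train, tx_train, num_taps=8):
        """Least-squares time-domain (pre-FFT) equalizer: choose short FIR taps w so that
        w circularly convolved with the received training symbol matches the transmitted
        one.  Illustrative sketch; the adaptation methods reviewed in the paper differ."""
        A = np.column_stack([np.roll(rx_train, k) for k in range(num_taps)])
        w, *_ = np.linalg.lstsq(A, tx_train, rcond=None)
        return w

    nfft = 64
    qpsk = lambda: np.exp(1j * (np.pi / 2 * np.random.randint(0, 4, nfft) + np.pi / 4))
    h = np.array([1.0, 0.4 + 0.3j, 0.2])                 # toy multipath channel

    X_train, X_data = qpsk(), qpsk()
    y_train = cconv(np.fft.ifft(X_train), h, nfft)
    y_data = cconv(np.fft.ifft(X_data), h, nfft)

    w = design_pre_fft_eq(y_train, np.fft.ifft(X_train))
    X_raw = np.fft.fft(y_data)                           # demodulated without equalization
    X_eq = np.fft.fft(cconv(y_data, w, nfft))            # demodulated after pre-FFT equalization
    evm = lambda Z: np.sqrt(np.mean(np.abs(Z - X_data) ** 2))
    print(evm(X_raw), evm(X_eq))                         # error drops markedly after equalization
    ```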

  3. Alternative Optical Architectures for Multichannel Adaptive Optical Processing

    DTIC Science & Technology

    1993-04-01

    ...performance of the system can also be improved if we note that the input ... need not be centered at ... but could be centred at ... so that...characterization of a multichannel adaptive system that can perform cancellation of multiple wideband interference sources in the presence...development of a single-loop electronic canceller for improved phase stability after the AO tapped delay line system.

  4. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
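
    The combination method referred to above alternates between the two classic BFS directions. The sketch below shows the basic hybrid with a simple static threshold for the switch; the paper's contribution is to replace such trial-and-error heuristics with a regression-based predictor of the switching point, which is not reproduced here, and the alpha value is an assumed constant.

    ```python
    def hybrid_bfs(adj, source, alpha=4.0):
        """Direction-optimizing BFS: expand top-down while the frontier is small and
        switch to bottom-up once the frontier's outgoing edges dominate.  The static
        threshold `alpha` stands in for the regression-based switching predictor
        proposed in the paper."""
        n = len(adj)
        dist = [-1] * n
        dist[source] = 0
        frontier, level = [source], 0
        while frontier:
            frontier_edges = sum(len(adj[u]) for u in frontier)
            unvisited = [v for v in range(n) if dist[v] < 0]
            if frontier_edges * alpha < sum(len(adj[v]) for v in unvisited):
                # Top-down step: expand every frontier vertex.
                nxt = []
                for u in frontier:
                    for v in adj[u]:
                        if dist[v] < 0:
                            dist[v] = level + 1
                            nxt.append(v)
            else:
                # Bottom-up step: each unvisited vertex looks for any parent in the frontier.
                in_frontier = set(frontier)
                nxt = [v for v in unvisited if any(u in in_frontier for u in adj[v])]
                for v in nxt:
                    dist[v] = level + 1
            frontier, level = nxt, level + 1
        return dist

    # Small undirected example graph as adjacency lists.
    adj = [[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]]
    print(hybrid_bfs(adj, 0))   # [0, 1, 1, 2, 3]
    ```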

  5. Adaptation of pancreatic islet cyto-architecture during development

    NASA Astrophysics Data System (ADS)

    Striegel, Deborah A.; Hara, Manami; Periwal, Vipul

    2016-04-01

    Plasma glucose in mammals is regulated by hormones secreted by the islets of Langerhans embedded in the exocrine pancreas. Islets consist of endocrine cells, primarily α, β, and δ cells, which secrete glucagon, insulin, and somatostatin, respectively. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Varying demands and available nutrients during development produce changes in the local connectivity of β cells in an islet. We showed in earlier work that graph theory provides a framework for the quantification of the seemingly stochastic cyto-architecture of β cells in an islet. To quantify the dynamics of endocrine connectivity during development requires a framework for characterizing changes in the probability distribution on the space of possible graphs, essentially a Fokker-Planck formalism on graphs. With large-scale imaging data for hundreds of thousands of islets containing millions of cells from human specimens, we show that this dynamics can be determined quantitatively. Requiring that rearrangement and cell addition processes match the observed dynamic developmental changes in quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that there is a transient shift in preferred connectivity for β cells between 1-35 weeks and 12-24 months.

  6. Adaptive changes in the kinetochore architecture facilitate proper spindle assembly

    PubMed Central

    Magidson, Valentin; Paul, Raja; Yang, Nachen; Ault, Jeffrey G.; O’Connell, Christopher B.; Tikhonenko, Irina; McEwen, Bruce F.; Mogilner, Alex; Khodjakov, Alexey

    2015-01-01

    Mitotic spindle formation relies on the stochastic capture of microtubules at kinetochores. Kinetochore architecture affects the efficiency and fidelity of this process with large kinetochores expected to accelerate assembly at the expense of accuracy, and smaller kinetochores to suppress errors at the expense of efficiency. We demonstrate that upon mitotic entry, kinetochores in cultured human cells form large crescents that subsequently compact into discrete structures on opposite sides of the centromere. This compaction occurs only after the formation of end-on microtubule attachments. Live-cell microscopy reveals that centromere rotation mediated by lateral kinetochore-microtubule interactions precedes formation of end-on attachments and kinetochore compaction. Computational analyses of kinetochore expansion-compaction in the context of lateral interactions correctly predict experimentally-observed spindle assembly times with reasonable error rates. The computational model suggests that larger kinetochores reduce both errors and assembly times, which can explain the robustness of spindle assembly and the functional significance of enlarged kinetochores. PMID:26258631

  7. Dimensions of Usability: Cougaar, Aglets and Adaptive Agent Architecture (AAA)

    SciTech Connect

    Haack, Jereme N.; Cowell, Andrew J.; Gorton, Ian

    2004-06-20

    Research and development organizations are constantly evaluating new technologies in order to implement the next generation of advanced applications. At Pacific Northwest National Laboratory, agent technologies are perceived as an approach that can provide a competitive advantage in the construction of highly sophisticated software systems in a range of application areas. An important factor in selecting a successful agent architecture is the level of support it provides the developer: examples of use, integration into the current workflow, and community support. Without such assistance, the developer must invest more effort into learning instead of applying the technology. Like many other applied research organizations, our staff are not dedicated to a single project and must acquire new skills as required, underlining the importance of being able to quickly become proficient. A project was instigated to evaluate three candidate agent toolkits across the dimensions of support they provide. This paper reports on the outcomes of this evaluation and provides insights into the agent technologies evaluated.

  8. An optimized, universal hardware-based adaptive correlation receiver architecture

    NASA Astrophysics Data System (ADS)

    Zhu, Zaidi; Suarez, Hernan; Zhang, Yan; Wang, Shang

    2014-05-01

    Traditional radar RF transceivers, like communication transceivers, have basic elements such as baseband waveform processing, IF/RF up-down conversion, transmitter power circuits, receiver front-ends, and antennas, which are shown in the upper half of Figure 1. For modern radars with diversified and sophisticated waveforms, we frequently observe that transceiver behaviors, especially nonlinear behaviors, depend on the waveform amplitudes, frequency contents, and instantaneous phases. It is usually a troublesome process to tune an RF transceiver to optimum when different waveforms are used. Another issue arises from waveform-induced interference: for example, the range side-lobe (RSL) caused by the waveforms may be further increased by distortions once the signals pass through the entire transceiver chain. This study is inspired by two existing solutions from the commercial communication industry, digital pre-distortion (DPD) and adaptive channel estimation and interference mitigation (AIM), and combines these technologies into a single chip or board that can be inserted into an existing transceiver system. This device is named the RF Transceiver Optimizer (RTO). The lower half of Figure 1 shows the basic elements of the RTO. With the RTO, the digital baseband processing does not need to take into account the transceiver performance with diversified waveforms, such as the transmitter efficiency and chain distortion (and the intermodulation products caused by distortions). Neither does it need to be concerned with the pulse compression (or correlation receiver) process and the related mitigation. The focus is simply the information about the ground truth carried by the main peak of the correlation receiver outputs. The RTO can be considered an extension of the existing calibration process, with the benefits of being automatic, adaptive, and universal. Currently, the main techniques to implement the RTO are the digital pre- or -post

  9. A hybrid behavioural rule of adaptation and drift explains the emergent architecture of antagonistic networks

    PubMed Central

    Nuwagaba, S.; Zhang, F.; Hui, C.

    2015-01-01

    Ecological processes that can realistically account for network architectures are central to our understanding of how species assemble and function in ecosystems. Consumer species are constantly selecting and adjusting which resource species are to be exploited in an antagonistic network. Here we incorporate a hybrid behavioural rule of adaptive interaction switching and random drift into a bipartite network model. Predictions are insensitive to the model parameters and the initial network structures, and agree extremely well with the observed levels of modularity, nestedness and node-degree distributions for 61 real networks. Evolutionary and community assemblage histories only indirectly affect network structure by defining the size and complexity of ecological networks, whereas adaptive interaction switching and random drift carve out the details of network architecture at the faster ecological time scale. The hybrid behavioural rule of both adaptation and drift could well be the key processes for structure emergence in real ecological networks. PMID:25925104

  10. A generic architecture for an adaptive, interoperable and intelligent type 2 diabetes mellitus care system.

    PubMed

    Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Schulz, Stefan

    2015-01-01

    Chronic diseases such as Type 2 Diabetes Mellitus (T2DM) constitute a major burden on the global health economy. T2DM care management requires a multi-disciplinary and multi-organizational approach. Because of differences in languages and terminologies, education, experience, skills, etc., such an approach poses a particular interoperability challenge. The solution is a flexible, scalable, business-controlled, adaptive, knowledge-based, intelligent system following a systems-oriented, architecture-centric, ontology-based and policy-driven approach. The architecture of real systems is described using the basics and principles of the Generic Component Model (GCM). The Business Process Modeling Notation (BPMN) is used to represent the functional aspects of a system. The resulting system architecture is presented using a GCM graphical notation, class diagrams and BPMN diagrams. The architecture-centric approach considers the compositional nature of the real-world system and its functionalities, guarantees coherence, and supports correct inferences. The level of generality provided in this paper facilitates use-case-specific adaptations of the system. In this way, intelligent, adaptive and interoperable T2DM care systems can be derived from the presented model, as demonstrated in another publication.

  11. Conservatism and novelty in the genetic architecture of adaptation in Heliconius butterflies.

    PubMed

    Huber, B; Whibley, A; Poul, Y L; Navarro, N; Martin, A; Baxter, S; Shah, A; Gilles, B; Wirth, T; McMillan, W O; Joron, M

    2015-05-01

    Understanding the genetic architecture of adaptive traits has been at the centre of modern evolutionary biology since Fisher; however, evaluating how the genetic architecture of ecologically important traits influences their diversification has been hampered by the scarcity of empirical data. Now, high-throughput genomics facilitates the detailed exploration of variation in the genome-to-phenotype map among closely related taxa. Here, we investigate the evolution of wing pattern diversity in Heliconius, a clade of neotropical butterflies that have undergone an adaptive radiation for wing-pattern mimicry and are influenced by distinct selection regimes. Using crosses between natural wing-pattern variants, we used genome-wide restriction site-associated DNA (RAD) genotyping, traditional linkage mapping and multivariate image analysis to study the evolution of the architecture of adaptive variation in two closely related species: Heliconius hecale and H. ismenius. We implemented a new morphometric procedure for the analysis of whole-wing pattern variation, which allows visualising spatial heatmaps of genotype-to-phenotype association for each quantitative trait locus separately. We used the H. melpomene reference genome to fine-map variation for each major wing-patterning region uncovered, evaluated the role of candidate genes and compared genetic architectures across the genus. Our results show that, although the loci responding to mimicry selection are highly conserved between species, their effect size and phenotypic action vary throughout the clade. Multilocus architecture is ancestral and maintained across species under directional selection, whereas the single-locus (supergene) inheritance controlling polymorphism in H. numata appears to have evolved only once. Nevertheless, the conservatism in the wing-patterning toolkit found throughout the genus does not appear to constrain phenotypic evolution towards local adaptive optima.

  12. Conservatism and novelty in the genetic architecture of adaptation in Heliconius butterflies

    PubMed Central

    Huber, B; Whibley, A; Poul, Y L; Navarro, N; Martin, A; Baxter, S; Shah, A; Gilles, B; Wirth, T; McMillan, W O; Joron, M

    2015-01-01

    Understanding the genetic architecture of adaptive traits has been at the centre of modern evolutionary biology since Fisher; however, evaluating how the genetic architecture of ecologically important traits influences their diversification has been hampered by the scarcity of empirical data. Now, high-throughput genomics facilitates the detailed exploration of variation in the genome-to-phenotype map among closely related taxa. Here, we investigate the evolution of wing pattern diversity in Heliconius, a clade of neotropical butterflies that have undergone an adaptive radiation for wing-pattern mimicry and are influenced by distinct selection regimes. Using crosses between natural wing-pattern variants, we used genome-wide restriction site-associated DNA (RAD) genotyping, traditional linkage mapping and multivariate image analysis to study the evolution of the architecture of adaptive variation in two closely related species: Heliconius hecale and H. ismenius. We implemented a new morphometric procedure for the analysis of whole-wing pattern variation, which allows visualising spatial heatmaps of genotype-to-phenotype association for each quantitative trait locus separately. We used the H. melpomene reference genome to fine-map variation for each major wing-patterning region uncovered, evaluated the role of candidate genes and compared genetic architectures across the genus. Our results show that, although the loci responding to mimicry selection are highly conserved between species, their effect size and phenotypic action vary throughout the clade. Multilocus architecture is ancestral and maintained across species under directional selection, whereas the single-locus (supergene) inheritance controlling polymorphism in H. numata appears to have evolved only once. Nevertheless, the conservatism in the wing-patterning toolkit found throughout the genus does not appear to constrain phenotypic evolution towards local adaptive optima. PMID:25806542

  13. Efficient Two-Dimensional-FFT Program

    NASA Technical Reports Server (NTRS)

    Miko, J.

    1992-01-01

    Program computes 64 X 64-point fast Fourier transform in less than 17 microseconds. Optimized 64 X 64 Point Two-Dimensional Fast Fourier Transform combines performance of real- and complex-valued one-dimensional fast Fourier transforms (FFT's) to execute two-dimensional FFT and coefficients of power spectrum. Coefficients used in many applications, including analyzing spectra, convolution, digital filtering, processing images, and compressing data. Source code written in C, 8086 Assembly, and Texas Instruments TMS320C30 Assembly languages.
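
    The row-column decomposition the program exploits can be sketched in a few lines of NumPy; this is an illustrative reimplementation of the general idea, not the optimized C/assembly code described above.

```python
# Minimal NumPy sketch (not the original C/assembly program): a 64 x 64
# two-dimensional FFT built from one-dimensional FFTs applied along rows and
# then columns, followed by the power-spectrum coefficients.
import numpy as np

x = np.random.rand(64, 64)               # real-valued 64 x 64 input frame

rows = np.fft.fft(x, axis=1)             # 1-D FFT of every row
spectrum = np.fft.fft(rows, axis=0)      # 1-D FFT of every resulting column
power = np.abs(spectrum) ** 2            # power-spectrum coefficients

# The row/column decomposition matches the direct 2-D transform.
assert np.allclose(spectrum, np.fft.fft2(x))
print(power.shape)
```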

  14. Adaptive digital beamforming architecture and algorithm for nulling mainlobe and multiple sidelobe jammers

    NASA Astrophysics Data System (ADS)

    Yu, Kai-Bor; Murrow, David J.

    1999-11-01

    This paper describes a digital beamforming architecture for nulling a mainlobe jammer and multiple sidelobe jammers while maintaining the angle-estimation accuracy of the monopulse ratio. A sidelobe-jammer-canceling adaptive array is cascaded with a mainlobe jammer canceller, and a mainlobe maintenance technique, or constrained adaptation, is imposed during the sidelobe cancellation process so that sidelobe jammer cancellation does not distort the subsequent mainlobe cancellation. The sidelobe jammers and the mainlobe jammer are thus cancelled sequentially in separate processes. This adaptive digital beamforming technique improves radar processing for determining the angular location of a target; specifically, it improves the monopulse technique by adaptively suppressing jamming before the sum and difference beams are formed, so that the accuracy of the monopulse ratio is maintained in the presence of jamming.
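
    For readers unfamiliar with adaptive nulling, the sketch below shows a generic sample-matrix-inversion beamformer placing nulls on two hypothetical sidelobe jammers while preserving boresight gain; it is a simplified stand-in, not the cascaded sidelobe/mainlobe canceller or the monopulse processing described in the paper.

```python
# Minimal sketch of generic adaptive nulling for a uniform linear array using
# sample matrix inversion (MVDR-style weights). Array size, jammer angles, and
# powers are illustrative assumptions.
import numpy as np

def steering(n_elems, angle_rad, spacing=0.5):
    # Array response of a uniform linear array (half-wavelength spacing).
    k = np.arange(n_elems)
    return np.exp(2j * np.pi * spacing * k * np.sin(angle_rad))

n = 16
look = steering(n, np.deg2rad(0.0))                     # target at boresight
jammers = [steering(n, np.deg2rad(a)) for a in (25.0, -40.0)]

# Simulated training snapshots: strong jammers plus receiver noise (no target).
rng = np.random.default_rng(0)
m = 2000
snapshots = sum(10.0 * np.outer(j, rng.standard_normal(m) + 1j * rng.standard_normal(m))
                for j in jammers)
snapshots = snapshots + rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

R = snapshots @ snapshots.conj().T / m                  # sample covariance matrix
w = np.linalg.solve(R, look)                            # R^-1 times steering vector
w = w / (look.conj() @ w)                               # unit gain toward the target

for j in jammers:
    print(20 * np.log10(abs(w.conj() @ j)))             # jammer directions strongly attenuated (negative dB)
```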

  15. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, such as a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this representation to generate corrective terms in the framework of a control task. Furthermore, we illustrate how a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. We then show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller, and we evaluate how the scheme scales to simulated plants with many degrees of freedom (7 DOFs).

  16. Improving transient performance of adaptive control architectures using frequency-limited system error dynamics

    NASA Astrophysics Data System (ADS)

    Yucelen, Tansel; De La Torre, Gerardo; Johnson, Eric N.

    2014-11-01

    Although adaptive control theory offers mathematical tools to achieve system performance without excessive reliance on dynamical system models, its applications to safety-critical systems can be limited due to poor transient performance and robustness. In this paper, we develop an adaptive control architecture to achieve stabilisation and command following of uncertain dynamical systems with improved transient performance. Our framework consists of a new reference system and an adaptive controller. The proposed reference system captures a desired closed-loop dynamical system behaviour modified by a mismatch term representing the high-frequency content between the uncertain dynamical system and this reference system, i.e., the system error. In particular, this mismatch term allows the frequency content of the system error dynamics to be limited, which is used to drive the adaptive controller. It is shown that this key feature of our framework yields fast adaptation without incurring high-frequency oscillations in the transient performance. We further show the effects of design parameters on the system performance, analyse closeness of the uncertain dynamical system to the unmodified (ideal) reference system, discuss robustness of the proposed approach with respect to time-varying uncertainties and disturbances, and make connections to gradient minimisation and classical control theory. A numerical example is provided to demonstrate the efficacy of the proposed architecture.

  17. Spatially constrained adaptive rewiring in cortical networks creates spatially modular small world architectures.

    PubMed

    Jarman, Nicholas; Trengove, Chris; Steur, Erik; Tyukin, Ivan; van Leeuwen, Cees

    2014-12-01

    A modular small-world topology in functional and anatomical networks of the cortex is eminently suitable as an information processing architecture. This structure was shown in model studies to arise adaptively; it emerges through rewiring of network connections according to patterns of synchrony in ongoing oscillatory neural activity. However, to improve the applicability of such models to the cortex, the previously neglected spatial characteristics of cortical connectivity need to be respected. For this purpose we consider networks endowed with a metric by embedding them into a physical space. We provide an adaptive rewiring model with a spatial distance function and a corresponding spatially local rewiring bias. The spatially constrained adaptive rewiring principle is able to steer the evolving network topology to small-world status, even more consistently so than without spatial constraints. Locally biased adaptive rewiring results in a spatial layout of the connectivity structure in which topologically segregated modules correspond to spatially segregated regions, and these regions are linked by long-range connections. The principle of locally biased adaptive rewiring may thus explain both the topological connectivity structure and the spatial distribution of connections between neuronal units in a large-scale cortical architecture.
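
    The following sketch illustrates the model class described above with one spatially biased rewiring step on a random geometric graph; the surrogate activity signals, synchrony measure, and distance penalty are illustrative assumptions, not the authors' exact update rules.

```python
# Minimal sketch (a simplification, not the authors' model) of spatially
# constrained adaptive rewiring: an edge to the least "synchronous" neighbour
# is replaced by an edge to a synchronous, spatially nearby non-neighbour.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 100
pos = rng.random((n, 2))                                  # units embedded in 2-D space
G = nx.random_geometric_graph(n, 0.2, pos={i: tuple(p) for i, p in enumerate(pos)})
activity = rng.standard_normal((n, 200))                  # surrogate activity traces
sync = np.corrcoef(activity)                              # pairwise synchrony proxy

def rewire_once(G, lam=5.0):
    """Replace one weak edge of a random node by a synchronous, nearby edge."""
    i = int(rng.integers(n))
    nbrs = list(G.neighbors(i))
    candidates = [j for j in range(n) if j != i and not G.has_edge(i, j)]
    if not nbrs or not candidates:
        return
    worst = min(nbrs, key=lambda j: sync[i, j])           # least synchronous neighbour
    # Spatially local bias: favour candidates that are synchronous AND close.
    best = max(candidates, key=lambda j: sync[i, j] - lam * np.linalg.norm(pos[i] - pos[j]))
    G.remove_edge(i, worst)
    G.add_edge(i, best)

for _ in range(500):
    rewire_once(G)
print(nx.average_clustering(G), G.number_of_edges())
```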

  18. CZT vs FFT: Flexibility vs Speed

    SciTech Connect

    S. Sirin

    2003-10-01

    Bluestein's Fast Fourier Transform (FFT), commonly called the Chirp-Z Transform (CZT), is a little-known algorithm that offers engineers a high-resolution FFT combined with the ability to specify bandwidth. In the field of digital signal processing, engineers are always challenged to detect tones, frequencies, signatures, or some telltale sign that signifies a condition that must be indicated, ignored, or controlled. One of these challenges is to detect specific frequencies, for instance when looking for tones from telephones or detecting 60-Hz noise on power lines. The Goertzel algorithm described in Embedded Systems Programming, September 2002, offered a powerful tool for finding specific frequencies faster than the FFT. Another challenge involves analyzing a range of frequencies, such as recording frequency response measurements, matching voice patterns, or displaying spectrum information on the face of an amateur radio. To meet this challenge, most engineers use the well-known FFT. The CZT gives the engineer the flexibility to specify bandwidth and outputs real and imaginary frequency components from which the magnitude and phase can be computed. A description of the CZT and a discussion of the advantages and disadvantages of the CZT versus the FFT and Goertzel algorithms will be followed by situations in which the CZT would shine. The reader will find that the CZT is very useful but that flexibility has a price.
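
    As a point of reference for the Goertzel algorithm mentioned above, the sketch below evaluates the energy at a single target frequency without computing a full FFT; the tone, sample rate, and bin selection are illustrative.

```python
# Minimal sketch of the Goertzel algorithm: detect the energy at one target
# frequency with a single pass over the samples.
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Return the squared magnitude of the DFT bin nearest target_hz."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)            # nearest bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:                                 # one pass over the data
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Example: a 60 Hz tone sampled at 1 kHz shows far more energy at 60 Hz than at 200 Hz.
fs = 1000.0
tone = [math.sin(2 * math.pi * 60.0 * t / fs) for t in range(1000)]
print(goertzel_power(tone, fs, 60.0), goertzel_power(tone, fs, 200.0))
```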

  19. The architecture of an event correlation service for adaptive middleware-based applications

    SciTech Connect

    Liu, Yan; Gorton, Ian; Lee, Vinh Kah

    2008-12-01

    Loosely coupled component communication driven by events is a key mechanism for building middleware-based applications that must achieve reliable qualities of service in an adaptive manner. In such a system, events that encapsulate state snapshots of a running system are generated by monitoring components. Hence, an event correlation service is necessary for correlating monitored events from multiple sources. The requirements for event correlation raise two challenges: to seamlessly integrate event correlation services with other services and applications, and to provide reliable event management with minimal delay. This paper describes our experience in the design and implementation of an event correlation service. The design encompasses an event correlator and an event proxy that are integrated with an architecture for adaptive middleware components. The implementation utilizes the Common Base Event (CBE) specification and stateful Web service technologies to support the deployment of the event correlation service in a distributed architecture. We evaluate the performance of the overall solution in a test bed and present the results in terms of the trade-off between the flexibility and the performance overhead of the architecture.

  20. Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi

    2013-03-01

    Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications. However, the automated detection of architectural distortion remains challenging, particularly with respect to sensitivity. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure that selects its filter parameters depending on the thickness of the gland structure. As post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis. Moreover, background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index followed by binarization and labeling. False positives among the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
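
    The idea of adapting Gabor filter parameters to gland thickness can be sketched as follows; the kernel formula is the standard Gabor form, and the thickness-to-wavelength mapping is an illustrative assumption, not the authors' parameter-selection rule.

```python
# Minimal sketch (not the authors' CAD pipeline): a Gabor kernel whose
# wavelength is tied to an assumed gland thickness, illustrating the idea of
# adapting filter parameters to the structure being enhanced.
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel of shape (size, size) oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_theta = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_theta ** 2 + y_theta ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_theta / wavelength)
    return envelope * carrier

gland_thickness_px = 6                    # hypothetical thickness estimate
kernel = gabor_kernel(size=31,
                      wavelength=2 * gland_thickness_px,   # adapt to thickness
                      theta=np.deg2rad(45.0),
                      sigma=gland_thickness_px)
print(kernel.shape)
```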

  1. A biomimetic adaptive algorithm and low-power architecture for implantable neural decoders.

    PubMed

    Rapoport, Benjamin I; Wattanapanitch, Woradorn; Penagos, Hector L; Musallam, Sam; Andersen, Richard A; Sarpeshkar, Rahul

    2009-01-01

    Algorithmically and energetically efficient computational architectures that operate in real time are essential for clinically useful neural prosthetic devices. Such devices decode raw neural data to obtain direct control signals for external devices. They can also perform data compression and vastly reduce the bandwidth and consequently power expended in wireless transmission of raw data from implantable brain-machine interfaces. We describe a biomimetic algorithm and micropower analog circuit architecture for decoding neural cell ensemble signals. The decoding algorithm implements a continuous-time artificial neural network, using a bank of adaptive linear filters with kernels that emulate synaptic dynamics. The filters transform neural signal inputs into control-parameter outputs, and can be tuned automatically in an on-line learning process. We provide experimental validation of our system using neural data from thalamic head-direction cells in an awake behaving rat.
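
    A single channel of such a filter bank can be sketched with an online least-mean-squares (LMS) update, shown below; this digital toy model is an assumption for illustration and stands in for, rather than reproduces, the paper's micropower analog circuits and synaptic kernels.

```python
# Minimal sketch (an assumption, not the paper's analog circuit): one channel of
# a bank of adaptive linear filters trained online with an LMS rule to map
# neural firing-rate inputs to a control parameter.
import numpy as np

rng = np.random.default_rng(1)
n_taps, n_steps, lr = 8, 5000, 0.01

true_kernel = rng.standard_normal(n_taps)     # unknown mapping to be learned
weights = np.zeros(n_taps)                    # adaptive filter kernel
history = np.zeros(n_taps)                    # recent firing-rate samples

for _ in range(n_steps):
    history = np.roll(history, 1)
    history[0] = rng.standard_normal()        # new neural input sample
    target = true_kernel @ history            # desired control output
    prediction = weights @ history
    error = target - prediction
    weights += lr * error * history           # online LMS update

print(np.max(np.abs(weights - true_kernel)))  # small after adaptation
```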

  2. Genomic architecture of adaptive color pattern divergence and convergence in Heliconius butterflies

    PubMed Central

    Supple, Megan A.; Hines, Heather M.; Dasmahapatra, Kanchon K.; Lewis, James J.; Nielsen, Dahlia M.; Lavoie, Christine; Ray, David A.; Salazar, Camilo; McMillan, W. Owen; Counterman, Brian A.

    2013-01-01

    Identifying the genetic changes driving adaptive variation in natural populations is key to understanding the origins of biodiversity. The mosaic of mimetic wing patterns in Heliconius butterflies makes an excellent system for exploring adaptive variation using next-generation sequencing. In this study, we use a combination of techniques to annotate the genomic interval modulating red color pattern variation, identify a narrow region responsible for adaptive divergence and convergence in Heliconius wing color patterns, and explore the evolutionary history of these adaptive alleles. We use whole genome resequencing from four hybrid zones between divergent color pattern races of Heliconius erato and two hybrid zones of the co-mimic Heliconius melpomene to examine genetic variation across 2.2 Mb of a partial reference sequence. In the intergenic region near optix, the gene previously shown to be responsible for the complex red pattern variation in Heliconius, population genetic analyses identify a shared 65-kb region of divergence that includes several sites perfectly associated with phenotype within each species. This region likely contains multiple cis-regulatory elements that control discrete expression domains of optix. The parallel signatures of genetic differentiation in H. erato and H. melpomene support a shared genetic architecture between the two distantly related co-mimics; however, phylogenetic analysis suggests mimetic patterns in each species evolved independently. Using a combination of next-generation sequencing analyses, we have refined our understanding of the genetic architecture of wing pattern variation in Heliconius and gained important insights into the evolution of novel adaptive phenotypes in natural populations. PMID:23674305

  3. AdaRTE: adaptable dialogue architecture and runtime engine. A new architecture for health-care dialogue systems.

    PubMed

    Rojas-Barahona, L M; Giorgino, T

    2007-01-01

    Spoken dialogue systems have been increasingly employed to provide ubiquitous automated access via telephone to information and services for the non-Internet-connected public, and they have been applied successfully in the health care context. Nevertheless, speech-based technology is not easy to implement because it requires a considerable development investment. The advent of VoiceXML for voice applications helped reduce the proliferation of incompatible dialogue interpreters, but introduced new complexity. In response to these issues, we designed an architecture for dialogue representation and interpretation, AdaRTE, which allows developers to lay out dialogue interactions through a high-level formalism that offers both declarative and procedural features. AdaRTE's aim is to provide a foundation for deploying complex and adaptable dialogues while allowing experimentation with, and incremental adoption of, innovative speech technologies. It provides the dynamic behavior of Augmented Transition Networks and enables the generation of different backend formats such as VoiceXML. It is especially targeted at the health care context, where a framework for easy dialogue deployment could lower the barrier to more widespread adoption of dialogue systems.

  4. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance was benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural-network hardware, and the parameters capturing the functional transformation were downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  5. A CORDIC based FFT processor for MIMO channel emulator

    NASA Astrophysics Data System (ADS)

    Xiong, Yanwei; Zhang, Jianhua; Zhang, Ping

    2013-03-01

    With the advent of Multi-Input Multi-Output (MIMO) systems, system performance depends heavily on an accurate representation of the channel conditions, which makes wireless channel emulation increasingly important. The conventional Finite Impulse Response (FIR) based emulator offers good real-time performance, but its complexity rapidly becomes impractical for larger array sizes. A frequency-domain approach can avoid this problem and reduce the complexity for higher-order arrays. A complexity comparison between the time-domain and frequency-domain approaches is made in this paper. The Fast Fourier Transform (FFT), an important component of frequency-domain signal processing, is briefly introduced, and an FPGA system architecture based on the CORDIC algorithm is proposed. The full design is implemented on a Xilinx Virtex-5.
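
    The CORDIC principle underlying the proposed processor can be sketched in floating point as follows; a hardware implementation would use fixed-point shift-and-add operations rather than the explicit multiplications by powers of two written here for clarity.

```python
# Minimal sketch of the CORDIC idea: rotate a vector through a sequence of
# elementary angles atan(2^-i) to obtain cos/sin (e.g., FFT twiddle factors)
# without a general multiplier.
import math

def cordic_cos_sin(angle, iterations=32):
    """Approximate (cos(angle), sin(angle)) for |angle| <= pi/2 via CORDIC."""
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative CORDIC gain

    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0                    # rotate toward residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * gain, y * gain

c, s = cordic_cos_sin(math.pi / 5)
print(c - math.cos(math.pi / 5), s - math.sin(math.pi / 5))   # errors on the order of 1e-9
```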

  6. Reduction of influence of gain errors on performance of adaptive sub-ranging A/D converters with simplified architecture

    NASA Astrophysics Data System (ADS)

    Jedrzejewski, Konrad; Malkiewicz, Łukasz

    2016-09-01

    The paper presents the results of studies of the influence of inter-stage amplifier gain errors on the performance of adaptive sub-ranging analog-to-digital converters (ADCs). It focuses on adaptive sub-ranging ADCs with a simplified analog-part architecture, using only one amplifier and a low-resolution digital-to-analog converter, identical to that of conventional sub-ranging ADCs. The only difference between adaptive sub-ranging ADCs with the simplified architecture and conventional sub-ranging ADCs is the process of determining the output codes of converted samples. The adaptive sub-ranging ADCs calculate the output codes from the sub-codes obtained in the individual conversion stages using an adaptive algorithm. Thanks to the application of an optimal adaptive algorithm, adjusted to the parameters of possible component imperfections and internal noise, the adaptive ADCs outperform, in terms of effective resolution per cycle, conventional sub-ranging ADCs that form their output codes using simple low-level bit operations. Optimization of the conversion algorithm used in adaptive ADCs leads, however, to high sensitivity of adaptive ADC performance to inter-stage gain error. An effective method for reducing this sensitivity in adaptive sub-ranging ADCs with the simplified architecture is proposed and discussed in the paper.
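
    A behavioural sketch of a conventional two-stage sub-ranging conversion, shown below, illustrates why the inter-stage gain matters: a small gain error shifts the amplified residue and can flip the fine sub-code near a bin edge. The resolutions and error value are illustrative assumptions, and the sketch does not include the adaptive code-determination algorithm discussed in the paper.

```python
# Minimal behavioural sketch (an assumption, not the paper's adaptive ADC) of a
# two-stage sub-ranging conversion with an inter-stage gain error.
import math

def subranging_code(vin, coarse_bits=4, fine_bits=4, gain_error=0.0):
    """Convert vin in [0, 1) to a (coarse_bits + fine_bits)-bit integer code."""
    coarse = int(vin * 2 ** coarse_bits)                 # coarse flash decision
    dac = coarse / 2 ** coarse_bits                      # low-resolution DAC level
    gain = 2 ** coarse_bits * (1.0 + gain_error)         # inter-stage amplifier
    residue = (vin - dac) * gain                         # amplified residue
    fine = min(max(int(residue * 2 ** fine_bits), 0), 2 ** fine_bits - 1)
    return (coarse << fine_bits) | fine

vin = 0.4332                                             # sits near a fine-bin edge
ideal = subranging_code(vin)                             # equals floor(vin * 2**8)
skewed = subranging_code(vin, gain_error=0.02)           # 2 % inter-stage gain error
print(ideal, skewed, math.floor(vin * 2 ** 8))           # 110 111 110
```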

  7. Designing a meta-level architecture in Java for adaptive parallelism by mobile software agents

    NASA Astrophysics Data System (ADS)

    Dominic, Stephen Victor

    Adaptive parallelism refers to a parallel computation that runs on a pool of processors that may join or withdraw from a running computation. In this dissertation, a functional system of agents and agent behaviors for adaptive parallelism is developed. Software agents are robust and have a capacity for fault tolerance. Adaptation and fault tolerance emerge from the interaction of self-directed autonomous software agents for a parallel computation application. The multi-agent system can be considered an object-oriented system with a higher-level architectural component, i.e., a meta level for agent behavior. The meta-level object architecture is based on patterns of behavior and communication for mobile agents, which are developed to support cooperative problem solving in a distributed, heterogeneous computing environment. Although parallel processing is a suggested application domain for mobile agents implemented in the Java language, the development of robust agent behaviors implemented in an efficient manner is an active research area. Performance characteristics for three versions of a pattern recognition problem are used to demonstrate a linear speed-up, with efficiency compared against prior work using a traditional client-server protocol in the C language. The best ideas from existing approaches to adaptive parallelism are used to create a single general-purpose paradigm that overcomes problems associated with node failure, the use of a single centralized or shared resource, requirements for clients to actively join a computation, and a variety of other limitations associated with existing systems. The multi-agent system and the experiments show how adaptation and parallelism can be exploited by a meta-architecture for a distributed scientific application that is of particular interest to the design of signal-processing ground stations. To a large extent the framework separates concern for algorithmic design from concern for where and

  8. The Telesupervised Adaptive Ocean Sensor Fleet (TAOSF) Architecture: Coordination of Multiple Oceanic Robot Boats

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Podnar, Gregg W.; Dolan, John M.; Stancliff, Stephen; Lin, Ellie; Hosler, Jeffrey C.; Ames, Troy J.; Higinbotham, John; Moisan, John R.; Moisan, Tiffany A.; Kulczycki, Eric A.

    2008-01-01

    Earth science research must bridge the gap between the atmosphere and the ocean to foster understanding of Earth's climate and ecology. Ocean sensing is typically done with satellites, buoys, and crewed research ships. The limitations of these systems include the fact that satellites are often blocked by cloud cover, and buoys and ships have spatial coverage limitations. This paper describes a multi-robot science exploration software architecture and system called the Telesupervised Adaptive Ocean Sensor Fleet (TAOSF). TAOSF supervises and coordinates a group of robotic boats, the OASIS platforms, to enable in-situ study of phenomena in the ocean/atmosphere interface, as well as on the ocean surface and sub-surface. The OASIS platforms are extended deployment autonomous ocean surface vehicles, whose development is funded separately by the National Oceanic and Atmospheric Administration (NOAA). TAOSF allows a human operator to effectively supervise and coordinate multiple robotic assets using a sliding autonomy control architecture, where the operating mode of the vessels ranges from autonomous control to teleoperated human control. TAOSF increases data-gathering effectiveness and science return while reducing demands on scientists for robotic asset tasking, control, and monitoring. The first field application chosen for TAOSF is the characterization of Harmful Algal Blooms (HABs). We discuss the overall TAOSF architecture, describe field tests conducted under controlled conditions using rhodamine dye as a HAB simulant, present initial results from these tests, and outline the next steps in the development of TAOSF.

  9. Development and Flight Testing of an Adaptable Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.; Taylor, B. Douglas; Brett, Rube R.

    2003-01-01

    Development and testing of an adaptable wireless health-monitoring architecture for a vehicle fleet is presented. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained adaptable expert system. The remote data acquisition unit has an eight-channel programmable digital interface that gives the user discretion in choosing the type of sensors, the number of sensors, the sensor sampling rate, and the sampling duration for each sensor. The architecture provides a framework for tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In the framework, only analysis results are forwarded to the next level to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear.

  10. Genetic architecture for the adaptive origin of annual wild rice, oryza nivara.

    PubMed

    Grillo, Michael A; Li, Changbao; Fowlkes, Angela M; Briggeman, Trevor M; Zhou, Ailing; Schemske, Douglas W; Sang, Tao

    2009-04-01

    The wild progenitors of cultivated rice, Oryza nivara and Oryza rufipogon, provide an experimental system for characterizing the genetic basis of adaptation. The evolution of annual O. nivara from a perennial ancestor resembling its sister species, O. rufipogon, was associated with an ecological shift from persistently wet to seasonally dry habitats. Here we report a quantitative trait locus (QTL) analysis of phenotypic differentiation in life history, mating system, and flowering time between O. nivara and O. rufipogon. The exponential distribution of effect sizes of QTL fits the prediction of a recently proposed population genetic model of adaptation. More than 80% of QTL alleles of O. nivara acted in the same direction of phenotypic evolution, suggesting that they were fixed under directional selection. The loss of photoperiod sensitivity, which might be essential to the survival of the ancestral populations of O. nivara in the new environment, was controlled by QTL of relatively large effect. Mating system evolution from cross- to self-fertilization through the modification of panicle and floral morphology was controlled by QTL of small-to-moderate effect. The lack of segregation of the recessive annual habit in the F(2) mapping populations suggested that the evolution of annual from perennial life form had a complex genetic basis. The study captured the genetic architecture for the adaptive origin of O. nivara and provides a foundation for rigorous experimental tests of population genetic theories of adaptation.

  11. Rice Root Architectural Plasticity Traits and Genetic Regions for Adaptability to Variable Cultivation and Stress Conditions

    PubMed Central

    Sandhu, Nitika; Raman, K. Anitha; Torres, Rolando O.; Audebert, Alain; Dardou, Audrey; Kumar, Arvind; Henry, Amelia

    2016-01-01

    Future rice (Oryza sativa) crops will likely experience a range of growth conditions, and root architectural plasticity will be an important characteristic to confer adaptability across variable environments. In this study, the relationship between root architectural plasticity and adaptability (i.e. yield stability) was evaluated in two traditional × improved rice populations (Aus 276 × MTU1010 and Kali Aus × MTU1010). Forty contrasting genotypes were grown in direct-seeded upland and transplanted lowland conditions with drought and drought + rewatered stress treatments in lysimeter and field studies and a low-phosphorus stress treatment in a Rhizoscope study. Relationships among root architectural plasticity for root dry weight, root length density, and percentage lateral roots with yield stability were identified. Selected genotypes that showed high yield stability also showed a high degree of root plasticity in response to both drought and low phosphorus. The two populations varied in the soil depth effect on root architectural plasticity traits, none of which resulted in reduced grain yield. Root architectural plasticity traits were related to 13 (Aus 276 population) and 21 (Kali Aus population) genetic loci, which were contributed by both the traditional donor parents and MTU1010. Three genomic loci were identified as hot spots with multiple root architectural plasticity traits in both populations, and one locus for both root architectural plasticity and grain yield was detected. These results suggest an important role of root architectural plasticity across future rice crop conditions and provide a starting point for marker-assisted selection for plasticity. PMID:27342311

  12. Development and Flight Testing of an Adaptive Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Taylor, B. Douglas; Brett, Rube R.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.

    2002-01-01

    Ongoing development and testing of an adaptable vehicle health-monitoring architecture is presented. The architecture is being developed for a fleet of vehicles. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained expert system. The expert system is parameterized, which allows it to be trained on both a user's subjective reasoning and existing quantitative analytic tools. Communication between all levels is done with wireless radio frequency interfaces. The remote data acquisition unit has an eight-channel programmable digital interface that gives the user discretion in choosing the type of sensors, the number of sensors, the sensor sampling rate, and the sampling duration for each sensor. The architecture provides a framework for tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In the framework, only analysis results are forwarded to the next level to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear. The flight tests were performed to validate the wireless radio frequency communication capabilities of the system; the hardware design; command and control; software operation; and data acquisition, storage, and retrieval.

  13. FFT multislice method--the silver anniversary.

    PubMed

    Ishizuka, Kazuo

    2004-02-01

    The first paper on the FFT multislice method was published in 1977, a quarter of a century ago. The formula was extended in 1982 to include a large tilt of an incident beam relative to the specimen surface. Since then, with advances of computing power, the FFT multislice method has been successfully applied to coherent CBED and HAADF-STEM simulations. However, because the multislice formula is built on some physical approximations and approximations in numerical procedure, there seem to be controversial conclusions in the literature on the multislice method. In this report, the physical implication of the multislice method is reviewed based on the formula for the tilted illumination. Then, some results on the coherent CBED and the HAADF-STEM simulations are presented.
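
    One multislice step can be sketched with NumPy FFTs as follows; the grid, wavelength, slice thickness, and weak-phase transmission function are illustrative assumptions, and the sketch uses a generic Fresnel propagator rather than Ishizuka's tilted-illumination formulation.

```python
# Minimal NumPy sketch of one FFT multislice step (a generic illustration):
# transmit the wave through a thin slice in real space, then propagate to the
# next slice in reciprocal space.
import numpy as np

n, px = 256, 0.1e-10                     # grid size and pixel size (m), illustrative
wavelength = 2.51e-12                    # ~200 kV electrons (m)
dz = 2.0e-10                             # slice thickness (m)

kx = np.fft.fftfreq(n, d=px)             # spatial frequencies (1/m)
ky = np.fft.fftfreq(n, d=px)
k2 = kx[None, :] ** 2 + ky[:, None] ** 2

# Fresnel propagator over one slice and a weak-phase-object transmission function.
propagator = np.exp(-1j * np.pi * wavelength * dz * k2)
projected_potential = np.random.rand(n, n)            # placeholder slice potential
transmission = np.exp(1j * 0.01 * projected_potential)

psi = np.ones((n, n), dtype=complex)     # incident plane wave
psi = np.fft.ifft2(propagator * np.fft.fft2(transmission * psi))
print(psi.shape)
```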

  14. The Filled Arm Fizeau Telescope (FFT)

    NASA Technical Reports Server (NTRS)

    Synnott, S. P.

    1991-01-01

    Attention is given to the design of a Mills Cross imaging interferometer in which the arms are fully filled with mirror segments of a Ritchey-Chretien primary and which has sensitivity to 27th magnitude per pixel and resolution a factor of 10 greater than Hubble. The optical design, structural configuration, thermal disturbances, and vibration, material, control, and metrology issues, as well as scientific capabilities are discussed, and technology needs are identified. The technologies under consideration are similar to those required for the development of the other imaging interferometers that have been proposed over the past decade. A comparison of the imaging capabilities of a 30-m diameter FFT, an 8-m telescope with a collecting area equal to that of the FFT, and the HST is presented.

  15. The Filled Arm Fizeau Telescope (FFT)

    NASA Astrophysics Data System (ADS)

    Synnott, S. P.

    1991-09-01

    Attention is given to the design of a Mills Cross imaging interferometer in which the arms are fully filled with mirror segments of a Ritchey-Chretien primary and which has sensitivity to 27th magnitude per pixel and resolution a factor of 10 greater than Hubble. The optical design, structural configuration, thermal disturbances, and vibration, material, control, and metrology issues, as well as scientific capabilities are discussed, and technology needs are identified. The technologies under consideration are similar to those required for the development of the other imaging interferometers that have been proposed over the past decade. A comparison of the imaging capabilities of a 30-m diameter FFT, an 8-m telescope with a collecting area equal to that of the FFT, and the HST is presented.

  16. FFT-local gravimetric geoid computation

    NASA Technical Reports Server (NTRS)

    Nagy, Dezso; Fury, Rudolf J.

    1989-01-01

    Model computations show that changes of the sampling interval introduce only 0.3-cm changes, whereas zero padding provides an improvement of more than 5 cm in the fast Fourier transform (FFT) generated geoid. For the Global Positioning System (GPS) survey of Franklin County, Ohio, the parameters selected as a result of the model computations allow a large reduction in local data requirements while still retaining centimeter accuracy when tapering and padding are applied. The results are shown in tables.
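
    The zero-padding step referred to above can be sketched as follows; the grid size and padding width are illustrative, not the values used for the Franklin County survey.

```python
# Minimal NumPy sketch of zero padding a gridded field before the FFT.
import numpy as np

grid = np.random.rand(128, 128)                       # local gridded gravity data
padded = np.pad(grid, pad_width=64, mode="constant")  # zero pad to 256 x 256

# The FFT treats the data as periodic; padding keeps the wrap-around of the
# implied circular convolution (kernel spectrum times data spectrum) away from
# the central area of interest.
spectrum = np.fft.fft2(padded)
print(grid.shape, padded.shape, spectrum.shape)
```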

  17. A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism

    PubMed Central

    Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan

    2013-01-01

    Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831

  18. A hardware architecture for a context-adaptive binary arithmetic coder

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania; Cohen, Adam

    2005-03-01

    The H.264 video compression standard uses a context-adaptive binary arithmetic coder (CABAC) as an entropy coding mechanism. While the coder provides excellent compression efficiency, it is computationally demanding. On typical general-purpose processors, it can take up to hundreds of cycles to encode a single bit. In this paper, we propose an architecture for a CABAC encoder that can easily be incorporated into system-on-chip designs for H.264 compression. The CABAC is inherently serial and we divide the problem into several stages to derive a design that can provide a throughput of two cycles per encoded bit. The engine proposed is capable of handling binarization of the syntactical elements and provides the coded bit-stream via a first-in first-out buffer. The design is implemented on an Altera FPGA platform that can run at 50 MHz enabling a 25 Mbps encoding rate.

  19. Hamstring Architectural and Functional Adaptations Following Long vs. Short Muscle Length Eccentric Training.

    PubMed

    Guex, Kenny; Degache, Francis; Morisod, Cynthia; Sailly, Matthieu; Millet, Gregoire P

    2016-01-01

    Most common preventive eccentric-based exercises, such as the Nordic hamstring, do not include any hip flexion, so the elongation stress reached is lower than that during the late swing phase of sprinting. The aim of this study was to assess the evolution of hamstring architectural (fascicle length and pennation angle) and functional (concentric and eccentric optimum angles and concentric and eccentric peak torques) parameters following a 3-week eccentric resistance program performed at long (LML) vs. short muscle length (SML). Both groups performed eight sessions of 3-5 × 8 slow maximal eccentric knee extensions on an isokinetic dynamometer: the SML group at 0° and the LML group at 80° of hip flexion. Architectural parameters were measured using ultrasound imaging and functional parameters using the isokinetic dynamometer. The fascicle length increased by 4.9% (p < 0.01, medium effect size) in the SML and by 9.3% (p < 0.001, large effect size) in the LML group. The pennation angle did not change (p = 0.83) in the SML and tended to decrease by 0.7° (p = 0.09, small effect size) in the LML group. The concentric optimum angle tended to decrease by 8.8° (p = 0.09, medium effect size) in the SML and by 17.3° (p < 0.01, large effect size) in the LML group. The eccentric optimum angle did not change (p = 0.19, small effect size) in the SML and tended to decrease by 10.7° (p = 0.06, medium effect size) in the LML group. The concentric peak torque did not change in the SML (p = 0.37) and the LML (p = 0.23) groups, whereas eccentric peak torque increased by 12.9% (p < 0.01, small effect size) and 17.9% (p < 0.001, small effect size) in the SML and the LML group, respectively. No group-by-time interaction was found for any parameters. A correlation was found between the training-induced change in fascicle length and the change in concentric optimum angle (r = -0.57, p < 0.01). These results suggest that performing eccentric exercises leads to several architectural and

  20. Hamstring Architectural and Functional Adaptations Following Long vs. Short Muscle Length Eccentric Training

    PubMed Central

    Guex, Kenny; Degache, Francis; Morisod, Cynthia; Sailly, Matthieu; Millet, Gregoire P.

    2016-01-01

    Most common preventive eccentric-based exercises, such as the Nordic hamstring, do not include any hip flexion, so the elongation stress reached is lower than that during the late swing phase of sprinting. The aim of this study was to assess the evolution of hamstring architectural (fascicle length and pennation angle) and functional (concentric and eccentric optimum angles and concentric and eccentric peak torques) parameters following a 3-week eccentric resistance program performed at long (LML) vs. short muscle length (SML). Both groups performed eight sessions of 3–5 × 8 slow maximal eccentric knee extensions on an isokinetic dynamometer: the SML group at 0° and the LML group at 80° of hip flexion. Architectural parameters were measured using ultrasound imaging and functional parameters using the isokinetic dynamometer. The fascicle length increased by 4.9% (p < 0.01, medium effect size) in the SML and by 9.3% (p < 0.001, large effect size) in the LML group. The pennation angle did not change (p = 0.83) in the SML and tended to decrease by 0.7° (p = 0.09, small effect size) in the LML group. The concentric optimum angle tended to decrease by 8.8° (p = 0.09, medium effect size) in the SML and by 17.3° (p < 0.01, large effect size) in the LML group. The eccentric optimum angle did not change (p = 0.19, small effect size) in the SML and tended to decrease by 10.7° (p = 0.06, medium effect size) in the LML group. The concentric peak torque did not change in the SML (p = 0.37) and the LML (p = 0.23) groups, whereas eccentric peak torque increased by 12.9% (p < 0.01, small effect size) and 17.9% (p < 0.001, small effect size) in the SML and the LML group, respectively. No group-by-time interaction was found for any parameters. A correlation was found between the training-induced change in fascicle length and the change in concentric optimum angle (r = −0.57, p < 0.01). These results suggest that performing eccentric exercises leads to several architectural and

  1. Helix-length compensation studies reveal the adaptability of the VS ribozyme architecture.

    PubMed

    Lacroix-Labonté, Julie; Girard, Nicolas; Lemieux, Sébastien; Legault, Pascale

    2012-03-01

    Compensatory mutations in RNA are generally regarded as those that maintain base pairing, and their identification forms the basis of phylogenetic predictions of RNA secondary structure. However, other types of compensatory mutations can provide higher-order structural and evolutionary information. Here, we present a helix-length compensation study for investigating structure-function relationships in RNA. The approach is demonstrated for stem-loop I and stem-loop V of the Neurospora VS ribozyme, which form a kissing-loop interaction important for substrate recognition. To rapidly characterize the substrate specificity (k(cat)/K(M)) of several substrate/ribozyme pairs, a procedure was established for simultaneous kinetic characterization of multiple substrates. Several active substrate/ribozyme pairs were identified, indicating the presence of limited substrate promiscuity for stem Ib variants and helix-length compensation between stems Ib and V. 3D models of the I/V interaction were generated that are compatible with the kinetic data. These models further illustrate the adaptability of the VS ribozyme architecture for substrate cleavage and provide global structural information on the I/V kissing-loop interaction. By exploring higher-order compensatory mutations in RNA our approach brings a deeper understanding of the adaptability of RNA structure, while opening new avenues for RNA research.

  2. Mapping the genomic architecture of adaptive traits with interspecific introgressive origin: a coalescent-based approach.

    PubMed

    Hejase, Hussein A; Liu, Kevin J

    2016-01-11

    Recent studies of eukaryotes including human and Neandertal, mice, and butterflies have highlighted the major role that interspecific introgression has played in adaptive trait evolution. A common question arises in each case: what is the genomic architecture of the introgressed traits? One common approach that can be used to address this question is association mapping, which looks for genotypic markers that have significant statistical association with a trait. It is well understood that sample relatedness can be a confounding factor in association mapping studies if not properly accounted for. Introgression and other evolutionary processes (e.g., incomplete lineage sorting) typically introduce variation among local genealogies, which can also differ from global sample structure measured across all genomic loci. In contrast, state-of-the-art association mapping methods assume fixed sample relatedness across the genome, which can lead to spurious inference. We therefore propose a new association mapping method called Coal-Map, which uses coalescent-based models to capture local genealogical variation alongside global sample structure. Using simulated and empirical data reflecting a range of evolutionary scenarios, we compare the performance of Coal-Map against EIGENSTRAT, a leading association mapping method in terms of its popularity, power, and type I error control. Our empirical data makes use of hundreds of mouse genomes for which adaptive interspecific introgression has recently been described. We found that Coal-Map's performance is comparable to or better than that of EIGENSTRAT in terms of statistical power and false positive rate. Coal-Map's performance advantage was greatest on model conditions that most closely resembled empirically observed scenarios of adaptive introgression. These conditions had: (1) causal SNPs contained in one or a few introgressed genomic loci and (2) varying rates of gene flow - from high rates to very low rates where incomplete lineage

  3. The Arab Vernacular Architecture and its Adaptation to Mediterranean Climatic Zones

    NASA Astrophysics Data System (ADS)

    Paz, Shlomit; Hamza, Efat

    2014-05-01

    Throughout history, people have employed building strategies adapted to local climatic conditions in an attempt to achieve thermal comfort in their homes. In the Mediterranean climate, a mixed strategy developed: utilizing positive parameters (e.g. natural lighting) while at the same time addressing negative variables (e.g. high temperatures during summer). This study analyzes the adaptation of the construction strategies of traditional Arab houses to Mediterranean climatic conditions. It is based on the assumption that the climate of the eastern Mediterranean led to the development of unique architectural patterns. The inhabitants built their homes in a modest but creative, climate-aware way, often as instinctive responses to climatic challenges realized with simple ideas. Nine traditional Arab houses, built from the mid-19th century to the beginning of the 20th century, were analyzed in three different regions in Israel: the "Meshulash", an area in the center of the country, and the Lower and Upper Galilees (in the north). In each region three houses were examined. It is important to note that only a few houses from these periods still remain, particularly in light of new construction in many of the villages' core areas. Qualitative research methodologies included documentation of all the elements of these traditional houses that were assumed to be a result of climatic factors, such as house position (direction), thickness of walls, thermal mass, ceiling height, location of windows, natural ventilation, exterior wall colors and shading strategies. Additionally, air temperatures and relative humidity were measured on selected dates throughout all seasons, both inside and immediately outside the houses, during morning, noon, evening and night-time hours. The documentation of the architectural elements and strategies demonstrates that climatic considerations were an integral part of the planning and construction process of these

  4. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.

  5. The adaptive nature of eye movements in linguistic tasks: how payoff and architecture shape speed-accuracy trade-offs.

    PubMed

    Lewis, Richard L; Shvartsman, Michael; Singh, Satinder

    2013-07-01

    We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture.

  6. The genetic architecture of local adaptation and reproductive isolation in sympatry within the Mimulus guttatus species complex.

    PubMed

    Ferris, Kathleen G; Barnett, Laryssa L; Blackman, Benjamin K; Willis, John H

    2017-01-01

    The genetic architecture of local adaptation has been of central interest to evolutionary biologists since the modern synthesis. In addition to classic theory on the effect size of adaptive mutations by Fisher, Kimura and Orr, recent theory addresses the genetic architecture of local adaptation in the face of ongoing gene flow. This theory predicts that with substantial gene flow between populations local adaptation should proceed primarily through mutations of large effect or tightly linked clusters of smaller effect loci. In this study, we investigate the genetic architecture of divergence in flowering time, mating system-related traits, and leaf shape between Mimulus laciniatus and a sympatric population of its close relative M. guttatus. These three traits are probably involved in M. laciniatus' adaptation to a dry, exposed granite outcrop environment. Flowering time and mating system differences are also reproductive isolating barriers making them 'magic traits'. Phenotypic hybrids in this population provide evidence of recent gene flow. Using next-generation sequencing, we generate dense SNP markers across the genome and map quantitative trait loci (QTLs) involved in flowering time, flower size and leaf shape. We find that interspecific divergence in all three traits is due to few QTL of large effect including a highly pleiotropic QTL on chromosome 8. This QTL region contains the pleiotropic candidate gene TCP4 and is involved in ecologically important phenotypes in other Mimulus species. Our results are consistent with theory, indicating that local adaptation and reproductive isolation with gene flow should be due to few loci with large and pleiotropic effects.

  7. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems.

    PubMed

    Albattat, Ali; Gruenwald, Benjamin C; Yucelen, Tansel

    2016-08-16

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections with each other. In addition to the theoretical findings, including rigorous system stability and boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches.

  8. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems

    PubMed Central

    Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel

    2016-01-01

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections with each other. In addition to the theoretical findings, including rigorous system stability and boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894
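    The core event-triggering idea described in the two records above - transmit over the network only when an error signal exceeds a threshold - can be illustrated with a toy example. The sketch below simulates a scalar plant under state feedback where the controller's copy of the state is refreshed only at triggering events; the plant parameters, threshold and step size are invented, and this is not the decentralized adaptive architecture analyzed in the paper.

```python
# Toy event-triggered state feedback: the measurement is "sent over the network"
# only when the estimation error exceeds a threshold. All parameters are made up.
import numpy as np

a, b, k = 1.0, 1.0, 3.0          # open-loop unstable plant; closed-loop pole at a - b*k = -2
dt, t_end, delta = 1e-3, 5.0, 0.05
x, x_hat, transmissions = 1.0, 1.0, 0

for step in range(int(t_end / dt)):
    if abs(x - x_hat) > delta:   # event-triggering condition
        x_hat = x                # transmit a fresh measurement to the controller
        transmissions += 1
    u = -k * x_hat               # controller uses the last transmitted value
    x += dt * (a * x + b * u)    # forward-Euler integration of x' = a*x + b*u

print(f"final |x| = {abs(x):.3f}, transmissions = {transmissions} "
      f"of {int(t_end / dt)} possible samples")
```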

  9. Adapted Verbal Feedback, Instructor Interaction and Student Emotions in the Landscape Architecture Studio

    ERIC Educational Resources Information Center

    Smith, Carl A.; Boyer, Mark E.

    2015-01-01

    In light of concerns with architectural students' emotional jeopardy during traditional desk and final-jury critiques, the authors pursue alternative approaches intended to provide more supportive and mentoring verbal assessment in landscape architecture studios. In addition to traditional studio-based critiques throughout a semester, we provide…

  10. When History Repeats Itself: Exploring the Genetic Architecture of Host-Plant Adaptation in Two Closely Related Lepidopteran Species

    PubMed Central

    Alexandre, Hermine; Ponsard, Sergine; Bourguet, Denis; Vitalis, Renaud; Audiot, Philippe; Cros-Arteil, Sandrine; Streiff, Réjane

    2013-01-01

    The genus Ostrinia includes two allopatric maize pests across Eurasia, namely the European corn borer (ECB, O. nubilalis) and the Asian corn borer (ACB, O. furnacalis). A third species, the Adzuki bean borer (ABB, O. scapulalis), occurs in sympatry with both the ECB and the ACB. The ABB mostly feeds on native dicots, which probably correspond to the ancestral host plant type for the genus Ostrinia. This situation offers the opportunity to characterize the two presumably independent adaptations or preadaptations to maize that occurred in the ECB and ACB. In the present study, we aimed at deciphering the genetic architecture of these two adaptations to maize, a monocot host plant recently introduced into Eurasia. To this end, we performed a genome scan analysis based on 684 AFLP markers in 12 populations of ECB, ACB and ABB. We detected 2 outlier AFLP loci when comparing French populations of the ECB and ABB, and 9 outliers when comparing Chinese populations of the ACB and ABB. These outliers were different in both countries, and we found no evidence of linkage disequilibrium between any two of them. These results suggest that adaptation or preadaptation to maize relies on a different genetic architecture in the ECB and ACB. However, this conclusion must be considered in light of the constraints inherent to genome scan approaches and of the intricate evolution of adaptation and reproductive isolation in the Ostrinia spp. complex. PMID:23874914

  11. Use of mixed radix FFT in electric power systems studies

    SciTech Connect

    Lu, I.D.; Lee, P. )

    1994-07-01

    Radix-2 based Fast Fourier Transform (FFT) routines have been the mainstream FFTs commonly applied to the measurement and analysis of electric power system data. Because of the rigid sampling rates offered by most instrument manufacturers and the mathematical limitations imposed by the radix-2 FFT algorithm, artificial post-sampling data windows have been introduced to improve the utility of the radix-2 FFT. The 60 Hz and 50 Hz electrical power system frequencies are poorly matched to the radix-2 FFT routine and to the special requirements of electric power systems. Recent advances in personal computer hardware have opened a new approach: performing the FFT via mixed-radix routines. This offers greatly improved flexibility in the selection of a practical data size, which eliminates the need for post-sampling software windows. It also allows one to relax or eliminate the requirement for anti-aliasing measures in power system harmonic measurements.
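    A small numpy illustration of the point made above: a mixed-radix FFT lets the record length hold an exact integer number of 60 Hz cycles at an arbitrary sampling rate, so the fundamental and its harmonics fall on FFT bins and no post-sampling window is needed, whereas forcing the length to a power of two breaks this alignment and produces leakage. The sampling rate and test signal are made up for the example.

```python
# Spectral leakage: mixed-radix record length matched to 60 Hz vs. a forced radix-2 length.
# Illustrative numbers only (numpy's FFT is mixed-radix and accepts any length).
import numpy as np

fs = 7200.0                       # hypothetical sampling rate, 120 samples per 60 Hz cycle
f0 = 60.0                         # power system fundamental
n_matched = 1200                  # exactly 10 cycles -> 60 Hz falls on bin 10
n_radix2 = 1024                   # nearest power of two -> 60 Hz falls between bins

t = np.arange(2048) / fs
signal = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 5 * f0 * t)  # fundamental + 5th harmonic

for n in (n_matched, n_radix2):
    spectrum = np.abs(np.fft.rfft(signal[:n])) / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    peak_bin = np.argmax(spectrum)
    # Magnitude that leaks outside the fundamental's bin and the 5th harmonic's bin:
    keep = {peak_bin, int(round(5 * f0 * n / fs))}
    leakage = spectrum.sum() - sum(spectrum[b] for b in keep)
    print(f"N = {n:5d}: peak at {freqs[peak_bin]:7.2f} Hz, leaked magnitude sum = {leakage:.4f}")
```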

  12. Bone architecture adaptations after spinal cord injury: impact of long-term vibration of a constrained lower limb

    PubMed Central

    Dudley-Javoroski, S.; Petrie, M. A.; McHenry, C. L.; Amelon, R. E.; Saha, P. K.

    2015-01-01

    Summary This study examined the effect of a controlled dose of vibration upon bone density and architecture in people with spinal cord injury (who eventually develop severe osteoporosis). Very sensitive computed tomography (CT) imaging revealed no effect of vibration after 12 months, but other doses of vibration may still be useful to test. Introduction The purposes of this report were to determine the effect of a controlled dose of vibratory mechanical input upon individual trabecular bone regions in people with chronic spinal cord injury (SCI) and to examine the longitudinal bone architecture changes in both the acute and chronic state of SCI. Methods Participants with SCI received unilateral vibration of the constrained lower limb segment while sitting in a wheelchair (0.6g, 30 Hz, 20 min, three times weekly). The opposite limb served as a control. Bone mineral density (BMD) and trabecular micro-architecture were measured with high-resolution multi-detector CT. For comparison, one participant was studied from the acute (0.14 year) to the chronic state (2.7 years). Results Twelve months of vibration training did not yield adaptations of BMD or trabecular micro-architecture for the distal tibia or the distal femur. BMD and trabecular network length continued to decline at several distal femur sub-regions, contrary to previous reports suggesting a “steady state” of bone in chronic SCI. In the participant followed from acute to chronic SCI, BMD and architecture decline varied systematically across different anatomical segments of the tibia and femur. Conclusions This study supports that vibration training, using this study’s dose parameters, is not an effective antiosteoporosis intervention for people with chronic SCI. Using a high-spatial-resolution CT methodology and segmental analysis, we illustrate novel longitudinal changes in bone that occur after spinal cord injury. PMID:26395887

  13. Adaptive Software Architecture Based on Confident HCI for the Deployment of Sensitive Services in Smart Homes

    PubMed Central

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-01-01

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature. PMID:25815449

  14. Adaptive software architecture based on confident HCI for the deployment of sensitive services in Smart Homes.

    PubMed

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-03-25

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature.

  15. Architectural Models of Adaptive Hypermedia Based on the Use of Ontologies

    ERIC Educational Resources Information Center

    Souhaib, Aammou; Mohamed, Khaldi; Eddine, El Kadiri Kamal

    2011-01-01

    The domain of traditional hypermedia is revolutionized by the arrival of the concept of adaptation. Currently, the domain of AHS (adaptive hypermedia systems) is constantly growing. A major goal of current research is to provide a personalized educational experience that meets the needs specific to each learner (knowledge level, goals, motivation,…

  16. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the method often yields excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results good data gridding algorithms are essential. In practice truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
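    The planar FFT evaluations discussed above reduce numerically to convolving a gridded field with an integration kernel. The sketch below shows only that core operation - a generic zero-padded 2-D FFT convolution, where the padding avoids the circular wrap-around of the discrete transform; the field and the smoothing kernel are synthetic stand-ins, not a geodetic Stokes or Vening Meinesz kernel.

```python
# Generic zero-padded 2-D FFT convolution, the numerical core of planar
# gravity-field evaluations on a grid. Synthetic field and kernel only.
import numpy as np

def fft_convolve2d(field, kernel):
    """Linear (not circular) convolution of two 2-D grids via zero-padded FFTs."""
    ny = field.shape[0] + kernel.shape[0] - 1
    nx = field.shape[1] + kernel.shape[1] - 1
    spec = np.fft.rfft2(field, s=(ny, nx)) * np.fft.rfft2(kernel, s=(ny, nx))
    return np.fft.irfft2(spec, s=(ny, nx))

rng = np.random.default_rng(1)
anomalies = rng.normal(size=(64, 64))              # stand-in for gridded gravity anomalies

y, x = np.mgrid[-8:9, -8:9].astype(float)
kernel = np.exp(-(x**2 + y**2) / (2 * 3.0**2))     # stand-in isotropic kernel
kernel /= kernel.sum()

full = fft_convolve2d(anomalies, kernel)

# Cross-check against direct summation at one interior grid point.
iy, ix = 40, 40
direct = sum(anomalies[iy - dy, ix - dx] * kernel[dy + 8, dx + 8]
             for dy in range(-8, 9) for dx in range(-8, 9))
print(np.allclose(full[iy + 8, ix + 8], direct))   # offsets account for the padded output grid
```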

  17. Natural Variation of Arabidopsis Root Architecture Reveals Complementing Adaptive Strategies to Potassium Starvation1[C][W][OA

    PubMed Central

    Kellermeier, Fabian; Chardon, Fabien; Amtmann, Anna

    2013-01-01

    Root architecture is a highly plastic and environmentally responsive trait that enables plants to counteract nutrient scarcities with different foraging strategies. In potassium (K) deficiency (low K), seedlings of the Arabidopsis (Arabidopsis thaliana) reference accession Columbia (Col-0) show a strong reduction of lateral root elongation. To date, it is not clear whether this is a direct consequence of the lack of K as an osmoticum or a triggered response to maintain the growth of other organs under limiting conditions. In this study, we made use of natural variation within Arabidopsis to look for novel root architectural responses to low K. A comprehensive set of 14 differentially responding root parameters were quantified in K-starved and K-replete plants. We identified a phenotypic gradient that links two extreme strategies of morphological adaptation to low K arising from a major tradeoff between main root (MR) and lateral root elongation. Accessions adopting strategy I (e.g. Col-0) maintained MR growth but compromised lateral root elongation, whereas strategy II genotypes (e.g. Catania-1) arrested MR elongation in favor of lateral branching. K resupply and histochemical staining resolved the temporal and spatial patterns of these responses. Quantitative trait locus analysis of K-dependent root architectures within a Col-0 × Catania-1 recombinant inbred line population identified several loci each of which determined a particular subset of root architectural parameters. Our results indicate the existence of genomic hubs in the coordinated control of root growth in stress conditions and provide resources to facilitate the identification of the underlying genes. PMID:23329148

  18. Adaptation of the anelastic solver EULAG to high performance computing architectures.

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Ciżnicki, Miłosz; Kopta, Piotr; Kulczewski, Michał; Kurowski, Krzysztof; Piotrowski, Zbigniew; Rojek, Krzysztof; Rosa, Bogdan; Szustak, Łukasz; Wyrzykowski, Roman

    2014-05-01

    In recent years there has been widespread interest in employing heterogeneous and hybrid supercomputing architectures for geophysical research. Especially promising application for the modern supercomputing architectures is the numerical weather prediction (NWP). Adopting traditional NWP codes to the new machines based on multi- and many-core processors, such as GPUs allows to increase computational efficiency and decrease energy consumption. This offers unique opportunity to develop simulations with finer grid resolutions and computational domains larger than ever before. Further, it enables to extend the range of scales represented in the model so that the accuracy of representation of the simulated atmospheric processes can be improved. Consequently, it allows to improve quality of weather forecasts. Coalition of Polish scientific institutions launched a project aimed at adopting EULAG fluid solver for future high-performance computing platforms. EULAG is currently being implemented as a new dynamical core of COSMO Consortium weather prediction framework. The solver code combines features of a stencil and point wise computations. Its communication scheme consists of both halo exchange subroutines and global reduction functions. Within the project, two main modules of EULAG, namely MPDATA advection and iterative GCR elliptic solver are analyzed and optimized. Relevant techniques have been chosen and applied to accelerate code execution on modern HPC architectures: stencil decomposition, block decomposition (with weighting analysis between computation and communication), reduction of inter-cache communication by partitioning of cores into independent teams, cache reusing and vectorization. Experiments with matching computational domain topology to cluster topology are performed as well. The parallel formulation was extended from pure MPI to hybrid MPI - OpenMP approach. Porting to GPU using CUDA directives is in progress. Preliminary results of performance of the

  19. Energy efficient low power shared-memory Fast Fourier Transform (FFT) processor with dynamic voltage scaling

    NASA Astrophysics Data System (ADS)

    Fitrio, D.; Singh, J.; Stojcevski, A.

    2005-12-01

    Reduction of power dissipation in CMOS circuits needs to be addressed for portable battery-powered devices. Selection of an appropriate transistor library to minimise leakage current, implementation of low-power design architectures, implementation of power management, and the choice of chip packaging all have an impact on power dissipation and are important considerations in the design and implementation of integrated circuits for low-power applications. An energy-efficient architecture is highly desirable for battery-operated systems, which operate under a wide variety of operating scenarios. An energy-efficient design aims to reconfigure its own architecture to scale down energy consumption depending upon the throughput and quality requirements. An energy-efficient system should be able to decide its minimum power requirements by dynamically scaling its own operating frequency, supply voltage or threshold voltage according to a variety of operating scenarios. The increasing product demand for application-specific integrated circuits or processors for independent portable devices has influenced designers to implement dedicated processors with ultra-low-power requirements. One of these dedicated processors is a Fast Fourier Transform (FFT) processor, which is widely used in signal processing for numerous applications, such as wireless telecommunication and biomedical applications, where the demand for extended battery life is extremely high. This paper presents the design and performance analysis of a low-power shared-memory FFT processor incorporating dynamic voltage scaling. Dynamic voltage scaling enables the power supply to be scaled to various voltage levels. The concept behind the proposed solution is that the speed of the main logic core can be adjusted according to the input load or the amount of the processor's computation, "just enough" to meet the requirement. The design was implemented using 0.12 μm ST-Microelectronics 6-metal-layer CMOS dual-process technology in Cadence Analogue
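    The dynamic voltage scaling idea above can be summarized with the usual first-order CMOS model: dynamic power is roughly P = a*C*V^2*f, so the energy needed to finish a fixed number of cycles scales with V^2, and running "just fast enough" at a lower voltage/frequency pair saves energy even though the task takes longer. The numbers below are purely illustrative and are not measurements of the reported 0.12 μm design.

```python
# First-order illustration of dynamic voltage scaling (DVS) for a fixed workload.
# The switched capacitance and the voltage/frequency pairs are made-up operating points.
cycles = 1e6                    # fixed FFT workload, in clock cycles
switched_cap = 1e-12            # effective switched capacitance alpha*C (arbitrary value)

operating_points = [            # (supply voltage in V, clock frequency in Hz)
    (1.2, 200e6),               # fast, high voltage
    (0.9, 100e6),               # scaled down
    (0.7,  50e6),               # "just enough" for a relaxed deadline
]

for vdd, freq in operating_points:
    power = switched_cap * vdd**2 * freq          # dynamic power, P = alpha*C*V^2*f
    runtime = cycles / freq                       # time to finish the workload
    energy = power * runtime                      # = alpha*C*V^2*cycles, independent of f
    print(f"Vdd={vdd:.1f} V, f={freq/1e6:5.0f} MHz: "
          f"t={runtime*1e3:5.1f} ms, E={energy*1e6:6.2f} uJ")
```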

  20. Engine Fault Diagnosis using DTW, MFCC and FFT

    NASA Astrophysics Data System (ADS)

    Singh, Vrijendra; Meena, Narendra

    In this paper we use a combination of three techniques, namely dynamic time warping (DTW), Mel frequency cepstral coefficients (MFCC) and the fast Fourier transform (FFT), to classify various engine faults. DTW, MFCC and FFT are usually used for automatic speech recognition purposes. This paper introduces the DTW algorithm and the coefficients extracted from the Mel frequency cepstrum and the FFT for automatic fault detection and identification (FDI) of internal combustion engines for the first time. The objective of the current work was to develop a new intelligent system able to predict possible faults in a running engine at different workshops. We collected samples of different engine faults, applied these algorithms to extract features, and used a fuzzy rule-based approach for fault classification.
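    Of the three techniques combined above, dynamic time warping is the one that is not a spectral transform; the sketch below is a plain textbook DTW distance between two 1-D feature sequences computed by dynamic programming, shown on synthetic signals, not the authors' fault-classification pipeline.

```python
# Textbook dynamic time warping (DTW) distance between two 1-D sequences,
# as would be applied to features extracted from engine sound recordings.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],      # insertion
                                   acc[i, j - 1],      # deletion
                                   acc[i - 1, j - 1])  # match
    return acc[n, m]

reference = np.sin(np.linspace(0, 4 * np.pi, 80))          # stand-in "healthy engine" template
test = np.sin(np.linspace(0, 4 * np.pi, 100) + 0.3)        # same pattern, stretched and shifted
noise = np.random.default_rng(2).normal(size=100)          # unrelated signal

print("template vs stretched copy:", round(dtw_distance(reference, test), 2))
print("template vs noise:         ", round(dtw_distance(reference, noise), 2))
```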

  1. A high-performance FFT algorithm for vector supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77-percent faster than Cray's assembly-coded library routine.
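    Bailey's paper concerns FFT formulations whose memory-access patterns suit vector machines. As a generic illustration of the block-decomposition idea (not necessarily the paper's specific algorithm), the sketch below computes a length N = N1*N2 transform from short, dense sub-transforms plus a twiddle-factor multiply - the classic "four-step" factorization that avoids long power-of-two strides.

```python
# Four-step FFT factorization: a length n1*n2 DFT computed from short dense FFTs,
# a generic illustration of block decompositions used on vector/parallel machines.
import numpy as np

def four_step_fft(x, n1, n2):
    assert len(x) == n1 * n2
    a = x.reshape(n2, n1).T                      # a[j1, j2] = x[j2*n1 + j1]
    b = np.fft.fft(a, axis=1)                    # n1 transforms of length n2
    j1 = np.arange(n1)[:, None]
    k2 = np.arange(n2)[None, :]
    b *= np.exp(-2j * np.pi * j1 * k2 / (n1 * n2))   # twiddle factors
    d = np.fft.fft(b, axis=0)                    # n2 transforms of length n1
    return d.reshape(n1 * n2)                    # d[k1, k2] -> X[k1*n2 + k2]

x = np.random.default_rng(3).normal(size=1536) + 0j   # 1536 = 48 * 32, not a power of two
print(np.allclose(four_step_fft(x, 48, 32), np.fft.fft(x)))   # True
```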

  2. Interpolation And FFT Of Near-Field Antenna Measurements

    NASA Technical Reports Server (NTRS)

    Gatti, Mark S.; Rahmat-Samii, Yahya

    1990-01-01

    Bivariate Lagrange interpolation applied to plane-polar measurement scans. Report discusses recent advances in application of fast-Fourier-transform (FFT) techniques to measurements of near radiation fields of antennas on plane-polar grid. Attention focused mainly on use of such measurements to calculate far radiation fields. Also discussion of use of FFT's in holographic diagnosis of distortions of antenna reflectors. Advantage of scheme: speeds calculations because it requires fewer data and fewer manipulations of data than other schemes used for this purpose.

  3. Academic Accountability and University Adaptation: The Architecture of an Academic Learning Organization.

    ERIC Educational Resources Information Center

    Dill, David D.

    1999-01-01

    Discusses various adaptations in organizational structure and governance of academic learning institutions, using case studies of universities that are attempting to improve the quality of teaching and the learning process. Identifies five characteristics typical of such organizations: (1) a culture of evidence; (2) improved coordination of…

  4. Architecture for an Adaptive and Intelligent Tutoring System That Considers the Learner's Multiple Intelligences

    ERIC Educational Resources Information Center

    Hafidi, Mohamed; Bensebaa, Taher

    2015-01-01

    The majority of adaptive and intelligent tutoring systems (AITS) are dedicated to a specific domain, allowing them to offer accurate models of the domain and the learner. The analysis produced from traces left by the users is didactically very precise and specific to the domain in question. It allows one to guide the learner in case of difficulty…

  5. Evolution of genomic structural variation and genomic architecture in the adaptive radiations of African cichlid fishes

    PubMed Central

    Fan, Shaohua; Meyer, Axel

    2014-01-01

    African cichlid fishes are an ideal system for studying explosive rates of speciation and the origin of diversity in adaptive radiation. Within the last few million years, more than 2000 species have evolved in the Great Lakes of East Africa, the largest adaptive radiation in vertebrates. These young species show spectacular diversity in their coloration, morphology and behavior. However, little is known about the genomic basis of this astonishing diversity. Recently, five African cichlid genomes were sequenced, including that of the Nile Tilapia (Oreochromis niloticus), a basal and only relatively moderately diversified lineage, and the genomes of four representative endemic species of the adaptive radiations, Neolamprologus brichardi, Astatotilapia burtoni, Metriaclima zebra, and Pundamila nyererei. Using the Tilapia genome as a reference genome, we generated a high-resolution genomic variation map, consisting of single nucleotide polymorphisms (SNPs), short insertions and deletions (indels), inversions and deletions. In total, around 18.8, 17.7, 17.0, and 17.0 million SNPs, 2.3, 2.2, 1.4, and 1.9 million indels, 262, 306, 162, and 154 inversions, and 3509, 2705, 2710, and 2634 deletions were inferred to have evolved in N. brichardi, A. burtoni, P. nyererei, and M. zebra, respectively. Many of these variations affected the annotated gene regions in the genome. Different patterns of genetic variation were detected during the adaptive radiation of African cichlid fishes. For SNPs, the highest rate of evolution was detected in the common ancestor of N. brichardi, A. burtoni, P. nyererei, and M. zebra. However, for the evolution of inversions and deletions, we found that the rates at the terminal taxa are substantially higher than the rates at the ancestral lineages. The high-resolution map provides an ideal opportunity to understand the genomic bases of the adaptive radiation of African cichlid fishes. PMID:24917883

  6. Extending the S-FFT direct-methods algorithm to density functions with positive and negative peaks. XIV.

    PubMed

    Rius, Jordi; Frontera, Carles

    2008-11-01

    Some years ago the direct-methods origin-free modulus sum function (S) was adapted to the processing of intensity data from density functions with positive and negative peaks [Rius, Miravitlles & Allmann (1996). Acta Cryst. A52, 634-639]. That implementation used phase relationships explicitly. Although successfully applied to different situations where the number of reflections was small, its generalization to larger problems required avoiding the time-consuming manipulation of quartet terms. To circumvent this limitation, a modification of the recently introduced S-FFT algorithm (that maximizes S with only Fourier transforms) is presented here. The resulting S2-FFT algorithm is highly effective for crystal structures with at least one moderate scatterer in the unit cell. Test calculations have been performed on conventional single-crystal X-ray diffraction data, on neutron diffraction data of compounds with negative scatterers and on intensities of superstructure reflections to solve difference structures.

  7. SSME to RS-25: Challenges of Adapting a Heritage Engine to a New Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.

    2015-01-01

    A key constituent of the NASA Space Launch System (SLS) architecture is the RS-25 engine, also known as the Space Shuttle Main Engine (SSME). This engine was selected largely due to the maturity and extensive experience gained through 30-plus years of service. However, while the RS-25 is a highly mature system, simply unbolting it from the Space Shuttle and mounting it on the new SLS vehicle is not a "plug-and-play" operation. In addition to numerous technical integration and operational details, there were also hardware upgrades needed. While the magnitude of effort is less than that needed to develop a new clean-sheet engine system, this paper describes some of the expected and unexpected challenges encountered to date on the path to the first flight of SLS.

  8. Elucidating the molecular architecture of adaptation via evolve and resequence experiments.

    PubMed

    Long, Anthony; Liti, Gianni; Luptak, Andrej; Tenaillon, Olivier

    2015-10-01

    Evolve and resequence (E&R) experiments use experimental evolution to adapt populations to a novel environment, then next-generation sequencing to analyse genetic changes. They enable molecular evolution to be monitored in real time on a genome-wide scale. Here, we review the field of E&R experiments across diverse systems, ranging from simple non-living RNA to bacteria, yeast and the complex multicellular organism Drosophila melanogaster. We explore how different evolutionary outcomes in these systems are largely consistent with common population genetics principles. Differences in outcomes across systems are largely explained by different starting population sizes, levels of pre-existing genetic variation, recombination rates and adaptive landscapes. We highlight emerging themes and inconsistencies that future experiments must address.

  9. Elucidating the molecular architecture of adaptation via evolve and resequence experiments

    PubMed Central

    Long, Anthony; Liti, Gianni; Luptak, Andrej; Tenaillon, Olivier

    2016-01-01

    Evolve and resequence (E&R) experiments use experimental evolution to adapt populations to a novel environment, followed by next-generation sequencing. They enable molecular evolution to be monitored in real time at a genome-wide scale. We review the field of E&R experiments across diverse systems, ranging from simple non-living RNA to bacteria, yeast and complex multicellular Drosophila melanogaster. We explore how different evolutionary outcomes in these systems are largely consistent with common population genetics principles. Differences in outcomes across systems are largely explained by different: starting population sizes, levels of pre-existing genetic variation, recombination rates, and adaptive landscapes. We highlight emerging themes and inconsistencies that future experiments must address. PMID:26347030

  10. Adaptive functional specialisation of architectural design and fibre type characteristics in agonist shoulder flexor muscles of the llama, Lama glama

    PubMed Central

    Graziotti, Guillermo H; Chamizo, Verónica E; Ríos, Clara; Acevedo, Luz M; Rodríguez-Menéndez, J M; Victorica, C; Rivero, José-Luis L

    2012-01-01

    Like other camelids, llamas (Lama glama) have the natural ability to pace (moving ipsilateral limbs in near synchronicity). But unlike the Old World camelids (bactrian and dromedary camels), they are well adapted for pacing at slower or moderate speeds in high-altitude habitats, having been described as good climbers and used as pack animals for centuries. In order to gain insight into skeletal muscle design and to ascertain its relationship with the llama’s characteristic locomotor behaviour, this study examined the correspondence between architecture and fibre types in two agonist muscles involved in shoulder flexion (M. teres major – TM and M. deltoideus, pars scapularis – DS and pars acromialis – DA). Architectural properties were found to be correlated with fibre-type characteristics both in DS (long fibres, low pinnation angle, fast-glycolytic fibre phenotype with abundant IIB fibres, small fibre size, reduced number of capillaries per fibre and low oxidative capacity) and in DA (short fibres, high pinnation angle, slow-oxidative fibre phenotype with numerous type I fibres, very sparse IIB fibres, and larger fibre size, abundant capillaries and high oxidative capacity). This correlation suggests a clear division of labour within the M. deltoideus of the llama, DS being involved in rapid flexion of the shoulder joint during the swing phase of the gait, and DA in joint stabilisation during the stance phase. However, the architectural design of the TM muscle (longer fibres and lower fibre pinnation angle) was not strictly matched with its fibre-type characteristics (very similar to those of the postural DA muscle). This unusual design suggests a dual function of the TM muscle both in active flexion of the shoulder and in passive support of the limb during the stance phase, pulling the forelimb to the trunk. This functional specialisation seems to be well suited to a quadruped species that needs to increase ipsilateral stability of the limb during the

  11. Adaptive functional specialisation of architectural design and fibre type characteristics in agonist shoulder flexor muscles of the llama, Lama glama.

    PubMed

    Graziotti, Guillermo H; Chamizo, Verónica E; Ríos, Clara; Acevedo, Luz M; Rodríguez-Menéndez, J M; Victorica, C; Rivero, José-Luis L

    2012-08-01

    Like other camelids, llamas (Lama glama) have the natural ability to pace (moving ipsilateral limbs in near synchronicity). But unlike the Old World camelids (bactrian and dromedary camels), they are well adapted for pacing at slower or moderate speeds in high-altitude habitats, having been described as good climbers and used as pack animals for centuries. In order to gain insight into skeletal muscle design and to ascertain its relationship with the llama's characteristic locomotor behaviour, this study examined the correspondence between architecture and fibre types in two agonist muscles involved in shoulder flexion (M. teres major - TM and M. deltoideus, pars scapularis - DS and pars acromialis - DA). Architectural properties were found to be correlated with fibre-type characteristics both in DS (long fibres, low pinnation angle, fast-glycolytic fibre phenotype with abundant IIB fibres, small fibre size, reduced number of capillaries per fibre and low oxidative capacity) and in DA (short fibres, high pinnation angle, slow-oxidative fibre phenotype with numerous type I fibres, very sparse IIB fibres, and larger fibre size, abundant capillaries and high oxidative capacity). This correlation suggests a clear division of labour within the M. deltoideus of the llama, DS being involved in rapid flexion of the shoulder joint during the swing phase of the gait, and DA in joint stabilisation during the stance phase. However, the architectural design of the TM muscle (longer fibres and lower fibre pinnation angle) was not strictly matched with its fibre-type characteristics (very similar to those of the postural DA muscle). This unusual design suggests a dual function of the TM muscle both in active flexion of the shoulder and in passive support of the limb during the stance phase, pulling the forelimb to the trunk. This functional specialisation seems to be well suited to a quadruped species that needs to increase ipsilateral stability of the limb during the support

  12. The Architecture of Iron Microbial Mats Reflects the Adaptation of Chemolithotrophic Iron Oxidation in Freshwater and Marine Environments

    PubMed Central

    Chan, Clara S.; McAllister, Sean M.; Leavitt, Anna H.; Glazer, Brian T.; Krepski, Sean T.; Emerson, David

    2016-01-01

    Microbes form mats with architectures that promote efficient metabolism within a particular physicochemical environment, thus studying mat structure helps us understand ecophysiology. Despite much research on chemolithotrophic Fe-oxidizing bacteria, Fe mat architecture has not been visualized because these delicate structures are easily disrupted. There are striking similarities between the biominerals that comprise freshwater and marine Fe mats, made by Beta- and Zetaproteobacteria, respectively. If these biominerals are assembled into mat structures with similar functional morphology, this would suggest that mat architecture is adapted to serve roles specific to Fe oxidation. To evaluate this, we combined light, confocal, and scanning electron microscopy of intact Fe microbial mats with experiments on sheath formation in culture, in order to understand mat developmental history and subsequently evaluate the connection between Fe oxidation and mat morphology. We sampled a freshwater sheath mat from Maine and marine stalk and sheath mats from Loihi Seamount hydrothermal vents, Hawaii. Mat morphology correlated to niche: stalks formed in steeper O2 gradients while sheaths were associated with low to undetectable O2 gradients. Fe-biomineralized filaments, twisted stalks or hollow sheaths, formed the highly porous framework of each mat. The mat-formers are keystone species, with nascent marine stalk-rich mats comprised of novel and uncommon Zetaproteobacteria. For all mats, filaments were locally highly parallel with similar morphologies, indicating that cells were synchronously tracking a chemical or physical cue. In the freshwater mat, cells inhabited sheath ends at the growing edge of the mat. Correspondingly, time lapse culture imaging showed that sheaths are made like stalks, with cells rapidly leaving behind an Fe oxide filament. The distinctive architecture common to all observed Fe mats appears to serve specific functions related to chemolithotrophic Fe

  13. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With a higher and higher number of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, thereby increasing the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which do not need a physical interconnection layout as data travel over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, lower area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA-based MAC protocol
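    The heart of a CDMA medium-access scheme like the one proposed above is that concurrent transmitter-receiver pairs share the channel through orthogonal spreading codes. The sketch below shows that principle with Walsh-Hadamard codes on an idealized noise-free shared channel: two senders superimpose their spread bitstreams and each receiver recovers its own bits by correlating with its code. It is a conceptual illustration, not the thesis's NoC protocol.

```python
# Orthogonal-code CDMA on an idealized shared channel: two transmitters superimpose
# their spread bitstreams, and each receiver despreads with its own Walsh code.
import numpy as np

def walsh_codes(order):
    """Rows of a Hadamard matrix of size 2**order, used as orthogonal spreading codes."""
    h = np.array([[1]])
    for _ in range(order):
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh_codes(3)                 # eight length-8 codes; assign one per transmitter
code_a, code_b = codes[1], codes[2]

rng = np.random.default_rng(4)
bits_a = rng.integers(0, 2, 16)
bits_b = rng.integers(0, 2, 16)

# Spread: map bits to +/-1 and repeat each symbol over its spreading code.
tx_a = np.kron(2 * bits_a - 1, code_a)
tx_b = np.kron(2 * bits_b - 1, code_b)
channel = tx_a + tx_b                  # simultaneous transmission on the shared channel

# Despread: correlate each code interval with the receiver's own code.
chips = channel.reshape(-1, len(code_a))
rx_a = (chips @ code_a > 0).astype(int)
rx_b = (chips @ code_b > 0).astype(int)
print(np.array_equal(rx_a, bits_a), np.array_equal(rx_b, bits_b))   # True True
```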

  14. Adapting a compact confocal microscope system to a two-photon excitation fluorescence imaging architecture.

    PubMed

    Diaspro, A; Corosu, M; Ramoino, P; Robello, M

    1999-11-01

    Within the framework of a national National Institute of Physics of Matter (INFM) project, we have realised a two-photon excitation (TPE) fluorescence microscope based on a new generation commercial confocal scanning head. The core of the architecture is a mode-locked Ti:Sapphire laser (Tsunami 3960, Spectra Physics Inc., Mountain View, CA) pumped by a high-power (5 W, 532 nm) laser (Millennia V, Spectra Physics Inc.) and an ultracompact confocal scanning head, Nikon PCM2000 (Nikon Instruments, Florence, Italy) using a single-pinhole design. Three-dimensional point-spread function has been measured to define spatial resolution performances. The TPE microscope has been used with a wide range of excitable fluorescent molecules (DAPI, Fura-2, Indo-1, DiOC(6)(3), fluoresceine, Texas red) covering a single photon spectral range from UV to green. An example is reported on 3D imaging of the helical structure of the sperm head of the Octopus Eledone cirrhosa labelled with an UV excitable dye, i.e., DAPI. The system can be easily switched for operating both in conventional and two-photon mode.

  15. SSME to RS-25: Challenges of Adapting a Heritage Engine to a New Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.

    2015-01-01

    Following the cancellation of the Constellation program and retirement of the Space Shuttle, NASA initiated the Space Launch System (SLS) program to provide next-generation heavy lift cargo and crew access to space. A key constituent of the SLS architecture is the RS-25 engine, also known as the Space Shuttle Main Engine (SSME). The RS-25 was selected to serve as the main propulsion system for the SLS core stage in conjunction with the solid rocket boosters. This selection was largely based on the maturity and extensive experience gained through 135 missions, 3000+ ground tests, and over a million seconds total accumulated hot-fire time. In addition, there were also over a dozen functional flight assets remaining from the Space Shuttle program that could be leveraged to support the first four flights. However, while the RS-25 is a highly mature system, simply unbolting it from the Space Shuttle boat-tail and installing it on the new SLS vehicle is not a "plug-and-play" operation. In addition to numerous technical integration details involving changes to significant areas such as the environments, interface conditions, technical performance requirements, operational constraints and so on, there were other challenges to be overcome in the area of replacing the obsolete engine control system (ECS). While the magnitude of accomplishing this effort was less than that needed to develop and field a new clean-sheet engine system, the path to the first flight of SLS has not been without unexpected challenges.

  16. GENETIC ARCHITECTURE AND ADAPTIVE SIGNIFICANCE OF THE SELFING SYNDROME IN CAPSELLA

    PubMed Central

    Slotte, Tanja; Hazzouri, Khaled M.; Stern, David; Andolfatto, Peter; Wright, Stephen I.

    2016-01-01

    The transition from outcrossing to predominant self-fertilization is one of the most common evolutionary transitions in flowering plants. This shift is often accompanied by a suite of changes in floral and reproductive characters termed the selfing syndrome. Here, we characterize the genetic architecture and evolutionary forces underlying evolution of the selfing syndrome in Capsella rubella following its recent divergence from the outcrossing ancestor C. grandiflora. We conduct genotyping by multiplexed shotgun sequencing and map floral and reproductive traits in a large (N = 550) F2 population. Our results suggest that in contrast to previous studies of the selfing syndrome, changes at a few loci, some with major effects, have shaped the evolution of the selfing syndrome in Capsella. The directionality of QTL effects, as well as population genetic patterns of polymorphism and divergence at 318 loci, is consistent with a history of directional selection on the selfing syndrome. Our study is an important step toward characterizing the genetic basis and evolutionary forces underlying the evolution of the selfing syndrome in a genetically accessible model system. PMID:22519777

  17. High-resolution mapping of protein concentration reveals principles of proteome architecture and adaptation.

    PubMed

    Levy, Emmanuel D; Kowarzyk, Jacqueline; Michnick, Stephen W

    2014-05-22

    A single yeast cell contains a hundred million protein molecules. How these proteins are organized to orchestrate living processes is a central question in biology. To probe this organization in vivo, we measured the local concentration of proteins based on the strength of their nonspecific interactions with a neutral reporter protein. We first used a cytosolic reporter and measured local concentrations for ~2,000 proteins in S. cerevisiae, with accuracy comparable to that of mass spectrometry. Localizing the reporter to membranes specifically increased the local concentration measured for membrane proteins. Comparing the concentrations measured by both reporters revealed that encounter frequencies between proteins are primarily dictated by their abundances. However, to change these encounter frequencies and restructure the proteome, as in adaptation, we find that changes in localization have more impact than changes in abundance. These results highlight how protein abundance and localization contribute to proteome organization and reorganization.

  18. Human Behavior & Low Energy Architecture: Linking Environmental Adaptation, Personal Comfort, & Energy Use in the Built Environment

    NASA Astrophysics Data System (ADS)

    Langevin, Jared

    Truly sustainable buildings serve to enrich the daily sensory experience of their human inhabitants while consuming the least amount of energy possible; yet, building occupants and their environmentally adaptive behaviors remain a poorly characterized variable in even the most "green" building design and operation approaches. This deficiency has been linked to gaps between predicted and actual energy use, as well as to eventual problems with occupant discomfort, productivity losses, and health issues. Going forward, better tools are needed for considering the human-building interaction as a key part of energy efficiency strategies that promote good Indoor Environmental Quality (IEQ) in buildings. This dissertation presents the development and implementation of a Human and Building Interaction Toolkit (HABIT), a framework for the integrated simulation of office occupants' thermally adaptive behaviors, IEQ, and building energy use as part of sustainable building design and operation. Development of HABIT begins with an effort to devise more reliable methods for predicting individual occupants' thermal comfort, considered the driving force behind the behaviors of focus for this project. A long-term field study of thermal comfort and behavior is then presented, and the data it generates are used to develop and validate an agent-based behavior simulation model. Key aspects of the agent-based behavior model are described, and its predictive abilities are shown to compare favorably to those of multiple other behavior modeling options. Finally, the agent-based behavior model is linked with whole building energy simulation in EnergyPlus, forming the full HABIT program. The program is used to evaluate the energy and IEQ impacts of several occupant behavior scenarios in the simulation of a case study office building for the Philadelphia climate. Results indicate that more efficient local heating/cooling options may be paired with wider set point ranges to yield up to 24

  19. Interferometric measurement method of thin film thickness based on FFT

    NASA Astrophysics Data System (ADS)

    Shuai, Gaolong; Su, Junhong; Yang, Lihong; Xu, Junqi

    2009-05-01

    The kernel of modern interferometry is to obtain the required surface shape and parameters by processing interferograms with a suitable algorithm. This paper studies the basic principle of interferometry involving the 2-D FFT and proposes a new method for measuring thin film thickness based on the FFT: a fringe interferogram of the measured thin film is obtained with a Twyman-Green interferometer, a CCD receiver and an acquisition card. Based on interferogram processing techniques, software was developed to perform identification of the film edges, regional extension, filtering, unwrapping of the wrapped phase, etc. In this way the surface information of the coated film is obtained and the thickness of the thin film samples is measured automatically. The findings indicate that the PV and RMS values of the measured film samples are 0.256 λ and 0.068 λ, respectively, which shows that the new method has high precision.
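    FFT-based fringe analysis of the kind described above is commonly done by isolating the carrier lobe of the interferogram's spectrum and taking the phase of the inverse transform (the Takeda Fourier-transform method). The 1-D sketch below demonstrates that step on a synthetic fringe signal; the carrier frequency, filter band and test phase are invented, and this is a generic illustration rather than the authors' film-thickness software.

```python
# Fourier-transform fringe analysis (Takeda-style) on a synthetic 1-D interferogram:
# isolate the carrier lobe in the spectrum, inverse-transform, and unwrap the phase.
import numpy as np

n = 1024
x = np.arange(n)
carrier = 0.08                                   # carrier frequency in cycles/pixel
true_phase = 1.5 * np.sin(2 * np.pi * x / n)     # synthetic phase to recover (radians)
fringes = 1.0 + 0.8 * np.cos(2 * np.pi * carrier * x + true_phase)

spectrum = np.fft.fft(fringes)
freqs = np.fft.fftfreq(n)
band = np.abs(freqs - carrier) < 0.04            # keep only the +carrier lobe
analytic = np.fft.ifft(np.where(band, spectrum, 0.0))

wrapped = np.angle(analytic) - 2 * np.pi * carrier * x
recovered = np.unwrap(wrapped)
recovered -= recovered.mean() - true_phase.mean()   # remove the arbitrary phase offset

print("max phase error (rad):", np.max(np.abs(recovered - true_phase)))
```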

  20. Application of FFT analyzed cardiac Doppler signals to fuzzy algorithm.

    PubMed

    Güler, Inan; Hardalaç, Firat; Barişçi, Necaattin

    2002-11-01

    Doppler signals, recorded from the outputs of the tricuspid, mitral, and aortic valves of 60 patients, were transferred to a personal computer via a 16-bit sound card. The fast Fourier transform (FFT) method was applied to the recorded signal from each patient. Since the FFT method inherently cannot offer good spectral resolution for highly turbulent blood flows, it sometimes leads to wrong interpretation of cardiac Doppler signals. In order to avoid this problem, signals from six known heart diseases - hypertension, mitral stenosis, mitral failure, tricuspid stenosis, aortic stenosis and aortic insufficiency - were first introduced to a fuzzy algorithm. Then, the unknown heart diseases from 15 patients were applied to the same fuzzy algorithm in order to detect the kind of disease. It is observed that the fuzzy algorithm gives correct results in detecting the kind of disease.

  1. Classification of EMG signals using PCA and FFT.

    PubMed

    Güler, Nihal Fatma; Koçer, Sabri

    2005-06-01

    In this study, fast Fourier transform (FFT) analysis was applied to EMG signals recorded from the ulnar nerves of 59 patients in order to interpret the data. The patients were diagnosed by neurologists: 19 patients were normal, 20 patients had neuropathy and 20 patients had myopathy. The number of FFT coefficients was reduced using principal component analysis (PCA), which facilitates the calculation and storage of EMG data. The PCA coefficients were applied to a multilayer perceptron (MLP) and a support vector machine (SVM), and the performance values of both classification systems were computed. The results show that the SVM has a high prediction level in the diagnosis of neuromuscular disorders, and its test performance is high compared with that of the MLP.
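    The processing chain described above - FFT features, PCA for dimensionality reduction, then an SVM classifier - is straightforward to prototype with scikit-learn. The sketch below does so on crude synthetic stand-in signals, purely to show how the pieces connect; it does not attempt to reproduce the clinical EMG results.

```python
# FFT features -> PCA -> SVM, the processing chain described in the record above,
# demonstrated on synthetic stand-in signals (not clinical EMG data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
fs, n_samples = 1000, 512

def synth_signal(kind):
    """Crude stand-ins: three classes differing in spectral content."""
    t = np.arange(n_samples) / fs
    base = rng.normal(scale=0.3, size=n_samples)
    if kind == 0:                                  # "normal": mid-band tone
        return base + np.sin(2 * np.pi * 80 * t)
    if kind == 1:                                  # "myopathy-like": higher band
        return base + np.sin(2 * np.pi * 200 * t)
    return base + np.sin(2 * np.pi * 30 * t)       # "neuropathy-like": low band

labels = rng.integers(0, 3, 300)
features = np.array([np.abs(np.fft.rfft(synth_signal(k))) for k in labels])  # FFT magnitude features

x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(x_train, y_train)
print("test accuracy:", round(model.score(x_test, y_test), 3))
```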

  2. Saliency detection for videos using 3D FFT local spectra

    NASA Astrophysics Data System (ADS)

    Long, Zhiling; AlRegib, Ghassan

    2015-03-01

    Bottom-up spatio-temporal saliency detection identifies perceptually important regions of interest in video sequences. The center-surround model proves to be useful for visual saliency detection. In this work, we explore using 3D FFT local spectra as features for saliency detection within the center-surround framework. We develop a spectral location based decomposition scheme to divide a 3D FFT cube into two components, one related to temporal changes and the other related to spatial changes. Temporal saliency and spatial saliency are detected separately using features derived from each spectral component through a simple center-surround comparison method. The two detection results are then combined to yield a saliency map. We apply the same detection algorithm to different color channels (YIQ) and incorporate the results into the final saliency determination. The proposed technique is tested with the public CRCNS database. Both visual and numerical evaluations verify the promising performance of our technique.

  3. Efficient Lookup Table-Based Adaptive Baseband Predistortion Architecture for Memoryless Nonlinearity

    NASA Astrophysics Data System (ADS)

    Ba, Seydou N.; Waheed, Khurram; Zhou, G. Tong

    2010-12-01

    Digital predistortion is an effective means to compensate for the nonlinear effects of a memoryless system. In the case of a cellular transmitter, a digital baseband predistorter can mitigate the undesirable nonlinear effects along the signal chain, particularly the nonlinear impairments in the radiofrequency (RF) amplifiers. To be practically feasible, the implementation complexity of the predistorter must be minimized so that it becomes a cost-effective solution for the resource-limited wireless handset. This paper proposes optimizations that facilitate the design of a low-cost high-performance adaptive digital baseband predistorter for memoryless systems. A comparative performance analysis of the amplitude and power lookup table (LUT) indexing schemes is presented. An optimized low-complexity amplitude approximation and its hardware synthesis results are also studied. An efficient LUT predistorter training algorithm that combines the fast convergence speed of the normalized least mean squares (NLMS) algorithm with a small hardware footprint is proposed. Results of fixed-point simulations based on the measured nonlinear characteristics of an RF amplifier are presented.
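
    A minimal sketch of the kind of LUT training loop the abstract refers to is shown below: an amplitude-indexed complex-gain LUT is adapted with a normalized-LMS-style update so that the cascade of LUT and amplifier approaches a linear response. The amplifier model, LUT size, and step size are assumptions, not the paper's measured characteristics.

      import numpy as np

      rng = np.random.default_rng(0)
      N_LUT, mu, n_samples = 128, 0.5, 50_000

      def pa(z):
          # Assumed memoryless soft-saturation amplifier model, for demonstration only.
          return z / (1.0 + 0.25 * np.abs(z) ** 2)

      lut = np.ones(N_LUT, dtype=complex)               # complex gain entries, start at unity
      x = 0.4 * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples))

      for xn in x:
          idx = min(int(np.abs(xn) / 2.0 * N_LUT), N_LUT - 1)            # amplitude indexing
          e = xn - pa(lut[idx] * xn)                                     # error vs. desired linear output
          lut[idx] += mu * e * np.conj(xn) / (np.abs(xn) ** 2 + 1e-12)   # NLMS-style update

      idx_all = np.minimum((np.abs(x) / 2.0 * N_LUT).astype(int), N_LUT - 1)
      print("residual rms error:", np.sqrt(np.mean(np.abs(x - pa(lut[idx_all] * x)) ** 2)))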

  4. Neural self-adapting architecture for video-on-radio devices

    NASA Astrophysics Data System (ADS)

    Basti, Gianfranco; Perrone, Antonio L.

    2002-03-01

    In this paper we sketch some technical details of an FM sub-carrier technology called Multi Purpose Radio Communication Channel (MPRC). This technology currently delivers data at a maximum rate of around 40 kbit/s using a proprietary codec algorithm, the Subsidiary Communication Channel (SCC). A core device of this codec algorithm is a DWT compressor with proprietary pre-processing, consisting of a neural self-adapting filter, the Dynamic Perceptron Algorithm (DPA), which is able to detect edges and extract objects from the moving image flow so as to optimize the overall compression rate and image quality. As a result, it is possible to obtain video transmission in QCIF format at roughly 8-12 fps using 35 kHz of the 100 kHz available to a commercial FM radio station in Europe. This allows video to be transmitted on FM radio together with the usual radio broadcasting. If, instead, all of the available 100 kHz is used, then after the overhead of the error protocol a channel of about 113 kbit/s remains for compressed video transmission, allowing high quality 640x480 (zoomed or not zoomed) video images.

  5. Real-Time, Polyphase-FFT, 640-MHz Spectrum Analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, George A.; Garyantes, Michael F.; Grimm, Michael J.; Charny, Bentsian; Brown, Randy D.; Wilck, Helmut C.

    1994-01-01

    Real-time polyphase-fast-Fourier-transform, polyphase-FFT, spectrum analyzer designed to aid in detection of multigigahertz radio signals in two 320-MHz-wide polarization channels. Spectrum analyzer divides total spectrum of 640 MHz into 33,554,432 frequency channels of about 20 Hz each. Size and cost of polyphase-coefficient memory substantially reduced and much of processing loss of windowed FFTs eliminated.
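
    For readers unfamiliar with the structure, the sketch below shows a small polyphase-FFT (weighted overlap-add) channelizer of the same general kind, not the flight hardware described above; the channel count, prototype window, and test tone are assumptions.

      import numpy as np

      M, taps_per_branch = 32, 8                    # M channels; prototype length M * taps
      h = np.hamming(M * taps_per_branch)           # prototype low-pass window
      h /= h.sum()
      poly = h.reshape(taps_per_branch, M)          # polyphase decomposition of the window

      def channelize(x):
          """Split x into M frequency channels, one M-point spectrum per block of M samples."""
          n_blocks = len(x) // M - taps_per_branch + 1
          out = np.empty((n_blocks, M), dtype=complex)
          for b in range(n_blocks):
              seg = x[b * M:(b + taps_per_branch) * M].reshape(taps_per_branch, M)
              out[b] = np.fft.fft((seg * poly).sum(axis=0))   # windowed sum, then M-point FFT
          return out

      # A tone at 5.3/M of the sample rate lands mainly in channel 5, with low leakage elsewhere.
      x = np.exp(2j * np.pi * (5.3 / M) * np.arange(4096))
      print("strongest channel:", int(np.abs(channelize(x)).mean(axis=0).argmax()))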

  6. Sensor for Distance Estimation Using FFT of Images.

    PubMed

    Lázaro, José L; Cano, Angel E; Fernández, Pedro R; Luna, Carlos A

    2009-01-01

    In this paper, the problem of how to estimate the distance between an infrared emitter diode (IRED) and a camera from pixel grey-level intensities is examined from a practical standpoint. Magnitudes that affect grey level intensity were defined and related to the zero frequency component from the FFT image. A general model was also described and tested for distance estimation over the range from 420 to 800 cm using a differential methodology. Method accuracy is over 3%.

  7. Planar Arrays on Lattices and Their FFT Steering, a Primer

    DTIC Science & Technology

    2011-04-29

    by a tall B, so we cannot properly term it the inverse of B. A different terminology is called for. The Moore-Penrose pseudoinverse of any matrix B ... lattice and using terminated guard elements at the array periphery. The second is multi-beam phase-shift steering of such arrays using generalized Cooley ... FFT realization of the general multidimensional DFT for beam steering is developed using nested sublattice chains. The needed lattice basics are ...

  8. Adaptive line enhancers for fast acquisition

    NASA Technical Reports Server (NTRS)

    Yeh, H.-G.; Nguyen, T. M.

    1994-01-01

    Three adaptive line enhancer (ALE) algorithms and architectures - namely, conventional ALE, ALE with double filtering, and ALE with coherent accumulation - are investigated for fast carrier acquisition in the time domain. The advantages of these algorithms are their simplicity, flexibility, robustness, and applicability to general situations including the Earth-to-space uplink carrier acquisition and tracking of the spacecraft. In the acquisition mode, these algorithms act as bandpass filters; hence, the carrier-to-noise ratio (CNR) is improved for fast acquisition. In the tracking mode, these algorithms simply act as lowpass filters to improve signal-to-noise ratio; hence, better tracking performance is obtained. It is not necessary to have a priori knowledge of the received signal parameters, such as CNR, Doppler, and carrier sweeping rate. The implementation of these algorithms is in the time domain (as opposed to the frequency domain, such as the fast Fourier transform (FFT)). The carrier frequency estimation can be updated in real time at each time sample (as opposed to the batch processing of the FFT). The carrier frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored.
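
    A minimal sketch of the conventional ALE mentioned above is given below: an LMS predictor with a one-sample decorrelation delay that passes a narrow-band carrier and rejects broadband noise. The filter length, delay, and step size are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n, L, delay, mu = 20_000, 32, 1, 0.002
      t = np.arange(n)
      x = np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(n)   # carrier-like tone + white noise

      w = np.zeros(L)
      y = np.zeros(n)                            # enhanced (narrow-band) output
      for k in range(L + delay, n):
          u = x[k - delay - L:k - delay][::-1]   # delayed reference vector
          y[k] = w @ u                           # prediction of the current sample
          e = x[k] - y[k]                        # broadband (noise) error
          w += mu * e * u                        # LMS update

      # The predictor output carries most of the tone power (about 0.5) and little of the noise.
      print("output power:", round(float(np.var(y[n // 2:])), 3))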

  9. Thirty Meter Telescope (TMT) Narrow Field Infrared Adaptive Optics System (NFIRAOS) real-time controller preliminary architecture

    NASA Astrophysics Data System (ADS)

    Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi

    2016-08-01

    The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC is comprised of several Linux-based servers. These servers are assigned various roles including: the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels, and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The Telemetry Engineering Display (TED) server is the RTC interface to TMT and other subsystems. The TED server receives all external commands and dispatches them to the rest of the RTC servers and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The Persistent Telemetry Storage (PTS) server contains fault tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).

  10. MRAG-I2D: Multi-resolution adapted grids for remeshed vortex methods on multicore architectures

    NASA Astrophysics Data System (ADS)

    Rossinelli, Diego; Hejazialhosseini, Babak; van Rees, Wim; Gazzola, Mattia; Bergdorf, Michael; Koumoutsakos, Petros

    2015-05-01

    We present MRAG-I2D, an open source software framework, for multiresolution simulations of two-dimensional, incompressible, viscous flows on multicore architectures. The spatiotemporal scales of the flow field are captured by remeshed vortex methods enhanced by high order average-interpolating wavelets and local time-stepping. The multiresolution solver of the Poisson equation relies on the development of a novel, tree-based multipole method. MRAG-I2D implements a number of HPC strategies to map efficiently the irregular computational workload of wavelet-adapted grids on multicore nodes. The capabilities of the present software are compared to the current state-of-the-art in terms of accuracy, compression rates and time-to-solution. Benchmarks include the inviscid evolution of an elliptical vortex, flow past an impulsively started cylinder at Re = 40 to 40 000 and simulations of self-propelled anguilliform swimmers. The results indicate that the present software has the same or better accuracy than state-of-the-art solvers while it exhibits unprecedented performance in terms of time-to-solution.

  11. Adaptable dialog architecture and runtime engine (AdaRTE): a framework for rapid prototyping of health dialog systems.

    PubMed

    Rojas-Barahona, L M; Giorgino, T

    2009-04-01

    Spoken dialog systems have been increasingly employed to provide ubiquitous access via telephone to information and services for the non-Internet-connected public. They have been successfully applied in the health care context; however, speech technology requires a considerable development investment. The advent of VoiceXML reduced the proliferation of incompatible dialog formalisms, at the expense of adding even more complexity. This paper introduces a novel architecture for dialogue representation and interpretation, AdaRTE, which allows developers to lay out dialog interactions through a high-level formalism, offering both declarative and procedural features. AdaRTE's aim is to provide a ground for deploying complex and adaptable dialogs whilst allowing experimentation and incremental adoption of innovative speech technologies. It enhances augmented transition networks with dynamic behavior, and drives multiple back-end realizers, including VoiceXML. It has been especially targeted to the health care context, because of the great scale and the need for reducing the barrier to a widespread adoption of dialog systems.

  12. Architectural and Biochemical Adaptations in Skeletal Muscle and Bone Following Rotator Cuff Injury in a Rat Model

    PubMed Central

    Sato, Eugene J.; Killian, Megan L.; Choi, Anthony J.; Lin, Evie; Choo, Alexander D.; Rodriguez-Soto, Ana E.; Lim, Chanteak T.; Thomopoulos, Stavros; Galatz, Leesa M.; Ward, Samuel R.

    2015-01-01

    Background: Injury to the rotator cuff can cause irreversible changes to the structure and function of the associated muscles and bones. The temporal progression and pathomechanisms associated with these adaptations are unclear. The purpose of this study was to investigate the time course of structural muscle and osseous changes in a rat model of a massive rotator cuff tear. Methods: Supraspinatus and infraspinatus muscle architecture and biochemistry and humeral and scapular morphological parameters were measured three days, eight weeks, and sixteen weeks after dual tenotomy with and without chemical paralysis via botulinum toxin A (BTX). Results: Muscle mass and physiological cross-sectional area increased over time in the age-matched control animals, decreased over time in the tenotomy+BTX group, and remained nearly the same in the tenotomy-alone group. Tenotomy+BTX led to increased extracellular collagen in the muscle. Changes in scapular bone morphology were observed in both experimental groups, consistent with reductions in load transmission across the joint. Conclusions: These data suggest that tenotomy alone interferes with normal age-related muscle growth. The addition of chemical paralysis yielded profound structural changes to the muscle and bone, potentially leading to impaired muscle function, increased muscle stiffness, and decreased bone strength. Clinical Relevance: Structural musculoskeletal changes occur after tendon injury, and these changes are severely exacerbated with the addition of neuromuscular compromise. PMID:25834081

  13. Architectural adaptation and protein expression patterns of Salmonella enterica serovar Enteritidis biofilms under laminar flow conditions.

    PubMed

    Mangalappalli-Illathu, Anil K; Lawrence, John R; Swerhone, George D W; Korber, Darren R

    2008-03-31

    Salmonella enterica serovar Enteritidis is a significant biofilm-forming pathogen. The influence of a 10-fold difference in nutrient laminar flow velocity on the dynamics of Salmonella Enteritidis biofilm formation and protein expression profiles were compared in order to ascertain how flow velocity influenced biofilm structure and function. Low-flow (0.007 cm s(-1)) biofilms consisted of diffusely-arranged microcolonies which grew until merging by approximately 72 h. High-flow (0.07 cm s(-1)) biofilms were significantly thicker (36+/-3 microm (arithmetic mean+/-standard error; n=225) versus 16+/-2 microm for low-flow biofilms at 120 h) and consisted of large bacterial mounds interspersed by water channels. Lectin-binding analysis of biofilm exopolymers revealed a significantly higher (P<0.05) proportion of N-acetylgalactosamine (GalNAc) in low-flow biofilms (55.2%), relative to only 1.2% in high-flow biofilms. Alternatively, the proportions of alpha-L-fucose and N-acetylglucosamine (GlcNAc2)-N-acetylneuraminic acid (NeuNAc) polymer-conjugates were significantly higher (P<0.05) in high-flow biofilms (69.1% and 29.6%, respectively) than low-flow biofilms (33.1% and 11.7%, respectively). Despite an apparent flow rate-based physiologic effect on biofilm structure and exopolymer composition, no major shift in whole-cell protein expression patterns was seen between 168 h-old low-flow and high-flow biofilms, and notably did not include any response involving the stress response proteins, DnaK, SodB, and Tpx. Proteins involved in degradation and energy metabolism (PduA, GapA, GpmA, Pgk, and RpiA), RNA and protein biosynthesis (Tsf, TufA, and RpoZ), cell processes (Crr, MalE, and PtsH), and adaptation (GrcA), and some hypothetical proteins (YcbL and YnaF) became up-regulated in both biofilm systems relative to a 168 h-old planktonic cell control. Our results indicate that Salmonella Enteritidis biofilms altered their structure and extracellular glycoconjugate composition

  14. A reconfigurable multicarrier demodulator architecture

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Jamali, M. M.

    1991-01-01

    An architecture based on parallel and pipeline design approaches has been developed for the Frequency Division Multiple Access/Time Domain Multiplexed (FDMA/TDM) conversion system. The architecture has two main modules, namely the transmultiplexer and the demodulator. The transmultiplexer has two pipelined modules. These are the shared multiplexed polyphase filter and the Fast Fourier Transform (FFT). The demodulator consists of carrier, clock, and data recovery modules which are interactive. Progress on the design of the MultiCarrier Demodulator (MCD) using commercially available chips and Application Specific Integrated Circuits (ASIC) and simulation studies using Viewlogic software will be presented at the conference.

  15. The TurboLAN project. Phase 1: Protocol choices for high speed local area networks. Phase 2: TurboLAN Intelligent Network Adapter Card, (TINAC) architecture

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The hardware and the software architecture of the TurboLAN Intelligent Network Adapter Card (TINAC) are described. A high level as well as detailed treatment of the workings of various components of the TINAC are presented. The TINAC is divided into the following four major functional units: (1) the network access unit (NAU); (2) the buffer management unit; (3) the host interface unit; and (4) the node processor unit.

  16. STS-48 Commander Creighton, in LES, stands at JSC FFT side hatch

    NASA Technical Reports Server (NTRS)

    1991-01-01

    STS-48 Discovery, Orbiter Vehicle (OV) 103, Commander John O. Creighton, wearing a launch and entry suit (LES), stands at the side hatch of JSC's full fuselage trainer (FFT). Creighton will enter the FFT shuttle mockup through the side hatch and take his assigned position on the forward flight deck. Creighton, along with the other crewmembers, is participating in a post-landing emergency egress exercise. The FFT is located in the Mockup and Integration Laboratory (MAIL) Bldg 9A.

  17. Accounting for pairwise distance restraints in FFT-based protein-protein docking.

    PubMed

    Xia, Bing; Vajda, Sandor; Kozakov, Dima

    2016-11-01

    ClusPro is a heavily used protein-protein docking server based on the fast Fourier transform (FFT) correlation approach. While FFT enables global docking, accounting for pairwise distance restraints using penalty terms in the scoring function is computationally expensive. We use a different approach and directly select low energy solutions that also satisfy the given restraints. As expected, accounting for restraints generally improves the rank of near native predictions, while retaining or even improving the numerical efficiency of FFT based docking.
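
    The selection step described above can be sketched as follows (this is an illustration of the idea, not ClusPro's code): keep the low-energy rigid-body poses from the FFT search and discard those that violate the given pairwise distance restraints. The pose and restraint formats are assumptions.

      import numpy as np

      def satisfies_restraints(ligand_xyz, restraints, tol=0.0):
          """restraints: list of (receptor_atom_xyz, ligand_atom_index, dmin, dmax)."""
          for rec_xyz, lig_idx, dmin, dmax in restraints:
              d = np.linalg.norm(ligand_xyz[lig_idx] - rec_xyz)
              if not (dmin - tol <= d <= dmax + tol):
                  return False
          return True

      def select_poses(poses, energies, restraints, n_keep=1000):
          """poses: ligand coordinate arrays already transformed by the FFT docking search."""
          order = np.argsort(energies)                  # low energy first
          kept = [i for i in order if satisfies_restraints(poses[i], restraints)]
          return kept[:n_keep]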

  18. Implementation of FFT Algorithm using DSP TMS320F28335 for Shunt Active Power Filter

    NASA Astrophysics Data System (ADS)

    Patel, Pinkal Jashvantbhai; Patel, Rajesh M.; Patel, Vinod

    2016-07-01

    This work presents the simulation, analysis and experimental verification of a Fast Fourier Transform (FFT) algorithm for a shunt active power filter based on a three-level inverter. Different types of filters can be used for the elimination of harmonics in the power system. In this work, the FFT algorithm for reference current generation is discussed. The FFT control algorithm is verified using PSIM simulation results with a DLL block and C code. Simulation results are compared with experimental results for the FFT algorithm using the DSP TMS320F28335 for the shunt active power filter application.
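
    One common FFT-based reference-current scheme (a hedged sketch of the general approach, not necessarily the exact algorithm implemented on the DSP above) extracts the fundamental of the distorted load current and takes the remainder as the harmonic compensation reference; the sampling setup and harmonic content below are assumptions.

      import numpy as np

      fs, f1, cycles = 10_000, 50, 10
      t = np.arange(cycles * fs // f1) / fs                      # an integer number of fundamental periods
      i_load = (10 * np.sin(2 * np.pi * f1 * t)
                + 2 * np.sin(2 * np.pi * 5 * f1 * t)
                + 1.5 * np.sin(2 * np.pi * 7 * f1 * t))          # fundamental + 5th + 7th harmonics

      I = np.fft.rfft(i_load)
      k1 = cycles                                                # bin of the fundamental
      I_fund = np.zeros_like(I)
      I_fund[k1] = I[k1]                                         # keep only the fundamental bin
      i_fund = np.fft.irfft(I_fund, n=len(i_load))
      i_ref = i_load - i_fund                                    # harmonic reference for the active filter
      print("residual fundamental in reference:", abs(np.fft.rfft(i_ref)[k1]) / len(t))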

  19. STS-29 MS Bagian during post landing egress exercises in JSC FFT mockup

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-29 Discovery, Orbiter Vehicle (OV) 103, Mission Specialist (MS) James P. Bagian practices post landing egress via overhead window W8 in JSC full fuselage trainer (FFT) located in the Mockup and Integration Laboratory Bldg 9A. Bagian, wearing navy blue launch and entry suit (LES) and launch and entry helmet (LEH), lowers himself using the sky genie down FFT side. Technicians watch Bagian's progress from outside FFT and inside FFT at open side hatch. Visible on Bagian's right side is the personal egress air pack (PEAP). Bagian along with the engineers are evaluating egress using the new crew escape system (CES) equipment (including parachute harness).

  20. Microprocessor implementation of an FFT for ionospheric VLF observations

    NASA Technical Reports Server (NTRS)

    Elvidge, J.; Kintner, P.; Holzworth, R.

    1984-01-01

    A fast Fourier transform algorithm is implemented on a CMOS microprocessor for application to very low-frequency electric fields (less than 10 kHz) sensed on high-altitude scientific balloons. Two FFT's are calculated simultaneously by associating them with conjugate symmetric and conjugate antisymmetric results. One goal of the system was to detect spectral signatures associated with fast time variations present in natural signals such as whistlers and chorus. Although a full evaluation of the system was not possible for operational reasons, a measure of the system's success has been defined and evaluated.
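
    The "two FFTs at once" trick mentioned above is the standard packing of two real sequences into one complex FFT, with the two spectra separated through the conjugate-symmetric and conjugate-antisymmetric parts; a short sketch:

      import numpy as np

      rng = np.random.default_rng(2)
      N = 256
      a, b = rng.standard_normal(N), rng.standard_normal(N)

      Z = np.fft.fft(a + 1j * b)            # one complex FFT of the packed signal
      Zr = np.roll(Z[::-1], 1)              # Z[(N - k) mod N]
      A = 0.5 * (Z + np.conj(Zr))           # conjugate-symmetric part  -> FFT of a
      B = -0.5j * (Z - np.conj(Zr))         # conjugate-antisymmetric part -> FFT of b

      assert np.allclose(A, np.fft.fft(a)) and np.allclose(B, np.fft.fft(b))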

  1. Efficient modelling of gravity effects due to topographic masses using the Gauss-FFT method

    NASA Astrophysics Data System (ADS)

    Wu, Leyuan

    2016-04-01

    We present efficient Fourier-domain algorithms for modelling gravity effects due to topographic masses. The well-known Parker's formula originally based on the standard fast Fourier transform (FFT) algorithm is modified by applying the Gauss-FFT method instead. Numerical precision of the forward and inverse Fourier transforms embedded in Parker's formula and its extended forms are significantly improved by the Gauss-FFT method. The topographic model is composed of two major aspects, the geometry and the density. Versatile geometric representations, including the mass line model, the mass prism model, the polyhedron model and smoother topographic models interpolated from discrete data sets using high-order splines or pre-defined by analytical functions, in combination with density distributions that vary both laterally and vertically in rather arbitrary ways following exponential or general polynomial functions, now can be treated in a consistent framework by applying the Gauss-FFT method. The method presented has been numerically checked by space-domain analytical and hybrid analytical/numerical solutions already established in the literature. Synthetic and real model tests show that both the Gauss-FFT method and the standard FFT method run much faster than space-domain solutions, with the Gauss-FFT method being superior in numerical accuracy. When truncation errors are negligible, the Gauss-FFT method can provide forward results almost identical to space-domain analytical or semi-numerical solutions in much less time.

  2. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  3. A Method for Finding Unknown Signals Using Reinforcement FFT Differencing

    SciTech Connect

    Charles R. Tolle; John W. James

    2009-12-01

    This note addresses a simple yet powerful method of discovering the spectral character of an unknown but intermittent signal buried in a background made up of a distribution of other signals. Knowledge of when the unknown signal is present and when it is not, along with samples of the combined signal when the unknown signal is present and when it is not are all that is necessary for this method. The method is based on reinforcing Fast Fourier Transform (FFT) power spectra when the signal of interest occurs and subtracting spectra when it does not. Several examples are presented. This method could be used to discover spectral components of unknown chemical species within spectral analysis instruments such as Mass Spectroscopy, Fourier Transform Infrared Spectroscopy (FTIR) and Gas Chromatography. In addition, this method can be used to isolate device loading signatures on power transmission lines.
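
    The reinforce/subtract idea is simple to sketch: accumulate FFT power spectra while the intermittent signal is known to be present and subtract spectra taken while it is absent, so that only the unknown signal's lines build up. The frequencies and segment bookkeeping below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      fs, nfft = 1000, 1024
      freqs = np.fft.rfftfreq(nfft, 1 / fs)

      def segment(present):
          t = np.arange(nfft) / fs
          background = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(nfft)
          unknown = 0.3 * np.sin(2 * np.pi * 137 * t) if present else 0.0
          return background + unknown

      acc = np.zeros(len(freqs))
      for _ in range(200):
          acc += np.abs(np.fft.rfft(segment(present=True))) ** 2    # reinforce
          acc -= np.abs(np.fft.rfft(segment(present=False))) ** 2   # difference away the background

      print("recovered spectral line near (Hz):", round(float(freqs[np.argmax(acc)]), 1))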

  4. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  5. High order finite volume methods on wavelet-adapted grids with local time-stepping on multicore architectures for the simulation of shock-bubble interactions

    NASA Astrophysics Data System (ADS)

    Hejazialhosseini, Babak; Rossinelli, Diego; Bergdorf, Michael; Koumoutsakos, Petros

    2010-11-01

    We present a space-time adaptive solver for single- and multi-phase compressible flows that couples average interpolating wavelets with high-order finite volume schemes. The solver introduces the concept of wavelet blocks, handles large jumps in resolution and employs local time-stepping for efficient time integration. We demonstrate that the inherently sequential wavelet-based adaptivity can be implemented efficiently in multicore computer architectures using task-based parallelism and introducing the concept of wavelet blocks. We validate our computational method on a number of benchmark problems and we present simulations of shock-bubble interaction at different Mach numbers, demonstrating the accuracy and computational performance of the method.

  6. Fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave and free-space-optics architecture with an adaptive diversity combining technique.

    PubMed

    Zhang, Junwen; Wang, Jing; Xu, Yuming; Xu, Mu; Lu, Feng; Cheng, Lin; Yu, Jianjun; Chang, Gee-Kung

    2016-05-01

    We propose and experimentally demonstrate a novel fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave (MMW) and free-space-optics (FSO) architecture using an adaptive combining technique. Both 60 GHz MMW and FSO links are demonstrated and fully integrated with optical fibers in a scalable and cost-effective backhaul system setup. Joint signal processing with an adaptive diversity combining technique (ADCT) is utilized at the receiver side based on a maximum ratio combining algorithm. Mobile backhaul transportation of 4-Gb/s 16 quadrature amplitude modulation orthogonal frequency-division multiplexing (QAM-OFDM) data is experimentally demonstrated and tested under various weather conditions synthesized in the lab. Performance improvements in terms of reduced error vector magnitude (EVM) and enhanced link reliability are validated under fog, rain, and turbulence conditions.
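
    The maximum-ratio-combining step named above can be sketched as follows for two received branches (e.g., an MMW branch and an FSO branch); the channel gains, noise levels, and QPSK test symbols are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 10_000
      s = (2 * rng.integers(0, 2, n) - 1) + 1j * (2 * rng.integers(0, 2, n) - 1)   # QPSK symbols

      h = np.array([0.9 * np.exp(1j * 0.3), 0.4 * np.exp(-1j * 1.1)])   # assumed branch gains
      sigma = np.array([0.3, 0.5])                                      # assumed branch noise levels
      noise = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
      r = h[:, None] * s + sigma[:, None] * noise

      w = np.conj(h) / sigma ** 2                               # MRC weights: conjugate gain over noise power
      y = (w[:, None] * r).sum(axis=0) / (w * h).sum().real     # combine and normalize

      evm = np.sqrt(np.mean(np.abs(y - s) ** 2) / np.mean(np.abs(s) ** 2))
      print("combined-branch EVM:", round(float(evm), 3))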

  7. Software Technology for Adaptable, Reliable Systems (STARS). Software Architecture Seminar Report: Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-01-29

    designs and approaches without some standard defining the quality of the product. Software development is just beginning to have such a standard. ... non-functional quality features. Instead of post-mortem evaluation of ... quality factors, the approach described builds ... into the ... Architectures Defined ... 2.2.3 CARDS Approach

  8. STS-48 Pilot Reightler and MS Brown, in LESs, stand at JSC FFT side hatch

    NASA Technical Reports Server (NTRS)

    1991-01-01

    STS-48 Discovery, Orbiter Vehicle (OV) 103, Pilot Kenneth S. Reightler, Jr (left) and Mission Specialist (MS) Mark N. Brown, wearing launch and entry suits (LESs), stand at the side hatch of JSC's full fuselage trainer (FFT). The crewmembers will enter the FFT shuttle mockup through the side hatch and take their assigned descent (landing) positions in the crew cabin. Reightler and Brown, along with the other crewmembers, are participating in a post-landing emergency egress exercise. The FFT is located in the Mockup and Integration Laboratory (MAIL) Bldg 9A.

  9. High resolution 4-D spectroscopy with sparse concentric shell sampling and FFT-CLEAN.

    PubMed

    Coggins, Brian E; Zhou, Pei

    2008-12-01

    Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise.
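
    A one-dimensional sketch of the CLEAN step referred to above is given below: iteratively locate the strongest peak in the "dirty" spectrum, subtract a scaled copy of the known point response of the sampling pattern, and accumulate the removed components. The gain, threshold, and toy point response are assumptions; the real FFT-CLEAN operates on multidimensional NMR spectra.

      import numpy as np

      def clean(dirty, psf, gain=0.1, n_iter=500, threshold=1e-3):
          """psf is the point response of the sampling pattern, normalized to unit peak."""
          residual = dirty.astype(float).copy()
          components = np.zeros_like(residual)
          center = int(np.argmax(psf))
          for _ in range(n_iter):
              k = int(np.argmax(np.abs(residual)))
              if abs(residual[k]) < threshold * np.abs(dirty).max():
                  break
              amp = gain * residual[k]
              components[k] += amp
              residual -= amp * np.roll(psf, k - center)   # remove shifted, scaled point response
          return components, residual

      # Toy example: one spectral line observed through a point response with aliasing artifacts.
      psf = np.zeros(128); psf[0] = 1.0; psf[[5, 123]] = 0.3
      dirty = np.roll(psf, 40)
      comps, resid = clean(dirty, psf)
      print("recovered line position:", int(np.argmax(comps)))   # 40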

  10. Modeling the two-locus architecture of divergent pollinator adaptation: how variation in SAD paralogs affects fitness and evolutionary divergence in sexually deceptive orchids

    PubMed Central

    Xu, Shuqing; Schlüter, Philipp M

    2015-01-01

    Divergent selection by pollinators can bring about strong reproductive isolation via changes at few genes of large effect. This has recently been demonstrated in sexually deceptive orchids, where studies (1) quantified the strength of reproductive isolation in the field; (2) identified genes that appear to be causal for reproductive isolation; and (3) demonstrated selection by analysis of natural variation in gene sequence and expression. In a group of closely related Ophrys orchids, specific floral scent components, namely n-alkenes, are the key floral traits that control specific pollinator attraction by chemical mimicry of insect sex pheromones. The genetic basis of species-specific differences in alkene production mainly lies in two biosynthetic genes encoding stearoyl–acyl carrier protein desaturases (SAD) that are associated with floral scent variation and reproductive isolation between closely related species, and evolve under pollinator-mediated selection. However, the implications of this genetic architecture of key floral traits on the evolutionary processes of pollinator adaptation and speciation in this plant group remain unclear. Here, we expand on these recent findings to model scenarios of adaptive evolutionary change at SAD2 and SAD5, their effects on plant fitness (i.e., offspring number), and the dynamics of speciation. Our model suggests that the two-locus architecture of reproductive isolation allows for rapid sympatric speciation by pollinator shift; however, the likelihood of such pollinator-mediated speciation is asymmetric between the two orchid species O. sphegodes and O. exaltata due to different fitness effects of their predominant SAD2 and SAD5 alleles. Our study not only provides insight into pollinator adaptation and speciation mechanisms of sexually deceptive orchids but also demonstrates the power of applying a modeling approach to the study of pollinator-driven ecological speciation. PMID:25691974

  11. Modeling the two-locus architecture of divergent pollinator adaptation: how variation in SAD paralogs affects fitness and evolutionary divergence in sexually deceptive orchids.

    PubMed

    Xu, Shuqing; Schlüter, Philipp M

    2015-01-01

    Divergent selection by pollinators can bring about strong reproductive isolation via changes at few genes of large effect. This has recently been demonstrated in sexually deceptive orchids, where studies (1) quantified the strength of reproductive isolation in the field; (2) identified genes that appear to be causal for reproductive isolation; and (3) demonstrated selection by analysis of natural variation in gene sequence and expression. In a group of closely related Ophrys orchids, specific floral scent components, namely n-alkenes, are the key floral traits that control specific pollinator attraction by chemical mimicry of insect sex pheromones. The genetic basis of species-specific differences in alkene production mainly lies in two biosynthetic genes encoding stearoyl-acyl carrier protein desaturases (SAD) that are associated with floral scent variation and reproductive isolation between closely related species, and evolve under pollinator-mediated selection. However, the implications of this genetic architecture of key floral traits on the evolutionary processes of pollinator adaptation and speciation in this plant group remain unclear. Here, we expand on these recent findings to model scenarios of adaptive evolutionary change at SAD2 and SAD5, their effects on plant fitness (i.e., offspring number), and the dynamics of speciation. Our model suggests that the two-locus architecture of reproductive isolation allows for rapid sympatric speciation by pollinator shift; however, the likelihood of such pollinator-mediated speciation is asymmetric between the two orchid species O. sphegodes and O. exaltata due to different fitness effects of their predominant SAD2 and SAD5 alleles. Our study not only provides insight into pollinator adaptation and speciation mechanisms of sexually deceptive orchids but also demonstrates the power of applying a modeling approach to the study of pollinator-driven ecological speciation.

  12. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  13. A study of the spectral broadening of simulated Doppler signals using FFT and AR modelling.

    PubMed

    Keeton, P I; Schlindwein, F S; Evans, D H

    1997-01-01

    Doppler ultrasound is used clinically to detect stenosis in the carotid artery. The presence of stenosis may be identified by disturbed flow patterns distal to the stenosis that cause spectral broadening in the spectrum of the Doppler signal around peak systole. This paper investigates the behaviour of the spectral broadening index (SBI) derived from wide-band spectra obtained using autoregressive modelling (AR), compared with the SBI based on the fast-Fourier transform (FFT) spectra. Simulated Doppler signals were created using white noise and shaped filters to analyse spectra typically found around the systolic peak and to assess the magnitude and variance of AR and FFT-SBI for a range of signal-to-noise ratios. The results of the analysis show a strong correlation between the indices calculated using the FFT and AR algorithms. Despite the qualitative improvement of the AR spectra over the FFT, the estimation of SBI for short data frames is not significantly improved using AR.

  14. Chromatin remodeller Fun30Fft3 induces nucleosome disassembly to facilitate RNA polymerase II elongation

    PubMed Central

    Lee, Junwoo; Shik Choi, Eun; David Seo, Hogyu; Kang, Keunsoo; Gilmore, Joshua M.; Florens, Laurence; Washburn, Michael P.; Choe, Joonho; Workman, Jerry L.; Lee, Daeyoup

    2017-01-01

    Previous studies have revealed that nucleosomes impede elongation of RNA polymerase II (RNAPII). Recent observations suggest a role for ATP-dependent chromatin remodellers in modulating this process, but direct in vivo evidence for this is unknown. Here using fission yeast, we identify Fun30Fft3 as a chromatin remodeller, which localizes at transcribing regions to promote RNAPII transcription. Fun30Fft3 associates with RNAPII and collaborates with the histone chaperone, FACT, which facilitates RNAPII elongation through chromatin, to induce nucleosome disassembly at transcribing regions during RNAPII transcription. Mutants, resulting in reduced nucleosome-barrier, such as deletion mutants of histones H3/H4 themselves and the genes encoding components of histone deacetylase Clr6 complex II suppress the defects in growth and RNAPII occupancy of cells lacking Fun30Fft3. These data suggest that RNAPII utilizes the chromatin remodeller, Fun30Fft3, to overcome the nucleosome barrier to transcription elongation. PMID:28218250

  15. Design and Performance of Overlap FFT Filter-Bank for Dynamic Spectrum Access Applications

    NASA Astrophysics Data System (ADS)

    Tanabe, Motohiro; Umehira, Masahiro

    An OFDMA-based (Orthogonal Frequency Division Multiple Access-based) channel access scheme for dynamic spectrum access has the drawbacks of large PAPR (Peak to Average Power Ratio) and large ACI (Adjacent Channel Interference). To solve these problems, a flexible channel access scheme using an overlap FFT filter-bank was proposed based on single carrier modulation for dynamic spectrum access. In order to apply the overlap FFT filter-bank for dynamic spectrum access, it is necessary to clarify the performance of the overlap FFT filter-bank according to the design parameters since its frequency characteristics are critical for dynamic spectrum access applications. This paper analyzes the overlap FFT filter-bank and evaluates its performance such as frequency characteristics and ACI performance according to the design parameters.

  16. STS-33 EVA Prep and Post with Gregory, Blaha, Carter, Thornton, and Musgrave in FFT

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This video shows the crew in the airlock of the FFT, talking with technicians about the extravehicular activity (EVA) equipment. Thornton and Carter put on EVA suits and enter the airlock as the other crew members help with checklists.

  17. On the application of pseudo-spectral FFT technique to non-periodic problems

    NASA Technical Reports Server (NTRS)

    Biringen, S.; Kao, K. H.

    1988-01-01

    The reduction-to-periodicity method using the pseudo-spectral Fast Fourier Transform (FFT) technique is applied to the solution of nonperiodic problems including the two-dimensional Navier-Stokes equations. The accuracy of the method is demonstrated by calculating derivatives of given functions, one- and two-dimensional convective-diffusive problems, and by comparing the relative errors due to the FFT method with second order Finite Difference Methods (FDM). Finally, the two-dimensional Navier-Stokes equations are solved by a fractional step procedure using both the FFT and the FDM methods for the driven cavity flow and the backward facing step problems. Comparisons of these solutions provide a realistic assessment of the FFT method indicating its range of applicability.

  18. A Portable 3D FFT Package for Distributed-Memory Parallel Architectures

    NASA Technical Reports Server (NTRS)

    Ding, H. Q.; Ferraro, R. D.; Gennery, D. B.

    1995-01-01

    A parallel algorithm for 3D FFTs is implemented as a series of local 1D FFTs combined with data transposes. This allows the use of vendor supplied (often fully optimized) sequential 1D FFTs. The FFTs are carried out in-place by using an in-place data transpose across the processors.
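
    On a single node the same decomposition can be written in a few lines, as a sanity check of the idea (the distributed version replaces the local transposes with an all-to-all exchange across processors):

      import numpy as np

      rng = np.random.default_rng(5)
      a = rng.standard_normal((8, 8, 8)) + 1j * rng.standard_normal((8, 8, 8))

      x = np.fft.fft(a, axis=2)                                           # 1-D FFTs along the last axis
      x = np.fft.fft(x.transpose(0, 2, 1), axis=2).transpose(0, 2, 1)     # transpose, FFT, restore
      x = np.fft.fft(x.transpose(2, 1, 0), axis=2).transpose(2, 1, 0)     # same for the remaining axis

      assert np.allclose(x, np.fft.fftn(a))                               # matches the library 3-D FFT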

  19. Effects of Computer Architecture on FFT (Fast Fourier Transform) Algorithm Performance.

    DTIC Science & Technology

    1983-12-01

  20. A finite element conjugate gradient FFT method for scattering

    NASA Technical Reports Server (NTRS)

    Collins, Jeffery D.; Zapp, John; Hsa, Chang-Yu; Volakis, John L.

    1990-01-01

    An extension of a two dimensional formulation is presented for a three dimensional body of revolution. With the introduction of a Fourier expansion of the vector electric and magnetic fields, a coupled two dimensional system is generated and solved via the finite element method. An exact boundary condition is employed to terminate the mesh and the fast Fourier transform (FFT) is used to evaluate the boundary integrals for low O(n) memory demand when an iterative solution algorithm is used. By virtue of the finite element method, the algorithm is applicable to structures of arbitrary material composition. Several improvements to the two dimensional algorithm are also described. These include: (1) modifications for terminating the mesh at circular boundaries without distorting the convolutionality of the boundary integrals; (2) the development of nonproprietary mesh generation routines for two dimensional applications; (3) the development of preprocessors for interfacing SDRC IDEAS with the main algorithm; and (4) the development of post-processing algorithms based on the public domain package GRAFIC to generate two and three dimensional gray level and color field maps.

  1. Low power reconfigurable FP-FFT core with an array of folded DA butterflies

    NASA Astrophysics Data System (ADS)

    Beulet Paul, Augusta Sophy; Raju, Srinivasan; Janakiraman, Raja

    2014-12-01

    A variable length (32 ~ 2,048), low power, floating point fast Fourier transform (FP-FFT) processor is designed and implemented using energy-efficient butterfly elements. The butterfly elements are implemented using distributed arithmetic (DA) algorithm that eliminates the power-consuming complex multipliers. The FFT computations are scheduled in a quasi-parallel mode with an array of 16 butterflies. The nodes of the data flow graph (DFG) of the FFT are folded to these 16 butterflies for any value of N by the control unit. Register minimization is also applied after folding to decrease the number of scratch pad registers to (log 2 N - 1) × 16. The real and imaginary parts of the samples are represented by 32-bit single-precision floating point notation to achieve high precision in the results. Thus, each sample is represented using 64 bits. Twiddle factor ROM size is reduced by 25% using the symmetry of the twiddle factors. Reconfigurability based on the sample size is achieved by the control unit. This distributed floating point arithmetic (DFPA)-based design of FFT processor implemented in 45-nm process occupies an area of 0.973 mm2 and dissipates a power of 68 mW at an operating frequency of 100 MHz. When compared with FFT processor designed in the same technology with multiplier-based butterflies, this design shows 33% less area and 38% less power. The throughput for 2,048-point FFT is 222 KS/s and the energy spent per FFT is 7.4 to 14 nJ for 64 to 2,048 points being one among the most energy-efficient FFT processors.

  2. Use of FFT-Based Measuring Instruments for EMI Compliance Measurements

    NASA Astrophysics Data System (ADS)

    Keller, Matthias; Medler, Jens

    2016-05-01

    The use of FFT-based measuring receivers for EMI compliance measurements is motivated by the desire to reduce the scan time by several orders of magnitude and to gain additional insights by applying longer measurement times or using enhanced methods like scan spectrogram and persistence mode. Usage of an appropriate measurement time is the key to comprehensively record the disturbance characteristic of the equipment under test (EUT). The practical use of FFT-based scan spectrogram and persistence mode is demonstrated.

  3. Group 12, 1987 ASCAN C. Michael Foale sits at the pilots station in JSC's FFT

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Group 12, 1987 Astronaut Candidate (ASCAN) C. Michael Foale sits at the forward flight deck pilots station controls in JSC's Full Fuselage Trainer (FFT). The FFT is used to familiarize the astronauts with the hardware in the cockpit of the Space Shuttle orbiters. It is one of the mockup training devices located in the Mockup and Integration Laboratory (MAIL) Bldg 9NE. Foale is one of 15 ASCANs recently selected by NASA.

  4. [Design of the 2D-FFT image reconstruction software based on Matlab].

    PubMed

    Xu, Hong-yu; Wang, Hong-zhi

    2008-09-01

    This paper presents a Matlab implementation of the 2D-FFT image reconstruction algorithm for magnetic resonance imaging, packaged as a universal COM component that the Windows system can recognize. This makes it possible to separate the 2D-FFT image reconstruction algorithm from the closed commercial magnetic resonance imaging system and to process the raw data before reconstruction, which is important for improving image quality, diagnostic value, and image post-processing.
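
    The reconstruction step itself is compact; a minimal sketch (in Python/NumPy rather than Matlab, with synthetic k-space as an assumption) of 2D-FFT reconstruction from fully sampled Cartesian k-space:

      import numpy as np

      def recon_2dfft(kspace):
          """kspace: 2-D complex array with the DC sample at the array centre."""
          return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

      # Round-trip check on a synthetic "image".
      img = np.zeros((128, 128)); img[40:90, 50:80] = 1.0
      kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))
      assert np.allclose(np.abs(recon_2dfft(kspace)), img, atol=1e-10)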

  5. Heart rate variability in passive tilt test: comparative evaluation of autoregressive and FFT spectral analyses.

    PubMed

    Badilini, F; Maison-Blanche, P; Coumel, P

    1998-05-01

    The dynamic response of the autonomic nervous system during tilting is assessed by changes in the low (LF) and high frequency (HF) components of the RR series power spectral density (PSD). Although results of many studies are consistent, some doubts related to different methodologies remain. Specifically, the respective relevance of autoregressive (AR) and fast Fourier transform (FFT) methods is often questioned. Beat-to-beat RR series were recorded during 90 degrees passive tilt in 18 healthy subjects (29 +/- 5 years, eight females). FFT-based (50% overlap, Hanning window) and AR-based (Levinson-Durbin algorithm) PSDs were calculated on the same RR intervals. Powers in very low frequency (VLF: < 0.04 Hz), LF (0.04-0.15 Hz), and HF (0.15-0.40 Hz) bands were calculated either by spectrum integration (FFT and ARIN), by considering the highest AR component in each band (ARHP), or by summation of all AR components (ARAP). LF and HF raw powers (ms2) were normalized by total power (%P) and by total power after removal of the VLF component (nu). AR and FFT total powers were not different, regardless of body position. In supine condition, when compared to ARHP and ARAP, FFT underestimated VLF and overestimated LF, whereas in tilt position FFT overestimated HF and underestimated LF. However, supine/tilt trends were consistent in all methods showing a clear reduction of HF and a less marked increase of LF. Both normalization procedures provided a significant LF increase and further magnified the HF decrease. Results obtained with ARIN were remarkably close to those obtained with FFT. In conclusion, significant differences between AR and FFT spectral analyses do exist, particularly in supine position. Nevertheless, dynamic trends provided by the two approaches are consistent. Normalization is necessary to evidence the LF increase during tilt.
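
    The FFT-based band-power computation used in such studies can be sketched as follows; the synthetic, uniformly resampled RR series and the resampling rate are assumptions (real beat-to-beat series are unevenly spaced and are usually resampled first).

      import numpy as np
      from scipy.signal import welch

      fs = 4.0                                    # assumed resampling rate of the RR series, Hz
      t = np.arange(0, 300, 1 / fs)
      rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.1 * t) + 0.02 * np.sin(2 * np.pi * 0.25 * t)

      f, psd = welch(rr - rr.mean(), fs=fs, nperseg=256, window="hann")   # 50% overlap by default

      def band_power(f_lo, f_hi):
          m = (f >= f_lo) & (f < f_hi)
          return np.trapz(psd[m], f[m])

      vlf = band_power(0.00, 0.04)
      lf = band_power(0.04, 0.15)
      hf = band_power(0.15, 0.40)
      total = vlf + lf + hf
      print("LF (nu):", lf / (total - vlf), " HF (nu):", hf / (total - vlf))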

  6. 1-FFT amino acids involved in high DP inulin accumulation in Viguiera discolor

    PubMed Central

    De Sadeleer, Emerik; Vergauwen, Rudy; Struyf, Tom; Le Roy, Katrien; Van den Ende, Wim

    2015-01-01

    Fructans are important vacuolar reserve carbohydrates with drought, cold, ROS and general abiotic stress mediating properties. They occur in 15% of all flowering plants and are believed to display health benefits as a prebiotic and dietary fiber. Fructans are synthesized by specific fructosyltransferases and classified based on the linkage type between fructosyl units. Inulins, one of these fructan types with β(2-1) linkages, are elongated by fructan:fructan 1-fructosyltransferases (1-FFT) using a fructosyl unit from a donor inulin to elongate the acceptor inulin molecule. The sequence identity of the 1-FFT of Viguiera discolor (Vd) and Helianthus tuberosus (Ht) is 91% although these enzymes produce distinct fructans. The Vd 1-FFT produces high degree of polymerization (DP) inulins by preferring the elongation of long chain inulins, in contrast to the Ht 1-FFT which prefers small molecules (DP3 or 4) as acceptor. Since higher DP inulins have interesting properties for industrial, food and medical applications, we report here on the influence of two amino acids on the high DP inulin production capacity of the Vd 1-FFT. Introducing the M19F and H308T mutations in the active site of the Vd 1-FFT greatly reduces its capacity to produce high DP inulin molecules. Both amino acids can be considered important to this capacity, although the double mutation had a much higher impact than the single mutations. PMID:26322058

  7. 1-FFT amino acids involved in high DP inulin accumulation in Viguiera discolor.

    PubMed

    De Sadeleer, Emerik; Vergauwen, Rudy; Struyf, Tom; Le Roy, Katrien; Van den Ende, Wim

    2015-01-01

    Fructans are important vacuolar reserve carbohydrates with drought, cold, ROS and general abiotic stress mediating properties. They occur in 15% of all flowering plants and are believed to display health benefits as a prebiotic and dietary fiber. Fructans are synthesized by specific fructosyltransferases and classified based on the linkage type between fructosyl units. Inulins, one of these fructan types with β(2-1) linkages, are elongated by fructan:fructan 1-fructosyltransferases (1-FFT) using a fructosyl unit from a donor inulin to elongate the acceptor inulin molecule. The sequence identity of the 1-FFT of Viguiera discolor (Vd) and Helianthus tuberosus (Ht) is 91% although these enzymes produce distinct fructans. The Vd 1-FFT produces high degree of polymerization (DP) inulins by preferring the elongation of long chain inulins, in contrast to the Ht 1-FFT which prefers small molecules (DP3 or 4) as acceptor. Since higher DP inulins have interesting properties for industrial, food and medical applications, we report here on the influence of two amino acids on the high DP inulin production capacity of the Vd 1-FFT. Introducing the M19F and H308T mutations in the active site of the Vd 1-FFT greatly reduces its capacity to produce high DP inulin molecules. Both amino acids can be considered important to this capacity, although the double mutation had a much higher impact than the single mutations.

  8. Fast coeff_token decoding method and new memory architecture design for an efficient H.264/AVC context-based adaptive variable length coding decoder

    NASA Astrophysics Data System (ADS)

    Moon, Yong Ho; Yoon, Kun Su; Ha, Seok Wun

    2009-12-01

    A fast coeff_token decoding method based on new memory architecture is proposed to implement an efficient context-based adaptive variable length-coding (CAVLC) decoder. The heavy memory access needed in CAVLC decoding is a significant issue in designing a real system, such as digital multimedia broadcasting players, portable media players, and mobile phones with video, because it results in high power consumption and delay in operations. Recently, a new coeff_token variable-length decoding method has been suggested to achieve memory access reduction. However, it still requires a large portion of the total memory access in CAVLC decoding. In this work, an effective memory architecture is designed through careful examination of codewords in variable-length code tables. In addition, a novel fast decoding method is proposed to further reduce the memory accesses required for reconstructing the coeff_token element. Only one memory access is used for reconstructing each coeff_token element in the proposed method.

  9. FFT applications to plane-polar near-field antenna measurements

    NASA Technical Reports Server (NTRS)

    Gatti, Mark S.; Rahmat-Samii, Yahya

    1988-01-01

    The four-point bivariate Lagrange interpolation algorithm was applied to near-field antenna data measured in a plane-polar facility. The results were sufficiently accurate to permit the use of the FFT (fast Fourier transform) algorithm to calculate the far-field patterns of the antenna. Good agreement was obtained between the far-field patterns as calculated by the Jacobi-Bessel and the FFT algorithms. The significant advantage in using the FFT is in the calculation of the principal plane cuts, which may be made very quickly. Also, the application of the FFT algorithm directly to the near-field data was used to perform surface holographic diagnosis of a reflector antenna. The effects due to the focusing of the emergent beam from the reflector, as well as the effects of the information in the wide-angle regions, are shown. The use of the plane-polar near-field antenna test range has therefore been expanded to include these useful FFT applications.
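
    The final FFT step of such plane-wave-spectrum processing is easy to illustrate once the near-field samples sit on a rectangular grid: a 2-D FFT of the aperture samples gives the far-field angular spectrum. The uniformly illuminated circular aperture and half-wavelength sampling below are assumptions for the example.

      import numpy as np

      wavelength, dx, n = 1.0, 0.5, 128               # half-wavelength sampling on an n x n grid
      x = (np.arange(n) - n // 2) * dx
      X, Y = np.meshgrid(x, x)
      aperture = (np.hypot(X, Y) <= 10.0).astype(complex)   # assumed uniform circular aperture

      spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
      u = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * wavelength   # direction cosines sin(theta)
      pattern_db = 20 * np.log10(np.abs(spectrum) / np.abs(spectrum).max() + 1e-12)

      cut = pattern_db[n // 2]                        # a principal-plane cut through the beam peak
      print("beam peak at sin(theta) =", float(u[int(cut.argmax())]))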

  10. Molecular and functional characterization of a cDNA encoding fructan:fructan 6G-fructosyltransferase (6G-FFT)/fructan:fructan 1-fructosyltransferase (1-FFT) from perennial ryegrass (Lolium perenne L.).

    PubMed

    Lasseur, Bertrand; Lothier, Jérémy; Djoumad, Abdelmadjid; De Coninck, Barbara; Smeekens, Sjef; Van Laere, André; Morvan-Bertrand, Annette; Van den Ende, Wim; Prud'homme, Marie-Pascale

    2006-01-01

    Fructans are the main storage compound in Lolium perenne. To account for the prevailing neokestose-based fructan synthesis in this species, a cDNA library of L. perenne was screened by using the onion (Allium cepa) fructan:fructan 6G-fructosyltransferase (6G-FFT) as a probe. A full-length Lp6G-FFT clone was isolated with significant homologies to vacuolar-type fructosyltransferases and invertases. The functionality of the cDNA was tested by heterologous expression in Pichia pastoris. The recombinant protein demonstrated both 6G-FFT and fructan:fructan 1-fructosyltransferase (1-FFT) activities, with a maximum 6G-FFT/1-FFT ratio of two. The activity of 6G-FFT was investigated with respect to developmental stage, tissue distribution, and alterations in carbohydrate status, and compared to sucrose:sucrose 1-fructosyltransferase (1-SST). Lp6G-FFT and Lp1-SST were predominantly expressed in the basal part of elongating leaves and leaf sheaths. Expression of both genes declined along the leaf axis, in parallel with the spatial occurrence of fructan and fructosyltransferase activities. Surprisingly, Lp6G-FFT was highly expressed in photosynthetically active tissues where very low extractable fructosyltransferase activity and fructan amounts were detected, suggesting a post-transcriptional regulation of expression. Lp6G-FFT gene expression increased only in elongating leaves following similar increases of sucrose content in blades, sheaths, and elongating leaf bases. Regulation of Lp6G-FFT gene expression depends on the tissue according to its sink-source status.

  11. Genetic Architecture of Contemporary Adaptation to Biotic Invasions: Quantitative Trait Locus Mapping of Beak Reduction in Soapberry Bugs

    PubMed Central

    Yu, Y.; Andrés, Jose A.

    2013-01-01

    Biological invasions can result in new selection pressures driven by the establishment of new biotic interactions. The response of exotic and native species to selection depends critically on the genetic architecture of ecologically relevant traits. In the Florida peninsula, the soapberry bug (Jadera haematoloma) has colonized the recently introduced Chinese flametree, Koelreuteria elegans, as a host plant. Driven by feeding efficiency, the populations associated with this new host have differentiated into a new bug ecomorph characterized by short beaks more appropriate for feeding on the flattened pods of the Chinese flametree. In this study, we have generated a three-generation pedigree from crossing the long-beaked and short-beaked ecomorphs to construct a de novo linkage map and to locate putative quantitative trait locus (QTL) controlling beak length and body size in J. haematoloma. Using amplified fragment-length polymorphism markers and a two-way pseudo-testcross design, we have produced two parental maps in six linkage groups, covering the known number of chromosomes. QTL analysis revealed one significant QTL for beak length on a maternal linkage group and the corresponding paternal linkage group. Three QTL were found for body size. Through single marker regression analysis, nine single markers that could not be placed on the map were also found to be significantly associated with one or both of the two traits. Interestingly, the most significant body size QTL co-localized with the beak length QTL, suggesting linkage disequilibrium or pleiotropic effects of related traits. Our results suggest an oligogenic control of beak length. PMID:24347624

  12. Genetic architecture of contemporary adaptation to biotic invasions: quantitative trait locus mapping of beak reduction in soapberry bugs.

    PubMed

    Yu, Y; Andrés, Jose A

    2014-02-19

    Biological invasions can result in new selection pressures driven by the establishment of new biotic interactions. The response of exotic and native species to selection depends critically on the genetic architecture of ecologically relevant traits. In the Florida peninsula, the soapberry bug (Jadera haematoloma) has colonized the recently introduced Chinese flametree, Koelreuteria elegans, as a host plant. Driven by feeding efficiency, the populations associated with this new host have differentiated into a new bug ecomorph characterized by short beaks more appropriate for feeding on the flattened pods of the Chinese flametree. In this study, we have generated a three-generation pedigree from crossing the long-beaked and short-beaked ecomorphs to construct a de novo linkage map and to locate putative quantitative trait locus (QTL) controlling beak length and body size in J. haematoloma. Using amplified fragment-length polymorphism markers and a two-way pseudo-testcross design, we have produced two parental maps in six linkage groups, covering the known number of chromosomes. QTL analysis revealed one significant QTL for beak length on a maternal linkage group and the corresponding paternal linkage group. Three QTL were found for body size. Through single marker regression analysis, nine single markers that could not be placed on the map were also found to be significantly associated with one or both of the two traits. Interestingly, the most significant body size QTL co-localized with the beak length QTL, suggesting linkage disequilibrium or pleiotropic effects of related traits. Our results suggest an oligogenic control of beak length.

  13. Schema-based learning of adaptable and flexible prey-catching in anurans I. The basic architecture.

    PubMed

    Corbacho, Fernando; Nishikawa, Kiisa C; Weerasuriya, Ananda; Liaw, Jim-Shih; Arbib, Michael A

    2005-12-01

    A motor action often involves the coordination of several motor synergies and requires flexible adjustment of the ongoing execution based on feedback signals. To elucidate the neural mechanisms underlying the construction and selection of motor synergies, we study prey-capture in anurans. Experimental data demonstrate the intricate interaction between different motor synergies, including the interplay of their afferent feedback signals (Weerasuriya 1991; Anderson and Nishikawa 1996). Such data provide insights into the general issues concerning two-way information flow between sensory centers, motor circuits and periphery in motor coordination. We show how different afferent feedback signals about the status of the different components of the motor apparatus play a critical role in motor control as well as in learning. This paper, along with its companion paper, extends the model by Liaw et al. (1994) by integrating a number of different motor pattern generators (MPGs), different types of afferent feedback, as well as the corresponding control structure within an adaptive framework we call Schema-Based Learning. We develop a model of the different MPGs involved in prey-catching as a vehicle to investigate the following questions: What are the characteristic features of the activity of a single muscle? How can these features be controlled by the premotor circuit? What are the strategies employed to generate and synchronize motor synergies? What is the role of afferent feedback in shaping the activity of an MPG? How can several MPGs share the same underlying circuitry and yet give rise to different motor patterns under different input conditions? In the companion paper we also extend the model by incorporating learning components that give rise to more flexible, adaptable and robust behaviors. To show these aspects, we incorporate studies of lesion experiments and of the learning processes that allow the animal to recover its proper functioning.

  14. Genetic architecture of local adaptation in lunar and diurnal emergence times of the marine midge Clunio marinus (Chironomidae, Diptera).

    PubMed

    Kaiser, Tobias S; Heckel, David G

    2012-01-01

    Circadian rhythms pre-adapt the physiology of most organisms to predictable daily changes in the environment. Some marine organisms also show endogenous circalunar rhythms. The genetic basis of the circalunar clock and its interaction with the circadian clock is unknown. Both clocks can be studied in the marine midge Clunio marinus (Chironomidae, Diptera), as different populations have different local adaptations in their lunar and diurnal rhythms of adult emergence, which can be analyzed by crossing experiments. We investigated the genetic basis of population variation in clock properties by constructing the first genetic linkage map for this species, and performing quantitative trait locus (QTL) analysis on variation in both lunar and diurnal timing. The genome has a genetic length of 167-193 centimorgans based on a linkage map using 344 markers, and a physical size of 95-140 megabases estimated by flow cytometry. Mapping the sex determining locus shows that females are the heterogametic sex, unlike most other Chironomidae. We identified two QTL each for lunar emergence time and diurnal emergence time. The distribution of QTL confirms a previously hypothesized genetic basis to a correlation of lunar and diurnal emergence times in natural populations. Mapping of clock genes and light receptors identified ciliary opsin 2 (cOps2) as a candidate to be involved in both lunar and diurnal timing; cryptochrome 1 (cry1) as a candidate gene for lunar timing; and two timeless (tim2, tim3) genes as candidate genes for diurnal timing. This QTL analysis of lunar rhythmicity, the first in any species, provides a unique entree into the molecular analysis of the lunar clock.

  15. Genetic Architecture of Local Adaptation in Lunar and Diurnal Emergence Times of the Marine Midge Clunio marinus (Chironomidae, Diptera)

    PubMed Central

    Kaiser, Tobias S.; Heckel, David G.

    2012-01-01

    Circadian rhythms pre-adapt the physiology of most organisms to predictable daily changes in the environment. Some marine organisms also show endogenous circalunar rhythms. The genetic basis of the circalunar clock and its interaction with the circadian clock is unknown. Both clocks can be studied in the marine midge Clunio marinus (Chironomidae, Diptera), as different populations have different local adaptations in their lunar and diurnal rhythms of adult emergence, which can be analyzed by crossing experiments. We investigated the genetic basis of population variation in clock properties by constructing the first genetic linkage map for this species, and performing quantitative trait locus (QTL) analysis on variation in both lunar and diurnal timing. The genome has a genetic length of 167–193 centimorgans based on a linkage map using 344 markers, and a physical size of 95–140 megabases estimated by flow cytometry. Mapping the sex determining locus shows that females are the heterogametic sex, unlike most other Chironomidae. We identified two QTL each for lunar emergence time and diurnal emergence time. The distribution of QTL confirms a previously hypothesized genetic basis to a correlation of lunar and diurnal emergence times in natural populations. Mapping of clock genes and light receptors identified ciliary opsin 2 (cOps2) as a candidate to be involved in both lunar and diurnal timing; cryptochrome 1 (cry1) as a candidate gene for lunar timing; and two timeless (tim2, tim3) genes as candidate genes for diurnal timing. This QTL analysis of lunar rhythmicity, the first in any species, provides a unique entree into the molecular analysis of the lunar clock. PMID:22384150

  16. Improving the direct-methods sign-unconstrained S-FFT algorithm. XV.

    PubMed

    Rius, Jordi; Frontera, Carles

    2009-11-01

    In order to extend the application field of the direct-methods S-FFT phase-refinement algorithm to density functions with positive and negative peaks, the equal-sign constraint was removed from its definition by combining rho(2) with an appropriate density function mask [Rius & Frontera (2008). Acta Cryst. A64, 670-674]. This generalized algorithm (S(2)-FFT) was shown to be highly effective for crystal structures with at least one moderate scatterer in the unit cell but less effective when applied to structures with only light scatterers. To increase the success rate in this second case, the mask has been improved and the convergence rate of S(2)-FFT has been investigated. Finally, a closely related but simpler phase-refinement function (S(m)) combining rho (instead of rho(2)) with a new mask is introduced. For simple cases at least this can also treat density peaks in the absence of the equal-sign constraint.

  17. Pipelined digital SAR azimuth correlator using hybrid FFT-transversal filter

    NASA Technical Reports Server (NTRS)

    Wu, C.; Liu, K. Y. (Inventor)

    1984-01-01

    A synthetic aperture radar system (SAR) having a range correlator is provided with a hybrid azimuth correlator which utilizes a block-pipelined fast Fourier transform (FFT). The correlator has a predetermined FFT transform size with delay elements for delaying SAR range correlated data so as to embed in the Fourier transform operation a corner-turning function as the range correlated SAR data is converted from the time domain to a frequency domain. The azimuth correlator is comprised of a transversal filter to receive the SAR data in the frequency domain, a generator for range migration compensation and azimuth reference functions, and an azimuth reference multiplier for correlation of the SAR data. Following the transversal filter is a block-pipelined inverse FFT used to restore azimuth correlated data in the frequency domain to the time domain for imaging.

  18. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/mT for trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
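
    For orientation, a minimal NumPy sketch of the conventional single-FFT trapezoidal inversion that the paper refines is shown below; the improved method instead uses the finer step Delta(omega) = pi/(mT) and combines (m/2) + 1 complex FFTs or m real FHTs. The contour shift sigma, the window length T, and the example transform are assumptions for the sketch.

```python
import numpy as np

def ilt_fft(F, T, N, sigma=1.0):
    """Conventional FFT-based numerical inverse Laplace transform on [0, T):
    trapezoidal rule on the Bromwich integral with step d_omega = 2*pi/T,
    evaluated at the N sample times t_j = j*T/N with one complex FFT.
    Accuracy degrades toward t = T, which is what the refined step
    d_omega = pi/(m*T) in the paper is designed to control."""
    d_omega = 2.0 * np.pi / T
    k = np.arange(N)
    Fk = np.array([F(sigma + 1j * kk * d_omega) for kk in k], dtype=complex)
    Fk[0] *= 0.5                                 # trapezoidal half-weight at k = 0
    t = k * T / N
    # sum_k Fk * exp(+i*2*pi*j*k/N) = N * ifft(Fk)[j]
    series = np.real(N * np.fft.ifft(Fk))
    return t, (2.0 / T) * np.exp(sigma * t) * series

# Example: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
t, f = ilt_fft(lambda s: 1.0 / (s + 1.0), T=20.0, N=512, sigma=1.0)
```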

  19. An analysis of the double-precision floating-point FFT on FPGAs.

    SciTech Connect

    Hemmert, K. Scott; Underwood, Keith Douglas

    2005-01-01

    Advances in FPGA technology have led to dramatic improvements in double precision floating-point performance. Modern FPGAs boast several GigaFLOPs of raw computing power. Unfortunately, this computing power is distributed across 30 floating-point units with over 10 cycles of latency each. The user must find two orders of magnitude more parallelism than is typically exploited in a single microprocessor; thus, it is not clear that the computational power of FPGAs can be exploited across a wide range of algorithms. This paper explores three implementation alternatives for the fast Fourier transform (FFT) on FPGAs. The algorithms are compared in terms of sustained performance and memory requirements for various FFT sizes and FPGA sizes. The results indicate that FPGAs are competitive with microprocessors in terms of performance and that the 'correct' FFT implementation varies based on the size of the transform and the size of the FPGA.

  20. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
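
    The core idea of replacing a window-then-FFT with a polyphase filter bank can be sketched in a few lines of NumPy; this is a critically sampled, floating-point illustration, not the 8/16/24-bit fixed-point hardware design described above. The prototype-filter design and tap count are assumptions.

```python
import numpy as np
from scipy.signal import firwin

def polyphase_fft_channelizer(x, n_channels=32, taps_per_branch=8):
    """Critically sampled polyphase-FFT spectrum analyzer: a prototype
    low-pass FIR is split into n_channels polyphase branches; each output
    frame weights the last taps_per_branch input frames by the branch
    filters, sums along the tap axis, and applies an FFT. This replaces the
    single multiplicative window of a conventional windowed FFT."""
    h = firwin(n_channels * taps_per_branch, 1.0 / n_channels)
    h = h.reshape(taps_per_branch, n_channels)
    n_frames = len(x) // n_channels
    frames = np.asarray(x[:n_frames * n_channels]).reshape(n_frames, n_channels)
    out = []
    for t in range(taps_per_branch - 1, n_frames):
        block = frames[t - taps_per_branch + 1:t + 1][::-1]   # newest frame first
        out.append(np.fft.fft((block * h).sum(axis=0)))
    return np.array(out)                                      # frames x channels
```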

  1. Mimicking the End Organ Architecture of Slowly Adapting Type I Afferents May Increase the Durability of Artificial Touch Sensors

    PubMed Central

    Lesniak, Daine R.; Gerling, Gregory J.

    2015-01-01

    In effort to mimic the sensitivity and efficient information transfer of natural tactile afferents, recent work has combined force transducers and computational models of mechanosensitive afferents. Sensor durability, another feature important to sensor design, might similarly capitalize upon biological rules. In particular, gains in sensor durability might leverage insight from the compound end organ of the slowly adapting type I afferent, especially its multiple sites of spike initiation that reset each other. This work develops models of compound spiking sensors using a computational network of transduction functions and leaky integrate and fire models (together a spike encoder, the software element of a compound spiking sensor), informed by the output of an existing force transducer (hardware sensing elements of a compound spiking sensor). Individual force transducer failures are simulated with and without resetting between spike encoders to test the importance of both resetting and configuration on system durability. The results indicate that the resetting of adjacent spike encoders, upon the firing of a spike by any one, is an essential mechanism to maintain a stable overall response in the midst of transducer failure. Furthermore, results suggest that when resetting is enabled, the durability of a compound sensor is maximized when individual transducers are paired with spike encoders and multiple, paired units are employed. To explore these ideas more fully, use cases examine the design of a compound sensor to either reach a target lifetime with a set probability or determine how often to schedule maintenance to control the probability of failure. PMID:25705703
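
    A toy version of the resetting behaviour can be written down directly; the following sketch pairs each simulated transducer with a leaky integrate-and-fire encoder and, when any encoder fires, resets all of them. The time constant, threshold, and uniform reset rule are illustrative assumptions rather than the authors' fitted model.

```python
import numpy as np

def compound_spiking_sensor(drive, dt=1e-3, tau=0.02, threshold=1.0, mutual_reset=True):
    """drive: array of shape (n_units, n_steps), one row per transducer.
    Each row feeds its own leaky integrate-and-fire encoder; with
    mutual_reset=True a spike from any encoder resets every membrane,
    mimicking the mutually resetting spike-initiation sites of SAI
    afferents, so the compound spike train stays stable if one
    transducer fails (its row can simply be zeroed)."""
    n_units, n_steps = drive.shape
    v = np.zeros(n_units)
    spike_times = []
    for i in range(n_steps):
        v += dt * (drive[:, i] - v / tau)       # leaky integration
        fired = v >= threshold
        if fired.any():
            spike_times.append(i * dt)
            if mutual_reset:
                v[:] = 0.0
            else:
                v[fired] = 0.0
    return np.array(spike_times)
```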

  2. Multiscale Study of the Nonlinear Behavior of Heterogeneous Clayey Rocks Based on the FFT Method

    NASA Astrophysics Data System (ADS)

    Jiang, Tao; Xu, Weiya; Shao, Jianfu

    2015-03-01

    A multiscale model based on the fast Fourier transform (FFT) is applied to study the nonlinear mechanical behavior of Callovo-Oxfordian (COx) argillite, a typical heterogeneous clayey rock. COx argillite is modeled as a three-phase composite with a clay matrix and two types of mineral inclusions. The macroscopic mechanical behavior of argillite samples with different mineralogical compositions is satisfactorily predicted by unified local constitutive models and material parameters. Moreover, the numerical implementation of the FFT-based nonlinear homogenization is easier than direct homogenization, such as the FEM-based homogenization, because it automatically satisfies the periodic boundary condition.

  3. Objective Morphological Quantification of Microscopic Images Using a Fast Fourier Transform (FFT) Analysis

    PubMed Central

    Taylor, Samuel E.; Cao, Tuoxin; Talauliker, Pooja M.; Lifshitz, Jonathan

    2016-01-01

    Quantification of immunohistochemistry (IHC) and immunofluorescence (IF) using image intensity depends on a number of variables. These variables add a subjective complexity in keeping a standard within and between laboratories. Fast Fourier Transformation (FFT) algorithms, however, allow for a rapid and objective quantification (via statistical analysis) using cell morphologies when the microscopic structures are oriented or aligned. Quantification of alignment is given in terms of a ratio of FFT intensity to the intensity of an orthogonal angle, giving a numerical value of the alignment of the microscopic structures. This allows for a more objective analysis than alternative approaches, which rely upon relative intensities. PMID:27134700

  4. Objective Morphological Quantification of Microscopic Images Using a Fast Fourier Transform (FFT) Analysis.

    PubMed

    Taylor, Samuel E; Cao, Tuoxin; Talauliker, Pooja M; Lifshitz, Jonathan

    Quantification of immunohistochemistry (IHC) and immunofluorescence (IF) using image intensity depends on a number of variables. These variables add a subjective complexity in keeping a standard within and between laboratories. Fast Fourier Transformation (FFT) algorithms, however, allow for a rapid and objective quantification (via statistical analysis) using cell morphologies when the microscopic structures are oriented or aligned. Quantification of alignment is given in terms of a ratio of FFT intensity to the intensity of an orthogonal angle, giving a numerical value of the alignment of the microscopic structures. This allows for a more objective analysis than alternative approaches, which rely upon relative intensities.
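
    A minimal version of the alignment metric described above can be computed with NumPy: take the 2-D FFT magnitude of the image and compare the energy in a narrow angular wedge with the energy in the orthogonal wedge. The wedge half-width and the handling of the frequency-domain rotation (elongated image structures concentrate spectral energy perpendicular to their long axis) are assumptions of this sketch, not parameters taken from the paper.

```python
import numpy as np

def fft_alignment_ratio(image, angle_deg, half_width_deg=5.0):
    """Ratio of 2-D FFT magnitude summed over a wedge centred on angle_deg to
    the magnitude summed over the orthogonal wedge. Values far from 1 indicate
    oriented structure; which axis counts as 'aligned' depends on whether the
    caller passes the image-space or frequency-space orientation."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    ang = np.degrees(np.arctan2(y - ny // 2, x - nx // 2)) % 180.0

    def wedge_sum(a):
        d = np.abs(ang - (a % 180.0))
        d = np.minimum(d, 180.0 - d)             # angular distance on a half-circle
        return F[d <= half_width_deg].sum()

    return wedge_sum(angle_deg) / wedge_sum(angle_deg + 90.0)
```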

  5. Simple all-optical FFT scheme enabling Tbit/s real-time signal processing.

    PubMed

    Hillerkuss, D; Winter, M; Teschke, M; Marculescu, A; Li, J; Sigurdsson, G; Worms, K; Ben Ezra, S; Narkiss, N; Freude, W; Leuthold, J

    2010-04-26

    A practical scheme to perform the fast Fourier transform in the optical domain is introduced. Optical real-time FFT signal processing is performed at speeds far beyond the limits of electronic digital processing, and with negligible energy consumption. To illustrate the power of the method we demonstrate an optical 400 Gbit/s OFDM receiver. It performs an optical real-time FFT on the consolidated OFDM data stream, thereby demultiplexing the signal into lower bit rate subcarrier tributaries, which can then be processed electronically.

  6. Architecture of the Flagellar Switch Complex of Escherichia coli: Conformational Plasticity of FliG and Implications for Adaptive Remodeling.

    PubMed

    Kim, Eun A; Panushka, Joseph; Meyer, Trevor; Carlisle, Ryan; Baker, Samantha; Ide, Nicholas; Lynch, Michael; Crane, Brian R; Blair, David F

    2017-03-01

    Structural models of the complex that regulates the direction of flagellar rotation assume either ~34 or ~25 copies of the protein FliG. Support for ~34 came from cross-linking experiments identifying an inter-subunit contact most consistent with that number; support for ~25 came from the observation that flagella can assemble and rotate when FliG is genetically fused to FliF, for which the accepted number is ~25. Here, we have undertaken cross-linking and other experiments to address more fully the question of FliG number. The results indicate a copy number of ~25 for FliG. An interaction between the C-terminal and middle domains, which has been taken to support a model with ~34 copies, is also supported. To reconcile the interaction with a FliG number of ~25, we hypothesize conformational plasticity in an inter-domain segment of FliG that allows some subunits to bridge gaps created by the number mismatch. This proposal is supported by mutant phenotypes and other results indicating that the normally helical segment adopts a more extended conformation in some subunits. The FliG amino-terminal domain is organized in a regular array with dimensions matching a ring in the upper part of the complex. The model predicts that FliG copy number should be tied to that of FliF, whereas FliM copy number can increase or decrease according to the number of FliG subunits that adopt the extended conformation. This has implications for the phenomenon of adaptive switch remodeling, in which the FliM copy number varies to adjust the bias of the switch.

  7. Variation in photosynthetic performance and hydraulic architecture across European beech (Fagus sylvatica L.) populations supports the case for local adaptation to water stress.

    PubMed

    Aranda, Ismael; Cano, Francisco Javier; Gascó, Antonio; Cochard, Hervé; Nardini, Andrea; Mancha, Jose Antonio; López, Rosana; Sánchez-Gómez, David

    2015-01-01

    The aim of this study was to provide new insights into how intraspecific variability in the response of key functional traits to drought dictates the interplay between gas-exchange parameters and the hydraulic architecture of European beech (Fagus sylvatica L.). Considering the relationships between hydraulic and leaf functional traits, we tested whether local adaptation to water stress occurs in this species. To address these objectives, we conducted a glasshouse experiment in which 2-year-old saplings from six beech populations were subjected to different watering treatments. These populations encompassed central and marginal areas of the range, with variation in macro- and microclimatic water availability. The results highlight subtle but significant differences among populations in their functional response to drought. Interpopulation differences in hydraulic traits suggest that vulnerability to cavitation is higher in populations with higher sensitivity to drought. However, there was no clear relationship between variables related to hydraulic efficiency, such as xylem-specific hydraulic conductivity or stomatal conductance, and those that reflect resistance to xylem cavitation (i.e., Ψ(12), the water potential corresponding to a 12% loss of stem hydraulic conductivity). The results suggest that while a trade-off between photosynthetic capacity at the leaf level and hydraulic function of xylem could be established across populations, it functions independently of the compromise between safety and efficiency of the hydraulic system with regard to water use at the interpopulation level.

  8. The Fun30 chromatin remodeler Fft3 controls nuclear organization and chromatin structure of insulators and subtelomeres in fission yeast.

    PubMed

    Steglich, Babett; Strålfors, Annelie; Khorosjutina, Olga; Persson, Jenna; Smialowska, Agata; Javerzat, Jean-Paul; Ekwall, Karl

    2015-03-01

    In eukaryotic cells, local chromatin structure and chromatin organization in the nucleus both influence transcriptional regulation. At the local level, the Fun30 chromatin remodeler Fft3 is essential for maintaining proper chromatin structure at centromeres and subtelomeres in fission yeast. Using genome-wide mapping and live cell imaging, we show that this role is linked to controlling nuclear organization of its targets. In fft3∆ cells, subtelomeres lose their association with the LEM domain protein Man1 at the nuclear periphery and move to the interior of the nucleus. Furthermore, genes in these domains are upregulated and active chromatin marks increase. Fft3 is also enriched at retrotransposon-derived long terminal repeat (LTR) elements and at tRNA genes. In cells lacking Fft3, these sites lose their peripheral positioning and show reduced nucleosome occupancy. We propose that Fft3 has a global role in mediating association between specific chromatin domains and the nuclear envelope.

  9. A FFT-based formulation for efficient mechanical fields computation in isotropic and anisotropic periodic discrete dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Bertin, N.; Upadhyay, M. V.; Pradalier, C.; Capolungo, L.

    2015-09-01

    In this paper, we propose a novel full-field approach based on the fast Fourier transform (FFT) technique to compute mechanical fields in periodic discrete dislocation dynamics (DDD) simulations for anisotropic materials: the DDD-FFT approach. By coupling the FFT-based approach to the discrete continuous model, the present approach benefits from the high computational efficiency of the FFT algorithm, while allowing for a discrete representation of dislocation lines. It is demonstrated that the computational time of the new DDD-FFT approach is significantly lower than that of current DDD approaches when a large number of dislocation segments is involved, for both isotropic and anisotropic elasticity. Furthermore, for fine Fourier grids, the treatment of anisotropic elasticity comes at a similar computational cost to that of isotropic simulation. Thus, the proposed approach paves the way towards achieving scale transition from DDD to mesoscale plasticity, especially due to the method’s ability to incorporate inhomogeneous elasticity.

  10. [The comparison of the extraction of beta wave from EEG between FFT and wavelet transform].

    PubMed

    Wang, Haowen; Qian, Zhiyu; Li, Hongjing; Chen, Chunxiao; Ding, Shangwen

    2013-08-01

    In order to choose a fast and efficient real-time method for beta-wave information extraction, we compared the results and the efficiency of information separation by the fast Fourier transform (FFT) and the wavelet transform on the EEG beta band. Our work provides a basis for handling EEG data from real-time health assessment of 3DTV viewing. We recorded the EEGs of 5 healthy volunteers before, during and after watching 3DTV. The trends of the relative energy and the time cost of the two methods were compared by using both the FFT and the wavelet packet transform (WPT) to extract the features of the EEG beta wave. The comparison demonstrated that (1) the results of the two methods were consistent in their trends during 3DTV viewing; (2) the differences found by the two methods were consistent before and after watching 3DTV; and (3) the FFT took less time than the wavelet transform in the same case. It is concluded that the results of the FFT and the wavelet transform are consistent for EEG feature extraction, and that a fast method for handling the large quantities of EEG data obtained in such experiments can be offered in the future.
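
    As a reference point for the FFT side of such comparisons, the relative beta-band energy can be computed in a few lines; the 13-30 Hz band limits, the Hann window, and the 0.5 Hz low-frequency cutoff are assumptions of this sketch, since the record does not restate them.

```python
import numpy as np

def relative_beta_power(eeg, fs, band=(13.0, 30.0)):
    """Fraction of FFT power-spectrum energy in the beta band relative to the
    total energy above 0.5 Hz, for one EEG channel sampled at fs Hz."""
    windowed = np.asarray(eeg, dtype=float) * np.hanning(len(eeg))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    beta = power[(freqs >= band[0]) & (freqs <= band[1])].sum()
    total = power[freqs >= 0.5].sum()            # ignore DC and slow drift
    return beta / total
```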

  11. PIPER: an FFT-based protein docking program with pairwise potentials.

    PubMed

    Kozakov, Dima; Brenke, Ryan; Comeau, Stephen R; Vajda, Sandor

    2006-11-01

    The Fast Fourier Transform (FFT) correlation approach to protein-protein docking can evaluate the energies of billions of docked conformations on a grid if the energy is described in the form of a correlation function. Here, this restriction is removed, and the approach is efficiently used with pairwise interaction potentials that substantially improve the docking results. The basic idea is approximating the interaction matrix by its eigenvectors corresponding to the few dominant eigenvalues, resulting in an energy expression written as the sum of a few correlation functions, and solving the problem by repeated FFT calculations. In addition to describing how the method is implemented, we present a novel class of structure-based pairwise intermolecular potentials. The DARS (Decoys As the Reference State) potentials are extracted from structures of protein-protein complexes and use large sets of docked conformations as decoys to derive atom pair distributions in the reference state. The current version of the DARS potential works well for enzyme-inhibitor complexes. With the new FFT-based program, DARS provides much better docking results than the earlier approaches, in many cases generating 50% more near-native docked conformations. Although the potential is far from optimal for antibody-antigen pairs, the results are still slightly better than those given by an earlier FFT method. The docking program PIPER is freely available for noncommercial applications.
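
    The FFT correlation trick at the heart of such docking codes is compact enough to sketch: one correlation term scores every rigid translation of a ligand grid against a receptor grid with three FFTs instead of an explicit sliding sum. PIPER evaluates a small sum of such terms, one per retained eigenvector of the pairwise potential; the sketch below shows a single term and says nothing about rotational sampling or the DARS potential itself.

```python
import numpy as np

def fft_translation_scores(receptor_grid, ligand_grid):
    """Correlation theorem: scores[d] = sum over grid points r of
    receptor(r) * ligand(r - d), evaluated for every shift d at once.
    Both grids must share the same (padded) shape; PIPER-style scoring sums
    a few such correlations, one per eigen-component of the potential."""
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid)
    return np.real(np.fft.ifftn(R * np.conj(L)))
```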

  12. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

    Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
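
    A stripped-down version of the idea can be expressed with an additive (fast) IHS substitution and complementary FFT masks: low-pass the multispectral intensity, high-pass the panchromatic band, and inject the recombined intensity back into the multispectral bands. The circular cutoff, the hard masks, and the additive IHS shortcut are simplifying assumptions of this sketch, not the filter design used in the paper.

```python
import numpy as np

def _lowpass_mask(shape, cutoff):
    """Ideal circular low-pass mask in normalized spatial frequency."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return (np.hypot(fx, fy) <= cutoff).astype(float)

def fft_ihs_fusion(ms_rgb, pan, cutoff=0.15):
    """ms_rgb: multispectral image upsampled to the pan grid, shape (H, W, 3);
    pan: panchromatic band, shape (H, W). Returns a pan-sharpened image by
    replacing the intensity with low-pass(intensity) + high-pass(pan)."""
    intensity = ms_rgb.mean(axis=2)
    mask = _lowpass_mask(intensity.shape, cutoff)
    low = np.real(np.fft.ifft2(np.fft.fft2(intensity) * mask))
    high = np.real(np.fft.ifft2(np.fft.fft2(pan) * (1.0 - mask)))
    return ms_rgb + (low + high - intensity)[:, :, None]
```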

  13. Can architecture be barbaric?

    PubMed

    Hürol, Yonca

    2009-06-01

    The title of this article is adapted from Theodor W. Adorno's famous dictum: 'To write poetry after Auschwitz is barbaric.' After the catastrophic earthquake in Kocaeli, Turkey on the 17th of August 1999, in which more than 40,000 people died or were lost, Necdet Teymur, who was then the dean of the Faculty of Architecture of the Middle East Technical University, referred to Adorno in one of his 'earthquake poems' and asked: 'Is architecture possible after 17th of August?' The main objective of this article is to interpret Teymur's question in respect of its connection to Adorno's philosophy with a view to make a contribution to the politics and ethics of architecture in Turkey. Teymur's question helps in providing a new interpretation of a critical approach to architecture and architectural technology through Adorno's philosophy. The paper also presents a discussion of Adorno's dictum, which serves for a better understanding of its universality/particularity.

  14. Telescope Adaptive Optics Code

    SciTech Connect

    Phillion, D.

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical one. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
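
    The FFT phase-screen step mentioned above follows a standard recipe that can be sketched briefly: shape complex white noise by the square root of the Kolmogorov phase power spectrum and inverse-FFT. The normalization below follows one common discretization and, as the record notes, the lowest spatial frequencies are under-represented and would in practice be patched with Karhunen-Loeve (or subharmonic) terms; r0, the grid spacing, and the scaling convention are assumptions to verify against the theoretical structure function.

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=None):
    """Single-layer Kolmogorov phase screen [rad] on an n x n grid with spacing
    dx [m], for Fried parameter r0 [m]. Complex Gaussian noise is shaped by the
    square root of the phase PSD 0.023 * r0**(-5/3) * f**(-11/3) and
    inverse-FFT'd; the real part is returned (low-order modes under-sampled)."""
    rng = np.random.default_rng(seed)
    f1d = np.fft.fftfreq(n, d=dx)
    f = np.hypot(f1d[:, None], f1d[None, :])
    f[0, 0] = np.inf                              # no power in the undefined DC bin
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    df = 1.0 / (n * dx)
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * (n ** 2) * df
    return np.real(screen)
```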

  15. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard, realtime aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  16. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.

  17. Calculation of Cancellous Bone Elastic Properties with the Polarization-based FFT Iterative Scheme.

    PubMed

    Colabella, Lucas; Ibarra Pino, Ariel Alejandro; Ballarre, Josefina; Kowalczyk, Piotr; Cisilino, Adrián Pablo

    2017-03-07

    The FFT-based method, originally introduced by Moulinec and Suquet in 1994, has gained popularity for computing homogenized properties of composites. In this work, the method is used for the computational homogenization of the elastic properties of cancellous bone. To the authors' knowledge, this is the first study where the FFT scheme is applied to bone mechanics. The performance of the method is analyzed for artificial and natural bone samples of two species: bovine femoral heads and implanted femurs of Hokkaido rats. Model geometries are constructed using data from X-ray tomographies, and the bone tissue elastic properties are measured using micro- and nanoindentation tests. Computed results are in excellent agreement with those available in the literature. The study shows the suitability of the method to accurately estimate the fully anisotropic elastic response of cancellous bone. Guidelines are provided for the construction of the models and the setting of the algorithm.
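
    To convey the structure of the basic scheme without reproducing the full anisotropic elastic implementation used for bone, the following scalar-conductivity analogue iterates the periodic Lippmann-Schwinger equation on a pixel grid; the elastic version replaces the scalar Green operator with its fourth-order counterpart. The reference-medium choice, tolerance, and the conductivity setting itself are simplifying assumptions.

```python
import numpy as np

def fft_homogenize_conductivity(k_map, E=(1.0, 0.0), k0=None, n_iter=200, tol=1e-8):
    """Scalar analogue of the Moulinec-Suquet basic FFT scheme: iterate on a
    periodic pixel grid to find the local gradient field under a prescribed
    mean gradient E, then return the homogenized flux <k * grad>."""
    ny, nx = k_map.shape
    if k0 is None:
        k0 = 0.5 * (k_map.min() + k_map.max())      # reference medium
    xi_y = 2 * np.pi * np.fft.fftfreq(ny)
    xi_x = 2 * np.pi * np.fft.fftfreq(nx)
    XI = np.stack(np.meshgrid(xi_y, xi_x, indexing="ij"))   # (2, ny, nx)
    xi2 = (XI ** 2).sum(axis=0)
    xi2[0, 0] = 1.0                                  # avoid dividing by zero at the mean
    e = np.zeros((2, ny, nx))
    e[0], e[1] = E[0], E[1]                          # initial guess: uniform gradient
    for _ in range(n_iter):
        q = k_map * e                                # local flux
        q_hat = np.fft.fft2(q, axes=(1, 2))
        proj = (XI * q_hat).sum(axis=0) / (k0 * xi2)  # Green-operator projection
        e_hat = np.fft.fft2(e, axes=(1, 2)) - XI * proj
        e_hat[:, 0, 0] = np.array(E) * nx * ny       # enforce the prescribed mean gradient
        e_new = np.real(np.fft.ifft2(e_hat, axes=(1, 2)))
        if np.max(np.abs(e_new - e)) < tol:
            e = e_new
            break
        e = e_new
    return (k_map * e).mean(axis=(1, 2))             # homogenized flux for mean gradient E
```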

  18. Perspectives in magnetic resonance: NMR in the post-FFT era.

    PubMed

    Hyberts, Sven G; Arthanari, Haribabu; Robson, Scott A; Wagner, Gerhard

    2014-04-01

    Multi-dimensional NMR spectra have traditionally been processed with the fast Fourier transformation (FFT). The availability of high field instruments, the complexity of spectra of large proteins, the narrow signal dispersion of some unstructured proteins, and the time needed to record the necessary increments in the indirect dimensions to exploit the resolution of the highfield instruments make this traditional approach unsatisfactory. New procedures need to be developed beyond uniform sampling of the indirect dimensions and reconstruction methods other than the straight FFT are necessary. Here we discuss approaches of non-uniform sampling (NUS) and suitable reconstruction methods. We expect that such methods will become standard for multi-dimensional NMR data acquisition with complex biological macromolecules and will dramatically enhance the power of modern biological NMR.

  19. Application of FFT-analyzed umbilical artery doppler signals to fuzzy algorithm.

    PubMed

    Hardalaç, Fırat; Biri, Aydan; Sucak, Ayhan

    2004-12-01

    Doppler signals, recorded from the umbilical artery of 60 pregnant women, were transferred to a personal computer via a 16-bit sound card. The fast Fourier transform (FFT) method was applied to the recorded signal from each patient. Because the FFT method inherently cannot offer good spectral resolution for highly turbulent blood flows, it sometimes leads to wrong interpretation of Doppler signals. In order to avoid this problem, umbilical artery Doppler blood flow velocity parameters were introduced to a fuzzy algorithm. It is observed that the fuzzy algorithm gives correct results for the interpretation of umbilical artery blood flow velocity parameters. Forty-five blood flow velocity parameter sets from the 60 pregnant women were used in the fuzzy system as testing data and 15 parameter sets as training data. The overall success ratios for the training data and the testing data were 95.55 and 93.35%, respectively.

  20. Numerical evaluation of the Rayleigh integral for planar radiators using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.; Maynard, J. D.

    1982-01-01

    Rayleigh's integral formula is evaluated numerically for planar radiators of any shape, with any specified velocity in the source plane using the fast Fourier transform algorithm. The major advantage of this technique is its speed of computation - over 400 times faster than a straightforward two-dimensional numerical integration. The technique is developed for computation of the radiated pressure in the nearfield of the source and can be easily extended to provide, with little computation time, the vector intensity in the nearfield. Computations with the FFT of the nearfield pressure of baffled rectangular plates with clamped and free boundaries are compared with the 'exact' solution to illuminate any errors. The bias errors, introduced by the FFT, are investigated and a technique is developed to significantly reduce them.
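
    The angular-spectrum form of the computation can be sketched compactly: FFT the source-plane normal velocity, multiply by the velocity-to-pressure propagator rho*c*k/kz * exp(i*kz*z), and inverse-FFT. Zero padding of the source plane (not shown) is what controls the wrap-around bias errors the paper analyzes; the air density and sound speed defaults are assumptions.

```python
import numpy as np

def rayleigh_pressure_fft(v_n, dx, z, k, rho=1.21, c=343.0):
    """Pressure field at height z above a baffled planar source with normal
    velocity v_n (square grid, spacing dx), via the angular spectrum:
    P(kx, ky, z) = rho*c*k/kz * V(kx, ky) * exp(i*kz*z), where
    kz = sqrt(k^2 - kx^2 - ky^2) (imaginary for evanescent components).
    Pad v_n with zeros before calling to suppress periodic wrap-around."""
    n = v_n.shape[0]
    kxy = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kxy, kxy, indexing="ij")
    kz = np.sqrt((k ** 2 - KX ** 2 - KY ** 2).astype(complex))
    kz[np.abs(kz) < 1e-12] = 1e-12     # avoid division by zero on the radiation circle
    P = rho * c * k / kz * np.fft.fft2(v_n) * np.exp(1j * kz * z)
    return np.fft.ifft2(P)
```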

  1. Fitting FFT-derived spectra: Theory, tool, and application to solar radio spike decomposition

    SciTech Connect

    Nita, Gelu M.; Fleishman, Gregory D.; Gary, Dale E.; Marin, William; Boone, Kristine

    2014-07-10

    Spectra derived from fast Fourier transform (FFT) analysis of time-domain data intrinsically contain statistical fluctuations whose distribution depends on the number of accumulated spectra contributing to a measurement. The tail of this distribution, which is essential for separating the true signal from the statistical fluctuations, deviates noticeably from the normal distribution for a finite number of accumulations. In this paper, we develop a theory to properly account for the statistical fluctuations when fitting a model to a given accumulated spectrum. The method is implemented in software for the purpose of automatically fitting a large body of such FFT-derived spectra. We apply this tool to analyze a portion of a dense cluster of spikes recorded by our FASR Subsystem Testbed instrument during a record-breaking event that occurred on 2006 December 6. The outcome of this analysis is briefly discussed.

  2. HexServer: an FFT-based protein docking server powered by graphics processors.

    PubMed

    Macindoe, Gary; Mavridis, Lazaros; Venkatraman, Vishwesh; Devignes, Marie-Dominique; Ritchie, David W

    2010-07-01

    HexServer (http://hexserver.loria.fr/) is the first fast Fourier transform (FFT)-based protein docking server to be powered by graphics processors. Using two graphics processors simultaneously, a typical 6D docking run takes approximately 15 s, which is up to two orders of magnitude faster than conventional FFT-based docking approaches using comparable resolution and scoring functions. The server requires two protein structures in PDB format to be uploaded, and it produces a ranked list of up to 1000 docking predictions. Knowledge of one or both protein binding sites may be used to focus and shorten the calculation when such information is available. The first 20 predictions may be accessed individually, and a single file of all predicted orientations may be downloaded as a compressed multi-model PDB file. The server is publicly available and does not require any registration or identification by the user.

  3. Billiard simulation and FFT analysis of AAS oscillations in nanofabricated InGaAs

    NASA Astrophysics Data System (ADS)

    Koga, Takaaki; Faniel, Sebastien; Mineshige, Shunsuke; Matsuura, Toru; Sekine, Yoshiaki

    2010-03-01

    The gate-voltage-dependent amplitude of the magneto-conductance oscillation was analyzed using the FFT method. The obtained FFT spectrum was compared with the areal dependence of the occurrence and spin-interference amplitude, calculated for Altshuler-Aronov-Spivak (AAS)-type time-reversal pairs of interference paths on all possible classical trajectories obtained by extensive billiard simulations within the given structures. We have calculated generic spin-interference (SI) curves as a function of the Rashba parameter α, for various values of the Dresselhaus parameter b41^6c6c [eVÅ^3]. The comparison between theory and experiment suggested that the value of b41^6c6c should be considerably reduced from 27 eVÅ^3, the generally known value from k.p theory.

  4. Perspectives in Magnetic Resonance: NMR in the Post-FFT Era

    PubMed Central

    Hyberts, Sven G.; Arthanari, Haribabu; Robson, Scott A.; Wagner, Gerhard

    2014-01-01

    Multi-dimensional NMR spectra have traditionally been processed with the fast Fourier transformation (FFT). The availability of high field instruments, the complexity of spectra of large proteins, the narrow signal dispersion of some unstructured proteins, and the time needed to record the necessary increments in the indirect dimensions to exploit the resolution of the highfield instruments make this traditional approach unsatisfactory. New procedures need to be developed beyond uniform sampling of the indirect dimensions and reconstruction methods other than the straight FFT are necessary. Here we discuss approaches of non-uniform sampling (NUS) and suitable reconstruction methods. We expect that such methods will become standard for multi-dimensional NMR data acquisition with complex biological macromolecules and will dramatically enhance the power of modern biological NMR. PMID:24656081

  5. Optical FFT/IFFT circuit realization using arrayed waveguide gratings and the applications in all-optical OFDM system.

    PubMed

    Wang, Zhenxing; Kravtsov, Konstantin S; Huang, Yue-Kai; Prucnal, Paul R

    2011-02-28

    Arrayed waveguide gratings (AWG) are widely used as wavelength division multiplexers (MUX) and demultiplexers (DEMUX) in optical networks. Here we propose and demonstrate that conventional AWGs can also be used as integrated spectral filters to realize a Fast Fourier transform (FFT) and its inverse form (IFFT). More specifically, we point out that the wavelength selection conditions of AWGs when used as wavelength MUX/DEMUX also enable them to perform FFT/IFFT functions. Therefore, previous research on AWGs can now be applied to optical FFT/IFFT circuit design. Compared with other FFT/IFFT optical circuits, AWGs have less structural complexity, especially for a large number of inputs and outputs. As an important application, AWGs can be used in optical OFDM systems. We propose an all-optical OFDM system with AWGs and demonstrate the simulation results. Overall, the AWG provides a feasible solution for all-optical OFDM systems, especially with a large number of optical subcarriers.

  6. Use of the Reduced Precision Redundancy (RPR) Method in a Radix-4 FFT Implementation

    DTIC Science & Technology

    2010-09-01

    Thesis by Athanasios Gavros, September 2010; thesis co-advisors: Herschel H. Loomis, Jr. and Alan A. Ross.

  7. Partial FFT Demodulation: A Detection Method for Doppler Distorted OFDM Systems

    DTIC Science & Technology

    2010-06-01

    ... receiver, thus indicating that an order of magnitude reduction in BER is achievable at a minimal increase in complexity. Pre-FFT linear processing schemes such as time-domain windowing to shorten the ICI are considered in [1], while time-varying filters ... been proposed for wireless ... This research has been funded in part by the following grants and organizations: ONR MURI grant N00014-07-1-0738, ONR ...

  8. A test of a modified algorithm for computing spherical harmonic coefficients using an FFT

    NASA Technical Reports Server (NTRS)

    Elowitz, Mark; Hill, Frank; Duvall, Thomas L., Jr.

    1989-01-01

    The Dilts (1985) algorithm for computing the spherical harmonic expansion coefficients for a function on a sphere, on the basis of a two-dimensional FFT, is modified and tested here, and found to eliminate the problems of overflow and large storage requirements encountered for harmonic degree values greater than 16. Results from timing tests show the Dilts program to be impractical, however, for the computation of spherical harmonic expansion coefficients for large harmonic degree values.

  9. STS-40 crewmembers participate in egress training at JSC's MAIL Bldg 9A FFT

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-40 Commander Bryan D. O'Connor (right) looks on as Pilot Sidney M. Gutierrez adjusts his launch and entry suit (LES) parachute harness. The two crewmembers, standing in front of the full fuselage trainer (FFT) side hatch, are preparing for emergency egress exercises in JSC's Mockup and Integration Laboratory (MAIL) Bldg 9A. In the background the crew escape system (CES) pole extends through the side hatch.

  10. STS-40 crew waits for emergency egress training to begin at JSC's MAIL FFT

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-40 crewmembers listen to instructions prior to emergency egress exercises at JSC's Mockup and Integration Laboratory (MAIL) Bldg 9A. Wearing launch and entry suits (LESs) are (left to right) Mission Specialist (MS) James P. Bagian, MS Tamara E. Jernigan, Pilot Sidney M. Gutierrez, and Commander Bryan D. O'Connor. The crew will practice egress through the side hatch of the Full Fuselage Trainer (FFT) via the crew escape system (CES) pole.

  11. Parallel implementation of 3D FFT with volumetric decomposition schemes for efficient molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Imamura, Toshiyuki; Sugita, Yuji

    2016-03-01

    Three-dimensional Fast Fourier Transform (3D FFT) plays an important role in a wide variety of computer simulations and data analyses, including molecular dynamics (MD) simulations. In this study, we develop hybrid (MPI+OpenMP) parallelization schemes of 3D FFT based on two new volumetric decompositions, mainly for the particle mesh Ewald (PME) calculation in MD simulations. In one scheme, (1d_Alltoall), five all-to-all communications in one dimension are carried out, and in the other, (2d_Alltoall), one two-dimensional all-to-all communication is combined with two all-to-all communications in one dimension. 2d_Alltoall is similar to the conventional volumetric decomposition scheme. We performed benchmark tests of 3D FFT for the systems with different grid sizes using a large number of processors on the K computer in RIKEN AICS. The two schemes show comparable performances, and are better than existing 3D FFTs. The performances of 1d_Alltoall and 2d_Alltoall depend on the supercomputer network system and number of processors in each dimension. There is enough leeway for users to optimize performance for their conditions. In the PME method, short-range real-space interactions as well as long-range reciprocal-space interactions are calculated. Our volumetric decomposition schemes are particularly useful when used in conjunction with the recently developed midpoint cell method for short-range interactions, due to the same decompositions of real and reciprocal spaces. The 1d_Alltoall scheme of 3D FFT takes 4.7 ms to simulate one MD cycle for a virus system containing more than 1 million atoms using 32,768 cores on the K computer.

  12. The Sharper Image: Implementing a Fast Fourier Transform (FFT) to Enhance a Video-Captured Image.

    DTIC Science & Technology

    1994-01-01

    ... mathematical system to quantitatively analyze and compare complex waveforms. In 1807, Baron Jean-Baptiste-Joseph Fourier proved that any periodic wave can be ... (NAMRL Special Report 94-1) ... Most visually impaired persons fail to discern the higher spatial frequencies present in an image. Based on the Fourier analysis of vision, Peli et al. ...

  13. Massively parallel implementation of 3D-RISM calculation with volumetric 3D-FFT.

    PubMed

    Maruyama, Yutaka; Yoshida, Norio; Tadano, Hiroto; Takahashi, Daisuke; Sato, Mitsuhisa; Hirata, Fumio

    2014-07-05

    A new three-dimensional reference interaction site model (3D-RISM) program for massively parallel machines combined with the volumetric 3D fast Fourier transform (3D-FFT) was developed, and tested on the RIKEN K supercomputer. The ordinary parallel 3D-RISM program has a limitation on the number of parallelizations because of the limitations of the slab-type 3D-FFT. The volumetric 3D-FFT relieves this limitation drastically. We tested the 3D-RISM calculation on the large and fine calculation cell (2048(3) grid points) on 16,384 nodes, each having eight CPU cores. The new 3D-RISM program achieved excellent scalability to the parallelization, running on the RIKEN K supercomputer. As a benchmark application, we employed the program, combined with molecular dynamics simulation, to analyze the oligomerization process of chymotrypsin Inhibitor 2 mutant. The results demonstrate that the massive parallel 3D-RISM program is effective to analyze the hydration properties of the large biomolecular systems.

  14. Realization of optical OFDM using time lenses and its comparison with optical OFDM using FFT.

    PubMed

    Yang, Dong; Kumar, Shiva

    2009-09-28

    An optical orthogonal frequency division multiplexing (OFDM) scheme with the Fourier transform performed in the optical domain using time lenses, both at the transmitter and at the receiver, is analyzed. Its performance is compared with that of the optical OFDM scheme that utilizes the fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) in the electrical domain. The nonlinear effects induced by the Mach-Zehnder modulator (MZM) as well as by the fiber are investigated for both schemes. Results show that the coherent OFDM using time lenses has almost the same performance as that using the FFT when the electrical driving message signal voltages are low, so that the MZM operates in the linear region. The nonlinearity of the MZM degrades the conventional coherent OFDM based on the FFT when the power of the electrical driving signal increases significantly, but has only a negligible impairment on the coherent OFDM using time lenses. Details of the time-lens setup are provided, and a novel scheme to implement the time lens without requiring a quadratic dependence of the driving voltage is presented.

  15. Numerical evaluation of the radiation from unbaffled, finite plates using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.

    1983-01-01

    An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.

  16. A Comparison of Direction Finding Results From an FFT Peak Identification Technique With Those From the Music Algorithm

    DTIC Science & Technology

    1991-07-01

    A Comparison of Direction Finding Results From an FFT Peak Identification Technique With Those From the MUSIC Algorithm, by L.E. Montbriand, CRC Report No. 1438, Ottawa, July 1991.

  17. Application to induction motor faults diagnosis of the amplitude recovery method combined with FFT

    NASA Astrophysics Data System (ADS)

    Liu, Yukun; Guo, Liwei; Wang, Qixiang; An, Guoqing; Guo, Ming; Lian, Hao

    2010-11-01

    This paper presents a signal processing method - the amplitude recovery method (abbreviated to ARM) - that can be used as a pre-processing step for the fast Fourier transform (FFT) in order to analyze the spectrum of harmonics other than the fundamental frequency in stator currents and to diagnose subtle faults in induction motors. In this role, the ARM functions as a filter that removes the fundamental-frequency component from the three phases of stator currents of the induction motor. The filtered result of the ARM can then be passed to the FFT for further spectrum analysis, so that the amplitudes of the other-order frequencies can be extracted and analyzed independently. If the FFT is used without the ARM pre-processing and the other-order components are faint compared to the fundamental frequency, their amplitudes are not easily extracted from the stator currents, because when the FFT is applied directly to the original stator current signal, all frequencies in the spectrum carry the same weight. The ARM is capable of separating the other-order part of the stator currents from the fundamental-order part. Compared to existing digital filters, the ARM has several benefits: its stop-band is narrow enough to stop only the fundamental frequency, it requires only simple algebraic and trigonometric operations without any integration, and it is derived directly from mathematical equations without any artificial adjustment. The ARM can also be used by itself as a coarse-grained diagnosis of faults in induction motors while they are operating. These features can be applied to monitor and diagnose subtle faults in induction motors and guard them from damage during operation. The diagnostic application of the ARM combined with the FFT is also demonstrated in this paper on an experimental induction motor. The test results verify the rationality and feasibility of the proposed method.
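
    The ARM itself is not reproduced here. As a hedged stand-in for the pre-processing idea, the sketch below suppresses the fundamental with a least-squares fit-and-subtract before the FFT, after which a faint synthetic 75 Hz fault component becomes the dominant spectral line; all signal parameters are assumed for illustration.

        # Hedged stand-in for the ARM pre-processing: the fundamental is estimated by a
        # least-squares fit and subtracted, then the residual spectrum is examined by FFT.
        import numpy as np

        fs, f0, T = 5000.0, 50.0, 2.0                      # sample rate, fundamental, duration (assumed)
        t = np.arange(0.0, T, 1.0 / fs)
        # synthetic stator current: strong fundamental plus a faint 75 Hz fault component
        i_a = 10.0 * np.cos(2 * np.pi * f0 * t) + 0.05 * np.cos(2 * np.pi * 75.0 * t)

        # least-squares estimate of the fundamental (in-phase and quadrature parts)
        A = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
        coef, *_ = np.linalg.lstsq(A, i_a, rcond=None)
        residual = i_a - A @ coef                          # fundamental suppressed

        spec = np.abs(np.fft.rfft(residual * np.hanning(len(residual))))
        freqs = np.fft.rfftfreq(len(residual), 1.0 / fs)
        print(freqs[np.argmax(spec)])                      # the 75 Hz fault line now dominates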

  18. High scalable implementation of SPME using parallel spherical cutoff three-dimensional FFT on the six-dimensional torus QCDOC supercomputer

    NASA Astrophysics Data System (ADS)

    Fang, Bin

    In order to model complex heterogeneous biophysical systems with non-trivial charge distributions, such as globular proteins in water, it is important to evaluate the long-range forces present in these systems accurately and efficiently. The Smooth Particle Mesh Ewald (SPME) summation technique is commonly employed to determine the long-range part of the electrostatic energy in large-scale molecular simulations. While the SPME technique does not give rise to a performance bottleneck in a single-processor or scalar computation, current implementations of SPME on massively parallel supercomputers become problematic at large processor counts, limiting the time and length scales that can be reached. Two contributions are made in this dissertation to improve both accuracy and efficiency on massively parallel computing platforms. First, a well-designed parallel framework of 3D complex-to-complex FFT and 3D real-to-complex FFT for the novel QCDOC supercomputer with its 6D-torus architecture is given; the efficiency of this framework was tested on up to 4096 processors. Second, a new modification of the SPME technique is exploited, inspired by the non-linear growth of the approximation error of the Euler exponential spline interpolation function. This fine-grained parallel implementation of SPME has been embedded into the MDoC package. Numerical tests of package performance on up to 1024 processors of the QCDOC supercomputer residing at Brookhaven National Lab are presented for two systems of interest: a beta-hairpin solvated in explicit water, consisting of 1112 water molecules and a 20-residue protein for a total of 3579 atoms, and HIV-1 protease solvated in explicit water, consisting of 8793 water molecules and a 198-residue protein for a total of 29508 atoms.

  19. Architecture & Environment

    ERIC Educational Resources Information Center

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  20. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  1. Neural Architectures for Control

    NASA Technical Reports Server (NTRS)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse-resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine-scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.

  2. AUTOMATIC GENERATION OF FFT FOR TRANSLATIONS OF MULTIPOLE EXPANSIONS IN SPHERICAL HARMONICS.

    PubMed

    Kurzak, Jakub; Mirkovic, Dragan; Pettitt, B Montgomery; Johnsson, S Lennart

    2008-01-01

    The fast multipole method (FMM) is an efficient algorithm for calculating electrostatic interactions in molecular simulations and a promising alternative to Ewald summation methods. Translation of multipole expansions in spherical harmonics is the most important operation of the fast multipole method, and fast Fourier transform (FFT) acceleration of this operation is among the fastest ways of improving its performance. The technique relies on highly optimized implementations of fast Fourier transform routines for the desired expansion sizes, which need to incorporate knowledge of the symmetries and zero elements in the input arrays. Here a method is presented for the automatic generation of such highly optimized routines.

  3. Analysis of fixed point FFT for Fourier domain optical coherence tomography systems.

    PubMed

    Ali, Murtaza; Parlapalli, Renuka; Magee, David P; Dasgupta, Udayan

    2009-01-01

    Optical coherence tomography (OCT) is a new imaging modality gaining popularity in the medical community. Its applications include ophthalmology, gastroenterology, dermatology, etc. As the use of OCT increases, so does the need for portable, low-power devices. Digital signal processors (DSPs) are well suited to meet the signal processing requirements of such a system. These processors usually operate with fixed-point arithmetic. This paper analyzes the issues that a system implementer faces when implementing signal processing algorithms on a fixed-point processor. Specifically, we show the effect of different fixed-point precisions in the implementation of the FFT on the sensitivity of Fourier domain OCT systems.

  4. Accurate compensation of the low-frequency components for the FFT-based turbulent phase screen.

    PubMed

    Xiang, Jingsong

    2012-01-02

    The standard FFT-based turbulent phase screen generation method has very large errors due to undersampling of the low-frequency components. Subharmonic methods are the main approach for compensating the low-frequency components and improving the accuracy, but the residual errors are still large. In this paper I propose a new low-frequency compensation method based on correlation-matrix phase screen generation. Using this method, the low-frequency components can be compensated accurately, and both the accuracy and the speed are superior to those of the subharmonic methods.
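
    For reference, the baseline that such compensation methods improve on, the standard FFT-based phase screen whose coarse frequency grid undersamples the low-frequency part of the spectrum, can be sketched as below. The von Karman spectrum constants and the overall normalization follow one common convention and are assumptions; this is not the correlation-matrix method proposed in the paper.

        # Sketch of the standard FFT phase-screen generator (the baseline whose
        # low-frequency error the compensation methods address); normalization
        # conventions vary between references, so treat the scaling as indicative.
        import numpy as np

        def fft_phase_screen(N, delta, r0, L0=100.0, seed=0):
            """N x N von Karman phase screen: grid spacing delta [m], Fried parameter r0 [m]."""
            rng = np.random.default_rng(seed)
            df = 1.0 / (N * delta)                         # spatial-frequency grid spacing [1/m]
            fx = np.fft.fftfreq(N, d=delta)
            FX, FY = np.meshgrid(fx, fx, indexing="ij")
            # von Karman phase power spectral density [rad^2 m^2]
            PSD = 0.023 * r0**(-5.0 / 3.0) * (FX**2 + FY**2 + 1.0 / L0**2)**(-11.0 / 6.0)
            PSD[0, 0] = 0.0                                # remove the piston term
            cn = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
            cn *= np.sqrt(PSD) * df
            return np.real(np.fft.ifft2(cn)) * N * N       # phase in radians (one of two screens)

        screen = fft_phase_screen(N=512, delta=0.02, r0=0.1)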

  5. Improving the Performance and Resource Utilization of the CASPER FFT and Polyphase Filterbank

    NASA Astrophysics Data System (ADS)

    Monroe, Ryan

    The effectiveness of Digital Signal Processing (DSP) solutions for radio-astronomy is limited by the efficiency of the implemented algorithms. Novel implementations of several popular DSP algorithms are presented. Their optimization strategies are discussed and their efficiency is compared to that of the standard Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) library solutions. Compared to CASPER, the PFB-FIR and FFT modules require 73% and 45% of the DSP48E1 resources, with performance dominated by ADC quantization noise for typical radio-astronomy inputs.

  6. STS-46 Payload Specialist Malerba on the middeck of JSC's FFT mockup

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-46 Atlantis, Orbiter Vehicle (OV) 104, Italian Payload Specialist Franco Malerba with his hand resting on the crew escape system (CES) pole stands on the middeck of the Full Fuselage Trainer (FFT) located in JSC's Mockup and Integration Laboratory (MAIL) Bldg 9. Malerba, wearing a flight suit, familiarizes himself with the operation of the CES pole which extends out the shuttle mockup's open side hatch. The CES pole is used if emergency egress is required during the launch or ascent phase of flight.

  7. Precision geoid determination by spherical FFT in and around the Korean peninsula

    NASA Astrophysics Data System (ADS)

    Yun, H.-S.

    1999-01-01

    This paper deals with precision geoid determination by a gravimetric solution in and around the Korean peninsula. A number of data files were compiled for this work, now containing more than 69,900 point gravity data on land and ocean areas. The EGM96 global geopotential model to degree 360 was used to determine the long-wavelength component of the geoid surface. By applying the remove-restore technique, the geoid undulations were determined by combining a geopotential model, mean free-air gravity anomalies and heights from a Digital Elevation Model (DEM). The computation involves a spherical approximation to conduct the Stokes integration by a two-dimensional spherical Fast Fourier Transform (FFT) with 100% zero-padding. A terrain correction was also computed by FFT with a spherical approximation of the Residual Terrain Model (RTM) terrain correction integral. Accuracy estimates are given for absolute geoid undulations using 78 GPS/leveling stations. The comparative evaluation gives a bias of 0.187 meters and a standard deviation of 0.28 meters. The relative accuracy achieved was of the order of 3.1 ppm for baselines between 10 and 350 kilometers.

  8. Fault diagnosis method based on FFT-RPCA-SVM for Cascaded-Multilevel Inverter.

    PubMed

    Wang, Tianzhen; Qi, Jie; Xu, Hao; Wang, Yide; Liu, Lei; Gao, Diju

    2016-01-01

    Thanks to reduced switch stress, high-quality load waveforms, easy packaging and good extensibility, the cascaded H-bridge multilevel inverter is widely used in wind power systems. To guarantee stable operation of the system, a new fault diagnosis method, based on the Fast Fourier Transform (FFT), Relative Principal Component Analysis (RPCA) and the Support Vector Machine (SVM), is proposed for the H-bridge multilevel inverter. To avoid the influence of load variation on fault diagnosis, the output voltages of the inverter are chosen as the fault characteristic signals. To shorten the diagnosis time and improve the diagnostic accuracy, the main features of the fault characteristic signals are extracted by FFT. To further reduce the training time of the SVM, the feature vector is reduced based on RPCA, which yields a lower-dimensional feature space. The fault classifier is constructed via SVM. An experimental prototype of the inverter was built to test the proposed method. Compared to other fault diagnosis methods, the experimental results demonstrate the high accuracy and efficiency of the proposed method.
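
    A hedged sketch of the overall pipeline (FFT feature extraction, dimensionality reduction, SVM classification) is given below. Ordinary PCA from scikit-learn stands in for the paper's Relative PCA, and the voltage windows and labels are random placeholders, so the sketch only illustrates the structure, not the reported results.

        # Pipeline sketch: FFT harmonic features -> dimensionality reduction -> SVM.
        # Ordinary PCA stands in for the paper's Relative PCA; data are random placeholders.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def fft_features(voltage_windows, n_harmonics=64):
            """Magnitudes of the first n_harmonics FFT bins of each output-voltage window."""
            spectra = np.abs(np.fft.rfft(voltage_windows, axis=1))
            return spectra[:, 1:n_harmonics + 1]           # drop the DC bin

        rng = np.random.default_rng(0)
        X_raw = rng.standard_normal((200, 1024))           # placeholder voltage windows
        y = rng.integers(0, 5, size=200)                   # placeholder fault classes

        clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
        clf.fit(fft_features(X_raw), y)
        print(clf.score(fft_features(X_raw), y))           # training accuracy on the placeholders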

  9. Robust FFT-based scale-invariant image registration with image gradients.

    PubMed

    Tzimiropoulos, Georgios; Argyriou, Vasileios; Zafeiriou, Stefanos; Stathaki, Tania

    2010-10-01

    We present a robust FFT-based approach to scale-invariant image registration. Our method relies on FFT-based correlation twice: once in the log-polar Fourier domain to estimate the scaling and rotation and once in the spatial domain to recover the residual translation. Previous methods based on the same principles are not robust. To equip our scheme with robustness and accuracy, we introduce modifications which tailor the method to the nature of images. First, we derive efficient log-polar Fourier representations by replacing image functions with complex gray-level edge maps. We show that this representation both captures the structure of salient image features and circumvents problems related to the low-pass nature of images, interpolation errors, border effects, and aliasing. Second, to recover the unknown parameters, we introduce the normalized gradient correlation. We show that, using image gradients to perform correlation, the errors induced by outliers are mapped to a uniform distribution for which our normalized gradient correlation features robust performance. Exhaustive experimentation with real images showed that, unlike any other Fourier-based correlation techniques, the proposed method was able to estimate translations, arbitrary rotations, and scale factors up to 6.
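
    The second correlation stage, recovering the residual translation by FFT-based correlation, can be sketched with plain phase correlation as below; the paper's normalized gradient correlation and the log-polar rotation/scale stage are not reproduced, and the image size and shift are arbitrary assumptions.

        # FFT-based phase correlation recovering a translation between two images;
        # the rotation/scale stage works analogously on log-polar Fourier magnitudes.
        import numpy as np

        def phase_correlation(a, b):
            """Return the (row, col) shift that maps image b onto image a."""
            F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
            corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
            peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
            shape = np.array(a.shape)
            peak[peak > shape // 2] -= shape[peak > shape // 2]   # unwrap negative shifts
            return tuple(int(p) for p in peak)

        rng = np.random.default_rng(0)
        img = rng.standard_normal((128, 128))
        shifted = np.roll(img, (7, -12), axis=(0, 1))
        print(phase_correlation(shifted, img))             # (7, -12)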

  10. Enhancing the performance of BOTDR based on the combination of FFT technique and complementary coding.

    PubMed

    Wang, Feng; Zhu, Chenghao; Cao, Chunqi; Zhang, Xuping

    2017-02-20

    We implement a BOTDR sensor that combines complementary coding with the fast Fourier transform (FFT) technique for high-performance distributed sensing. The complementary coding provides an enhanced signal-to-noise ratio for the sensing system, which leads to high-accuracy measurement. Meanwhile, the FFT technique in BOTDR is used to reduce the measurement time sharply compared with the classical frequency-sweeping technique. In addition, a pre-depletion two-wavelength probe pulse is proposed to suppress the distortion of the coded probe pulse induced by the EDFA. Experiments are carried out over more than 10 km of single-mode fiber, and the results show the capability of the proposed scheme to achieve 2 m spatial resolution with 0.37 MHz frequency uncertainty, which corresponds to ∼0.37 °C temperature resolution or ∼7.4 με strain resolution. In theory, the measurement can be more than tens of times faster than the traditional frequency-sweeping method.

  11. FFT averaging of multichannel BCG signals from bed mattress sensor to improve estimation of heart beat interval.

    PubMed

    Kortelainen, Juha M; Virkkala, Jussi

    2007-01-01

    A multichannel pressure-sensing Emfit foil was integrated into a bed mattress for measuring ballistocardiograph (BCG) signals during sleep. We calculated the heart beat interval with the cepstrum method, applying the FFT to short time windows containing a pair of consecutive heart beats. We decreased the variance of the FFT by averaging the multichannel data in the frequency domain. The relative error of our method with respect to the electrocardiographic RR interval was only 0.35% for 15 night recordings with six normal subjects, when 12% of the data was automatically removed due to movement artifacts. The background motivation for this work comes from studies applying heart rate variability to sleep staging.
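
    A hedged sketch of the cepstrum idea with frequency-domain averaging follows: the power spectra of a short window are averaged across the mattress channels before the inverse transform, and the dominant quefrency estimates the beat interval. The synthetic pulse train, sampling rate and search band are assumptions, not the paper's data.

        # Cepstrum sketch: channel-averaged power spectra, then the dominant quefrency
        # within a plausible beat-interval range estimates the heart beat interval.
        import numpy as np

        def beat_interval(windows, fs):
            """windows: (n_channels, n_samples) BCG segment; returns interval in seconds."""
            spec = np.abs(np.fft.rfft(windows * np.hanning(windows.shape[1]), axis=1))**2
            avg = spec.mean(axis=0)                        # average the FFTs across channels
            ceps = np.fft.irfft(np.log(avg + 1e-12))       # real cepstrum of the averaged spectrum
            lo, hi = int(0.4 * fs), int(1.5 * fs)          # search 0.4-1.5 s (150-40 bpm)
            return (lo + np.argmax(ceps[lo:hi])) / fs

        # synthetic test: six noisy channels of a pulse train with a 1.0 s period
        fs, n = 100, 300
        t = np.arange(n) / fs
        pulse = np.exp(-((t % 1.0) - 0.1)**2 / 0.001)
        windows = pulse + 0.05 * np.random.default_rng(0).standard_normal((6, n))
        print(beat_interval(windows, fs))                  # ~1.0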

  12. Improving situation awareness using a hub architecture for friendly force tracking

    NASA Astrophysics Data System (ADS)

    Karkkainen, Anssi P.

    2010-04-01

    Situation Awareness (SA) is the perception of environmental elements within a volume of time and space, the comprehension of their meaning, and the projection of their future status. In a military environment, the most critical elements to be tracked are friendly and hostile forces. Poor knowledge of the locations of friendly forces easily leads to situations in which troops come under fire from their own forces, or in which decisions in a command and control system are based on incorrect tracking. Thus Friendly Force Tracking (FFT) is a vital part of building situation awareness. FFT is quite simple in theory: collected tracks are shared through the networks to all troops. In the real world, the situation is not so clear. Poor communication capabilities, lack of continuous connectivity and a large number of users at different levels place high requirements on FFT systems. In this paper a simple architecture for Friendly Force Tracking is presented. The architecture is based on NFFI (NATO Friendly Force Information) hubs, which have two key features: the ability to forward tracking information and the ability to convert information into the desired format. The hub-based approach provides a lightweight and scalable solution, which is able to use several types of communication media (GSM, tactical radios, TETRA, etc.). The system is also simple to configure and maintain. One main benefit of the proposed architecture is that it is independent of the message format: it communicates using NFFI messages, but national formats are also allowed.

  13. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. By being a single, self-revealing architecture, the ability to develop single tools, for example a single graphical user interface, to span all applications is enabled. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications becomes possible. Object encapsulation further allows information to become, in a sense, self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  14. Multiple wall-reflection effect in adaptive-array differential-phase reflectometry on QUEST

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Fujisawa, A.; Nagashima, Y.; Hamasaki, M.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; QUEST Team

    2016-01-01

    A phased-array antenna and Software-Defined Radio (SDR) heterodyne-detection systems have been developed for adaptive array approaches in reflectometry on QUEST. In the QUEST device, which acts as a large oversized cavity, a significant standing-wave (multiple wall-reflection) effect was observed, with distorted amplitude and phase evolution even when the adaptive array analyses were applied. The distorted fields were analyzed by the Fast Fourier Transform (FFT) in the wavenumber domain to treat separately the components with and without wall reflections. The differential phase evolution was properly obtained from the distorted field evolution by the FFT procedure. A frequency-derivative method has been proposed to overcome the multiple wall-reflection effect, and the SDR super-heterodyned components with the small frequency difference required for the derivative method were correctly obtained using the FFT analysis.

  15. FFT analysis on NDVI annual cycle and climatic regionality in Northeast Brazil

    NASA Astrophysics Data System (ADS)

    Negrón Juárez, Robinson I.; Liu, William T.

    2001-11-01

    By considering that the climate of Northeast Brazil (NEB) has distinct wet and dry seasons, the mixed radix fast Fourier transform (mrFFT) algorithm, developed at the National Aerospace Centre of the Netherlands, was applied to a monthly Normalized Difference Vegetation Index (NDVI) time series from July 1981 to June 1993, to generate phase, amplitude and mean NDVI data using a 1-year frequency in order to improve the analysis of its spatial variation. The NDVI mean values varied from >0.7, which occurred in northwest and southeast regions, to <0.3 in the northeast and 0.4 in the southwest regions of the NEB. The 90° phase month at its maximum amplitude occurred in August and was observed in both southeastern and northwestern coasts, located at 10.5°S-37.5°W and 4°S-46°W, respectively. It changed rapidly from August, June to May, moved inland and changed gradually from May through to April and from March to February, then moved towards the centre Dry Polygon area. Then it changed gradually from February to January and ended up in December, and moved further southwards. The annual cycle amplitude varied from <0.075 in northwest and southeast regions to >0.25 in the northeast region. By using spatial variations of phase, amplitude and mean NDVI values, 15 climate types were delineated for the NEB. The spatial distribution of climate types in the NEB delineated by the NDVI FFT analysis agreed mostly with the climatic types presented by Hargreaves (Precipitation dependability and potentials for agricultural production in Northeast Brazil. Empresa Brasileira de Pesquisa Agropecuária-EMBRAPA, Brazil, 1974), except regions with higher spatial variability and limited surface meteorological data. Among the three components: phase, amplitude and mean NDVI, the phase image, informing the initiation and duration of rainy season, was the most important component for climate-type delineation. Nevertheless, while the extreme values of amplitude, inferring a high wet
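
    The harmonic-analysis step, reading the mean, amplitude and phase of the 1-year cycle from a single FFT bin of a monthly series, can be sketched as below; this is a generic FFT illustration on synthetic values, not the mrFFT implementation used in the study.

        # Mean, amplitude and phase of the 1-year harmonic of a monthly series,
        # read from a single FFT bin (12-month period); values here are synthetic.
        import numpy as np

        def annual_cycle(ndvi_monthly):
            """ndvi_monthly: 1-D array whose length is a multiple of 12."""
            n = len(ndvi_monthly)
            spec = np.fft.rfft(ndvi_monthly)
            k = n // 12                                    # index of the one-cycle-per-year bin
            mean = spec[0].real / n
            amplitude = 2.0 * np.abs(spec[k]) / n
            phase_month = (-np.angle(spec[k]) % (2 * np.pi)) * 12 / (2 * np.pi)
            return mean, amplitude, phase_month

        # 12 years of synthetic monthly NDVI peaking in month index 2
        t = np.arange(144)
        ndvi = 0.5 + 0.2 * np.cos(2 * np.pi * (t - 2) / 12)
        print(annual_cycle(ndvi))                          # ~(0.5, 0.2, 2.0)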

  16. Experimental Architecture.

    ERIC Educational Resources Information Center

    Alter, Kevin

    2003-01-01

    Describes the design of the Centre for Architectural Structures and Technology at the University of Manitoba, including the educational context and design goals. Includes building plans and photographs. (EV)

  17. Analog implementation of radix-2, 16-FFT processor for OFDM receivers: non-linearity behaviours and system performance analysis

    NASA Astrophysics Data System (ADS)

    Mokhtarian, N.; Hodtani, G. A.

    2015-12-01

    Analog implementations of decoders have been widely studied in terms of circuit complexity, power and speed, and their integration with other analog blocks is a natural extension of analog decoding research. In the front-end blocks of orthogonal frequency-division multiplexing (OFDM) systems, combining an analog fast Fourier transform (FFT) with an analog decoder is attractive. In this article, the implementation of a 16-symbol FFT processor based on analog complementary metal-oxide-semiconductor current mirrors is presented at the circuit and system levels; the FFT is implemented using a butterfly diagram in which each node is realized with analog circuits. Implementation details include the effects of transistor mismatch and inherent noise, and the effects of circuit non-linearity on OFDM system performance. It is shown that not only can the transistors' inherent noise be measured, but transistor mismatch can also be applied as an input-referred noise source usable in system- and circuit-level studies. Simulations of the radix-2, 16-symbol FFT show that the proposed circuits consume very low power, and that the impacts of noise, mismatch and non-linearity at each node of the processor are very small.

  18. Mixed boundary conditions for FFT-based homogenization at finite strains

    NASA Astrophysics Data System (ADS)

    Kabel, Matthias; Fliegener, Sascha; Schneider, Matti

    2016-02-01

    In this article we introduce a Lippmann-Schwinger formulation for the unit cell problem of periodic homogenization of elasticity at finite strains incorporating arbitrary mixed boundary conditions. Such problems occur frequently, for instance when validating computational results with tensile tests, where the deformation gradient in the loading direction is fixed, as is the stress in the corresponding orthogonal plane. Previous Lippmann-Schwinger formulations involving mixed boundary conditions can only describe tensile tests where the vector of applied force is proportional to a coordinate direction. Utilizing suitable orthogonal projectors, we develop a Lippmann-Schwinger framework for arbitrary mixed boundary conditions. The resulting fixed-point and Newton-Krylov algorithms preserve the positive characteristics of existing FFT algorithms. We demonstrate the power of the proposed methods with a series of numerical examples, including continuous fiber-reinforced laminates and a complex nonwoven structure of a long-fiber-reinforced thermoplastic, resulting in a speed-up of some computations by a factor of 1000.

  19. Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data

    DTIC Science & Technology

    2009-05-01

    September 2007. DRDC Suffield TR 2009-040. Résumé: The Bayesian inferential probabilistic framework is a natural and logically consistent method of ... conducted by means of field trials in 2007 (FFT-07) by the FUSION (FUsing Sensor Information from ...) sensor observation networks ... that the CBR sensors have detected such an event. It is a source reconstruction problem (also called in some ...

  20. Analysis of p-Si macropore etching using FFT-impedance spectroscopy.

    PubMed

    Ossei-Wusu, Emmanuel; Carstensen, Jürgen; Föll, Helmut

    2012-06-20

    The dependence of the etch mechanism of lithographically seeded macropores in low-doped p-type silicon on water and hydrofluoric acid (HF) concentrations has been investigated. Using different HF concentrations (prepared from 48 and 73 wt.% HF) in organic electrolytes, the pore morphologies of etched samples have been related to in situ impedance spectra (IS) obtained by Fast Fourier Transform (FFT) technique. It will be shown that most of the data can be fitted with a simple equivalent circuit model. The model predicts that the HF concentration is responsible for the net silicon dissolution rate, while the dissolution rate selectivity at the pore tips and walls that ultimately enables pore etching depends on the water content. The 'quality' of the pores increases with decreasing water content in HF/organic electrolytes.

  1. Investigation of the electroreduction of silver sulfite complexes by means of electrochemical FFT impedance spectroscopy.

    PubMed

    Valiūniene, A; Baltrūnas, G; Valiūnas, R; Popkirov, G

    2010-08-15

    The electroreduction kinetics of silver sulfite complexes was investigated by electrochemical fast Fourier transform (FFT) impedance spectroscopy (0.061-1500 Hz). The time dependences of the real and imaginary components of impedance were determined in a solution containing 0.05 M Ag(I) and 1 M Na2SO3. The mean duration of silver ad-atom diffusion on the surface to the nearest crystallization centre was calculated: during the first 210 s of contact with the electrolyte, these values increase from 0.66 up to 1.77 s; thereafter, this variation stabilizes and the mean duration of silver ad-atom diffusion reaches an almost constant value (1.56 s).

  2. [Discussion about the prediction accuracy for dynamic spectrum by partial FFT].

    PubMed

    Li, Gang; Li, Qiu-xia; Lin, Ling; Li, Xiao-xia; Wang, Yan; Liu, Yu-liang

    2006-12-01

    The development of near-infrared-based techniques for the noninvasive determination of blood component concentrations has attracted significant interest in recent years. However, noninvasive measurement of blood composition has not yet been applied clinically, except for blood oxygen saturation. The most important and most difficult problems are the effects of individual discrepancy and complicated measurement conditions. In the present article, the approach of the dynamic spectrum (DS), based on the principle of photoplethysmography, is introduced. It is also very difficult to extract the DS with high precision in the time domain. In order to extract the DS with high accuracy, the FFT method and its leakage are discussed. The influences of sampling speed, sampling-signal periodicity, window function and unsynchronized sampling are analyzed by simulation experiments. The results of the simulation experiments show that choosing an appropriate sampling speed, sampling-signal periodicity, window function and interpolation algorithm improves the accuracy noticeably. This provides a necessary condition for the clinical application of the DS.
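
    The leakage effects discussed above can be made concrete with a small numerical illustration: the single-bin amplitude estimate of a tone is exact when the record holds an integer number of periods, degrades for a non-integer count, and a Hann window reduces the worst-case error. The sampling rate, tone frequency and record lengths are arbitrary assumptions.

        # Leakage illustration: single-bin amplitude of a 1.0 V tone with an integer
        # versus non-integer number of periods, rectangular versus Hann window.
        import numpy as np

        fs, f0, A = 1000.0, 50.0, 1.0                      # assumed sample rate, tone, amplitude
        for n_periods in (20.0, 20.5):
            n = int(fs * n_periods / f0)
            t = np.arange(n) / fs
            x = A * np.sin(2 * np.pi * f0 * t)
            rect = np.abs(np.fft.rfft(x)) * 2 / n
            hann = np.abs(np.fft.rfft(x * np.hanning(n))) * 2 / np.hanning(n).sum()
            print(n_periods, rect.max(), hann.max())       # 1.00/1.00, then ~0.64/~0.85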

  3. [Research on multifunctional fitness monitor based on FFT and photoelectric sensor].

    PubMed

    Tian, He; Zhu, Huanyan; Zhang, Yu; Zhang, Xu

    2013-01-01

    This paper proposes a multifunctional fitness monitor based on the FFT and a photoelectric sensor, which uses a pulse-type, non-invasive detection method to analyze human blood oxygen saturation and heart rate. The system measures the absorption of red and infrared light by the fingertip; then, through a programmable-gain amplifier and fast Fourier analysis, it extracts the amplitude and frequency of the AC signal. A PIC24FJ128GA010 is used to perform the acquisition, automatic gain selection and signal processing. Finally, the result is calibrated with a pulse blood-oxygen emulator. Furthermore, the monitor provides a pedometer function based on the three-axis acceleration sensor MMA7260, which enhances its usability and allows people to obtain dynamic physiological signs while exercising.
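
    A hedged sketch of the FFT step follows: the heart rate is taken from the dominant cardiac-band peak of the infrared channel, and a red/infrared ratio of ratios is formed from the AC (peak bin) and DC (zero bin) components. The SpO2 calibration line and the synthetic signals are placeholders, not the device's calibration or data.

        # PPG sketch: heart rate from the dominant cardiac-band FFT peak and a red/IR
        # "ratio of ratios"; the SpO2 calibration line is a generic placeholder.
        import numpy as np

        def ppg_analysis(red, ir, fs):
            n = len(red)
            freqs = np.fft.rfftfreq(n, 1.0 / fs)
            win = np.hanning(n)
            R_spec = np.abs(np.fft.rfft(red * win))
            IR_spec = np.abs(np.fft.rfft(ir * win))
            band = (freqs > 0.7) & (freqs < 3.5)           # 42-210 bpm cardiac band
            k = np.flatnonzero(band)[np.argmax(IR_spec[band])]
            heart_rate_bpm = 60.0 * freqs[k]
            ratio = (R_spec[k] / R_spec[0]) / (IR_spec[k] / IR_spec[0])   # AC/DC per channel
            spo2 = 110.0 - 25.0 * ratio                    # placeholder empirical calibration
            return heart_rate_bpm, spo2

        fs = 100.0
        t = np.arange(0.0, 10.0, 1.0 / fs)
        red = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)     # synthetic 72 bpm channels
        ir = 1.0 + 0.03 * np.sin(2 * np.pi * 1.2 * t)
        print(ppg_analysis(red, ir, fs))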

  4. STS-26 Commander Hauck during egress training in JSC's MAIL Bldg 9A FFT

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, Commander Frederick H. Hauck, wearing launch and entry suit (LES) and launch and entry helmet (LEH), egresses the Full Fuselage Trainer (FFT) via the new crew escape system (CES) slide inflated at the open side hatch. Technicians stand on either side of the slide ready to help Hauck to his feet when he reaches the bottom. The emergency egress training was held in JSC's Shuttle Mockup and Integration Laboratory (MAIL) Bldg 9A. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (LESs) and checked out the crew escape system (CES) slide and other CES configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options. The photograph was taken by Keith Meyers of the NEW YORK TIMES.

  5. STS-26 crew trains in JSC full fuselage trainer (FFT) shuttle mockup

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, crewmembers are briefed during a training exercise in the Shuttle Mockup and Integration Laboratory Bldg 9A. Seated outside the open side hatch of the full fuselage trainer (FFT) (left to right) are Mission Specialist (MS) George D. Nelson, Commander Frederick H. Hauck, and Pilot Richard O. Covey. Astronaut Steven R. Nagel (left), positioned in the open side hatch, briefs the crew on the pole escape system as he demonstrates some related equipment. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (launch and entry suits (LESs)) and checked out crew escape system (CES) configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options. The photograph was taken by Keith Meyers of the NEW YORK TIMES.

  6. STS-26 crew trains in JSC full fuselage trainer (FFT) shuttle mockup

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, crewmembers are briefed during a training exercise in the Shuttle Mockup and Integration Laboratory Bldg 9A. Seated outside the open side hatch of the full fuselage trainer (FFT) (left to right) are Mission Specialist (MS) George D. Nelson, Commander Frederick H. Hauck, and Pilot Richard O. Covey. Looking on at right are Astronaut Office Chief Daniel C. Brandenstein (standing) and astronaut James P. Bagian. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (launch and entry suits (LESs)) and checked out crew escape system (CES) configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options.

  7. STS-29 MS Bagian during post landing egress exercises in JSC FFT mockup

    NASA Technical Reports Server (NTRS)

    1989-01-01

    STS-29 Discovery, Orbiter Vehicle (OV) 103, Mission Specialist (MS) James P. Bagian works his way down to 'safety' using a sky-genie device during post landing emergency egress exercises in JSC full fuselage trainer (FFT) located in the Mockup and Integration Laboratory Bldg 9A. Bagian, wearing orange launch and entry suit (LES) and launch and entry helmet (LEH), lowers himself using the sky genie after egressing from crew compartment overhead window W8. Fellow crewmembers and technicians watch Bagian's progress. Standing in navy blue LES is MS Robert C. Springer with MS James F. Buchli seated behind him on his right and Pilot John E. Blaha seated behind him on his left. Bagian is one of several astronauts who has been instrumental in developing the new crew escape system (CES) equipment (including parachute harness).

  8. Comparing precorrected-FFT and fast multipole algorithms for solving three-dimensional potential integral equations

    SciTech Connect

    White, J.; Phillips, J.R.; Korsmeyer, T.

    1994-12-31

    Mixed first- and second-kind surface integral equations with 1/r and ∂/∂n(1/r) kernels are generated by a variety of three-dimensional engineering problems. For such problems, Nystroem-type algorithms cannot be used directly, but an expansion for the unknown, rather than for the entire integrand, can be assumed and the product of the singular kernel and the unknown integrated analytically. Combining such an approach with a Galerkin or collocation scheme for computing the expansion coefficients is a general approach, but it generates dense matrix problems. Recently developed fast algorithms for solving these dense matrix problems have been based on multipole-accelerated iterative methods, in which the fast multipole algorithm is used to rapidly compute the matrix-vector products in a Krylov-subspace-based iterative method. Another approach to rapidly computing the dense matrix-vector products associated with discretized integral equations follows more along the lines of a multigrid algorithm, and involves projecting the surface unknowns onto a regular grid, computing on the grid, and finally interpolating the results from the regular grid back to the surfaces. Here, the authors describe a precorrected-FFT approach which can replace the fast multipole algorithm for accelerating the dense matrix-vector product associated with discretized potential integral equations. The precorrected-FFT method, described below, is an order n log(n) algorithm, and is asymptotically slower than the order n fast multipole algorithm. However, initial experimental results indicate the method may have a significant constant-factor advantage for a variety of engineering problems.

  9. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms.

  10. FFT and Wavelet analysis for the study of gravity wave activity over a modeled hurricane environment

    NASA Astrophysics Data System (ADS)

    Kuester, M. A.; Alexander, J.; Ray, E.

    2005-12-01

    Understanding gravity waves and their sources is important because these waves help drive global circulations in climate and weather forecasting models. Temperature fluctuations associated with gravity waves near the tropopause also affect cirrus cloud formation, which is important to the study of radiative forcing in the atmosphere. Deep convection is believed to be a major source for these waves, and hurricanes may be particularly long-lived and intense sources. Simulations of Hurricane Humberto have been studied using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) fifth-generation Mesoscale Model (MM5). Humberto is simulated at both tropical storm and hurricane stages. Information about gravity waves and their sources can be inferred from horizontal wind and temperature variances in the troposphere and lower stratosphere. Both Fast Fourier Transform (FFT) and wavelet analyses are employed to investigate wave properties and behavior in the lower stratosphere: FFT analysis gives an overall view of storm effects, while wavelet analysis gives a local picture of gravity wave activity. It is found that a hurricane can be a significant source of deep heating which actively triggers gravity waves from the hot-tower region of the storm's eye wall. Convectively generated gravity waves are observed in the lower stratosphere of this model with horizontal scales of 10-250 km, vertical scales around 5 km and intrinsic periods of approximately 20 minutes. Some specific characteristics of gravity waves found above the storm will be presented along with further discussion of the wave activity observed with the model. Deep convection over the oceans is thought to play a key role in atmospheric forcing via the creation of vertically propagating gravity waves, and hurricane-induced gravity waves may play a role in stratospheric forcing during the hurricane season.

  11. Architecture of thermal adaptation in an Exiguobacterium sibiricum strain isolated from 3 million year old permafrost: A genome and transcriptome approach

    PubMed Central

    Rodrigues, Debora F; Ivanova, Natalia; He, Zhili; Huebner, Marianne; Zhou, Jizhong; Tiedje, James M

    2008-01-01

    Background Many microorganisms have a wide temperature growth range and versatility to tolerate large thermal fluctuations in diverse environments, however not many have been fully explored over their entire growth temperature range through a holistic view of its physiology, genome, and transcriptome. We used Exiguobacterium sibiricum strain 255-15, a psychrotrophic bacterium from 3 million year old Siberian permafrost that grows from -5°C to 39°C to study its thermal adaptation. Results The E. sibiricum genome has one chromosome and two small plasmids with a total of 3,015 protein-encoding genes (CDS), and a GC content of 47.7%. The genome and transcriptome analysis along with the organism's known physiology was used to better understand its thermal adaptation. A total of 27%, 3.2%, and 5.2% of E. sibiricum CDS spotted on the DNA microarray detected differentially expressed genes in cells grown at -2.5°C, 10°C, and 39°C, respectively, when compared to cells grown at 28°C. The hypothetical and unknown genes represented 10.6%, 0.89%, and 2.3% of the CDS differentially expressed when grown at -2.5°C, 10°C, and 39°C versus 28°C, respectively. Conclusion The results show that E. sibiricum is constitutively adapted to cold temperatures stressful to mesophiles since little differential gene expression was observed between 4°C and 28°C, but at the extremities of its Arrhenius growth profile, namely -2.5°C and 39°C, several physiological and metabolic adaptations associated with stress responses were observed. PMID:19019206

  12. Polymorphous Computing Architecture (PCA) Kernel Benchmark Measurements on the MIT Raw Microprocessor

    DTIC Science & Technology

    2006-06-14

    The report's figures include the parallel time-domain FIR filter operations, the usage of a 1 x 1 Raw chip for performing the FFT, and the FFT butterfly data flow for the even and odd phases of the input stream, with registers used to store intermediate butterfly results. The outline of an FFT of length N is given as: for (log N phases) { for (N/4 butterflies) { Butterfly() } }.
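
    The outline above maps onto the familiar iterative radix-2 FFT. A hedged single-core reference version is sketched below with N/2 butterflies per phase; the report's count of N/4 butterflies per phase presumably reflects its particular partitioning across Raw tiles, which this sketch does not model.

        # Reference radix-2 decimation-in-time FFT: log2(N) phases of in-place
        # butterflies after a bit-reversal permutation (N/2 butterflies per phase here).
        import numpy as np

        def fft_radix2(x):
            x = np.asarray(x, dtype=complex).copy()
            n = len(x)                                     # n must be a power of two
            j = 0                                          # bit-reversal permutation
            for i in range(1, n):
                bit = n >> 1
                while j & bit:
                    j ^= bit
                    bit >>= 1
                j |= bit
                if i < j:
                    x[i], x[j] = x[j], x[i]
            m = 2                                          # butterfly phases
            while m <= n:
                w_m = np.exp(-2j * np.pi / m)
                for k in range(0, n, m):
                    w = 1.0 + 0j
                    for s in range(m // 2):
                        u, v = x[k + s], w * x[k + s + m // 2]
                        x[k + s], x[k + s + m // 2] = u + v, u - v
                        w *= w_m
                m *= 2
            return x

        assert np.allclose(fft_radix2(np.arange(8.0)), np.fft.fft(np.arange(8.0)))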

  13. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.

  14. Space Telecommunications Radio Architecture (STRS)

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

    A software-defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development, such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software-defined radio hardware and software architecture to support NASA missions and to determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software-defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software-defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk describes NASA's current effort to investigate SDR applications for space missions and gives a brief overview of a candidate architecture under consideration for space-based platforms.

  15. A New Blind Adaptive Array Antenna Based on CMA Criteria for M-Ary/SS Signals Suitable for Software Defined Radio Architecture

    NASA Astrophysics Data System (ADS)

    Kozuma, Miho; Sasaki, Atsushi; Kamiya, Yukihiro; Fujii, Takeo; Umebayashi, Kenta; Suzuki, Yasuo

    M-ary/SS is a version of Direct Sequence/Spread Spectrum (DS/SS) that aims to improve spectral efficiency by employing orthogonal codes. However, due to the auto-correlation properties of the orthogonal codes, it is impossible to detect the symbol timing by observing the correlator outputs; therefore, a preamble has conventionally been inserted in M-ary/SS signals. In this paper, we propose a new blind adaptive array antenna for M-ary/SS systems that combines signals over the space axis without any preamble, which is an innovative approach for M-ary/SS. The performance is investigated through computer simulations.

  16. The Unified-FFT Method for Fast Solution of Integral Equations as Applied to Shielded-Domain Electromagnetic

    NASA Astrophysics Data System (ADS)

    Rautio, Brian

    Electromagnetic (EM) solvers are widely used within computer-aided design (CAD) to improve and ensure the success of circuit designs. Unfortunately, due to the complexity of Maxwell's equations, they are often computationally expensive. While considerable progress has been made in the realm of speed-enhanced EM solvers, these fast solvers generally achieve their results through methods that introduce additional error components by way of geometric approximations, sparse-matrix approximations, multilevel decomposition of interactions, and more. This work introduces a new method, Unified-FFT (UFFT). A derivative of the method of moments, UFFT scales as O(N log N) and achieves fast analysis by the unique combination of FFT-enhanced matrix fill operations (MFO) with FFT-enhanced matrix solve operations (MSO). In this work, two versions of UFFT are developed, UFFT-Precorrected (UFFT-P) and UFFT-Grid Totalizing (UFFT-GT). UFFT-P uses precorrected FFT for MSO and allows the use of basis functions that do not conform to a regular grid. UFFT-GT uses conjugate gradient FFT for MSO and features the capability of reducing the error of the solution down to machine precision. The main contribution of UFFT-P is a fast solver which utilizes the FFT for both MFO and MSO. It is demonstrated in this work not only to provide simulation results for large problems considerably faster than state-of-the-art commercial tools, but also to be capable of simulating geometries which are too complex for conventional simulation. In UFFT-P these benefits come at the expense of a minor penalty to accuracy. UFFT-GT contains further contributions, as it demonstrates that such a fast solver can be accurate to numerical precision as compared to a full, direct analysis. It is shown to provide even more algorithmic efficiency and faster performance than UFFT-P. UFFT-GT makes an additional contribution in that it is developed not only for planar geometries, but also for the case of multilayered dielectrics and

  17. [The architectural design of psychiatric care buildings].

    PubMed

    Dunet, Lionel

    2012-01-01

    The architectural design of psychiatric care buildings. In addition to certain "classic" creations, the Dunet architectural office has designed several units for difficult patients as well as a specially adapted hospitalisation unit. These creations which are demanding in terms of the organisation of care require close consultation with the nursing teams. Testimony of an architect who is particularly engaged in the universe of psychiatry.

  18. Architectural Tops

    ERIC Educational Resources Information Center

    Mahoney, Ellen

    2010-01-01

    The development of the skyscraper is an American story that combines architectural history, economic power, and technological achievement. Each city in the United States can be identified by the profile of its buildings. The design of the tops of skyscrapers was the inspiration for the students in the author's high-school ceramic class to develop…

  19. Architectural Drafting.

    ERIC Educational Resources Information Center

    Davis, Ronald; Yancey, Bruce

    Designed to be used as a supplement to a two-book course in basic drafting, these instructional materials consisting of 14 units cover the process of drawing all working drawings necessary for residential buildings. The following topics are covered in the individual units: introduction to architectural drafting, lettering and tools, site…

  20. Box Architecture.

    ERIC Educational Resources Information Center

    Ham, Jan

    1998-01-01

    Project offers grades 3-8 students hands-on design practice creating built environments to solve a society-based architectural problem. Students plan buildings, draw floor plans, and make scale models of the structures that are then used in related interdisciplinary activities. (Author)

  1. Architectural Illusion.

    ERIC Educational Resources Information Center

    Doornek, Richard R.

    1990-01-01

    Presents a lesson plan developed around the work of architectural muralist Richard Haas. Discusses the significance of mural painting and gives key concepts for the lesson. Lists class activities for the elementary and secondary grades. Provides a photograph of the Haas mural on the Fountainbleau Hilton Hotel, 1986. (GG)

  2. Architecture? Absolutely!

    ERIC Educational Resources Information Center

    Progressive Architecture, 1973

    1973-01-01

    By designing processes to translate social needs into physical terms, the Urban Center at the University of Louisville is turning out its own unique brand of architecture -- one that produces no buildings but that has a real effect on the future of the physical environment. (Author)

  3. Student Assessment in Architecture Schools.

    ERIC Educational Resources Information Center

    Dinham, Sarah M.

    Definitions, issues, and concerns in efforts to document the quality and outcomes of undergraduate education are reviewed, and the University of Arizona assessment model is summarized to illustrate a comprehensive assessment plan suitable for a research university. The Arizona model is adapted to architectural education, and the special…

  4. Energy Conservation through Architectural Design

    ERIC Educational Resources Information Center

    Thomson, Robert C., Jr.

    1977-01-01

    Describes a teaching unit designed to create in students an awareness of and an appreciation for the possibilities for energy conservation as they relate to architecture. It is noted that the unit can be adapted for use in many industrial programs and with different teaching methods due to the variety of activities that can be used. (Editor/TA)

  5. Changing School Architecture in Zurich

    ERIC Educational Resources Information Center

    Ziegler, Mark; Kurz, Daniel

    2008-01-01

    Changes in the way education is delivered has contributed to the evolution of school architecture in Zurich, Switzerland. The City of Zurich has revised its guidelines for designing school buildings, both new and old. Adapting older buildings to today's needs presents a particular challenge. The authors explain what makes up a good school building…

  6. Textural analyses of carbon fiber materials by 2D-FFT of complex images obtained by high frequency eddy current imaging (HF-ECI)

    NASA Astrophysics Data System (ADS)

    Schulze, Martin H.; Heuer, Henning

    2012-04-01

    Carbon fiber based materials are used in many lightweight applications in aeronautical, automotive, machine and civil engineering. With the increasing automation of the production process of CFRP laminates, a manual optical inspection of each resin transfer molding (RTM) layer is not practicable. Because they are limited to surface inspection, optical systems cannot observe the quality parameters of multilayer, three-dimensional materials. Imaging eddy-current (EC) NDT is the only suitable inspection method for non-resin materials in the textile state that allows the inspection of surface and hidden layers in parallel. The HF-ECI method has the capability to measure layer displacements (misaligned angle orientations) and gap sizes in a multilayer carbon fiber structure. The EC technique uses the variation of the electrical conductivity of carbon-based materials to obtain material properties. Besides the determination of textural parameters like layer orientation and gap sizes between rovings, the method can detect foreign polymer particles and fuzzy balls, and can visualize undulations. For all of these typical parameters, an imaging classification process chain based on a high-resolution directional EC-imaging device named EddyCus® MPECS and a 2D-FFT with adapted preprocessing algorithms is developed.
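
    One textural quantity mentioned above, the dominant layer or fiber orientation, can be estimated from the angular distribution of 2D-FFT energy. The sketch below demonstrates the principle on synthetic stripes standing in for an EC image; it is not the EddyCus® processing chain.

        # Dominant spatial-frequency orientation from the angular distribution of
        # 2D-FFT energy, demonstrated on synthetic stripes standing in for an EC image.
        import numpy as np

        def dominant_orientation(img, n_bins=180):
            spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))**2
            ny, nx = img.shape
            yy, xx = np.indices(img.shape)
            ang = np.rad2deg(np.arctan2(yy - ny // 2, xx - nx // 2)) % 180.0
            idx = (ang.astype(int) % n_bins).ravel()
            hist = np.bincount(idx, weights=spec.ravel(), minlength=n_bins)
            return float(np.argmax(hist))                  # angle of the spectral peak (degrees)

        yy, xx = np.indices((256, 256))
        theta = np.deg2rad(30.0)
        stripes = np.sin(2 * np.pi * (xx * np.cos(theta) + yy * np.sin(theta)) / 16.0)
        print(dominant_orientation(stripes))               # ~30 (normal to the stripe lines)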

  7. Shaping plant architecture

    PubMed Central

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs

  8. Comparison between digital Doppler filtering processes applied to radar signals

    NASA Astrophysics Data System (ADS)

    Desodt, G.

    1983-10-01

    Two families of Doppler processes based on FFT and FIR filters, respectively, are compared in terms of hardware complexity and performance. It is shown that FIR filter banks are characterized by better performance than FFT filter banks. For the same number of pulses, the FIR processor permits a better clutter rejection and greater bandwidth than the FFT one. Also, an FIR-based bank has a much simpler and more adaptable architecture than an FFT-based bank.
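
    To make the comparison concrete, the following sketch contrasts the two structures on a single range cell: an FFT bank, where every Doppler filter is a windowed DFT bin, and an FIR bank, where each filter has its own independently designed coefficient row. The signal values, window choices, and toy clutter model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fft_doppler_bank(slow_time, window=None):
    """Doppler spectrum of one range cell: N pulses in, N Doppler bins out via FFT."""
    x = np.asarray(slow_time, dtype=complex)
    if window is not None:
        x = x * window
    return np.fft.fft(x)

def fir_doppler_bank(slow_time, coeffs):
    """FIR filter bank: each row of `coeffs` is one individually designed Doppler
    filter (e.g. with a deeper clutter notch); one complex output per filter."""
    x = np.asarray(slow_time, dtype=complex)
    return coeffs @ x

# toy example: 16 pulses containing strong zero-Doppler clutter plus a target
n = 16
t = np.arange(n)
pulses = 50.0 + np.exp(2j * np.pi * 0.2 * t)          # clutter at DC, target at 0.2
print(np.abs(fft_doppler_bank(pulses, np.hanning(n))))
# here the FIR bank is just a windowed DFT for brevity; in practice each row
# would be designed independently, which is the flexibility the paper exploits
coeffs = np.array([np.exp(-2j * np.pi * k * t / n) * np.hamming(n) for k in range(n)])
print(np.abs(fir_doppler_bank(pulses, coeffs)))
```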

  9. A novel FFT/IFFT based peak-to-average power reduction method for OFDM communication systems using tone reservation

    NASA Astrophysics Data System (ADS)

    Besong, Samuel Oru; Yu, Xiaoyou; Li, Bin; Hou, Weibing; Wang, Xiaochun

    2011-10-01

    One of the main drawbacks of OFDM systems is the high peak-to-average power ratio (PAPR), which can limit transmission efficiency and the efficient use of the HPA. In this paper we present a modified tone reservation scheme for PAPR reduction that uses FFT iterations to generate the tones. In this scheme, the reserved tones are designed both to cancel peaks and to slightly increase the average power, inducing a better PAPR reduction. The tones are generated by means of two FFT operations, and the process is iterated when necessary to achieve better PAPR reductions. The scheme achieves a significant PAPR reduction of at least 4.6 dB when about 4% of the carriers are used as reserved tones, with even fewer iterations, when simulated in an OFDM system.
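
    The sketch below shows one common way such an FFT/IFFT tone-reservation loop can be organized: clip the time-domain signal, transform the clipping noise back to the frequency domain, and keep only its reserved-tone part as the peak-cancelling signal. This is a generic clipping-based variant under assumed parameters (clip ratio, iteration count, roughly 4% reserved tones), not the authors' exact algorithm.

```python
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def tone_reservation_clip(data_sym, reserved_idx, n_iter=4, clip_ratio=1.4):
    """Generic clipping-based tone reservation (sketch, not the paper's exact scheme).
    data_sym: frequency-domain symbols; reserved tones are zeroed and then used
    to carry the peak-cancelling signal."""
    N = len(data_sym)
    X = np.array(data_sym, dtype=complex)
    X[reserved_idx] = 0.0                      # reserved tones carry no data
    C = np.zeros(N, dtype=complex)             # peak-cancelling signal (reserved tones only)
    for _ in range(n_iter):
        x = np.fft.ifft(X + C)                 # IFFT: to the time domain
        target = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
        mag = np.abs(x)
        scale = np.where(mag > target, target / np.maximum(mag, 1e-12), 1.0)
        err = np.fft.fft(x * scale - x)        # FFT: clipping noise back to frequency
        C[reserved_idx] += err[reserved_idx]   # keep only its reserved-tone part
    return X + C

# toy example: 256-tone OFDM symbol with ~4% reserved tones
rng = np.random.default_rng(0)
N = 256
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
reserved = rng.choice(N, size=10, replace=False)
before = papr_db(np.fft.ifft(qpsk))
after = papr_db(np.fft.ifft(tone_reservation_clip(qpsk, reserved)))
print(f"PAPR before {before:.2f} dB, after {after:.2f} dB")
```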

  10. A direct phasing method based on the origin-free modulus sum function and the FFT algorithm. XII.

    PubMed

    Rius, Jordi; Crespi, Anna; Torrelles, Xavier

    2007-03-01

    An alternative way of refining phases with the origin-free modulus sum function S is shown that, instead of applying the tangent formula in sequential mode [Rius (1993). Acta Cryst. A49, 406-409], applies it in parallel mode with the help of the fast Fourier transform (FFT) algorithm. The test calculations performed on intensity data of small crystal structures at atomic resolution prove the convergence and hence the viability of the procedure. This new procedure called S-FFT is valid for all space groups and especially competitive for low-symmetry ones. It works well when the charge-density peaks in the crystal structure have the same sign, i.e. either positive or negative.

  11. The use of the FFT for the efficient solution of the problem of electromagnetic scattering by a body of revolution

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Mittra, Raj

    1990-01-01

    The enhancement of the computational efficiency of the body of revolution (BOR) scattering problem is discussed with a view to making it practical for solving large-body problems. The problem of EM scattering by a perfectly conducting BOR is considered, although the methods can be extended to multilayered dielectric bodies as well. Typically, the generation of the elements of the moment method matrix consumes a major portion of the computational time. It is shown how this time can be significantly reduced by manipulating the expression for the matrix elements to permit efficient FFT computation. A technique for extracting the singularity of the Green function that appears within the integrands of the matrix diagonal is also presented, further enhancing the usefulness of the FFT. The computation time can thus be improved by at least an order of magnitude for large bodies in comparison to that for previous algorithms.

  12. The robustness of subcarrier-index modulation in 16-QAM CO-OFDM system with 1024-point FFT.

    PubMed

    Jan, Omar H A; Sandel, David; Puntsri, Kidsanapong; Al-Bermani, Ali; El-Darawy, Mohamed; Noé, Reinhold

    2012-12-17

    We present numerical simulations of the robustness of subcarrier index modulation (SIM) OFDM against laser phase noise. The feasibility of using DFB lasers with SIM-OFDM in a 16-QAM CO-OFDM system with a 1024-point FFT has been verified. Although SIM-OFDM has lower spectral efficiency than the conventional CO-OFDM system, it is a good candidate for a 16-QAM CO-OFDM system with a 1024-point FFT that uses a DFB laser of 1 MHz linewidth. In addition, we show the tolerance of SIM-OFDM for the mitigation of fiber nonlinearities in a long-haul CO-OFDM system. The simulation results show a significant penalty reduction, essentially of the penalty due to SPM.

  13. Grid-free 3D multiple spot generation with an efficient single-plane FFT-based algorithm.

    PubMed

    Engström, David; Frank, Anders; Backsten, Jan; Goksör, Mattias; Bengtsson, Jörgen

    2009-06-08

    Algorithms based on the fast Fourier transform (FFT) for the design of spot-generating computer generated holograms (CGHs) typically only make use of a few sample positions in the propagated field. We have developed a new design method that much better utilizes the information-carrying capacity of the sampled propagated field. In this way design tasks which are difficult to accomplish with conventional FFT-based design methods, such as spot positioning at non-sample positions and/or spot positioning in 3D, are solved as easily as any standard design task using a conventional method. The new design method is based on a projection optimization, similar to that in the commonly used Gerchberg-Saxton algorithm, and the vastly improved design freedom comes at virtually no extra computational cost compared to the conventional design. Several different design tasks were demonstrated experimentally with a liquid crystal spatial light modulator, showing highly accurate creation of the desired field distributions.
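
    For reference, the conventional FFT-based projection design that the paper improves upon can be sketched as a Gerchberg-Saxton style loop in which spots are restricted to FFT sample positions, which is exactly the limitation the new method removes. The grid size, spot positions, and iteration count below are arbitrary illustrative choices.

```python
import numpy as np

def gs_spot_cgh(shape, spot_pixels, n_iter=50, seed=0):
    """Conventional Gerchberg-Saxton design of a phase-only CGH that places
    spots at the given FFT sample positions (row, col). Sketch only."""
    rng = np.random.default_rng(seed)
    target = np.zeros(shape)
    for r, c in spot_pixels:
        target[r, c] = 1.0
    illum = np.ones(shape)                                # uniform illumination amplitude
    phase = rng.uniform(0, 2 * np.pi, shape)              # random starting phase
    for _ in range(n_iter):
        far = np.fft.fft2(illum * np.exp(1j * phase))     # propagate to the focal plane
        far = target * np.exp(1j * np.angle(far))         # keep phase, impose spot amplitudes
        near = np.fft.ifft2(far)                          # propagate back
        phase = np.angle(near)                            # keep phase, impose illumination amplitude
    return phase

holo = gs_spot_cgh((256, 256), [(32, 64), (100, 200), (180, 50)])
```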

  14. Spectral analysis based on fast Fourier transformation (FFT) of surveillance data: the case of scarlet fever in China.

    PubMed

    Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X

    2014-03-01

    Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations for time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamic of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validities and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable for the study of oscillating diseases.
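
    A minimal version of such an FFT-based spectral analysis is sketched below: detrend the incidence series, compute the periodogram, and read off the dominant periods. The monthly toy data and the helper name are illustrative assumptions, not the study's actual surveillance data or code.

```python
import numpy as np

def dominant_periods(series, dt=1.0, top=3):
    """Periodogram via FFT of a detrended incidence series; returns the `top`
    strongest periods (in the same time units as dt) with their spectral power."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    order = np.argsort(power[1:])[::-1] + 1        # skip the DC bin
    return [(1.0 / freqs[i], power[i]) for i in order[:top]]

# toy example: 7 years of monthly counts with annual and semi-annual cycles
t = np.arange(84)
counts = 100 + 30 * np.sin(2 * np.pi * t / 12) + 10 * np.sin(2 * np.pi * t / 6)
print(dominant_periods(counts, dt=1.0))   # expect periods near 12 and 6 months
```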

  15. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  16. Detection of apnea using a short-window FFT technique and an artificial neural network

    NASA Astrophysics Data System (ADS)

    Waldemark, Karina E.; Agehed, Kenneth I.; Lindblad, Thomas; Waldemark, Joakim T. A.

    1998-03-01

    Sleep apnea is characterized by frequent prolonged interruptions of breathing during sleep. This syndrome causes severe sleep disorders and is often responsible for the development of other conditions such as heart problems, high blood pressure and daytime fatigue. After diagnosis, sleep apnea is often successfully treated by applying continuous positive airway pressure (CPAP) to the mouth and nose. Although effective, the CPAP equipment takes up a lot of space and the connected mask causes considerable inconvenience for patients. This has raised interest in developing new techniques for the treatment of sleep apnea syndrome. Several studies have indicated that electrical stimulation of the hypoglossal nerve and the muscle in the tongue may be a useful method for treating patients with severe sleep apnea. In order to successfully prevent the occurrence of apnea, it is necessary to have a technique for early and fast on-line detection or prediction of apnea events. This paper suggests using measurements of respiratory airflow (mouth temperature). The signal processing for this task uses a short-window FFT technique and an artificial back-propagation neural net to model or predict the occurrence of apneas. The results show that early detection of respiratory interruption is possible and that the delay time for this is small.
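
    The short-window FFT step can be sketched as follows: slide a window over the airflow signal and summarize each window by a few spectral features that a back-propagation network could then classify. The window length, hop size, and 0.1-0.5 Hz respiration band are assumptions for illustration only.

```python
import numpy as np

def short_window_fft_features(airflow, fs, win_s=10.0, hop_s=5.0):
    """Short-window FFT features for an airflow (mouth-temperature) signal.
    Window length and the 0.1-0.5 Hz respiration band are illustrative
    assumptions; the feature vectors would feed a neural-net classifier."""
    win = int(win_s * fs)
    hop = int(hop_s * fs)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    taper = np.hanning(win)
    feats = []
    for start in range(0, len(airflow) - win + 1, hop):
        seg = airflow[start:start + win] * taper
        spec = np.abs(np.fft.rfft(seg))
        total = spec.sum() + 1e-12
        feats.append([spec[band].sum() / total,        # fraction of power in the breathing band
                      freqs[np.argmax(spec)]])         # dominant frequency of the window
    return np.asarray(feats)

# toy usage: normal 0.3 Hz breathing with a simulated 30 s pause
fs = 10.0
t = np.arange(0, 120, 1 / fs)
airflow = np.sin(2 * np.pi * 0.3 * t)
airflow[600:900] = 0.05 * np.random.default_rng(5).standard_normal(300)
print(short_window_fft_features(airflow, fs)[:, 0])    # band-power fraction drops during the pause
```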

  17. AFM tip characterization by using FFT filtered images of step structures.

    PubMed

    Yan, Yongda; Xue, Bo; Hu, Zhenjiang; Zhao, Xuesen

    2016-01-01

    The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results taken using an AFM. In this study, a geometrical model created by scanning a step structure with an AFM tip was developed. The tip was assumed to have a hemispherical cone shape. Profiles simulated by tips with different scanning radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of simulated profiles, it was found that low-frequency harmonics were more susceptible, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods.

  18. Doppler picture velocimetry applied to hypersonics: automated DPV fringe pattern analysis using the FFT method

    NASA Astrophysics Data System (ADS)

    Pichler, Alexander; George, Alfred; Seiler, Friedrich; Srulijes, Julio; Sauerwein, Berthold

    2009-10-01

    Doppler picture velocimetry (DPV) is a tool for visualizing and measuring the flow velocity distribution of tracer particles in a laser light sheet. A frequency sensitive Michelson interferometer, tuned for detecting the velocity distribution by the Doppler effect, visualizes the velocity information of tracer particles crossing an illuminating laser light sheet as interference fringe patterns. Much effort has gone into finding the best way to evaluate these DPV patterns, in order to obtain the frequency distribution and, by applying the Doppler formula, the velocity profile of the tracers. The first processing method, developed in 1982, relied on manual processing of the pictures by the user, owing to the unavailability of suitable high-performance picture processing algorithms. This drawback led to DPV being considered a rather time-consuming measurement technique with limited accuracy compared to existing commercial velocity measurement systems (e.g. PIV). This is no longer the state of the art: the new DPV analysis software presented in this paper allows automated processing of the interference fringe samples obtained from two images, a reference picture without frequency shift and a Doppler picture containing the frequency shift, using single-beam velocimetry. Based on the fast Fourier transform (FFT), the presented algorithm determines the corresponding velocity profile (in pseudo colours) within only a few seconds on a standard personal computer without user intervention.

  19. A FFT Method for the Quasiclassical Selection of Initial Ro-Vibrational States of Triatomic Molecules

    NASA Technical Reports Server (NTRS)

    Eaker, Charles W.; Schwenke, David W.; Langhoff, Stephen R. (Technical Monitor)

    1995-01-01

    This paper describes the use of an exact fast Fourier transform (FFT) method to prepare specified vibrational-rotational states of triatomic molecules. The method determines the Fourier coefficients needed to describe the coordinates and momenta of a vibrating-rotating triatomic molecule. Once the Fourier coefficients of a particular state are determined, it is possible to easily generate as many random sets of initial cartesian coordinates and momenta as desired. All the members of each set will correspond to the particular vibrational-rotational state selected. For example, in the case of the ground vibrational state of a non-rotating water molecule, the calculated actions of 100 sets of initial conditions produced actions within 0.001 h(bar) of the specified quantization values and energies within 5 cm(sup -1) of the semiclassical eigenvalue. The numerical procedure is straightforward for states in which all the fundamental frequencies are independent. However for states for which the fundamental frequencies become commensurate (resonance states), there are additional complications. In these cases it is necessary to determine a new set of "fundamental" frequencies and to modify the quantization conditions. Once these adjustments are made, good results are obtained for resonance states. The major problems are in labelling the large number of Fourier coefficients and the presence of regions of chaotic motion. Results are presented for the vibrational states of H2O and HCN and the ro-vibrational states of H2O.

  20. Arabidopsis pab1, a mutant with reduced anthocyanins in immature seeds from banyuls, harbors a mutation in the MATE transporter FFT.

    PubMed

    Kitamura, Satoshi; Oono, Yutaka; Narumi, Issay

    2016-01-01

    Forward genetics approaches have helped elucidate the anthocyanin biosynthetic pathway in plants. Here, we used the Arabidopsis banyuls (ban) mutant, which accumulates anthocyanins, instead of colorless proanthocyanidin precursors, in immature seeds. In contrast to standard screens for mutants lacking anthocyanins in leaves/stems, we mutagenized ban plants and screened for mutants showing differences in pigmentation of immature seeds. The pale banyuls1 (pab1) mutation caused reduced anthocyanin pigmentation in immature seeds compared with ban. Immature pab1 ban seeds contained less anthocyanins and flavonols than ban, but showed normal expression of anthocyanin biosynthetic genes. In contrast to pab1, introduction of a flavonol-less mutation into ban did not produce paler immature seeds. Map-based cloning showed that two independent pab1 alleles disrupted the MATE-type transporter gene FFT/DTX35. Complementation of pab1 with FFT confirmed that mutation in FFT causes the pab1 phenotype. During development, FFT promoter activity was detected in the seed-coat layers that accumulate flavonoids. Anthocyanins accumulate in the vacuole and FFT fused to GFP mainly localized in the vacuolar membrane. Heterologous expression of grapevine MATE-type anthocyanin transporter gene partially complemented the pab1 phenotype. These results suggest that FFT acts at the vacuolar membrane in anthocyanin accumulation in the Arabidopsis seed coat, and that our screening strategy can reveal anthocyanin-related genes that have not been found by standard screening.

  1. Exploration of Potential Future Fleet Architectures

    DTIC Science & Technology

    2005-07-01

    alternative architectures are those espoused by the OFT sponsoring office: flexibility, adaptability, agility, speed, and information dominance through...including naval forces, which we used. The OFT advocates flexibility, adaptability, agility, speed, and information dominance through networking...challenges and transnational threats. In future conflicts, the Navy has plans to expand strike power, realize information dominance, and transform methods

  2. Architecture for autonomy

    NASA Astrophysics Data System (ADS)

    Broten, Gregory S.; Monckton, Simon P.; Collier, Jack; Giesbrecht, Jared

    2006-05-01

    In 2002 Defence R&D Canada changed research direction from pure tele-operated land vehicles to general autonomy for land, air, and sea craft. The unique constraints of the military environment coupled with the complexity of autonomous systems drove DRDC to carefully plan a research and development infrastructure that would provide state of the art tools without restricting research scope. DRDC's long term objectives for its autonomy program address disparate unmanned ground vehicle (UGV), unattended ground sensor (UGS), air (UAV), and subsea and surface (UUV and USV) vehicles operating together with minimal human oversight. Individually, these systems will range in complexity from simple reconnaissance mini-UAVs streaming video to sophisticated autonomous combat UGVs exploiting embedded and remote sensing. Together, these systems can provide low risk, long endurance, battlefield services, assuming they can communicate and cooperate with manned and unmanned systems. A key enabling technology for this new research is a software architecture capable of meeting both DRDC's current and future requirements. DRDC built upon recent advances in the computing science field while developing its software architecture known as the Architecture for Autonomy (AFA). Although a well established practice in computing science, frameworks have only recently entered common use by unmanned vehicles. For industry and government, the complexity, cost, and time to re-implement stable systems often exceeds the perceived benefits of adopting a modern software infrastructure. Thus, most persevere with legacy software, adapting and modifying software when and wherever possible or necessary -- adopting strategic software frameworks only when no justifiable legacy exists. Conversely, academic programs with short one or two year projects frequently exploit strategic software frameworks but with little enduring impact. The open-source movement radically changes this picture. Academic frameworks

  3. Multiprocessor Adaptive Control Of A Dynamic System

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Hyland, David C.

    1995-01-01

    Architecture for fully autonomous digital electronic control system developed for use in identification and adaptive control of dynamic system. Architecture modular and hierarchical. Combines relatively simple, standardized processing units into complex parallel-processing subsystems. Although architecture based on neural-network concept, processing units themselves not neural networks; processing units implemented by programming of currently available microprocessors.

  4. Optical Neural Network Classifier Architectures

    DTIC Science & Technology

    1998-04-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and...function neural network based on a previously demonstrated binary-input version. The greyscale-input capability broadens the range of applications for...a reduced feature set of multiwavelet images to improve training times and discrimination capability of the neural network. The design uses a joint

  5. An Efficient Circulant MIMO Equalizer for CDMA Downlink: Algorithm and VLSI Architecture

    NASA Astrophysics Data System (ADS)

    Guo, Yuanbin; Zhang, Jianzhong(Charlie); McCain, Dennis; Cavallaro, Joseph R.

    2006-12-01

    We present an efficient circulant approximation-based MIMO equalizer architecture for the CDMA downlink. This reduces the direct matrix inverse (DMI) of size [equation not shown] with [equation not shown] complexity to some FFT operations with [equation not shown] complexity and the inverse of some [equation not shown] submatrices. We then propose parallel and pipelined VLSI architectures with Hermitian optimization and reduced-state FFT for further complexity optimization. Generic VLSI architectures are derived for the [equation not shown] high-order receiver from partitioned [equation not shown] submatrices. This leads to more parallel VLSI design with [equation not shown] further complexity reduction. Comparative study with both the conjugate-gradient and DMI algorithms shows very promising performance/complexity tradeoff. VLSI design space in terms of area/time efficiency is explored extensively for layered parallelism and pipelining with a Catapult C high-level-synthesis methodology.
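
    The core of any circulant-approximation equalizer is that a circulant matrix is diagonalized by the DFT, so applying or inverting it costs only FFTs. A minimal sketch of that trick, with a small correctness check, is shown below; it is not the paper's full MIMO equalizer.

```python
import numpy as np

def solve_circulant(first_col, b):
    """Solve C x = b for a circulant matrix C defined by its first column,
    using the fact that C is diagonalized by the DFT (eigenvalues = fft(first_col)).
    This is the core trick behind circulant-approximation equalizers (sketch only)."""
    eig = np.fft.fft(first_col)
    return np.fft.ifft(np.fft.fft(b) / eig)

# check against a dense construction on a small example
rng = np.random.default_rng(1)
c = rng.standard_normal(8) + 1j * rng.standard_normal(8)
C = np.array([np.roll(c, k) for k in range(8)]).T      # dense circulant with first column c
b = rng.standard_normal(8) + 1j * rng.standard_normal(8)
assert np.allclose(C @ solve_circulant(c, b), b)
```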

  6. Evaluating architectural design review.

    PubMed

    Stamps, A E

    2000-02-01

    Architectural design review is a method of environmental management which is widely used by governmental agencies in both the United States and in Great Britain. Because design review is a governmental function, there is a major need to assess how well it works. Research covering over 29,000 respondents and 5,600 environmental scenes suggests that scientific protocols can be adapted to provide an accurate and efficient design review protocol. The protocol uses preference experiments to find the standardized mean difference, d, between a proposed project and a random sample of existing projects. Values of d will indicate whether the project will increase, maintain, or diminish the aesthetic merit of the sampled area. The protocol is illustrated by applying it to the case of design review for a single residence. Implications for further implementations are discussed.

  7. Bounds on the minimum number of data transfers in WFTA and FFT programs. [Winograd Fourier Transform Algorithms and Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Nawab, H.; Mcclellan, J. H.

    1979-01-01

    Bounds on the minimum number of data transfers (i.e., loads, stores, copies) required by WFTA and FFT programs are presented. The analysis is applicable to those general-purpose computers with M general processor registers, where M is equal to or greater than 4 but much less than the transform length. It is shown that the 1008-point WFTA requires about 21 percent more data transfers than the 1024-point radix-4 FFT; on the other hand, the 120-point WFTA has about the same number of data transfers as the mixed radix (4 x 4 x 4 x 2) version of the 128-point FFT and 22 percent fewer than the radix-2 version. Finally, comparisons of the 'total' program execution times (multiplications, additions, and data transfers, but not indexing or permutations) are presented.

  8. A comparative study on low-memory iterative solvers for FFT-based homogenization of periodic media

    NASA Astrophysics Data System (ADS)

    Mishra, Nachiketa; Vondřejc, Jaroslav; Zeman, Jan

    2016-09-01

    In this paper, we assess the performance of four iterative algorithms for solving non-symmetric rank-deficient linear systems arising in the FFT-based homogenization of heterogeneous materials defined by digital images. Our framework is based on the Fourier-Galerkin method with exact and approximate integrations that has recently been shown to generalize the Lippmann-Schwinger setting of the original work by Moulinec and Suquet from 1994. It follows from this variational format that the ensuing system of linear equations can be solved by general-purpose iterative algorithms for symmetric positive-definite systems, such as the Richardson, the Conjugate gradient, and the Chebyshev algorithms, that are compared here to the Eyre-Milton scheme - the most efficient specialized method currently available. Our numerical experiments, carried out for two-dimensional elliptic problems, reveal that the Conjugate gradient algorithm is the most efficient option, while the Eyre-Milton method performs comparably to the Chebyshev semi-iteration. The Richardson algorithm, equivalent to the still widely used original Moulinec-Suquet solver, exhibits the slowest convergence. Besides this, we hope that our study highlights the potential of the well-established techniques of numerical linear algebra to further increase the efficiency of FFT-based homogenization methods.

  9. Investigation of hidden periodic structures on SEM images of opal-like materials using FFT and IFFT.

    PubMed

    Stephant, Nicolas; Rondeau, Benjamin; Gauthier, Jean-Pierre; Cody, Jason A; Fritsch, Emmanuel

    2014-01-01

    We have developed a method to use fast Fourier transformation (FFT) and inverse fast Fourier transformation (IFFT) to investigate hidden periodic structures on SEM images. We focused on samples of natural, play-of-color opals that diffract visible light and hence are periodically structured. Conventional sample preparation by hydrofluoric acid etch was not used; untreated, freshly broken surfaces were examined at low magnification relative to the expected period of the structural features, and the SEM was adjusted to get a very high number of pixels in the images. These SEM images were treated by software to calculate autocorrelation, FFT, and IFFT. We describe how we adjusted SEM acquisition parameters for best results. We first applied our procedure on an SEM image on which the structure was obvious. Then, we applied the same procedure on a sample that must contain a periodic structure because it diffracts visible light, but on which no structure was visible on the SEM image. In both cases, we obtained clearly periodic patterns that allowed measurements of structural parameters. We also investigated how the irregularly broken surface interfered with the periodic structure to produce additional periodicity. We tested the limits of our methodology with the help of simulated images.
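
    A simplified software analogue of the FFT/IFFT step is sketched below: transform the image, keep only the strongest non-DC Fourier components, and transform back so that the hidden periodicity re-emerges. The thresholding rule and toy lattice image are assumptions standing in for the autocorrelation-based chain described above.

```python
import numpy as np

def fft_bandpass_reveal(image, keep_fraction=0.01):
    """Reveal hidden periodicity: 2D FFT, keep only the strongest non-DC
    Fourier components, inverse FFT. Illustrative thresholding strategy only."""
    img = image - image.mean()
    F = np.fft.fft2(img)
    mag = np.abs(F)
    thresh = np.quantile(mag, 1.0 - keep_fraction)     # keep the top fraction of coefficients
    F_filtered = np.where(mag >= thresh, F, 0.0)
    return np.real(np.fft.ifft2(F_filtered))

# toy check: a periodic lattice buried in strong noise
y, x = np.mgrid[0:256, 0:256]
lattice = np.cos(2 * np.pi * x / 16) * np.cos(2 * np.pi * y / 16)
noisy = lattice + 3.0 * np.random.default_rng(2).standard_normal((256, 256))
clean = fft_bandpass_reveal(noisy)     # the periodic pattern re-emerges in `clean`
```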

  10. A new FFT-based algorithm to compute Born radii in the generalized Born theory of biomolecule solvation

    SciTech Connect

    Cai, Wei; Xu, Zhenli; Baumketner, Andrij

    2008-12-20

    In this paper, a new method for calculating effective atomic radii within the generalized Born (GB) model of implicit solvation is proposed, for use in computer simulations of biomolecules. First, a new formulation for the GB radii is developed, in which smooth kernels are used to eliminate the divergence in volume integrals intrinsic in the model. Next, the fast Fourier transform (FFT) algorithm is applied to integrate smoothed functions, taking advantage of the rapid spectral decay provided by the smoothing. The total cost of the proposed algorithm scales as O(N^3 log N + M) where M is the number of atoms comprised in a molecule and N is the number of FFT grid points in one dimension, which depends only on the geometry of the molecule and the spectral decay of the smooth kernel but not on M. To validate our algorithm, numerical tests are performed for three solute models: one spherical object for which exact solutions exist and two protein molecules of differing size. The tests show that our algorithm is able to reach the accuracy of other existing GB implementations, while offering much lower computational cost.

  11. A new FFT-based algorithm to compute Born radii in the generalized Born theory of biomolecule solvation.

    PubMed

    Cai, Wei; Xu, Zhenli; Baumketner, Andrij

    2008-12-20

    In this paper, a new method for calculating effective atomic radii within the generalized Born (GB) model of implicit solvation is proposed, for use in computer simulations of bio-molecules. First, a new formulation for the GB radii is developed, in which smooth kernels are used to eliminate the divergence in volume integrals intrinsic in the model. Next, the Fast Fourier Transform (FFT) algorithm is applied to integrate smoothed functions, taking advantage of the rapid spectral decay provided by the smoothing. The total cost of the proposed algorithm scales as O(N^3 log N + M) where M is the number of atoms comprised in a molecule, and N is the number of FFT grid points in one dimension, which depends only on the geometry of the molecule and the spectral decay of the smooth kernel but not on M. To validate our algorithm, numerical tests are performed for three solute models: one spherical object for which exact solutions exist and two protein molecules of differing size. The tests show that our algorithm is able to reach the accuracy of other existing GB implementations, while offering much lower computational cost.
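
    The two records above describe the same core idea: once the kernel is smooth, the volume integral can be evaluated for every grid point at once as an FFT convolution between a gridded atomic density and the sampled kernel. The sketch below illustrates that pattern with a Gaussian placeholder kernel and nearest-grid-point assignment; it is not the paper's actual GB kernel or numerical scheme.

```python
import numpy as np

def fft_kernel_integral(atom_xyz, grid_n, box_len, kernel_func):
    """Evaluate sum over atoms of kernel(|r - r_atom|) at every grid point via
    one 3D FFT convolution. `kernel_func` is a placeholder for a smooth,
    rapidly decaying kernel (not the paper's actual GB kernel)."""
    h = box_len / grid_n
    density = np.zeros((grid_n,) * 3)
    idx = np.clip((np.asarray(atom_xyz) / h).astype(int), 0, grid_n - 1)
    for i, j, k in idx:
        density[i, j, k] += 1.0                      # nearest-grid-point assignment

    # kernel sampled on the periodic grid, centred at the origin
    coords = np.arange(grid_n) * h
    coords = np.minimum(coords, box_len - coords)    # minimum-image distance per axis
    dx, dy, dz = np.meshgrid(coords, coords, coords, indexing="ij")
    kernel = kernel_func(np.sqrt(dx**2 + dy**2 + dz**2))

    # circular convolution: one forward/backward FFT pair covers all grid points
    return np.real(np.fft.ifftn(np.fft.fftn(density) * np.fft.fftn(kernel)))

# toy usage with a Gaussian stand-in kernel
atoms = [[5.0, 5.0, 5.0], [7.5, 5.0, 5.0]]
field = fft_kernel_integral(atoms, grid_n=32, box_len=10.0,
                            kernel_func=lambda r: np.exp(-r**2))
```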

  12. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced that directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.

  13. Architectural Study of Adaptive Algorithms for Adaptive Beam Communication Antennas

    DTIC Science & Technology

    1988-07-01

    how the bit level similarities appear. Consider the multiplication of two n x n matrices, S = (s_ik) and H = (h_kj), to form the matrix product Y = (y_ij), where y_ij = sum over k of s_ik h_kj (i, j = 1, 2, ..., n) (34). Without any loss of generality, Y may be considered to be made up of independent vectors y. The

  14. Architecture as Design Study.

    ERIC Educational Resources Information Center

    Kauppinen, Heta

    1989-01-01

    Explores the use of analogies in architectural design, the importance of Gestalt theory and aesthetic canons in understanding and being sensitive to architecture. Emphasizes the variation between public and professional appreciation of architecture. Notes that an understanding of architectural process enables students to improve the aesthetic…

  15. Architecture and Children.

    ERIC Educational Resources Information Center

    Taylor, Anne; Campbell, Leslie

    1988-01-01

    Describes "Architecture and Children," a traveling exhibition which visually involves children in architectural principles and historic styles. States that it teaches children about architecture, and through architecture it instills the basis for aesthetic judgment. Argues that "children learn best by concrete examples of ideas, not…

  16. Adaptive network countermeasures.

    SciTech Connect

    McClelland-Bane, Randy; Van Randwyk, Jamie A.; Carathimas, Anthony G.; Thomas, Eric D.

    2003-10-01

    This report describes the results of a two-year LDRD funded by the Differentiating Technologies investment area. The project investigated the use of countermeasures in protecting computer networks as well as how current countermeasures could be changed in order to adapt with both evolving networks and evolving attackers. The work involved collaboration between Sandia employees and students in the Sandia - California Center for Cyber Defenders (CCD) program. We include an explanation of the need for adaptive countermeasures, a description of the architecture we designed to provide adaptive countermeasures, and evaluations of the system.

  17. Space Telecommunications Radio Architecture (STRS): Technical Overview

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space based platforms.

  18. An efficient hybrid MLFMA-FFT solver for the volume integral equation in case of sparse 3D inhomogeneous dielectric scatterers

    SciTech Connect

    De Zaeytijd, J.; Bogaert, I.; Franchois, A.

    2008-07-01

    Electromagnetic scattering problems involving inhomogeneous objects can be numerically solved by applying a Method of Moments discretization to the volume integral equation. For electrically large problems, the iterative solution of the resulting linear system is expensive, both computationally and in memory use. In this paper, a hybrid MLFMA-FFT method is presented, which combines the fast Fourier transform (FFT) method and the High Frequency Multilevel Fast Multipole Algorithm (MLFMA) in order to reduce the cost of the matrix-vector multiplications needed in the iterative solver. The method represents the scatterers within a set of possibly disjoint identical cubic subdomains, which are meshed using a uniform cubic grid. This specific mesh allows for the application of FFTs to calculate the near interactions in the MLFMA and reduces the memory cost considerably, since the aggregation and disaggregation matrices of the MLFMA can be reused. Additional improvements to the general MLFMA framework, such as an extension of the FFT interpolation scheme of Sarvas et al. from the scalar to the vectorial case in combination with a more economical representation of the radiation patterns on the lowest level in vector spherical harmonics, are proposed, and the choice of the subdomain size is discussed. The hybrid method performs better in terms of speed and memory use on large sparse configurations than both the FFT method and the HF MLFMA separately and it has lower memory requirements on general large problems. This is illustrated on a number of representative numerical test cases.

  19. FFT integration of instantaneous 3D pressure gradient fields measured by Lagrangian particle tracking in turbulent flows

    NASA Astrophysics Data System (ADS)

    Huhn, F.; Schanz, D.; Gesemann, S.; Schröder, A.

    2016-09-01

    Pressure gradient fields in unsteady flows can be estimated through flow measurements of the material acceleration in the fluid and the assumption of the governing momentum equation. In order to derive pressure from its gradient, almost exclusively two numerical methods have been used to spatially integrate the pressure gradient until now: first, direct path integration in the spatial domain, and second, the solution of the Poisson equation for pressure. Instead, we propose an alternative third method that integrates the pressure gradient field in Fourier space. Using a FFT function, the method is fast and easy to implement in programming languages for scientific computing. We demonstrate the accuracy of the integration scheme on a synthetic pressure field and apply it to an experimental example based on time-resolved material acceleration data from high-resolution Lagrangian particle tracking with the Shake-The-Box method.
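
    In two dimensions and with periodic boundaries, the Fourier-space integration amounts to taking the divergence of the measured gradient and solving the resulting Poisson equation spectrally, fixing the arbitrary constant at the zero mode. The sketch below verifies this on an analytic test field; the padding and boundary treatment of the actual method are omitted.

```python
import numpy as np

def integrate_gradient_fft(gx, gy, dx=1.0, dy=1.0):
    """Reconstruct p (up to a constant) from a measured gradient field (gx, gy)
    by solving  laplacian(p) = div(g)  in Fourier space. Periodic 2D sketch only."""
    ny, nx = gx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)                   # KX varies along columns, KY along rows
    div_hat = KX * np.fft.fft2(gx) + KY * np.fft.fft2(gy)
    lap = KX**2 + KY**2
    lap[0, 0] = 1.0                                # avoid division by zero at the DC mode
    p_hat = div_hat / lap
    p_hat[0, 0] = 0.0                              # pressure is defined up to a constant
    return np.real(np.fft.ifft2(p_hat))

# verify on a smooth periodic test field
y, x = np.mgrid[0:128, 0:128] * (2 * np.pi / 128)
p_true = np.sin(x) * np.cos(2 * y)
gx = np.cos(x) * np.cos(2 * y)                     # analytic dp/dx
gy = -2 * np.sin(x) * np.sin(2 * y)                # analytic dp/dy
p_rec = integrate_gradient_fft(gx, gy, dx=2 * np.pi / 128, dy=2 * np.pi / 128)
print(np.max(np.abs((p_rec - p_rec.mean()) - (p_true - p_true.mean()))))   # near machine precision
```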

  20. Pore size modulation in electrochemically etched macroporous p-type silicon monitored by FFT impedance spectroscopy and Raman scattering.

    PubMed

    Quiroga-González, Enrique; Carstensen, Jürgen; Glynn, Colm; O'Dwyer, Colm; Föll, Helmut

    2014-01-07

    The understanding of the mechanisms of macropore formation in p-type Si with respect to modulation of the pore diameter is still in its infancy. In the present work, macropores with significantly modulated diameters have been produced electrochemically in p-type Si. The effect of the current density and the amount of surfactant in the etching solution are shown to influence the modulation in pore diameter and morphology. Data obtained during the etching process by in situ FFT impedance spectroscopy correlate the pore diameter variation with certain time constants found in the kinetics of the dissolution process. Raman scattering and electron microscopy confirm the mesoscopic structure and roughening of the pore walls. Spectroscopic and microscopic methods confirm that the pore wall morphology is correlated with the conditions of pore modulation.

  1. Adaptive nonlinear flight control

    NASA Astrophysics Data System (ADS)

    Rysdyk, Rolf Theoduor

    1998-08-01

    Research under the supervision of Dr. Calise and Dr. Prasad at the Georgia Institute of Technology, School of Aerospace Engineering, has demonstrated the applicability of an adaptive controller architecture. The architecture successfully combines model inversion control with adaptive neural network (NN) compensation to cancel the inversion error. The tiltrotor aircraft provides a particularly interesting control design challenge. The tiltrotor aircraft is capable of converting from stable responsive fixed wing flight to unstable sluggish hover in helicopter configuration. It is desirable to provide the pilot with consistency in handling qualities through a conversion from fixed wing flight to hover. The linear model inversion architecture was adapted by providing frequency separation in the command filter and the error-dynamics, while not exciting the actuator modes. This design of the architecture provides for a model following setup with guaranteed performance. This in turn allowed for convenient implementation of guaranteed handling qualities. A rigorous proof of boundedness is presented making use of compact sets and the LaSalle-Yoshizawa theorem. The analysis allows for the addition of the e-modification which guarantees boundedness of the NN weights in the absence of persistent excitation. The controller is demonstrated on the Generic Tiltrotor Simulator of Bell-Textron and NASA Ames R.C. The model inversion implementation is robustified with respect to unmodeled input dynamics, by adding dynamic nonlinear damping. A proof of boundedness of signals in the system is included. The effectiveness of the robustification is also demonstrated on the XV-15 tiltrotor. The SHL Perceptron NN provides a more powerful application, based on the universal approximation property of this type of NN. The SHL NN based architecture is also robustified with the dynamic nonlinear damping. A proof of boundedness extends the SHL NN augmentation with robustness to unmodeled actuator

  2. Role of System Architecture in Architecture in Developing New Drafting Tools

    NASA Astrophysics Data System (ADS)

    Sorguç, Arzu Gönenç

    In this study, the impact of information technologies on the architectural design process is discussed. First, the differences and nuances between the concepts of software engineering and system architecture are clarified. Then, the design process in engineering and the design process in architecture are compared, with 3-D models considered as the center of the design process through which the other disciplines take part in the design. It is pointed out that in many high-end engineering applications, 3-D solid models, and consequently the digital mock-up concept, have become common practice. But architecture, one of the important customers of CAD systems employing these tools, has not started to use these 3-D models. It is shown that the reason for this time lag between architecture and engineering lies in the tradition of design attitude. Therefore, a new design scheme, a meta-model, is proposed to develop an integrated design model centered on the 3-D model. A system architecture is also proposed to achieve the transformation of the architectural design process by replacing 2-D thinking with 3-D thinking. In the proposed system architecture, the CAD systems are included and adapted for 3-D architectural design in order to provide interfaces for integrating all relevant disciplines into the design process. It is also shown that such a change will make it possible to elaborate the intelligent or smart building concept in the future.

  3. Middleware Architecture Evaluation for Dependable Self-managing Systems

    SciTech Connect

    Liu, Yan; Babar, Muhammad A.; Gorton, Ian

    2008-10-10

    Middleware provides infrastructure support for creating dependable software systems. A specific middleware implementation plays a critical role in determining the quality attributes that satisfy a system’s dependability requirements. Evaluating a middleware architecture at an early development stage can help to pinpoint critical architectural challenges and optimize design decisions. In this paper, we present a method and its application to evaluate middleware architectures, driven by emerging architecture patterns for developing self-managing systems. Our approach focuses on two key attributes of dependability, reliability and maintainability by means of fault tolerance and fault prevention. We identify the architectural design patterns necessary to build an adaptive self-managing architecture that is capable of preventing or recovering from failures. These architectural patterns and their impacts on quality attributes create the context for middleware evaluation. Our approach is demonstrated by an example application -- failover control of a financial application on an enterprise service bus.

  4. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  5. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  6. Distributed multiport memory architecture

    NASA Technical Reports Server (NTRS)

    Kohl, W. H. (Inventor)

    1983-01-01

    A multiport memory architecture is disclosed for each of a plurality of task centers connected to a command and data bus. Each task center includes a memory and a plurality of devices which request direct memory access as needed. The memory includes an internal data bus and an internal address bus to which the devices are connected, and direct timing and control logic comprised of a 10-state ring counter for allocating memory devices by enabling AND gates connected to the request signal lines of the devices. The outputs of AND gates connected to the same device are combined by OR gates to form an acknowledgement signal that enables the devices to address the memory during the next clock period. The length of the ring counter may be effectively lengthened to any multiple of ten to allow for more direct memory access intervals in one repetitive sequence. One device is a network bus adapter which serially shifts a data word (8 bits plus control and parity bits) onto the command and data bus during the next ten direct memory access intervals after it has been granted access. The NBA is therefore allocated only one access in every ten intervals, which is a predetermined interval for all centers. The ring counters of all centers are periodically synchronized by a DMA SYNC signal to assure that all NBAs are able to function in synchronism for data transfer from one center to another.

  7. Adaptive Decentralized Control

    DTIC Science & Technology

    1985-04-01

    computational requirements and response time provide strong incentives for the use of distributed control architectures. The basic focus of our research is on...ADCON (for Adaptive Decentralized CONtrol) comes from the following observations about the current status of control theory. An important aspect of...decentralized control of completely known systems still has many unresolved issues and some basic problems are yet to be answered. Under these conditions

  8. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

    This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture. Simulation examples are presented. Numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. In this paper, we presented a state-predictor based direct adaptive tracking design methodology for multi-input dynamical systems with partially known dynamics. The efficiency of the design was demonstrated using the short-period dynamics of an aircraft. A formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.
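
    As background for the comparison, a classical direct MRAC loop for a scalar plant can be simulated in a few lines; the predictor-based variant studied in the paper is not shown, and the plant, gains, and reference command below are arbitrary illustrative choices.

```python
import numpy as np

def simulate_mrac(a=1.0, b=2.0, am=4.0, gamma=10.0, dt=1e-3, t_end=10.0):
    """Classical direct MRAC for a scalar plant  xdot = a*x + b*u  (a unknown,
    sign of b known), tracking the reference model  xm_dot = -am*xm + am*r.
    Illustrative values only."""
    n = int(t_end / dt)
    x = xm = 0.0
    kx = kr = 0.0                                  # adaptive feedback / feedforward gains
    hist = np.zeros((n, 3))
    for i in range(n):
        t = i * dt
        r = 1.0 if (t % 4.0) < 2.0 else -1.0       # square-wave reference command
        u = kx * x + kr * r
        e = x - xm                                 # tracking error
        # gradient-type adaptive laws using sign(b)
        kx += dt * (-gamma * e * x * np.sign(b))
        kr += dt * (-gamma * e * r * np.sign(b))
        # plant and reference model (forward Euler)
        x += dt * (a * x + b * u)
        xm += dt * (-am * xm + am * r)
        hist[i] = (x, xm, e)
    return hist

hist = simulate_mrac()
print("final |tracking error|:", abs(hist[-1, 2]))   # should be small after adaptation
```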

  9. AutoTT: automated detection and analysis of T-tubule architecture in cardiomyocytes.

    PubMed

    Guo, Ang; Song, Long-Sheng

    2014-06-17

    Cardiac transverse (T)-tubules provide a specialized structure for synchronization and stabilization of sarcoplasmic reticulum Ca(2+) release in healthy cardiomyocytes. The application of laser scanning confocal microscopy and the use of fluorescent lipophilic membrane dyes have boosted the discoveries that T-tubule remodeling is a significant factor contributing to cardiac contractile dysfunction. However, the analysis and quantification of the remodeling of T-tubules have been a challenge and remain inconsistent among different research laboratories. Fast Fourier transformation (FFT) is the major analysis method applied to calculate the spatial frequency spectrum, which is used to represent the regularity of T-tubule systems. However, this approach is flawed because the density of T-tubules as well as non-T-tubule signals in the images influence the spectrum power generated by FFT. Preprocessing of images and topological architecture extracting is necessary to remove non-T-tubule noise from the analysis. In addition, manual analysis of images is time consuming and prone to errors and investigator bias. Therefore, we developed AutoTT, an automated analysis program that incorporates image processing, morphological feature extraction, and FFT analysis of spectrum power. The underlying algorithm is implemented in MATLAB (The MathWorks, Natick, MA). The program outputs the densities of transversely oriented T-tubules and longitudinally oriented T-tubules, power spectrum of the overall T-tubule systems, and averaged spacing of T-tubules. We also combined the density and regularity of T-tubules to give an index of T-tubule integrity (TTint), which provides a global evaluation of T-tubule alterations. In summary, AutoTT provides a reliable, easy to use, and fast approach for analyzing myocyte T-tubules. This program can also be applied to measure the density and integrity of other cellular structures.

  10. Applications of the conjugate gradient FFT method in scattering and radiation including simulations with impedance boundary conditions

    NASA Technical Reports Server (NTRS)

    Barkeshli, Kasra; Volakis, John L.

    1991-01-01

    The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature which eliminates a need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus, eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
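
    The essence of CGFFT is that the operator is applied inside the conjugate-gradient iteration as an FFT-based convolution, so the system matrix is never formed. The 1D sketch below uses a symmetric positive definite circulant kernel so that plain CG applies; practical scattering formulations are more general and often iterate on the normal equations.

```python
import numpy as np

def cg_fft_solve(kernel_first_col, b, tol=1e-10, max_iter=500):
    """Conjugate-gradient solution of  A x = b  where A is a symmetric positive
    definite circulant operator applied via FFT circular convolution, so A is
    never formed explicitly (the essence of CGFFT). Illustrative 1D sketch."""
    k_hat = np.fft.fft(kernel_first_col)

    def apply_A(v):                                # matrix-free matvec: one FFT pair
        return np.real(np.fft.ifft(k_hat * np.fft.fft(v)))

    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy SPD convolution kernel (periodic 1D Laplacian plus a diagonal shift)
n = 64
kernel = np.zeros(n)
kernel[0], kernel[1], kernel[-1] = 4.0, -1.0, -1.0
b = np.random.default_rng(3).standard_normal(n)
x = cg_fft_solve(kernel, b)
print(np.linalg.norm(np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x))) - b))
```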

  11. gEMfitter: a highly parallel FFT-based 3D density fitting tool with GPU texture memory acceleration.

    PubMed

    Hoang, Thai V; Cavin, Xavier; Ritchie, David W

    2013-11-01

    Fitting high resolution protein structures into low resolution cryo-electron microscopy (cryo-EM) density maps is an important technique for modeling the atomic structures of very large macromolecular assemblies. This article presents "gEMfitter", a highly parallel fast Fourier transform (FFT) EM density fitting program which can exploit the special hardware properties of modern graphics processor units (GPUs) to accelerate both the translational and rotational parts of the correlation search. In particular, by using the GPU's special texture memory hardware to rotate 3D voxel grids, the cost of rotating large 3D density maps is almost completely eliminated. Compared to performing 3D correlations on one core of a contemporary central processor unit (CPU), running gEMfitter on a modern GPU gives up to 26-fold speed-up. Furthermore, using our parallel processing framework, this speed-up increases linearly with the number of CPUs or GPUs used. Thus, it is now possible to use routinely more robust but more expensive 3D correlation techniques. When tested on low resolution experimental cryo-EM data for the GroEL-GroES complex, we demonstrate the satisfactory fitting results that may be achieved by using a locally normalised cross-correlation with a Laplacian pre-filter, while still being up to three orders of magnitude faster than the well-known COLORES program.
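
    The translational part of such an FFT fitting search can be sketched compactly: correlate the two maps over all integer-voxel shifts with one forward/inverse FFT pair and pick the peak. The rotational search, local normalisation, and GPU texture-memory tricks of gEMfitter are not shown, and the toy maps below are illustrative.

```python
import numpy as np

def best_translation(ref_map, probe_map):
    """Find the integer-voxel translation of `probe_map` that maximizes its
    cross-correlation with `ref_map`, computed for all shifts at once via FFT."""
    F_ref = np.fft.fftn(ref_map)
    F_probe = np.fft.fftn(probe_map)
    corr = np.real(np.fft.ifftn(F_ref * np.conj(F_probe)))
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the box into negative offsets
    shift = tuple(s - n if s > n // 2 else s for s, n in zip(shift, corr.shape))
    return shift, corr.max()

# toy check: a map shifted by (3, -2, 5) voxels should be recovered
rng = np.random.default_rng(4)
ref = rng.random((32, 32, 32))
probe = np.roll(ref, shift=(-3, 2, -5), axis=(0, 1, 2))
print(best_translation(ref, probe)[0])     # expect (3, -2, 5)
```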

  12. Fast acquisition of high resolution 4-D amide-amide NOESY with diagonal suppression, sparse sampling and FFT-CLEAN.

    PubMed

    Werner-Allen, Jon W; Coggins, Brian E; Zhou, Pei

    2010-05-01

    Amide-amide NOESY provides important distance constraints for calculating global folds of large proteins, especially integral membrane proteins with beta-barrel folds. Here, we describe a diagonal-suppressed 4-D NH-NH TROSY-NOESY-TROSY (ds-TNT) experiment for NMR studies of large proteins. The ds-TNT experiment employs a spin state selective transfer scheme that suppresses diagonal signals while providing TROSY optimization in all four dimensions. Active suppression of the strong diagonal peaks greatly reduces the dynamic range of observable signals, making this experiment particularly suitable for use with sparse sampling techniques. To demonstrate the utility of this method, we collected a high resolution 4-D ds-TNT spectrum of a 23kDa protein using randomized concentric shell sampling (RCSS), and we used FFT-CLEAN processing for further reduction of aliasing artifacts - the first application of these techniques to a NOESY experiment. A comparison of peak parameters in the high resolution 4-D dataset with those from a conventionally-sampled 3-D control spectrum shows an accurate reproduction of NOE crosspeaks in addition to a significant reduction in resonance overlap, which largely eliminates assignment ambiguity. Likewise, a comparison of 4-D peak intensities and volumes before and after application of the CLEAN procedure demonstrates that the reduction of aliasing artifacts by CLEAN does not systematically distort NMR signals.

  13. Further Development of the FFT-based Method for Atomistic Modeling of Protein Folding and Binding under Crowding: Optimization of Accuracy and Speed.

    PubMed

    Qin, Sanbo; Zhou, Huan-Xiang

    2014-07-08

    Recently, we (Qin, S.; Zhou, H. X. J. Chem. Theory Comput. 2013, 9, 4633-4643) developed the FFT-based method for Modeling Atomistic Protein-crowder interactions, henceforth FMAP. Given its potential wide use for calculating effects of crowding on protein folding and binding free energies, here we aimed to optimize the accuracy and speed of FMAP. FMAP is based on expressing protein-crowder interactions as correlation functions and evaluating the latter via fast Fourier transform (FFT). The numerical accuracy of FFT improves as the grid spacing for discretizing space is reduced, but at increasing computational cost. We sought to speed up FMAP calculations by using a relatively coarse grid spacing of 0.6 Å and then correcting for discretization errors. This strategy was tested for different types of interactions (hard-core repulsion, nonpolar attraction, and electrostatic interaction) and over a wide range of protein-crowder systems. We were able to correct for the numerical errors on hard-core repulsion and nonpolar attraction by an 8% inflation of atomic hard-core radii and on electrostatic interaction by a 5% inflation of the magnitudes of protein atomic charges. The corrected results have higher accuracy and enjoy a speedup of more than 100-fold over those obtained using a fine grid spacing of 0.15 Å. With this optimization of accuracy and speed, FMAP may become a practical tool for realistic modeling of protein folding and binding in cell-like environments.

  14. Grid Architecture 2

    SciTech Connect

    Taft, Jeffrey D.

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture.

  15. Software architecture design domain

    SciTech Connect

    White, S.A.

    1996-12-31

    Software architectures can provide a basis for the capture and subsequent reuse of design knowledge. The goal of software architecture is to allow the design of a system to take place at a higher level of abstraction; a level concerned with components, connections, constraints, and rationale. This architectural view of software adds a new layer of abstraction to the traditional design phase of software development. It has resulted in a flurry of activity on techniques, tools, and architectural design languages (ADLs) developed specifically to assist with design at this level. An analysis of architectural descriptions, even though they differ in notation, shows a common set of key constructs that are present across widely varying domains. These common aspects form a core set of constructs that should belong to any ADL in order for the language to offer the ability to specify software systems at the architectural level. This analysis also revealed a second set of constructs which expand the first set, thereby improving the syntax and semantics. These constructs are classified according to whether they provide representation and analysis support for architectures belonging to many varying application domains (domain-independent constructs) or to a particular application domain (domain-dependent constructs). This paper presents the constructs of these two classes and their placement in the architecture design domain, and shows how they may be used to classify, select, and analyze proclaimed ADLs.

  16. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  17. De-Architecturization

    ERIC Educational Resources Information Center

    Wines, James

    1975-01-01

    De-architecturization is art about architecture, a catalyst suggesting that public art does not have to respond to formalist doctrine; but rather, may evolve from the informational reservoirs of the city environment, where phenomenology and structure become the fabric of its existence. (Author/RK)

  18. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  19. The Technology of Architecture

    ERIC Educational Resources Information Center

    Reese, Susan

    2006-01-01

    This article discusses how career and technical education is helping students draw up plans for success in architectural technology. According to the College of DuPage (COD) in Glen Ellyn, Illinois, one of the two-year schools offering training in architectural technology, graduates have a number of opportunities available to them. They may work…

  20. Architectural Physics: Lighting.

    ERIC Educational Resources Information Center

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  1. Teaching American Indian Architecture.

    ERIC Educational Resources Information Center

    Winchell, Dick

    1991-01-01

    Reviews "Native American Architecture," by Nabokov and Easton, an encyclopedic work that examines technology, climate, social structure, economics, religion, and history in relation to house design and the "meaning" of space among tribes of nine regions. Describes this book's use in a college course on Native American architecture. (SV)

  2. Emerging supercomputer architectures

    SciTech Connect

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  3. ESPC Common Model Architecture

    DTIC Science & Technology

    2014-09-30

    support for the Intel MIC architecture, the Apple Clang/LLVM C++ compiler is supported on both Linux and Darwin, and ESMF's dependency on the NetCDF C...compiler on both Linux and Darwin systems. • Support was added to compile the ESMF library for the Intel MIC architecture under Linux. This allows

  4. Applying neuroscience to architecture.

    PubMed

    Eberhard, John P

    2009-06-25

    Architectural practice and neuroscience research use our brains and minds in much the same way. However, the link between neuroscience knowledge and architectural design--with rare exceptions--has yet to be made. The concept of linking these two fields is a challenge worth considering.

  5. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  6. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  7. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  8. The Simulation Intranet Architecture

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratories (Holmes et al. 1998). The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high-fidelity, full-physics simulations in a high-performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  9. Methodology requirements for intelligent systems architecture

    NASA Technical Reports Server (NTRS)

    Grant, Terry; Colombano, Silvano

    1987-01-01

    The methodology required for the development of the 'intelligent system architecture' of distributed computer systems which integrate standard data processing capabilities with symbolic processing to provide powerful and highly autonomous adaptive processing capabilities must encompass three elements: (1) a design knowledge capture system, (2) computer-aided engineering, and (3) verification and validation metrics and tests. Emphasis must be put on the earliest possible definition of system requirements and the realistic definition of allowable system uncertainties. Methodologies must also address human factor issues.

  10. Fast notification architecture for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hahk

    2013-03-01

    In an emergency, since it is vital to transmit the message to users immediately after analysing the data to prevent disaster, this article presents the deployment of a fast notification architecture for a wireless sensor network. The sensor nodes of the proposed architecture can monitor an emergency situation periodically and transmit the sensing data immediately to the sink node. We decide the grade of a fire situation according to a decision rule using the sensed values of temperature, CO, smoke density and rate of temperature increase. To estimate the grade of air pollution, on the other hand, sensing data such as dust, formaldehyde, NO2 and CO2 are applied to the given knowledge model. Since the sink node in the architecture has a ZigBee interface, it can transmit alert messages in real time, according to the analysed results received from the host server, to terminals equipped with a SIM card-type ZigBee module. The host server also notifies registered users who have cellular phones through the short message service server of the cellular network. Thus, the proposed architecture can adapt to an emergency situation dynamically compared with the conventional architecture based on video processing. In the testbed, after air pollution and fire data were generated, the terminal received the message in less than 3 s. The test results show that this system can also be applied to buildings and public areas where many people gather together, to prevent unexpected disasters in urban settings.
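
    The abstract does not give the actual thresholds of its decision rule, so the following sketch is purely hypothetical: the cut-off values, scoring and grade labels are invented solely to illustrate how four sensed quantities might be combined into a coarse fire grade before the sink node raises an alert.

      # Hypothetical fire-grading rule; all thresholds are invented for illustration.
      def fire_grade(temp_c, co_ppm, smoke_density, temp_rise_c_per_min):
          """Map four sensor readings to a coarse grade (0 = normal .. 3 = alarm)."""
          score = 0
          score += 2 if temp_c > 60 else (1 if temp_c > 45 else 0)
          score += 2 if co_ppm > 100 else (1 if co_ppm > 35 else 0)
          score += 2 if smoke_density > 0.15 else (1 if smoke_density > 0.05 else 0)
          score += 2 if temp_rise_c_per_min > 8 else (1 if temp_rise_c_per_min > 3 else 0)
          if score >= 6:
              return 3      # alarm: push message to ZigBee terminals and SMS users
          if score >= 4:
              return 2      # warning
          if score >= 2:
              return 1      # caution
          return 0          # normal

      print(fire_grade(temp_c=70, co_ppm=120, smoke_density=0.2, temp_rise_c_per_min=10))  # 3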

  11. Context Aware Middleware Architectures: Survey and Challenges

    PubMed Central

    Li, Xin; Eckert, Martina; Martinez, José-Fernán; Rubio, Gregorio

    2015-01-01

    Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work. PMID:26307988

  12. Context Aware Middleware Architectures: Survey and Challenges.

    PubMed

    Li, Xin; Eckert, Martina; Martinez, José-Fernán; Rubio, Gregorio

    2015-08-20

    Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work.

  13. A global distributed storage architecture

    NASA Technical Reports Server (NTRS)

    Lionikis, Nemo M.; Shields, Michael F.

    1996-01-01

    NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment, where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view are that it insulates the user from much of the complexity of our multi-platform, networked, computer and storage environment and hides its diverse underlying implementation details. In this paper, we will describe one of the fundamental services being built in our envisioned infrastructure; a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.

  14. Fractal Geometry of Architecture

    NASA Astrophysics Data System (ADS)

    Lorenz, Wolfgang E.

    In Fractals smaller parts and the whole are linked together. Fractals are self-similar, as those parts are, at least approximately, scaled-down copies of the rough whole. In architecture, such a concept has also been known for a long time. Not only architects of the twentieth century called for an overall idea that is mirrored in every single detail, but also Gothic cathedrals and Indian temples offer self-similarity. This study mainly focuses on the question of whether this concept of self-similarity makes architecture with fractal properties more diverse and interesting than Euclidean Modern architecture. The first part gives an introduction and explains Fractal properties in various natural and architectural objects, presenting the underlying structure by computer programmed renderings. In this connection, differences between the fractal, architectural concept and true, mathematical Fractals are worked out to make their limits clear. This is the basis for dealing with the problem of whether fractal-like architecture, particularly facades, can be measured so that different designs can be compared with each other under the aspect of fractal properties. Finally the usability of the Box-Counting Method, an easy-to-use measurement method of Fractal Dimension, is analyzed with regard to architecture.

  15. Assured Mission Support Space Architecture (AMSSA) study

    NASA Technical Reports Server (NTRS)

    Hamon, Rob

    1993-01-01

    The assured mission support space architecture (AMSSA) study was conducted with the overall goal of developing a long-term requirements-driven integrated space architecture to provide responsive and sustained space support to the combatant commands. Although derivation of an architecture was the focus of the study, there are three significant products from the effort. The first is a philosophy that defines the necessary attributes for the development and operation of space systems to ensure an integrated, interoperable architecture that, by design, provides a high degree of combat utility. The second is the architecture itself; based on an interoperable system-of-systems strategy, it reflects a long-range goal for space that will evolve as user requirements adapt to a changing world environment. The third product is the framework of a process that, when fully developed, will provide essential information to key decision makers for space systems acquisition in order to achieve the AMSSA goal. It is a categorical imperative that military space planners develop space systems that will act as true force multipliers. AMSSA provides the philosophy, process, and architecture that, when integrated with the DOD requirements and acquisition procedures, can yield an assured mission support capability from space to the combatant commanders. An important feature of the AMSSA initiative is the participation by every organization that has a role or interest in space systems development and operation. With continued community involvement, the concept of the AMSSA will become a reality. In summary, AMSSA offers a better way to think about space (philosophy) that can lead to the effective utilization of limited resources (process) with an infrastructure designed to meet the future space needs (architecture) of our combat forces.

  16. Beethoven: architecture for media telephony

    NASA Astrophysics Data System (ADS)

    Keskinarkaus, Anja; Ohtonen, Timo; Sauvola, Jaakko J.

    1999-11-01

    This paper presents a new architecture and techniques for media-based telephony over wireless/wireline IP networks, called 'Beethoven'. The platform supports complex media transport and mobile conferencing for multi-user environments having non-uniform access. New techniques are presented to provide advanced multimedia call management over different media types and their presentation. The routing and distribution of the media is rendered over a standards-based protocol. Our approach offers a generic, distributed and object-oriented solution having interfaces where signal processing and unified messaging algorithms are embedded as instances of core classes. The platform services are divided into 'basic communication', 'conferencing' and 'media session'. The basic communication services form the platform core and support access from a scalable user interface to network end-points. Conferencing services take care of media filter adaptation, conversion, error resiliency, multi-party connection and event signaling, while the media session services offer resources for application-level communication between the terminals. The platform allows flexible attachment of any number of plug-in modules, and thus we use it as a test bench for multiparty/multi-point conferencing and as an evaluation bench for signal coding algorithms. In tests, our architecture showed the ability to be scaled easily from a simple voice terminal to a complex multi-user conference sharing virtual data.

  17. Decentralized and Modular Electrical Architecture

    NASA Astrophysics Data System (ADS)

    Elisabelar, Christian; Lebaratoux, Laurence

    2014-08-01

    This paper presents the studies made on the definition and design of a decentralized and modular electrical architecture that can be used for power distribution, active thermal control (ATC) and standard input-output electrical interfaces. Traditionally implemented inside a central unit such as an OBC or RTU, these interfaces can be dispatched throughout the satellite by using MicroRTU. CNES proposes a similar approach to MicroRTU. The system is based on a bus called BRIO (Bus Réparti des IO), which is composed of a power bus and an RS485 digital bus. The BRIO architecture is made up of several miniature terminals called BTCUs (BRIO Terminal Control Units) distributed in the spacecraft. The challenge was to design and develop the BTCU with very small volume, low consumption and low cost. The standard BTCU models are developed and qualified with a configuration dedicated to ATC, while the first flight model will fly on MICROSCOPE for PYRO actuations and analogue acquisitions. The design of the BTCU is made so as to be easily adaptable to all types of electrical interface needs. Extension of this concept is envisaged for power conditioning and distribution units, and a modular PCDU based on the BRIO concept is proposed.

  18. Microcomponent sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; McDonald, Carolyn E.

    1997-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  19. Microcomponent sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; McDonald, C.E.

    1997-03-18

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 14 figs.

  20. Architecture for Verifiable Software

    NASA Technical Reports Server (NTRS)

    Reinholtz, William; Dvorak, Daniel

    2005-01-01

    Verifiable MDS Architecture (VMA) is a software architecture that facilitates the construction of highly verifiable flight software for NASA's Mission Data System (MDS), especially for smaller missions subject to cost constraints. More specifically, the purpose served by VMA is to facilitate aggressive verification and validation of flight software while imposing a minimum of constraints on overall functionality. VMA exploits the state-based architecture of the MDS and partitions verification issues into elements susceptible to independent verification and validation, in such a manner that scaling issues are minimized, so that relatively large software systems can be aggressively verified in a cost-effective manner.

  1. Using Open Systems Architecture to Revolutionize Capability Acquisition

    DTIC Science & Technology

    2015-05-13

    Briefing by Nickolas Guertin, PE (DASN RDT&E), dated 13 May 2015, on using Open Systems Architecture to revolutionize capability acquisition: manage platform-unique elements, reduce acquisition cost and risk, and spur innovation.

  2. Look and Do Ancient Greece. Teacher's Manual: Primary Program, Greek Art & Architecture [and] Workbook: The Art and Architecture of Ancient Greece [and] K-4 Videotape. History through Art and Architecture.

    ERIC Educational Resources Information Center

    Luce, Ann Campbell

    This resource, containing a teacher's manual, reproducible student workbook, and a color teaching poster, is designed to accompany a 21-minute videotape program, but may be adapted for independent use. Part 1 of the program, "Greek Architecture," looks at elements of architectural construction as applied to Greek structures, and…

  3. Operational Concepts for a Generic Space Exploration Communication Network Architecture

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Vaden, Karl R.; Jones, Robert E.; Roberts, Anthony M.

    2015-01-01

    This document is one of three. It describes the Operational Concept (OpsCon) for a generic space exploration communication architecture. The purpose of this particular document is to identify communication flows and data types. Two other documents accompany this document, a security policy profile and a communication architecture document. The operational concepts should be read first followed by the security policy profile and then the architecture document. The overall goal is to design a generic space exploration communication network architecture that is affordable, deployable, maintainable, securable, evolvable, reliable, and adaptable. The architecture should also require limited reconfiguration throughout system development and deployment. System deployment includes: subsystem development in a factory setting, system integration in a laboratory setting, launch preparation, launch, and deployment and operation in space.

  4. Security Policy for a Generic Space Exploration Communication Network Architecture

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Sheehe, Charles J.; Vaden, Karl R.

    2016-01-01

    This document is one of three. It describes various security mechanisms and a security policy profile for a generic space-based communication architecture. Two other documents accompany this document: an Operations Concept (OpsCon) and a communication architecture document. The OpsCon should be read first, followed by the security policy profile described by this document and then the architecture document. The overall goal is to design a generic space exploration communication network architecture that is affordable, deployable, maintainable, securable, evolvable, reliable, and adaptable. The architecture should also require limited reconfiguration throughout system development and deployment. System deployment includes subsystem development in a factory setting, system integration in a laboratory setting, launch preparation, launch, and deployment and operation in space.

  5. An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures

    NASA Technical Reports Server (NTRS)

    Cohl, H.

    1994-01-01

    We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.
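
    The pattern described above - an FFT along one direction to decouple the Poisson operator, followed by many independent tridiagonal solves - can be sketched compactly. The example below is a simplified 2-D Cartesian analogue (periodic in x, homogeneous Dirichlet in y), not the paper's 3-D cylindrical ADI solver; grid sizes and the right-hand side are chosen only for illustration.

      import numpy as np

      def thomas(lower, diag, upper, rhs):
          """Direct solve of one tridiagonal system (Thomas algorithm), complex-safe."""
          n = len(rhs)
          c = np.zeros(n - 1, dtype=complex)
          d = np.zeros(n, dtype=complex)
          c[0] = upper[0] / diag[0]
          d[0] = rhs[0] / diag[0]
          for i in range(1, n):
              m = diag[i] - lower[i - 1] * c[i - 1]
              if i < n - 1:
                  c[i] = upper[i] / m
              d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / m
          x = np.zeros(n, dtype=complex)
          x[-1] = d[-1]
          for i in range(n - 2, -1, -1):
              x[i] = d[i] - c[i] * x[i + 1]
          return x

      # grid: periodic in x (Nx points), Ny interior points in y with u = 0 on the walls
      Nx, Ny = 64, 63
      dx, dy = 1.0 / Nx, 1.0 / (Ny + 1)
      xg = dx * np.arange(Nx)
      yg = dy * np.arange(1, Ny + 1)
      f = np.sin(2 * np.pi * xg)[:, None] * np.sin(np.pi * yg)[None, :]   # right-hand side

      fhat = np.fft.fft(f, axis=0)                 # decouple the periodic direction
      lam = (2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(Nx) / Nx)) / dx**2
      off = np.full(Ny - 1, 1.0 / dy**2)
      uhat = np.zeros_like(fhat)
      for k in range(Nx):                          # independent systems -> parallelizable
          diag = np.full(Ny, -2.0 / dy**2 - lam[k], dtype=complex)
          uhat[k] = thomas(off, diag, off, fhat[k])
      u = np.fft.ifft(uhat, axis=0).real

      # compare with the continuous solution u = -f / ((2*pi)**2 + pi**2)
      print(np.max(np.abs(u + f / ((2 * np.pi)**2 + np.pi**2))))   # small discretization error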

  6. Construct a Management Architecture

    DTIC Science & Technology

    2008-10-23

    Task: Consider management architecture options that better align functional and budget responsibility consistent with comprehensive strategic planning. Scope the leadership hierarchy to the appropriate management responsibilities, reduce layers of management and move decision making closer to issue identification.

  7. Robot Electronics Architecture

    NASA Technical Reports Server (NTRS)

    Garrett, Michael; Magnone, Lee; Aghazarian, Hrand; Baumgartner, Eric; Kennedy, Brett

    2008-01-01

    An electronics architecture has been developed to enable the rapid construction and testing of prototypes of robotic systems. This architecture is designed to be a research vehicle of great stability, reliability, and versatility. A system according to this architecture can easily be reconfigured (including expanded or contracted) to satisfy a variety of needs with respect to input, output, processing of data, sensing, actuation, and power. The architecture affords a variety of expandable input/output options that enable ready integration of instruments, actuators, sensors, and other devices as independent modular units. The separation of different electrical functions onto independent circuit boards facilitates the development of corresponding simple and modular software interfaces. As a result, both hardware and software can be made to expand or contract in modular fashion while expending a minimum of time and effort.

  8. Flight Test Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  9. Calculation of spherical harmonics and Wigner d functions by FFT. Applications to fast rotational matching in molecular replacement and implementation into AMoRe.

    PubMed

    Trapani, Stefano; Navaza, Jorge

    2006-07-01

    The FFT calculation of spherical harmonics, Wigner D matrices and rotation function has been extended to all angular variables in the AMoRe molecular replacement software. The resulting code avoids singularity issues arising from recursive formulas, performs faster and produces results with at least the same accuracy as the original code. The new code aims at permitting accurate and more rapid computations at high angular resolution of the rotation function of large particles. Test calculations on the icosahedral IBDV VP2 subviral particle showed that the new code performs on the average 1.5 times faster than the original code.

  10. Using overlay network architectures for scalable video distribution

    NASA Astrophysics Data System (ADS)

    Patrikakis, Charalampos Z.; Despotopoulos, Yannis; Fafali, Paraskevi; Cha, Jihun; Kim, Kyuheon

    2004-11-01

    Within the last years, the enormous growth of Internet based communication as well as the rapid increase of available processing power has led to the widespread use of multimedia streaming as a means to convey information. This work aims at providing an open architecture designed to support scalable streaming to a large number of clients using application layer multicast. The architecture is based on media relay nodes that can be deployed transparently to any existing media distribution scheme, which can support media streamed using the RTP and RTSP protocols. The architecture is based on overlay networks at application level, featuring rate adaptation mechanisms for responding to network congestion.

  11. An Architecture for Continuous Data Quality Monitoring in Medical Centers.

    PubMed

    Endler, Gregor; Schwab, Peter K; Wahl, Andreas M; Tenschert, Johannes; Lenz, Richard

    2015-01-01

    In the medical domain, data quality is very important. Since requirements and data change frequently, continuous and sustainable monitoring and improvement of data quality is necessary. Working together with managers of medical centers, we developed an architecture for a data quality monitoring system. The architecture enables domain experts to adapt the system during runtime to match their specifications using a built-in rule system. It also allows arbitrarily complex analyses to be integrated into the monitoring cycle. We evaluate our architecture by matching its components to the well-known data quality methodology TDQM.

  12. Architectural Knowledge in an SOA Infrastructure Reference Architecture

    NASA Astrophysics Data System (ADS)

    Zimmermann, Olaf; Kopp, Petra; Pappe, Stefan

    In this chapter, we present an industrial case study for the creation and usage of architectural knowledge. We first introduce the business domain, service portfolio, and knowledge management approach of the company involved in the case. Next, we introduce a Service-Oriented Architecture (SOA) infrastructure reference architecture as a primary carrier of architectural knowledge in this company. Moreover, we present how we harvested architectural knowledge from industry projects to create this reference architecture. We also present feedback from early reference architecture users. Finally, we conclude and give an outlook to future work.

  13. Software Architecture Evolution

    DTIC Science & Technology

    2013-12-01

    Architectural change is commonplace in real-world software systems. However, today's software architects have few tools to help...the target architecture (the intended design to which the system must evolve)...are known. In fact, of course, this is often not the case

  14. A Modular Robotic Architecture

    DTIC Science & Technology

    1990-11-01

    ...mobile robots will help alleviate these problems, and, if made widely available, will promote standardization and compatibility among systems throughout the industry. The Modular Robotic Architecture (MRA) is a generic control system that meets the above needs by providing developers with a standard set

  15. Embedded Instrumentation Systems Architecture

    DTIC Science & Technology

    2009-03-01

    and continuous test and evaluation. The architecture can also be useful in monitoring, diagnostics, and health management, as well as protection in...section describes the demonstration platform used in the effort to validate the first reference instantiation of the architecture. It included...advantage of the IEEE 1451 family of standards for smart sensors and transducers (Lee and Song 2003; Song and Lee 2006). The EI Node uses the IEEE 1451.X

  16. GTE: a new FFT based software to compute terrain correction on airborne gravity surveys in spherical approximation.

    NASA Astrophysics Data System (ADS)

    Capponi, Martina; Sampietro, Daniele; Sansò, Fernando

    2016-04-01

    The computation of the vertical attraction due to the topographic masses (terrain correction) is still a matter of study in both geodetic and geophysical applications. In fact it is required in high-precision geoid estimation by the remove-restore technique, and it is used to isolate the gravitational effect of anomalous masses in geophysical exploration. This topographical effect can be evaluated from the knowledge of a Digital Terrain Model in different ways: e.g. by means of numerical integration, by prisms, tesseroids, polyhedra or Fast Fourier Transform (FFT) techniques. The increasing resolution of recently developed digital terrain models, the increasing number of observation points due to the extensive use of airborne gravimetry, and the increasing accuracy of gravity data nowadays represent major issues for terrain correction computation. Classical methods such as prism or point-mass approximations are too slow, while Fourier-based techniques are usually too approximate for the required accuracy. In this work a new software package, called Gravity Terrain Effects (GTE), developed to guarantee high accuracy and fast computation of terrain corrections, is presented. GTE has been designed expressly for geophysical applications, allowing the computation not only of the effect of topographic and bathymetric masses but also of those due to sedimentary layers or to the Earth crust-mantle discontinuity (the so-called Moho). In the present contribution we summarize the basic theory of the software and its practical implementation. Basically, the GTE software is based on a new algorithm which, by exploiting the properties of the Fast Fourier Transform, allows the terrain correction to be computed quickly, in spherical approximation, at ground or airborne level. Some tests to prove its performance are also described, showing GTE's capability to compute highly accurate terrain corrections in a very short time. Results obtained for a real airborne survey with GTE
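
    GTE's spherical-approximation algorithm is not reproduced here, but the underlying idea - the terrain effect written as a convolution of a gridded mass distribution with a closed-form kernel and evaluated in a single FFT pass - can be illustrated in planar approximation for an airborne observation height, where the kernel is non-singular. Everything in the sketch (grid, rock density, flight altitude, and the thin-layer simplification of the topography) is an assumption made only for the example.

      # Vertical attraction, at constant flight altitude, of topography condensed
      # into a thin surface mass layer: a 2-D FFT convolution. Illustrative only.
      import numpy as np
      from scipy.signal import fftconvolve

      G = 6.674e-11                                  # m^3 kg^-1 s^-2
      rho, dx, dy, h = 2670.0, 100.0, 100.0, 1000.0  # density, grid step [m], altitude [m]

      # toy topography: a 500 m Gaussian hill on a 256 x 256 grid
      X, Y = np.meshgrid(np.arange(256) * dx, np.arange(256) * dy, indexing="ij")
      topo = 500.0 * np.exp(-((X - 12800.0)**2 + (Y - 12800.0)**2) / (2 * 4000.0**2))
      sigma = rho * topo                             # condensed surface density [kg/m^2]

      # point-mass kernel for the vertical component at height h (odd size, centered)
      u = (np.arange(-127, 128))[:, None] * dx
      v = (np.arange(-127, 128))[None, :] * dy
      kernel = G * h / (u**2 + v**2 + h**2) ** 1.5   # non-singular because h > 0

      g_z = fftconvolve(sigma, kernel, mode="same") * dx * dy   # attraction [m/s^2]
      print("peak attraction at altitude: %.1f mGal" % (g_z.max() * 1e5))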

  17. FFT-SOLUTION®

    EPA Pesticide Factsheets

    Technical product bulletin: this dispersant for oil spill cleanups should be applied as droplets. Manufacturer recommends that spray tips on booms be changed into solid tubing for injection of product 3 to 5 feet below water surface.

  18. Visual Adaptation

    PubMed Central

    Webster, Michael A.

    2015-01-01

    Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985

  19. Renal adaptation during hibernation.

    PubMed

    Jani, Alkesh; Martin, Sandra L; Jain, Swati; Keys, Daniel; Edelstein, Charles L

    2013-12-01

    Hibernators periodically undergo profound physiological changes including dramatic reductions in metabolic, heart, and respiratory rates and core body temperature. This review discusses the effect of hypoperfusion and hypothermia observed during hibernation on glomerular filtration and renal plasma flow, as well as specific adaptations in renal architecture, vasculature, the renin-angiotensin system, and upregulation of possible protective mechanisms during the extreme conditions endured by hibernating mammals. Understanding the mechanisms of protection against organ injury during hibernation may provide insights into potential therapies for organ injury during cold storage and reimplantation during transplantation.

  20. Renal adaptation during hibernation

    PubMed Central

    Martin, Sandra L.; Jain, Swati; Keys, Daniel; Edelstein, Charles L.

    2013-01-01

    Hibernators periodically undergo profound physiological changes including dramatic reductions in metabolic, heart, and respiratory rates and core body temperature. This review discusses the effect of hypoperfusion and hypothermia observed during hibernation on glomerular filtration and renal plasma flow, as well as specific adaptations in renal architecture, vasculature, the renin-angiotensin system, and upregulation of possible protective mechanisms during the extreme conditions endured by hibernating mammals. Understanding the mechanisms of protection against organ injury during hibernation may provide insights into potential therapies for organ injury during cold storage and reimplantation during transplantation. PMID:24049148

  1. Vibrational testing of trabecular bone architectures using rapid prototype models.

    PubMed

    Mc Donnell, P; Liebschner, M A K; Tawackoli, Wafa; Mc Hugh, P E

    2009-01-01

    The purpose of this study was to investigate if standard analysis of the vibrational characteristics of trabecular architectures can be used to detect changes in the mechanical properties due to progressive bone loss. A cored trabecular specimen from a human lumbar vertebra was microCT scanned and a three-dimensional, virtual model in stereolithography (STL) format was generated. Uniform bone loss was simulated using a surface erosion algorithm. Rapid prototype (RP) replicas were manufactured from these virtualised models with 0%, 16% and 42% bone loss. Vibrational behaviour of the RP replicas was evaluated by performing a dynamic compression test through a frequency range using an electro-dynamic shaker. The acceleration and dynamic force responses were recorded and fast Fourier transform (FFT) analyses were performed to determine the response spectrum. Standard resonant frequency analysis and damping factor calculations were performed. The RP replicas were subsequently tested in compression beyond failure to determine their strength and modulus. It was found that the reductions in resonant frequency with increasing bone loss corresponded well with reductions in apparent stiffness and strength. This suggests that structural dynamics has the potential to be an alternative diagnostic technique for osteoporosis, although significant challenges must be overcome to determine the effect of the skin/soft tissue interface, the cortex and variabilities associated with in vivo testing.
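
    The processing chain described above - FFT the measured force and acceleration, form a frequency response function, read off the resonant peak and estimate the damping factor - can be sketched as follows. A simulated single-degree-of-freedom response stands in for the recorded shaker data; the natural frequency, damping ratio and sampling parameters are invented for the illustration, and the half-power bandwidth method is used here as one common way to obtain a damping estimate.

      import numpy as np

      fs, T = 2000.0, 10.0                       # sampling rate [Hz], record length [s]
      t = np.arange(0, T, 1.0 / fs)
      fn_true, zeta_true, mass = 150.0, 0.02, 1.0
      wn = 2 * np.pi * fn_true

      # white-noise drive and the SDOF acceleration response (frequency-domain synthesis)
      rng = np.random.default_rng(3)
      force = rng.standard_normal(t.size)
      w = 2 * np.pi * np.fft.rfftfreq(t.size, 1.0 / fs)
      H_true = -w**2 / (mass * (wn**2 - w**2 + 2j * zeta_true * wn * w))   # accelerance
      accel = np.fft.irfft(H_true * np.fft.rfft(force), n=t.size)

      # frequency response function from the "measured" records
      H = np.fft.rfft(accel) / np.fft.rfft(force)
      freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
      peak = np.argmax(np.abs(H))
      fn_est = freqs[peak]

      # half-power (-3 dB) bandwidth -> damping ratio
      half = np.abs(H[peak]) / np.sqrt(2.0)
      lo = peak - np.argmax(np.abs(H[peak::-1]) < half)   # first bin below half power (left)
      hi = peak + np.argmax(np.abs(H[peak:]) < half)      # first bin below half power (right)
      zeta_est = (freqs[hi] - freqs[lo]) / (2.0 * fn_est)

      print(f"resonance ~ {fn_est:.1f} Hz, damping ratio ~ {zeta_est:.3f}")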

  2. The architectural relevance of cybernetics

    SciTech Connect

    Frazer, J.H.

    1993-12-31

    This title is taken from an article by Gordon Pask in Architectural Design September 1969. It raises a number of questions which this article attempts to answer. How did Gordon come to be writing for an architectural publication? What was his contribution to architecture? How does he now come to be on the faculty of a school of architecture? And what indeed is the architectural relevance of cybernetics? 12 refs.

  3. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  4. An FFT-based method for modeling protein folding and binding under crowding: benchmarking on ellipsoidal and all-atom crowders

    PubMed Central

    Qin, Sanbo; Zhou, Huan-Xiang

    2013-01-01

    It is now well recognized that macromolecular crowding can exert significant effects on protein folding and binding stability. In order to calculate such effects in direct simulations of proteins mixed with bystander macromolecules, the latter (referred to as crowders) are usually modeled as spheres and the proteins represented at a coarse-grained level. Our recently developed postprocessing approach allows the proteins to be represented at the all-atom level but, for computational efficiency, has only been implemented for spherical crowders. Modeling crowder molecules in cellular environments and in vitro experiments as spheres may distort their effects on protein stability. Here we present a new method that is capable of treating aspherical crowders. The idea, borrowed from protein-protein docking, is to calculate the excess chemical potential of the proteins in crowded solution by fast Fourier transform (FFT). As the first application, we studied the effects of ellipsoidal crowders on the folding and binding free energies of all-atom proteins, and found, in agreement with previous direct simulations with coarse-grained protein models, that the aspherical crowders exert greater stabilization effects than spherical crowders of the same volume. Moreover, as demonstrated here, the FFT-based method has the important property that its computational cost does not increase strongly even when the level of detail in representing the crowders is increased all the way to all-atom, thus significantly accelerating realistic modeling of protein folding and binding in cell-like environments. PMID:24187527

  5. Error and Complexity Analysis for a Collocation-Grid-Projection Plus Precorrected-FFT Algorithm for Solving Potential Integral Equations with LaPlace or Helmholtz Kernels

    NASA Technical Reports Server (NTRS)

    Phillips, J. R.

    1996-01-01

    In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is O(n log n), nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to O(n^(4/3)). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.

  6. An FFT-based method for modeling protein folding and binding under crowding: benchmarking on ellipsoidal and all-atom crowders.

    PubMed

    Qin, Sanbo; Zhou, Huan-Xiang

    2013-10-01

    It is now well recognized that macromolecular crowding can exert significant effects on protein folding and binding stability. In order to calculate such effects in direct simulations of proteins mixed with bystander macromolecules, the latter (referred to as crowders) are usually modeled as spheres and the proteins represented at a coarse-grained level. Our recently developed postprocessing approach allows the proteins to be represented at the all-atom level but, for computational efficiency, has only been implemented for spherical crowders. Modeling crowder molecules in cellular environments and in vitro experiments as spheres may distort their effects on protein stability. Here we present a new method that is capable of treating aspherical crowders. The idea, borrowed from protein-protein docking, is to calculate the excess chemical potential of the proteins in crowded solution by fast Fourier transform (FFT). As the first application, we studied the effects of ellipsoidal crowders on the folding and binding free energies of all-atom proteins, and found, in agreement with previous direct simulations with coarse-grained protein models, that the aspherical crowders exert greater stabilization effects than spherical crowders of the same volume. Moreover, as demonstrated here, the FFT-based method has the important property that its computational cost does not increase strongly even when the level of detail in representing the crowders is increased all the way to all-atom, thus significantly accelerating realistic modeling of protein folding and binding in cell-like environments.

  7. Blueprints in Sweden. Symptom load in Swedish adolescents in studies of Functional Family Therapy (FFT), Multisystemic Therapy (MST) and Multidimensional Treatment Foster Care (MTFC).

    PubMed

    Gustle, Lars-Henry; Hansson, Kjell; Sundell, Knut; Lundh, Lars-Gunnar; Löfholm, Cecilia Andrée

    2007-01-01

    The purpose of the present study was to compare symptom load in youth groups treated with three Swedish Blueprint programmes - Functional Family Therapy (FFT), Multisystemic Therapy (MST) and Multidimensional Treatment Foster Care (MTFC) - to see if symptom load matches the intensity of the treatment model as expected. These youth groups were also compared with in- and outpatients from child and adolescent psychiatry, and a normal comparison group. In addition, we compared the symptom load of their mothers. Symptom load was measured by the Achenbach System of Empirically Based Assessment (ASEBA) in the adolescents, and by the Symptom Checklist 90 in their mothers. The results showed that youth in the MST and MTFC studies had a higher symptom load than in the FFT study, and the same pattern of results was found in their mothers. It is concluded that there seems to be a reasonable correspondence between the offered resources and the symptom load among youth and parents; treatment methods with higher intensity have been offered to youth with higher symptom load. The correlation between internalized and externalized symptoms was high in all study groups. The MST and MTFC groups had an equally high total symptom load as the psychiatric inpatient sample.

  8. COREBA (cognition-oriented emergent behavior architecture)

    NASA Astrophysics Data System (ADS)

    Kwak, S. David

    2000-06-01

    Currently, many behavior implementation technologies are available for modeling human behaviors in Department of Defense (DOD) computerized systems. However, it is commonly known that no single currently adopted behavior implementation technology is capable of fully representing complex and dynamic human decision-making and cognition behaviors. The author's view is that the current situation can be greatly improved if multiple technologies are integrated within a well-designed overarching architecture that amplifies the merits of each of the participating technologies while suppressing the limitations inherent in each of them. COREBA uses an overarching behavior integration architecture that makes the multiple implementation technologies cooperate in a homogeneous environment while collectively transcending the limitations associated with the individual implementation technologies. Specifically, COREBA synergistically integrates Artificial Intelligence and Complex Adaptive Systems under the Rational Behavior Model multi-level, multi-paradigm behavior architecture. This paper describes the applicability of COREBA in the DOD domain, the behavioral capabilities and characteristics of COREBA, and how the COREBA architecture integrates the various behavior implementation technologies.

  9. Avionics System Architecture Tool

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Hall, Ronald; Traylor, marcus; Whitfield, Adrian

    2005-01-01

    Avionics System Architecture Tool (ASAT) is a computer program intended for use during the avionics-system-architecture-design phase of the process of designing a spacecraft for a specific mission. ASAT enables simulation of the dynamics of the command-and-data-handling functions of the spacecraft avionics in the scenarios in which the spacecraft is expected to operate. ASAT is built upon I-Logix Statemate MAGNUM, providing a complement of dynamic system modeling tools, including a graphical user interface (GUI), model-checking capabilities, and a simulation engine. ASAT augments this with a library of predefined avionics components and additional software to support building and analyzing avionics hardware architectures using these components.

  10. Advanced ground station architecture

    NASA Technical Reports Server (NTRS)

    Zillig, David; Benjamin, Ted

    1994-01-01

    This paper describes a new station architecture for NASA's Ground Network (GN). The architecture makes efficient use of emerging technologies to provide dramatic reductions in size, operational complexity, and operational and maintenance costs. The architecture, which is based on recent receiver work sponsored by the Office of Space Communications Advanced Systems Program, allows integration of both GN and Space Network (SN) modes of operation in the same electronics system. It is highly configurable through software and the use of charged coupled device (CCD) technology to provide a wide range of operating modes. Moreover, it affords modularity of features which are optional depending on the application. The resulting system incorporates advanced RF, digital, and remote control technology capable of introducing significant operational, performance, and cost benefits to a variety of NASA communications and tracking applications.

  11. Agent Architectures for Compliance

    NASA Astrophysics Data System (ADS)

    Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua

    A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.

  12. Color education in architecture

    NASA Astrophysics Data System (ADS)

    Unver, Rengin

    2002-06-01

    Architecture is an interdisciplinary profession that combines and uses the elements of various major fields such as humanities, social and physical sciences, technology and creative arts. The main aim of architectural education is to enable students to acquire the skills to create designs that are adequate both aesthetically and technically. The goals of the undergraduate program can be summarized as: the transfer of information on subjects and problems related to the practice of the profession, the acquisition of relevant skills, and information on specialist subjects. Color is one of the most important design parameters every architect has to use. Architect candidates should be equipped in the field of color just as they are in other relevant subjects. This paper deals with the significance, goals, methods and the place of color education in the undergraduate program of architectural education.

  13. Processor architecture and data buffering

    NASA Technical Reports Server (NTRS)

    Mulder, Hans; Flynn, Michael J.

    1992-01-01

    A set of architectures from three major architecture families: stack, register, and memory-to-memory is discussed. It is shown that scalable architectures are not applicable for low-density technologies because they require at least 32 words of local memory. Software support is shown to be capable of bridging the performance gap between scalable and nonscalable architectures. A register architecture with 32 words of local memory allocated interprocedurally outperforms scalable architectures with equal-sized local memories and even some with larger local memories. The performance advantage of unscalable architectures becomes significant when, in addition to quality compile-time support, a small cache is added to an unscalable architecture. A 32-register architecture with a 512-byte cache executes 20 percent fewer cycles when compared with an 8-set multiple overlapping set organization.

  14. Information architecture. Volume 3: Guidance

    SciTech Connect

    1997-04-01

    The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline or de facto Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  15. Lunar architecture and urbanism

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    1992-01-01

    Human civilization and architecture have defined each other for over 5000 years on Earth. Even in the novel environment of space, persistent issues of human urbanism will eclipse, within a historically short time, the technical challenges of space settlement that dominate our current view. By adding modern topics in space engineering, planetology, life support, human factors, material invention, and conservation to their already renaissance array of expertise, urban designers can responsibly apply ancient, proven standards to the exciting new opportunities afforded by space. Inescapable facts about the Moon set real boundaries within which tenable lunar urbanism and its component architecture must eventually develop.

  16. MWAHCA: A Multimedia Wireless Ad Hoc Cluster Architecture

    PubMed Central

    Diaz, Juan R.; Jimenez, Jose M.; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams when they have passed through a wireless ad hoc network. It requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to provide multimedia streams. The proposed architecture adapts the wireless network topology in order to improve the quality of audio and video transmissions. In order to achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters which are specialized in specific multimedia traffic. The real-system performance study provided at the end of the paper demonstrates the feasibility of the proposal. PMID:24737996

  17. MWAHCA: a multimedia wireless ad hoc cluster architecture.

    PubMed

    Diaz, Juan R; Lloret, Jaime; Jimenez, Jose M; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams when they have passed through a wireless ad hoc network. It requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to provide multimedia streams. The proposed architecture adapts the wireless network topology in order to improve the quality of audio and video transmissions. In order to achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters which are specialized in specific multimedia traffic. The real-system performance study provided at the end of the paper demonstrates the feasibility of the proposal.

  18. Low Power Adder Based Auditory Filter Architecture

    PubMed Central

    Jayanthi, V. S.

    2014-01-01

    Cochlea devices are powered by batteries and should possess a long working life to avoid replacing the devices at regular intervals of years. Hence, devices with low power consumption are required. In cochlea devices there are numerous filters, each responsible for frequency-variant signals, which help in identifying speech signals of different audible ranges. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to a TSMC 65 nm technology node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture has reduced leakage power by 15% and increased performance by 2.76%. PMID:25506073
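
    The abstract above describes a multiplierless, LUT-based FIR realization only at a high level; one common way to build such a filter is distributed arithmetic, where tap multiplications are replaced by table lookups over input bit-slices followed by shift-and-add accumulation. The sketch below illustrates that idea in software; the coefficient values, bit width, and LUT organization are illustrative assumptions, not details taken from the paper.

```python
# Minimal software sketch of a distributed-arithmetic (DA) FIR filter,
# one common way to realize a multiplierless, LUT-based filter in hardware.
# The paper's exact LUT scheme and adder structure may differ.

def build_da_lut(coeffs):
    """LUT[b] = sum of coeffs[i] for every tap i whose bit is set in b."""
    n = len(coeffs)
    return [sum(c for i, c in enumerate(coeffs) if (b >> i) & 1)
            for b in range(1 << n)]

def da_fir_output(samples, lut, n_taps, width=8):
    """Compute one output sample from the last n_taps inputs using only
    table lookups, shifts, and adds (no multipliers)."""
    window = samples[-n_taps:]
    acc = 0
    for bit in range(width):                  # process inputs bit-serially
        addr = 0
        for i, x in enumerate(window):
            addr |= ((x >> bit) & 1) << i     # gather this bit from each tap input
        acc += lut[addr] << bit               # shift-and-add accumulation
    return acc

coeffs = [3, -1, 4, 2]                        # illustrative integer taps
lut = build_da_lut(coeffs)
print(da_fir_output([5, 9, 2, 7], lut, n_taps=4))   # 3*5 - 1*9 + 4*2 + 2*7 = 28
```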

  19. UAV Cooperation Architectures for Persistent Sensing

    SciTech Connect

    Roberts, R S; Kent, C A; Jones, E D

    2003-03-20

    With the number of small, inexpensive Unmanned Air Vehicles (UAVs) increasing, it is feasible to build multi-UAV sensing networks. In particular, by using UAVs in conjunction with unattended ground sensors, a degree of persistent sensing can be achieved. With proper UAV cooperation algorithms, sensing is maintained even when exceptional events, e.g., the loss of a UAV, occur. In this paper a cooperation technique that allows multiple UAVs to perform coordinated, persistent sensing with unattended ground sensors over a wide area is described. The technique automatically adapts the UAV paths so that, on average, the amount of time that any sensor has to wait for a UAV revisit is minimized. We also describe the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture. This architecture is designed to help simulate and operate distributed sensor networks where multiple UAVs are used to collect data.
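
    As a rough illustration of the persistent-sensing objective described above (minimizing the average time any ground sensor waits for a UAV revisit), the toy sketch below implements a simple greedy revisit policy for a single UAV. The sensor layout, speed, and staleness-versus-distance weighting are illustrative assumptions; this is a baseline sketch, not the paper's cooperation algorithm.

```python
import numpy as np

# Toy greedy revisit policy for the persistent-sensing objective:
# minimize the average time a ground sensor waits for a UAV revisit.
# All parameters are illustrative; this is not the paper's algorithm.

rng = np.random.default_rng(4)
sensors = rng.uniform(0, 10, (6, 2))           # sensor positions (km), illustrative
uav_pos = np.array([5.0, 5.0])
uav_speed = 0.5                                # km per time step
last_visit = np.zeros(len(sensors))
t, waits = 0.0, []

for _ in range(200):
    staleness = t - last_visit                                  # time since last revisit
    travel = np.linalg.norm(sensors - uav_pos, axis=1) / uav_speed
    target = int(np.argmax(staleness - 0.5 * travel))           # favor stale, nearby sensors
    t += travel[target]
    waits.append(t - last_visit[target])
    last_visit[target] = t
    uav_pos = sensors[target]

print(f"average revisit wait: {np.mean(waits):.1f} time steps")
```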

  20. Low power adder based auditory filter architecture.

    PubMed

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlea devices are powered by batteries and should possess a long working life to avoid replacing the devices at regular intervals of years. Hence, devices with low power consumption are required. In cochlea devices there are numerous filters, each responsible for frequency-variant signals, which help in identifying speech signals of different audible ranges. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to a TSMC 65 nm technology node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture has reduced leakage power by 15% and increased performance by 2.76%.

  1. HADL: HUMS Architectural Description Language

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Adavi, V.; Agarwal, N.; Gullapalli, S.; Kumar, P.; Sundaram, P.

    2004-01-01

    Specification of architectures is an important prerequisite for the evaluation of architectures. With the increase in the growth of health usage and monitoring systems (HUMS) in commercial and military domains, the need for the design and evaluation of HUMS architectures has also been on the increase. In this paper, we describe HADL, the HUMS Architectural Description Language that we have designed for this purpose. In particular, we describe the features of the language, illustrate them with examples, and show how we use it in designing domain-specific HUMS architectures. A companion paper contains details on our design methodology for HUMS architectures.

  2. Adaptive Behavior for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2009-01-01

    The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.

  3. Adaptive hybrid control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    Simple methods for the design of adaptive force and position controllers for robot manipulators within the hybrid control architecture are presented. The force controller is composed of an adaptive PID feedback controller, an auxiliary signal and a force feedforward term, and it achieves tracking of desired force setpoints in the constraint directions. The position controller consists of adaptive feedback and feedforward controllers and an auxiliary signal, and it accomplishes tracking of desired position trajectories in the free directions. The controllers are capable of compensating for dynamic cross-couplings that exist between the position and force control loops in the hybrid control architecture. The adaptive controllers do not require knowledge of the complex dynamic model or parameter values of the manipulator or the environment. The proposed control schemes are computationally fast and suitable for implementation in on-line control with high sampling rates.

  4. Test Architecture, Test Retrofit

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Davidson, Fred

    2009-01-01

    Just like buildings, tests are designed and built for specific purposes, people, and uses. However, both buildings and tests grow and change over time as the needs of their users change. Sometimes, they are also both used for purposes other than those intended in the original designs. This paper explores architecture as a metaphor for language…

  5. Emulating an MIMD architecture

    SciTech Connect

    Su Bogong; Grishman, R.

    1982-01-01

    As part of a research effort in parallel processor architecture and programming, the ultracomputer group at New York University has performed extensive simulation of parallel programs. To speed up these simulations, a parallel processor emulator, using the microprogrammable Puma computer system previously designed and built at NYU, has been developed. 8 references.

  6. Can Architecture Be Taught?

    ERIC Educational Resources Information Center

    Kroll, Lucien; Mikellides, Byron

    1981-01-01

    The academic world is seen as remote from day-to-day reality. A practicing architect's experiences teaching architecture students at the Saint-Luc School in Brussels are described, in which role playing was used to bring reality to the classroom. (MLW)

  7. Digital transversal filter architecture

    NASA Astrophysics Data System (ADS)

    Greenberger, A. J.

    1985-01-01

    A fast and efficient architecture is described for the realization of a pipelined, fully parallel digital transversal filter in VLSI. The order of summation is changed such that no explicit multiplication is seen, gated accumulators are used, and the coefficients are circulated. Estimates for the number of transistors needed for a CMOS implementation are given.

  8. Information network architectures

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Graphs, charts, diagrams and outlines of information relative to information network architectures for advanced aerospace missions, such as the Space Station, are presented. Local area information networks are considered a likely technology solution. The principal needs for the network are listed.

  9. 1989 Architectural Exhibition Winners.

    ERIC Educational Resources Information Center

    School Business Affairs, 1990

    1990-01-01

    Winners of the 1989 Architectural Exhibition sponsored annually by the ASBO International's School Facilities Research Committee include the Brevard Performing Arts Center (Melbourne, Florida), the Capital High School (Santa Fe, New Mexico), Gage Elementary School (Rochester, Minnesota), the Lakewood (Ohio) High School Natatorium, and three other…

  10. GNU debugger internal architecture

    SciTech Connect

    Miller, P.; Nessett, D.; Pizzi, R.

    1993-12-16

    This document describes the internal architecture and implementation of the GNU debugger, gdb. Topics include inferior process management, command execution, symbol table management and remote debugging. Call graphs for specific functions are supplied. This document is not a complete description, but it offers a developer an overview that is the place to start before making modifications.

  11. Symbolic Architectures for Cognition

    DTIC Science & Technology

    1989-01-01

    Only OCR fragments of this report's text survive in the record; the recoverable pieces are citation fragments, including von Cranach, Foppa, Lepenies, and Ploog (1979); a Morgan Kaufmann (Los Altos, CA) volume on learning; and VanLehn, K. (ed.), 1989, Architectures for Intelligence, Hillsdale, NJ: Erlbaum.

  12. Tutorial on architectural acoustics

    NASA Astrophysics Data System (ADS)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  13. [Architecture, budget and dignity].

    PubMed

    Morel, Etienne

    2012-01-01

    Drawing on its dynamic strengths, a psychiatric unit develops various projects and care techniques. In this framework, the institute director must make a number of choices with regard to architecture. Why renovate the psychiatry building? What financial investments are required? What criteria should be followed? What if the major argument was based on the respect of the patient's dignity?

  14. [Architecture and movement].

    PubMed

    Rivallan, Armel

    2012-01-01

    Leading an architectural project means accompanying the movement which it induces within the teams. Between questioning, uncertainty and fear, the organisational changes inherent to the new facility must be subject to constructive and ongoing exchanges. Ethics, safety and training are revised and the unit projects are sometimes modified.

  15. INL Generic Robot Architecture

    SciTech Connect

    2005-03-30

    The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites and low-level proprietary control application programming interfaces (e.g. mobility, aria, aware, player, etc.).

  16. Human Symbol Manipulation within an Integrated Cognitive Architecture

    ERIC Educational Resources Information Center

    Anderson, John R.

    2005-01-01

    This article describes the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture (Anderson et al., 2004; Anderson & Lebiere, 1998) and its detailed application to the learning of algebraic symbol manipulation. The theory is applied to modeling the data from a study by Qin, Anderson, Silk, Stenger, & Carter (2004) in which children…

  17. India's Vernacular Architecture as a Reflection of Culture.

    ERIC Educational Resources Information Center

    Masalski, Kathleen Woods

    This paper contains the narrative for a slide presentation on the architecture of India. Through the narration, the geography and climate of the country and the social conditions of the Indian people are discussed. Roofs and windows are adapted for the hot, rainy climate, while the availability of building materials ranges from palm leaves to mud…

  18. A Simple Physical Optics Algorithm Perfect for Parallel Computing Architecture

    NASA Technical Reports Server (NTRS)

    Imbriale, W. A.; Cwik, T.

    1994-01-01

    A reflector antenna computer program based upon a simple discrete approximation of the radiation integral has proven to be extremely easy to adapt to the parallel computing architecture of the modest number of large-grain computing elements such as are used in the Intel iPSC and Touchstone Delta parallel machines.

  19. 11. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster General's Office Standard Plan 82, sheet 1. Lithograph on linen architectural drawing. April 1893 3 ELEVATIONS, 3 PLANS AND A PARTIAL SECTION - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  20. 12. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster Generals Office Standard Plan 82, sheet 2, April 1893. Lithograph on linen architectural drawing. DETAILS - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  1. Acoustics in Architectural Design: An Annotated Bibliography on Architectural Acoustics.

    ERIC Educational Resources Information Center

    DOELLE, LESLIE L.

    The purpose of this annotated bibliography on architectural acoustics was (1) to compile a classified bibliography, including most of those publications on architectural acoustics published in English, French, and German which can supply a useful and up-to-date source of information for those encountering any architectural-acoustic design…

  2. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
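
    The ideal linear (Hotelling) observer mentioned above has a standard closed form: the template is w = K^-1 (s2 - s1) and the detectability is SNR^2 = (s2 - s1)^T K^-1 (s2 - s1), where s2 - s1 is the mean signal difference and K the data covariance. The sketch below computes both from sample statistics; the image sizes, noise model, and regularization are illustrative assumptions rather than details from the paper.

```python
import numpy as np

# Minimal sketch of the ideal linear (Hotelling) observer for a
# signal-detection task, using the standard closed form
#   w = K^{-1} (mean_signal - mean_background),  SNR^2 = dS^T K^{-1} dS.
# Array shapes and noise parameters are illustrative, not from the paper.

rng = np.random.default_rng(0)
n_pix, n_train = 64, 500
background = rng.normal(0.0, 1.0, (n_train, n_pix))        # signal-absent images
signal = background + rng.normal(0.2, 0.05, n_pix)          # signal-present images

d_s = signal.mean(axis=0) - background.mean(axis=0)          # mean signal difference
K = np.cov(background, rowvar=False) + 1e-6 * np.eye(n_pix)  # regularized covariance

w = np.linalg.solve(K, d_s)                                  # Hotelling template
snr2 = d_s @ w                                               # Hotelling detectability SNR^2

test_stat = rng.normal(size=n_pix) @ w                       # apply template to a new image
print(f"Hotelling SNR^2 = {snr2:.2f}, example test statistic = {test_stat:.2f}")
```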

  3. An epigenetic toolkit allows for diverse genome architectures in eukaryotes

    PubMed Central

    Maurer-Alcalá, Xyrus X.; Katz, Laura A.

    2015-01-01

    Genome architecture varies considerably among eukaryotes in terms of both size and structure (e.g. distribution of sequences within the genome, elimination of DNA during formation of somatic nuclei). The diversity in eukaryotic genome architectures and the dynamic processes that they undergo are only possible due to the well-developed nature of an epigenetic toolkit, which likely existed in the Last Eukaryotic Common Ancestor (LECA). This toolkit may have arisen as a means of navigating the genomic conflict that arose from the expansion of transposable elements within the ancestral eukaryotic genome. This toolkit has been coopted to support the dynamic nature of genomes in lineages across the eukaryotic tree of life. Here we highlight how the changes in genome architecture in diverse eukaryotes are regulated by epigenetic processes by focusing on DNA elimination, genome rearrangements, and adaptive changes to genome architecture. The ability to epigenetically modify and regulate genomes has contributed greatly to the diversity of eukaryotes observed today. PMID:26649755

  4. Biologically relevant neural network architectures for support vector machines.

    PubMed

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms including variants of maximum margin, 1-norm, 2-norm and ν-SVM is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural implementation in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences and bisymmetry is ubiquitous in motor systems it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme.

  5. An epigenetic toolkit allows for diverse genome architectures in eukaryotes.

    PubMed

    Maurer-Alcalá, Xyrus X; Katz, Laura A

    2015-12-01

    Genome architecture varies considerably among eukaryotes in terms of both size and structure (e.g. distribution of sequences within the genome, elimination of DNA during formation of somatic nuclei). The diversity in eukaryotic genome architectures and the dynamic processes are only possible due to the well-developed epigenetic toolkit, which probably existed in the Last Eukaryotic Common Ancestor (LECA). This toolkit may have arisen as a means of navigating the genomic conflict that arose from the expansion of transposable elements within the ancestral eukaryotic genome. This toolkit has been coopted to support the dynamic nature of genomes in lineages across the eukaryotic tree of life. Here we highlight how the changes in genome architecture in diverse eukaryotes are regulated by epigenetic processes, such as DNA elimination, genome rearrangements, and adaptive changes to genome architecture. The ability to epigenetically modify and regulate genomes has contributed greatly to the diversity of eukaryotes observed today.

  6. The MDS autonomous control architecture

    NASA Technical Reports Server (NTRS)

    Gat, E.

    2000-01-01

    We describe the autonomous control architecture for the JPL Mission Data System (MDS). MDS is a comprehensive new software infrastructure for supporting unmanned space exploration. The autonomous control architecture is one component of MDS designed to enable autonomous operations.

  7. Memory performance of Prolog architectures

    SciTech Connect

    Tick, E.

    1988-01-01

    Memory Performance of Prolog Architectures addresses these problems and reports dynamic data and instruction referencing characteristics of both sequential and parallel prolog architectures and corresponding uni-processor and multi-processor memory-hierarchy performance tradeoffs. Computer designers and logic programmers will find this work to be a valuable reference with many practical applications. Memory Performance of Prolog Architectures will also serve as an important textbook for graduate level courses in computer architecture and/or performance analysis.

  8. Root architecture remodeling induced by phosphate starvation.

    PubMed

    Sato, Aiko; Miura, Kenji

    2011-08-01

    Plants have evolved efficient strategies for utilizing nutrients in the soil in order to survive, grow, and reproduce. Inorganic phosphate (Pi) is a major macroelement source for plant growth; however, the availability and distribution of Pi are varying widely across locations. Thus, plants in many areas experience Pi deficiency. To maintain cellular Pi homeostasis, plants have developed a series of adaptive responses to facilitate external Pi acquisition, limit Pi consumption, and adjust Pi recycling internally under Pi starvation conditions. This review focuses on the molecular regulators that modulate Pi starvation-induced root architectural changes.

  9. An intelligent CNC machine control system architecture

    SciTech Connect

    Miller, D.J.; Loucks, C.S.

    1996-10-01

    Intelligent, agile manufacturing relies on automated programming of digitally controlled processes. Currently, processes such as Computer Numerically Controlled (CNC) machining are difficult to automate because of highly restrictive controllers and poor software environments. It is also difficult to utilize sensors and process models for adaptive control, or to integrate machining processes with other tasks within a factory floor setting. As part of a Laboratory Directed Research and Development (LDRD) program, a CNC machine control system architecture based on object-oriented design and graphical programming has been developed to address some of these problems and to demonstrate automated agile machining applications using platform-independent software.

  10. Using natural variation to investigate the function of individual amino acids in the sucrose-binding box of fructan:fructan 6G-fructosyltransferase (6G-FFT) in product formation.

    PubMed

    Ritsema, Tita; Verhaar, Auke; Vijn, Irma; Smeekens, Sjef

    2005-07-01

    Enzymes of the glycosyl hydrolase family 32 are highly similar with respect to primary sequence but catalyze divergent reactions. Previously, the importance of the conserved sucrose-binding box in determining product specificity of onion fructan:fructan 6G-fructosyltransferase (6G-FFT) was established [Ritsema et al., 2004, Plant Mol. Biol. 54: 853-863]. Onion 6G-FFT synthesizes the complex fructan neo-series inulin by transferring fructose residues to either a terminal fructose or a terminal glucose residue. In the present study we have elucidated the molecular determinants of product specificity by substitution of individual amino acids of the sucrose binding box with amino acids that are present on homologous positions in other fructosyltransferases or vacuolar invertases. Substituting the presumed nucleophile Asp85 of the beta-fructosidase motif resulted in an inactive enzyme. 6G-FFT mutants S87N and S87D did not change substrate or product specificities, whereas mutants N84Y and N84G resulted in an inactive enzyme. Most interestingly, mutants N84S, N84A, and N84Q added fructose residues preferably to a terminal fructose and hardly to the terminal glucose. This resulted in the preferential production of inulin-type fructans. Combining mutations showed that amino acid 84 determines product specificity of 6G-FFT irrespective of the amino acid at position 87.

  11. Cognitive Architectures for Multimedia Learning

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2006-01-01

    This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio's dual coding theory,…

  12. Architectural Adventures in Your Community

    ERIC Educational Resources Information Center

    Henn, Cynthia A.

    2007-01-01

    Due to architecture's complexity, it can be challenging to develop lessons for the students, and consequently, the teaching of architecture is frequently overlooked. Every community has an architectural history. For example, the community in which the author's students live has a variety of historic houses from when the community originated (the…

  13. Synthesis and operation of an FFT-decoupled fixed-order reversed-field pinch plasma control system based on identification data

    NASA Astrophysics Data System (ADS)

    Olofsson, K. Erik J.; Brunsell, Per R.; Witrant, Emmanuel; Drake, James R.

    2010-10-01

    Recent developments and applications of system identification methods for the reversed-field pinch (RFP) machine EXTRAP T2R have yielded plasma response parameters for decoupled dynamics. These data sets are fundamental for a real-time implementable fast Fourier transform (FFT) decoupled discrete-time fixed-order strongly stabilizing synthesis as described in this work. Robustness is assessed over the data set by bootstrap calculation of the sensitivity transfer function worst-case H∞-gain distribution. Output tracking and magnetohydrodynamic mode m = 1 tracking are considered in the same framework simply as two distinct weighted traces of a performance channel output-covariance matrix as derived from the closed-loop discrete-time Lyapunov equation. The behaviour of the resulting multivariable controller is investigated with dedicated T2R experiments.

  14. Architecture and Monumental (Study About form in Architecture)

    NASA Astrophysics Data System (ADS)

    Pane, I. F.; Suwantoro, H.; Zahrah, W.; Sianipar, R. A.

    2017-03-01

    Architecture develops along with the development of human history, so architecture is a field of study related to humans both physically and non-physically. The development of architecture is a long process within the culture in which the architecture develops. Physically, architecture has taken a different shape in every historical phase, and each shape has a different historical background. The important buildings of a period always leave an impression, and this impression remains even now, in the postmodern era. Starting from the phenomena that appear in architecture, this study focuses on monumental buildings by analyzing the form of buildings in this era. The objects of the study are buildings in Medan that convey a monumental impression (the Maimun Palace). A qualitative approach is applied to give more insight into the history, theory, and criticism of architecture. The results of the study describe the monumental impression of the object of study and the forms of the building that support that impression.

  15. Climate adaptation

    NASA Astrophysics Data System (ADS)

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  16. Instrumented Architectural Simulation System

    NASA Technical Reports Server (NTRS)

    Delagi, B. A.; Saraiya, N.; Nishimura, S.; Byrd, G.

    1987-01-01

    Simulation of systems at an architectural level can offer an effective way to study critical design choices if (1) the performance of the simulator is adequate to examine designs executing significant code bodies, not just toy problems or small application fragements, (2) the details of the simulation include the critical details of the design, (3) the view of the design presented by the simulator instrumentation leads to useful insights on the problems with the design, and (4) there is enough flexibility in the simulation system so that the asking of unplanned questions is not suppressed by the weight of the mechanics involved in making changes either in the design or its measurement. A simulation system with these goals is described together with the approach to its implementation. Its application to the study of a particular class of multiprocessor hardware system architectures is illustrated.

  17. Generic robot architecture

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.

  18. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
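
    The DFT-IDFT overlap-and-save method referenced above can be summarized in a few lines: each input block of length equal to the DFT size is circularly convolved with the filter in the frequency domain, and the first (filter length minus one) aliased output samples of each block are discarded. The sketch below is a minimal single-threaded version for reference; the block size and filter taps are illustrative, and the parallel subfilter decomposition described in the report is not shown.

```python
import numpy as np

# Minimal sketch of DFT/IDFT overlap-and-save block convolution, the
# frequency-domain filtering method these architectures build on.
# Block size and filter taps are illustrative.

def overlap_save(x, h, nfft=64):
    m = len(h)
    step = nfft - (m - 1)                          # new samples consumed per block
    H = np.fft.fft(h, nfft)
    x_pad = np.concatenate([np.zeros(m - 1), x,
                            np.zeros(step - (len(x) % step or step))])
    y = []
    for start in range(0, len(x_pad) - nfft + 1, step):
        block = x_pad[start:start + nfft]
        y_block = np.fft.ifft(np.fft.fft(block) * H)
        y.append(y_block[m - 1:].real)             # drop the aliased first m-1 samples
    return np.concatenate(y)[:len(x)]

x = np.random.default_rng(1).normal(size=200)
h = np.array([0.25, 0.5, 0.25])
assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)], atol=1e-10)
```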

  19. Architectural-acoustics consulting

    NASA Astrophysics Data System (ADS)

    Hoover, Anthony K.

    2004-05-01

    Consulting involves both the science of acoustics and the art of communication, requiring an array of inherent and created skills. Perhaps because consulting on architectural acoustics is a relatively new field, there is a remarkable variety of career paths, all influenced by education, interest, and experience. Many consultants juggle dozens of chargeable projects at a time, not to mention proposals, seminars, teaching, articles, business concerns, and professional-society activities. This paper will discuss various aspects of career paths, projects, and clients as they relate to architectural-acoustics consulting. The intended emphasis will be considerations for those who may be interested in such a career, noting that consultants generally seem to thrive on the numerous challenges.

  20. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Consequently, the consistency of these diagrams needs to be verified in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  1. Aerobot Autonomy Architecture

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Hall, Jeffery L.; Kulczycki, Eric A.; Cameron, Jonathan M.; Morfopoulos, Arin C.; Clouse, Daniel S.; Montgomery, James F.; Ansar, Adnan I.; Machuzak, Richard J.

    2009-01-01

    An architecture for autonomous operation of an aerobot (i.e., a robotic blimp) to be used in scientific exploration of planets and moons in the Solar system with an atmosphere (such as Titan and Venus) is undergoing development. This architecture is also applicable to autonomous airships that could be flown in the terrestrial atmosphere for scientific exploration, military reconnaissance and surveillance, and as radio-communication relay stations in disaster areas. The architecture was conceived to satisfy requirements to perform the following functions: a) Vehicle safing, that is, ensuring the integrity of the aerobot during its entire mission, including during extended communication blackouts. b) Accurate and robust autonomous flight control during operation in diverse modes, including launch, deployment of scientific instruments, long traverses, hovering or station-keeping, and maneuvers for touch-and-go surface sampling. c) Mapping and self-localization in the absence of a global positioning system. d) Advanced recognition of hazards and targets in conjunction with tracking of, and visual servoing toward, targets, all to enable the aerobot to detect and avoid atmospheric and topographic hazards and to identify, home in on, and hover over predefined terrain features or other targets of scientific interest. The architecture is an integrated combination of systems for accurate and robust vehicle and flight trajectory control; estimation of the state of the aerobot; perception-based detection and avoidance of hazards; monitoring of the integrity and functionality ("health") of the aerobot; reflexive safing actions; multi-modal localization and mapping; autonomous planning and execution of scientific observations; and long-range planning and monitoring of the mission of the aerobot. The prototype JPL aerobot (see figure) has been tested extensively in various areas in the California Mojave desert.

  2. En-Gauging Architectures

    DTIC Science & Technology

    2004-10-01

    Only fragments of this report's text survive in the record, including a passage noting that an implementation-level "port access failed exception" is raised in the latter case and that nothing about the architecture itself changed, and a citation of Crane, S., Dulay, N., Fossa, H., Kramer, J., Magee, J., Sloman, M., and Twidle, K., "Configuration Management for Distributed Systems."

  3. Irregular Applications: Architectures & Algorithms

    SciTech Connect

    Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    2012-02-06

    Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.

  4. Survivable Loosely Coupled Architectures

    DTIC Science & Technology

    2003-03-01

    Only fragments of this report's reference list survive in the record, including citations of Rushby, J., "An overview of formal verification for the time-triggered architecture," and Merceron, A., "Parametric verification of a group membership algorithm," both appearing in Damm, W. and Olderog, E.-R. (eds.), Formal Techniques in Real…, as well as partial citations of Bensalem, Bozga, Fernandez, et al. and of a Springer-Verlag Lecture Notes in Computer Science volume (pages 291-303, Pune, India, September 2000).

  5. Vetronics Reference Architecture

    DTIC Science & Technology

    2007-11-02

    Only briefing-slide fragments are available for this record. They describe a systems architecture defined as the cross product of a reference architecture (RA), a technical architecture (TA), and an Intelligent Domain Model; the model captures system intelligence so that computational processes can be allocated to system processing components (e.g., human, robotic, man-in-the-loop), it defines interconnected system components, and the effort focuses on iteratively refining the RA, TA, and Intelligent Domain Model from requirements and use cases.

  6. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kind of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); and Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.

  7. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm to allow parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. The exploit involves performing matrix calculations in nVidia graphic cards. The graphical processor unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model is used, called CUDA, to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies, to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphic cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
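
    To make the FFT-heavy core of image-based phase retrieval concrete, the sketch below shows one iteration of the classical Gerchberg-Saxton loop (propagate the pupil field to the focal plane with an FFT, replace the modeled amplitude with the measured one, propagate back, and keep the phase). This is a generic illustration of that family of algorithms, not the JPL Modified Gerchberg-Saxton algorithm or its GPU implementation; the aperture, aberration, and iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of one classical Gerchberg-Saxton iteration using FFTs.
# It illustrates the FFT-heavy core that the GPU port accelerates; it is
# not the JPL Modified Gerchberg-Saxton (MGS) algorithm itself.

def gs_iteration(pupil_amp, focal_amp, phase):
    """One pupil<->focal-plane round trip enforcing measured amplitudes."""
    pupil_field = pupil_amp * np.exp(1j * phase)
    focal_field = np.fft.fftshift(np.fft.fft2(pupil_field))       # propagate to focus
    focal_field = focal_amp * np.exp(1j * np.angle(focal_field))  # keep phase, fix amplitude
    back = np.fft.ifft2(np.fft.ifftshift(focal_field))            # propagate back
    return np.angle(back)                                         # updated pupil-phase estimate

n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil_amp = (x**2 + y**2 <= 1.0).astype(float)                    # circular aperture
true_phase = 0.5 * (x**2 - y**2) * pupil_amp                      # toy aberration
focal_amp = np.abs(np.fft.fftshift(np.fft.fft2(pupil_amp * np.exp(1j * true_phase))))

phase = np.zeros((n, n))
for _ in range(50):
    phase = gs_iteration(pupil_amp, focal_amp, phase)
```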

  8. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
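
    To convey the flavor of declarative event patterns that combine point events from several streams with conjunction, disjunction, and negation, the toy sketch below builds such patterns from small predicate combinators and matches them against a window of events. The combinator names, event fields, and example pattern are invented for illustration and do not reflect CERA's actual language or API.

```python
# Toy illustration of declarative event patterns combining point events from
# multiple input streams with conjunction, disjunction, and negation.
# This mimics the flavor of CERA's pattern language, not its actual syntax.

from dataclasses import dataclass

@dataclass
class Event:
    stream: str
    name: str
    time: float

def occurs(stream, name):
    return lambda events: any(e.stream == stream and e.name == name for e in events)

def conj(*patterns):
    return lambda events: all(p(events) for p in patterns)

def disj(*patterns):
    return lambda events: any(p(events) for p in patterns)

def neg(pattern):
    return lambda events: not pattern(events)

# "overheat alarm": temperature spike AND (fan fault OR pump fault) AND no shutdown
overheat = conj(occurs("thermal", "spike"),
                disj(occurs("fan", "fault"), occurs("pump", "fault")),
                neg(occurs("control", "shutdown")))

window = [Event("thermal", "spike", 1.0), Event("pump", "fault", 1.2)]
print(overheat(window))   # True: pattern recognized in this window of events
```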

  9. Autonomous Organization-Based Adaptive Information Systems

    DTIC Science & Technology

    2005-01-01

    Only fragments of this report's text survive in the record: "…intentional Multi-Agent System (MAS) approach [10]. While these approaches are functional AIS systems, they lack the ability to reorganize and adapt… extended a multi-agent system with a self-reorganizing architecture to create an autonomous, adaptive information system. … An advantage of a multi-agent system using the organization-theoretic model is its extensibility. The practical, numerical limits to the…"

  10. Adaptive building skin structures

    NASA Astrophysics Data System (ADS)

    Del Grosso, A. E.; Basso, P.

    2010-12-01

    The concept of adaptive and morphing structures has gained considerable attention in the recent years in many fields of engineering. In civil engineering very few practical applications are reported to date however. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face. To give some examples, searches for low-energy consumption or even energy-harvesting green buildings are amongst such problems. This paper first presents a review of the above problems and technologies, which shows how the solution to these problems requires a multidisciplinary approach, involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which leads the search for optimal solutions by means of an evolutionary technique while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.

  11. Adaptive-array Electron Cyclotron Emission diagnostics using data streaming in a Software Defined Radio system

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Hamasaki, M.; Fujisawa, A.; Nagashima, Y.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; the QUEST Team

    2016-04-01

    Measurement of the Electron Cyclotron Emission (ECE) spectrum is one of the most popular electron temperature diagnostics in nuclear fusion plasma research. A 2-dimensional ECE imaging system was developed with an adaptive-array approach. A radio-frequency (RF) heterodyne detection system with Software Defined Radio (SDR) devices and a phased-array receiver antenna was used to measure the phase and amplitude of the ECE wave. The SDR heterodyne system could continuously measure the phase and amplitude with sufficient accuracy and time resolution while the previous digitizer system could only acquire data at specific times. Robust streaming phase measurements for adaptive-arrayed continuous ECE diagnostics were demonstrated using Fast Fourier Transform (FFT) analysis with the SDR system. The emission field pattern was reconstructed using adaptive-array analysis. The reconstructed profiles were discussed using profiles calculated from coherent single-frequency radiation from the phase array antenna.
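
    The core FFT step described above (recovering the amplitude and phase of the heterodyned ECE signal on each antenna channel from streamed samples) can be sketched as follows; the sample rate, intermediate frequency, channel count, and window choice are illustrative assumptions, not parameters of the QUEST system.

```python
import numpy as np

# Minimal sketch of the FFT step used to recover the amplitude and phase of a
# heterodyned signal on each antenna channel from streamed samples.
# Sample rate, IF frequency, and channel count are illustrative assumptions.

fs, f_if = 1.024e6, 120.0e3        # sample rate and intermediate frequency (assumed)
n, n_ch = 4096, 8                  # FFT length and number of array channels
t = np.arange(n) / fs

rng = np.random.default_rng(2)
true_phases = rng.uniform(-np.pi, np.pi, n_ch)
samples = np.array([np.cos(2 * np.pi * f_if * t + p) + 0.05 * rng.normal(size=n)
                    for p in true_phases])        # one row of streamed data per channel

window = np.hanning(n)
spectra = np.fft.rfft(samples * window, axis=1)
k = int(round(f_if * n / fs))                     # FFT bin of the IF tone
amplitude = 2 * np.abs(spectra[:, k]) / np.sum(window)
phase = np.angle(spectra[:, k])                   # per-channel phase for the adaptive array

# Worst-case phase-recovery error across channels (wrap-aware), should be small.
print(np.max(np.abs(np.angle(np.exp(1j * (phase - true_phases))))))
```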

  12. A Distributed Prognostic Health Management Architecture

    NASA Technical Reports Server (NTRS)

    Bhaskar, Saha; Saha, Sankalita; Goebel, Kai

    2009-01-01

    This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine is higher than the computational power of one computational element ( CE). Individual CEs run diagnostic routines until the system variable being monitored crosses beyond a nominal threshold, upon which it coordinates with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
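
    Since both the diagnostic and prognostic tasks above are cast as particle filtering, the sketch below shows a minimal bootstrap particle filter of the kind a single computational element might run; the degradation model, noise levels, and resampling threshold are illustrative assumptions, not the paper's models.

```python
import numpy as np

# Minimal bootstrap particle-filter sketch of the kind of PF routine the
# architecture distributes across computational elements (CEs).
# The state model (random-walk degradation + noisy sensor) is illustrative.

rng = np.random.default_rng(3)
n_particles, n_steps = 500, 60
process_std, meas_std = 0.02, 0.1

true_state = 1.0
particles = np.full(n_particles, 1.0) + rng.normal(0, 0.05, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    true_state -= abs(rng.normal(0.01, process_std))                  # simulated degradation
    z = true_state + rng.normal(0, meas_std)                          # sensor measurement

    particles -= abs(rng.normal(0.01, process_std, n_particles))      # propagate particles
    weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)       # likelihood update
    weights /= weights.sum()

    if 1.0 / np.sum(weights**2) < n_particles / 2:                    # resample if degenerate
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print(f"true={true_state:.3f}  estimate={np.sum(weights * particles):.3f}")
```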

  13. Cognitive Architectures and Autonomy: A Comparative Review

    NASA Astrophysics Data System (ADS)

    Thórisson, Kristinn; Helgasson, Helgi

    2012-05-01

    One of the original goals of artificial intelligence (AI) research was to create machines with very general cognitive capabilities and a relatively high level of autonomy. It has taken the field longer than many had expected to achieve even a fraction of this goal; the community has focused on building specific, targeted cognitive processes in isolation, and as of yet no system exists that integrates a broad range of capabilities or presents a general solution to autonomous acquisition of a large set of skills. Among the reasons for this are the highly limited machine learning and adaptation techniques available, and the inherent complexity of integrating numerous cognitive and learning capabilities in a coherent architecture. In this paper we review selected systems and architectures built expressly to address integrated skills. We highlight principles and features of these systems that seem promising for creating generally intelligent systems with some level of autonomy, and discuss them in the context of the development of future cognitive architectures. Autonomy is a key property for any system to be considered generally intelligent, in our view; we use this concept as an organizing principle for comparing the reviewed systems. Features that remain largely unaddressed in present research, but seem nevertheless necessary for such efforts to succeed, are also discussed.

  14. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAAs Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  15. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  16. Architectural Lessons: Look Back In Order To Move Forward

    NASA Astrophysics Data System (ADS)

    Huang, T.; Djorgovski, S. G.; Caltagirone, S.; Crichton, D. J.; Hughes, J. S.; Law, E.; Pilone, D.; Pilone, T.; Mahabal, A.

    2015-12-01

    True elegance of scalable and adaptable architecture is not about incorporating the latest and greatest technologies. Its elegance is measured by its ability to scale and adapt as its operating environment evolves over time. Architecture is the link that bridges people, process, policies, interfaces, and technologies. Architectural development begins by observing the relationships that really matter to the problem domain. It continues with the creation of a single, shared, evolving pattern language, which everyone contributes to and everyone can use [C. Alexander, 1979]. Architects are the true artists. Like all masterpieces, the value and strength of an architecture are measured not by the volume of publications but by its ability to evolve. An architect must look back in order to move forward. This talk discusses some of the prior works, including an onboard data analysis system, a knowledge-base system, and a cloud-based Big Data platform, as enablers to help shape the new generation of Earth Science projects at NASA and EarthCube, where a community-driven architecture is the key to enabling data-intensive science. [C. Alexander, The Timeless Way of Building, Oxford University Press, 1979.]

  17. Toothbrush Adaptations.

    ERIC Educational Resources Information Center

    Exceptional Parent, 1987

    1987-01-01

    Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)

  18. Emerging hierarchies in dynamically adapting webs

    NASA Astrophysics Data System (ADS)

    Katifori, Eleni; Graewer, Johannes; Magnasco, Marcelo; Modes, Carl

    Transport networks play a key role across four realms of eukaryotic life: slime molds, fungi, plants, and animals. In addition to the developmental algorithms that build them, many also employ adaptive strategies to respond to stimuli, damage, and other environmental changes. We model these adapting network architectures using a generic dynamical system on weighted graphs and find in simulation that these networks ultimately develop a hierarchical organization of the final weighted architecture accompanied by the formation of a system-spanning backbone. We quantify the hierarchical organization of the networks by developing an algorithm that decomposes the architecture into multiple scales and analyzes how the organization in each scale relates to that of the scale above and below it. The methodologies developed in this work are applicable to a wide range of systems including the slime mold Physarum polycephalum, human microvasculature, and force chains in granular media.
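
    A minimal sketch of the kind of adaptation rule such models typically use is given below in Python/NumPy; the specific update (conductances reinforced by the flow they carry and otherwise decaying, a Physarum-style rule) is a common choice in this literature and is assumed here for illustration rather than taken from the abstract.

      import numpy as np

      def adapt_network(adjacency, steps=200, dt=0.05, gamma=1.2, seed=0):
          """Evolve edge conductances of a transport network: edges carrying more
          flow are reinforced, idle edges decay, which tends to produce a
          hierarchical, backbone-dominated weighted architecture."""
          rng = np.random.default_rng(seed)
          n = adjacency.shape[0]
          C = np.triu(adjacency * rng.uniform(0.5, 1.5, size=adjacency.shape), 1)
          C = C + C.T                                       # symmetric conductances
          for _ in range(steps):
              s, t = rng.choice(n, size=2, replace=False)   # random source/sink pair
              L = np.diag(C.sum(axis=1)) - C                # weighted graph Laplacian
              b = np.zeros(n); b[s], b[t] = 1.0, -1.0       # unit injected current
              p = np.linalg.lstsq(L, b, rcond=None)[0]      # node potentials
              Q = C * (p[:, None] - p[None, :])             # edge flows
              C = C + dt * (np.abs(Q) ** gamma - C) * (adjacency > 0)
          return C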

  19. Teacher Adaptation to Open Learning Spaces

    ERIC Educational Resources Information Center

    Alterator, Scott; Deed, Craig

    2013-01-01

    The "open classroom" emerged as a reaction against the industrial-era enclosed and authoritarian classroom. Although contemporary school architecture continues to incorporate and express ideas of openness, more research is needed about how teachers adapt to new and different built contexts. Our purpose is to identify teacher reaction to…

  20. Adaptive Modeling Language and Its Derivatives

    NASA Technical Reports Server (NTRS)

    Chemaly, Adel

    2006-01-01

    Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.

  1. "Unwalling" the Classroom: Teacher Reaction and Adaptation

    ERIC Educational Resources Information Center

    Deed, Craig; Lesko, Thomas

    2015-01-01

    Modern open school architecture abstractly expresses ideas about choice, flexibility and autonomy. While open spaces express and authorise different teaching practice, these versions of school and classrooms present challenges to teaching routines and practice. This paper examines how teachers adapt as they move into new school buildings designed…

  2. The EPOS ICT Architecture

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Bailo, Daniele

    2016-04-01

    The EPOS-PP Project 2010-2014 proposed an architecture and demonstrated feasibility with a prototype. Requirements based on use cases were collected and an inventory of assets (e.g. datasets, software, users, computing resources, equipment/detectors, laboratory services) (RIDE) was developed. The architecture evolved through three stages of refinement with much consultation both with the EPOS community representing EPOS users and participants in geoscience and with the overall ICT community especially those working on research such as the RDA (Research Data Alliance) community. The architecture consists of a central ICS (Integrated Core Services) consisting of a portal and catalog, the latter providing to end-users a 'map' of all EPOS resources (datasets, software, users, computing, equipment/detectors etc.). ICS is extended to ICS-d (distributed ICS) for certain services (such as visualisation software services or Cloud computing resources) and CES (Computational Earth Science) for specific simulation or analytical processing. ICS also communicates with TCS (Thematic Core Services) which represent European-wide portals to national and local assets, resources and services in the various specific domains (e.g. seismology, volcanology, geodesy) of EPOS. The EPOS-IP project 2015-2019 started October 2015. Two work-packages cover the ICT aspects; WP6 involves interaction with the TCS while WP7 concentrates on ICS including interoperation with ICS-d and CES offerings: in short the ICT architecture. Based on the experience and results of EPOS-PP the ICT team held a pre-meeting in July 2015 and set out a project plan. The first major activity involved requirements (re-)collection with use cases and also updating the inventory of assets held by the various TCS in EPOS. The RIDE database of assets is currently being converted to CERIF (Common European Research Information Format - an EU Recommendation to Member States) to provide the basis for the EPOS-IP ICS Catalog. In

  3. Programmable bandwidth management in software-defined EPON architecture

    NASA Astrophysics Data System (ADS)

    Li, Chengjun; Guo, Wei; Wang, Wei; Hu, Weisheng; Xia, Ming

    2016-07-01

    This paper proposes a software-defined EPON architecture which replaces the hardware-implemented DBA module with a reprogrammable DBA module. The DBA module allows pluggable bandwidth allocation algorithms among multiple ONUs adaptive to traffic profiles and network states. We also introduce a bandwidth management scheme executed at the controller to manage the customized DBA algorithms for all data queues of ONUs. Our performance investigation verifies the effectiveness of this new EPON architecture, and numerical results show that software-defined EPONs can achieve less traffic delay and provide better support to service differentiation in comparison with traditional EPONs.
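
    A minimal sketch of what a controller-side reprogrammable DBA hook could look like is given below; the class and function names and the two example policies are hypothetical illustrations, not the interface proposed in the paper.

      from typing import Callable, Dict

      # A DBA policy maps per-ONU bandwidth requests (bytes) to grants (bytes),
      # given the upstream capacity available in the next polling cycle.
      DbaPolicy = Callable[[Dict[str, int], int], Dict[str, int]]

      def fixed_share(requests: Dict[str, int], capacity: int) -> Dict[str, int]:
          """Static baseline: equal split regardless of demand."""
          share = capacity // max(len(requests), 1)
          return {onu: min(req, share) for onu, req in requests.items()}

      def proportional(requests: Dict[str, int], capacity: int) -> Dict[str, int]:
          """Grants proportional to demand, capped at the request itself."""
          total = sum(requests.values()) or 1
          return {onu: min(req, capacity * req // total) for onu, req in requests.items()}

      class DbaController:
          """Controller module whose DBA policy can be swapped at run time."""
          def __init__(self, policy: DbaPolicy):
              self.policy = policy

          def reprogram(self, policy: DbaPolicy) -> None:
              self.policy = policy                  # plug in a new algorithm

          def allocate(self, requests: Dict[str, int], capacity: int) -> Dict[str, int]:
              return self.policy(requests, capacity)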

  4. Adaptive Optoelectronic Eyes: Hybrid Sensor/Processor Architectures

    DTIC Science & Technology

    2006-11-13

    physiology of vision, neurobiology, computational neuroscience, neural networks, the development and modeling of vision algorithms, VLSI device design...Hybrid Analog/Digital VLSI Design; Cellular Neural Network Designs; VLSI Chip Testing and Analysis; Active Pixel CMOS Sensor Arrays Prof. John O’Brien...representation analog/digital VLSI technology. In addition, the development of this integrated hybrid packaging technology has potential for a wide range

  5. Adaptive Distributed Intelligent Control Architecture for Future Propulsion Systems (Preprint)

    DTIC Science & Technology

    2007-04-01

    Distributed Control Applications, TTTech, http://www.vmars.tuwien.ac.at/projects/nexttta, http://www.tttech.com/press/pressreleases.htm, Vienna, Austria...Bamieh, F. Paganini, and M. Dahleh, Distributed Control of Spatially Invariant Systems, IEEE Transactions on Automatic Control, 1998. [18] P. J

  6. Scalable Adaptive Architectures for Maritime Operations Center Command and Control

    DTIC Science & Technology

    2011-05-06

    and Kandel, 1993). On the other hand, a very small number of reported validation techniques are available that use quantitative validation (Lehner...1989; Smith and Kandel, 1993). The quantitative validation approach uses statistical methods to compare expert system’s performance against either...Melle, 1981; Suwa et al., 1985; Stachowitz and Comb, 1987; Stickel, 1988; Ayel and Laurent, 1991; De Raedt et al., 1991; Smith and Kandel, 1993) in

  7. Neurophysiological Research Supporting the Investigation of Adaptive Network Architectures

    DTIC Science & Technology

    1988-05-01

    invertebrate systems (Alkon, 1979; Barrionuevo and Brown, 1983; Byrne, 1987; Kandel, 1976; Kandel and Spencer, 1968; Woody, 1982a,b, 1986; Woody and Black...Neurosci. Abstr., 12:555, 1986.) 6. Specific regions of the hypothalamus were identified that when stimulated increased rates of conditioning as...showing such reductions. (Birt, Aou and Woody, Soc. Neurosci. Abstr., 12:555, 1986.) 11. Studies were concluded of effects of intracellular applications

  8. Neurophysiological Research Supporting the Investigation of Adaptive Network Architectures

    DTIC Science & Technology

    1985-08-14

    Publications Aou, S., Oomura, Y., Nishino, H., Ono, T., Yamabe, K., Sikdar, S.K., and Noda, T. Functional heterogeneity of single neuronal activity in the...Oomura, Y., Nishino, H., Aou, S., Sikdar, S.K., Hynes, M., Mizuno, Y., and Katafuchi, T. Cholinergic role in monkey dorsolateral prefrontal cortex during

  9. Architectural Adaptability in Parallel Programming via Control Abstraction

    DTIC Science & Technology

    1991-01-01

    Technical Report 359 January 1991 Abstract Parallel programming involves finding the potential parallelism in an application, choosing an...during the development of this paper. References [Albert et al., 1988] Eugene Albert, Kathleen Knobe, Joan D. Lukas, and Guy L. Steele, Jr

  10. Mind and language architecture.

    PubMed

    Logan, Robert K

    2010-07-08

    A distinction is made between the brain and the mind. The architecture of the mind and language is then described within a neo-dualistic framework. A model for the origin of language based on emergence theory is presented. The complexity of hominid existence due to tool making, the control of fire and the social cooperation that fire required gave rise to a new level of order in mental activity and triggered the simultaneous emergence of language and conceptual thought. The mind is shown to have emerged as a bifurcation of the brain with the emergence of language. The role of language in the evolution of human culture is also described.

  11. TROPIX power system architecture

    NASA Astrophysics Data System (ADS)

    Manner, David B.; Hickman, J. Mark

    1995-09-01

    This document contains results obtained in the process of performing a power system definition study of the TROPIX power management and distribution system (PMAD). Requirements derived from the PMAD's interaction with other spacecraft systems are discussed first. Since the design is dependent on the performance of the photovoltaics, there is a comprehensive discussion of the appropriate models for cells and arrays. A trade study of the array operating voltage and its effect on array bus mass is also presented. A system architecture is developed which makes use of a combination of high efficiency switching power convertors and analog regulators. Mass and volume estimates are presented for all subsystems.

  12. Etruscan Divination and Architecture

    NASA Astrophysics Data System (ADS)

    Magli, Giulio

    The Etruscan religion was characterized by divination methods, aimed at interpreting the will of the gods. These methods were revealed by the gods themselves and written in the books of the Etrusca Disciplina. The books are lost, but parts of them are preserved in the accounts of later Latin sources. According to such traditions divination was tightly connected with the Etruscan cosmovision of a Pantheon distributed in equally spaced, specific sectors of the celestial realm. We explore here the possible reflections of such issues in the Etruscan architectural remains.

  13. Mind and Language Architecture

    PubMed Central

    Logan, Robert K

    2010-01-01

    A distinction is made between the brain and the mind. The architecture of the mind and language is then described within a neo-dualistic framework. A model for the origin of language based on emergence theory is presented. The complexity of hominid existence due to tool making, the control of fire and the social cooperation that fire required gave rise to a new level of order in mental activity and triggered the simultaneous emergence of language and conceptual thought. The mind is shown to have emerged as a bifurcation of the brain with the emergence of language. The role of language in the evolution of human culture is also described. PMID:20922045

  14. 1993 architectural design awards.

    PubMed

    1993-06-01

    The 10th annual architectural design awards sponsored by Contemporary Long Term Care salute nursing homes and retirement communities that combine a flair for innovative living environments with a sensitivity to the needs of aging residents. These facilities represent the very best in elderly housing that prolongs independence while enhancing efficient operation. The 1993 winners are: King Health Center, U.S. Soldiers' and Airmen's Home, Washington, DC; The Terrace of Los Gatos, Los Gatos, CA; Walker Elder Suites, Edina, MN; The Jefferson, Ballston, VA; The Forum at Rancho San Antonio, Cupertino, CA.

  15. Architecture for Teraflop Visualization

    SciTech Connect

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  16. Architecture, Design, Implementation

    DTIC Science & Technology

    2003-05-01

    of data on its inputs and produces streams of data on its outputs.” Dean and Cordy [6] present a visual formalism defined as a context-free...cles represent tasks, arrows represent streams. The plus sign is the BNF symbol for “one or more.” ...guages of Program Design. Reading, MA: Addison-Wesley. [6] T. R. Dean, J. R. Cordy. "A Syntactic Theory of Software Architecture." IEEE Trans. on

  17. Architecture, constraints, and behavior

    PubMed Central

    Doyle, John C.; Csete, Marie

    2011-01-01

    This paper aims to bridge progress in neuroscience involving sophisticated quantitative analysis of behavior, including the use of robust control, with other relevant conceptual and theoretical frameworks from systems engineering, systems biology, and mathematics. Familiar and accessible case studies are used to illustrate concepts of robustness, organization, and architecture (modularity and protocols) that are central to understanding complex networks. These essential organizational features are hidden during normal function of a system but are fundamental for understanding the nature, design, and function of complex biologic and technologic systems. PMID:21788505

  18. Architecture for robot intelligence

    NASA Technical Reports Server (NTRS)

    Peters, II, Richard Alan (Inventor)

    2004-01-01

    An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a DBAM that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.

  19. An implementation of SISAL for distributed-memory architectures

    SciTech Connect

    Beard, Patrick C.

    1995-06-01

    This thesis describes a new implementation of the implicitly parallel functional programming language SISAL, for massively parallel processor supercomputers. The Optimizing SISAL Compiler (OSC), developed at Lawrence Livermore National Laboratory, was originally designed for shared-memory multiprocessor machines and has been adapted to distributed-memory architectures. OSC has been relatively portable between shared-memory architectures, because they are architecturally similar, and OSC generates portable C code. However, distributed-memory architectures are not standardized -- each has a different programming model. Distributed-memory SISAL depends on a layer of software that provides a portable, distributed, shared-memory abstraction. This layer is provided by Split-C, a dialect of the C programming language developed at U.C. Berkeley, which has demonstrated good performance on distributed-memory architectures. Split-C provides important capabilities for good performance: support for program-specific distributed data structures, and split-phase memory operations. Distributed data structures help achieve good memory locality, while split-phase memory operations help tolerate the longer communication latencies inherent in distributed-memory architectures. The distributed-memory SISAL compiler and run-time system take advantage of these capabilities. The result of these efforts is a compiler that runs identically on the Thinking Machines Connection Machine (CM-5) and the Meiko Computing Surface (CS-2).

  20. Knowledge-based media adaptation

    NASA Astrophysics Data System (ADS)

    Leopold, Klaus; Jannach, Dietmar; Hellwagner, Hermann

    2004-10-01

    This paper introduces the principal approach and describes the basic architecture and current implementation of the knowledge-based multimedia adaptation framework we are currently developing. The framework can be used in Universal Multimedia Access scenarios, where multimedia content has to be adapted to specific usage environment parameters (network and client device capabilities, user preferences). Using knowledge-based techniques (state-space planning), the framework automatically computes an adaptation plan, i.e., a sequence of media conversion operations, to transform the multimedia resources to meet the client's requirements or constraints. The system takes as input standards-compliant descriptions of the content (using MPEG-7 metadata) and of the target usage environment (using MPEG-21 Digital Item Adaptation metadata) to derive start and goal states for the planning process, respectively. Furthermore, declarative descriptions of the conversion operations (such as available via software library functions) enable existing adaptation algorithms to be invoked without requiring programming effort. A running example in the paper illustrates the descriptors and techniques employed by the knowledge-based media adaptation system.
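
    The planning step can be pictured with the small breadth-first sketch below; the operator names and content properties (codec, width) are hypothetical stand-ins for the MPEG-7/MPEG-21 descriptions the framework actually consumes, so this is only an illustration of state-space planning over conversion operations.

      from collections import deque

      # Each conversion operator declares preconditions and effects over simple
      # content properties (hypothetical names, for illustration only).
      OPERATORS = {
          "transcode_to_h264": ({"codec": "mpeg2"}, {"codec": "h264"}),
          "downscale_to_640":  ({}, {"width": 640}),
      }

      def plan(start: dict, goal: dict):
          """Breadth-first state-space search for a sequence of conversions."""
          frontier = deque([(start, [])])
          seen = set()
          while frontier:
              state, steps = frontier.popleft()
              if all(state.get(k) == v for k, v in goal.items()):
                  return steps
              key = tuple(sorted(state.items()))
              if key in seen:
                  continue
              seen.add(key)
              for name, (pre, eff) in OPERATORS.items():
                  if all(state.get(k) == v for k, v in pre.items()):
                      frontier.append(({**state, **eff}, steps + [name]))
          return None

      # plan({"codec": "mpeg2", "width": 1920}, {"codec": "h264", "width": 640})
      # returns a conversion sequence such as ['transcode_to_h264', 'downscale_to_640'].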

  1. The ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Farris, Allen; Sommer, Heiko

    2004-09-01

    The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.

  2. Parallel architectures for vision

    SciTech Connect

    Maresca, M.; Lavin, M.A.; Li, H.

    1988-08-01

    Vision computing involves the execution of a large number of operations on large sets of structured data. Sequential computers cannot achieve the speed required by most of the current applications and therefore parallel architectural solutions have to be explored. In this paper the authors examine the options that drive the design of a vision oriented computer, starting with the analysis of the basic vision computation and communication requirements. They briefly review the classical taxonomy for parallel computers, based on the multiplicity of the instruction and data stream, and apply a recently proposed criterion, the degree of autonomy of each processor, to further classify fine-grain SIMD massively parallel computers. They identify three types of processor autonomy, namely operation autonomy, addressing autonomy, and connection autonomy. For each type they give the basic definitions and show some examples. They focus on the concept of connection autonomy, which they believe is a key point in the development of massively parallel architectures for vision. They show two examples of parallel computers featuring different types of connection autonomy - the Connection Machine and the Polymorphic-Torus - and compare their cost and benefit.

  3. Architectures for intelligent machines

    NASA Technical Reports Server (NTRS)

    Saridis, George N.

    1991-01-01

    The theory of intelligent machines has been recently reformulated to incorporate new architectures that use neural and Petri nets. The analytic functions of an intelligent machine are implemented by intelligent controls, using entropy as a measure. The resulting hierarchical control structure is based on the principle of increasing precision with decreasing intelligence. Each of the three levels of the intelligent control uses different architectures in order to satisfy the requirements of the principle: the organization level is modeled after a Boltzmann machine for abstract reasoning, task planning and decision making; the coordination level is composed of a number of Petri net transducers supervised, for command exchange, by a dispatcher, which also serves as an interface to the organization level; the execution level includes the sensory, navigation-planning, and control hardware, which interacts one-to-one with the appropriate coordinators, while a VME bus provides a channel for database exchange among the several devices. This system is currently implemented on a robotic transporter, designed for space construction at the CIRSSE laboratories at the Rensselaer Polytechnic Institute. The progress of its development is reported.

  4. Modularity and mental architecture.

    PubMed

    Robbins, Philip

    2013-11-01

    Debates about the modularity of cognitive architecture have been ongoing for at least the past three decades, since the publication of Fodor's landmark book The Modularity of Mind. According to Fodor, modularity is essentially tied to informational encapsulation, and as such is only found in the relatively low-level cognitive systems responsible for perception and language. According to Fodor's critics in the evolutionary psychology camp, modularity simply reflects the fine-grained functional specialization dictated by natural selection, and it characterizes virtually all aspects of cognitive architecture, including high-level systems for judgment, decision making, and reasoning. Though both of these perspectives on modularity have garnered support, the current state of evidence and argument suggests that a broader skepticism about modularity may be warranted. WIREs Cogn Sci 2013, 4:641-649. doi: 10.1002/wcs.1255

  5. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.

  6. Evolving Neural Network Architecture

    DTIC Science & Technology

    1993-03-01

    associated with individual ADALINES. If better results are obtained, then the new weight values are kept; otherwise, the new weights are ignored. If...the training process exhausts trials involving a single ADALINE, pairwise (or higher) adaptations are attempted. The 3-bit parity problem has been

  7. Demand Activated Manufacturing Architecture

    SciTech Connect

    Bender, T.R.; Zimmerman, J.J.

    2001-02-07

    Honeywell Federal Manufacturing & Technologies (FM&T) engineers John Zimmerman and Tom Bender directed separate projects within this CRADA. This Project Accomplishments Summary contains their reports independently. Zimmerman: In 1998 Honeywell FM&T partnered with the Demand Activated Manufacturing Architecture (DAMA) Cooperative Business Management Program to pilot the Supply Chain Integration Planning Prototype (SCIP). At the time, FM&T was developing an enterprise-wide supply chain management prototype called the Integrated Programmatic Scheduling System (IPSS) to improve the DOE's Nuclear Weapons Complex (NWC) supply chain. In the CRADA partnership, FM&T provided the IPSS technical and business infrastructure as a test bed for SCIP technology, and this would provide FM&T the opportunity to evaluate SCIP as the central schedule engine and decision support tool for IPSS. FM&T agreed to do the bulk of the work for piloting SCIP. In support of that aim, DAMA needed specific DOE Defense Programs opportunities to prove the value of its supply chain architecture and tools. In this partnership, FM&T teamed with Sandia National Labs (SNL), Division 6534, the other DAMA partner and developer of SCIP. FM&T tested SCIP in 1998 and 1999. Testing ended in 1999 when DAMA CRADA funding for FM&T ceased. Before entering the partnership, FM&T discovered that the DAMA SCIP technology had an array of applications in strategic, tactical, and operational planning and scheduling. At the time, FM&T planned to improve its supply chain performance by modernizing the NWC-wide planning and scheduling business processes and tools. The modernization took the form of a distributed client-server planning and scheduling system (IPSS) for planners and schedulers to use throughout the NWC on desktops through an off-the-shelf WEB browser. The planning and scheduling process within the NWC then, and today, is a labor-intensive paper-based method that plans and schedules more than 8,000 shipped parts

  8. Software synthesis using generic architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    A framework for synthesizing software systems based on abstracting software system designs and the design process is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture as opposed to completely automating the design of systems. Our approach is illustrated using an implemented example of a generic tracking architecture that was customized in two different domains. We also describe how the designs produced using KASE compare to the original designs of the two systems, along with current work and plans for extending KASE to other application areas.

  9. ALISA: adaptive learning image and signal analysis

    NASA Astrophysics Data System (ADS)

    Bock, Peter

    1999-01-01

    ALISA (Adaptive Learning Image and Signal Analysis) is an adaptive statistical learning engine that may be used to detect and classify the surfaces and boundaries of objects in images. The engine has been designed, implemented, and tested at both the George Washington University and the Research Institute for Applied Knowledge Processing in Ulm, Germany over the last nine years with major funding from Robert Bosch GmbH and Lockheed-Martin Corporation. The design of ALISA was inspired by the multi-path cortical-column architecture and adaptive functions of the mammalian visual cortex.

  10. Flight Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  11. Adaptive Attitude Control of the Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Muse, Jonathan

    2010-01-01

    An H(sub infinity)-NMA architecture for the Crew Launch Vehicle was developed in a state feedback setting. The minimal complexity adaptive law was shown to improve baseline performance relative to a performance metric based on Crew Launch Vehicle design requirements for almost all of the Worst-on-Worst dispersion cases. The adaptive law was able to maintain stability for some dispersions that are unstable with the nominal control law. Due to the nature of the H(sub infinity)-NMA architecture, the augmented adaptive control signal has low bandwidth which is a great benefit for a manned launch vehicle.

  12. 9. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) Annotated lithograph on paper. Standard plan used for construction of Commissary Sergeants Quarters, 1876. PLAN, FRONT AND SIDE ELEVATIONS, SECTION - Fort Myer, Commissary Sergeant's Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  13. Parameter Estimation for a Hybrid Adaptive Flight Controller

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Nguyen, Nhan T.; Kaneshige, John; Krishnakumar, Kalmanje

    2009-01-01

    This paper expands on the hybrid control architecture developed at the NASA Ames Research Center by addressing issues related to indirect adaptation using the recursive least squares (RLS) algorithm. Specifically, the hybrid control architecture is an adaptive flight controller that features both direct and indirect adaptation techniques. This paper will focus almost exclusively on the modifications necessary to achieve quality indirect adaptive control. Additionally this paper will present results that, using a full non-linear aircraft model, demonstrate the effectiveness of the hybrid control architecture given drastic changes in an aircraft's dynamics. Throughout the development of this topic, a thorough discussion of the RLS algorithm as a system identification technique will be provided along with results from seven well-known modifications to the popular RLS algorithm.
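
    For readers unfamiliar with it, a textbook exponentially weighted recursive least squares update is sketched below in Python/NumPy; this is the generic algorithm only, not the specific RLS modifications evaluated in the paper, and the forgetting factor and initialization are arbitrary example values.

      import numpy as np

      class RecursiveLeastSquares:
          """Textbook RLS estimator for a linear regression y = phi^T theta + noise."""
          def __init__(self, n_params: int, lam: float = 0.98, p0: float = 1e3):
              self.theta = np.zeros(n_params)       # parameter estimate
              self.P = np.eye(n_params) * p0        # inverse-correlation matrix
              self.lam = lam                        # forgetting factor, 0 < lam <= 1

          def update(self, phi: np.ndarray, y: float) -> np.ndarray:
              Pphi = self.P @ phi
              k = Pphi / (self.lam + phi @ Pphi)    # gain vector
              self.theta = self.theta + k * (y - phi @ self.theta)
              self.P = (self.P - np.outer(k, Pphi)) / self.lam
              return self.theta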

  14. Improving nonlinear modeling capabilities of functional link adaptive filters.

    PubMed

    Comminiello, Danilo; Scarpiniti, Michele; Scardapane, Simone; Parisi, Raffaele; Uncini, Aurelio

    2015-09-01

    The functional link adaptive filter (FLAF) represents an effective solution for online nonlinear modeling problems. In this paper, we take into account a FLAF-based architecture, which separates the adaptation of linear and nonlinear elements, and we focus on the nonlinear branch to improve the modeling performance. In particular, we propose a new model that involves an adaptive combination of filters downstream of the nonlinear expansion. Such combination leads to a cooperative behavior of the whole architecture, thus yielding a performance improvement, particularly in the presence of strong nonlinearities. An advanced architecture is also proposed involving the adaptive combination of multiple filters on the nonlinear branch. The proposed models are assessed in different nonlinear modeling problems, in which their effectiveness and capabilities are shown.
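
    The adaptive combination of filters can be sketched with the familiar sigmoid-parameterized convex combination shown below; the plain LMS branches and step sizes are illustrative assumptions and do not reproduce the FLAF-specific functional-link expansions used in the paper.

      import numpy as np

      def combined_lms(x, d, n_taps=8, mu_fast=0.05, mu_slow=0.005, mu_a=0.5):
          """Convex combination of a fast and a slow LMS filter; the mixing
          parameter is adapted through a sigmoid so the better branch dominates."""
          w1 = np.zeros(n_taps); w2 = np.zeros(n_taps); a = 0.0
          y_out = np.zeros(len(d))
          for n in range(n_taps, len(d)):
              u = x[n - n_taps:n][::-1]             # regressor, most recent sample first
              y1, y2 = w1 @ u, w2 @ u
              lam = 1.0 / (1.0 + np.exp(-a))        # mixing weight in (0, 1)
              y = lam * y1 + (1.0 - lam) * y2
              e, e1, e2 = d[n] - y, d[n] - y1, d[n] - y2
              w1 += mu_fast * e1 * u                # independent LMS updates
              w2 += mu_slow * e2 * u
              a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # adapt the mixer
              y_out[n] = y
          return y_out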

  15. The Architecture of Exoplanets

    NASA Astrophysics Data System (ADS)

    Hatzes, Artie P.

    2016-12-01

    Prior to the discovery of exoplanets our expectations of their architecture were largely driven by the properties of our solar system. We expected giant planets to lie in the outer regions and rocky planets in the inner regions. Planets should probably only occupy orbital distances 0.3-30 AU from the star. Planetary orbits should be circular, prograde and in the same plane. The reality of exoplanets have shattered these expectations. Jupiter-mass, Neptune-mass, Superearths, and even Earth-mass planets can orbit within 0.05 AU of the stars, sometimes with orbital periods of less than one day. Exoplanetary orbits can be eccentric, misaligned, and even in retrograde orbits. Radial velocity surveys gave the first hints that the occurrence rate increases with decreasing mass. This was put on a firm statistical basis with the Kepler mission that clearly demonstrated that there were more Neptune- and Superearth-sized planets than Jupiter-sized planets. These are often in multiple, densely packed systems where the planets all orbit within 0.3 AU of the star, a result also suggested by radial velocity surveys. Exoplanets also exhibit diversity along the main sequence. Massive stars tend to have a higher frequency of planets (≈ 20-25 %) that tend to be more massive (M≈ 5-10 M_{Jup}). Giant planets around low mass stars are rare, but these stars show an abundance of small (Neptune and Superearth) planets in multiple systems. Planet formation is also not restricted to single stars as the Kepler mission has discovered several circumbinary planets. Although we have learned much about the architecture of planets over the past 20 years, we know little about the census of small planets at relatively large (a>1 AU) orbital distances. We have yet to find a planetary system that is analogous to our own solar system. The question of how unique are the properties of our own solar system remains unanswered. Advancements in the detection methods of small planets over a wide range of

  16. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  17. FRACSAT: Automated design synthesis for future space architectures

    NASA Astrophysics Data System (ADS)

    Mackey, R.; Uckun, S.; Do, Minh; Shah, J.

    This paper describes the algorithmic basis and development of FRACSAT (FRACtionated Spacecraft Architecture Toolkit), a new approach to conceptual design, cost-benefit analysis, and detailed trade studies for space systems. It provides an automated capability for exploration of candidate spacecraft architectures, leading users to near-optimal solutions with respect to user-defined requirements, risks, and program uncertainties. FRACSAT utilizes a sophisticated planning algorithm (PlanVisioner) to perform a quasi-exhaustive search for candidate architectures, constructing candidates from an extensible model-based representation of space system components and functions. These candidates are then evaluated with emphasis on the business case, computing the expected design utility and system costs as well as risk, presenting the user with a greatly reduced selection of candidates. The user may further refine the search according to cost or benefit uncertainty, adaptability, or other performance metrics as needed.

  18. A Tool for Managing Software Architecture Knowledge

    SciTech Connect

    Babar, Muhammad A.; Gorton, Ian

    2007-08-01

    This paper describes a tool for managing architectural knowledge and rationale. The tool has been developed to support a framework for capturing and using architectural knowledge to improve the architecture process. This paper describes the main architectural components and features of the tool. The paper also provides examples of using the tool for supporting well-known architecture design and analysis methods.

  19. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively with particular attention to the design approach which utilizes a cache memory associated with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of a hit ratio. Memory management is envisioned as a virtual memory system implemented either through segmentation or paging. Addressing is discussed in terms of the various register designs adopted by current computers and those of advanced design.
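
    The hit-ratio analysis mentioned above reduces, in its simplest form, to the effective-access-time relation sketched below; the latency numbers are arbitrary examples, not figures from the study.

      def effective_access_time(hit_ratio: float, t_cache: float, t_main: float) -> float:
          """Average memory access time for a single-level cache model."""
          return hit_ratio * t_cache + (1.0 - hit_ratio) * t_main

      # With assumed latencies, a 95% hit ratio, a 100 ns cache, and a 1000 ns
      # main memory give 0.95*100 + 0.05*1000 = 145 ns on average.
      print(effective_access_time(0.95, 100.0, 1000.0))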

  20. Planning in subsumption architectures

    NASA Technical Reports Server (NTRS)

    Chalfant, Eugene C.

    1994-01-01

    A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.

  1. Naval open systems architecture

    NASA Astrophysics Data System (ADS)

    Guertin, Nick; Womble, Brian; Haskell, Virginia

    2013-05-01

    For the past 8 years, the Navy has been working on transforming the acquisition practices of the Navy and Marine Corps toward Open Systems Architectures to open up our business, gain competitive advantage, improve warfighter performance, speed innovation to the fleet and deliver superior capability to the warfighter within a shrinking budget. Why should Industry care? They should care because we in Government want the best Industry has to offer. Industry is in the business of pushing technology to greater and greater capabilities through innovation. Examples of innovations are on full display at this conference, such as exploring the impact of difficult environmental conditions on technical performance. Industry is creating the tools which will continue to give the Navy and Marine Corps important tactical advantages over our adversaries.

  2. Power Systems Control Architecture

    SciTech Connect

    James Davidson

    2005-01-01

    A diagram provided in the report depicts the complexity of the power systems control architecture used by the national power structure. It shows the structural hierarchy and the relationship of each system to the other systems interconnected to it. Each of these levels provides a different focus for vulnerability testing and has its own weaknesses. In evaluating each level, of prime concern is what vulnerabilities exist that provide a path into the system, either to cause the system to malfunction or to take control of a field device. An additional consideration is whether the system can be compromised in such a manner that the attacker can obtain critical information about the system and the portion of the national power structure that it controls.

  3. Full-Scale Flight Research Testbeds: Adaptive and Intelligent Control

    NASA Technical Reports Server (NTRS)

    Pahle, Joe W.

    2008-01-01

    This viewgraph presentation describes the adaptive and intelligent control methods used for aircraft survival. The contents include: 1) Motivation for Adaptive Control; 2) Integrated Resilient Aircraft Control Project; 3) Full-scale Flight Assets in Use for IRAC; 4) NASA NF-15B Tail Number 837; 5) Gen II Direct Adaptive Control Architecture; 6) Limited Authority System; and 7) 837 Flight Experiments. A simulated destabilization failure analysis along with experience and lessons learned are also presented.

  4. Temperature-adaptive Circuits on Reconfigurable Analog Arrays

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo S.; Keymeulen, Didier; Ramesham, Rajeshuni; Neff, Joseph; Katkoori, Srinivas

    2006-01-01

    This paper describes a new reconfigurable analog array (RAA) architecture and integrated circuit (IC) used to map analog circuits that can adapt to extreme temperatures under programmable control. Algorithm-driven adaptation takes place on the RAA IC. The algorithms are implemented in a separate Field Programmable Gate Array (FPGA) IC, co-located with the RAA in the extreme temperature environment. The experiments demonstrate circuit adaptation over a wide temperature range, from an extremely low temperature of -180 C to a high of 120 C.

  5. Secure Storage Architectures

    SciTech Connect

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine; Koch, Scott M; Naughton, III, Thomas J; Pogge, James R; Scott, Stephen L; Shipman, Galen M; Sorrillo, Lawrence

    2015-01-01

    The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: - review of relevant parallel filesystem technologies to identify assets and gaps; - review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; - definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); - investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystem used for HPC with particular attention on elements that contribute to creating secure storage. We outline the pieces for a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data. Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in (Chapter 3.1). These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystem (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network and practical complications with dynamic additions to the storage network, e.g., LNET. A challenge for VM based use cases will be to provide efficient IO forwarding of the parallel filesystem from the host to the guest (VM). There are promising options like para-virtualized filesystems to

  6. Dynamic Weather Routes Architecture Overview

    NASA Technical Reports Server (NTRS)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, the required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  7. An Architectural Strategy for Change.

    ERIC Educational Resources Information Center

    Holt, Raymond M., Ed.

    This volume presents the proceedings of the preconference institute of the Architecture for Public Libraries Committee of Library Administration Division's Building and Equipment section. The keynote address raises questions about architecture in a strategy for change. The remaining 14 articles and presentations are divided into five sections:…

  8. Interior Design in Architectural Education

    ERIC Educational Resources Information Center

    Gurel, Meltem O.; Potthoff, Joy K.

    2006-01-01

    The domain of interiors constitutes a point of tension between practicing architects and interior designers. Design of interior spaces is a significant part of architectural profession. Yet, to what extent does architectural education keep pace with changing demands in rendering topics that are identified as pertinent to the design of interiors?…

  9. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  10. A neuro-fuzzy architecture for real-time applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song

    1992-01-01

    Neural networks and fuzzy expert systems perform the same task of functional mapping using entirely different approaches. Each approach has certain unique features. The ability to learn specific input-output mappings from large input/output data possibly corrupted by noise and the ability to adapt or continue learning are some important features of neural networks. Fuzzy expert systems are known for their ability to deal with fuzzy information and incomplete/imprecise data in a structured, logical way. Since both of these techniques implement the same task (that of functional mapping--we regard 'inferencing' as one specific category under this class), a fusion of the two concepts that retains their unique features while overcoming their individual drawbacks will have excellent applications in the real world. In this paper, we arrive at a new architecture by fusing the two concepts. The architecture has the trainability/adaptability (based on input/output observations) property of the neural networks and the architectural features that are unique to fuzzy expert systems. It also does not require specific information such as fuzzy rules, defuzzification procedure used, etc., though any such information can be integrated into the architecture. We show that this architecture can provide better performance than is possible from a single two or three layer feedforward neural network. Further, we show that this new architecture can be used as an efficient vehicle for hardware implementation of complex fuzzy expert systems for real-time applications. A numerical example is provided to show the potential of this approach.

  11. Electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes: FFT-impedance spectroscopy of the growth process and magnetic properties

    PubMed Central

    2014-01-01

    The electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes has been investigated by fast Fourier transform-impedance spectroscopy (FFT-IS) in the frequency range from 75 Hz to 18.5 kHz. The impedance data could be fitted very well using an electric circuit equivalent model with a series resistance connected in series to a simple resistor-capacitor (RC) element and a Maxwell element. Based on the impedance data, the Co deposition in ultra-high aspect ratio InP membranes can be divided into two different Co deposition processes. The corresponding share of each process on the overall Co deposition can be determined directly from the transfer resistances of the two processes. The impedance data clearly show the beneficial impact of boric acid on the Co deposition and also indicate a diffusion limitation of boric acid in ultra-high aspect ratio InP membranes. The grown Co nanowires are polycrystalline with a very small grain size. They show a narrow hysteresis loop with a preferential orientation of the easy magnetization direction along the long nanowire axis due to the arising shape anisotropy of the Co nanowires. PMID:25050088
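
    A minimal sketch of evaluating such an equivalent-circuit model over the measured frequency band is given below; the component values are placeholders, and the Maxwell element is assumed here to be an R-C series branch connected across the main parallel RC element, which is one common convention and not necessarily the exact topology fitted in the paper.

      import numpy as np

      def model_impedance(f, Rs, R1, C1, Rm, Cm):
          """Impedance of an assumed equivalent circuit: series resistance Rs in
          series with a parallel R1||C1 element that also has a Maxwell branch
          (Rm in series with Cm) connected across it."""
          w = 2j * np.pi * f
          y_parallel = 1.0 / R1 + w * C1 + 1.0 / (Rm + 1.0 / (w * Cm))  # admittances add
          return Rs + 1.0 / y_parallel

      # Frequency range reported for the FFT-IS measurements (75 Hz to 18.5 kHz);
      # the component values below are arbitrary placeholders.
      freqs = np.logspace(np.log10(75.0), np.log10(18.5e3), 100)
      Z = model_impedance(freqs, Rs=10.0, R1=200.0, C1=1e-6, Rm=50.0, Cm=1e-5)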

  12. Electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes: FFT-impedance spectroscopy of the growth process and magnetic properties.

    PubMed

    Gerngross, Mark-Daniel; Carstensen, Jürgen; Föll, Helmut

    2014-01-01

    The electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes has been investigated by fast Fourier transform-impedance spectroscopy (FFT-IS) in the frequency range from 75 Hz to 18.5 kHz. The impedance data could be fitted very well using an electric circuit equivalent model with a series resistance connected in series to a simple resistor-capacitor (RC) element and a Maxwell element. Based on the impedance data, the Co deposition in ultra-high aspect ratio InP membranes can be divided into two different Co deposition processes. The corresponding share of each process on the overall Co deposition can be determined directly from the transfer resistances of the two processes. The impedance data clearly show the beneficial impact of boric acid on the Co deposition and also indicate a diffusion limitation of boric acid in ultra-high aspect ratio InP membranes. The grown Co nanowires are polycrystalline with a very small grain size. They show a narrow hysteresis loop with a preferential orientation of the easy magnetization direction along the long nanowire axis due to the arising shape anisotropy of the Co nanowires.

  13. Investigation of the variability of NIR in-line monitoring of roller compaction process by using Fast Fourier Transform (FFT) analysis.

    PubMed

    Feng, Tao; Wang, Feng; Pinal, Rodolfo; Wassgren, Carl; Carvajal, M Teresa

    2008-01-01

    The purpose of this research was to investigate the variability of the roller compaction process while monitoring in-line with near-infrared (NIR) spectroscopy. In this paper, a pragmatic method for determining the variability of in-line NIR monitoring of the roller compaction process was developed, and the variability limits were established. Fast Fourier Transform (FFT) analysis was used to study the source of the systematic fluctuations of the NIR spectra. An off-line variability analysis method was developed as well to simulate the in-line monitoring process in order to determine the variability limits of the roller compaction process. For this study, a binary formulation composed of acetaminophen and microcrystalline cellulose was prepared. Different roller compaction parameters such as roll speed and feeding rates were investigated to understand the variability of the process. The best-fit line slope of NIR spectra exhibited frequency dependence only on the roll speed regardless of the feeding rates. The eccentricity of the rolling motion of the rollers was identified as the major source of variability and correlated with the fluctuations of the slopes of NIR spectra. The off-line static and dynamic analyses of the compacts defined two different types of variability for the roller compaction process; the variability limits were established. These findings proved critical in optimizing the experimental setup of the roller compaction process by minimizing the variability of NIR in-line monitoring.
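
    The FFT step described above amounts to locating the dominant fluctuation frequency in the time series of best-fit spectral slopes; a minimal NumPy sketch is given below, with a synthetic slope signal standing in for the real in-line data, and the sampling rate and fluctuation frequency chosen arbitrarily.

      import numpy as np

      def dominant_frequency(signal, sample_rate_hz):
          """Return the strongest non-DC frequency component of a 1-D signal."""
          spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
          return freqs[np.argmax(spectrum[1:]) + 1]

      # Synthetic stand-in: slope values sampled at 10 Hz with a 0.5 Hz periodic
      # fluctuation (e.g. once per roll revolution) plus measurement noise.
      rng = np.random.default_rng(1)
      t = np.arange(0.0, 60.0, 0.1)
      slopes = 0.02 * np.sin(2 * np.pi * 0.5 * t) + 0.005 * rng.standard_normal(len(t))
      print(dominant_frequency(slopes, sample_rate_hz=10.0))   # close to 0.5 Hz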

  14. A new Methimazole sensor based on nanocomposite of CdS NPs-RGO/IL-carbon paste electrode using differential FFT continuous linear sweep voltammetry.

    PubMed

    Norouzi, Parviz; Gupta, Vinod Kumar; Larijani, Bagher; Ganjali, Mohammad Reza; Faridbod, Farnoush

    2014-09-01

    A Methimazole sensor was designed and constructed based on a nanocomposite of carbon, ionic liquid, reduced graphene oxide (RGO), and CdS nanoparticles. The sensor signal was obtained by the differential FFT continuous linear sweep voltammetry (DFFTCLSV) technique. The potential waveform contains two sections: a preconcentration potential and a potential ramp. In this detection technique, after subtracting the background (noise) current, the electrode response was calculated based on the partial and total charge exchanged at the electrode surface. The combination of RGO and CdS nanoparticles catalyzes the electron transfer, which results in amplification of the sensor signal. The results showed that the sensor response was proportional to the Methimazole concentration in the range of 2.0 to 300 nM, with a detection limit of 5.5×10⁻¹⁰ M. The sensor showed good reproducibility, long-term usage stability, and accuracy. The sensor surface was characterized by atomic force microscopy and electrochemical impedance spectroscopy. Moreover, the proposed sensor exhibited good accuracy, an R.S.D. value of 2.82%, and a response time of less than 7 s.
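
    The charge-based response calculation mentioned above can be sketched as follows: subtract a background voltammogram from the analyte voltammogram and integrate the difference current over the sweep to obtain a net exchanged charge. The scan parameters and synthetic currents below are illustrative assumptions and do not reproduce the published DFFTCLSV waveform or signal processing.

    ```python
    # Minimal sketch: background subtraction and charge integration for a
    # single linear potential sweep. Scan rate and currents are assumed.
    import numpy as np

    scan_rate = 1.0                          # sweep rate in V/s (assumed)
    potential = np.linspace(0.0, 1.0, 500)   # applied potential ramp (V)
    t = potential / scan_rate                # time axis for the sweep (s)

    background = 1.0e-6 * potential          # capacitive-like background current (A)
    faradaic = 2.0e-6 * np.exp(-((potential - 0.6) / 0.05) ** 2)  # synthetic analyte peak
    analyte = background + faradaic

    diff_current = analyte - background      # background-subtracted response
    # Trapezoidal integration of current over time gives the net exchanged charge.
    net_charge = np.sum(0.5 * (diff_current[1:] + diff_current[:-1]) * np.diff(t))
    print(f"net charge response: {net_charge:.3e} C")
    ```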

  15. Electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes: FFT-impedance spectroscopy of the growth process and magnetic properties

    NASA Astrophysics Data System (ADS)

    Gerngross, Mark-Daniel; Carstensen, Jürgen; Föll, Helmut

    2014-06-01

    The electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes has been investigated by fast Fourier transform impedance spectroscopy (FFT-IS) in the frequency range from 75 Hz to 18.5 kHz. The impedance data could be fitted very well using an equivalent circuit model with a series resistance connected to a simple resistor-capacitor (RC) element and a Maxwell element. Based on the impedance data, the Co deposition in ultra-high aspect ratio InP membranes can be divided into two different deposition processes. The corresponding share of each process in the overall Co deposition can be determined directly from the transfer resistances of the two processes. The impedance data clearly show the beneficial impact of boric acid on the Co deposition and also indicate a diffusion limitation of boric acid in ultra-high aspect ratio InP membranes. The grown Co nanowires are polycrystalline with a very small grain size. They show a narrow hysteresis loop with a preferential orientation of the easy magnetization direction along the long nanowire axis due to the shape anisotropy of the Co nanowires.

  16. Mission Architecture Comparison for Human Lunar Exploration

    NASA Technical Reports Server (NTRS)

    Geffre, Jim; Robertson, Ed; Lenius, Jon

    2006-01-01

    The Vision for Space Exploration outlines a bold new national space exploration policy that holds as one of its primary objectives the extension of human presence outward into the Solar System, starting with a return to the Moon in preparation for the future exploration of Mars and beyond. The National Aeronautics and Space Administration is currently engaged in several preliminary analysis efforts to develop the requirements necessary for implementing this objective in a manner that is both sustainable and affordable. Such analyses investigate various operational concepts, or mission architectures, by which humans can best travel to the lunar surface, live and work there for increasing lengths of time, and then return to Earth. This paper reports on a trade study, conducted in support of NASA's Exploration Systems Mission Directorate, investigating the relative merits of three alternative lunar mission architecture strategies. The three architectures use for reference a lunar exploration campaign consisting of multiple 90-day expeditions to the Moon's polar regions, a strategy selected for its high perceived scientific and operational value. The first architecture discussed incorporates the lunar orbit rendezvous approach employed by the Apollo lunar exploration program. This concept has been adapted from Apollo to meet the particular demands of a long-stay polar exploration campaign while assuring the safe return of the crew to Earth. Lunar orbit rendezvous is also used as the baseline against which the other alternate concepts are measured. The first such alternative, libration point rendezvous, utilizes the unique characteristics of the cislunar libration point, instead of a low-altitude lunar parking orbit, as a rendezvous and staging node. Finally, a mission strategy which does not incorporate rendezvous after the crew ascends from the Moon is also studied. In this mission strategy, the crew returns directly to Earth from the lunar surface, and is

  17. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    NASA Technical Reports Server (NTRS)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance. (3) Strategic Elements: (3a) Architectural Principles, (3b) Architecture Board, (3c) Architecture Compliance. (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level.

  18. Adaptation of adaptive optics systems.

    NASA Astrophysics Data System (ADS)

    Xin, Yu; Zhao, Dazun; Li, Chen

    1997-10-01

    In this paper, a concept of adaptation of adaptive optical systems (AAOS) is proposed. The AAOS has a certain real-time optimization capability against variations in the brightness m of the detected objects, the atmospheric coherence length r0, and the atmospheric time constant τ, achieved by changing the subaperture number and diameter, the dynamic range, and the system's temporal response. The necessity of an AAOS using a Hartmann-Shack wavefront sensor and some technical approaches are discussed. The scheme and simulation of an AAOS with variable subaperture capability, implemented in both hardware and software, are presented as an example of the system.
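
    A minimal sketch of the adaptation logic implied by the abstract is given below: the Hartmann-Shack subaperture size is matched to the Fried parameter r0 and then coarsened if the object is too faint to provide enough photons per subaperture. The telescope diameter, photon-budget model, and thresholds are assumptions for illustration, not the authors' design.

    ```python
    # Minimal sketch of subaperture adaptation against r0 and object
    # brightness m. The photon model and thresholds are assumed values.
    import math

    def choose_subapertures(D, r0, m, exposure_s, min_photons=100.0):
        """Return the number of Hartmann-Shack subapertures across a pupil of
        diameter D (m) for Fried parameter r0 (m) and object magnitude m,
        coarsening the sampling when the photon budget per subaperture is low."""
        n = max(1, math.floor(D / r0))       # aim for subaperture size ~ r0
        while n > 1:
            d_sub = D / n
            # Crude photon estimate: ~1e10 photons/s/m^2 at magnitude 0 (assumed).
            photons = 1e10 * 10.0 ** (-0.4 * m) * d_sub ** 2 * exposure_s
            if photons >= min_photons:
                break
            n -= 1                           # fainter object -> larger subapertures
        return n

    # Example: 1 m telescope, r0 = 10 cm, magnitude-8 object, 2 ms exposure.
    print(choose_subapertures(D=1.0, r0=0.10, m=8, exposure_s=0.002))
    ```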

  19. Architecture for Cognitive Networking within NASA's Future Space Communications Infrastructure

    NASA Technical Reports Server (NTRS)

    Clark, Gilbert J., III; Eddy, Wesley M.; Johnson, Sandra K.; Barnes, James; Brooks, David

    2016-01-01

    Future space mission concepts and designs pose many networking challenges for command, telemetry, and science data applications with diverse end-to-end data delivery needs. For future end-to-end architecture designs, a key challenge is meeting the expected application quality-of-service requirements for multiple simultaneous mission data flows, with options to use diverse onboard local data buses, commercial ground networks, and multiple satellite relay constellations in LEO, MEO, GEO, or even deep-space relay links. Effectively utilizing a complex network topology requires orchestration and direction that spans the many discrete, individually addressable computer systems, causing them to act in concert to achieve the overall network goals. The system must be intelligent enough not only to function under nominal conditions, but also to adapt to unexpected situations and to reorganize or adapt to perform roles not originally intended for the system or explicitly programmed. This paper describes the architectural features of cognitive networking within the future NASA space communications infrastructure and its interaction with legacy systems and infrastructure in the interim. The paper begins by discussing the need for increased automation, including inter-system collaboration. This discussion motivates the features of an architecture that includes cognitive networking for future missions and relays, interoperating with both existing endpoint-based networking models and emerging information-centric models. From this basis, we discuss progress on a proof-of-concept implementation of this architecture as an on-orbit cognitive networking application on the SCaN Testbed attached to the International Space Station.

  20. A novel peak detection approach with chemical noise removal using short-time FFT for prOTOF MS data.

    PubMed

    Zhang, Shuqin; Wang, Honghui; Zhou, Xiaobo; Hoehn, Gerard T; DeGraba, Thomas J; Gonzales, Denise A; Suffredini, Anthony F; Ching, Wai-Ki; Ng, Michael K; Wong, Stephen T C

    2009-08-01

    Peak detection is a pivotal first step in biomarker discovery from MS data and can significantly influence the results of downstream data analysis steps. We developed a novel automatic peak detection method for prOTOF MS data which does not require a priori knowledge of protein masses. Random noise is removed by an undecimated wavelet transform, and chemical noise is attenuated by an adaptive short-time discrete Fourier transform. Isotopic peaks corresponding to a single protein are combined by extracting an envelope over them. Depending on the S/N, the desired peaks in each individual spectrum are detected, and those with the highest intensity within their peak clusters are recorded. The peaks common to all the spectra are identified by choosing an appropriate cut-off threshold in complete-linkage hierarchical clustering. To remove the 1 Da shifting of the peaks, the peak corresponding to the same protein is taken as the detected peak occurring most frequently within its neighborhood. We validated this method using a data set of serial peptide and protein calibration standards. Compared with the MoverZ program, our new method detects more peaks and significantly enhances the S/N of the peaks after chemical noise removal. We then successfully applied this method to detect peaks with S/N ≥ 2 in prOTOF MS spectra of albumin and albumin-bound proteins from serum samples of 59 patients with carotid artery disease compared to vascular disease-free patients. Our method is easily implemented and is highly effective in defining peaks to be used for disease classification or to highlight potential biomarkers.
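
    To make the chemical-noise-removal idea concrete, the sketch below suppresses a quasi-periodic baseline with a short-time FFT, applies Gaussian smoothing as a crude stand-in for the undecimated wavelet denoising step, and keeps peaks with S/N of at least 2. The synthetic spectrum, window sizes, and noise-band location are assumptions, not the published prOTOF MS pipeline.

    ```python
    # Minimal sketch: short-time FFT chemical-noise suppression followed by
    # simple S/N-thresholded peak detection. All parameters are assumed.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import stft, istft, find_peaks

    mz = np.linspace(1000.0, 5000.0, 8192)            # m/z axis (assumed)
    signal = np.zeros_like(mz)
    for center, height in [(1500.0, 5.0), (2600.0, 3.0), (4100.0, 4.0)]:
        signal += height * np.exp(-((mz - center) / 5.0) ** 2)   # synthetic protein peaks
    chemical_noise = 0.8 * np.sin(2.0 * np.pi * mz / 9.0)        # quasi-periodic baseline
    spectrum = signal + chemical_noise + 0.1 * np.random.randn(mz.size)

    # Short-time FFT, zero the narrow band carrying the periodic chemical
    # noise, and reconstruct the spectrum.
    fs = 1.0 / (mz[1] - mz[0])                        # samples per m/z unit
    f, seg_t, Z = stft(spectrum, fs=fs, nperseg=256)
    noise_band = (f > 0.09) & (f < 0.13)              # ~1/9 cycles per m/z unit (assumed)
    Z[noise_band, :] = 0.0
    _, cleaned = istft(Z, fs=fs, nperseg=256)
    cleaned = cleaned[: mz.size]

    noise_level = np.std(cleaned[:500])               # assumed peak-free region
    smoothed = gaussian_filter1d(cleaned, sigma=3)    # stand-in for wavelet denoising
    peaks, _ = find_peaks(smoothed, height=2.0 * noise_level)   # keep S/N >= 2
    print("detected peak m/z:", np.round(mz[peaks], 1))
    ```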