Science.gov

Sample records for adaptive fft architecture

  1. A High-Throughput, Adaptive FFT Architecture for FPGA-Based Space-Borne Data Processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Kayla; Zheng, Jason; He, Yutao; Shah, Biren

    2010-01-01

    Historically, computationally intensive data processing for space-borne instruments has relied heavily on ground-based computing resources. But with recent advances in the functional density of Field-Programmable Gate Arrays (FPGAs), there has been an increasing desire to shift more processing on-board, thereby relaxing the downlink data bandwidth requirements. Fast Fourier Transforms (FFTs) are commonly used building blocks for data processing applications, with a growing need to increase the FFT block size. Many existing FFT architectures have mainly emphasized low power consumption or resource usage; but as the block size of the FFT grows, the throughput is often compromised first. In addition to power and resource constraints, space-borne digital systems are also limited to a small set of space-qualified memory elements, which typically lag behind their commercially available counterparts in capacity and bandwidth. This external-memory bandwidth limitation creates a bottleneck for a high-throughput FFT design with a large block size. In this paper, we present the Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture for a moderately large block size (32K), with consideration given to power consumption and resource usage as well as throughput. We also show that the architecture can be easily adapted to different FFT block sizes with different throughput and power requirements. The design is completely contained within an FPGA without relying on external memories. Implementation results are summarized.

  2. High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Kayla; Zheng, Jason X.; He, Yutao; Shah, Biren N.

    2011-01-01

    Exponential growth in microelectronics technology such as field-programmable gate arrays (FPGAs) has enabled high-performance spaceborne instruments with increasing onboard data processing capabilities. As a commonly used digital signal processing (DSP) building block, the fast Fourier transform (FFT) has been of great interest in onboard data processing applications, which need to strike a reasonable balance between high performance (throughput, block size, etc.) and low resource usage (power, silicon footprint, etc.). It is also desirable for a single design to be reusable and adaptable to instruments with different requirements. The Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture was developed, which exploits the high-throughput benefits of the parallel FFT structure and the low resource usage of Singleton's single-butterfly method. The result is a wide-kernel, multi-pass, adaptive FFT architecture. The 32K-point MPWK-FFT architecture includes 32 radix-2 butterflies, 64 FIFOs to store the real inputs, 64 FIFOs to store the imaginary inputs, complex twiddle-factor storage, and FIFO logic to route the outputs to the correct FIFO. The inputs are stored sequentially in the FIFOs, and the outputs of each butterfly are written sequentially first into the even FIFO, then into the odd FIFO. Because of the order in which the outputs are written, the even FIFOs, at 768 entries each, are 1.5 times deeper than the odd FIFOs, at 512 entries each. The total memory needed for data storage, assuming that each sample is 36 bits, is 2.95 Mbits. The twiddle factors are stored in ROM internal to the FPGA for fast access; the total memory required for them is 589.9 Kbits. This FFT structure combines the high throughput of parallel FFT kernels with the low resource usage of multi-pass FFT kernels, along with the desired adaptability.
Space instrument missions that need onboard FFT capabilities such as the
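    The memory figures quoted in the abstract can be checked directly from the stated FIFO counts and depths; this short arithmetic sketch reproduces them (the only assumption is that the twiddle ROM holds N/2 factors at the same 36-bit width, which matches the quoted 589.9 Kbits to rounding).

    ```python
    # Sanity check of the MPWK-FFT memory figures quoted in the abstract:
    # 64 even FIFOs of depth 768, 64 odd FIFOs of depth 512, 36 bits/sample.
    BITS_PER_SAMPLE = 36
    N = 32 * 1024                       # 32K-point FFT

    data_bits = 64 * (768 + 512) * BITS_PER_SAMPLE
    twiddle_bits = (N // 2) * BITS_PER_SAMPLE   # assumed: N/2 twiddle factors

    print(f"data storage:    {data_bits / 1e6:.2f} Mbits")    # 2.95 Mbits
    print(f"twiddle storage: {twiddle_bits / 1e3:.1f} Kbits") # 589.8 Kbits
    ```

    Note also that 768 = 1.5 × 512, matching the stated 1.5:1 depth ratio between even and odd FIFOs.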

  3. FFT Computation with Systolic Arrays, A New Architecture

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The use of the Cooley-Tukey algorithm for computing the 1-D FFT lends itself to a particular matrix factorization which suggests direct implementation by linearly connected systolic arrays. Here we present a new systolic architecture that embodies this algorithm. This implementation requires fewer processors and fewer memory cells than other recent implementations, while retaining all the advantages of systolic arrays. For the decimation-in-frequency case, word-serial data input allows continuous real-time operation without the need for a serial-to-parallel conversion device. No control or data-stream switching is necessary. Computer simulation of this architecture was done in the context of a 1024-point DFT with a fixed-point processor, and CMOS processor implementation has started.
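    The Cooley-Tukey factorization referenced above splits an N-point DFT into two N/2-point DFTs plus one stage of butterflies; a minimal software sketch (not the paper's systolic implementation) makes the recursion concrete, verified against a naive DFT:

    ```python
    import cmath

    def fft(x):
        """Radix-2 decimation-in-time Cooley-Tukey FFT (len(x) a power of 2)."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])          # DFT of even-indexed samples
        odd = fft(x[1::2])           # DFT of odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):      # one stage of butterflies
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle * odd
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

    def dft(x):
        """Naive O(n^2) DFT, used as a reference."""
        n = len(x)
        return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for j in range(n)) for k in range(n)]

    x = [complex(i, 0) for i in range(8)]
    assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft(x)))
    ```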

  4. Modular architecture for high performance implementation of the FFT algorithm

    SciTech Connect

    Spiecha, K.; Jarocki, R.

    1990-12-01

    This paper presents a new VLSI-oriented architecture for computing the discrete Fourier transform. It consists of a homogeneous structure of processing elements. The structure has a performance of 1/t transforms per second, where t is the time needed to execute a single butterfly computation or the time needed to collect a complete vector of samples, whichever is longer. Although the system is not optimal (it achieves O(N^3 log^4 N) area-time^2 performance), the architecture is modular and makes it possible to design a system which performs an FFT of any size without extra circuitry. Moreover, the system can provide built-in self-test and self-restructuring. The system consists of only one type of integrated circuit, whose structure is independent of the transform size, which considerably reduces the cost of implementation.
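    The stated rate of 1/t transforms per second, with t the longer of the butterfly time and the sample-collection time, can be illustrated with hypothetical timings (the numbers below are not from the paper):

    ```python
    # Throughput model from the abstract: rate = 1/t, where t is the longer
    # of the butterfly-execution time and the input-collection time.
    def transforms_per_second(t_butterfly_s, t_collect_s):
        t = max(t_butterfly_s, t_collect_s)
        return 1.0 / t

    # Hypothetical: 50 ns per butterfly pass, 40 us to collect a full
    # input vector -> the design is collection-limited.
    rate = transforms_per_second(50e-9, 40e-6)
    print(f"{rate:.0f} transforms/s")   # 25000, set by the collection time
    ```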

  5. Adaptive Pre-FFT Equalizer with High-Precision Channel Estimator for ISI Channels

    NASA Astrophysics Data System (ADS)

    Yoshida, Makoto

    We present an attractive approach to OFDM transmission using an adaptive pre-FFT equalizer, which can select an ICI-reduction mode according to channel conditions, and a degenerated-inverse-matrix-based channel estimator (DIME), which uses a cyclic sinc-function matrix uniquely determined by the transmitted subcarriers. In addition to simulation results, the proposed system with an adaptive pre-FFT equalizer and DIME has been laboratory-tested using a software-defined radio (SDR)-based test bed. The simulation and experimental results demonstrate that the system, at a rate of more than 100 Mbps, can provide a bit error rate of less than 10^-3 over a fast multi-path fading channel with a moving velocity of more than 200 km/h and a delay spread of 1.9 µs (maximum delay path of 7.3 µs) in the 5-GHz band.

  6. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  7. High-resolution optical coherence tomography using self-adaptive FFT and array detection

    NASA Astrophysics Data System (ADS)

    Zhao, Yonghua; Chen, Zhongping; Xiang, Shaohua; Ding, Zhihua; Ren, Hongwu; Nelson, J. Stuart; Ranka, Jinendra K.; Windeler, Robert S.; Stentz, Andrew J.

    2001-05-01

    We developed a novel optical coherence tomography (OCT) system that utilizes broadband continuum generation for high axial resolution and a high numerical-aperture (NA) objective for high lateral resolution (<5 µm). The optimal focusing point was dynamically compensated during axial scanning so that it remained at the position whose optical path length equals that of the reference arm. This yields a uniform focal spot size (<5 µm) at different depths. A new self-adaptive fast Fourier transform (FFT) algorithm was developed to digitally demodulate the interference fringes. The system employed a four-channel detector array for speckle reduction, which significantly improved the image's signal-to-noise ratio.
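    The abstract does not specify the self-adaptive FFT algorithm, but a standard FFT route to demodulating interference fringes is analytic-signal envelope detection: transform, zero the negative frequencies, inverse-transform, and take the magnitude. A sketch of that generic technique (using a plain DFT for self-containment, not the paper's method):

    ```python
    import cmath
    import math

    def dft(x, sign=-1):
        """Naive DFT; sign=+1 gives the (unscaled) inverse transform."""
        n = len(x)
        return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                    for j in range(n)) for k in range(n)]

    def envelope(fringe):
        """Fringe envelope via the analytic signal: keep DC and Nyquist,
        double the positive frequencies, zero the negative ones."""
        n = len(fringe)
        spec = dft(fringe)
        analytic_spec = [0j] * n
        analytic_spec[0] = spec[0]
        for k in range(1, n // 2):
            analytic_spec[k] = 2 * spec[k]
        if n % 2 == 0:
            analytic_spec[n // 2] = spec[n // 2]
        analytic = [v / n for v in dft(analytic_spec, sign=+1)]
        return [abs(v) for v in analytic]

    # Toy fringe: carrier at 1/8 the sample rate under a slow Gaussian envelope.
    n = 64
    env = [math.exp(-((i - n / 2) / 10) ** 2) for i in range(n)]
    fringe = [env[i] * math.cos(2 * math.pi * i / 8) for i in range(n)]
    rec = envelope(fringe)
    assert abs(rec[n // 2] - env[n // 2]) < 0.1   # envelope recovered at peak
    ```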

  8. Architecture for Adaptive Intelligent Systems

    NASA Technical Reports Server (NTRS)

    Hayes-Roth, Barbara

    1993-01-01

    We identify a class of niches to be occupied by 'adaptive intelligent systems (AISs)'. In contrast with niches occupied by typical AI agents, AIS niches present situations that vary dynamically along several key dimensions: different combinations of required tasks, different configurations of available resources, contextual conditions ranging from benign to stressful, and different performance criteria. We present a small class hierarchy of AIS niches that exhibit these dimensions of variability and describe a particular AIS niche, ICU (intensive care unit) patient monitoring, which we use for illustration throughout the paper. We have designed and implemented an agent architecture that supports all of these different kinds of adaptation by exploiting a single underlying theoretical concept: an agent dynamically constructs explicit control plans to guide its choices among situation-triggered behaviors. We illustrate the architecture and its support for adaptation with examples from Guardian, an experimental agent for ICU monitoring.

  9. Adaptive reconfigurable distributed sensor architecture

    NASA Astrophysics Data System (ADS)

    Akey, Mark L.

    1997-07-01

    The infancy of unattended ground-based sensors is quickly coming to an end with the arrival of on-board GPS, networking, and multiple sensing capabilities. Unfortunately, their use is only first-order at best: GPS assists with sensor report registration; networks push sensor reports back to the warfighter and forward control information to the sensors; multispectral sensing is a preset, pre-deployment consideration; and the scalability of large sensor networks is questionable. Current architectures provide little synergy among or within the sensors either before or after deployment, and do not map well to the tactical user's organizational structures and constraints. A new distributed sensor architecture is defined which moves well beyond single-sensor, single-task architectures. Advantages include: (1) automatic mapping of tactical direction to multiple sensors' tasks; (2) decentralized, distributed management of sensor resources and tasks; (3) software reconfiguration of deployed sensors; (4) network scalability and flexibility to meet the constraints of tactical deployments, and traditional combat organizations and hierarchies; and (5) adaptability to new battlefield communication paradigms such as BADD (Battlefield Analysis and Data Dissemination). The architecture is supported in two areas: a recursive, structural definition of resource configuration and management via loose associations; and a hybridization of intelligent software agents with teleprogramming capabilities. The distributed sensor architecture is examined within the context of air-deployed ground sensors with acoustic, communication direction-finding, and infra-red capabilities. Advantages and disadvantages of the architecture are examined. Consideration is given to extended sensor life (up to 6 months), post-deployment sensor reconfiguration, limited on-board sensor resources (processor and memory), and bandwidth.
It is shown that technical tasking of the sensor suite can be automatically

  10. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  11. Communication Efficient Multi-processor FFT

    NASA Astrophysics Data System (ADS)

    Lennart Johnsson, S.; Jacquemin, Michel; Krawitz, Robert L.

    1992-10-01

    Computing the fast Fourier transform on a distributed memory architecture by a direct pipelined radix-2, a bi-section, or a multi-section algorithm all yield the same communications requirement, if communication for all FFT stages can be performed concurrently, the input data is in normal order, and the data allocation is consecutive. With a cyclic data allocation, or bit-reversed input data and a consecutive allocation, multi-sectioning offers a reduced communications requirement by approximately a factor of two. For a consecutive data allocation and normal input order, a decimation-in-time FFT requires that P/N + d - 2 twiddle factors be stored for P elements distributed evenly over N processors, with the axis subject to transformation distributed over 2^d processors. No communication of twiddle factors is required. The same storage requirements hold for a decimation-in-frequency FFT, bit-reversed input order, and consecutive data allocation. The opposite combination of FFT type and data ordering requires a factor of log_2 N more storage for N processors. The peak performance for a Connection Machine system CM-200 implementation is 12.9 Gflops/s in 32-bit precision, and 10.7 Gflops/s in 64-bit precision for unordered transforms local to each processor. The corresponding execution rates for ordered transforms are 11.1 Gflops/s and 8.5 Gflops/s, respectively. For distributed one- and two-dimensional transforms the peak performance for unordered transforms exceeds 5 Gflops/s in 32-bit precision and 3 Gflops/s in 64-bit precision. Three-dimensional transforms execute at a slightly lower rate. Distributed ordered transforms execute at a rate of about 1/2 to 2/3 of the unordered transforms.
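    The normal-versus-bit-reversed ordering that drives the storage and communication trade-offs above comes from the radix-2 FFT itself: an in-place decimation-in-time FFT consumes (or a decimation-in-frequency FFT produces) data in bit-reversed index order. A small sketch of that permutation:

    ```python
    def bit_reverse_indices(n):
        """Permutation mapping normal order to bit-reversed order
        (n must be a power of two)."""
        bits = n.bit_length() - 1
        # Reverse the 'bits'-wide binary representation of each index.
        return [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]

    print(bit_reverse_indices(8))   # [0, 4, 2, 6, 1, 5, 3, 7]
    ```

    The permutation is its own inverse, which is why a single reordering pass converts between an "unordered" (bit-reversed) and an ordered transform.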

  12. The genetic architecture of climatic adaptation of tropical cattle.

    PubMed

    Porto-Neto, Laercio R; Reverter, Antonio; Prayaga, Kishore C; Chan, Eva K F; Johnston, David J; Hawken, Rachel J; Fordyce, Geoffry; Garcia, Jose Fernando; Sonstegard, Tad S; Bolormaa, Sunduimijid; Goddard, Michael E; Burrow, Heather M; Henshall, John M; Lehnert, Sigrid A; Barendse, William

    2014-01-01

    Adaptation of global food systems to climate change is essential to feed the world. Tropical cattle production, a mainstay of profitability for farmers in the developing world, is dominated by heat, lack of water, poor quality feedstuffs, parasites, and tropical diseases. In these systems European cattle suffer significant stock loss, and the cross breeding of taurine x indicine cattle is unpredictable due to the dilution of adaptation to heat and tropical diseases. We explored the genetic architecture of ten traits of tropical cattle production using genome wide association studies of 4,662 animals varying from 0% to 100% indicine. We show that nine of the ten have genetic architectures that include genes of major effect, and in one case, a single location that accounted for more than 71% of the genetic variation. One genetic region in particular had effects on parasite resistance, yearling weight, body condition score, coat colour and penile sheath score. This region, extending 20 Mb on BTA5, appeared to be under genetic selection possibly through maintenance of haplotypes by breeders. We found that the amount of genetic variation and the genetic correlations between traits did not depend upon the degree of indicine content in the animals. Climate change is expected to expand some conditions of the tropics to more temperate environments, which may impact negatively on global livestock health and production. Our results point to several important genes that have large effects on adaptation that could be introduced into more temperate cattle without detrimental effects on productivity.

  15. L1 adaptive output-feedback control architectures

    NASA Astrophysics Data System (ADS)

    Kharisov, Evgeny

    This research focuses on the development of L1 adaptive output-feedback control. The objective is to extend the L1 adaptive control framework to a wider class of systems, as well as to obtain architectures that afford more straightforward tuning. We start by considering an existing L1 adaptive output-feedback controller for non-strictly positive real systems based on a piecewise-constant adaptation law. It is shown that L1 adaptive control architectures achieve decoupling of adaptation from control, which leads to time-delay and gain margins that are bounded away from zero in the presence of arbitrarily fast adaptation. Computed performance bounds provide quantifiable guarantees for both the system output and the control signal in transient and steady state. A noticeable feature of the L1 adaptive controller is that its output behavior can be made close to the behavior of a linear time-invariant system. In particular, proper design of the lowpass filter can achieve an output response which almost scales for different step reference commands. This property is relevant to applications with a human operator in the loop (for example, control augmentation systems of piloted aircraft), since predictability of the system response is necessary for adequate performance of the operator. Next we present applications of the L1 adaptive output-feedback controller in two different fields of engineering: feedback control of human anesthesia, and ascent control of a NASA crew launch vehicle (CLV). The purpose of the feedback controller for anesthesia is to ensure that the patient's level of sedation during surgery follows a prespecified profile. The L1 controller is enabled by the anesthesiologist after he/she achieves a sufficient patient sedation level by introducing sedatives manually. This problem formulation requires a safe switching mechanism that avoids controller initialization transients.
For this purpose, we used an L1 adaptive controller with special output predictor initialization routine

  16. Future Service Adaptive Access/Aggregation Network Architecture

    NASA Astrophysics Data System (ADS)

    Ikeda, Hiroki; Takeshita, Hidetoshi; Okamoto, Satoru

    The emergence of new services in the cloud computing era has made smooth service migration an important issue in access networks. However, different types of equipment are typically used for the different services due to differences in service requirements. This leads to an increase not only in capital expenditures but also in operational expenditures. Here we propose using a service adaptive approach as a solution to this problem. We analyze the requirements of a future access network in terms of service, network, and node. We discuss available access network technologies, including the passive optical network and the single-star network. Finally, we present a future service adaptive access/aggregation network and its architecture, along with a programmable optical line terminal and optical network unit, discuss its benefits, and describe example services that it would support.

  17. Adaptive resource allocation architecture applied to line tracking

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Pace, Donald W.

    2000-04-01

    Recent research has demonstrated the benefits of a multiple hypothesis, multiple model sonar line tracking solution, achieved at significant computational cost. We have developed an adaptive architecture that trades computational resources for algorithm complexity based on environmental conditions. A Fuzzy Logic Rule-Based approach is applied to adaptively assign algorithmic resources to meet system requirements. The resources allocated by the Fuzzy Logic algorithm include (1) the number of hypotheses permitted (yielding multi-hypothesis and single-hypothesis modes), (2) the number of signal models to use (yielding an interacting multiple model capability), (3) a new track likelihood for hypothesis generation, (4) track attribute evaluator activation (for signal-to-noise ratio, frequency bandwidth, and others), and (5) adaptive cluster threshold control. Algorithm allocation is driven by a comparison of current throughput rates to a desired real-time rate. The Fuzzy Logic Controlled (FLC) line tracker, a single hypothesis line tracker, and a multiple hypothesis line tracker are compared on real sonar data. System resource usage results demonstrate the utility of the FLC line tracker.
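    The core loop — compare measured throughput to the real-time target and shed or restore algorithmic complexity accordingly — can be sketched with simple crisp thresholds (the paper's fuzzy rules, thresholds, and resource levels are not specified; everything below is illustrative):

    ```python
    # Minimal sketch of throughput-driven resource adaptation in the spirit
    # of the FLC line tracker. Thresholds and levels are hypothetical.
    def adapt_resources(hypotheses, throughput_ratio, max_hypotheses=8):
        """throughput_ratio = measured rate / required real-time rate."""
        if throughput_ratio < 0.9:        # falling behind: simplify
            return max(1, hypotheses // 2)
        if throughput_ratio > 1.5:        # ample headroom: allow more hypotheses
            return min(max_hypotheses, hypotheses * 2)
        return hypotheses                 # near target: hold steady

    assert adapt_resources(8, 0.5) == 4   # overloaded -> halve hypotheses
    assert adapt_resources(4, 2.0) == 8   # headroom -> double hypotheses
    assert adapt_resources(4, 1.0) == 4   # on target -> unchanged
    ```

    A fuzzy-logic version would replace the hard 0.9/1.5 cutoffs with membership functions and blend the rule outputs, which avoids oscillating between modes near a threshold.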

  18. FFT and cone-beam CT reconstruction on graphics hardware

    NASA Astrophysics Data System (ADS)

    Després, Philippe; Sun, Mingshan; Hasegawa, Bruce H.; Prevrhal, Sven

    2007-03-01

    Graphics processing units (GPUs) are increasingly used for general-purpose calculations. Their pipelined architecture can be exploited to accelerate various parallelizable algorithms. Medical imaging applications are inherently well suited to benefit from the development of GPU-based computational platforms. We evaluate in this work the potential of GPUs to improve the execution speed of two common medical imaging tasks, namely Fourier transforms and tomographic reconstructions. A two-dimensional fast Fourier transform (FFT) algorithm was GPU-implemented and compared, in terms of execution speed, to two popular CPU-based FFT routines. Similarly, the Feldkamp, Davis and Kress (FDK) algorithm for cone-beam tomographic reconstruction was implemented on the GPU and its performance compared to a CPU version. Different reconstruction strategies were employed to assess the performance of various GPU memory layouts. For the specific hardware used, GPU implementations of the FFT were up to 20 times faster than their CPU counterparts, but slower than highly optimized CPU versions of the algorithm. Tomographic reconstructions were faster on the GPU by a factor of up to 30, allowing 256^3-voxel reconstructions of 256 projections in about 20 seconds. Overall, GPUs are an attractive alternative to other imaging-dedicated computing hardware like application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) in terms of cost, simplicity and versatility. With the development of simpler language extensions and programming interfaces, GPUs are likely to become essential tools in medical imaging.

  19. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.

  20. Maize canopy architecture and adaptation to high plant density in long term selection programs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Grain yield since the 1930s has increased more than five-fold, in large part due to improvements in adaptation to high plant density. Changes to plant architecture that are associated with improved light interception have made a major contribution to improved adaptation to high plant density. Improved ...

  1. Rice Root Architectural Plasticity Traits and Genetic Regions for Adaptability to Variable Cultivation and Stress Conditions.

    PubMed

    Sandhu, Nitika; Raman, K Anitha; Torres, Rolando O; Audebert, Alain; Dardou, Audrey; Kumar, Arvind; Henry, Amelia

    2016-08-01

    Future rice (Oryza sativa) crops will likely experience a range of growth conditions, and root architectural plasticity will be an important characteristic to confer adaptability across variable environments. In this study, the relationship between root architectural plasticity and adaptability (i.e. yield stability) was evaluated in two traditional × improved rice populations (Aus 276 × MTU1010 and Kali Aus × MTU1010). Forty contrasting genotypes were grown in direct-seeded upland and transplanted lowland conditions with drought and drought + rewatered stress treatments in lysimeter and field studies and a low-phosphorus stress treatment in a Rhizoscope study. Relationships among root architectural plasticity for root dry weight, root length density, and percentage lateral roots with yield stability were identified. Selected genotypes that showed high yield stability also showed a high degree of root plasticity in response to both drought and low phosphorus. The two populations varied in the soil depth effect on root architectural plasticity traits, none of which resulted in reduced grain yield. Root architectural plasticity traits were related to 13 (Aus 276 population) and 21 (Kali Aus population) genetic loci, which were contributed by both the traditional donor parents and MTU1010. Three genomic loci were identified as hot spots with multiple root architectural plasticity traits in both populations, and one locus for both root architectural plasticity and grain yield was detected. These results suggest an important role of root architectural plasticity across future rice crop conditions and provide a starting point for marker-assisted selection for plasticity. PMID:27342311

  2. Efficient Two-Dimensional-FFT Program

    NASA Technical Reports Server (NTRS)

    Miko, J.

    1992-01-01

    Program computes 64 X 64-point fast Fourier transform in less than 17 microseconds. Optimized 64 X 64 Point Two-Dimensional Fast Fourier Transform combines performance of real- and complex-valued one-dimensional fast Fourier transforms (FFT's) to execute two-dimensional FFT and coefficients of power spectrum. Coefficients used in many applications, including analyzing spectra, convolution, digital filtering, processing images, and compressing data. Source code written in C, 8086 Assembly, and Texas Instruments TMS320C30 Assembly languages.
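
    The row-column method such a program exploits is easy to sketch: a 2-D DFT factors into 1-D transforms over the rows and then over the columns, so any fast 1-D FFT routine can be reused. Below is a minimal Python illustration (not the program's C/Assembly source); the naive O(N^2) DFT stands in for an optimized FFT kernel, and the 4 × 4 test grid is made up.

```python
import cmath

def dft(x):
    """Naive 1-D DFT, O(N^2); a fast FFT kernel would replace this."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft2(grid):
    """2-D DFT by the row-column method: transform rows, then columns."""
    rows = [dft(r) for r in grid]         # 1-D transform of every row
    cols = [dft(c) for c in zip(*rows)]   # then of every column
    return [list(r) for r in zip(*cols)]  # transpose back to row-major

grid = [[1, 2, 3, 4], [4, 3, 2, 1], [1, 1, 1, 1], [0, 1, 0, 1]]
F = dft2(grid)
power = [[abs(v) ** 2 for v in row] for row in F]   # power-spectrum coefficients
print(power[0][0])   # DC term: (sum of all 16 samples)^2 = 26^2 = 676.0
```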

  3. A software architecture for adaptive modular sensing systems.

    PubMed

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614

  4. A Software Architecture for Adaptive Modular Sensing Systems

    PubMed Central

    Lyle, Andrew C.; Naish, Michael D.

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614

  5. An integrated architecture of adaptive neural network control for dynamic systems

    SciTech Connect

    Ke, Liu; Tokar, R.; Mcvey, B.

    1994-07-01

    In this study, an integrated neural network control architecture for nonlinear dynamic systems is presented. Much recent work in the neural network control field uses no error feedback as a control input, which raises an adaptation problem. The integrated architecture in this paper combines feedforward control and error-feedback adaptive control using neural networks. The paper reveals the different internal functionality of these two kinds of neural network controllers for certain input styles, e.g., state feedback and error feedback. Feedforward neural network controllers with state feedback establish fixed control mappings which cannot adapt when model uncertainties are present. With error feedback, neural network controllers learn the slopes or gains with respect to the error feedback, yielding error-driven adaptive control systems. The results demonstrate that the two kinds of control scheme can be combined to realize their individual advantages. Testing with disturbances added to the plant shows good tracking and adaptation.
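
    The feedforward-plus-error-feedback combination described above can be illustrated with a deliberately simple scalar sketch (this is not the authors' neural architecture; the plant, gains and rates are invented): a fixed feedforward term computed from a nominal model tracks the reference only approximately, and an error-driven adaptive term absorbs the model uncertainty.

```python
# Scalar plant y = a_true * u, controlled by a feedforward term based on a
# wrong nominal gain plus an error-driven adaptive correction (illustrative only).
a_true, a_nominal = 2.0, 1.5   # model uncertainty: actual vs assumed plant gain
reference = 1.0
adaptive = 0.0                 # error-driven correction term
eta = 0.3                      # adaptation rate

for _ in range(200):
    u = reference / a_nominal + adaptive   # feedforward + adaptive feedback
    y = a_true * u                         # plant response
    error = reference - y
    adaptive += eta * error                # error feedback drives adaptation

print(round(y, 6))   # converges to the reference despite the wrong nominal gain
```

A feedforward term alone would settle at y = 2.0/1.5 ≈ 1.33; the error-driven term pulls the output onto the reference.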

  6. DC-offset effect cancelation method using mean-padding FFT for automotive UWB radar sensor

    NASA Astrophysics Data System (ADS)

    Ju, Yeonghwan; Kim, Sang-Dong; Lee, Jong-Hun

    2011-06-01

    To improve road safety and realize intelligent transportation, Ultra-Wideband (UWB) radar sensors in the 24 GHz domain are currently under development for many automotive applications. An automotive UWB radar sensor must be small, low-power, and inexpensive. By employing a direct-conversion receiver, an automotive UWB radar sensor can meet these size and cost reduction requirements. We developed a UWB radar sensor for automotive applications whose receiver uses a direct-conversion architecture. Direct conversion, however, poses a dc-offset problem. In automotive UWB radar, the Doppler frequency is used to extract velocity, and the Doppler frequency of a vehicle can be detected using a zero-padding Fast Fourier Transform (FFT). However, a zero-padding FFT error occurs due to the dc offset in a sensor with a direct-conversion receiver, and this error corrupts the velocity measurement. In this paper we propose a mean-padding method to reduce the zero-padding FFT error caused by dc offset, and we verify the proposed method with computer simulation and with experiments using the developed sensor. We present simulation and experimental results comparing the velocity measurement probability of the zero-padding FFT and the mean-padding FFT. The proposed algorithm was simulated using Matlab and tested with the designed automotive UWB radar sensor in a real road environment; the proposed method improved the velocity measurement probability.
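
    The core idea of mean-padding is easy to demonstrate: padding a dc-offset signal with zeros creates a step whose spectral leakage lands in the Doppler bins, whereas padding with the signal mean does not. A toy sketch with a naive DFT and invented numbers (not the authors' implementation):

```python
import cmath

def dft(x):
    """Naive DFT, standing in for the radar's FFT stage."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Eight samples of a pure dc offset (a moving target would add a Doppler tone
# on top); ideally all spectral energy stays in bin 0 after padding to 16 points.
x = [0.5] * 8
zero_pad = x + [0.0] * 8               # conventional zero padding: adds a step
mean_pad = x + [sum(x) / len(x)] * 8   # mean padding: no step, no leakage

leak_zero = abs(dft(zero_pad)[1])   # spurious energy beside the dc bin
leak_mean = abs(dft(mean_pad)[1])
print(leak_zero > 1.0, leak_mean < 1e-9)   # True True
```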

  7. Adaptive kinetic-fluid solvers for heterogeneous computing architectures

    NASA Astrophysics Data System (ADS)

    Zabelok, Sergey; Arslanbekov, Robert; Kolobov, Vladimir

    2015-12-01

    We show the feasibility and benefits of porting an adaptive multi-scale kinetic-fluid code to CPU-GPU systems. Challenges are due to the irregular data access of the adaptive Cartesian mesh, the vast difference in computational cost between kinetic and fluid cells, and the desire to load all CPUs and GPUs evenly during grid adaptation and algorithm refinement. Our Unified Flow Solver (UFS) combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. Using GPUs enables hybrid simulations of mixed rarefied-continuum flows with a million Boltzmann cells, each having a 24 × 24 × 24 velocity mesh. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using the discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) solver, and a mesoscopic solver based on the Lattice Boltzmann Method (LBM), all using the adaptive Cartesian mesh. Double-digit speedups on a single GPU and good scaling on multiple GPUs have been demonstrated.
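
    The cell-by-cell solver selection can be caricatured in a few lines. The sketch below flags a cell as kinetic when a gradient-length Knudsen number exceeds a threshold; the criterion form, threshold, and density profile are all invented stand-ins for UFS's actual continuum-breakdown tests.

```python
KN_THRESHOLD = 0.05     # hypothetical continuum-breakdown threshold
MEAN_FREE_PATH = 0.01
DX = 0.1

density = [1.0, 0.98, 0.9, 0.5, 0.1, 0.09, 0.09]   # shock-like 1-D profile

def solver_for(i):
    """Pick the solver for cell i from a gradient-length Knudsen number."""
    lo, hi = max(i - 1, 0), min(i + 1, len(density) - 1)
    grad = abs(density[hi] - density[lo]) / (2 * DX)  # finite-difference slope
    kn = MEAN_FREE_PATH * grad / density[i]           # Kn = lambda * |grad n| / n
    return "kinetic" if kn > KN_THRESHOLD else "fluid"

tags = [solver_for(i) for i in range(len(density))]
print(tags)   # the expensive kinetic solver clusters on the steep region
```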

  8. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    SciTech Connect

    Disney, Adam; Reynolds, John

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  9. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.

  10. CZT vs FFT: Flexibility vs Speed

    SciTech Connect

    S. Sirin

    2003-10-01

    Bluestein's Fast Fourier Transform (FFT), commonly called the Chirp-Z Transform (CZT), is a little-known algorithm that offers engineers a high-resolution FFT combined with the ability to specify bandwidth. In the field of digital signal processing, engineers are always challenged to detect tones, frequencies, signatures, or some telltale sign that signifies a condition that must be indicated, ignored, or controlled. One of these challenges is to detect specific frequencies, for instance when looking for tones from telephones or detecting 60-Hz noise on power lines. The Goertzel algorithm, described in Embedded Systems Programming, September 2002, offered a powerful tool for finding specific frequencies faster than the FFT. Another challenge involves analyzing a range of frequencies, such as recording frequency response measurements, matching voice patterns, or displaying spectrum information on the face of an amateur radio. To meet this challenge most engineers use the well-known FFT. The CZT gives the engineer the flexibility to specify bandwidth and outputs real and imaginary frequency components from which the magnitude and phase can be computed. A description of the CZT and a discussion of the advantages and disadvantages of the CZT versus the FFT and Goertzel algorithms are followed by situations in which the CZT shines. The reader will find that the CZT is very useful, but that flexibility has a price.
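
    The CZT's defining freedom, placing M output points along a contour z_k = A·W^(-k), is what lets it zoom into a band. The sketch below evaluates the transform directly in O(N·M) (Bluestein's contribution is computing the same result with FFT-sized convolutions); the signal and band edges are invented for illustration.

```python
import cmath

def czt(x, m, w, a):
    """Chirp-z transform by direct evaluation: X_k = sum_n x[n] * z_k**-n,
    with z_k = a * w**-k.  O(N*M); Bluestein's algorithm reaches the same
    result via FFT-speed convolutions."""
    return [sum(xn * (a * w ** -k) ** -n for n, xn in enumerate(x))
            for k in range(m)]

# A tone at 0.30625 cycles/sample falls between 64-point FFT bins, but the
# CZT can spend all 32 output points on the narrow band 0.29..0.32.
N, f0 = 64, 0.30625
x = [cmath.exp(2j * cmath.pi * f0 * n) for n in range(N)]

f_lo, f_hi, M = 0.29, 0.32, 32
a = cmath.exp(2j * cmath.pi * f_lo)                # contour start: band edge
w = cmath.exp(-2j * cmath.pi * (f_hi - f_lo) / M)  # ratio between output points
X = czt(x, M, w, a)
peak = max(range(M), key=lambda k: abs(X[k]))
peak_freq = f_lo + peak * (f_hi - f_lo) / M
print(round(peak_freq, 5))   # lands within one fine-grained bin of f0
```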

  11. Control architecture for an adaptive electronically steerable flash lidar and associated instruments

    NASA Astrophysics Data System (ADS)

    Ruppert, Lyle; Craner, Jeremy; Harris, Timothy

    2014-09-01

    An Electronically Steerable Flash Lidar (ESFL), developed by Ball Aerospace & Technologies Corporation, allows real-time adaptive control of configuration and data-collection strategy based on recent or concurrent observations and changing situations. This paper reviews, at a high level, some of the algorithms and control architecture built into ESFL. Using ESFL as an example, it also discusses the merits and utility of such adaptable instruments in Earth-system studies.

  12. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
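
    The top-down/bottom-up combination itself is compact enough to sketch. In the version below the switch point is a fixed frontier-size fraction alpha, a crude stand-in for the paper's regression-based runtime prediction; the graph and threshold are invented.

```python
def hybrid_bfs(adj, source, alpha=0.1):
    """Direction-optimizing BFS: top-down while the frontier is small,
    bottom-up once it exceeds alpha * |V| (a fixed stand-in for a
    regression-predicted switch point)."""
    n = len(adj)
    dist = [-1] * n
    dist[source] = 0
    frontier, level = {source}, 0
    while frontier:
        level += 1
        if len(frontier) <= alpha * n:      # top-down: the frontier pushes out
            nxt = {v for u in frontier for v in adj[u] if dist[v] == -1}
        else:                               # bottom-up: unvisited nodes pull,
            nxt = {v for v in range(n)      # searching for a frontier parent
                   if dist[v] == -1 and any(u in frontier for u in adj[v])}
        for v in nxt:
            dist[v] = level
        frontier = nxt
    return dist

# Hub-and-spoke graph: the frontier explodes after one hop, so the second
# tier of nodes is discovered by the bottom-up phase.
n = 50
adj = [[] for _ in range(n)]
for v in range(1, 41):               # hub 0 fans out to nodes 1..40
    adj[0].append(v); adj[v].append(0)
for v in range(41, 50):              # node 1 carries a second tier, 41..49
    adj[1].append(v); adj[v].append(1)
dist = hybrid_bfs(adj, 0)
print(dist.count(1), dist.count(2))   # 40 9
```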

  13. Adaptation of pancreatic islet cyto-architecture during development

    NASA Astrophysics Data System (ADS)

    Striegel, Deborah A.; Hara, Manami; Periwal, Vipul

    2016-04-01

    Plasma glucose in mammals is regulated by hormones secreted by the islets of Langerhans embedded in the exocrine pancreas. Islets consist of endocrine cells, primarily α, β, and δ cells, which secrete glucagon, insulin, and somatostatin, respectively. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Varying demands and available nutrients during development produce changes in the local connectivity of β cells in an islet. We showed in earlier work that graph theory provides a framework for the quantification of the seemingly stochastic cyto-architecture of β cells in an islet. To quantify the dynamics of endocrine connectivity during development requires a framework for characterizing changes in the probability distribution on the space of possible graphs, essentially a Fokker-Planck formalism on graphs. With large-scale imaging data for hundreds of thousands of islets containing millions of cells from human specimens, we show that this dynamics can be determined quantitatively. Requiring that rearrangement and cell addition processes match the observed dynamic developmental changes in quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that there is a transient shift in preferred connectivity for β cells between 1-35 weeks and 12-24 months.

  14. Adaptation of pancreatic islet cyto-architecture during development

    NASA Astrophysics Data System (ADS)

    Striegel, Deborah A.; Hara, Manami; Periwal, Vipul

    2016-04-01

    Plasma glucose in mammals is regulated by hormones secreted by the islets of Langerhans embedded in the exocrine pancreas. Islets consist of endocrine cells, primarily α, β, and δ cells, which secrete glucagon, insulin, and somatostatin, respectively. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Varying demands and available nutrients during development produce changes in the local connectivity of β cells in an islet. We showed in earlier work that graph theory provides a framework for the quantification of the seemingly stochastic cyto-architecture of β cells in an islet. To quantify the dynamics of endocrine connectivity during development requires a framework for characterizing changes in the probability distribution on the space of possible graphs, essentially a Fokker-Planck formalism on graphs. With large-scale imaging data for hundreds of thousands of islets containing millions of cells from human specimens, we show that this dynamics can be determined quantitatively. Requiring that rearrangement and cell addition processes match the observed dynamic developmental changes in quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that there is a transient shift in preferred connectivity for β cells between 1–35 weeks and 12–24 months.

  15. Adaptive changes in the kinetochore architecture facilitate proper spindle assembly

    PubMed Central

    Magidson, Valentin; Paul, Raja; Yang, Nachen; Ault, Jeffrey G.; O’Connell, Christopher B.; Tikhonenko, Irina; McEwen, Bruce F.; Mogilner, Alex; Khodjakov, Alexey

    2015-01-01

    Mitotic spindle formation relies on the stochastic capture of microtubules at kinetochores. Kinetochore architecture affects the efficiency and fidelity of this process with large kinetochores expected to accelerate assembly at the expense of accuracy, and smaller kinetochores to suppress errors at the expense of efficiency. We demonstrate that upon mitotic entry, kinetochores in cultured human cells form large crescents that subsequently compact into discrete structures on opposite sides of the centromere. This compaction occurs only after the formation of end-on microtubule attachments. Live-cell microscopy reveals that centromere rotation mediated by lateral kinetochore-microtubule interactions precedes formation of end-on attachments and kinetochore compaction. Computational analyses of kinetochore expansion-compaction in the context of lateral interactions correctly predict experimentally-observed spindle assembly times with reasonable error rates. The computational model suggests that larger kinetochores reduce both errors and assembly times, which can explain the robustness of spindle assembly and the functional significance of enlarged kinetochores. PMID:26258631

  16. Dimensions of Usability: Cougaar, Aglets and Adaptive Agent Architecture (AAA)

    SciTech Connect

    Haack, Jereme N.; Cowell, Andrew J.; Gorton, Ian

    2004-06-20

    Research and development organizations are constantly evaluating new technologies in order to implement the next generation of advanced applications. At Pacific Northwest National Laboratory, agent technologies are perceived as an approach that can provide a competitive advantage in the construction of highly sophisticated software systems in a range of application areas. An important factor in selecting a successful agent architecture is the level of support it provides the developer with respect to documentation, examples of use, integration into current workflows, and community support. Without such assistance, the developer must invest more effort into learning instead of applying the technology. Like many other applied research organizations, our staff are not dedicated to a single project and must acquire new skills as required, underlining the importance of being able to become proficient quickly. A project was initiated to evaluate three candidate agent toolkits across the dimensions of support they provide. This paper reports on the outcomes of this evaluation and provides insights into the agent technologies evaluated.

  17. The genetic architecture of adaptations to high altitude in Ethiopia.

    PubMed

    Alkorta-Aranburu, Gorka; Beall, Cynthia M; Witonsky, David B; Gebremedhin, Amha; Pritchard, Jonathan K; Di Rienzo, Anna

    2012-01-01

    Although hypoxia is a major stress on physiological processes, several human populations have survived for millennia at high altitudes, suggesting that they have adapted to hypoxic conditions. This hypothesis was recently corroborated by studies of Tibetan highlanders, which showed that polymorphisms in candidate genes show signatures of natural selection as well as well-replicated association signals for variation in hemoglobin levels. We extended genomic analysis to two Ethiopian ethnic groups: Amhara and Oromo. For each ethnic group, we sampled low and high altitude residents, thus allowing genetic and phenotypic comparisons across altitudes and across ethnic groups. Genome-wide SNP genotype data were collected in these samples by using Illumina arrays. We find that variants associated with hemoglobin variation among Tibetans or other variants at the same loci do not influence the trait in Ethiopians. However, in the Amhara, SNP rs10803083 is associated with hemoglobin levels at genome-wide levels of significance. No significant genotype association was observed for oxygen saturation levels in either ethnic group. Approaches based on allele frequency divergence did not detect outliers in candidate hypoxia genes, but the most differentiated variants between high- and lowlanders have a clear role in pathogen defense. Interestingly, a significant excess of allele frequency divergence was consistently detected for genes involved in cell cycle control and DNA damage and repair, thus pointing to new pathways for high altitude adaptations. Finally, a comparison of CpG methylation levels between high- and lowlanders found several significant signals at individual genes in the Oromo.

  18. A hybrid behavioural rule of adaptation and drift explains the emergent architecture of antagonistic networks

    PubMed Central

    Nuwagaba, S.; Zhang, F.; Hui, C.

    2015-01-01

    Ecological processes that can realistically account for network architectures are central to our understanding of how species assemble and function in ecosystems. Consumer species are constantly selecting and adjusting which resource species are to be exploited in an antagonistic network. Here we incorporate a hybrid behavioural rule of adaptive interaction switching and random drift into a bipartite network model. Predictions are insensitive to the model parameters and the initial network structures, and agree extremely well with the observed levels of modularity, nestedness and node-degree distributions for 61 real networks. Evolutionary and community assemblage histories only indirectly affect network structure by defining the size and complexity of ecological networks, whereas adaptive interaction switching and random drift carve out the details of network architecture at the faster ecological time scale. The hybrid behavioural rule of both adaptation and drift could well be the key processes for structure emergence in real ecological networks. PMID:25925104

  19. A generic architecture for an adaptive, interoperable and intelligent type 2 diabetes mellitus care system.

    PubMed

    Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Schulz, Stefan

    2015-01-01

    Chronic diseases such as Type 2 Diabetes Mellitus (T2DM) constitute a major burden on the global health economy. T2DM care management requires a multi-disciplinary and multi-organizational approach. Because of different languages and terminologies, education, experiences, skills, etc., such an approach poses a special interoperability challenge. The solution is a flexible, scalable, business-controlled, adaptive, knowledge-based, intelligent system following a systems-oriented, architecture-centric, ontology-based and policy-driven approach. The architecture of real systems is described using the basics and principles of the Generic Component Model (GCM). For representing the functional aspects of a system, the Business Process Modeling Notation (BPMN) is used. The resulting system architecture is presented using a GCM graphical notation, class diagrams and BPMN diagrams. The architecture-centric approach respects the compositional nature of the real-world system and its functionalities, guarantees coherence, and enables correct inferences. The level of generality provided in this paper facilitates use-case-specific adaptations of the system. In this way, intelligent, adaptive and interoperable T2DM care systems can be derived from the presented model, as demonstrated in another publication.

  20. A generic architecture for an adaptive, interoperable and intelligent type 2 diabetes mellitus care system.

    PubMed

    Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Schulz, Stefan

    2015-01-01

    Chronic diseases such as Type 2 Diabetes Mellitus (T2DM) constitute a major burden on the global health economy. T2DM care management requires a multi-disciplinary and multi-organizational approach. Because of different languages and terminologies, education, experiences, skills, etc., such an approach poses a special interoperability challenge. The solution is a flexible, scalable, business-controlled, adaptive, knowledge-based, intelligent system following a systems-oriented, architecture-centric, ontology-based and policy-driven approach. The architecture of real systems is described using the basics and principles of the Generic Component Model (GCM). For representing the functional aspects of a system, the Business Process Modeling Notation (BPMN) is used. The resulting system architecture is presented using a GCM graphical notation, class diagrams and BPMN diagrams. The architecture-centric approach respects the compositional nature of the real-world system and its functionalities, guarantees coherence, and enables correct inferences. The level of generality provided in this paper facilitates use-case-specific adaptations of the system. In this way, intelligent, adaptive and interoperable T2DM care systems can be derived from the presented model, as demonstrated in another publication. PMID:25980858

  1. Fast diffraction computation algorithms based on FFT

    NASA Astrophysics Data System (ADS)

    Logofatu, Petre Catalin; Nascov, Victor; Apostol, Dan

    2010-11-01

    The discovery of the Fast Fourier Transform (FFT) algorithm by Cooley and Tukey meant for diffraction computation what the invention of computers meant for computation in general. The computation time reduction is more significant for large input data, but generally the FFT reduces the computation time by several orders of magnitude. This was the beginning of an entire revolution in optical signal processing and resulted in an abundance of fast algorithms for diffraction computation in a variety of situations. The property that allowed the creation of these fast algorithms is that, as it turns out, most diffraction formulae contain at their core one or more Fourier transforms which may be rapidly calculated using the FFT. The key to discovering a new fast algorithm is to reformulate the diffraction formulae so as to identify and isolate the Fourier transforms they contain. In this way, the fast scaled transformation, the fast Fresnel transformation and the fast Rayleigh-Sommerfeld transform were designed. Remarkable improvements were the generalization of the DFT to the scaled DFT, which allowed freedom in choosing the dimensions of the output window for Fraunhofer-Fourier and Fresnel diffraction; the mathematical concept of linearized convolution, which thwarts the circular character of the discrete Fourier transform and allows the use of the FFT; and, last but not least, the linearized discrete scaled convolution, a new concept for which we claim priority.
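
    The linearized-convolution idea mentioned above deserves a concrete sketch: the DFT's native convolution is circular, but zero-padding both sequences to at least N+M-1 points makes the circular result coincide with the linear one, which is how diffraction kernels can be applied with FFTs. A minimal illustration, with a naive DFT standing in for the FFT:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def linear_convolve(x, h):
    """Linear (not circular) convolution via the DFT: pad both inputs to
    len(x) + len(h) - 1 points so wrap-around cannot contaminate the result."""
    L = len(x) + len(h) - 1
    X = dft(list(x) + [0.0] * (L - len(x)))
    H = dft(list(h) + [0.0] * (L - len(h)))
    y = idft([a * b for a, b in zip(X, H)])
    return [round(v.real, 10) for v in y]

print(linear_convolve([1, 2, 3], [1, 1]))   # [1.0, 3.0, 5.0, 3.0]
```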

  2. Conservatism and novelty in the genetic architecture of adaptation in Heliconius butterflies.

    PubMed

    Huber, B; Whibley, A; Poul, Y L; Navarro, N; Martin, A; Baxter, S; Shah, A; Gilles, B; Wirth, T; McMillan, W O; Joron, M

    2015-05-01

    Understanding the genetic architecture of adaptive traits has been at the centre of modern evolutionary biology since Fisher; however, evaluating how the genetic architecture of ecologically important traits influences their diversification has been hampered by the scarcity of empirical data. Now, high-throughput genomics facilitates the detailed exploration of variation in the genome-to-phenotype map among closely related taxa. Here, we investigate the evolution of wing pattern diversity in Heliconius, a clade of neotropical butterflies that have undergone an adaptive radiation for wing-pattern mimicry and are influenced by distinct selection regimes. Using crosses between natural wing-pattern variants, we used genome-wide restriction site-associated DNA (RAD) genotyping, traditional linkage mapping and multivariate image analysis to study the evolution of the architecture of adaptive variation in two closely related species: Heliconius hecale and H. ismenius. We implemented a new morphometric procedure for the analysis of whole-wing pattern variation, which allows visualising spatial heatmaps of genotype-to-phenotype association for each quantitative trait locus separately. We used the H. melpomene reference genome to fine-map variation for each major wing-patterning region uncovered, evaluated the role of candidate genes and compared genetic architectures across the genus. Our results show that, although the loci responding to mimicry selection are highly conserved between species, their effect size and phenotypic action vary throughout the clade. Multilocus architecture is ancestral and maintained across species under directional selection, whereas the single-locus (supergene) inheritance controlling polymorphism in H. numata appears to have evolved only once. Nevertheless, the conservatism in the wing-patterning toolkit found throughout the genus does not appear to constrain phenotypic evolution towards local adaptive optima. PMID:25806542

  3. Conservatism and novelty in the genetic architecture of adaptation in Heliconius butterflies

    PubMed Central

    Huber, B; Whibley, A; Poul, Y L; Navarro, N; Martin, A; Baxter, S; Shah, A; Gilles, B; Wirth, T; McMillan, W O; Joron, M

    2015-01-01

    Understanding the genetic architecture of adaptive traits has been at the centre of modern evolutionary biology since Fisher; however, evaluating how the genetic architecture of ecologically important traits influences their diversification has been hampered by the scarcity of empirical data. Now, high-throughput genomics facilitates the detailed exploration of variation in the genome-to-phenotype map among closely related taxa. Here, we investigate the evolution of wing pattern diversity in Heliconius, a clade of neotropical butterflies that have undergone an adaptive radiation for wing-pattern mimicry and are influenced by distinct selection regimes. Using crosses between natural wing-pattern variants, we used genome-wide restriction site-associated DNA (RAD) genotyping, traditional linkage mapping and multivariate image analysis to study the evolution of the architecture of adaptive variation in two closely related species: Heliconius hecale and H. ismenius. We implemented a new morphometric procedure for the analysis of whole-wing pattern variation, which allows visualising spatial heatmaps of genotype-to-phenotype association for each quantitative trait locus separately. We used the H. melpomene reference genome to fine-map variation for each major wing-patterning region uncovered, evaluated the role of candidate genes and compared genetic architectures across the genus. Our results show that, although the loci responding to mimicry selection are highly conserved between species, their effect size and phenotypic action vary throughout the clade. Multilocus architecture is ancestral and maintained across species under directional selection, whereas the single-locus (supergene) inheritance controlling polymorphism in H. numata appears to have evolved only once. Nevertheless, the conservatism in the wing-patterning toolkit found throughout the genus does not appear to constrain phenotypic evolution towards local adaptive optima. PMID:25806542

  4. Conservatism and novelty in the genetic architecture of adaptation in Heliconius butterflies.

    PubMed

    Huber, B; Whibley, A; Poul, Y L; Navarro, N; Martin, A; Baxter, S; Shah, A; Gilles, B; Wirth, T; McMillan, W O; Joron, M

    2015-05-01

    Understanding the genetic architecture of adaptive traits has been at the centre of modern evolutionary biology since Fisher; however, evaluating how the genetic architecture of ecologically important traits influences their diversification has been hampered by the scarcity of empirical data. Now, high-throughput genomics facilitates the detailed exploration of variation in the genome-to-phenotype map among closely related taxa. Here, we investigate the evolution of wing pattern diversity in Heliconius, a clade of neotropical butterflies that have undergone an adaptive radiation for wing-pattern mimicry and are influenced by distinct selection regimes. Using crosses between natural wing-pattern variants, we used genome-wide restriction site-associated DNA (RAD) genotyping, traditional linkage mapping and multivariate image analysis to study the evolution of the architecture of adaptive variation in two closely related species: Heliconius hecale and H. ismenius. We implemented a new morphometric procedure for the analysis of whole-wing pattern variation, which allows visualising spatial heatmaps of genotype-to-phenotype association for each quantitative trait locus separately. We used the H. melpomene reference genome to fine-map variation for each major wing-patterning region uncovered, evaluated the role of candidate genes and compared genetic architectures across the genus. Our results show that, although the loci responding to mimicry selection are highly conserved between species, their effect size and phenotypic action vary throughout the clade. Multilocus architecture is ancestral and maintained across species under directional selection, whereas the single-locus (supergene) inheritance controlling polymorphism in H. numata appears to have evolved only once. Nevertheless, the conservatism in the wing-patterning toolkit found throughout the genus does not appear to constrain phenotypic evolution towards local adaptive optima.

  5. Spatially constrained adaptive rewiring in cortical networks creates spatially modular small world architectures.

    PubMed

    Jarman, Nicholas; Trengove, Chris; Steur, Erik; Tyukin, Ivan; van Leeuwen, Cees

    2014-12-01

    A modular small-world topology in functional and anatomical networks of the cortex is eminently suitable as an information processing architecture. This structure was shown in model studies to arise adaptively; it emerges through rewiring of network connections according to patterns of synchrony in ongoing oscillatory neural activity. However, in order to improve the applicability of such models to the cortex, the previously neglected spatial characteristics of cortical connectivity need to be respected. For this purpose we consider networks endowed with a metric by embedding them into a physical space. We provide an adaptive rewiring model with a spatial distance function and a corresponding spatially local rewiring bias. The spatially constrained adaptive rewiring principle is able to steer the evolving network topology to small-world status, even more consistently so than without spatial constraints. Locally biased adaptive rewiring results in a spatial layout of the connectivity structure, in which topologically segregated modules correspond to spatially segregated regions, and these regions are linked by long-range connections. The principle of locally biased adaptive rewiring, thus, may explain both the topological connectivity structure and spatial distribution of connections between neuronal units in a large-scale cortical architecture.
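
    The locally biased rewiring rule described in this abstract can be sketched as a toy model: drop a node's least-synchronous connection and reattach to the candidate that maximizes synchrony weighted by spatial proximity. This numpy sketch is our own illustration, not the authors' implementation; the Gaussian distance bias and the synchrony matrix are assumptions.

    ```python
    import numpy as np

    def rewire_step(adj, sync, pos, rng, sigma=0.3):
        """One spatially biased adaptive rewiring step (illustrative toy model).

        adj  : (n, n) symmetric 0/1 adjacency matrix, modified in place
        sync : (n, n) pairwise synchrony (higher = more in sync)
        pos  : (n, 2) node coordinates; sigma sets the spatial locality of the bias
        """
        n = adj.shape[0]
        i = rng.integers(n)
        nbrs = np.flatnonzero(adj[i])
        non_nbrs = np.flatnonzero((adj[i] == 0) & (np.arange(n) != i))
        if len(nbrs) == 0 or len(non_nbrs) == 0:
            return adj
        # Drop the least synchronous neighbour ...
        drop = nbrs[np.argmin(sync[i, nbrs])]
        # ... and attach to the candidate maximising synchrony weighted by proximity.
        dist = np.linalg.norm(pos[non_nbrs] - pos[i], axis=1)
        score = sync[i, non_nbrs] * np.exp(-dist**2 / (2 * sigma**2))
        add = non_nbrs[np.argmax(score)]
        adj[i, drop] = adj[drop, i] = 0
        adj[i, add] = adj[add, i] = 1
        return adj
    ```

    Each step preserves the total number of edges and the rewired node's degree, so repeated application reshapes topology without growing or shrinking the network.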

  6. FFT-local gravimetric geoid computation

    NASA Technical Reports Server (NTRS)

    Nagy, Dezso; Fury, Rudolf J.

    1989-01-01

    Model computations show that changes of sampling interval introduce only 0.3 cm changes, whereas zero padding provides an improvement of more than 5 cm in the fast Fourier transform (FFT) generated geoid. For the Global Positioning System (GPS) survey of Franklin County, Ohio, the parameters selected as a result of the model computations allow a large reduction in local data requirements while still retaining cm-level accuracy when tapering and padding are applied. The results are shown in tables.
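
    The improvement from zero padding reflects a general property of FFT-based convolution: without padding, the transform computes a circular convolution, which wraps data around and biases values near the edges of the region. A minimal numpy sketch (a generic illustration, not the geoid computation itself) shows the difference:

    ```python
    import numpy as np

    def fft_convolve(signal, kernel, pad=True):
        """Convolve via FFT. With pad=True the transforms are zero padded to the
        full linear-convolution length, suppressing the wrap-around (circular
        convolution) artefacts that otherwise bias the result near the edges."""
        if pad:
            n = len(signal) + len(kernel) - 1   # full linear convolution length
        else:
            n = max(len(signal), len(kernel))   # circular convolution
        out = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)
        return out[:len(signal)]
    ```

    With padding, the result matches direct linear convolution exactly; without it, the first samples are contaminated by data wrapped around from the far end.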

  7. Predicting neutron diffusion eigenvalues with a query-based adaptive neural architecture.

    PubMed

    Lysenko, M G; Wong, H I; Maldonado, G I

    1999-01-01

    A query-based approach for adaptively retraining and restructuring a two-hidden-layer artificial neural network (ANN) has been developed for the speedy prediction of the fundamental mode eigenvalue of the neutron diffusion equation, a standard nuclear reactor core design calculation that normally requires the iterative solution of a large-scale system of nonlinear partial differential equations (PDEs). The approach developed focuses primarily upon the adaptive selection of training and cross-validation data and on ANN architecture adjustments, with the objective of improving the accuracy and generalization properties of ANN-based neutron diffusion eigenvalue predictions. For illustration, the performance of a "bare bones" feedforward multilayer perceptron (MLP) is upgraded through a variety of techniques; namely, nonrandom initial training set selection, adjoint function input weighting, teacher-student membership and equivalence queries for generation of appropriate training data, and a dynamic node architecture (DNA) implementation. The global methodology is flexible in that it can "wrap around" any specific training algorithm selected for the static calculations (i.e., training iterations with a fixed training set and architecture). Finally, the improvements obtained are carefully contrasted against past works reported in the literature.

  8. Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi

    2013-03-01

    Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications; however, the sensitivity of automated detection of architectural distortion remains a challenge. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure that selects filter parameters according to the thickness of the gland structure. As post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis, and background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index, followed by binarization and labeling. False positives in the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). The true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
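
    As a rough illustration of how a Gabor kernel's parameters can be tied to the thickness of the structure being analyzed, here is a hedged numpy sketch; the parameter mapping (wavelength and width proportional to thickness) is an assumption for illustration, not the rule used in the paper.

    ```python
    import numpy as np

    def gabor_kernel(thickness, theta, size=31):
        """Gabor kernel whose wavelength and envelope width track an estimated
        structure thickness (the proportionality constants are guesses).

        thickness : estimated width (pixels) of the linear structure to match
        theta     : filter orientation in radians
        """
        lam = 2.0 * thickness        # assumed: wavelength ~ twice the thickness
        sigma = 0.5 * lam            # assumed: envelope width ~ half wavelength
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * xr / lam)
    ```

    Convolving an image with a bank of such kernels over a range of `theta` values gives an orientation-selective response; an adaptive scheme would re-derive `lam` and `sigma` locally from a thickness estimate.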

  9. A biomimetic adaptive algorithm and low-power architecture for implantable neural decoders.

    PubMed

    Rapoport, Benjamin I; Wattanapanitch, Woradorn; Penagos, Hector L; Musallam, Sam; Andersen, Richard A; Sarpeshkar, Rahul

    2009-01-01

    Algorithmically and energetically efficient computational architectures that operate in real time are essential for clinically useful neural prosthetic devices. Such devices decode raw neural data to obtain direct control signals for external devices. They can also perform data compression and vastly reduce the bandwidth and consequently power expended in wireless transmission of raw data from implantable brain-machine interfaces. We describe a biomimetic algorithm and micropower analog circuit architecture for decoding neural cell ensemble signals. The decoding algorithm implements a continuous-time artificial neural network, using a bank of adaptive linear filters with kernels that emulate synaptic dynamics. The filters transform neural signal inputs into control-parameter outputs, and can be tuned automatically in an on-line learning process. We provide experimental validation of our system using neural data from thalamic head-direction cells in an awake behaving rat.
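
    The bank-of-adaptive-linear-filters idea can be illustrated with a standard least-mean-squares (LMS) update. This numpy sketch is a simplified digital stand-in (no synaptic kernel dynamics, no analog circuitry) rather than the authors' design; the class and parameter names are our own.

    ```python
    import numpy as np

    class LMSDecoder:
        """Bank of adaptive FIR filters, one per input channel, tuned online.

        Each channel keeps a tap-delay line; the decoded output is the sum of
        all filter outputs, and weights follow the classic LMS rule."""

        def __init__(self, n_inputs, n_taps, mu=0.01):
            self.w = np.zeros((n_inputs, n_taps))    # filter weights
            self.buf = np.zeros((n_inputs, n_taps))  # tap-delay lines
            self.mu = mu                             # learning rate

        def step(self, x, target=None):
            # Shift the newest samples into each channel's delay line.
            self.buf = np.roll(self.buf, 1, axis=1)
            self.buf[:, 0] = x
            y = np.sum(self.w * self.buf)            # decoded control output
            if target is not None:                   # online LMS weight update
                err = target - y
                self.w += self.mu * err * self.buf
            return y
    ```

    Given a target signal that is a fixed linear function of recent inputs, the weights converge to that function, which is the sense in which the filter bank "tunes automatically in an on-line learning process."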

  10. A Biomimetic Adaptive Algorithm and Low-Power Architecture for Implantable Neural Decoders

    PubMed Central

    Rapoport, Benjamin I.; Wattanapanitch, Woradorn; Penagos, Hector L.; Musallam, Sam; Andersen, Richard A.; Sarpeshkar, Rahul

    2010-01-01

    Algorithmically and energetically efficient computational architectures that operate in real time are essential for clinically useful neural prosthetic devices. Such devices decode raw neural data to obtain direct control signals for external devices. They can also perform data compression and vastly reduce the bandwidth and consequently power expended in wireless transmission of raw data from implantable brain-machine interfaces. We describe a biomimetic algorithm and micropower analog circuit architecture for decoding neural cell ensemble signals. The decoding algorithm implements a continuous-time artificial neural network, using a bank of adaptive linear filters with kernels that emulate synaptic dynamics. The filters transform neural signal inputs into control-parameter outputs, and can be tuned automatically in an on-line learning process. We provide experimental validation of our system using neural data from thalamic head-direction cells in an awake behaving rat. PMID:19964345

  11. Genomic architecture of adaptive color pattern divergence and convergence in Heliconius butterflies.

    PubMed

    Supple, Megan A; Hines, Heather M; Dasmahapatra, Kanchon K; Lewis, James J; Nielsen, Dahlia M; Lavoie, Christine; Ray, David A; Salazar, Camilo; McMillan, W Owen; Counterman, Brian A

    2013-08-01

    Identifying the genetic changes driving adaptive variation in natural populations is key to understanding the origins of biodiversity. The mosaic of mimetic wing patterns in Heliconius butterflies makes an excellent system for exploring adaptive variation using next-generation sequencing. In this study, we use a combination of techniques to annotate the genomic interval modulating red color pattern variation, identify a narrow region responsible for adaptive divergence and convergence in Heliconius wing color patterns, and explore the evolutionary history of these adaptive alleles. We use whole genome resequencing from four hybrid zones between divergent color pattern races of Heliconius erato and two hybrid zones of the co-mimic Heliconius melpomene to examine genetic variation across 2.2 Mb of a partial reference sequence. In the intergenic region near optix, the gene previously shown to be responsible for the complex red pattern variation in Heliconius, population genetic analyses identify a shared 65-kb region of divergence that includes several sites perfectly associated with phenotype within each species. This region likely contains multiple cis-regulatory elements that control discrete expression domains of optix. The parallel signatures of genetic differentiation in H. erato and H. melpomene support a shared genetic architecture between the two distantly related co-mimics; however, phylogenetic analysis suggests mimetic patterns in each species evolved independently. Using a combination of next-generation sequencing analyses, we have refined our understanding of the genetic architecture of wing pattern variation in Heliconius and gained important insights into the evolution of novel adaptive phenotypes in natural populations.

  12. Genomic architecture of adaptive color pattern divergence and convergence in Heliconius butterflies

    PubMed Central

    Supple, Megan A.; Hines, Heather M.; Dasmahapatra, Kanchon K.; Lewis, James J.; Nielsen, Dahlia M.; Lavoie, Christine; Ray, David A.; Salazar, Camilo; McMillan, W. Owen; Counterman, Brian A.

    2013-01-01

    Identifying the genetic changes driving adaptive variation in natural populations is key to understanding the origins of biodiversity. The mosaic of mimetic wing patterns in Heliconius butterflies makes an excellent system for exploring adaptive variation using next-generation sequencing. In this study, we use a combination of techniques to annotate the genomic interval modulating red color pattern variation, identify a narrow region responsible for adaptive divergence and convergence in Heliconius wing color patterns, and explore the evolutionary history of these adaptive alleles. We use whole genome resequencing from four hybrid zones between divergent color pattern races of Heliconius erato and two hybrid zones of the co-mimic Heliconius melpomene to examine genetic variation across 2.2 Mb of a partial reference sequence. In the intergenic region near optix, the gene previously shown to be responsible for the complex red pattern variation in Heliconius, population genetic analyses identify a shared 65-kb region of divergence that includes several sites perfectly associated with phenotype within each species. This region likely contains multiple cis-regulatory elements that control discrete expression domains of optix. The parallel signatures of genetic differentiation in H. erato and H. melpomene support a shared genetic architecture between the two distantly related co-mimics; however, phylogenetic analysis suggests mimetic patterns in each species evolved independently. Using a combination of next-generation sequencing analyses, we have refined our understanding of the genetic architecture of wing pattern variation in Heliconius and gained important insights into the evolution of novel adaptive phenotypes in natural populations. PMID:23674305

  13. AdaRTE: adaptable dialogue architecture and runtime engine. A new architecture for health-care dialogue systems.

    PubMed

    Rojas-Barahona, L M; Giorgino, T

    2007-01-01

    Spoken dialogue systems have been increasingly employed to provide ubiquitous automated access via telephone to information and services for the non-Internet-connected public, and they have been applied successfully in the health care context. Nevertheless, speech-based technology is not easy to implement because it requires a considerable development investment. The advent of VoiceXML for voice applications helped reduce the proliferation of incompatible dialogue interpreters, but introduced new complexity. In response to these issues, we designed an architecture for dialogue representation and interpretation, AdaRTE, which allows developers to lay out dialogue interactions through a high-level formalism that offers both declarative and procedural features. AdaRTE's aim is to provide a foundation for deploying complex and adaptable dialogues while allowing the experimentation and incremental adoption of innovative speech technologies. It provides the dynamic behavior of Augmented Transition Networks and enables the generation of different backend formats such as VoiceXML. It is especially targeted at the health care context, where a framework for easy dialogue deployment could lower the barrier to more widespread adoption of dialogue systems. PMID:17911878

  14. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is bench-marked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10(exp 9) ops/sec, was interfaced directly to a three degree of freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microsec.

  15. Experimental demonstration of an adaptive architecture for direct spectral imaging classification.

    PubMed

    Dunlop-Gray, Matthew; Poon, Phillip K; Golish, Dathon; Vera, Esteban; Gehm, Michael E

    2016-08-01

    Spectral imaging is a powerful tool for providing in situ material classification across a spatial scene. Typically, spectral imaging analyses are interested in classification, though often the classification is performed only after reconstruction of the spectral datacube. We present a computational spectral imaging system, the Adaptive Feature-Specific Spectral Imaging Classifier (AFSSI-C), which yields direct classification across the spatial scene without reconstruction of the source datacube. With a dual disperser architecture and a programmable spatial light modulator, the AFSSI-C measures specific projections of the spectral datacube which are generated by an adaptive Bayesian classification and feature design framework. We experimentally demonstrate a multiple-order-of-magnitude improvement in classification accuracy in low signal-to-noise-ratio (SNR) environments when compared to legacy spectral imaging systems.

  16. The Telesupervised Adaptive Ocean Sensor Fleet (TAOSF) Architecture: Coordination of Multiple Oceanic Robot Boats

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Podnar, Gregg W.; Dolan, John M.; Stancliff, Stephen; Lin, Ellie; Hosler, Jeffrey C.; Ames, Troy J.; Higinbotham, John; Moisan, John R.; Moisan, Tiffany A.; Kulczycki, Eric A.

    2008-01-01

    Earth science research must bridge the gap between the atmosphere and the ocean to foster understanding of Earth's climate and ecology. Ocean sensing is typically done with satellites, buoys, and crewed research ships. These systems have limitations: satellites are often blocked by cloud cover, and buoys and ships have limited spatial coverage. This paper describes a multi-robot science exploration software architecture and system called the Telesupervised Adaptive Ocean Sensor Fleet (TAOSF). TAOSF supervises and coordinates a group of robotic boats, the OASIS platforms, to enable in-situ study of phenomena in the ocean/atmosphere interface, as well as on the ocean surface and sub-surface. The OASIS platforms are extended deployment autonomous ocean surface vehicles, whose development is funded separately by the National Oceanic and Atmospheric Administration (NOAA). TAOSF allows a human operator to effectively supervise and coordinate multiple robotic assets using a sliding autonomy control architecture, where the operating mode of the vessels ranges from autonomous control to teleoperated human control. TAOSF increases data-gathering effectiveness and science return while reducing demands on scientists for robotic asset tasking, control, and monitoring. The first field application chosen for TAOSF is the characterization of Harmful Algal Blooms (HABs). We discuss the overall TAOSF architecture, describe field tests conducted under controlled conditions using rhodamine dye as a HAB simulant, present initial results from these tests, and outline the next steps in the development of TAOSF.

  17. Analog circuit design and implementation of an adaptive resonance theory (ART) neural network architecture

    NASA Astrophysics Data System (ADS)

    Ho, Ching S.; Liou, Juin J.; Georgiopoulos, Michael; Heileman, Gregory L.; Christodoulou, Christos G.

    1993-09-01

    This paper presents an analog circuit implementation of an adaptive resonance theory neural network architecture, called the augmented ART-1 neural network (AART1-NN). The AART1-NN is a modification of the popular ART1-NN, developed by Carpenter and Grossberg, and it exhibits the same behavior as the ART1-NN. The AART1-NN is a real-time model, and has the ability to classify an arbitrary set of binary input patterns into different clusters. The AART1-NN model is implemented as a circuit built from analog electronic components such as operational amplifiers, transistors, capacitors, and resistors. The implemented circuit is verified using the PSpice circuit simulator, running on Sun workstations. Results obtained from the PSpice circuit simulation compare favorably with simulation results produced by solving the differential equations numerically. The prototype system developed here can be used as a building block for larger AART1-NN architectures, as well as for other types of ART architectures that involve the AART1-NN model.

  18. Development and Flight Testing of an Adaptable Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.; Taylor, B. Douglas; Brett, Rube R.

    2003-01-01

    Development and testing of an adaptable wireless health-monitoring architecture for a vehicle fleet is presented. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained adaptable expert system. The remote data acquisition unit has an eight-channel programmable digital interface that gives the user discretion in choosing the type of sensors, the number of sensors, and the sampling rate and sampling duration for each sensor. The architecture provides a framework for a tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In the framework, only analysis results are forwarded to the next level, to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear.

  19. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    NASA Astrophysics Data System (ADS)

    Kim, Chul; Rassau, Alex; Lachowicz, Stefan; Lee, Mike Myung-Ok; Eshraghian, Kamran

    2006-12-01

    This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, 32-bit dedicated RISC processor for control, on-chip program/data memory, data frame buffer, along with a direct memory access (DMA) controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  20. Development and Flight Testing of an Adaptive Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Taylor, B. Douglas; Brett, Rube R.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.

    2002-01-01

    Ongoing development and testing of an adaptable vehicle health-monitoring architecture is presented. The architecture is being developed for a fleet of vehicles. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained expert system. The expert system is parameterized, which makes it adaptable: it can be trained to reflect both a user's subjective reasoning and existing quantitative analytic tools. Communication between all levels is done with wireless radio-frequency interfaces. The remote data acquisition unit has an eight-channel programmable digital interface that gives the user discretion in choosing the type of sensors, the number of sensors, and the sampling rate and sampling duration for each sensor. The architecture provides a framework for a tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In the framework, only analysis results are forwarded to the next level, to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear. The flight tests were performed to validate the wireless radio-frequency communication capabilities of the system; the hardware design; command and control; software operation; and data acquisition, storage, and retrieval.

  1. Rice Root Architectural Plasticity Traits and Genetic Regions for Adaptability to Variable Cultivation and Stress Conditions

    PubMed Central

    Sandhu, Nitika; Raman, K. Anitha; Torres, Rolando O.; Audebert, Alain; Dardou, Audrey; Kumar, Arvind; Henry, Amelia

    2016-01-01

    Future rice (Oryza sativa) crops will likely experience a range of growth conditions, and root architectural plasticity will be an important characteristic to confer adaptability across variable environments. In this study, the relationship between root architectural plasticity and adaptability (i.e. yield stability) was evaluated in two traditional × improved rice populations (Aus 276 × MTU1010 and Kali Aus × MTU1010). Forty contrasting genotypes were grown in direct-seeded upland and transplanted lowland conditions with drought and drought + rewatered stress treatments in lysimeter and field studies and a low-phosphorus stress treatment in a Rhizoscope study. Relationships among root architectural plasticity for root dry weight, root length density, and percentage lateral roots with yield stability were identified. Selected genotypes that showed high yield stability also showed a high degree of root plasticity in response to both drought and low phosphorus. The two populations varied in the soil depth effect on root architectural plasticity traits, none of which resulted in reduced grain yield. Root architectural plasticity traits were related to 13 (Aus 276 population) and 21 (Kali Aus population) genetic loci, which were contributed by both the traditional donor parents and MTU1010. Three genomic loci were identified as hot spots with multiple root architectural plasticity traits in both populations, and one locus for both root architectural plasticity and grain yield was detected. These results suggest an important role of root architectural plasticity across future rice crop conditions and provide a starting point for marker-assisted selection for plasticity. PMID:27342311

  2. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    PubMed

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  3. A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism

    PubMed Central

    Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan

    2013-01-01

    Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831

  4. OSCAR a Matlab based optical FFT code

    NASA Astrophysics Data System (ADS)

    Degallaix, Jérôme

    2010-05-01

    Optical simulation software packages are essential tools for designing and commissioning laser interferometers. This article aims to introduce OSCAR, a Matlab-based FFT code, to the experimentalist community. OSCAR (Optical Simulation Containing Ansys Results) is used to simulate the steady-state electric fields in optical cavities with realistic mirrors. The main advantage of OSCAR over other similar packages is the simplicity of its code, which requires only a short time to master. As a result, even for a beginner, it is relatively easy to modify OSCAR to suit other specific purposes. OSCAR includes an extensive manual and numerous detailed examples, such as simulating thermal aberration, calculating cavity eigenmodes and diffraction loss, and simulating flat-beam cavities and three-mirror ring cavities. An example is also provided showing how to run OSCAR on the GPU of a modern graphics card instead of the CPU, making the simulation up to 20 times faster.
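
    The core operation an FFT cavity code iterates is FFT-based free-space propagation of the transverse field between mirrors. The angular-spectrum sketch below is a generic illustration of that operation in numpy, not OSCAR code; grid size, sampling, and wavelength are arbitrary example values.

    ```python
    import numpy as np

    def propagate(field, dx, wavelength, distance):
        """Angular-spectrum propagation of a sampled transverse field.

        Each plane-wave component exp(i(kx x + ky y)) picks up a phase
        exp(i kz d) with kz = sqrt(k^2 - kx^2 - ky^2); evanescent components
        (negative argument) are clamped rather than amplified."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, dx)                  # spatial frequencies
        FX, FY = np.meshgrid(fx, fx)
        k = 2 * np.pi / wavelength
        kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))
    ```

    Because the transfer function is a pure phase for propagating components, total power is conserved, which is a useful sanity check on any FFT beam-propagation step.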

  5. A hardware architecture for a context-adaptive binary arithmetic coder

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania; Cohen, Adam

    2005-03-01

    The H.264 video compression standard uses a context-adaptive binary arithmetic coder (CABAC) as an entropy coding mechanism. While the coder provides excellent compression efficiency, it is computationally demanding. On typical general-purpose processors, it can take up to hundreds of cycles to encode a single bit. In this paper, we propose an architecture for a CABAC encoder that can easily be incorporated into system-on-chip designs for H.264 compression. The CABAC is inherently serial and we divide the problem into several stages to derive a design that can provide a throughput of two cycles per encoded bit. The engine proposed is capable of handling binarization of the syntactical elements and provides the coded bit-stream via a first-in first-out buffer. The design is implemented on an Altera FPGA platform that can run at 50 MHz enabling a 25 Mbps encoding rate.
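
    The two ideas behind CABAC, subdividing a probability interval per bit and adapting the probability model as bits arrive, can be shown with a toy floating-point binary arithmetic coder. Real CABAC uses integer renormalisation and a finite-state context model, so this is only a conceptual sketch with a simple count-based probability estimate.

    ```python
    def arith_encode(bits):
        """Toy adaptive binary arithmetic encoder (floating point, short
        messages only; precision limits this to a few dozen bits)."""
        low, high = 0.0, 1.0
        c0, c1 = 1, 1                       # adaptive counts -> P(bit = 0)
        for b in bits:
            p0 = c0 / (c0 + c1)
            mid = low + (high - low) * p0   # split interval at P(0)
            if b == 0:
                high = mid
                c0 += 1
            else:
                low = mid
                c1 += 1
        return (low + high) / 2             # any number inside the interval

    def arith_decode(code, n):
        """Mirror the encoder's interval subdivision to recover n bits."""
        low, high = 0.0, 1.0
        c0, c1 = 1, 1
        out = []
        for _ in range(n):
            p0 = c0 / (c0 + c1)
            mid = low + (high - low) * p0
            if code < mid:
                out.append(0)
                high = mid
                c0 += 1
            else:
                out.append(1)
                low = mid
                c1 += 1
        return out
    ```

    The serial dependency is visible here: each bit's interval update depends on the previous one, which is exactly why the hardware design above pipelines the coder into stages rather than parallelizing across bits.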

  6. Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture

    NASA Astrophysics Data System (ADS)

    Anandkumar, Janavikulam; Szu, Harold H.

    1995-04-01

    This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in a parallel fashion from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, it is possible to choose the best one by various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of specialized quantizers and encoders, the compression ratio numbers look encouraging, with preliminary results indicating compression ratios ranging from 40:1 to 15:1 at percentage rms distortions ranging from about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or the signal deviations that may occur due to either noise or arrhythmias, only the one wavelet family that correlates best with that particular portion of the signal is chosen. The compression arises mainly because the chosen mother wavelet and its variations match the shape of the ECG and are able to transcribe the source efficiently with few wavelet coefficients. The adaptive template-matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from the fine tuning of the wavelet selection process, which is based on localized constraints such as the shape of the signal and its energy.
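    The QMF sub-band scheme described above can be sketched with the simplest orthonormal wavelet pair, the Haar filters (a hedged numpy illustration: the "ECG-like" test signal and the 5% threshold are invented for the example, and the paper's Daubechies library and parallel template search are not reproduced):

```python
import numpy as np

def haar_analysis(x):
    """One level of Haar QMF sub-band analysis: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass branch
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass branch
    return a, d

def haar_synthesis(a, d):
    """Perfect-reconstruction synthesis bank for the Haar pair."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Synthetic "ECG-like" signal: periodic sharp peaks on a slow baseline
t = np.linspace(0, 1, 512, endpoint=False)
sig = np.exp(-((t % 0.25) - 0.05)**2 / 2e-4) + 0.1 * np.sin(2 * np.pi * t)

a, d = haar_analysis(sig)
kept = np.abs(d) > 0.05 * np.abs(d).max()   # hard threshold on detail coefficients
d_c = np.where(kept, d, 0.0)
rec = haar_synthesis(a, d_c)

# Orthonormality gives Parseval's relation, so retained energy is easy to track
energy_retained = (np.sum(a**2) + np.sum(d_c**2)) / np.sum(sig**2)
```

    Because the basis is orthonormal, the reconstruction error energy equals exactly the energy of the coefficients that were zeroed, which is why thresholding on retained energy controls distortion directly.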

  7. Hamstring Architectural and Functional Adaptations Following Long vs. Short Muscle Length Eccentric Training.

    PubMed

    Guex, Kenny; Degache, Francis; Morisod, Cynthia; Sailly, Matthieu; Millet, Gregoire P

    2016-01-01

    Most common preventive eccentric-based exercises, such as the Nordic hamstring, do not include any hip flexion, so the elongation stress reached is lower than during the late swing phase of sprinting. The aim of this study was to assess the evolution of hamstring architectural (fascicle length and pennation angle) and functional (concentric and eccentric optimum angles and concentric and eccentric peak torques) parameters following a 3-week eccentric resistance program performed at long (LML) vs. short muscle length (SML). Both groups performed eight sessions of 3-5 × 8 slow maximal eccentric knee extensions on an isokinetic dynamometer: the SML group at 0° and the LML group at 80° of hip flexion. Architectural parameters were measured using ultrasound imaging and functional parameters using the isokinetic dynamometer. The fascicle length increased by 4.9% (p < 0.01, medium effect size) in the SML and by 9.3% (p < 0.001, large effect size) in the LML group. The pennation angle did not change (p = 0.83) in the SML and tended to decrease by 0.7° (p = 0.09, small effect size) in the LML group. The concentric optimum angle tended to decrease by 8.8° (p = 0.09, medium effect size) in the SML and by 17.3° (p < 0.01, large effect size) in the LML group. The eccentric optimum angle did not change (p = 0.19, small effect size) in the SML and tended to decrease by 10.7° (p = 0.06, medium effect size) in the LML group. The concentric peak torque did not change in the SML (p = 0.37) and the LML (p = 0.23) groups, whereas eccentric peak torque increased by 12.9% (p < 0.01, small effect size) and 17.9% (p < 0.001, small effect size) in the SML and the LML group, respectively. No group-by-time interaction was found for any parameters. A correlation was found between the training-induced change in fascicle length and the change in concentric optimum angle (r = -0.57, p < 0.01). These results suggest that performing eccentric exercises leads to several architectural and

  8. Hamstring Architectural and Functional Adaptations Following Long vs. Short Muscle Length Eccentric Training

    PubMed Central

    Guex, Kenny; Degache, Francis; Morisod, Cynthia; Sailly, Matthieu; Millet, Gregoire P.

    2016-01-01

    Most common preventive eccentric-based exercises, such as the Nordic hamstring, do not include any hip flexion, so the elongation stress reached is lower than during the late swing phase of sprinting. The aim of this study was to assess the evolution of hamstring architectural (fascicle length and pennation angle) and functional (concentric and eccentric optimum angles and concentric and eccentric peak torques) parameters following a 3-week eccentric resistance program performed at long (LML) vs. short muscle length (SML). Both groups performed eight sessions of 3–5 × 8 slow maximal eccentric knee extensions on an isokinetic dynamometer: the SML group at 0° and the LML group at 80° of hip flexion. Architectural parameters were measured using ultrasound imaging and functional parameters using the isokinetic dynamometer. The fascicle length increased by 4.9% (p < 0.01, medium effect size) in the SML and by 9.3% (p < 0.001, large effect size) in the LML group. The pennation angle did not change (p = 0.83) in the SML and tended to decrease by 0.7° (p = 0.09, small effect size) in the LML group. The concentric optimum angle tended to decrease by 8.8° (p = 0.09, medium effect size) in the SML and by 17.3° (p < 0.01, large effect size) in the LML group. The eccentric optimum angle did not change (p = 0.19, small effect size) in the SML and tended to decrease by 10.7° (p = 0.06, medium effect size) in the LML group. The concentric peak torque did not change in the SML (p = 0.37) and the LML (p = 0.23) groups, whereas eccentric peak torque increased by 12.9% (p < 0.01, small effect size) and 17.9% (p < 0.001, small effect size) in the SML and the LML group, respectively. No group-by-time interaction was found for any parameters. A correlation was found between the training-induced change in fascicle length and the change in concentric optimum angle (r = −0.57, p < 0.01). These results suggest that performing eccentric exercises leads to several architectural and

  9. Helix-length compensation studies reveal the adaptability of the VS ribozyme architecture

    PubMed Central

    Lacroix-Labonté, Julie; Girard, Nicolas; Lemieux, Sébastien; Legault, Pascale

    2012-01-01

    Compensatory mutations in RNA are generally regarded as those that maintain base pairing, and their identification forms the basis of phylogenetic predictions of RNA secondary structure. However, other types of compensatory mutations can provide higher-order structural and evolutionary information. Here, we present a helix-length compensation study for investigating structure–function relationships in RNA. The approach is demonstrated for stem-loop I and stem-loop V of the Neurospora VS ribozyme, which form a kissing–loop interaction important for substrate recognition. To rapidly characterize the substrate specificity (kcat/KM) of several substrate/ribozyme pairs, a procedure was established for simultaneous kinetic characterization of multiple substrates. Several active substrate/ribozyme pairs were identified, indicating the presence of limited substrate promiscuity for stem Ib variants and helix-length compensation between stems Ib and V. 3D models of the I/V interaction were generated that are compatible with the kinetic data. These models further illustrate the adaptability of the VS ribozyme architecture for substrate cleavage and provide global structural information on the I/V kissing–loop interaction. By exploring higher-order compensatory mutations in RNA our approach brings a deeper understanding of the adaptability of RNA structure, while opening new avenues for RNA research. PMID:22086962

  10. Interference fiber ring perimeter with FFT analysis

    NASA Astrophysics Data System (ADS)

    Vasinek, Vladimir; Vitasek, Jan; Hejduk, Stanislav; Bocheza, Jiri; Latal, Jan; Koudelka, Petr

    2011-11-01

    Fiber-optic interferometers are highly sensitive instruments that can measure slight changes such as shape distortion, temperature, and electric field variation. A great advantage is their insensitivity to ageing of the components from which they are composed, because what is evaluated is not the optical signal intensity but the number of interference fringes. To monitor the movement of persons, and eventually to analyze changes in their state of motion, we developed a method based on analysis of the dynamic changes in the interferometric pattern. We used a Mach-Zehnder interferometer with conventional SM and PM fibers excited with a DFB laser at a wavelength of 1550 nm. It was terminated with an optical receiver containing an InGaAs PIN photodiode, whose output was brought into a measuring card module that performs an FFT of the received interferometer signal. The signal arises from the composition of the two waves passing through the interferometer arms. The optical fiber (SMF-28e or PM PANDA) in one arm is the reference; the second is positioned on a measuring slab with dimensions of 1 × 2 m. The movement of persons over the slab was monitored, the signal was processed with an FFT, and the frequency spectra, which arise from the dynamic changes of the interferometric pattern, were evaluated. The results show that individual subjects passing over the slab exhibit characteristic frequency spectra that are individual to particular persons. The measured frequencies ranged from zero to 10 kHz. The stability of the interferometric patterns was evaluated both over time and across repeated identical experiments. Two kinds of balls (tennis and ping-pong) were dropped onto the same place from the same height to assess repeatability, and the dispersion of the obtained frequency spectra was evaluated. These experiments were performed
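    The FFT stage of such a setup amounts to taking the magnitude spectrum of the photodiode signal and locating its dominant components (a numpy sketch with a synthetic signal; the 440 Hz fringe rate and the noise level are invented for illustration, and the 20 kHz sampling rate is chosen so the 0-10 kHz band of interest is covered):

```python
import numpy as np

fs = 20_000                       # sampling rate (Hz); covers spectra up to 10 kHz
t = np.arange(fs) / fs            # one second of signal
rng = np.random.default_rng(0)

# Synthetic photodiode signal: one dominant fringe rate plus measurement noise
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(fs)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

    With a one-second window the bin spacing is 1 Hz, so the dominant fringe frequency is resolved directly; comparing such spectra across repeated drops is the repeatability check described above.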

  11. An Architecture for Web-Based E-Learning Promoting Re-Usable Adaptive Educational E-Content.

    ERIC Educational Resources Information Center

    Sampson, Demetrios; Karagiannidis, Charalampos; Cardinali, Fabrizio

    2002-01-01

    Addresses the issue of reusability in personalized Web-based learning environments, focusing on the work of the European IST Project KOD (Knowledge on Demand), including the definition of an architecture and the implementation of a system that promotes re-usable adaptive educational e-content. (Author/LRW)

  12. Genetic architecture of a feeding adaptation: garter snake (Thamnophis) resistance to tetrodotoxin bearing prey.

    PubMed

    Feldman, Chris R; Brodie, Edmund D; Brodie, Edmund D; Pfrender, Michael E

    2010-11-01

    Detailing the genetic basis of adaptive variation in natural populations is a first step towards understanding the process of adaptive evolution, yet few ecologically relevant traits have been characterized at the genetic level in wild populations. Traits that mediate coevolutionary interactions between species are ideal for studying adaptation because of the intensity of selection and the well-characterized ecological context. We have previously described the ecological context, evolutionary history and partial genetic basis of tetrodotoxin (TTX) resistance in garter snakes (Thamnophis). Derived mutations in a voltage-gated sodium channel gene (Na(v)1.4) in three garter snake species are associated with resistance to TTX, the lethal neurotoxin found in their newt prey (Taricha). Here we evaluate the contribution of Na(v)1.4 alleles to TTX resistance in two of those species from central coastal California. We measured the phenotypes (TTX resistance) and genotypes (Na(v)1.4 and microsatellites) in a local sample of Thamnophis atratus and Thamnophis sirtalis. Allelic variation in Na(v)1.4 explains 23 per cent of the variation in TTX resistance in T. atratus while variation in a haphazard sample of the genome (neutral microsatellite markers) shows no association with the phenotype. Similarly, allelic variation in Na(v)1.4 correlates almost perfectly with TTX resistance in T. sirtalis, but neutral variation does not. These strong correlations suggest that Na(v)1.4 is a major effect locus. The simple genetic architecture of TTX resistance in garter snakes may significantly impact the dynamics of phenotypic coevolution. Fixation of a few alleles of major effect in some garter snake populations may have led to the evolution of extreme phenotypes and an 'escape' from the arms race with newts.

  13. A Fast Conformal Mapping Algorithm with No FFT

    NASA Astrophysics Data System (ADS)

    Luchini, P.; Manzo, F.

    1992-08-01

    An algorithm is presented for the computation of a conformal mapping discretized on a non-uniformly spaced point set, useful for the numerical solution of many problems of fluid dynamics. Most existing iterative techniques, both those having a linear and those having a quadratic type of convergence, rely on the fast Fourier transform (FFT) algorithm for calculating a convolution integral which represents the most time-consuming phase of the computation. The FFT, however, definitely cannot be applied to a non-uniform spacing. The algorithm presented in this paper has been made possible by the construction of a calculation method for convolution integrals which, despite not using an FFT, maintains a computation time of the same order as that of the FFT. The new technique is successfully applied to the problem of conformally mapping a closely spaced cascade of airfoils onto a circle, which requires an exceedingly large number of points if it is solved with uniform spacing.

  14. The Arab Vernacular Architecture and its Adaptation to Mediterranean Climatic Zones

    NASA Astrophysics Data System (ADS)

    Paz, Shlomit; Hamza, Efat

    2014-05-01

    Throughout history people have employed building strategies adapted to local climatic conditions in an attempt to achieve thermal comfort in their homes. In the Mediterranean climate, a mixed strategy developed - utilizing positive parameters (e.g. natural lighting), while at the same time addressing negative variables (e.g. high temperatures during summer). This study analyzes the adaptation of construction strategies of traditional Arab houses to Mediterranean climatic conditions. It is based on the assumption that the climate of the eastern Mediterranean led to development of unique architectural patterns. The way in which the inhabitants chose to build their homes was modest but creative in the context of climate awareness, with simple ideas. These were often instinctive responses to climate challenges. Nine traditional Arab houses, built from the mid-19th century to the beginning of the 20th century, were analyzed in three different regions in Israel: the "Meshulash" - an area in the center of the country, and the Lower and Upper Galilees (in the north). In each region three houses were examined. It is important to note that only a few houses from these periods still remain, particularly in light of new construction in many of the villages' core areas. Qualitative research methodologies included documentation of all the elements of these traditional houses which were assumed to be a result of climatic factors, such as - house position (direction), thickness of walls, thermal mass, ceiling height, location of windows, natural ventilation, exterior wall colors and shading strategies. Additionally, air temperatures and relative humidity were measured at selected dates throughout all seasons both inside and immediately outside the houses during morning, noon, evening and night-time hours. The documentation of the architectural elements and strategies demonstrate that climatic considerations were an integral part of the planning and construction process of these

  15. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
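    A Virtual-Sensor-style estimate with an ensemble-spread uncertainty can be sketched as follows (a hedged illustration using a bootstrap ensemble of linear least-squares fits on synthetic data; the feature basis, fault magnitude, and residual threshold are invented and do not reflect the actual Shuttle models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: "pressure" depends nonlinearly on two other sensor readings
X = rng.uniform(-1, 1, size=(500, 2))
y = 3.0 * X[:, 0] + np.sin(2 * X[:, 1]) + 0.05 * rng.standard_normal(500)

def features(X):
    # fixed basis so each ensemble member is a plain linear least-squares fit
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], np.sin(2 * X[:, 1])])

# Bootstrap ensemble: each member is fit on a resampled copy of the training set
members = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))
    coef, *_ = np.linalg.lstsq(features(X[idx]), y[idx], rcond=None)
    members.append(coef)

def predict(Xnew):
    preds = np.array([features(Xnew) @ c for c in members])
    return preds.mean(axis=0), preds.std(axis=0)   # estimate and its uncertainty

# Fault detection: flag measured readings that deviate from the virtual-sensor estimate
Xtest = rng.uniform(-1, 1, size=(10, 2))
y_true = 3.0 * Xtest[:, 0] + np.sin(2 * Xtest[:, 1])
y_meas = y_true.copy()
y_meas[3] += 2.0                          # inject a fault into one reading
mean, std = predict(Xtest)
flags = np.abs(y_meas - mean) > 0.5       # illustrative residual threshold
```

    The ensemble standard deviation plays the role described in the abstract: where the members disagree, the estimate is uncertain, and residuals there should be discounted rather than flagged.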

  16. The adaptive nature of eye movements in linguistic tasks: how payoff and architecture shape speed-accuracy trade-offs.

    PubMed

    Lewis, Richard L; Shvartsman, Michael; Singh, Satinder

    2013-07-01

    We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. PMID:23757203

  17. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems.

    PubMed

    Albattat, Ali; Gruenwald, Benjamin C; Yucelen, Tansel

    2016-01-01

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems, that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling errors and degraded modes of operation of the modules and their interconnections. In addition to the theoretical findings, including a rigorous stability and boundedness analysis of the closed-loop dynamical system and a characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches.

  19. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems

    PubMed Central

    Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel

    2016-01-01

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems, that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling errors and degraded modes of operation of the modules and their interconnections. In addition to the theoretical findings, including a rigorous stability and boundedness analysis of the closed-loop dynamical system and a characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894

  20. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results good data gridding algorithms are essential; in practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections, such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g; the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
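    One standard flat-earth FFT operation on gridded gravity data, upward continuation, illustrates the approach (a numpy sketch; the grid spacing and the two Gaussian anomalies are invented for the example, not taken from the paper's datasets):

```python
import numpy as np

def upward_continue(grid, dx, height):
    """Upward-continue a gridded potential-field quantity by `height` (flat-earth FFT)."""
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k = 2 * np.pi * np.hypot(KX, KY)          # radial wavenumber (rad/m)
    # Each wavenumber component decays as exp(-k*h) with elevation h
    return np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * height)).real

# Gravity anomalies (mGal) on a 5 km grid: one broad and one narrow Gaussian source
dx = 5_000.0
x = (np.arange(64) - 32) * dx
X, Y = np.meshgrid(x, x)
dg = 30 * np.exp(-(X**2 + Y**2) / 40_000.0**2) \
   + 30 * np.exp(-((X - 60_000.0)**2 + Y**2) / 8_000.0**2)
dg_up = upward_continue(dg, dx, 10_000.0)     # field 10 km above the grid
```

    The narrow anomaly decays much faster with height than the broad one, since exp(-k h) attenuates short wavelengths most strongly; the mean (k = 0) component is preserved exactly. The implicit periodicity of the FFT is one of the practical limitations the abstract mentions, usually handled by edge tapering or zero padding.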

  1. Adapted Verbal Feedback, Instructor Interaction and Student Emotions in the Landscape Architecture Studio

    ERIC Educational Resources Information Center

    Smith, Carl A.; Boyer, Mark E.

    2015-01-01

    In light of concerns with architectural students' emotional jeopardy during traditional desk and final-jury critiques, the authors pursue alternative approaches intended to provide more supportive and mentoring verbal assessment in landscape architecture studios. In addition to traditional studio-based critiques throughout a semester, we provide…

  2. When History Repeats Itself: Exploring the Genetic Architecture of Host-Plant Adaptation in Two Closely Related Lepidopteran Species

    PubMed Central

    Alexandre, Hermine; Ponsard, Sergine; Bourguet, Denis; Vitalis, Renaud; Audiot, Philippe; Cros-Arteil, Sandrine; Streiff, Réjane

    2013-01-01

    The genus Ostrinia includes two allopatric maize pests across Eurasia, namely the European corn borer (ECB, O. nubilalis) and the Asian corn borer (ACB, O. furnacalis). A third species, the Adzuki bean borer (ABB, O. scapulalis), occurs in sympatry with both the ECB and the ACB. The ABB mostly feeds on native dicots, which probably correspond to the ancestral host plant type for the genus Ostrinia. This situation offers the opportunity to characterize the two presumably independent adaptations or preadaptations to maize that occurred in the ECB and ACB. In the present study, we aimed at deciphering the genetic architecture of these two adaptations to maize, a monocot host plant recently introduced into Eurasia. To this end, we performed a genome scan analysis based on 684 AFLP markers in 12 populations of ECB, ACB and ABB. We detected 2 outlier AFLP loci when comparing French populations of the ECB and ABB, and 9 outliers when comparing Chinese populations of the ACB and ABB. These outliers were different in both countries, and we found no evidence of linkage disequilibrium between any two of them. These results suggest that adaptation or preadaptation to maize relies on a different genetic architecture in the ECB and ACB. However, this conclusion must be considered in light of the constraints inherent to genome scan approaches and of the intricate evolution of adaptation and reproductive isolation in the Ostrinia spp. complex. PMID:23874914

  3. Engine Fault Diagnosis using DTW, MFCC and FFT

    NASA Astrophysics Data System (ADS)

    Singh, Vrijendra; Meena, Narendra

    In this paper we use a combination of three algorithms, Dynamic Time Warping (DTW), Mel-Frequency Cepstral Coefficients (MFCC), and the Fast Fourier Transform (FFT), for classifying various engine faults. DTW, MFCC, and the FFT are usually used for automatic speech recognition; this paper introduces the DTW algorithm and the coefficients extracted from the Mel frequency cepstrum and the FFT for automatic fault detection and identification (FDI) of internal combustion engines for the first time. The objective of the current work was to develop a new intelligent system able to predict possible faults in a running engine at different workshops. We took several samples of engine faults, applied these algorithms to extract features, and used a fuzzy rule base approach for fault classification.
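    The DTW component can be sketched directly (a textbook O(nm) dynamic-programming implementation; the sine "signatures" below stand in for real engine recordings and are invented for the example):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences (absolute cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a waveform stays close under DTW, while a waveform
# with a different "fault signature" (different frequency content) does not.
base = np.sin(np.linspace(0, 4 * np.pi, 80))
stretched = np.sin(np.linspace(0, 4 * np.pi, 100))
other = np.sin(np.linspace(0, 9 * np.pi, 100))
d_same = dtw_distance(base, stretched)
d_diff = dtw_distance(base, other)
```

    This tolerance to time stretching is what makes DTW attractive for comparing engine recordings taken at slightly different speeds; the MFCC/FFT features feed the same kind of comparison in the frequency domain.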

  4. A high-performance FFT algorithm for vector supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77-percent faster than Cray's assembly-coded library routine.

  5. A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems.

    PubMed

    Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N

    2006-12-01

    Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computation and storage in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)", is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The terminology reflects the fact that it eliminates one neural network (namely the action network) that is part of a typical dual-network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load, and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life micro-electro-mechanical systems (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.

  6. Energy efficient low power shared-memory Fast Fourier Transform (FFT) processor with dynamic voltage scaling

    NASA Astrophysics Data System (ADS)

    Fitrio, D.; Singh, J.; Stojcevski, A.

    2005-12-01

    Reduction of power dissipation in CMOS circuits needs to be addressed for portable battery-powered devices. Selection of an appropriate transistor library to minimise leakage current, implementation of low-power design architectures, power management, and the choice of chip packaging all have an impact on power dissipation and are important considerations in the design and implementation of integrated circuits for low-power applications. An energy-efficient architecture is highly desirable for battery-operated systems, which operate under a wide variety of operating scenarios. An energy-efficient design aims to reconfigure its own architecture to scale down energy consumption depending upon throughput and quality requirements. An energy-efficient system should be able to decide its minimum power requirements by dynamically scaling its own operating frequency, supply voltage or threshold voltage according to a variety of operating scenarios. The increasing demand for application-specific integrated circuits and processors for independent portable devices has led designers to implement dedicated processors with ultra-low-power requirements. One such dedicated processor is a Fast Fourier Transform (FFT) processor, which is widely used in signal processing for numerous applications such as wireless telecommunication and biomedical applications, where the demand for extended battery life is extremely high. This paper presents the design and performance analysis of a low-power shared-memory FFT processor incorporating dynamic voltage scaling. Dynamic voltage scaling enables power supply scaling into various supply voltage levels. The concept behind the proposed solution is that the speed of the main logic core can be adjusted according to the input load or the amount of the processor's computation, "just enough" to meet the requirement. The design was implemented using 0.12 μm STMicroelectronics 6-metal-layer CMOS dual-process technology in Cadence Analogue
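
The leverage behind dynamic voltage scaling comes from the standard first-order CMOS model P_dyn = α·C·V²·f: frequency scales power linearly, but voltage scales it quadratically, and since lowering f usually permits lowering V, the two compound. A small illustrative calculation with made-up parameter values (not taken from the paper):

```python
def dynamic_power(alpha, c_load, vdd, freq):
    """First-order dynamic power of a CMOS block: P = alpha * C * V^2 * f."""
    return alpha * c_load * vdd ** 2 * freq

# hypothetical operating points for a DVS-capable FFT core
p_full = dynamic_power(0.15, 1e-9, 1.2, 200e6)   # full speed
p_dvs  = dynamic_power(0.15, 1e-9, 0.6, 100e6)   # half f AND half V

# Halving f alone would halve power; halving V as well contributes another
# factor of 4, so the scaled point dissipates 1/8 of the power.  Per
# operation the energy drops by 4x, since the work also takes twice as long.
```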

  7. Automatic Synthesis of Cost Effective FFT/IFFT Cores for VLSI OFDM Systems

    NASA Astrophysics Data System (ADS)

    L'Insalata, Nicola E.; Saponara, Sergio; Fanucci, Luca; Terreni, Pierangelo

    This work presents an FFT/IFFT core compiler particularly suited for the VLSI implementation of OFDM communication systems. The tool employs an architecture template based on the pipelined cascade principle. The generated cores support run-time programmable length and transform type selection, enabling seamless integration into multiple-mode and multiple-standard terminals. A distinctive feature of the tool is its accuracy-driven configuration engine, which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point) using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results of the generated macrocells are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and commercially available FPGA devices. When compared with other tools for automatic FFT core generation, the proposed environment produces macrocells with lower circuit complexity expressed as gate count and RAM/ROM bits, while keeping the same system-level performance in terms of throughput, transform size and numerical accuracy.
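
Of the three internal arithmetic models profiled, block floating-point is the least familiar outside hardware design: a whole block of samples shares one exponent, so only an integer mantissa is stored per sample. A minimal sketch of the representation (the function names and the 12-bit width are illustrative, not the tool's API):

```python
import math

def bfp_quantize(block, mant_bits):
    """Represent a block of reals as signed integer mantissas plus one
    shared exponent (block floating point)."""
    peak = max(abs(v) for v in block)
    # choose the exponent so the largest magnitude lands in [0.5, 1)
    exp = math.floor(math.log2(peak)) + 1 if peak > 0 else 0
    scale = 2 ** (mant_bits - 1)
    mants = [round(v / 2 ** exp * scale) for v in block]
    return mants, exp

def bfp_dequantize(mants, exp, mant_bits):
    """Reconstruct the real values from mantissas and the shared exponent."""
    scale = 2 ** (mant_bits - 1)
    return [m / scale * 2 ** exp for m in mants]
```

The quantization step is 2**exp / 2**(mant_bits-1), so the reconstruction error is at most half a step; the accuracy-driven engine described in the abstract would compare exactly this kind of error against the user's budget.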

  8. Adaptive software architecture based on confident HCI for the deployment of sensitive services in Smart Homes.

    PubMed

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-01-01

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature.

  9. Adaptive Software Architecture Based on Confident HCI for the Deployment of Sensitive Services in Smart Homes

    PubMed Central

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-01-01

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature. PMID:25815449

  10. Bone architecture adaptations after spinal cord injury: impact of long-term vibration of a constrained lower limb

    PubMed Central

    Dudley-Javoroski, S.; Petrie, M. A.; McHenry, C. L.; Amelon, R. E.; Saha, P. K.

    2015-01-01

    Summary This study examined the effect of a controlled dose of vibration upon bone density and architecture in people with spinal cord injury (who eventually develop severe osteoporosis). Very sensitive computed tomography (CT) imaging revealed no effect of vibration after 12 months, but other doses of vibration may still be useful to test. Introduction The purposes of this report were to determine the effect of a controlled dose of vibratory mechanical input upon individual trabecular bone regions in people with chronic spinal cord injury (SCI) and to examine the longitudinal bone architecture changes in both the acute and chronic state of SCI. Methods Participants with SCI received unilateral vibration of the constrained lower limb segment while sitting in a wheelchair (0.6g, 30 Hz, 20 min, three times weekly). The opposite limb served as a control. Bone mineral density (BMD) and trabecular micro-architecture were measured with high-resolution multi-detector CT. For comparison, one participant was studied from the acute (0.14 year) to the chronic state (2.7 years). Results Twelve months of vibration training did not yield adaptations of BMD or trabecular micro-architecture for the distal tibia or the distal femur. BMD and trabecular network length continued to decline at several distal femur sub-regions, contrary to previous reports suggesting a “steady state” of bone in chronic SCI. In the participant followed from acute to chronic SCI, BMD and architecture decline varied systematically across different anatomical segments of the tibia and femur. Conclusions This study supports the conclusion that vibration training, using this study’s dose parameters, is not an effective anti-osteoporosis intervention for people with chronic SCI. Using a high-spatial-resolution CT methodology and segmental analysis, we illustrate novel longitudinal changes in bone that occur after spinal cord injury. PMID:26395887

  11. A game-theoretic architecture for visible watermarking system of ACOCOA (adaptive content and contrast aware) technique

    NASA Astrophysics Data System (ADS)

    Tsai, Min-Jen; Liu, Jung

    2011-12-01

    Digital watermarking techniques have been developed to protect intellectual property. A digital watermarking system is basically judged on two characteristics: security robustness and image quality. In order to obtain a robust visible watermarking in practice, we present a novel watermarking algorithm named adaptive content and contrast aware (ACOCOA), which considers the host image content and watermark texture. In addition, we propose a powerful security architecture against attacks for visible watermarking systems which is based on a game-theoretic approach that provides an equilibrium condition solution for the decision maker by studying the effects of transmission power on intensity and perceptual efficiency. The experimental results demonstrate that the proposed approach is feasible: it not only provides effectiveness and robustness for the watermarked images, but also allows the watermark encoder to obtain the best adaptive watermarking strategy under attacks.
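
The game-theoretic layer can be pictured as a two-player game between the watermark encoder (choosing an embedding strength) and an attacker (choosing an attack), with the encoder seeking the best guaranteed outcome. A toy maximin computation over a hypothetical payoff table (the numbers are invented for illustration and are not taken from the paper):

```python
def maximin(payoff):
    """Return (row, value): the encoder strategy with the best worst case.

    payoff[i][j] is the encoder's utility when it plays row i (an embedding
    strength) and the attacker plays column j (an attack).
    """
    worst = [min(row) for row in payoff]
    best = max(range(len(payoff)), key=lambda i: worst[i])
    return best, worst[best]

# hypothetical utilities trading image quality against robustness
payoff = [
    [4, 1, 0],   # light embedding: best quality, fragile under attack
    [3, 2, 2],   # medium embedding
    [2, 2, 1],   # heavy embedding: robust but visibly degraded
]
```

With this table the medium embedding strength is the security-level choice: its worst case (utility 2) beats the worst cases of the other rows.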

  12. Architectural Models of Adaptive Hypermedia Based on the Use of Ontologies

    ERIC Educational Resources Information Center

    Souhaib, Aammou; Mohamed, Khaldi; Eddine, El Kadiri Kamal

    2011-01-01

    The domain of traditional hypermedia is revolutionized by the arrival of the concept of adaptation. Currently, the domain of AHS (adaptive hypermedia systems) is constantly growing. A major goal of current research is to provide a personalized educational experience that meets the needs specific to each learner (knowledge level, goals, motivation,…

  13. FFT-split-operator code for solving the Dirac equation in 2+1 dimensions

    NASA Astrophysics Data System (ADS)

    Mocken, Guido R.; Keitel, Christoph H.

    2008-06-01

    interrupted calculation. Additional comments: Along with the program's source code, we provide several sample configuration files, a pre-calculated bound state wave function, and template files for the analysis of the results with both MATLAB and Igor Pro. Running time: Running time ranges from a few minutes for simple tests up to several days, even weeks for real-world physical problems that require very large grids or very small time steps. References: J.A. Fleck, J.R. Morris, M.D. Feit, Time-dependent propagation of high energy laser beams through the atmosphere, Appl. Phys. 10 (1976) 129-160. R. Heather, An asymptotic wavefunction splitting procedure for propagating spatially extended wavefunctions: Application to intense field photodissociation of H2+, Comput. Phys. Comm. 63 (1991) 446. M. Frigo, S.G. Johnson, FFTW: An adaptive software architecture for the FFT, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3, IEEE, 1998, pp. 1381-1384. M. Frigo, S.G. Johnson, The design and implementation of FFTW3, in: Proceedings of the IEEE, vol. 93, IEEE, 2005, pp. 216-231. URL: http://www.fftw.org/. M. Galassi, J. Davies, J. Theiler, B. Gough, G. Jungman, M. Booth, F. Rossi, GNU Scientific Library Reference Manual, second ed., Network Theory Limited, 2006. URL: http://www.gnu.org/software/gsl/. M.D. Feit, J.A. Fleck, A. Steiger, Solution of the Schrödinger equation by a spectral method, J. Comput. Phys. 47 (1982) 412-433.

  14. Adaptation of the anelastic solver EULAG to high performance computing architectures.

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Ciżnicki, Miłosz; Kopta, Piotr; Kulczewski, Michał; Kurowski, Krzysztof; Piotrowski, Zbigniew; Rojek, Krzysztof; Rosa, Bogdan; Szustak, Łukasz; Wyrzykowski, Roman

    2014-05-01

    In recent years there has been widespread interest in employing heterogeneous and hybrid supercomputing architectures for geophysical research. A particularly promising application for modern supercomputing architectures is numerical weather prediction (NWP). Adapting traditional NWP codes to new machines based on multi- and many-core processors, such as GPUs, increases computational efficiency and decreases energy consumption. This offers a unique opportunity to develop simulations with finer grid resolutions and computational domains larger than ever before. Further, it extends the range of scales represented in the model, so that the accuracy of representation of the simulated atmospheric processes can be improved. Consequently, it improves the quality of weather forecasts. A coalition of Polish scientific institutions launched a project aimed at adapting the EULAG fluid solver to future high-performance computing platforms. EULAG is currently being implemented as a new dynamical core of the COSMO Consortium weather prediction framework. The solver code combines features of stencil and point-wise computations. Its communication scheme consists of both halo-exchange subroutines and global reduction functions. Within the project, two main modules of EULAG, namely the MPDATA advection scheme and the iterative GCR elliptic solver, are analyzed and optimized. Relevant techniques have been chosen and applied to accelerate code execution on modern HPC architectures: stencil decomposition, block decomposition (with weighting analysis between computation and communication), reduction of inter-cache communication by partitioning cores into independent teams, cache reuse and vectorization. Experiments matching the computational domain topology to the cluster topology are performed as well. The parallel formulation was extended from pure MPI to a hybrid MPI-OpenMP approach. Porting to GPU using CUDA directives is in progress.
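
Among the techniques listed, the halo-exchange pattern is the one every decomposed stencil code shares: each subdomain keeps ghost copies of its neighbours' boundary cells so the stencil can be applied purely locally. A serial Python sketch of the decomposition logic (a real solver such as EULAG does the ghost-cell fill with MPI messages):

```python
def stencil_global(u):
    """Reference: periodic 3-point average over the whole domain."""
    n = len(u)
    return [(u[(i - 1) % n] + u[i] + u[(i + 1) % n]) / 3.0 for i in range(n)]

def stencil_decomposed(u, parts):
    """Same stencil, computed per subdomain after a halo exchange.

    Each subdomain is padded with one ghost cell per side holding the
    neighbour's boundary value (periodic), standing in for an MPI halo swap.
    """
    n = len(u)
    assert n % parts == 0
    size = n // parts
    out = []
    for p in range(parts):
        lo, hi = p * size, (p + 1) * size
        local = [u[(lo - 1) % n]] + u[lo:hi] + [u[hi % n]]  # halo exchange
        out.extend((local[i - 1] + local[i] + local[i + 1]) / 3.0
                   for i in range(1, size + 1))
    return out
```

Because each subdomain only ever touches its own cells plus the one-deep halo, the per-subdomain loops map directly onto the stencil/block decomposition and cache-reuse optimizations the abstract describes.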
Preliminary results of performance of the

  15. Real-Time, Polyphase-FFT, 640-MHz Spectrum Analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, George A.; Garyantes, Michael F.; Grimm, Michael J.; Charny, Bentsian; Brown, Randy D.; Wilck, Helmut C.

    1994-01-01

    A real-time polyphase fast-Fourier-transform (polyphase-FFT) spectrum analyzer was designed to aid in the detection of multigigahertz radio signals in two 320-MHz-wide polarization channels. The spectrum analyzer divides the total spectrum of 640 MHz into 33,554,432 frequency channels of about 20 Hz each. The size and cost of the polyphase-coefficient memory are substantially reduced, and much of the processing loss of windowed FFTs is eliminated.
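
The polyphase-FFT trick that recovers the processing loss of a plain windowed FFT is "weight, fold, then transform": the input is weighted by a long prototype filter, folded (summed) down to one transform-length segment, and only then transformed. A toy Python sketch with a naive DFT standing in for the FFT (the parameters are illustrative, far from the 33.5-million-channel system described):

```python
import cmath

def naive_dft(x):
    """O(n^2) DFT, standing in for the FFT in this sketch."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def polyphase_channels(x, n_ch, taps):
    """Polyphase filter-bank channelizer: weight the input block by the
    prototype filter, fold it into n_ch bins, then take one small transform."""
    assert len(x) == len(taps) and len(taps) % n_ch == 0
    folded = [0j] * n_ch
    for i, (xi, hi) in enumerate(zip(x, taps)):
        folded[i % n_ch] += xi * hi       # polyphase fold
    return naive_dft(folded)
```

With a trivial all-ones prototype filter, a complex tone centred on channel 3 of an 8-channel bank lands in bin 3; a shaped prototype filter would additionally flatten each channel's passband and suppress leakage into neighbours.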

  16. Proceedings of the 13th annual international symposium on computer architecture

    SciTech Connect

    Not Available

    1986-01-01

    This book presents the papers given at a symposium which considered supercomputers and array processors. Topics covered at the symposium included knowledge bases, parallel architectures, artificial intelligence, VLSI, parallel algorithms, memory requirements, graph allocation, software implementation, performance analysis, parallel Prolog architectures, interconnections, Lisp machines, special purpose architectures, dataflow architectures, FFT machines, CPU architectures, matrix computation architectures, image processing, cache memory, pipeline architectures, and cache coherence.

  17. Unexpectedly low nitrogen acquisition and absence of root architecture adaptation to nitrate supply in a Medicago truncatula highly branched root mutant

    PubMed Central

    Bourion, Virginie

    2014-01-01

    To complement N2 fixation through symbiosis, legumes can efficiently acquire soil mineral N through adapted root architecture. However, root architecture adaptation to mineral N availability has been little studied in legumes. Therefore, this study investigated the effect of nitrate availability on root architecture in Medicago truncatula and assessed the N-uptake potential of a new highly branched root mutant, TR185. The effects of varying nitrate supply on both root architecture and N uptake were characterized in the mutant and in the wild type. Surprisingly, the root architecture of the mutant was not modified by variation in nitrate supply. Moreover, despite its highly branched root architecture, TR185 had a permanently N-starved phenotype. A transcriptome analysis was performed to identify genes differentially expressed between the two genotypes. This analysis revealed differential responses related to the nitrate acquisition pathway and confirmed that N starvation occurred in TR185. Changes in amino acid content and expression of genes involved in the phenylpropanoid pathway were associated with differences in root architecture between the mutant and the wild type. PMID:24706718

  18. Academic Accountability and University Adaptation: The Architecture of an Academic Learning Organization.

    ERIC Educational Resources Information Center

    Dill, David D.

    1999-01-01

    Discusses various adaptations in organizational structure and governance of academic learning institutions, using case studies of universities that are attempting to improve the quality of teaching and the learning process. Identifies five characteristics typical of such organizations: (1) a culture of evidence; (2) improved coordination of…

  19. Architecture for an Adaptive and Intelligent Tutoring System That Considers the Learner's Multiple Intelligences

    ERIC Educational Resources Information Center

    Hafidi, Mohamed; Bensebaa, Taher

    2015-01-01

    The majority of adaptive and intelligent tutoring systems (AITS) are dedicated to a specific domain, allowing them to offer accurate models of the domain and the learner. The analysis produced from traces left by the users is didactically very precise and specific to the domain in question. It allows one to guide the learner in case of difficulty…

  20. Evolution of genomic structural variation and genomic architecture in the adaptive radiations of African cichlid fishes.

    PubMed

    Fan, Shaohua; Meyer, Axel

    2014-01-01

    African cichlid fishes are an ideal system for studying explosive rates of speciation and the origin of diversity in adaptive radiation. Within the last few million years, more than 2000 species have evolved in the Great Lakes of East Africa, the largest adaptive radiation in vertebrates. These young species show spectacular diversity in their coloration, morphology and behavior. However, little is known about the genomic basis of this astonishing diversity. Recently, five African cichlid genomes were sequenced, including that of the Nile Tilapia (Oreochromis niloticus), a basal and only relatively moderately diversified lineage, and the genomes of four representative endemic species of the adaptive radiations, Neolamprologus brichardi, Astatotilapia burtoni, Metriaclima zebra, and Pundamilia nyererei. Using the Tilapia genome as a reference genome, we generated a high-resolution genomic variation map, consisting of single nucleotide polymorphisms (SNPs), short insertions and deletions (indels), inversions and deletions. In total, around 18.8, 17.7, 17.0, and 17.0 million SNPs, 2.3, 2.2, 1.4, and 1.9 million indels, 262, 306, 162, and 154 inversions, and 3509, 2705, 2710, and 2634 deletions were inferred to have evolved in N. brichardi, A. burtoni, P. nyererei, and M. zebra, respectively. Many of these variations affected the annotated gene regions in the genome. Different patterns of genetic variation were detected during the adaptive radiation of African cichlid fishes. For SNPs, the highest rate of evolution was detected in the common ancestor of N. brichardi, A. burtoni, P. nyererei, and M. zebra. However, for the evolution of inversions and deletions, we found that the rates at the terminal taxa are substantially higher than the rates at the ancestral lineages. The high-resolution map provides an ideal opportunity to understand the genomic bases of the adaptive radiation of African cichlid fishes.

  1. Matrix-Vector Based Fast Fourier Transformations on SDR Architectures

    NASA Astrophysics Data System (ADS)

    He, Y.; Hueske, K.; Götze, J.; Coersmeier, E.

    2008-05-01

    Today Discrete Fourier Transforms (DFTs) are applied in various radio standards based on OFDM (Orthogonal Frequency Division Multiplexing). It is important to achieve a fast computation of the DFT, which is usually done by using specialized Fast Fourier Transform (FFT) engines. However, in the face of Software Defined Radio (SDR) development, more general (parallel) processor architectures are often desirable, which are not tailored to FFT computations. Therefore, alternative approaches are required to reduce the complexity of the DFT. Starting from a matrix-vector based description of the FFT idea, we will present different factorizations of the DFT matrix, which allow a reduction of the complexity that lies between the original DFT and the minimum FFT complexity. The computational complexities of these factorizations and their suitability for implementation on different processor architectures are investigated.
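
The abstract's point — factorizations whose cost sits between the O(n²) DFT and the full FFT — can be seen with a single radix-2 split: one factorization step of the DFT matrix replaces an n-point DFT by two n/2-point DFTs plus O(n) twiddle multiplies, and stopping there (rather than recursing, as a full FFT would) yields exactly such an intermediate complexity. A Python sketch of the idea:

```python
import cmath

def dft(x):
    """Plain matrix-vector DFT: about n^2 complex multiplies."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def dft_one_split(x):
    """One Cooley-Tukey factorization step applied to the DFT matrix:
    two half-size DFTs (still computed naively) plus a twiddle/butterfly
    combine.  Multiplies drop from n^2 to roughly n^2/2 + n/2."""
    n = len(x)
    even, odd = dft(x[0::2]), dft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])
```

The partially factored form keeps the bulk of the work as two dense half-size matrix-vector products, which is precisely the kind of regular kernel that general-purpose parallel SDR processors handle well.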

  2. SSME to RS-25: Challenges of Adapting a Heritage Engine to a New Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.

    2015-01-01

    A key constituent of the NASA Space Launch System (SLS) architecture is the RS-25 engine, also known as the Space Shuttle Main Engine (SSME). This engine was selected largely due to the maturity and extensive experience gained through 30-plus years of service. However, while the RS-25 is a highly mature system, simply unbolting it from the Space Shuttle and mounting it on the new SLS vehicle is not a "plug-and-play" operation. In addition to numerous technical integration and operational details, there were also hardware upgrades needed. While the magnitude of effort is less than that needed to develop a new clean-sheet engine system, this paper describes some of the expected and unexpected challenges encountered to date on the path to the first flight of SLS.

  3. Elucidating the molecular architecture of adaptation via evolve and resequence experiments.

    PubMed

    Long, Anthony; Liti, Gianni; Luptak, Andrej; Tenaillon, Olivier

    2015-10-01

    Evolve and resequence (E&R) experiments use experimental evolution to adapt populations to a novel environment, then next-generation sequencing to analyse genetic changes. They enable molecular evolution to be monitored in real time on a genome-wide scale. Here, we review the field of E&R experiments across diverse systems, ranging from simple non-living RNA to bacteria, yeast and the complex multicellular organism Drosophila melanogaster. We explore how different evolutionary outcomes in these systems are largely consistent with common population genetics principles. Differences in outcomes across systems are largely explained by different starting population sizes, levels of pre-existing genetic variation, recombination rates and adaptive landscapes. We highlight emerging themes and inconsistencies that future experiments must address.

  4. Elucidating the molecular architecture of adaptation via evolve and resequence experiments

    PubMed Central

    Long, Anthony; Liti, Gianni; Luptak, Andrej; Tenaillon, Olivier

    2016-01-01

    Evolve and resequence (E&R) experiments use experimental evolution to adapt populations to a novel environment, followed by next-generation sequencing. They enable molecular evolution to be monitored in real time at a genome-wide scale. We review the field of E&R experiments across diverse systems, ranging from simple non-living RNA to bacteria, yeast and the complex multicellular organism Drosophila melanogaster. We explore how different evolutionary outcomes in these systems are largely consistent with common population genetics principles. Differences in outcomes across systems are largely explained by different starting population sizes, levels of pre-existing genetic variation, recombination rates, and adaptive landscapes. We highlight emerging themes and inconsistencies that future experiments must address. PMID:26347030

  5. Adaptive functional specialisation of architectural design and fibre type characteristics in agonist shoulder flexor muscles of the llama, Lama glama.

    PubMed

    Graziotti, Guillermo H; Chamizo, Verónica E; Ríos, Clara; Acevedo, Luz M; Rodríguez-Menéndez, J M; Victorica, C; Rivero, José-Luis L

    2012-08-01

    Like other camelids, llamas (Lama glama) have the natural ability to pace (moving ipsilateral limbs in near synchronicity). But unlike the Old World camelids (bactrian and dromedary camels), they are well adapted for pacing at slower or moderate speeds in high-altitude habitats, having been described as good climbers and used as pack animals for centuries. In order to gain insight into skeletal muscle design and to ascertain its relationship with the llama's characteristic locomotor behaviour, this study examined the correspondence between architecture and fibre types in two agonist muscles involved in shoulder flexion (M. teres major - TM and M. deltoideus, pars scapularis - DS and pars acromialis - DA). Architectural properties were found to be correlated with fibre-type characteristics both in DS (long fibres, low pinnation angle, fast-glycolytic fibre phenotype with abundant IIB fibres, small fibre size, reduced number of capillaries per fibre and low oxidative capacity) and in DA (short fibres, high pinnation angle, slow-oxidative fibre phenotype with numerous type I fibres, very sparse IIB fibres, and larger fibre size, abundant capillaries and high oxidative capacity). This correlation suggests a clear division of labour within the M. deltoideus of the llama, DS being involved in rapid flexion of the shoulder joint during the swing phase of the gait, and DA in joint stabilisation during the stance phase. However, the architectural design of the TM muscle (longer fibres and lower fibre pinnation angle) was not strictly matched with its fibre-type characteristics (very similar to those of the postural DA muscle). This unusual design suggests a dual function of the TM muscle both in active flexion of the shoulder and in passive support of the limb during the stance phase, pulling the forelimb to the trunk. This functional specialisation seems to be well suited to a quadruped species that needs to increase ipsilateral stability of the limb during the support

  6. The Architecture of Iron Microbial Mats Reflects the Adaptation of Chemolithotrophic Iron Oxidation in Freshwater and Marine Environments

    PubMed Central

    Chan, Clara S.; McAllister, Sean M.; Leavitt, Anna H.; Glazer, Brian T.; Krepski, Sean T.; Emerson, David

    2016-01-01

    Microbes form mats with architectures that promote efficient metabolism within a particular physicochemical environment, thus studying mat structure helps us understand ecophysiology. Despite much research on chemolithotrophic Fe-oxidizing bacteria, Fe mat architecture has not been visualized because these delicate structures are easily disrupted. There are striking similarities between the biominerals that comprise freshwater and marine Fe mats, made by Beta- and Zetaproteobacteria, respectively. If these biominerals are assembled into mat structures with similar functional morphology, this would suggest that mat architecture is adapted to serve roles specific to Fe oxidation. To evaluate this, we combined light, confocal, and scanning electron microscopy of intact Fe microbial mats with experiments on sheath formation in culture, in order to understand mat developmental history and subsequently evaluate the connection between Fe oxidation and mat morphology. We sampled a freshwater sheath mat from Maine and marine stalk and sheath mats from Loihi Seamount hydrothermal vents, Hawaii. Mat morphology correlated to niche: stalks formed in steeper O2 gradients while sheaths were associated with low to undetectable O2 gradients. Fe-biomineralized filaments, twisted stalks or hollow sheaths, formed the highly porous framework of each mat. The mat-formers are keystone species, with nascent marine stalk-rich mats comprised of novel and uncommon Zetaproteobacteria. For all mats, filaments were locally highly parallel with similar morphologies, indicating that cells were synchronously tracking a chemical or physical cue. In the freshwater mat, cells inhabited sheath ends at the growing edge of the mat. Correspondingly, time lapse culture imaging showed that sheaths are made like stalks, with cells rapidly leaving behind an Fe oxide filament. The distinctive architecture common to all observed Fe mats appears to serve specific functions related to chemolithotrophic Fe

  7. The Architecture of Iron Microbial Mats Reflects the Adaptation of Chemolithotrophic Iron Oxidation in Freshwater and Marine Environments.

    PubMed

    Chan, Clara S; McAllister, Sean M; Leavitt, Anna H; Glazer, Brian T; Krepski, Sean T; Emerson, David

    2016-01-01

    Microbes form mats with architectures that promote efficient metabolism within a particular physicochemical environment, thus studying mat structure helps us understand ecophysiology. Despite much research on chemolithotrophic Fe-oxidizing bacteria, Fe mat architecture has not been visualized because these delicate structures are easily disrupted. There are striking similarities between the biominerals that comprise freshwater and marine Fe mats, made by Beta- and Zetaproteobacteria, respectively. If these biominerals are assembled into mat structures with similar functional morphology, this would suggest that mat architecture is adapted to serve roles specific to Fe oxidation. To evaluate this, we combined light, confocal, and scanning electron microscopy of intact Fe microbial mats with experiments on sheath formation in culture, in order to understand mat developmental history and subsequently evaluate the connection between Fe oxidation and mat morphology. We sampled a freshwater sheath mat from Maine and marine stalk and sheath mats from Loihi Seamount hydrothermal vents, Hawaii. Mat morphology correlated to niche: stalks formed in steeper O2 gradients while sheaths were associated with low to undetectable O2 gradients. Fe-biomineralized filaments, twisted stalks or hollow sheaths, formed the highly porous framework of each mat. The mat-formers are keystone species, with nascent marine stalk-rich mats comprised of novel and uncommon Zetaproteobacteria. For all mats, filaments were locally highly parallel with similar morphologies, indicating that cells were synchronously tracking a chemical or physical cue. In the freshwater mat, cells inhabited sheath ends at the growing edge of the mat. Correspondingly, time lapse culture imaging showed that sheaths are made like stalks, with cells rapidly leaving behind an Fe oxide filament. The distinctive architecture common to all observed Fe mats appears to serve specific functions related to chemolithotrophic Fe

  8. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

Massive levels of integration following Moore's Law have ushered in a paradigm shift in the way on-chip interconnections are designed. With ever higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. The wired interconnects between cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy required to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth than wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative; they need no physical interconnection layout because data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, smaller area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
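The CDMA mechanism the thesis builds on can be illustrated with a toy sketch (my own illustration, not the thesis's protocol): two transmitter-receiver pairs spread their bits with orthogonal Walsh codes, transmit simultaneously over one shared channel, and each receiver recovers its own stream by correlating against its code.

```python
import numpy as np

# Minimal CDMA sketch: two transmitter-receiver pairs share one channel
# using orthogonal Walsh codes (rows of a Hadamard matrix).
WALSH = np.array([[1,  1, 1,  1],
                  [1, -1, 1, -1]])   # two orthogonal length-4 codes

def spread(bits, code):
    """Map bits {0,1} to chips {-1,+1} and spread each bit by the code."""
    symbols = 2 * np.asarray(bits) - 1
    return np.repeat(symbols, len(code)) * np.tile(code, len(bits))

def despread(channel, code):
    """Correlate the summed channel with one code to recover that stream."""
    chips = channel.reshape(-1, len(code))
    corr = chips @ code              # orthogonality cancels the other stream
    return (corr > 0).astype(int)

bits_a, bits_b = [1, 0, 1, 1], [0, 1, 1, 0]
channel = spread(bits_a, WALSH[0]) + spread(bits_b, WALSH[1])  # superposed
recovered_a = despread(channel, WALSH[0])
recovered_b = despread(channel, WALSH[1])
```

Because the codes are orthogonal, each correlation sees only its own symbol energy; this is what lets several pairs use the wireless channel at once.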

  9. SSME to RS-25: Challenges of Adapting a Heritage Engine to a New Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.

    2015-01-01

    Following the cancellation of the Constellation program and retirement of the Space Shuttle, NASA initiated the Space Launch System (SLS) program to provide next-generation heavy lift cargo and crew access to space. A key constituent of the SLS architecture is the RS-25 engine, also known as the Space Shuttle Main Engine (SSME). The RS-25 was selected to serve as the main propulsion system for the SLS core stage in conjunction with the solid rocket boosters. This selection was largely based on the maturity and extensive experience gained through 135 missions, 3000+ ground tests, and over a million seconds total accumulated hot-fire time. In addition, there were also over a dozen functional flight assets remaining from the Space Shuttle program that could be leveraged to support the first four flights. However, while the RS-25 is a highly mature system, simply unbolting it from the Space Shuttle boat-tail and installing it on the new SLS vehicle is not a "plug-and-play" operation. In addition to numerous technical integration details involving changes to significant areas such as the environments, interface conditions, technical performance requirements, operational constraints and so on, there were other challenges to be overcome in the area of replacing the obsolete engine control system (ECS). While the magnitude of accomplishing this effort was less than that needed to develop and field a new clean-sheet engine system, the path to the first flight of SLS has not been without unexpected challenges.

  10. GENETIC ARCHITECTURE AND ADAPTIVE SIGNIFICANCE OF THE SELFING SYNDROME IN CAPSELLA

    PubMed Central

    Slotte, Tanja; Hazzouri, Khaled M.; Stern, David; Andolfatto, Peter; Wright, Stephen I.

    2016-01-01

    The transition from outcrossing to predominant self-fertilization is one of the most common evolutionary transitions in flowering plants. This shift is often accompanied by a suite of changes in floral and reproductive characters termed the selfing syndrome. Here, we characterize the genetic architecture and evolutionary forces underlying evolution of the selfing syndrome in Capsella rubella following its recent divergence from the outcrossing ancestor C. grandiflora. We conduct genotyping by multiplexed shotgun sequencing and map floral and reproductive traits in a large (N = 550) F2 population. Our results suggest that in contrast to previous studies of the selfing syndrome, changes at a few loci, some with major effects, have shaped the evolution of the selfing syndrome in Capsella. The directionality of QTL effects, as well as population genetic patterns of polymorphism and divergence at 318 loci, is consistent with a history of directional selection on the selfing syndrome. Our study is an important step toward characterizing the genetic basis and evolutionary forces underlying the evolution of the selfing syndrome in a genetically accessible model system. PMID:22519777

  11. A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2004-01-01

This paper presents a new physically-based 3D facial model founded on anatomical knowledge, which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied on the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adjusts the local resolution, depending on local deformation, wherever potential inaccuracies are detected. The method, in effect, ensures the required speedup by concentrating computational time only where needed while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.

  12. Microprocessor implementation of an FFT for ionospheric VLF observations

    NASA Technical Reports Server (NTRS)

    Elvidge, J.; Kintner, P.; Holzworth, R.

    1984-01-01

A fast Fourier transform algorithm is implemented on a CMOS microprocessor for application to very low-frequency electric fields (less than 10 kHz) sensed on high-altitude scientific balloons. Two FFTs are calculated simultaneously by associating them with the conjugate-symmetric and conjugate-antisymmetric parts of a single complex transform. One goal of the system was to detect spectral signatures associated with fast time variations present in natural signals such as whistlers and chorus. Although a full evaluation of the system was not possible for operational reasons, a measure of the system's success has been defined and evaluated.
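The two-FFTs-at-once trick mentioned above is the classic packing of two real sequences into one complex FFT; a minimal reconstruction (not the flight code) looks like this:

```python
import numpy as np

def two_real_ffts(x, y):
    """Compute the FFTs of two real signals with one complex FFT.

    Pack z = x + j*y; the FFT of x is the conjugate-symmetric part of Z,
    and the FFT of y is the conjugate-antisymmetric part divided by j.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    Z = np.fft.fft(x + 1j * y)
    Zr = np.conj(np.roll(Z[::-1], 1))   # Zr[k] = conj(Z[(N - k) mod N])
    X = 0.5 * (Z + Zr)                  # conjugate-symmetric part
    Y = -0.5j * (Z - Zr)                # conjugate-antisymmetric part / j
    return X, Y

rng = np.random.default_rng(0)
x, y = rng.standard_normal(8), rng.standard_normal(8)
X, Y = two_real_ffts(x, y)
```

For an N-point transform this halves the FFT work for paired real channels, which is exactly why it suited a resource-limited CMOS microprocessor.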

  13. Implementation of FFT Algorithm using DSP TMS320F28335 for Shunt Active Power Filter

    NASA Astrophysics Data System (ADS)

    Patel, Pinkal Jashvantbhai; Patel, Rajesh M.; Patel, Vinod

    2016-07-01

    This work presents simulation, analysis and experimental verification of Fast Fourier Transform (FFT) algorithm for shunt active power filter based on three-level inverter. Different types of filters can be used for elimination of harmonics in the power system. In this work, FFT algorithm for reference current generation is discussed. FFT control algorithm is verified using PSIM simulation results with DLL block and C-code. Simulation results are compared with experimental results for FFT algorithm using DSP TMS320F28335 for shunt active power filter application.
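As a rough illustration of FFT-based reference current generation (a hedged sketch with a synthetic 50 Hz load current, not the paper's DSP code): the fundamental is isolated in the frequency domain, and the remainder, the harmonic content, becomes the compensation reference the filter must inject.

```python
import numpy as np

# Synthetic 50 Hz load current with 5th and 7th harmonics (my example).
# Sampling one full fundamental cycle makes bin 1 land exactly on 50 Hz.
fs, f1, n = 3200, 50.0, 64
t = np.arange(n) / fs
i_load = (10 * np.sin(2 * np.pi * f1 * t)
          + 2 * np.sin(2 * np.pi * 5 * f1 * t)
          + 1 * np.sin(2 * np.pi * 7 * f1 * t))

spectrum = np.fft.rfft(i_load)
fundamental_only = np.zeros_like(spectrum)
fundamental_only[1] = spectrum[1]      # keep only the 50 Hz bin
i_fund = np.fft.irfft(fundamental_only, n)
i_ref = i_load - i_fund                # harmonic compensation reference
```

In a real controller the window length and synchronization to the grid frequency matter; here the one-cycle window is chosen so the harmonics fall on exact FFT bins.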

  14. Human Behavior & Low Energy Architecture: Linking Environmental Adaptation, Personal Comfort, & Energy Use in the Built Environment

    NASA Astrophysics Data System (ADS)

    Langevin, Jared

    Truly sustainable buildings serve to enrich the daily sensory experience of their human inhabitants while consuming the least amount of energy possible; yet, building occupants and their environmentally adaptive behaviors remain a poorly characterized variable in even the most "green" building design and operation approaches. This deficiency has been linked to gaps between predicted and actual energy use, as well as to eventual problems with occupant discomfort, productivity losses, and health issues. Going forward, better tools are needed for considering the human-building interaction as a key part of energy efficiency strategies that promote good Indoor Environmental Quality (IEQ) in buildings. This dissertation presents the development and implementation of a Human and Building Interaction Toolkit (HABIT), a framework for the integrated simulation of office occupants' thermally adaptive behaviors, IEQ, and building energy use as part of sustainable building design and operation. Development of HABIT begins with an effort to devise more reliable methods for predicting individual occupants' thermal comfort, considered the driving force behind the behaviors of focus for this project. A long-term field study of thermal comfort and behavior is then presented, and the data it generates are used to develop and validate an agent-based behavior simulation model. Key aspects of the agent-based behavior model are described, and its predictive abilities are shown to compare favorably to those of multiple other behavior modeling options. Finally, the agent-based behavior model is linked with whole building energy simulation in EnergyPlus, forming the full HABIT program. The program is used to evaluate the energy and IEQ impacts of several occupant behavior scenarios in the simulation of a case study office building for the Philadelphia climate. Results indicate that more efficient local heating/cooling options may be paired with wider set point ranges to yield up to 24

  15. A neural learning approach for adaptive image restoration using a fuzzy model-based network architecture.

    PubMed

    Wong, H S; Guan, L

    2001-01-01

We address the problem of adaptive regularization in image restoration by adopting a neural-network learning approach. Instead of explicitly specifying the local regularization parameter values, we regard them as network weights which are then modified through the supply of appropriate training examples. The desired response of the network is a gray-level estimate of the current pixel produced by a weighted order statistic (WOS) filter. However, instead of replacing the previous value with this estimate, it is used to modify the network weights, or equivalently the regularization parameters, such that the restored gray-level value produced by the network is closer to this desired response. In this way, the single WOS estimation scheme allows appropriate parameter values to emerge under different noise conditions, rather than requiring their explicit selection on each occasion. In addition, we also consider the separate regularization of edges and textures due to their different noise-masking capabilities, which in turn requires discriminating between these two feature types. Because conventional local variance measures cannot distinguish these two high-variance features, we propose the new edge-texture characterization (ETC) measure, which performs this discrimination based on a scalar value only. This is then incorporated into a fuzzified form of the previous neural network, which determines the degree of membership of each high-variance pixel in two fuzzy sets, EDGE and TEXTURE, from the local ETC value, and then evaluates the appropriate regularization parameter by combining the two membership function values.

  16. a Local Adaptive Approach for Dense Stereo Matching in Architectural Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2013-02-01

In recent years, a demand for 3D models of various scales and precisions has been growing for a wide range of applications; among them, cultural heritage recording is a particularly important and challenging field. We outline an automatic 3D reconstruction pipeline, mainly focusing on dense stereo-matching which relies on a hierarchical, local optimization scheme. Our matching framework consists of a combination of robust cost measures, extracted via an intuitive cost aggregation support area and set within a coarse-to-fine strategy. The cost function is formulated by combining three individual costs: a cost computed on an extended census transformation of the images; the absolute difference cost, taking into account information from colour channels; and a cost based on the principal image derivatives. An efficient adaptive method of aggregating matching cost for each pixel is then applied, relying on linearly expanded cross skeleton support regions. Aggregated cost is smoothed via a 3D Gaussian function. Finally, a simple "winner-takes-all" approach extracts the disparity value with minimum cost. This keeps algorithmic complexity and system computational requirements acceptably low for high resolution images (or real-time applications), when compared to complex matching functions of global formulations. The stereo algorithm adopts a hierarchical scheme to accommodate high-resolution images and complex scenes. In a last step, a robust post-processing work-flow is applied to enhance the disparity map and, consequently, the geometric quality of the reconstructed scene. Successful results from our implementation, which combines pre-existing algorithms and novel considerations, are presented and evaluated on the Middlebury platform.
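The first of the three costs, a census transform matched by Hamming distance, can be sketched as follows (a minimal version assuming a plain 3x3 window and grayscale inputs; the paper uses an extended census transformation):

```python
import numpy as np

def census_3x3(img):
    """Census transform: an 8-bit signature per pixel, with one bit per
    3x3 neighbour set when that neighbour is darker than the centre."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            out |= (neigh < centre).astype(np.uint8) << bit
            bit += 1
    return out

def hamming_cost(census_left, census_right):
    """Per-pixel matching cost: Hamming distance between signatures."""
    diff = census_left ^ census_right
    return sum((diff >> k) & 1 for k in range(8))

left = (np.arange(49).reshape(7, 7) * 5 % 13).astype(np.uint8)
right = np.roll(left, 1, axis=1)           # stand-in for a shifted view
cost = hamming_cost(census_3x3(left), census_3x3(right))
```

Because the signature encodes only relative orderings, the cost is robust to radiometric differences between the stereo images, which is why census terms are popular in matching frameworks like this one.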

  17. Peeping into genomic architecture by re-sequencing of Ochrobactrum intermedium M86 strain during laboratory adapted conditions.

    PubMed

    Gohil, Kushal N; Neurgaonkar, Priya S; Paranjpe, Aditi; Dastager, Syed G; Dharne, Mahesh S

    2016-06-01

Advances in de novo sequencing technologies allow deeper insight into microbial genomes and the restructuring events that occur during their evolution inside and outside the host. Bacterial species belonging to the genus Ochrobactrum are being reported as emerging, opportunistic pathogens in this technology-driven era, probably due to insertion and deletion of genes. Ochrobactrum intermedium M86 was isolated in 2005 from a case of non-ulcer dyspepsia in a human stomach, and its first draft genome sequence followed in 2009. Here we report re-sequencing of the laboratory-adapted O. intermedium M86 strain in terms of gain and loss of genes. We also attempted a finer-scale genome sequence with ten times the genome coverage of the earlier one, followed by a comparative evaluation on the Ion PGM and Illumina MiSeq platforms. Despite similarities at the genomic level, the lab-adapted strain mainly lacked genes encoding transposase proteins, insertion-element families, and phage tail proteins, which were not detected on either chromosome. Interestingly, a 5 kb indel absent in the original strain was detected in chromosome 2; it mapped to a phage integrase gene of Rhizobium spp. and may have been acquired and integrated through horizontal gene transfer, indicating the gene loss and gene gain phenomenon in this genus. The majority of indel fragments did not match known genes, indicating that further bioinformatic dissection of these fragments is needed. Additionally, we report genes related to antibiotic resistance and heavy-metal tolerance in both the earlier and the re-sequenced strain. Although SNPs were detected, they did not span the urease and flagellar genes. We also conclude that third-generation sequencing technologies, given their larger coverage, may be useful for understanding genomic architecture and gene rearrangement and for tracing evolutionary aspects in microbial systems. PMID:27222803

  18. Adaptive line enhancers for fast acquisition

    NASA Technical Reports Server (NTRS)

    Yeh, H.-G.; Nguyen, T. M.

    1994-01-01

    Three adaptive line enhancer (ALE) algorithms and architectures - namely, conventional ALE, ALE with double filtering, and ALE with coherent accumulation - are investigated for fast carrier acquisition in the time domain. The advantages of these algorithms are their simplicity, flexibility, robustness, and applicability to general situations including the Earth-to-space uplink carrier acquisition and tracking of the spacecraft. In the acquisition mode, these algorithms act as bandpass filters; hence, the carrier-to-noise ratio (CNR) is improved for fast acquisition. In the tracking mode, these algorithms simply act as lowpass filters to improve signal-to-noise ratio; hence, better tracking performance is obtained. It is not necessary to have a priori knowledge of the received signal parameters, such as CNR, Doppler, and carrier sweeping rate. The implementation of these algorithms is in the time domain (as opposed to the frequency domain, such as the fast Fourier transform (FFT)). The carrier frequency estimation can be updated in real time at each time sample (as opposed to the batch processing of the FFT). The carrier frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored.
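A conventional ALE is essentially an LMS adaptive predictor fed a delayed copy of its own input: the narrowband carrier survives the decorrelation delay while broadband noise does not. A minimal sketch, assuming a synthetic tone in white noise rather than real uplink data:

```python
import numpy as np

def ale_lms(x, n_taps=32, delay=1, mu=0.002):
    """Conventional adaptive line enhancer: an LMS filter predicts the
    current sample from a delayed tap vector, so correlated narrowband
    content passes while broadband noise is rejected."""
    w = np.zeros(n_taps)
    y = np.zeros_like(x)
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]   # delayed taps
        y[n] = w @ u                                # enhanced output
        e = x[n] - y[n]                             # prediction error
        w += 2 * mu * e * u                         # LMS weight update
    return y

rng = np.random.default_rng(1)
n = np.arange(4000)
tone = np.sin(2 * np.pi * 0.05 * n)                 # the "carrier"
noisy = tone + rng.standard_normal(len(n))          # buried in noise
enhanced = ale_lms(noisy)
```

The tap count, delay, and step size here are illustrative; as the abstract notes, the appeal of the ALE is that no prior knowledge of CNR, Doppler, or sweep rate is needed for the filter to lock onto the line.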

  19. A Method for Finding Unknown Signals Using Reinforcement FFT Differencing

    SciTech Connect

    Charles R. Tolle; John W. James

    2009-12-01

This note addresses a simple yet powerful method of discovering the spectral character of an unknown but intermittent signal buried in a background made up of a distribution of other signals. All the method requires is knowledge of when the unknown signal is present, together with samples of the combined signal both with and without it. The method is based on reinforcing Fast Fourier Transform (FFT) power spectra when the signal of interest occurs and subtracting spectra when it does not. Several examples are presented. This method could be used to discover spectral components of unknown chemical species within spectral analysis instruments such as mass spectrometry, Fourier Transform Infrared Spectroscopy (FTIR), and gas chromatography. In addition, this method can be used to isolate device loading signatures on power transmission lines.
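The reinforce-and-subtract accumulation described above can be sketched directly (a toy example in which a synthetic tone stands in for the unknown signal):

```python
import numpy as np

def reinforced_spectrum(segments, present, n_fft=256):
    """Accumulate FFT power spectra: add when the unknown signal is
    present, subtract when it is absent; the shared background cancels
    on average while the unknown signal's spectrum is reinforced."""
    acc = np.zeros(n_fft // 2 + 1)
    for seg, flag in zip(segments, present):
        p = np.abs(np.fft.rfft(seg, n_fft)) ** 2
        acc += p if flag else -p
    return acc

rng = np.random.default_rng(2)
n_fft, n_seg = 256, 200
t = np.arange(n_fft)
segments, present = [], []
for i in range(n_seg):
    background = rng.standard_normal(n_fft)        # broadband clutter
    flag = (i % 2 == 0)                            # known on/off times
    unknown = 0.5 * np.sin(2 * np.pi * 0.25 * t) if flag else 0.0
    segments.append(background + unknown)
    present.append(flag)

spec = reinforced_spectrum(segments, present)
peak_bin = int(np.argmax(spec))                    # 0.25 cyc/sample -> bin 64
```

With enough segments, the differenced spectrum peaks at the unknown signal's frequency even when each individual segment is noise-dominated.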

  20. Simulation of multicorrelated random processes using the FFT algorithm

    NASA Technical Reports Server (NTRS)

    Wittig, L. E.; Sinha, A. K.

    1975-01-01

    A technique for the digital simulation of multicorrelated Gaussian random processes is described. This technique is based upon generating discrete frequency functions which correspond to the Fourier transform of the desired random processes, and then using the fast Fourier transform (FFT) algorithm to obtain the actual random processes. The main advantage of this method of simulation over other methods is computation time; it appears to be more than an order of magnitude faster than present methods of simulation. One of the main uses of multicorrelated simulated random processes is in solving nonlinear random vibration problems by numerical integration of the governing differential equations. The response of a nonlinear string to a distributed noise input is presented as an example.
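For a single process, the technique reduces to giving each frequency bin a complex Gaussian amplitude shaped by the target spectrum and inverse-FFTing; a hedged single-process sketch follows (the paper's multicorrelated case additionally couples the bin amplitudes across processes):

```python
import numpy as np

def simulate_gaussian_process(psd, n, rng):
    """Draw one realization of a real, zero-mean Gaussian process by
    assigning each frequency bin a complex Gaussian amplitude whose
    power follows the one-sided PSD `psd` (length n//2 + 1), then
    inverse-FFTing. Normalization here targets unit variance for a
    flat unit PSD (my convention, not the paper's)."""
    amp = np.sqrt(psd / 2) * (rng.standard_normal(len(psd))
                              + 1j * rng.standard_normal(len(psd)))
    amp[0] = 0.0                       # zero mean: no DC component
    return np.fft.irfft(amp, n) * np.sqrt(n)

rng = np.random.default_rng(3)
n = 1 << 14
psd = np.ones(n // 2 + 1)              # flat PSD -> a white realization
x = simulate_gaussian_process(psd, n, rng)
```

The speed advantage the abstract cites comes from this structure: one inverse FFT replaces sample-by-sample filtered-noise generation.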

  1. ECG compression: evaluation of FFT, DCT, and WT performance.

    PubMed

    GholamHosseini, H; Nazeran, H; Moran, B

    1998-12-01

This work investigates a set of ECG data compression schemes to compare their performance in compressing and preparing ECG signals for automatic cardiac arrhythmia classification. These schemes are based on transform methods such as the fast Fourier transform (FFT), discrete cosine transform (DCT), wavelet transform (WT), and their combinations. Each transform is applied to a pre-selected data segment from the MIT-BIH database, and compression is then performed in the new domain. These transformation methods constitute an important class of ECG compression techniques. The WT was shown to be the most efficient method and the best candidate for further improvement. A compression ratio of 7.98 to 1 was achieved with a percent root-mean-square difference (PRD) of 0.25%, indicating that the wavelet compression technique offers the best performance among the evaluated methods.
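The PRD figure of merit quoted above, together with a toy keep-the-largest-coefficients transform coder, can be sketched as follows (the real FFT stands in for the transforms; the coder and test signal are my illustration, not the paper's scheme):

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between an original signal
    and its reconstruction after lossy compression."""
    original = np.asarray(original, float)
    reconstructed = np.asarray(reconstructed, float)
    return 100 * np.sqrt(np.sum((original - reconstructed) ** 2)
                         / np.sum(original ** 2))

def transform_compress(x, keep):
    """Toy transform coder: keep the `keep` largest-magnitude transform
    coefficients (here via the real FFT) and zero the rest."""
    c = np.fft.rfft(x)
    smallest = np.argsort(np.abs(c))[:-keep]   # all but the largest
    c[smallest] = 0
    return np.fft.irfft(c, len(x))

t = np.linspace(0, 1, 512, endpoint=False)
ecg_like = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)
rec = transform_compress(ecg_like, keep=8)
error = prd(ecg_like, rec)
```

For a real evaluation the coefficients would also be quantized and entropy-coded; the PRD then trades off directly against the achieved compression ratio.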

  2. Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems

    NASA Astrophysics Data System (ADS)

    Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao

    2016-02-01

A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated; it does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples, and the sign of the FO is determined by an FFT-based interpolated discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on the argument-FFT, the proposed one has low complexity and can still work effectively with relatively few samples.
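For orientation, the standard fourth-power FFT estimator for QPSK, a common baseline but not the paper's argument-FFT algorithm, can be sketched: raising the signal to the fourth power strips the QPSK phase modulation and leaves a spectral line at four times the frequency offset.

```python
import numpy as np

# Fourth-power FFT frequency-offset estimation for QPSK (baseline
# method; NOT the argument-FFT algorithm of the paper above).
rng = np.random.default_rng(4)
n, f_off = 4096, 0.01                     # offset in cycles/sample
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
t = np.arange(n)
rx = symbols * np.exp(2j * np.pi * f_off * t)
rx += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# rx**4 maps every QPSK symbol to the same phase, so only the tone at
# 4*f_off (plus noise) remains in the spectrum.
spectrum = np.abs(np.fft.fft(rx ** 4))
freqs = np.fft.fftfreq(n)
f_est = freqs[int(np.argmax(spectrum))] / 4
```

The estimator's resolution is limited to 1/(4N) cycles/sample, which is one motivation for the interpolated-DFT refinements the paper combines with the argument-FFT stage.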

  3. An 8×8/4×4 Adaptive Hadamard Transform Based FME VLSI Architecture for 4K×2K H.264/AVC Encoder

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Liu, Jialiang; Zhang, Dexue; Zeng, Xiaoyang; Chen, Xinhua

Fidelity Range Extension (FRExt) (i.e., High Profile) was added to the H.264/AVC recommendation in its second version. One of the features included in FRExt is the Adaptive Block-size Transform (ABT). In order to conform to FRExt, a Fractional Motion Estimation (FME) architecture is proposed to support the 8×8/4×4 adaptive Hadamard Transform (8×8/4×4 AHT). The 8×8/4×4 AHT circuit contributes to higher throughput and encoding performance. In order to increase the utilization of the SATD (Sum of Absolute Transformed Differences) Generator (SG) per unit time, the proposed architecture employs two 8-pel interpolators (IPs) to time-share one SG. The two IPs work in turn to feed data continuously to the SG, which increases the data throughput and significantly reduces the cycles needed to process one macroblock. Furthermore, the architecture exploits the linearity of the Hadamard Transform to generate the quarter-pel SATD. This method helps to shorten the long datapath in the second step of the two-iteration FME algorithm. Finally, experimental results show that this architecture can serve applications with different performance requirements by adjusting the supported modes and operating frequency. It can support real-time encoding of seven-mode 4K×2K@24fps or six-mode 4K×2K@30fps video sequences.
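The SATD cost at the heart of this architecture can be sketched in a few lines (a minimal 4×4 software reference model, not the paper's VLSI datapath):

```python
import numpy as np

# 4x4 Hadamard matrix used to transform the prediction residual.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def satd4x4(block_a, block_b):
    """SATD: Hadamard-transform the residual between two 4x4 blocks
    (separably, rows then columns) and sum the absolute coefficients."""
    residual = np.asarray(block_a, int) - np.asarray(block_b, int)
    coeffs = H4 @ residual @ H4
    return int(np.abs(coeffs).sum())

cur = np.arange(16).reshape(4, 4)      # toy "current" block
ref = np.zeros((4, 4), dtype=int)      # toy interpolated reference
cost = satd4x4(cur, ref)
```

Because the transform is linear, the SATD of a combined residual can be built from transformed parts, which is the property the architecture exploits to derive quarter-pel SATD without a full second transform pass.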

  4. The TurboLAN project. Phase 1: Protocol choices for high speed local area networks. Phase 2: TurboLAN Intelligent Network Adapter Card, (TINAC) architecture

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The hardware and the software architecture of the TurboLAN Intelligent Network Adapter Card (TINAC) are described. A high level as well as detailed treatment of the workings of various components of the TINAC are presented. The TINAC is divided into the following four major functional units: (1) the network access unit (NAU); (2) the buffer management unit; (3) the host interface unit; and (4) the node processor unit.

  5. Custom instruction for NIOS II processor FFT implementation for image processing

    NASA Astrophysics Data System (ADS)

    Sundararajana, Sindhuja; Meyer-Baese, Uwe; Botella, Guillermo

    2016-05-01

Image processing can be considered signal processing in two dimensions (2D). Filtering is one of the basic image processing operations. Filtering in the frequency domain is computationally faster than the corresponding spatial-domain operation, since the costly convolution becomes a multiplication in the frequency domain. The popular 2D transforms used in image processing are the Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT), and Discrete Wavelet Transform (DWT). Common image resolutions are 640x480, 800x600, 1024x768, and 1280x1024. As can be seen, image dimensions are generally not powers of 2, so power-of-2 FFT lengths do not apply and the required transforms cannot be built from standard power-of-2 Discrete Fourier Transform (DFT) blocks. Prime-factor FFT algorithms such as the Good-Thomas FFT algorithm simplify the implementation logic required for such applications; they can therefore be implemented with low area and power consumption while still meeting timing constraints, thereby operating at high frequency. The Good-Thomas FFT algorithm, a Prime Factor FFT Algorithm (PFA), provides a means of computing the DFT with the fewest multiplication and addition operations. We provide an Altera FPGA based NIOS II custom-instruction implementation of the Good-Thomas FFT algorithm to improve system performance, together with a comparison against the same algorithm implemented entirely in software.
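The Good-Thomas decomposition itself can be sketched compactly (a software reference model, not the NIOS II custom instruction): for coprime factors, a Chinese-remainder input map and a Ruritanian output map turn the length-N DFT into a twiddle-factor-free 2D DFT.

```python
import numpy as np

def good_thomas_dft(x, n1, n2):
    """Prime Factor (Good-Thomas) DFT of length n = n1*n2, gcd(n1,n2)=1.

    The CRT input map and Ruritanian output map convert the length-n
    DFT into an (n1 x n2) 2-D DFT with no twiddle factors between the
    two stages."""
    n = n1 * n2
    assert len(x) == n and np.gcd(n1, n2) == 1
    inv_n2 = pow(n2, -1, n1)             # n2^-1 mod n1
    inv_n1 = pow(n1, -1, n2)             # n1^-1 mod n2
    # CRT input mapping: grid[i, j] takes the sample whose index is
    # congruent to i mod n1 and j mod n2.
    grid = np.empty((n1, n2), dtype=complex)
    for i in range(n1):
        for j in range(n2):
            grid[i, j] = x[(i * n2 * inv_n2 + j * n1 * inv_n1) % n]
    # Row and column DFTs -- no twiddle multiplications in between.
    grid = np.fft.fft(grid, axis=0)
    grid = np.fft.fft(grid, axis=1)
    # Ruritanian output mapping.
    X = np.empty(n, dtype=complex)
    for k1 in range(n1):
        for k2 in range(n2):
            X[(n2 * k1 + n1 * k2) % n] = grid[k1, k2]
    return X

rng = np.random.default_rng(5)
x = rng.standard_normal(12) + 1j * rng.standard_normal(12)
X = good_thomas_dft(x, 3, 4)             # 12 = 3 * 4, coprime factors
```

Eliminating the inter-stage twiddle multiplications is precisely what reduces the multiplier count, and hence area and power, in a hardware PFA datapath.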

  6. A finite element conjugate gradient FFT method for scattering

    NASA Technical Reports Server (NTRS)

    Collins, Jeffery D.; Zapp, John; Hsa, Chang-Yu; Volakis, John L.

    1990-01-01

An extension of a two-dimensional formulation is presented for a three-dimensional body of revolution. With the introduction of a Fourier expansion of the vector electric and magnetic fields, a coupled two-dimensional system is generated and solved via the finite element method. An exact boundary condition is employed to terminate the mesh, and the fast Fourier transform (FFT) is used to evaluate the boundary integrals for low O(n) memory demand when an iterative solution algorithm is used. By virtue of the finite element method, the algorithm is applicable to structures of arbitrary material composition. Several improvements to the two-dimensional algorithm are also described. These include: (1) modifications for terminating the mesh at circular boundaries without distorting the convolutionality of the boundary integrals; (2) the development of nonproprietary mesh generation routines for two-dimensional applications; (3) the development of preprocessors for interfacing SDRC IDEAS with the main algorithm; and (4) the development of post-processing algorithms based on the public domain package GRAFIC to generate two- and three-dimensional gray-level and color field maps.

  7. A biologically based model for the integration of sensory-motor contingencies in rules and plans: a prefrontal cortex based extension of the Distributed Adaptive Control architecture.

    PubMed

    Duff, Armin; Fibla, Marti Sanchez; Verschure, Paul F M J

    2011-06-30

    Intelligence depends on the ability of the brain to acquire and apply rules and representations. At the neuronal level these properties have been shown to critically depend on the prefrontal cortex. Here we present, in the context of the Distributed Adaptive Control architecture (DAC), a biologically based model for flexible control and planning based on key physiological properties of the prefrontal cortex, i.e. reward modulated sustained activity and plasticity of lateral connectivity. We test the model in a series of pertinent tasks, including multiple T-mazes and the Tower of London that are standard experimental tasks to assess flexible control and planning. We show that the model is both able to acquire and express rules that capture the properties of the task and to quickly adapt to changes. Further, we demonstrate that this biomimetic self-contained cognitive architecture generalizes to planning. In addition, we analyze the extended DAC architecture, called DAC 6, as a model that can be applied for the creation of intelligent and psychologically believable synthetic agents. PMID:21138760

  8. On the application of pseudo-spectral FFT technique to non-periodic problems

    NASA Technical Reports Server (NTRS)

    Biringen, S.; Kao, K. H.

    1988-01-01

The reduction-to-periodicity method using the pseudo-spectral Fast Fourier Transform (FFT) technique is applied to the solution of nonperiodic problems, including the two-dimensional Navier-Stokes equations. The accuracy of the method is demonstrated by calculating derivatives of given functions and one- and two-dimensional convective-diffusive problems, and by comparing the relative errors due to the FFT method with second-order finite difference methods (FDM). Finally, the two-dimensional Navier-Stokes equations are solved by a fractional step procedure using both the FFT and the FDM methods for the driven cavity flow and the backward-facing step problems. Comparisons of these solutions provide a realistic assessment of the FFT method, indicating its range of applicability.
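The periodic building block the method rests on, pseudo-spectral differentiation by FFT, can be sketched as follows (the reduction-to-periodicity step for nonperiodic data is the paper's contribution and is not reproduced here):

```python
import numpy as np

def spectral_derivative(f, length):
    """d/dx of periodic samples f on a domain of the given length:
    differentiation is multiplication by i*k in Fourier space."""
    n = len(f)
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

n = 64
x = 2 * np.pi * np.arange(n) / n
df = spectral_derivative(np.sin(x), 2 * np.pi)        # should be cos(x)
```

For smooth periodic data this is accurate to machine precision, which is the "spectral accuracy" that makes the FFT approach competitive with finite differences once periodicity is restored.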

9. STS-33 EVA Prep and Post with Gregory, Blaha, Carter, Thornton, and Musgrave in FFT

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This video shows the crew in the airlock of the FFT, talking with technicians about the extravehicular activity (EVA) equipment. Thornton and Carter put on EVA suits and enter the airlock as the other crew members help with checklists.

  10. Design and Performance of Overlap FFT Filter-Bank for Dynamic Spectrum Access Applications

    NASA Astrophysics Data System (ADS)

    Tanabe, Motohiro; Umehira, Masahiro

    An OFDMA-based (Orthogonal Frequency Division Multiple Access-based) channel access scheme for dynamic spectrum access has the drawbacks of large PAPR (Peak to Average Power Ratio) and large ACI (Adjacent Channel Interference). To solve these problems, a flexible channel access scheme using an overlap FFT filter-bank was proposed based on single carrier modulation for dynamic spectrum access. In order to apply the overlap FFT filter-bank for dynamic spectrum access, it is necessary to clarify the performance of the overlap FFT filter-bank according to the design parameters since its frequency characteristics are critical for dynamic spectrum access applications. This paper analyzes the overlap FFT filter-bank and evaluates its performance such as frequency characteristics and ACI performance according to the design parameters.

  11. Group 12, 1987 ASCAN C. Michael Foale sits at the pilots station in JSC's FFT

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Group 12, 1987 Astronaut Candidate (ASCAN) C. Michael Foale sits at the forward flight deck pilots station controls in JSC's Full Fuselage Trainer (FFT). The FFT is used to familiarize the astronauts with the hardware in the cockpit of the Space Shuttle orbiters. It is one of the mockup training devices located in the Mockup and Integration Laboratory (MAIL) Bldg 9NE. Foale is one of 15 ASCANs recently selected by NASA.

  12. 1-FFT amino acids involved in high DP inulin accumulation in Viguiera discolor

    PubMed Central

    De Sadeleer, Emerik; Vergauwen, Rudy; Struyf, Tom; Le Roy, Katrien; Van den Ende, Wim

    2015-01-01

    Fructans are important vacuolar reserve carbohydrates with drought, cold, ROS and general abiotic stress mediating properties. They occur in 15% of all flowering plants and are believed to display health benefits as a prebiotic and dietary fiber. Fructans are synthesized by specific fructosyltransferases and classified based on the linkage type between fructosyl units. Inulins, one of these fructan types with β(2-1) linkages, are elongated by fructan:fructan 1-fructosyltransferases (1-FFT) using a fructosyl unit from a donor inulin to elongate the acceptor inulin molecule. The sequence identity of the 1-FFT of Viguiera discolor (Vd) and Helianthus tuberosus (Ht) is 91% although these enzymes produce distinct fructans. The Vd 1-FFT produces high degree of polymerization (DP) inulins by preferring the elongation of long chain inulins, in contrast to the Ht 1-FFT which prefers small molecules (DP3 or 4) as acceptor. Since higher DP inulins have interesting properties for industrial, food and medical applications, we report here on the influence of two amino acids on the high DP inulin production capacity of the Vd 1-FFT. Introducing the M19F and H308T mutations in the active site of the Vd 1-FFT greatly reduces its capacity to produce high DP inulin molecules. Both amino acids can be considered important to this capacity, although the double mutation had a much higher impact than the single mutations. PMID:26322058

  13. ACTIVE-EYES: an adaptive pixel-by-pixel image-segmentation sensor architecture for high-dynamic-range hyperspectral imaging.

    PubMed

    Christensen, Marc P; Euliss, Gary W; McFadden, Michael J; Coyle, Kevin M; Milojkovic, Predrag; Haney, Michael W; van der Gracht, Joseph; Athale, Ravindra A

    2002-10-10

    The ACTIVE-EYES (adaptive control for thermal imagers via electro-optic elements to yield an enhanced sensor) architecture, an adaptive image-segmentation and processing architecture, based on digital micromirror (DMD) array technology, is described. The concept provides efficient front-end processing of multispectral image data by adaptively segmenting and routing portions of the scene data concurrently to an imager and a spectrometer. The goal is to provide a large reduction in the amount of data required to be sensed in a multispectral imager by means of preprocessing the data to extract the most useful spatial and spectral information during detection. The DMD array provides the flexibility to perform a wide range of spatial and spectral analyses on the scene data. The spatial and spectral processing for different portions of the input scene can be tailored in real time to achieve a variety of preprocessing functions. Since the detected intensity of individual pixels may be controlled, the spatial image can be analyzed with gain varied on a pixel-by-pixel basis to enhance dynamic range. Coarse or fine spectral resolution can be achieved in the spectrometer by use of dynamically controllable or addressable dispersion elements. An experimental prototype, which demonstrated the segmentation between an imager and a grating spectrometer, was demonstrated and shown to achieve programmable pixelated intensity control. An information theoretic analysis of the dynamic-range control aspect was conducted to predict the performance enhancements that might be achieved with this architecture. The results indicate that, with a properly configured algorithm, the concept achieves the greatest relative information recovery from a detected image when the scene is made up of a relatively large area of moderate-dynamic-range pixels and a relatively smaller area of strong pixels that would tend to saturate a conventional sensor. PMID:12389978

  14. 2D-FFT implementation on FPGA for wavefront phase recovery from the CAFADIS camera

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, J. M.; Magdaleno Castelló, E.; Domínguez Conde, C.; Rodríguez Valido, M.; Marichal-Hernández, J. G.

    2008-07-01

    The CAFADIS camera is a new sensor patented by Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It measures the wavefront phase and the distance to the light source simultaneously, in real time. It uses specialized hardware: Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Both kinds of hardware offer architectures capable of handling the sensor output stream in a massively parallel fashion. FPGAs are faster than GPUs, which is why FPGA integer arithmetic is worth using instead of GPU floating-point arithmetic. GPUs should not be dismissed, however: as we have shown in previous papers, they are efficient enough to solve several adaptive optics (AO) problems for Extremely Large Telescopes (ELTs) within the required processing time, and they show a widening gap in computing speed relative to CPUs. They are far more powerful for AO simulation than common software packages running on CPUs. This paper presents an FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera, in two steps: estimation of the telescope pupil gradients from the telescope focus image, followed by a novel 2D-FFT on the FPGA. Processing times are compared with our GPU implementation. In effect, this is a comparison between the two kinds of arithmetic mentioned above, helping to assess the viability of FPGAs for AO in the ELTs.

  15. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  16. Fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave and free-space-optics architecture with an adaptive diversity combining technique.

    PubMed

    Zhang, Junwen; Wang, Jing; Xu, Yuming; Xu, Mu; Lu, Feng; Cheng, Lin; Yu, Jianjun; Chang, Gee-Kung

    2016-05-01

    We propose and experimentally demonstrate a novel fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave (MMW) and free-space-optics (FSO) architecture using an adaptive combining technique. Both 60 GHz MMW and FSO links are demonstrated and fully integrated with optical fibers in a scalable and cost-effective backhaul system setup. Joint signal processing with an adaptive diversity combining technique (ADCT) is utilized at the receiver side based on a maximum ratio combining algorithm. Mobile backhaul transportation of 4-Gb/s 16 quadrature amplitude modulation frequency-division multiplexing (QAM-OFDM) data is experimentally demonstrated and tested under various weather conditions synthesized in the lab. Performance improvement in terms of reduced error vector magnitude (EVM) and enhanced link reliability are validated under fog, rain, and turbulence conditions. PMID:27128036

  18. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  19. An analysis of the double-precision floating-point FFT on FPGAs.

    SciTech Connect

    Hemmert, K. Scott; Underwood, Keith Douglas

    2005-01-01

    Advances in FPGA technology have led to dramatic improvements in double precision floating-point performance. Modern FPGAs boast several GigaFLOPs of raw computing power. Unfortunately, this computing power is distributed across 30 floating-point units with over 10 cycles of latency each. The user must find two orders of magnitude more parallelism than is typically exploited in a single microprocessor; thus, it is not clear that the computational power of FPGAs can be exploited across a wide range of algorithms. This paper explores three implementation alternatives for the fast Fourier transform (FFT) on FPGAs. The algorithms are compared in terms of sustained performance and memory requirements for various FFT sizes and FPGA sizes. The results indicate that FPGAs are competitive with microprocessors in terms of performance and that the 'correct' FFT implementation varies based on the size of the transform and the size of the FPGA.

  20. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
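
    The core idea above, replacing a multiplicative window before the FFT with a polyphase filter (weighted overlap-add), can be sketched in a few lines. This is a generic illustration, not the SETI flight design; the prototype filter, channel count, and tap count are illustrative assumptions:

```python
import numpy as np

def polyphase_spectrum(x, nchan=64, ntaps=4):
    """One polyphase-FFT output spectrum: weight nchan*ntaps input samples
    with a prototype low-pass filter, fold them into nchan polyphase
    branches, then take a single nchan-point FFT."""
    L = nchan * ntaps
    n = np.arange(L)
    # prototype low-pass: windowed sinc with cutoff near one channel width
    proto = np.sinc((n - (L - 1) / 2) / nchan) * np.hanning(L)
    seg = x[:L] * proto                             # weight L samples
    folded = seg.reshape(ntaps, nchan).sum(axis=0)  # polyphase fold
    return np.fft.fft(folded)
```

    Compared with a plain windowed nchan-point FFT, the longer prototype filter sharpens each channel's frequency response, which is the processing-loss reduction the abstract describes.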

  1. Pipelined digital SAR azimuth correlator using hybrid FFT-transversal filter

    NASA Technical Reports Server (NTRS)

    Wu, C.; Liu, K. Y. (Inventor)

    1984-01-01

    A synthetic aperture radar system (SAR) having a range correlator is provided with a hybrid azimuth correlator which utilizes a block-pipelined fast Fourier transform (FFT). The correlator has a predetermined FFT transform size with delay elements for delaying SAR range correlated data so as to embed in the Fourier transform operation a corner-turning function as the range correlated SAR data is converted from the time domain to a frequency domain. The azimuth correlator is comprised of a transversal filter to receive the SAR data in the frequency domain, a generator for range migration compensation and azimuth reference functions, and an azimuth reference multiplier for correlation of the SAR data. Following the transversal filter is a block-pipelined inverse FFT used to restore azimuth correlated data in the frequency domain to the time domain for imaging.
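
    The correlator's core operation, correlating data against a reference function via forward and inverse FFTs, is the textbook frequency-domain matched filter. A minimal sketch (generic, not the patented pipeline; names are illustrative):

```python
import numpy as np

def fft_correlate(signal, ref):
    """Circular cross-correlation via FFT: multiply the signal spectrum by
    the conjugate reference spectrum, then inverse-transform. The peak of
    the output magnitude marks the reference's position in the signal."""
    n = len(signal)
    S = np.fft.fft(signal)
    R = np.fft.fft(ref, n)        # zero-pad reference to the signal length
    return np.fft.ifft(S * np.conj(R))
```

    In the SAR setting the reference is the azimuth chirp (with range-migration compensation applied in the frequency domain, per the abstract), and the correlation peak localizes each scatterer.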

  2. A Novel Dynamic Channel Access Scheme Using Overlap FFT Filter-Bank for Cognitive Radio

    NASA Astrophysics Data System (ADS)

    Tanabe, Motohiro; Umehira, Masahiro; Ishihara, Koichi; Takatori, Yasushi

    An OFDMA-based channel access scheme is proposed for dynamic spectrum access to utilize frequency spectrum efficiently. Though the OFDMA-based scheme is flexible enough to change the bandwidth and channel of the transmitted signals, the OFDMA signal has a large PAPR (Peak to Average Power Ratio). In addition, if the OFDMA receiver does not use a filter to extract sub-carriers before FFT (Fast Fourier Transform) processing, the designated sub-carriers suffer large interference from the adjacent channel signals in the FFT processing on the receiving side. To solve the PAPR and adjacent-channel-interference problems encountered in the OFDMA-based scheme, this paper proposes a novel dynamic channel access scheme using an overlap FFT filter-bank based on single-carrier modulation. It also shows performance evaluation results of the proposed scheme by computer simulation.

  3. Adaptive and Speculative Memory Consistency Support for Multi-core Architectures with On-Chip Local Memories

    NASA Astrophysics Data System (ADS)

    Vujic, Nikola; Alvarez, Lluc; Tallada, Marc Gonzalez; Martorell, Xavier; Ayguadé, Eduard

    Software caching has been shown to be a robust approach in multi-core systems with no hardware support for transparent data transfers between local and global memories. A software cache provides the user with a transparent view of the memory architecture and considerably improves the programmability of such systems. But this software approach can suffer from poor performance due to the considerable overheads of the software mechanisms that maintain memory consistency. This paper presents a set of alternatives to reduce their impact. A specific write-back mechanism is introduced based on some degree of speculation regarding the number of threads actually modifying the same cache lines. A case study based on the Cell BE processor is described. Performance evaluation indicates that improvements due to the optimized software-cache structures combined with the proposed code optimizations translate into 20% up to 40% speedup factors, compared to a traditional software cache approach.

  4. Do regional modifications in tissue mineral content and microscopic mineralization heterogeneity adapt trabecular bone tracts for habitual bending? Analysis in the context of trabecular architecture of deer calcanei.

    PubMed

    Skedros, John G; Knight, Alex N; Farnsworth, Ryan W; Bloebaum, Roy D

    2012-03-01

    Calcanei of mature mule deer have the largest mineral content (percent ash) difference between their dorsal 'compression' and plantar 'tension' cortices of any bone that has been studied. The opposing trabecular tracts, which are contiguous with the cortices, might also show important mineral content differences and microscopic mineralization heterogeneity (reflecting increased hemi-osteonal renewal) that optimize mechanical behaviors in tension vs. compression. Support for these hypotheses could reveal a largely unrecognized capacity for phenotypic plasticity - the adaptability of trabecular bone material as a means for differentially enhancing mechanical properties for local strain environments produced by habitual bending. Fifteen skeletally mature and 15 immature deer calcanei were cut transversely into two segments (40% and 50% shaft length), and cores were removed to determine mineral (ash) content from 'tension' and 'compression' trabecular tracts and their adjacent cortices. Seven bones/group were analyzed for differences between tracts in: first, microscopic trabecular bone packets and mineralization heterogeneity (backscattered electron imaging, BSE); and second, trabecular architecture (micro-computed tomography). Among the eight architectural characteristics evaluated [including bone volume fraction (BVF) and structural model index (SMI)]: first, only the 'tension' tract of immature bones showed significantly greater BVF and more negative SMI (i.e. increased honeycomb morphology) than the 'compression' tract of immature bones; and second, the 'compression' tracts of both groups showed significantly greater structural order/alignment than the corresponding 'tension' tracts. Although mineralization heterogeneity differed between the tracts in only the immature group, in both groups the mineral content derived from BSE images was significantly greater (P < 0.01), and bulk mineral (ash) content tended to be greater in the 'compression' tracts (immature 3

  5. The Fun30 chromatin remodeler Fft3 controls nuclear organization and chromatin structure of insulators and subtelomeres in fission yeast.

    PubMed

    Steglich, Babett; Strålfors, Annelie; Khorosjutina, Olga; Persson, Jenna; Smialowska, Agata; Javerzat, Jean-Paul; Ekwall, Karl

    2015-03-01

    In eukaryotic cells, local chromatin structure and chromatin organization in the nucleus both influence transcriptional regulation. At the local level, the Fun30 chromatin remodeler Fft3 is essential for maintaining proper chromatin structure at centromeres and subtelomeres in fission yeast. Using genome-wide mapping and live cell imaging, we show that this role is linked to controlling nuclear organization of its targets. In fft3∆ cells, subtelomeres lose their association with the LEM domain protein Man1 at the nuclear periphery and move to the interior of the nucleus. Furthermore, genes in these domains are upregulated and active chromatin marks increase. Fft3 is also enriched at retrotransposon-derived long terminal repeat (LTR) elements and at tRNA genes. In cells lacking Fft3, these sites lose their peripheral positioning and show reduced nucleosome occupancy. We propose that Fft3 has a global role in mediating association between specific chromatin domains and the nuclear envelope.

  6. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

    Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
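
    The FFT-filtering step described above amounts to a frequency split: low spatial frequencies (color) come from the multispectral intensity, high frequencies (detail) from the sharper panchromatic band. A minimal sketch of that split, with an illustrative cutoff and function names that are not from the paper:

```python
import numpy as np

def fft_lowpass(img, cutoff):
    """Keep spatial frequencies below `cutoff` (cycles/pixel)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    mask = np.add.outer(fy**2, fx**2) <= cutoff**2   # circular low-pass mask
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real

def fused_intensity(intensity, pan, cutoff=0.1):
    """Frequency-split fusion: low frequencies from the multispectral
    intensity, high frequencies from the panchromatic image."""
    return fft_lowpass(intensity, cutoff) + (pan - fft_lowpass(pan, cutoff))
```

    The fused intensity then replaces the original intensity in the inverse IHS transform; matching the low-frequency content to the multispectral input is what limits the color distortion the abstract discusses.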

  7. Push or Pull? The light-weight architecture of the Daphnia pulex carapace is adapted to withstand tension, not compression.

    PubMed

    Kruppert, Sebastian; Horstmann, Martin; Weiss, Linda C; Schaber, Clemens F; Gorb, Stanislav N; Tollrian, Ralph

    2016-10-01

    Daphnia (Crustacea, Cladocera) are well known for their ability to form morphological adaptations to defend against predators. In addition to spines and helmets, the carapace itself is a protective structure encapsulating the main body, but not the head. It is formed by a double layer of the integument interconnected by small pillars and hemolymphatic space in between. A second function of the carapace is respiration, which is performed through its proximal integument. The interconnecting pillars were previously described as providing higher mechanical stability against compressive forces. Following this hypothesis, we analyzed the carapace structure of D. pulex using histochemistry in combination with light and electron microscopy. We found the distal integument of the carapace to be significantly thicker than the proximal. The pillars appear fibrous with slim waists and broad, sometimes branched bases where they meet the integument layers. The fibrous structure and the slim-waisted shape of the pillars indicate a high capacity for withstanding tensile rather than compressive forces. In conclusion they are more ligaments than pillars. Therefore, we measured the hemolymphatic gauge pressure in D. longicephala and indeed found the hemocoel to have a pressure above ambient. Our results offer a new mechanistic explanation of the high rigidity of the daphniid carapace, which is probably the result of a light-weight construction consisting of two integuments bound together by ligaments and inflated by a hydrostatic hyper-pressure in the hemocoel. J. Morphol. 277:1320-1328, 2016. © 2016 Wiley Periodicals, Inc. PMID:27418246

  8. Perspectives in Magnetic Resonance: NMR in the Post-FFT Era

    PubMed Central

    Hyberts, Sven G.; Arthanari, Haribabu; Robson, Scott A.; Wagner, Gerhard

    2014-01-01

    Multi-dimensional NMR spectra have traditionally been processed with the fast Fourier transformation (FFT). The availability of high field instruments, the complexity of spectra of large proteins, the narrow signal dispersion of some unstructured proteins, and the time needed to record the necessary increments in the indirect dimensions to exploit the resolution of the highfield instruments make this traditional approach unsatisfactory. New procedures need to be developed beyond uniform sampling of the indirect dimensions and reconstruction methods other than the straight FFT are necessary. Here we discuss approaches of non-uniform sampling (NUS) and suitable reconstruction methods. We expect that such methods will become standard for multi-dimensional NMR data acquisition with complex biological macromolecules and will dramatically enhance the power of modern biological NMR. PMID:24656081

  9. Non-uniform MR image reconstruction based on non-uniform FFT

    NASA Astrophysics Data System (ADS)

    Liang, Xiao-yun; Zeng, Wei-ming; Dong, Zhi-hua; Zhang, Zhi-jiang; Luo, Li-min

    2007-01-01

    A Non-Uniform Fast Fourier Transform (NUFFT) based method for non-Cartesian k-space data reconstruction is presented. Cartesian k-space data can be reconstructed directly with a 2D FFT, but Cartesian sampling has inherent disadvantages, so we focus on non-Cartesian methods, which hold the advantage over it. The most straightforward approach for the reconstruction of non-Cartesian data is a direct Fourier summation. However, the computational complexity of the direct method is much greater than that of an approach using the efficient FFT. Since the FFT requires data sampled on a uniform Cartesian grid in k-space, a NUFFT-based method is of great importance. Finally, experimental results are given and compared with an existing method.
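
    The "direct Fourier summation" baseline the abstract contrasts with the NUFFT can be written in a few lines (1-D for brevity; function and variable names are illustrative). The NUFFT approximates this O(M·N) sum with gridding plus an FFT at near O(N log N) cost:

```python
import numpy as np

def direct_nonuniform_dft(kx, data, grid):
    """Direct Fourier summation: reconstruct samples at positions `grid`
    from non-uniform k-space samples `data` measured at locations `kx`.
    This is the slow O(M*N) baseline that the NUFFT approximates."""
    return np.array([np.sum(data * np.exp(2j * np.pi * kx * x)) for x in grid])
```

    As a sanity check, when the k-space locations happen to be uniform, the summation reduces to an (unnormalized) inverse DFT.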

  10. Perspectives in magnetic resonance: NMR in the post-FFT era

    NASA Astrophysics Data System (ADS)

    Hyberts, Sven G.; Arthanari, Haribabu; Robson, Scott A.; Wagner, Gerhard

    2014-04-01

    Multi-dimensional NMR spectra have traditionally been processed with the fast Fourier transformation (FFT). The availability of high field instruments, the complexity of spectra of large proteins, the narrow signal dispersion of some unstructured proteins, and the time needed to record the necessary increments in the indirect dimensions to exploit the resolution of the highfield instruments make this traditional approach unsatisfactory. New procedures need to be developed beyond uniform sampling of the indirect dimensions and reconstruction methods other than the straight FFT are necessary. Here we discuss approaches of non-uniform sampling (NUS) and suitable reconstruction methods. We expect that such methods will become standard for multi-dimensional NMR data acquisition with complex biological macromolecules and will dramatically enhance the power of modern biological NMR.

  11. Perspectives in magnetic resonance: NMR in the post-FFT era.

    PubMed

    Hyberts, Sven G; Arthanari, Haribabu; Robson, Scott A; Wagner, Gerhard

    2014-04-01

    Multi-dimensional NMR spectra have traditionally been processed with the fast Fourier transformation (FFT). The availability of high field instruments, the complexity of spectra of large proteins, the narrow signal dispersion of some unstructured proteins, and the time needed to record the necessary increments in the indirect dimensions to exploit the resolution of the highfield instruments make this traditional approach unsatisfactory. New procedures need to be developed beyond uniform sampling of the indirect dimensions and reconstruction methods other than the straight FFT are necessary. Here we discuss approaches of non-uniform sampling (NUS) and suitable reconstruction methods. We expect that such methods will become standard for multi-dimensional NMR data acquisition with complex biological macromolecules and will dramatically enhance the power of modern biological NMR. PMID:24656081

  12. Fitting FFT-derived spectra: Theory, tool, and application to solar radio spike decomposition

    SciTech Connect

    Nita, Gelu M.; Fleishman, Gregory D.; Gary, Dale E.; Marin, William; Boone, Kristine

    2014-07-10

    Spectra derived from fast Fourier transform (FFT) analysis of time-domain data intrinsically contain statistical fluctuations whose distribution depends on the number of accumulated spectra contributing to a measurement. The tail of this distribution, which is essential for separating the true signal from the statistical fluctuations, deviates noticeably from the normal distribution for a finite number of accumulations. In this paper, we develop a theory to properly account for the statistical fluctuations when fitting a model to a given accumulated spectrum. The method is implemented in software for the purpose of automatically fitting a large body of such FFT-derived spectra. We apply this tool to analyze a portion of a dense cluster of spikes recorded by our FASR Subsystem Testbed instrument during a record-breaking event that occurred on 2006 December 6. The outcome of this analysis is briefly discussed.
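
    The statistical setup the paper models can be reproduced numerically: each bin of a K-fold accumulated power spectrum of noise follows a Gamma-like distribution whose tail is heavier than a Gaussian fit would predict, which is why separating true signal from fluctuations needs the tail modeled explicitly. A sketch with illustrative parameters (not the FASR instrument's):

```python
import numpy as np

def accumulated_power_spectrum(nfft, naccum, rng):
    """Average `naccum` FFT power spectra of complex white noise. Each bin
    of the result is a mean of `naccum` exponential variates, i.e. Gamma
    distributed, approaching a Gaussian only as naccum grows large."""
    spectra = []
    for _ in range(naccum):
        seg = rng.standard_normal(nfft) + 1j * rng.standard_normal(nfft)
        spectra.append(np.abs(np.fft.fft(seg))**2 / nfft)
    return np.mean(spectra, axis=0)
```

    For unit-variance complex noise, each accumulated bin has mean 2 and standard deviation 2/sqrt(naccum), but the upper tail decays more slowly than a normal distribution with those moments.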

  13. Variation in photosynthetic performance and hydraulic architecture across European beech (Fagus sylvatica L.) populations supports the case for local adaptation to water stress.

    PubMed

    Aranda, Ismael; Cano, Francisco Javier; Gascó, Antonio; Cochard, Hervé; Nardini, Andrea; Mancha, Jose Antonio; López, Rosana; Sánchez-Gómez, David

    2015-01-01

    The aim of this study was to provide new insights into how intraspecific variability in the response of key functional traits to drought dictates the interplay between gas-exchange parameters and the hydraulic architecture of European beech (Fagus sylvatica L.). Considering the relationships between hydraulic and leaf functional traits, we tested whether local adaptation to water stress occurs in this species. To address these objectives, we conducted a glasshouse experiment in which 2-year-old saplings from six beech populations were subjected to different watering treatments. These populations encompassed central and marginal areas of the range, with variation in macro- and microclimatic water availability. The results highlight subtle but significant differences among populations in their functional response to drought. Interpopulation differences in hydraulic traits suggest that vulnerability to cavitation is higher in populations with higher sensitivity to drought. However, there was no clear relationship between variables related to hydraulic efficiency, such as xylem-specific hydraulic conductivity or stomatal conductance, and those that reflect resistance to xylem cavitation (i.e., Ψ(12), the water potential corresponding to a 12% loss of stem hydraulic conductivity). The results suggest that while a trade-off between photosynthetic capacity at the leaf level and hydraulic function of xylem could be established across populations, it functions independently of the compromise between safety and efficiency of the hydraulic system with regard to water use at the interpopulation level.

  14. Isotropic Spin Trap EPR Spectra Simulation by Fast Fourier Transform (FFT)

    NASA Astrophysics Data System (ADS)

    Laachir, S.; Moussetad, M.; Adhiri, R.; Fahli, A.

    2005-03-01

    The detection and investigation of free radicals forming in living systems became possible with the introduction of the spin-trap method. In this work, the electron spin resonance (ESR) spectra of DMPO/HO(.) and MGD-Fe-NO adducts are reproduced by simulation based on the Fast Fourier Transform (FFT). The calculated spectral parameters, such as the hyperfine coupling constants, agree reasonably well with the experimental data, and the results are discussed.

  15. GARCH modelling in association with FFT-ARIMA to forecast ozone episodes

    NASA Astrophysics Data System (ADS)

    Kumar, Ujjwal; De Ridder, Koen

    2010-11-01

    In operational forecasting of surface O3 by statistical modelling, it is customary to assume that the O3 time series is generated through a homoskedastic process. In the present work, we have taken the heteroskedasticity of the O3 time series explicitly into account and have shown how it results in O3 forecasts with improved forecast confidence intervals. Moreover, it also enabled us to make more accurate probability forecasts of ozone episodes in urban areas. The study has been conducted on daily maximum O3 time series for four urban sites of two major European cities, Brussels and London. The sites are: Brussels (Molenbeek) (B1), Brussels (PARL.EUROPE) (B2), London (Brent) (L1) and London (Bloomsbury) (L2). The Fast Fourier Transform (FFT) has been used to model the periodicities (the annual periodicity is especially distinct) exhibited by the time series. The residuals of the actual data minus their corresponding FFT component exhibited stationarity and have been modelled using an ARIMA (Autoregressive Integrated Moving Average) process. The MAPEs (mean absolute percentage errors) using FFT-ARIMA for one-day-ahead 100 out-of-sample forecasts were 20%, 17.8%, 19.7% and 23.6% at sites B1, B2, L1 and L2. The residuals obtained through FFT-ARIMA have been modelled using a GARCH (Generalized Autoregressive Conditional Heteroskedastic) process. The conditional standard deviations obtained using GARCH have been used to estimate the improved forecast confidence intervals and to make probability forecasts of ozone episodes. At sites B1, B2, L1 and L2, probability forecasts of ozone episodes (for one-day-ahead 30 out-of-sample) were made correctly 91.3%, 90%, 70.6% and 53.8% of the time using GARCH, as against 82.6%, 80%, 58.8% and 38.4% without GARCH. The incorporation of GARCH also significantly reduced the number of false alarms raised by the models.
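
    The FFT stage of the FFT-ARIMA model, extracting the dominant (e.g. annual) periodicity so that the residual is stationary, can be sketched as follows; the function name and the number of retained Fourier modes are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def fft_seasonal_component(y, ncomp=3):
    """Model periodicity by keeping the `ncomp` largest-amplitude Fourier
    modes (plus the mean). The residual y - component is the stationary
    series that would then be modelled with ARIMA, and its conditional
    variance with GARCH."""
    F = np.fft.rfft(y)
    keep = np.zeros_like(F)
    keep[0] = F[0]                                   # mean level
    idx = np.argsort(np.abs(F[1:]))[::-1][:ncomp] + 1  # dominant modes
    keep[idx] = F[idx]
    return np.fft.irfft(keep, len(y))
```
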

  16. Parallel implementation of 3D FFT with volumetric decomposition schemes for efficient molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Imamura, Toshiyuki; Sugita, Yuji

    2016-03-01

    Three-dimensional Fast Fourier Transform (3D FFT) plays an important role in a wide variety of computer simulations and data analyses, including molecular dynamics (MD) simulations. In this study, we develop hybrid (MPI+OpenMP) parallelization schemes of 3D FFT based on two new volumetric decompositions, mainly for the particle mesh Ewald (PME) calculation in MD simulations. In one scheme, (1d_Alltoall), five all-to-all communications in one dimension are carried out, and in the other, (2d_Alltoall), one two-dimensional all-to-all communication is combined with two all-to-all communications in one dimension. 2d_Alltoall is similar to the conventional volumetric decomposition scheme. We performed benchmark tests of 3D FFT for the systems with different grid sizes using a large number of processors on the K computer in RIKEN AICS. The two schemes show comparable performances, and are better than existing 3D FFTs. The performances of 1d_Alltoall and 2d_Alltoall depend on the supercomputer network system and number of processors in each dimension. There is enough leeway for users to optimize performance for their conditions. In the PME method, short-range real-space interactions as well as long-range reciprocal-space interactions are calculated. Our volumetric decomposition schemes are particularly useful when used in conjunction with the recently developed midpoint cell method for short-range interactions, due to the same decompositions of real and reciprocal spaces. The 1d_Alltoall scheme of 3D FFT takes 4.7 ms to simulate one MD cycle for a virus system containing more than 1 million atoms using 32,768 cores on the K computer.
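
    The factorization that volumetric decompositions exploit can be seen in a single-process numpy sketch: a 3D FFT is three batches of 1D FFTs, one axis at a time, and a parallel code inserts an all-to-all redistribution between the passes so that the next axis is locally contiguous (the two schemes above differ in how those redistributions are grouped).

```python
import numpy as np

# One-process sketch of the factorization behind volumetric decomposition:
# a 3D FFT is three batches of 1D FFTs, one axis at a time. In the parallel
# schemes above, an all-to-all communication between the passes makes the
# next axis locally contiguous on each process.
rng = np.random.default_rng(1)
grid = rng.standard_normal((8, 8, 8)) + 1j * rng.standard_normal((8, 8, 8))

step = np.fft.fft(grid, axis=0)   # pass 1 (then: all-to-all in a real code)
step = np.fft.fft(step, axis=1)   # pass 2 (then: all-to-all)
step = np.fft.fft(step, axis=2)   # pass 3

print(np.allclose(step, np.fft.fftn(grid)))   # True
```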

  17. Numerical evaluation of the radiation from unbaffled, finite plates using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.

    1983-01-01

    An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.
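
    The angular-spectrum form of Rayleigh's formula reduces one propagation step to an FFT, a multiplication by a propagator, and an inverse FFT. The sketch below propagates the field of a uniformly vibrating disk a short distance; the grid, wavelength and disk radius are illustrative only, and the iteration that enforces the unbaffled boundary condition is omitted.

```python
import numpy as np

# Angular-spectrum evaluation of Rayleigh's formula: FFT the source plane,
# multiply by the propagator exp(i*kz*z), inverse FFT. Parameters are
# illustrative, not taken from the paper.
n, dx, wavelength, z = 128, 1e-3, 5e-3, 0.05
k = 2 * np.pi / wavelength

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
u0 = (X**2 + Y**2 <= 0.01**2).astype(complex)     # uniformly vibrating disk

kx = 2 * np.pi * np.fft.fftfreq(n, dx)
KX, KY = np.meshgrid(kx, kx)
kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # imaginary -> evanescent

u_z = np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))
print(np.abs(u_z).max())
```

    The spectral components with kx²+ky² > k² get an imaginary kz, so the propagator correctly damps the evanescent part instead of propagating it.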

  18. Application to induction motor faults diagnosis of the amplitude recovery method combined with FFT

    NASA Astrophysics Data System (ADS)

    Liu, Yukun; Guo, Liwei; Wang, Qixiang; An, Guoqing; Guo, Ming; Lian, Hao

    2010-11-01

    This paper presents a signal processing method, the amplitude recovery method (abbreviated to ARM), that can be used as a pre-processing step for the fast Fourier transform (FFT) in order to analyze the spectrum of harmonics other than the fundamental frequency in stator currents and thereby diagnose subtle faults in induction motors. In this role the ARM functions as a filter that removes the fundamental-frequency component from the three phases of stator current of the induction motor. The filtered result can then be passed to the FFT for further spectrum analysis, so that the amplitudes at the other-order frequencies can be extracted and analyzed independently. If the FFT is used without ARM pre-processing and the components at other-order frequencies are faint compared with the fundamental, their amplitudes cannot easily be extracted from the stator currents, because when the FFT is applied directly to the original signal, all frequencies in the spectrum of the original stator current carry the same weight. The ARM is capable of separating the other-order part of the stator currents from the fundamental-order part. Compared with existing digital filters, the ARM has several benefits: a stop-band narrow enough to suppress only the fundamental frequency, simple operations of algebra and trigonometry without any integration, and a derivation taken directly from mathematical equations without any artificial adjustment. The ARM can also be used by itself as a coarse-grained diagnosis of faults in induction motors while they are working. These features can be applied to monitor and diagnose subtle faults in induction motors and guard them from damage during operation. The diagnostic application of the ARM combined with the FFT is also demonstrated in this paper on an experimental induction motor. The test results verify the rationality and feasibility of the proposed method.
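
    The masking problem that motivates the ARM is easy to demonstrate numerically. In this sketch the fundamental is removed by a simple least-squares amplitude fit, which merely stands in for the ARM's algebraic procedure (the abstract does not give enough detail to reproduce it); the current signal and the fault harmonic are synthetic.

```python
import numpy as np

# Synthetic stator current: a strong 50 Hz fundamental plus a faint
# hypothetical fault harmonic at 350 Hz. The least-squares removal of the
# fundamental below merely stands in for the ARM's algebraic procedure.
fs, f0 = 5000, 50.0
t = np.arange(0, 1, 1 / fs)
current = 10 * np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 7 * f0 * t)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec = np.abs(np.fft.rfft(current)) / t.size
fund = spec[np.argmin(np.abs(freqs - f0))]
fault = spec[np.argmin(np.abs(freqs - 7 * f0))]
print(fund / fault)                    # the fault line is ~200x weaker

# Remove the fundamental, then transform: the fault line now stands alone.
a = 2 * np.mean(current * np.sin(2 * np.pi * f0 * t))   # fitted amplitude
residual = current - a * np.sin(2 * np.pi * f0 * t)
spec2 = np.abs(np.fft.rfft(residual)) / t.size
print(spec2[np.argmin(np.abs(freqs - 7 * f0))])         # ~0.025 (= 0.05/2)
```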

  19. Terrain-Adaptive Navigation Architecture

    NASA Technical Reports Server (NTRS)

    Helmick, Daniel M.; Angelova, Anelia; Matthies, Larry H.

    2008-01-01

    A navigation system, designed for a Mars rover, deals with rough terrain and/or potential slip when evaluating and executing paths. The system also can be used for any off-road, autonomous vehicles. The system enables vehicles to autonomously navigate different terrain challenges including dry river channel systems, putative shorelines, and gullies emanating from canyon walls. Several of the technologies within this innovation increase the navigation system's capabilities compared to earlier rover navigation algorithms.

  20. Can architecture be barbaric?

    PubMed

    Hürol, Yonca

    2009-06-01

    The title of this article is adapted from Theodor W. Adorno's famous dictum: 'To write poetry after Auschwitz is barbaric.' After the catastrophic earthquake in Kocaeli, Turkey on the 17th of August 1999, in which more than 40,000 people died or were lost, Necdet Teymur, who was then the dean of the Faculty of Architecture of the Middle East Technical University, referred to Adorno in one of his 'earthquake poems' and asked: 'Is architecture possible after 17th of August?' The main objective of this article is to interpret Teymur's question in respect of its connection to Adorno's philosophy with a view to make a contribution to the politics and ethics of architecture in Turkey. Teymur's question helps in providing a new interpretation of a critical approach to architecture and architectural technology through Adorno's philosophy. The paper also presents a discussion of Adorno's dictum, which serves for a better understanding of its universality/particularity.

  1. Non-uniform FFT for the finite element computation of the micromagnetic scalar potential

    NASA Astrophysics Data System (ADS)

    Exl, L.; Schrefl, T.

    2014-08-01

    We present a quasi-linearly scaling, first-order polynomial finite element method for the solution of the magnetostatic open boundary problem by splitting the magnetic scalar potential. The potential is determined by solving a Dirichlet problem and evaluation of the single layer potential by a fast approximation technique based on Fourier approximation of the kernel function. The latter approximation leads to a generalization of the well-known convolution theorem used in finite difference methods. We address it by a non-uniform FFT approach. Overall, our method scales as O(M + N + N log N) for N nodes and M surface triangles. We confirm our approach by several numerical tests.

  2. Analysis of fixed point FFT for Fourier domain optical coherence tomography systems.

    PubMed

    Ali, Murtaza; Parlapalli, Renuka; Magee, David P; Dasgupta, Udayan

    2009-01-01

    Optical coherence tomography (OCT) is a new imaging modality gaining popularity in the medical community. Its applications include ophthalmology, gastroenterology, dermatology, etc. As the use of OCT increases, so does the need for portable, low-power devices. Digital signal processors (DSPs) are well suited to meet the signal processing requirements of such a system. These processors usually operate with fixed precision. This paper analyzes the issues that a system implementer faces when implementing signal processing algorithms on a fixed-point processor. Specifically, we show the effect of different fixed-point precisions in the implementation of the FFT on the sensitivity of Fourier domain OCT systems. PMID:19965018
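
    The kind of precision analysis the paper describes can be approximated by quantizing the FFT input and measuring the output SNR against a double-precision reference. A real fixed-point implementation also quantizes twiddle factors and intermediate sums, so this sketch gives only a lower bound on the error.

```python
import numpy as np

# Effect of input quantization on FFT accuracy: quantize the input to b
# fractional bits and compare against the double-precision result. A real
# fixed-point DSP also quantizes twiddles and intermediates, so this is
# only a lower bound on the error.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 2048)

def quantize(v, bits):
    scale = 2.0 ** (bits - 1)
    return np.round(v * scale) / scale

ref = np.fft.fft(x)
snrs = []
for bits in (8, 12, 16):
    err = np.fft.fft(quantize(x, bits)) - ref
    snrs.append(10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2)))
print([round(s, 1) for s in snrs])   # SNR grows roughly 6 dB per extra bit
```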

  3. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  4. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.

  5. Fault diagnosis method based on FFT-RPCA-SVM for Cascaded-Multilevel Inverter.

    PubMed

    Wang, Tianzhen; Qi, Jie; Xu, Hao; Wang, Yide; Liu, Lei; Gao, Diju

    2016-01-01

    Thanks to reduced switch stress, high quality of load wave, easy packaging and good extensibility, the cascaded H-bridge multilevel inverter is widely used in wind power systems. To guarantee stable operation of the system, a new fault diagnosis method, based on the Fast Fourier Transform (FFT), Relative Principle Component Analysis (RPCA) and Support Vector Machine (SVM), is proposed for the H-bridge multilevel inverter. To avoid the influence of load variation on fault diagnosis, the output voltages of the inverter are chosen as the fault characteristic signals. To shorten the time of diagnosis and improve the diagnostic accuracy, the main features of the fault characteristic signals are extracted by FFT. To further reduce the training time of the SVM, the feature vector is reduced based on RPCA, which yields a lower-dimensional feature space. The fault classifier is constructed via SVM. An experimental prototype of the inverter is built to test the proposed method. Compared to other fault diagnosis methods, the experimental results demonstrate the high accuracy and efficiency of the proposed method. PMID:26626623
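
    The feature-extraction part of the pipeline can be sketched with numpy. Ordinary PCA (via SVD) stands in for the paper's RPCA variant, the SVM stage is omitted, and the fault signature (an extra harmonic in the output voltage) is a synthetic illustration.

```python
import numpy as np

# Sketch of the feature pipeline: FFT magnitudes of the inverter output
# voltage as fault features, then PCA (a stand-in for the paper's RPCA
# variant) to reduce dimension before the SVM classifier (omitted here).
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 512, endpoint=False)

def sample(faulty):
    v = np.sin(2 * np.pi * 50 * t) + 0.02 * rng.standard_normal(t.size)
    if faulty:
        v += 0.3 * np.sin(2 * np.pi * 150 * t)   # hypothetical fault signature
    return np.abs(np.fft.rfft(v))                # FFT-magnitude feature vector

X = np.array([sample(i % 2 == 1) for i in range(40)])
y = np.arange(40) % 2

# PCA via SVD of the centred feature matrix; keep two components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T
# The two classes separate along the first principal component.
print(Z[y == 0, 0].mean(), Z[y == 1, 0].mean())
```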

  7. Robust FFT-based scale-invariant image registration with image gradients.

    PubMed

    Tzimiropoulos, Georgios; Argyriou, Vasileios; Zafeiriou, Stefanos; Stathaki, Tania

    2010-10-01

    We present a robust FFT-based approach to scale-invariant image registration. Our method relies on FFT-based correlation twice: once in the log-polar Fourier domain to estimate the scaling and rotation and once in the spatial domain to recover the residual translation. Previous methods based on the same principles are not robust. To equip our scheme with robustness and accuracy, we introduce modifications which tailor the method to the nature of images. First, we derive efficient log-polar Fourier representations by replacing image functions with complex gray-level edge maps. We show that this representation both captures the structure of salient image features and circumvents problems related to the low-pass nature of images, interpolation errors, border effects, and aliasing. Second, to recover the unknown parameters, we introduce the normalized gradient correlation. We show that, using image gradients to perform correlation, the errors induced by outliers are mapped to a uniform distribution for which our normalized gradient correlation features robust performance. Exhaustive experimentation with real images showed that, unlike any other Fourier-based correlation techniques, the proposed method was able to estimate translations, arbitrary rotations, and scale factors up to 6. PMID:20479492

  8. FFT-split-operator code for solving the Dirac equation in 2+1 dimensions

    NASA Astrophysics Data System (ADS)

    Mocken, Guido R.; Keitel, Christoph H.

    2008-06-01

    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 474 937. No. of bytes in distributed program, including test data, etc.: 4 128 347. Distribution format: tar.gz. Programming language: C++. Computer: Any, but SMP systems are preferred. Operating system: Linux and MacOS X are actively supported by the current version; earlier versions were also tested successfully on IRIX and AIX. Number of processors used: Generally unlimited, but best scaling with 2-4 processors for typical problems. RAM: 160 Megabytes minimum for the examples given here. Classification: 2.7. External routines: FFTW Library [3,4], Gnu Scientific Library [5], bzip2, bunzip2. Nature of problem: The relativistic time evolution of wave functions according to the Dirac equation is a challenging numerical task. Especially for an electron in the presence of high intensity laser beams and/or highly charged ions, this type of problem is of considerable interest to atomic physicists. Solution method: The code employs the split-operator method [1,2], combined with fast Fourier transforms (FFT) for calculating any occurring spatial derivatives, to solve the given problem. An autocorrelation spectral method [6] is provided to generate a bound state for use as the initial wave function of further dynamical studies. Restrictions: The code in its current form is restricted to problems in two spatial dimensions; otherwise it is limited only by the CPU time and memory that one can afford to spend on a particular problem. Unusual features: The code features dynamically adapting position and momentum space grids to keep execution time and memory requirements as small as possible. It employs an object-oriented approach, and it relies on a Clifford algebra class library to represent the mathematical objects of the Dirac formalism which we employ. Besides that, it includes a feature (typically called "checkpointing") which allows the resumption of an interrupted calculation.

  9. Architecture & Environment

    ERIC Educational Resources Information Center

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  10. Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy

    PubMed Central

    Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern

    2011-01-01

    This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize the throughput of the computation. Moreover, the number of hardware multipliers and dividers is minimized to reduce the hardware costs. The proposed architecture is used as a custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective for expediting the computational speed while consuming few hardware resources for designing an embedded DHM system. PMID:22163688
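
    The minimum-squared-error unwrapping that such architectures accelerate is, in its unweighted form, a discrete Poisson solve that a DCT (an FFT-family transform) diagonalizes. A software sketch of that solve, run on a synthetic phase ramp rather than holographic data:

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_lsq(psi):
    # Unweighted least-squares unwrapping: form the discrete Laplacian of
    # the wrapped phase gradients, then solve the Poisson equation with a
    # DCT, which diagonalizes it under Neumann boundary conditions.
    dy = wrap(np.diff(psi, axis=0))
    dx = wrap(np.diff(psi, axis=1))
    rho = np.zeros_like(psi)
    rho[:-1, :] += dy; rho[1:, :] -= dy
    rho[:, :-1] += dx; rho[:, 1:] -= dx
    m, n = psi.shape
    eig = (2 * np.cos(np.pi * np.arange(m) / m)[:, None]
           + 2 * np.cos(np.pi * np.arange(n) / n)[None, :] - 4)
    rhat = dctn(rho, norm='ortho')
    rhat[0, 0] = 0       # the mean of the solution is arbitrary
    eig[0, 0] = 1        # avoid division by zero at the DC term
    return idctn(rhat / eig, norm='ortho')

# Synthetic test: a smooth ramp is recovered (up to a constant) from its wrap.
true = np.add.outer(np.linspace(0, 12, 64), np.linspace(0, 9, 64))
rec = unwrap_lsq(wrap(true))
print(np.std((rec - true) - (rec - true).mean()))   # effectively zero
```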

  11. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  12. High Capacity Method for Real-Time Audio Data Hiding Using the FFT Transform

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mehdi; Megías, David

    This paper presents a very efficient method for audio data hiding which is suitable for real-time applications. The FFT magnitudes in a band of frequencies between 5 and 15 kHz are modified slightly, and the frequencies whose magnitude is less than a threshold are used for embedding. Its low complexity is one of the most important properties of this method, making it appropriate for real-time applications. In addition, the suggested scheme is blind, since it does not need the original signal for extracting the hidden bits. The experimental results show that it has a very good capacity (5 kbps) without significant perceptual distortion, and provides robustness against MPEG compression (MP3).
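
    The abstract gives only the outline (a 5-15 kHz band, a magnitude threshold, low-magnitude bins used for embedding), so the parity-of-a-quantized-magnitude rule below is a hypothetical stand-in for the paper's actual embedding rule; it does, however, demonstrate the blind-extraction property.

```python
import numpy as np

# Hypothetical embedding rule in the spirit of the abstract: inside the
# 5-15 kHz band, bins whose FFT magnitude is below a threshold carry one
# bit each, encoded as the parity of the magnitude quantized to a step
# delta. Phase is preserved; extraction is blind (no original needed).
fs = 44100
rng = np.random.default_rng(5)
audio = rng.normal(0, 0.1, fs)                 # 1 s stand-in host signal
spec = np.fft.rfft(audio)
freqs = np.fft.rfftfreq(audio.size, 1 / fs)

thresh, delta = 2.0, 0.05
slots = np.flatnonzero((freqs > 5000) & (freqs < 15000) & (np.abs(spec) < thresh))
bits = rng.integers(0, 2, slots.size)

mag, phase = np.abs(spec), np.angle(spec)
mag[slots] = (2 * np.floor(mag[slots] / (2 * delta)) + bits) * delta + delta / 2
spec[slots] = mag[slots] * np.exp(1j * phase[slots])
marked = np.fft.irfft(spec, n=audio.size)      # watermarked audio

# Blind extraction: re-transform and read back the parity of each slot.
mag2 = np.abs(np.fft.rfft(marked))
recovered = np.floor(mag2[slots] / delta).astype(int) % 2
print((recovered == bits).mean())   # 1.0
```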

  13. Robust High-Capacity Audio Watermarking Based on FFT Amplitude Modification

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mehdi; Megías, David

    This paper proposes a novel robust audio watermarking algorithm to embed data and extract it in a bit-exact manner based on changing the magnitudes of the FFT spectrum. The key point is selecting a frequency band for embedding based on the comparison between the original and the MP3 compressed/decompressed signal and on a suitable scaling factor. The experimental results show that the method has a very high capacity (about 5 kbps), without significant perceptual distortion (ODG about -0.25), and provides robustness against common audio signal processing such as added noise, filtering and MPEG compression (MP3). Furthermore, the proposed method has a larger capacity (ratio of embedded bits to host bits) than recent image data hiding methods.

  14. Implementation of non-uniform FFT based Ewald summation in dissipative particle dynamics method

    NASA Astrophysics Data System (ADS)

    Wang, Yong-Lei; Laaksonen, Aatto; Lu, Zhong-Yuan

    2013-02-01

    The ENUF method, i.e., Ewald summation based on the non-uniform FFT technique (NFFT), is implemented in the dissipative particle dynamics (DPD) simulation scheme to calculate the electrostatic interactions at the mesoscopic level quickly and accurately. In a simple model electrolyte system, the suitable ENUF-DPD parameters, including the convergence parameter α, the NFFT approximation parameter p, and the cut-offs for real and reciprocal space contributions, are carefully determined. With these optimized parameters, the ENUF-DPD method shows excellent efficiency and scales as O(N log N). The ENUF-DPD method is further validated by investigating the effects of the charge fraction of the polyelectrolyte, the ionic strength and the counterion valency of added salts on polyelectrolyte conformations. The simulations in this paper, together with a separately published work on dendrimer-membrane complexes, show that the ENUF-DPD method is very robust and can be used to study charged complex systems at the mesoscopic level.

  15. STS-26 Commander Hauck during egress training in JSC's MAIL Bldg 9A FFT

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, Commander Frederick H. Hauck, wearing launch and entry suit (LES) and launch and entry helmet (LEH), egresses the Full Fuselage Trainer (FFT) via the new crew escape system (CES) slide inflated at the open side hatch. Technicians stand on either side of the slide ready to help Hauck to his feet when he reaches the bottom. The emergency egress training was held in JSC's Shuttle Mockup and Integration Laboratory (MAIL) Bldg 9A. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (LESs) and checked out the crew escape system (CES) slide and other CES configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options. The photograph was taken by Keith Meyers of the NEW YORK TIMES.

  16. STS-26 crew trains in JSC full fuselage trainer (FFT) shuttle mockup

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, crewmembers are briefed during a training exercise in the Shuttle Mockup and Integration Laboratory Bldg 9A. Seated outside the open side hatch of the full fuselage trainer (FFT) (left to right) are Mission Specialist (MS) George D. Nelson, Commander Frederick H. Hauck, and Pilot Richard O. Covey. Looking on at right are Astronaut Office Chief Daniel C. Brandenstein (standing) and astronaut James P. Bagian. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (launch and entry suits (LESs)) and checked out crew escape system (CES) configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options.

  17. STS-26 crew trains in JSC full fuselage trainer (FFT) shuttle mockup

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, crewmembers are briefed during a training exercise in the Shuttle Mockup and Integration Laboratory Bldg 9A. Seated outside the open side hatch of the full fuselage trainer (FFT) (left to right) are Mission Specialist (MS) George D. Nelson, Commander Frederick H. Hauck, and Pilot Richard O. Covey. Astronaut Steven R. Nagel (left), positioned in the open side hatch, briefs the crew on the pole escape system as he demonstrates some related equipment. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (launch and entry suits (LESs)) and checked out crew escape system (CES) configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options. The photograph was taken by Keith Meyers of the NEW YORK TIMES.

  18. Mixed boundary conditions for FFT-based homogenization at finite strains

    NASA Astrophysics Data System (ADS)

    Kabel, Matthias; Fliegener, Sascha; Schneider, Matti

    2016-02-01

    In this article we introduce a Lippmann-Schwinger formulation for the unit cell problem of periodic homogenization of elasticity at finite strains incorporating arbitrary mixed boundary conditions. Such problems occur frequently, for instance when validating computational results with tensile tests, where the deformation gradient in loading direction is fixed, as is the stress in the corresponding orthogonal plane. Previous Lippmann-Schwinger formulations involving mixed boundary conditions can only describe tensile tests where the vector of applied force is proportional to a coordinate direction. Utilizing suitable orthogonal projectors we develop a Lippmann-Schwinger framework for arbitrary mixed boundary conditions. The resulting fixed point and Newton-Krylov algorithms preserve the positive characteristics of existing FFT-algorithms. We demonstrate the power of the proposed methods with a series of numerical examples, including continuous fiber reinforced laminates and a complex nonwoven structure of a long fiber reinforced thermoplastic, resulting in a speed-up of some computations by a factor of 1000.

  19. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. PMID:24038140

  20. Comparing precorrected-FFT and fast multipole algorithms for solving three-dimensional potential integral equations

    SciTech Connect

    White, J.; Phillips, J.R.; Korsmeyer, T.

    1994-12-31

    Mixed first- and second-kind surface integral equations with (1/r) and ∂/∂n(1/r) kernels are generated by a variety of three-dimensional engineering problems. For such problems, Nyström-type algorithms cannot be used directly, but an expansion for the unknown, rather than for the entire integrand, can be assumed and the product of the singular kernel and the unknown integrated analytically. Combining such an approach with a Galerkin or collocation scheme for computing the expansion coefficients is a general approach, but generates dense matrix problems. Recently developed fast algorithms for solving these dense matrix problems have been based on multipole-accelerated iterative methods, in which the fast multipole algorithm is used to rapidly compute the matrix-vector products in a Krylov-subspace based iterative method. Another approach to rapidly computing the dense matrix-vector products associated with discretized integral equations follows more along the lines of a multigrid algorithm, and involves projecting the surface unknowns onto a regular grid, computing on the grid, and finally interpolating the results from the regular grid back to the surfaces. Here, the authors describe a precorrected-FFT approach which can replace the fast multipole algorithm for accelerating the dense matrix-vector product associated with discretized potential integral equations. The precorrected-FFT method, described below, is an order n log(n) algorithm, and is asymptotically slower than the order n fast multipole algorithm. However, initial experimental results indicate the method may have a significant constant-factor advantage for a variety of engineering problems.

  1. A Reduced-Complexity Fast Algorithm for Software Implementation of the IFFT/FFT in DMT Systems

    NASA Astrophysics Data System (ADS)

    Chan, Tsun-Shan; Kuo, Jen-Chih; Wu, An-Yeu (Andy)

    2002-12-01

    The discrete multitone (DMT) modulation/demodulation scheme is the standard transmission technique in the application of asymmetric digital subscriber lines (ADSL) and very-high-speed digital subscriber lines (VDSL). Although DMT can achieve a higher data rate compared with other modulation/demodulation schemes, its computational complexity is too high for cost-efficient implementations. For example, it requires a 512-point IFFT/FFT as the modulation/demodulation kernel in ADSL systems and an even larger one in VDSL systems. The large block size results in a heavy computational load when running on programmable digital signal processors (DSPs). In this paper, we derive a computationally efficient fast algorithm for the IFFT/FFT. The proposed algorithm avoids the complex-domain operations that are inevitable in conventional IFFT/FFT computation. The resulting software function requires less computational complexity. We show that it requires only 17% of the multiplications of the Cooley-Tukey algorithm to compute the IFFT and FFT. Hence, the proposed fast algorithm is very suitable for firmware development in reducing the MIPS count in programmable DSPs.
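
    The paper's factorization is not reproduced here, but the structural property it exploits (DMT feeds the IFFT a Hermitian-symmetric block, so the time-domain signal is real and roughly half the arithmetic can be saved) can be illustrated with numpy's real-FFT pair. The carrier layout is an illustrative ADSL-style 512-point block, not the paper's scheme.

```python
import numpy as np

# DMT builds a real time-domain symbol by giving the IFFT a Hermitian-
# symmetric block; numpy's rfft/irfft pair expresses that symmetry
# directly. Illustrative ADSL-style 512-point block, 4-QAM carriers.
n = 512
rng = np.random.default_rng(6)
symbols = rng.choice([-1, 1], 255) + 1j * rng.choice([-1, 1], 255)  # 4-QAM

block = np.zeros(n // 2 + 1, dtype=complex)
block[1:256] = symbols              # DC and Nyquist bins stay empty

signal = np.fft.irfft(block, n=n)   # real-valued DMT symbol, length 512
demod = np.fft.rfft(signal)         # receiver-side FFT

print(np.isrealobj(signal), np.allclose(demod[1:256], symbols))   # True True
```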

  2. Analog implementation of radix-2, 16-FFT processor for OFDM receivers: non-linearity behaviours and system performance analysis

    NASA Astrophysics Data System (ADS)

    Mokhtarian, N.; Hodtani, G. A.

    2015-12-01

    Analog implementations of decoders have been widely studied by considering circuit complexity, as well as power and speed, and their integration with other analog blocks is an extension of analog decoding research. In the front-end blocks of orthogonal frequency-division multiplexing (OFDM) systems, combination of an analog fast Fourier transform (FFT) with an analog decoder is suitable. In this article, the implementation of a 16-symbol FFT processor based on analog complementary metal-oxide-semiconductor current mirrors within circuit and system levels is presented, and the FFT is implemented using a butterfly diagram, where each node is implemented using analog circuits. Implementation details include consideration of effects of transistor mismatch and inherent noises and effects of circuit non-linearity in OFDM system performance. It is shown that not only can transistor inherent noises be measured but also transistor mismatch can be applied as an input-referred noise source that can be used in system- and circuit-level studies. Simulations of a radix-2, 16-symbol FFT show that proposed circuits consume very low power, and impacts of noise, mismatch and non-linearity for each node of this processor are very small.

  3. An FPGA Architecture for Extracting Real-Time Zernike Coefficients from Measured Phase Gradients

    NASA Astrophysics Data System (ADS)

    Moser, Steven; Lee, Peter; Podoleanu, Adrian

    2015-04-01

Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them, and the large inverse-matrix calculation needed for the linear fit. This paper presents an efficient parallel method for calculating Zernike coefficients from phase gradients produced by a Shack-Hartmann sensor, and its real-time implementation on an FPGA via pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8×8. Analysis of processing-element internal word-length requirements shows that 24-bit precision in pre-calculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient and an overall error of <1%. The design has been synthesized on a Xilinx Spartan-6 XC6SLX45 FPGA. The resource utilisation on this device is <3% of slice registers, <15% of slice LUTs, and approximately 48% of available DSP blocks, independent of the Shack-Hartmann grid size. Block RAM usage is <16% for Shack-Hartmann grid sizes up to 32×32.
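The linear fit the abstract describes can be sketched as a precomputed pseudoinverse applied to the measured gradient vector. The mode-derivative matrix below is a random stand-in, not actual Zernike partial derivatives, and the 4×4 grid is hypothetical:

```python
import numpy as np

# Hypothetical setup: 3 modes sampled as x/y partial derivatives at the
# lenslet positions of a 4x4 Shack-Hartmann grid (random stand-in values).
rng = np.random.default_rng(0)
n_lenslets, n_modes = 16, 3
A = rng.standard_normal((2 * n_lenslets, n_modes))  # stacked dZ/dx, dZ/dy per mode

# Offline step, mirroring the paper's idea: precompute the pseudoinverse once
# so that the run-time fit is a single matrix-vector product.
A_pinv = np.linalg.pinv(A)

true_coeffs = np.array([0.8, -0.3, 0.1])
gradients = A @ true_coeffs          # simulated sensor measurement
est_coeffs = A_pinv @ gradients      # real-time least-squares fit
```

On an FPGA, the matrix-vector product parallelizes across rows, which is why storing precomputed subsections of the inverse matrix removes the run-time bottleneck.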

  4. Neural Architectures for Control

    NASA Technical Reports Server (NTRS)

    Peterson, James K.

    1991-01-01

The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance were developed for three hardware platforms: the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS 386 PC. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse-resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine-scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.

  5. Green Architecture

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Ho

Today, the environment has become a central subject in many scientific disciplines, and in industrial development, owing to global warming. This paper presents an analysis of the tendency of Green Architecture in France along three axes: regulations and approaches for sustainable architecture (certificates and standards), renewable materials (green materials), and strategies (equipment) of sustainable technology. The definition of 'Green Architecture' is given in the introduction, and the question of interdisciplinarity in the technological development of 'Green Architecture' is raised in the conclusion.

  6. [Investigation of "reading" with FFT analysis of the beta waves in EEG during "rapid-reading"].

    PubMed

    Yokoyama, S

    1992-06-01

The beta waves in electroencephalograms (EEGs) during "reading" were investigated by means of the fast Fourier transform (FFT) analysis method. The subjects, who had mastered a "rapid-reading" method, were classified into two groups according to their rapid-reading achievement level: the "under-trained readers" (5 male, 2 female) had already acquired the ability of smooth eye movement but continued to use a mental phonetic process while reading, whereas the "well-trained readers" (6 male, 2 female) could understand the contents of the text without resorting to such a phonetic process. All subjects were right-handed. The tasks were designed to eliminate artifacts from eye movement and electromyograms as much as possible. The EEGs were recorded with twelve channels of the international standard 10-20 electrode system. The relative power value (RPV) was calculated as RPV(%) = [(X - C)/N] × 100(%), where X is the beta-1 or beta-2 power value from FFT analysis while performing the tasks, C is the power value while performing the control tasks, and N is the value in the resting state with the eyes open. The results were statistically analyzed by paired t-test. The following results were obtained in both groups: (a) the left angular gyrus was usually activated during rapid-reading; (b) Wernicke's center was activated only during reading with the phonetic process; (c) in the well-trained readers, activation of the right visual cortex was associated with visual imaging during rapid-reading; (d) in the under-trained readers, an association between activation of the central frontal area and rapid-reading was observed. Thus, the following model for "reading" was obtained: two parallel pathways seem important in the processing of visually presented verbal information; one relates only to the left angular gyrus, where the visual-verbal information is processed directly, and the other relates to the interactive pathway between the left angular gyrus and the
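The relative power value defined in the abstract is a simple normalization of FFT band power; a minimal sketch, with hypothetical sampling rate and band edges, might look like this:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi] Hz band from the FFT magnitude spectrum."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()

def relative_power_value(x, c, n):
    """RPV(%) = ((X - C) / N) * 100, per the abstract's definition:
    X: band power during the task, C: during the control task,
    N: during rest with the eyes open."""
    return (x - c) / n * 100.0
```

Here X, C, and N would each be obtained by applying `band_power` to the corresponding EEG recording segment with the chosen beta-band limits.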

  7. The Unified-FFT Method for Fast Solution of Integral Equations as Applied to Shielded-Domain Electromagnetic

    NASA Astrophysics Data System (ADS)

    Rautio, Brian

    Electromagnetic (EM) solvers are widely used within computer-aided design (CAD) to improve and ensure success of circuit designs. Unfortunately, due to the complexity of Maxwell's equations, they are often computationally expensive. While considerable progress has been made in the realm of speed-enhanced EM solvers, these fast solvers generally achieve their results through methods that introduce additional error components by way of geometric approximations, sparse-matrix approximations, multilevel decomposition of interactions, and more. This work introduces the new method, Unified-FFT (UFFT). A derivative of method of moments, UFFT scales as O(N log N), and achieves fast analysis by the unique combination of FFT-enhanced matrix fill operations (MFO) with FFT-enhanced matrix solve operations (MSO). In this work, two versions of UFFT are developed, UFFT-Precorrected (UFFT-P) and UFFT-Grid Totalizing (UFFT-GT). UFFT-P uses precorrected FFT for MSO and allows the use of basis functions that do not conform to a regular grid. UFFT-GT uses conjugate gradient FFT for MSO and features the capability of reducing the error of the solution down to machine precision. The main contribution of UFFT-P is a fast solver, which utilizes FFT for both MFO and MSO. It is demonstrated in this work to not only provide simulation results for large problems considerably faster than state of the art commercial tools, but also to be capable of simulating geometries which are too complex for conventional simulation. In UFFT-P these benefits come at the expense of a minor penalty to accuracy. UFFT-GT contains further contributions as it demonstrates that such a fast solver can be accurate to numerical precision as compared to a full, direct analysis. It is shown to provide even more algorithmic efficiency and faster performance than UFFT-P. UFFT-GT makes an additional contribution in that it is developed not only for planar geometries, but also for the case of multilayered dielectrics and

  8. Improving situation awareness using a hub architecture for friendly force tracking

    NASA Astrophysics Data System (ADS)

    Karkkainen, Anssi P.

    2010-04-01

Situation Awareness (SA) is the perception of environmental elements within a volume of time and space, the comprehension of their meaning, and the projection of their future status. In a military environment, the most critical elements to be tracked are friendly and hostile forces. Poor knowledge of the locations of friendly forces easily leads to situations in which troops come under fire from their own side, or in which decisions in a command and control system are based on incorrect tracking. Friendly Force Tracking (FFT) is thus a vital part of building situation awareness. FFT is quite simple in theory: collected tracks are shared through the networks to all troops. In the real world, the situation is not so clear. Poor communication capabilities, lack of continuous connectivity, and a large number of users at different levels place high requirements on FFT systems. In this paper a simple architecture for Friendly Force Tracking is presented. The architecture is based on NFFI (NATO Friendly Force Information) hubs, which have two key features: the ability to forward tracking information and the ability to convert information into the desired format. The hub-based approach provides a lightweight and scalable solution that is able to use several types of communication media (GSM, tactical radios, TETRA, etc.). The system is also simple to configure and maintain. One main benefit of the proposed architecture is that it is independent of the message format: it communicates using NFFI messages, but national formats are also allowed.

  9. AFM tip characterization by using FFT filtered images of step structures.

    PubMed

    Yan, Yongda; Xue, Bo; Hu, Zhenjiang; Zhao, Xuesen

    2016-01-01

The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Moreover, when using an AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations; as such, estimation of the tip radius is important for analyzing results taken using an AFM. In this study, a geometrical model of scanning a step structure with an AFM tip was developed, with the tip assumed to have a hemispherical cone shape. The spectra of profiles simulated with tips of different radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip-radius variation on the spectra of the simulated profiles, it was found that low-frequency harmonics were more susceptible, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of a hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. PMID:26517548

  10. Detection of apnea using a short-window FFT technique and an artificial neural network

    NASA Astrophysics Data System (ADS)

    Waldemark, Karina E.; Agehed, Kenneth I.; Lindblad, Thomas; Waldemark, Joakim T. A.

    1998-03-01

Sleep apnea is characterized by frequent prolonged interruptions of breathing during sleep. This syndrome causes severe sleep disorders and is often responsible for the development of other problems such as heart disease, high blood pressure, and daytime fatigue. After diagnosis, sleep apnea is often successfully treated by applying continuous positive airway pressure (CPAP) to the mouth and nose. Although effective, the CPAP equipment takes up considerable space, and the connected mask causes much inconvenience for patients. This has raised interest in developing new techniques for treating sleep apnea syndrome. Several studies have indicated that electrical stimulation of the hypoglossal nerve and tongue muscle may be a useful method for treating patients with severe sleep apnea. In order to successfully prevent the occurrence of apnea, it is necessary to have some technique for early, fast, on-line detection or prediction of apnea events. This paper suggests using measurements of respiratory airflow (mouth temperature). The signal processing for this task uses a short-window FFT technique and an artificial back-propagation neural net to model or predict the occurrence of apneas. The results show that early detection of respiratory interruption is possible and that the associated delay time is small.
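The short-window FFT front end can be sketched as windowed spectral power over a sliding record; the airflow trace, sampling rate, and detection threshold below are all hypothetical stand-ins, and a real system would feed the window spectra into the neural net rather than thresholding directly:

```python
import numpy as np

def window_powers(signal, win):
    """Total spectral power (DC removed) of each non-overlapping short window."""
    n_win = len(signal) // win
    return np.array([
        (np.abs(np.fft.rfft(signal[i * win:(i + 1) * win])) ** 2)[1:].sum()
        for i in range(n_win)
    ])

# Hypothetical airflow trace: ~0.3 Hz breathing with a flat apnea episode.
fs, win = 32, 64                       # 32 Hz sampling, 2 s analysis windows
t = np.arange(10 * fs) / fs
flow = np.sin(2 * np.pi * 0.3 * t)
flow[3 * fs:6 * fs] = 0.0              # breathing interruption from 3 s to 6 s
p = window_powers(flow, win)
apnea_windows = p < 0.1 * p.max()      # crude threshold stand-in for the net
```

Because each window is short, a drop in spectral power is flagged within one window length, which is the "early detection with small delay" property the abstract reports.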

  11. Ambient modal identification of a primary-secondary structure by Fast Bayesian FFT method

    NASA Astrophysics Data System (ADS)

    Au, Siu-Kui; Zhang, Feng-Liang

    2012-04-01

The Mong Man Wai Building is a seven-storied reinforced concrete structure situated on the campus of the City University of Hong Kong. On its roof a two-storied steel frame has recently been constructed to host a new wind tunnel laboratory. The roof frame and the main building form a primary-secondary structure. The dynamic characteristics of the resulting system are of interest from a structural dynamics point of view. This paper presents work on modal identification of the structure using ambient vibration measurement. An array of tri-axial acceleration data has been obtained using a number of setups to cover all locations of interest with a limited number of sensors. Modal identification is performed using a recently developed Fast Bayesian FFT method. In addition to the most probable modal properties, their posterior uncertainties can also be assessed using the method. The posterior uncertainty of a mode shape is assessed by the expected value of the Modal Assurance Criterion (MAC) between the most probable mode shape and a random mode shape consistent with the posterior distribution. The mode shapes of the overall structural system are obtained by assembling those from individual setups using a recently developed least-squares method. The identification results reveal a number of interesting features about the structural system and provide important information defining the baseline modal properties of the building. Practical interpretation of the statistics of modal parameters calculated in frequentist and Bayesian contexts is also discussed.
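The Modal Assurance Criterion mentioned above has a standard closed form; a minimal sketch (the paper's posterior-uncertainty measure averages this quantity over random mode shapes, which is not reproduced here):

```python
import numpy as np

def mac(phi1, phi2):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi1^H phi2|^2 / ((phi1^H phi1)(phi2^H phi2)).
    Equals 1 for identical shapes (up to scaling), 0 for orthogonal ones."""
    num = np.abs(np.vdot(phi1, phi2)) ** 2
    return num / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)
```

Because MAC is scale-invariant, it compares shapes rather than amplitudes, which is why it suits assembling and checking mode shapes identified from different sensor setups.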

  12. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. By being a single, self-revealing architecture, it enables the development of single tools, for example, a single graphical user interface, that span all applications. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications becomes possible. Object encapsulation further allows information to become, in a sense, self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  13. Spectral analysis based on fast Fourier transformation (FFT) of surveillance data: the case of scarlet fever in China.

    PubMed

    Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X

    2014-03-01

Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations of time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transform (FFT) approach to evaluate the epidemic dynamics of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validity and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable to the study of oscillating diseases.
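The core of such an FFT-based spectral analysis is locating the dominant non-DC peak of the spectrum; a minimal sketch on a synthetic monthly incidence series (the 12-month cycle and noise levels are illustrative, not the paper's data):

```python
import numpy as np

def dominant_period(series):
    """Period (in samples) of the strongest non-DC peak of the FFT spectrum."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                    # remove DC so the peak is a true cycle
    spec = np.abs(np.fft.rfft(x))
    k = 1 + np.argmax(spec[1:])         # skip the k = 0 (DC) bin
    return len(x) / k

# Synthetic monthly incidence: 7 years (2004-2010) with an annual cycle.
rng = np.random.default_rng(1)
months = np.arange(84)
cases = 100 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3, 84)
```

Unlike a fitted SARIMA model, this makes no linearity assumption about the generating process; it only asks which oscillation carries the most power.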

  14. A novel FFT/IFFT based peak-to-average power reduction method for OFDM communication systems using tone reservation

    NASA Astrophysics Data System (ADS)

    Besong, Samuel Oru; Yu, Xiaoyou; Li, Bin; Hou, Weibing; Wang, Xiaochun

    2011-10-01

One of the main drawbacks of OFDM systems is the high peak-to-average power ratio (PAPR), which can limit transmission efficiency and the efficient use of the high-power amplifier (HPA). In this paper we present a modified tone-reservation scheme for PAPR reduction that uses FFT iterations to generate the tones. In this scheme, the reserved tones are designed both to cancel peaks and to slightly increase the average power, inducing a better PAPR reduction. The tones are generated by means of two FFT operations, and the process is sometimes iterated to achieve better PAPR reductions. This scheme achieves a significant PAPR reduction of at least 4.6 dB when about 4% of the carriers are used as reserved tones, and with even fewer iterations, when simulated in an OFDM system.
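The PAPR metric itself can be sketched directly from the definition (peak instantaneous power over average power of the time-domain OFDM symbol); the all-ones symbol below is a textbook worst case, not taken from the paper:

```python
import numpy as np

def papr_db(freq_symbols):
    """PAPR of one OFDM symbol: IFFT to the time domain, then
    10*log10(peak power / average power)."""
    x = np.fft.ifft(freq_symbols)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Worst case: identical tones add coherently at t = 0, so for N subcarriers
# the PAPR is exactly 10*log10(N).
n = 64
worst = papr_db(np.ones(n))
```

A tone-reservation scheme would adjust a reserved subset of `freq_symbols` (leaving data carriers untouched) to drive this metric down.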

  15. The use of the FFT for the efficient solution of the problem of electromagnetic scattering by a body of revolution

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Mittra, Raj

    1990-01-01

The enhancement of the computational efficiency of the body-of-revolution (BOR) scattering problem is discussed with a view to making it practical for solving large-body problems. The problem of EM scattering by a perfectly conducting BOR is considered, although the methods can be extended to multilayered dielectric bodies as well. Typically, the generation of the elements of the moment-method matrix consumes a major portion of the computational time. It is shown how this time can be significantly reduced by manipulating the expression for the matrix elements to permit efficient FFT computation. A technique for extracting the singularity of the Green function that appears within the integrands of the matrix diagonal is also presented, further enhancing the usefulness of the FFT. The computation time can thus be improved by at least an order of magnitude for large bodies in comparison with previous algorithms.

  16. Multiple wall-reflection effect in adaptive-array differential-phase reflectometry on QUEST

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Fujisawa, A.; Nagashima, Y.; Hamasaki, M.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; QUEST Team

    2016-01-01

A phased array antenna and Software-Defined Radio (SDR) heterodyne-detection systems have been developed for adaptive array approaches in reflectometry on QUEST. In the QUEST device, which can be considered a large oversized cavity, a standing-wave (multiple wall-reflection) effect was clearly observed, with distorted amplitude and phase evolution even when the adaptive array analyses were applied. The distorted fields were analyzed by fast Fourier transform (FFT) in the wavenumber domain to separate the components with and without wall reflections. The differential phase evolution was properly obtained from the distorted field evolution by the FFT procedures. A frequency-derivative method has been proposed to overcome the multiple wall-reflection effect, and the SDR super-heterodyned components with the small frequency difference required for the derivative method were correctly obtained using the FFT analysis.

  17. Textural analyses of carbon fiber materials by 2D-FFT of complex images obtained by high frequency eddy current imaging (HF-ECI)

    NASA Astrophysics Data System (ADS)

    Schulze, Martin H.; Heuer, Henning

    2012-04-01

Carbon fiber based materials are used in many lightweight applications in aeronautical, automotive, machine, and civil engineering. With the increasing automation of the production process of CFRP laminates, manual optical inspection of each resin transfer molding (RTM) layer is not practicable. Because optical systems are limited to surface inspection, they cannot observe the quality parameters of multilayer three-dimensional materials. Imaging eddy-current (EC) NDT is the only suitable inspection method for non-resin materials in the textile state that allows inspection of surface and hidden layers in parallel. The HF-ECI method has the capability to measure layer displacements (misaligned angle orientations) and gap sizes in a multilayer carbon fiber structure. The EC technique uses the variation of the electrical conductivity of carbon-based materials to obtain material properties. Besides the determination of textural parameters such as layer orientation and gap sizes between rovings, the method can detect foreign polymer particles and fuzzy balls, and can visualize undulations. For all of these typical parameters, an imaging classification process chain based on a high-resolution directional EC-imaging device named EddyCus® MPECS and a 2D-FFT with adapted preprocessing algorithms was developed.

  18. Experimental Architecture.

    ERIC Educational Resources Information Center

    Alter, Kevin

    2003-01-01

    Describes the design of the Centre for Architectural Structures and Technology at the University of Manitoba, including the educational context and design goals. Includes building plans and photographs. (EV)

  19. Geometric super-resolution via log-polar FFT image registration and variable pixel linear reconstruction

    NASA Astrophysics Data System (ADS)

    Crabtree, Peter N.; Murray-Krezan, Jeremy

    2011-09-01

Various image de-aliasing techniques and algorithms have been developed to improve the resolution of pixel-limited imagery acquired by an optical system having an undersampled point spread function. These techniques, sometimes referred to as multi-frame or geometric super-resolution, are valuable tools because they maximize the imaging utility of current and legacy focal plane array (FPA) technology. This is especially true for infrared FPAs, which tend to have larger pixels than visible sensors. Geometric super-resolution relies on knowledge of subpixel frame-to-frame motion, which is used to assemble a set of low-resolution (LR) frames into one or more high-resolution (HR) frames. Log-polar FFT image registration provides a straightforward and relatively fast approach to estimating global affine motion, including translation, rotation, and uniform scale changes. This technique is also readily extended to provide subpixel translation estimates, and is explored for its potential combination with variable pixel linear reconstruction (VPLR) to apportion a sequence of LR frames onto an HR grid. The VPLR algorithm created for this work is described, and HR image reconstruction is demonstrated using calibrated 1/4-pixel microscan data. The HR image resulting from VPLR is also enhanced using Lucy-Richardson deconvolution to mitigate blurring effects due to the pixel spread function. To address non-stationary scenes, image warping, and variable lighting conditions, optical flow is also investigated for its potential to provide subpixel motion information. Initial results demonstrate that the particular optical flow technique studied is able to estimate shifts down to nearly 1/10th of a pixel, and possibly smaller. Algorithm performance is demonstrated and explored using laboratory data from visible cameras.
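The translation-estimation core of FFT-based registration can be sketched with standard phase correlation. This is the generic integer-shift version, not the paper's log-polar extension (which additionally recovers rotation and scale) or its subpixel refinement:

```python
import numpy as np

def phase_correlate(ref, shifted):
    """Estimate the integer (dy, dx) translation between two frames via
    FFT phase correlation; subpixel variants interpolate around the peak."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(shifted)
    F /= np.abs(F) + 1e-12              # keep phase only (cross-power spectrum)
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap shifts larger than half the frame (circular correlation).
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Normalizing away the magnitude makes the correlation peak sharp regardless of image content, which is what makes the method robust enough to drive a VPLR-style reconstruction.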

  20. Arabidopsis pab1, a mutant with reduced anthocyanins in immature seeds from banyuls, harbors a mutation in the MATE transporter FFT.

    PubMed

    Kitamura, Satoshi; Oono, Yutaka; Narumi, Issay

    2016-01-01

    Forward genetics approaches have helped elucidate the anthocyanin biosynthetic pathway in plants. Here, we used the Arabidopsis banyuls (ban) mutant, which accumulates anthocyanins, instead of colorless proanthocyanidin precursors, in immature seeds. In contrast to standard screens for mutants lacking anthocyanins in leaves/stems, we mutagenized ban plants and screened for mutants showing differences in pigmentation of immature seeds. The pale banyuls1 (pab1) mutation caused reduced anthocyanin pigmentation in immature seeds compared with ban. Immature pab1 ban seeds contained less anthocyanins and flavonols than ban, but showed normal expression of anthocyanin biosynthetic genes. In contrast to pab1, introduction of a flavonol-less mutation into ban did not produce paler immature seeds. Map-based cloning showed that two independent pab1 alleles disrupted the MATE-type transporter gene FFT/DTX35. Complementation of pab1 with FFT confirmed that mutation in FFT causes the pab1 phenotype. During development, FFT promoter activity was detected in the seed-coat layers that accumulate flavonoids. Anthocyanins accumulate in the vacuole and FFT fused to GFP mainly localized in the vacuolar membrane. Heterologous expression of grapevine MATE-type anthocyanin transporter gene partially complemented the pab1 phenotype. These results suggest that FFT acts at the vacuolar membrane in anthocyanin accumulation in the Arabidopsis seed coat, and that our screening strategy can reveal anthocyanin-related genes that have not been found by standard screening.

  1. Bentho-Pelagic Divergence of Cichlid Feeding Architecture Was Prodigious and Consistent during Multiple Adaptive Radiations within African Rift-Lakes

    PubMed Central

    Cooper, W. James; Parsons, Kevin; McIntyre, Alyssa; Kern, Brittany; McGee-Moore, Alana; Albertson, R. Craig

    2010-01-01

Background How particular changes in functional morphology can repeatedly promote ecological diversification is an active area of evolutionary investigation. The African rift-lake cichlids offer a calibrated time series of the most dramatic adaptive radiations of vertebrate trophic morphology yet described, and the replicate nature of these events provides a unique opportunity to test whether common changes in functional morphology have repeatedly facilitated their ecological success. Methodology/Principal Findings Specimens from 87 genera of cichlid fishes endemic to Lakes Tanganyika, Malawi and Victoria were dissected in order to examine the functional morphology of cichlid feeding. We quantified shape using geometric morphometrics and compared patterns of morphological diversity using a series of analytical tests. The primary axes of divergence were conserved among all three radiations, and the most prevalent changes involved the size of the preorbital region of the skull. Even the fishes from the youngest of these lakes (Victoria), which exhibit the lowest amount of skull shape disparity, have undergone extensive preorbital evolution relative to other craniofacial traits. Such changes have large effects on feeding biomechanics, and can promote expansion into a wide array of niches along a bentho-pelagic ecomorphological axis. Conclusions/Significance Here we show that specific changes in trophic anatomy have evolved repeatedly in the African rift lakes, and our results suggest that simple morphological alterations that have large ecological consequences are likely to constitute critical components of adaptive radiations in functional morphology. Such shifts may precede more complex shape changes as lineages diversify into unoccupied niches. The data presented here, combined with observations of other fish lineages, suggest that the preorbital region represents an evolutionary module that can respond quickly to natural selection when fishes colonize new lakes

  2. Architectural principles for the design of wide band image analysis systems

    SciTech Connect

    Bruning, U.; Giloi, W.K.; Liedtke, C.E.

    1983-01-01

    To match an image-analysis system appropriately to the multistage nature of image analysis, the system should have: (1) an overall system architecture made up of several dedicated SIMD coprocessors connected through a bottleneck-free, high-speed communication structure; (2) data-structure types in hardware; and (3) a conventional computer for executing operating-system functions and application programs. Coprocessors may exist specifically for local image processing, FFT, list processing, and vector processing in general. All functions must be transparent to the user. The architectural principles of such a system and the policies and mechanisms for its realization are exemplified. 4 references.

  3. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
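The speedup the abstract reports ultimately rests on the FFT convolution identity (convolution in space equals pointwise multiplication in frequency), which turns O(n²) kernel sums into O(n log n) transforms. A minimal 1-D sketch of that equivalence (not the paper's CUDA regression-Kriging pipeline):

```python
import numpy as np

def fft_convolve(a, b):
    """Circular convolution via FFT: O(n log n) instead of O(n^2)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def direct_circular_convolve(a, b):
    """O(n^2) reference implementation of the same circular convolution."""
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                     for i in range(n)])
```

On a GPU, each of these transforms maps onto a batched cuFFT call, which is where the reported three-orders-of-magnitude speedup for large grids comes from.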

  5. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charges come from the Einstein photoelectric conversion effect. Following the manufacturing design principle, we allow each working component to be altered by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. Data storage is reduced immensely, and the order of magnitude of the saving is inversely proportional to the target angular speed. We designed two new CCD camera components. Thanks to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as a dual photon detector (PD) analog circuit for change detection that predicts whether to skip a frame or go forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level, biasing the charge-transport voltage toward neighboring buckets or, if not, to the ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor the powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing with an FFT, thresholding of the significant Fourier mode components, and an inverse FFT to check the PSNR; and (ii) post-processing image recovery done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, in new-frame selection, (i) the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]M,N: M(t) = K(t) log N(t).
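
Step (i) of the comparison above can be sketched in a few lines: transform a frame, keep only the largest-magnitude Fourier modes, invert, and check the PSNR. This is an editorial NumPy illustration; the test frame and the 5% retention ratio are assumptions, not values from the paper.

```python
import numpy as np

# Sketch of pre-processing (i): FFT, threshold of significant Fourier mode
# components, inverse FFT, then a PSNR check on the reconstruction.

rng = np.random.default_rng(1)
x, y = np.meshgrid(np.arange(64), np.arange(64))
frame = np.sin(2 * np.pi * x / 8) + 0.5 * np.cos(2 * np.pi * y / 16)
frame += 0.05 * rng.standard_normal(frame.shape)   # simulated sensor noise

F = np.fft.fft2(frame)
keep = int(0.05 * F.size)                          # keep top 5% of modes
thresh = np.sort(np.abs(F).ravel())[-keep]
F_sparse = np.where(np.abs(F) >= thresh, F, 0)     # zero the weak modes
recon = np.real(np.fft.ifft2(F_sparse))

mse = np.mean((frame - recon) ** 2)
peak = np.ptp(frame)
psnr = 10 * np.log10(peak ** 2 / mse)
print(f"PSNR of sparse-FFT reconstruction: {psnr:.1f} dB")
```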

  6. Space Telecommunications Radio Architecture (STRS)

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture for space-based platforms proposes to standardize certain aspects of radio development, such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and to determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value through common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges for radio development in the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies, properly adapted to space. The talk describes NASA's current effort to investigate SDR applications for space missions and gives a brief overview of a candidate architecture under consideration for space-based platforms.

  7. IAIMS Architecture

    PubMed Central

    Hripcsak, George

    1997-01-01

    An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  8. An integrated pipeline of open source software adapted for multi-CPU architectures: use in the large-scale identification of single nucleotide polymorphisms.

    PubMed

    Jayashree, B; Hanspal, Manindra S; Srinivasan, Rajgopal; Vigneshwaran, R; Varshney, Rajeev K; Spurthi, N; Eshwar, K; Ramesh, N; Chandra, S; Hoisington, David A

    2007-01-01

    The large amounts of EST sequence data available from a single species of an organism as well as for several species within a genus provide an easy source of identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the data available are numerous, given the degree of redundancy in the deposited EST data. There are several available bioinformatics tools that can be used to mine this data; however, using them requires a certain level of expertise: the tools have to be used sequentially with accompanying format conversion and steps like clustering and assembly of sequences become time-intensive jobs even for moderately sized datasets. We report here a pipeline of open source software extended to run on multiple CPU architectures that can be used to mine large EST datasets for SNPs and identify restriction sites for assaying the SNPs so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented to run on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the developed pipeline by mining chickpea ESTs for interspecies SNPs, development of CAPS assays for SNP genotyping, and confirmation of restriction digestion pattern at the sequence level.

  9. Telescope Adaptive Optics Code

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It can generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical one. Secondly, it can simulate an adaptive optics control system; the default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
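
The FFT-based phase-screen generation mentioned above can be sketched as follows: shape complex white noise by the Kolmogorov amplitude spectrum (proportional to k^(-11/6)) and inverse-transform. This is an editorial NumPy sketch, not the Telescope AO Code itself; the grid size, Fried parameter, and normalization constant are illustrative assumptions.

```python
import numpy as np

# Sketch of single-layer Kolmogorov phase-screen generation via FFT:
# white noise is filtered by the Kolmogorov spectrum and inverse-transformed.
# (The low-order Karhunen-Loeve correction from the abstract is omitted.)

n, dx, r0 = 256, 0.02, 0.1           # grid points, spacing (m), Fried parameter (m)
fx = np.fft.fftfreq(n, dx)
kx, ky = np.meshgrid(fx, fx)
k = np.hypot(kx, ky)
k[0, 0] = 1.0                        # avoid division by zero at the piston term

# Kolmogorov amplitude spectrum ~ k^(-11/6); the 0.023*r0^(-5/3) constant
# follows the usual Kolmogorov scaling but is an assumed normalization here.
amp = np.sqrt(0.023 * r0 ** (-5 / 3)) * k ** (-11 / 6)
amp[0, 0] = 0.0                      # remove the piston mode

rng = np.random.default_rng(2)
noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
screen = np.real(np.fft.ifft2(amp * noise)) * n / dx   # rough normalization

print("phase screen rms (rad):", screen.std())
```

As the abstract notes, an FFT screen alone under-represents low spatial frequencies, which is why the code adds Karhunen-Loeve modes on top of this step.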

  10. An Architecture to Enable Future Sensor Webs

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Caffrey, Robert; Frye, Stu; Grosvenor, Sandra; Hess, Melissa; Chien, Steve; Sherwood, Rob; Davies, Ashley; Hayden, Sandra; Sweet, Adam

    2004-01-01

    A sensor web is a coherent set of distributed 'nodes', interconnected by a communications fabric, that collectively behave as a single dynamic observing system. A 'plug and play' mission architecture enables progressive mission autonomy and rapid assembly and thereby enables sensor webs. This viewgraph presentation addresses: Target mission messaging architecture; Strategy to establish architecture; Progressive autonomy with onboard sensor web; EO-1; Adaptive array antennas (smart antennas) for satellite ground stations.

  11. More About Architecture For Intelligent Robotic Control

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Chang, Jeffrey

    1992-01-01

    Boolean neural networks proposed to implement part of intermediate level of hierarchical architecture of control system for artificially intelligent control of robot hand. Concept described in "Architecture for Intelligent Control of Robotic Tasks" (NPO-17871). Rule level of architecture implemented in two Boolean neural networks operated and updated in alternation. No explicit programming of networks. Internal configuration not unique but depends on initial state and history of previous adaptations. Accepts new rules sequentially presented by external controller.

  12. Class Architecture.

    ERIC Educational Resources Information Center

    Crosbie, Michael J.

    This compendium contains more than 40 schools that show new directions in design and the changing demands on this building type. It discusses the design challenges in new schools and how each one of the projects meets the demands of an architecture for learning. An introduction by architect Raymond Bordwell explains many of the trends in new…

  13. Architectural Tops

    ERIC Educational Resources Information Center

    Mahoney, Ellen

    2010-01-01

    The development of the skyscraper is an American story that combines architectural history, economic power, and technological achievement. Each city in the United States can be identified by the profile of its buildings. The design of the tops of skyscrapers was the inspiration for the students in the author's high-school ceramic class to develop…

  14. Architectural Drafting.

    ERIC Educational Resources Information Center

    Davis, Ronald; Yancey, Bruce

    Designed to be used as a supplement to a two-book course in basic drafting, these instructional materials consisting of 14 units cover the process of drawing all working drawings necessary for residential buildings. The following topics are covered in the individual units: introduction to architectural drafting, lettering and tools, site…

  15. Architectural Models

    ERIC Educational Resources Information Center

    Levenson, Harold E.; Hurni, Andre

    1978-01-01

    Suggests building models as a way to reinforce and enhance related subjects such as architectural drafting, structural carpentry, etc., and discusses time, materials, scales, tools or equipment needed, how to achieve realistic special effects, and the types of projects that can be built (model of complete building, a panoramic model, and model…

  16. [The architectural design of psychiatric care buildings].

    PubMed

    Dunet, Lionel

    2012-01-01

    The architectural design of psychiatric care buildings. In addition to certain "classic" creations, the Dunet architectural office has designed several units for difficult patients as well as a specially adapted hospitalisation unit. These creations, which are demanding in terms of the organisation of care, require close consultation with the nursing teams. Testimony of an architect who is particularly engaged in the world of psychiatry.

  17. Changing School Architecture in Zurich

    ERIC Educational Resources Information Center

    Ziegler, Mark; Kurz, Daniel

    2008-01-01

    Changes in the way education is delivered have contributed to the evolution of school architecture in Zurich, Switzerland. The City of Zurich has revised its guidelines for designing school buildings, both new and old. Adapting older buildings to today's needs presents a particular challenge. The authors explain what makes up a good school building…

  18. Shaping plant architecture

    PubMed Central

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs.

  19. Shaping plant architecture.

    PubMed

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs.

  20. Complexity and Performance Results for Non FFT-Based Univariate Polynomial Multiplication

    NASA Astrophysics Data System (ADS)

    Chowdhury, Muhammad F. I.; Maza, Marc Moreno; Pan, Wei; Schost, Eric

    2011-11-01

    Today's parallel hardware architectures and computer memory hierarchies call for revisiting fundamental algorithms that were often designed with algebraic complexity as the main complexity measure and sequential running time as the main performance counter. This study is devoted to two algorithms for univariate polynomial multiplication that are independent of the coefficient ring: the plain and the Toom-Cook univariate multiplications. We analyze their cache complexity and report on their parallel implementations in Cilk++ [1].
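
The two algorithm families contrasted in the abstract can be sketched side by side: the plain (schoolbook) product, and Karatsuba, the simplest (Toom-2) member of the Toom-Cook family, which replaces four half-size products with three. This editorial Python sketch uses integer coefficients for illustration, though both algorithms work over any coefficient ring; the paper's cache-oblivious, Cilk++ versions are not reproduced.

```python
# Plain vs Toom-2 (Karatsuba) univariate polynomial multiplication,
# both ring-independent; coefficient lists are ordered low degree first.

def plain_mul(a, b):
    """Schoolbook product, O(n^2) ring operations."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def add_poly(a, b):
    out = [0] * max(len(a), len(b))
    for i, v in enumerate(a):
        out[i] += v
    for i, v in enumerate(b):
        out[i] += v
    return out

def sub_poly(a, b):
    out = list(a) + [0] * max(0, len(b) - len(a))
    for i, v in enumerate(b):
        out[i] -= v
    return out

def karatsuba(a, b):
    """Toom-2 product of two equal-length coefficient lists:
    three half-size recursive products instead of four."""
    n = len(a)
    if n <= 4:
        return plain_mul(a, b)
    m = n // 2
    a0, a1, b0, b1 = a[:m], a[m:], b[:m], b[m:]
    lo = karatsuba(a0, b0)                       # a0*b0
    hi = karatsuba(a1, b1)                       # a1*b1
    mid = sub_poly(sub_poly(                     # (a0+a1)(b0+b1) - lo - hi
        karatsuba(add_poly(a0, a1), add_poly(b0, b1)), lo), hi)
    out = [0] * (2 * n - 1)                      # lo + x^m*mid + x^(2m)*hi
    for i, v in enumerate(lo):
        out[i] += v
    for i, v in enumerate(mid):
        out[i + m] += v
    for i, v in enumerate(hi):
        out[i + 2 * m] += v
    return out

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
assert karatsuba(a, b) == plain_mul(a, b)        # both methods agree
```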

  1. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  2. Investigation of hidden periodic structures on SEM images of opal-like materials using FFT and IFFT.

    PubMed

    Stephant, Nicolas; Rondeau, Benjamin; Gauthier, Jean-Pierre; Cody, Jason A; Fritsch, Emmanuel

    2014-01-01

    We have developed a method to use fast Fourier transformation (FFT) and inverse fast Fourier transformation (IFFT) to investigate hidden periodic structures on SEM images. We focused on samples of natural, play-of-color opals that diffract visible light and hence are periodically structured. Conventional sample preparation by hydrofluoric acid etch was not used; untreated, freshly broken surfaces were examined at low magnification relative to the expected period of the structural features, and the SEM was adjusted to get a very high number of pixels in the images. These SEM images were treated by software to calculate autocorrelation, FFT, and IFFT. We present how we adjusted SEM acquisition parameters for best results. We first applied our procedure on an SEM image on which the structure was obvious. Then, we applied the same procedure on a sample that must contain a periodic structure because it diffracts visible light, but on which no structure was visible on the SEM image. In both cases, we obtained clearly periodic patterns that allowed measurements of structural parameters. We also investigated how the irregularly broken surface interfered with the periodic structure to produce additional periodicity. We tested the limits of our methodology with the help of simulated images. PMID:24752811
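
The FFT/IFFT workflow described above can be demonstrated on a synthetic image: a weak periodic lattice buried in strong noise is invisible in real space, but thresholding the FFT magnitude and inverse-transforming recovers it. This is an editorial NumPy sketch; the lattice period and the 99.9th-percentile threshold are illustrative choices, not values from the paper.

```python
import numpy as np

# Recover a hidden periodic structure: FFT, keep dominant peaks, IFFT.

rng = np.random.default_rng(3)
x, y = np.meshgrid(np.arange(256), np.arange(256))
lattice = 0.2 * (np.sin(2 * np.pi * x / 16) + np.sin(2 * np.pi * y / 16))
image = lattice + rng.standard_normal(lattice.shape)   # noise dominates

F = np.fft.fft2(image)
mag = np.abs(F)
mag[0, 0] = 0                                   # ignore the DC term
mask = mag > np.percentile(mag, 99.9)           # keep only dominant peaks
recovered = np.real(np.fft.ifft2(np.where(mask, F, 0)))

# the strongest off-DC peaks sit at the lattice frequency (index 16 or its
# conjugate 240 along one axis), from which the period can be measured
peak = np.unravel_index(np.argmax(mag), mag.shape)
print("dominant spatial frequency index:", peak)
```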

  3. A comparative study on low-memory iterative solvers for FFT-based homogenization of periodic media

    NASA Astrophysics Data System (ADS)

    Mishra, Nachiketa; Vondřejc, Jaroslav; Zeman, Jan

    2016-09-01

    In this paper, we assess the performance of four iterative algorithms for solving non-symmetric rank-deficient linear systems arising in the FFT-based homogenization of heterogeneous materials defined by digital images. Our framework is based on the Fourier-Galerkin method with exact and approximate integrations that has recently been shown to generalize the Lippmann-Schwinger setting of the original work by Moulinec and Suquet from 1994. It follows from this variational format that the ensuing system of linear equations can be solved by general-purpose iterative algorithms for symmetric positive-definite systems, such as the Richardson, the Conjugate gradient, and the Chebyshev algorithms, that are compared here to the Eyre-Milton scheme - the most efficient specialized method currently available. Our numerical experiments, carried out for two-dimensional elliptic problems, reveal that the Conjugate gradient algorithm is the most efficient option, while the Eyre-Milton method performs comparably to the Chebyshev semi-iteration. The Richardson algorithm, equivalent to the still widely used original Moulinec-Suquet solver, exhibits the slowest convergence. Besides this, we hope that our study highlights the potential of the well-established techniques of numerical linear algebra to further increase the efficiency of FFT-based homogenization methods.
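
The paper's ranking of solvers can be illustrated on a generic symmetric positive-definite system (an editorial stand-in, not the homogenization operator itself): conjugate gradients converges far faster than Richardson iteration at the same cost per step, since Richardson's contraction factor depends linearly on the condition number while CG's depends on its square root.

```python
import numpy as np

# Richardson vs conjugate gradient on an assumed SPD test system.

rng = np.random.default_rng(4)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1, 50, n)) @ Q.T      # SPD, condition number 50
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

def richardson(A, b, iters, omega):
    """Richardson iteration x <- x + omega*(b - A x)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + omega * (b - A @ x)
    return x

def conjugate_gradient(A, b, iters):
    """Standard CG for SPD systems."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

k = 50
omega = 2 / (1 + 50)                               # optimal for eigenrange [1, 50]
err_rich = np.linalg.norm(richardson(A, b, k, omega) - x_true)
err_cg = np.linalg.norm(conjugate_gradient(A, b, k) - x_true)
print(f"after {k} iterations: Richardson err {err_rich:.2e}, CG err {err_cg:.2e}")
```

In the FFT-based setting, each `A @ p` is replaced by an FFT, a pointwise operation, and an inverse FFT, so the per-iteration cost comparison carries over.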

  4. Density measurement of yarn dyed woven fabrics based on dual-side scanning and the FFT technique

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Xin, Binjie; Wu, Xiangji

    2014-11-01

    The yarn density measurement, as part of fabric analysis, is very important for the textile manufacturing process and is traditionally based on single-side analysis. In this paper, a new method, suitable for yarn dyed woven fabrics, is developed, based on dual-side scanning and the fast Fourier transform (FFT) technique for yarn density measurement, instead of one-side image analysis. Firstly, the dual-side scanning method based on the Radon transform (RT) is used for the image registration of both side images of the woven fabric; a lab-used imaging system is established to capture the images of each side. Secondly, the merged image from the dual-side fabric images can be generated using three self-developed image fusion methods. Thirdly, the yarn density can be measured based on the merged image using FFT and inverse fast Fourier transform (IFFT) processing. The effects of yarn color and weave pattern on the density measurement have been investigated for the optimization of the proposed method. Our experimental results show that the proposed method works better than the conventional analysis method in terms of both the accuracy and robustness.
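
The FFT step of such a density measurement can be sketched in 1D: the number of yarns across the image appears as the dominant peak in the spectrum of a brightness profile. This editorial sketch uses a synthetic profile (40 yarns across 800 pixels) as an assumed stand-in for a scan line of the merged fabric image.

```python
import numpy as np

# Estimate yarn count from the dominant FFT peak of a brightness profile.

n_pixels, n_yarns = 800, 40
x = np.arange(n_pixels)
profile = 1 + np.cos(2 * np.pi * n_yarns * x / n_pixels)   # periodic yarn shading
profile += 0.1 * np.random.default_rng(5).standard_normal(n_pixels)

spectrum = np.abs(np.fft.rfft(profile))
spectrum[0] = 0                      # drop the mean (DC) component
estimated_yarns = int(np.argmax(spectrum))
print("estimated yarn count across the image:", estimated_yarns)   # 40
```

Dividing the estimated count by the physical image width then gives the yarn density.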

  5. Investigation of hidden periodic structures on SEM images of opal-like materials using FFT and IFFT.

    PubMed

    Stephant, Nicolas; Rondeau, Benjamin; Gauthier, Jean-Pierre; Cody, Jason A; Fritsch, Emmanuel

    2014-01-01

    We have developed a method to use fast Fourier transformation (FFT) and inverse fast Fourier transformation (IFFT) to investigate hidden periodic structures on SEM images. We focused on samples of natural, play-of-color opals that diffract visible light and hence are periodically structured. Conventional sample preparation by hydrofluoric acid etch was not used; untreated, freshly broken surfaces were examined at low magnification relative to the expected period of the structural features, and, the SEM was adjusted to get a very high number of pixels in the images. These SEM images were treated by software to calculate autocorrelation, FFT, and IFFT. We present how we adjusted SEM acquisition parameters for best results. We first applied our procedure on an SEM image on which the structure was obvious. Then, we applied the same procedure on a sample that must contain a periodic structure because it diffracts visible light, but on which no structure was visible on the SEM image. In both cases, we obtained clearly periodic patterns that allowed measurements of structural parameters. We also investigated how the irregularly broken surface interfered with the periodic structure to produce additional periodicity. We tested the limits of our methodology with the help of simulated images.

  6. Modular robotic architecture

    NASA Astrophysics Data System (ADS)

    Smurlo, Richard P.; Laird, Robin T.

    1991-03-01

    The development of control architectures for mobile systems is typically a task undertaken anew with each application. These architectures address different operational needs and tend to be difficult to adapt beyond the problem at hand. The development of a flexible and extendible control system with evolutionary growth potential for use on mobile robots will help alleviate these problems and, if made widely available, will promote standardization and compatibility among systems throughout the industry. The Modular Robotic Architecture (MRA) is a generic control system that meets these needs by providing developers with a standard set of software and hardware tools that can be used to design modular robots (MODBOTs) with nearly unlimited growth potential. The MODBOT itself is a generic creature that must be customized by the developer for a particular application. The MRA facilitates customization of the MODBOT by providing sensor, actuator, and processing modules that can be configured in almost any manner as demanded by the application. The Mobile Security Robot (MOSER) is an instance of a MODBOT that is being developed using the MRA. [Figure 1. Remote platform module configuration of the Mobile Security Robot (MOSER).]

  7. Architecture for autonomy

    NASA Astrophysics Data System (ADS)

    Broten, Gregory S.; Monckton, Simon P.; Collier, Jack; Giesbrecht, Jared

    2006-05-01

    In 2002 Defence R&D Canada changed research direction from pure tele-operated land vehicles to general autonomy for land, air, and sea craft. The unique constraints of the military environment coupled with the complexity of autonomous systems drove DRDC to carefully plan a research and development infrastructure that would provide state of the art tools without restricting research scope. DRDC's long term objectives for its autonomy program address disparate unmanned ground vehicle (UGV), unattended ground sensor (UGS), air (UAV), and subsea and surface (UUV and USV) vehicles operating together with minimal human oversight. Individually, these systems will range in complexity from simple reconnaissance mini-UAVs streaming video to sophisticated autonomous combat UGVs exploiting embedded and remote sensing. Together, these systems can provide low risk, long endurance, battlefield services assuming they can communicate and cooperate with manned and unmanned systems. A key enabling technology for this new research is a software architecture capable of meeting both DRDC's current and future requirements. DRDC built upon recent advances in the computing science field while developing its software architecture known as the Architecture for Autonomy (AFA). Although a well established practice in computing science, frameworks have only recently entered common use by unmanned vehicles. For industry and government, the complexity, cost, and time to re-implement stable systems often exceeds the perceived benefits of adopting a modern software infrastructure. Thus, most persevere with legacy software, adapting and modifying software when and wherever possible or necessary -- adopting strategic software frameworks only when no justifiable legacy exists. Conversely, academic programs with short one or two year projects frequently exploit strategic software frameworks but with little enduring impact. The open-source movement radically changes this picture. Academic frameworks

  8. Adaptive Controller for Compact Fourier Transform Spectrometer with Space Applications

    NASA Astrophysics Data System (ADS)

    Keymeulen, D.; Yiu, P.; Berisford, D. F.; Hand, K. P.; Carlson, R. W.; Conroy, M.

    2014-12-01

    Here we present noise mitigation techniques developed as part of an adaptive controller for a very compact Compositional InfraRed Interferometric Spectrometer (CIRIS) implemented on a stand-alone field programmable gate array (FPGA) architecture with emphasis on space applications in high radiation environments such as Europa. CIRIS is a novel take on traditional Fourier Transform Spectrometers (FTS) and replaces linearly moving mirrors (characteristic of Michelson interferometers) with a constant-velocity rotating refractor to variably phase shift and alter the path length of incoming light. The design eschews a monochromatic reference laser typically used for sampling clock generation and instead utilizes constant time-sampling via internally generated clocks. This allows for a compact and robust device, making it ideal for spaceborne measurements in the near-IR to thermal-IR band (2-12 µm) on planetary exploration missions. The instrument's embedded microcontroller is implemented on a VIRTEX-5 FPGA and a PowerPC with the aim of sampling the instrument's detector and optical rotary encoder in order to construct interferograms. Subsequent onboard signal processing provides spectral immunity from the noise effects introduced by the compact design's removal of a reference laser and by the radiation encountered during space flight to destinations such as Europa. A variety of signal processing techniques including resampling, radiation peak removal, Fast Fourier Transform (FFT), spectral feature alignment, dispersion correction and calibration processes are applied to compose the sample spectrum in real-time with signal-to-noise-ratio (SNR) performance comparable to laser-based FTS designs in radiation-free environments. The instrument's FPGA controller is demonstrated with the FTS to characterize its noise mitigation techniques and highlight its suitability for implementation in space systems.
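
The final FFT step of any Fourier Transform Spectrometer pipeline can be sketched on a synthetic signal: a monochromatic source produces a cosine interferogram whose FFT peaks at the source wavenumber. This is an editorial illustration of the generic principle; the sampling parameters below are assumptions, not CIRIS instrument values.

```python
import numpy as np

# Recover a source wavenumber from a synthetic FTS interferogram via FFT.

n = 4096
opd = np.arange(n) * 1e-4                  # optical path difference samples (cm)
sigma = 2500.0                             # source wavenumber (cm^-1), ~4 um
interferogram = 1 + np.cos(2 * np.pi * sigma * opd)

spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(n, d=1e-4)   # spectral axis in cm^-1
peak_sigma = wavenumbers[np.argmax(spectrum)]
print(f"recovered wavenumber: {peak_sigma:.0f} cm^-1")
```

In the instrument, this step runs after resampling, radiation-peak removal, and dispersion correction, which is what preserves the SNR without a reference laser.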

  9. On the Inevitable Intertwining of Requirements and Architecture

    NASA Astrophysics Data System (ADS)

    Sutcliffe, Alistair

    The chapter investigates the relationship between architecture and requirements, arguing that architectural issues need to be addressed early in the RE process. Three trends are driving architectural implications for RE: the growth of intelligent, context-aware and adaptable systems. First the relationship between architecture and requirements is considered from a theoretical viewpoint of problem frames and abstract conceptual models. The relationships between architectural decisions and non-functional requirements is reviewed, and then the impact of architecture on the RE process is assessed using a case study of developing configurable, semi-intelligent software to support medical researchers in e-science domains.

  10. FFT integration of instantaneous 3D pressure gradient fields measured by Lagrangian particle tracking in turbulent flows

    NASA Astrophysics Data System (ADS)

    Huhn, F.; Schanz, D.; Gesemann, S.; Schröder, A.

    2016-09-01

    Pressure gradient fields in unsteady flows can be estimated through flow measurements of the material acceleration in the fluid and the assumption of the governing momentum equation. In order to derive pressure from its gradient, almost exclusively two numerical methods have been used to spatially integrate the pressure gradient until now: first, direct path integration in the spatial domain, and second, the solution of the Poisson equation for pressure. Instead, we propose an alternative third method that integrates the pressure gradient field in Fourier space. Using a FFT function, the method is fast and easy to implement in programming languages for scientific computing. We demonstrate the accuracy of the integration scheme on a synthetic pressure field and apply it to an experimental example based on time-resolved material acceleration data from high-resolution Lagrangian particle tracking with the Shake-The-Box method.
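
The Fourier-space integration idea can be sketched on a synthetic periodic field: transforming the gradient components and applying the least-squares inversion p_hat = -i(kx*gx_hat + ky*gy_hat)/(kx^2 + ky^2) recovers the pressure up to its arbitrary mean. This editorial NumPy sketch assumes a periodic 2D grid and a synthetic test field; the paper's handling of non-periodic measurement domains is not reproduced.

```python
import numpy as np

# FFT integration of a 2D pressure gradient field on a periodic grid.

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
p_true = np.sin(X) * np.cos(2 * Y)               # synthetic periodic pressure

kx = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing="ij")

# gradients computed spectrally here; in an experiment they come from the
# measured material acceleration via the momentum equation
gx = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(p_true)))
gy = np.real(np.fft.ifft2(1j * KY * np.fft.fft2(p_true)))

k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                    # avoid 0/0 at the mean mode
p_hat = -1j * (KX * np.fft.fft2(gx) + KY * np.fft.fft2(gy)) / k2
p_hat[0, 0] = 0.0                                 # pressure mean is arbitrary
p_rec = np.real(np.fft.ifft2(p_hat))

assert np.allclose(p_rec, p_true, atol=1e-8)      # recovered up to the mean
```

Because the whole inversion is two forward FFTs, a pointwise division, and one inverse FFT, it is both fast and trivial to implement, which is the method's advantage over path integration and Poisson solvers.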

  11. Lab architecture

    NASA Astrophysics Data System (ADS)

    Crease, Robert P.

    2008-04-01

    There are few more dramatic illustrations of the vicissitudes of laboratory architecture than the contrast between Building 20 at the Massachusetts Institute of Technology (MIT) and its replacement, the Ray and Maria Stata Center. Building 20 was built hurriedly in 1943 as temporary housing for MIT's famous Rad Lab, the site of wartime radar research, and it remained a productive laboratory space for over half a century. A decade ago it was demolished to make way for the Stata Center, an architecturally striking building designed by Frank Gehry to house MIT's computer science and artificial intelligence labs (above). But in 2004 - just two years after the Stata Center officially opened - the building was criticized for being unsuitable for research and became the subject of still ongoing lawsuits alleging design and construction failures.

  12. Evolution of genome architecture.

    PubMed

    Koonin, Eugene V

    2009-02-01

    Charles Darwin believed that all traits of organisms have been honed to near perfection by natural selection. The empirical basis underlying Darwin's conclusions consisted of numerous observations made by him and other naturalists on the exquisite adaptations of animals and plants to their natural habitats and on the impressive results of artificial selection. Darwin fully appreciated the importance of heredity but was unaware of the nature and, in fact, the very existence of genomes. A century and a half after the publication of the "Origin", we have the opportunity to draw conclusions from the comparisons of hundreds of genome sequences from all walks of life. These comparisons suggest that the dominant mode of genome evolution is quite different from that of the phenotypic evolution. The genomes of vertebrates, those purported paragons of biological perfection, turned out to be veritable junkyards of selfish genetic elements where only a small fraction of the genetic material is dedicated to encoding biologically relevant information. In sharp contrast, genomes of microbes and viruses are incomparably more compact, with most of the genetic material assigned to distinct biological functions. However, even in these genomes, the specific genome organization (gene order) is poorly conserved. The results of comparative genomics lead to the conclusion that the genome architecture is not a straightforward result of continuous adaptation but rather is determined by the balance between the selection pressure, that is itself dependent on the effective population size and mutation rate, the level of recombination, and the activity of selfish elements. Although genes and, in many cases, multigene regions of genomes possess elaborate architectures that ensure regulation of expression, these arrangements are evolutionarily volatile and typically change substantially even on short evolutionary scales when gene sequences diverge minimally. Thus, the observed genome

  13. Simulation of a Reconfigurable Adaptive Control Architecture

    NASA Astrophysics Data System (ADS)

    Rapetti, Ryan John

    A set of algorithms and software components are developed to investigate the use of a priori models of damaged aircraft to improve control of similarly damaged aircraft. An addition to Model Predictive Control called state trajectory extrapolation is also developed to deliver good handling qualities in nominal and off-nominal aircraft. System identification algorithms are also used to improve model accuracy after a damage event. Simulations were run to demonstrate the efficacy of the algorithms and software components developed herein. The effect of model order on system identification convergence and performance is also investigated. A feasibility study for flight testing is also conducted. A preliminary hardware prototype was developed, as was the necessary software to integrate the avionics and ground station systems. Simulation results show significant improvement in both tracking and cross-coupling performance when a priori control models are used, and further improvement when identified models are used.

  14. An efficient hybrid MLFMA-FFT solver for the volume integral equation in case of sparse 3D inhomogeneous dielectric scatterers

    SciTech Connect

    De Zaeytijd, J.; Bogaert, I.; Franchois, A.

    2008-07-01

    Electromagnetic scattering problems involving inhomogeneous objects can be numerically solved by applying a Method of Moments discretization to the volume integral equation. For electrically large problems, the iterative solution of the resulting linear system is expensive, both computationally and in memory use. In this paper, a hybrid MLFMA-FFT method is presented, which combines the fast Fourier transform (FFT) method and the High Frequency Multilevel Fast Multipole Algorithm (MLFMA) in order to reduce the cost of the matrix-vector multiplications needed in the iterative solver. The method represents the scatterers within a set of possibly disjoint identical cubic subdomains, which are meshed using a uniform cubic grid. This specific mesh allows for the application of FFTs to calculate the near interactions in the MLFMA and reduces the memory cost considerably, since the aggregation and disaggregation matrices of the MLFMA can be reused. Additional improvements to the general MLFMA framework, such as an extension of the FFT interpolation scheme of Sarvas et al. from the scalar to the vectorial case in combination with a more economical representation of the radiation patterns on the lowest level in vector spherical harmonics, are proposed and the choice of the subdomain size is discussed. The hybrid method performs better in terms of speed and memory use on large sparse configurations than both the FFT method and the HF MLFMA separately and it has lower memory requirements on general large problems. This is illustrated on a number of representative numerical test cases.

  15. An Efficient Circulant MIMO Equalizer for CDMA Downlink: Algorithm and VLSI Architecture

    NASA Astrophysics Data System (ADS)

    Guo, Yuanbin; Zhang, Jianzhong(Charlie); McCain, Dennis; Cavallaro, Joseph R.

    2006-12-01

    We present an efficient circulant approximation-based MIMO equalizer architecture for the CDMA downlink. This reduces the direct matrix inverse (DMI) of size [InlineEquation not available: see fulltext.] with [InlineEquation not available: see fulltext.] complexity to some FFT operations with [InlineEquation not available: see fulltext.] complexity and the inverse of some [InlineEquation not available: see fulltext.] submatrices. We then propose parallel and pipelined VLSI architectures with Hermitian optimization and reduced-state FFT for further complexity optimization. Generic VLSI architectures are derived for the [InlineEquation not available: see fulltext.] high-order receiver from partitioned [InlineEquation not available: see fulltext.] submatrices. This leads to a more parallel VLSI design with [InlineEquation not available: see fulltext.] further complexity reduction. Comparative study with both the conjugate-gradient and DMI algorithms shows very promising performance/complexity tradeoff. VLSI design space in terms of area/time efficiency is explored extensively for layered parallelism and pipelining with a Catapult C high-level-synthesis methodology.
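    The core trick behind circulant-approximation equalizers is that a circulant matrix is diagonalized by the DFT, so an n-by-n solve collapses to three FFTs and a pointwise division. A minimal numerical sketch of that property follows (illustrative only; it is not the paper's VLSI design, and the function name is mine).

    ```python
    import numpy as np

    def solve_circulant_fft(c, b):
        """Solve C x = b where C is circulant with first column c.
        Since C x is a circular convolution, fft(C x) = fft(c) * fft(x)."""
        lam = np.fft.fft(c)                      # eigenvalues of C
        return np.fft.ifft(np.fft.fft(b) / lam)

    # usage: compare against a dense solve on a small random system
    rng = np.random.default_rng(0)
    n = 8
    c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    C = np.array([np.roll(c, k) for k in range(n)]).T   # dense circulant, C[i, j] = c[(i - j) % n]
    x = solve_circulant_fft(c, b)
    print(np.allclose(C @ x, b))  # True
    ```

    The O(n log n) FFT solve replaces the O(n^3) direct matrix inverse, which is the complexity reduction the abstract refers to.
    
    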

  16. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
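    The recursive least-squares indirect adaptive law mentioned above has a standard scalar-measurement form: a gain vector weights the prediction error to correct the parameter estimate at each step. The sketch below is a generic RLS estimator with exponential forgetting, not the paper's flight-control implementation; the class name and test signal are mine.

    ```python
    import numpy as np

    class RLS:
        """Recursive least-squares estimate of theta in y = phi . theta,
        as used for on-line plant parameter identification."""
        def __init__(self, n, lam=0.99, p0=1e3):
            self.theta = np.zeros(n)     # parameter estimate
            self.P = p0 * np.eye(n)      # estimate covariance
            self.lam = lam               # forgetting factor

        def update(self, phi, y):
            Pphi = self.P @ phi
            k = Pphi / (self.lam + phi @ Pphi)        # gain vector
            self.theta += k * (y - phi @ self.theta)  # correct by prediction error
            self.P = (self.P - np.outer(k, Pphi)) / self.lam
            return self.theta

    # usage: identify a static linear map y = [2, -1] . phi from noisy samples
    rng = np.random.default_rng(2)
    true_theta = np.array([2.0, -1.0])
    est = RLS(2)
    for _ in range(500):
        phi = rng.standard_normal(2)
        y = phi @ true_theta + 1e-3 * rng.standard_normal()
        theta = est.update(phi, y)
    print(np.allclose(theta, true_theta, atol=1e-2))  # True
    ```

    In an indirect adaptive architecture, the converged estimate would feed the model inversion controller rather than being reported directly.
    
    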

  17. Architecture as Design Study.

    ERIC Educational Resources Information Center

    Kauppinen, Heta

    1989-01-01

    Explores the use of analogies in architectural design, the importance of Gestalt theory and aesthetic canons in understanding and being sensitive to architecture. Emphasizes the variation between public and professional appreciation of architecture. Notes that an understanding of architectural process enables students to improve the aesthetic…

  18. Adaptive network countermeasures.

    SciTech Connect

    McClelland-Bane, Randy; Van Randwyk, Jamie A.; Carathimas, Anthony G.; Thomas, Eric D.

    2003-10-01

    This report describes the results of a two-year LDRD funded by the Differentiating Technologies investment area. The project investigated the use of countermeasures in protecting computer networks as well as how current countermeasures could be changed in order to adapt with both evolving networks and evolving attackers. The work involved collaboration between Sandia employees and students in the Sandia - California Center for Cyber Defenders (CCD) program. We include an explanation of the need for adaptive countermeasures, a description of the architecture we designed to provide adaptive countermeasures, and evaluations of the system.

  19. Applications of the conjugate gradient FFT method in scattering and radiation including simulations with impedance boundary conditions

    NASA Technical Reports Server (NTRS)

    Barkeshli, Kasra; Volakis, John L.

    1991-01-01

    The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature which eliminates a need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus, eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
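    The key property the abstract highlights, an iterative solver that never forms the system matrix and performs its matrix-vector products by FFT, can be illustrated with a plain conjugate-gradient loop applied to a convolution-type operator. This is a toy sketch of the CGFFT idea under stated assumptions (a symmetric positive-definite "identity plus circular convolution" operator), not the authors' electromagnetic formulation.

    ```python
    import numpy as np

    def cg(matvec, b, tol=1e-10, maxiter=200):
        """Plain conjugate gradient; the operator enters only through its
        matrix-vector product, so the system matrix is never stored."""
        x = np.zeros_like(b)
        r = b - matvec(x)
        p = r.copy()
        rs = r @ r
        for _ in range(maxiter):
            Ap = matvec(p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # A x = x + k (*) x (circular convolution), applied in O(n log n) via FFT
    n = 128
    k = np.zeros(n)
    k[0], k[1], k[-1] = 0.5, 0.2, 0.2     # symmetric kernel -> spectrum 0.5 + 0.4 cos(.) > 0, A is SPD
    K_hat = np.fft.fft(k).real
    matvec = lambda x: x + np.fft.ifft(K_hat * np.fft.fft(x)).real

    rng = np.random.default_rng(1)
    b = rng.standard_normal(n)
    x = cg(matvec, b)
    print(np.allclose(matvec(x), b))  # True
    ```

    As in the abstract, the memory cost is that of a few vectors and the kernel spectrum, rather than a dense system matrix.
    
    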

  20. Fast acquisition of high resolution 4-D amide-amide NOESY with diagonal suppression, sparse sampling and FFT-CLEAN.

    PubMed

    Werner-Allen, Jon W; Coggins, Brian E; Zhou, Pei

    2010-05-01

    Amide-amide NOESY provides important distance constraints for calculating global folds of large proteins, especially integral membrane proteins with beta-barrel folds. Here, we describe a diagonal-suppressed 4-D NH-NH TROSY-NOESY-TROSY (ds-TNT) experiment for NMR studies of large proteins. The ds-TNT experiment employs a spin state selective transfer scheme that suppresses diagonal signals while providing TROSY optimization in all four dimensions. Active suppression of the strong diagonal peaks greatly reduces the dynamic range of observable signals, making this experiment particularly suitable for use with sparse sampling techniques. To demonstrate the utility of this method, we collected a high resolution 4-D ds-TNT spectrum of a 23 kDa protein using randomized concentric shell sampling (RCSS), and we used FFT-CLEAN processing for further reduction of aliasing artifacts - the first application of these techniques to a NOESY experiment. A comparison of peak parameters in the high resolution 4-D dataset with those from a conventionally-sampled 3-D control spectrum shows an accurate reproduction of NOE crosspeaks in addition to a significant reduction in resonance overlap, which largely eliminates assignment ambiguity. Likewise, a comparison of 4-D peak intensities and volumes before and after application of the CLEAN procedure demonstrates that the reduction of aliasing artifacts by CLEAN does not systematically distort NMR signals.

  1. Applications of the conjugate gradient FFT method in scattering and radiation including simulations with impedance boundary conditions

    NASA Astrophysics Data System (ADS)

    Barkeshli, Kasra; Volakis, John L.

    1991-05-01

    The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature which eliminates a need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus, eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.

  2. Space Telecommunications Radio Architecture (STRS): Technical Overview

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has charted a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges for radio development in the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space based platforms.

  3. Further Development of the FFT-based Method for Atomistic Modeling of Protein Folding and Binding under Crowding: Optimization of Accuracy and Speed.

    PubMed

    Qin, Sanbo; Zhou, Huan-Xiang

    2014-07-01

    Recently, we (Qin, S.; Zhou, H. X. J. Chem. Theory Comput. 2013, 9, 4633-4643) developed the FFT-based method for Modeling Atomistic Protein-crowder interactions, henceforth FMAP. Given its potential wide use for calculating effects of crowding on protein folding and binding free energies, here we aimed to optimize the accuracy and speed of FMAP. FMAP is based on expressing protein-crowder interactions as correlation functions and evaluating the latter via fast Fourier transform (FFT). The numerical accuracy of FFT improves as the grid spacing for discretizing space is reduced, but at increasing computational cost. We sought to speed up FMAP calculations by using a relatively coarse grid spacing of 0.6 Å and then correcting for discretization errors. This strategy was tested for different types of interactions (hard-core repulsion, nonpolar attraction, and electrostatic interaction) and over a wide range of protein-crowder systems. We were able to correct for the numerical errors on hard-core repulsion and nonpolar attraction by an 8% inflation of atomic hard-core radii and on electrostatic interaction by a 5% inflation of the magnitudes of protein atomic charges. The corrected results have higher accuracy and enjoy a speedup of more than 100-fold over those obtained using a fine grid spacing of 0.15 Å. With this optimization of accuracy and speed, FMAP may become a practical tool for realistic modeling of protein folding and binding in cell-like environments.
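    The correlation-function evaluation at the heart of an FFT-based crowding method amounts to a cross-correlation of two occupancy grids, which scores a probe placement at every translation simultaneously instead of one placement at a time. A toy sketch of that kernel (hypothetical function and grid names; hard-core overlap counting only, on a periodic grid):

    ```python
    import numpy as np

    def overlap_counts(crowder_grid, protein_grid):
        """Number of overlapping occupied voxels for every translation of the
        protein grid relative to the crowder grid, via FFT cross-correlation."""
        F = np.fft.fftn(crowder_grid)
        G = np.fft.fftn(protein_grid)
        corr = np.fft.ifftn(F * np.conj(G)).real   # corr[t] = sum_j c[j] * p[j - t]
        return np.rint(corr).astype(int)

    # usage: toy 1-D check against a direct loop over translations
    c = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # "crowder" occupancy
    p = np.array([1, 1, 0, 0, 0, 0, 0, 0])   # "protein" occupancy
    fft_counts = overlap_counts(c, p)
    direct = np.array([(c * np.roll(p, t)).sum() for t in range(len(c))])
    print(np.array_equal(fft_counts, direct))  # True
    ```

    The direct loop costs O(n^2) over n translations, while the FFT route costs O(n log n); that gap is what makes atomistic grids affordable in this class of methods.
    
    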

  4. Adaptive nonlinear flight control

    NASA Astrophysics Data System (ADS)

    Rysdyk, Rolf Theoduor

    1998-08-01

    Research under the supervision of Dr. Calise and Dr. Prasad at the Georgia Institute of Technology, School of Aerospace Engineering, has demonstrated the applicability of an adaptive controller architecture. The architecture successfully combines model inversion control with adaptive neural network (NN) compensation to cancel the inversion error. The tiltrotor aircraft provides a specifically interesting control design challenge. The tiltrotor aircraft is capable of converting from stable responsive fixed wing flight to unstable sluggish hover in helicopter configuration. It is desirable to provide the pilot with consistency in handling qualities through a conversion from fixed wing flight to hover. The linear model inversion architecture was adapted by providing frequency separation in the command filter and the error-dynamics, while not exciting the actuator modes. This design of the architecture provides for a model following setup with guaranteed performance. This in turn allowed for convenient implementation of guaranteed handling qualities. A rigorous proof of boundedness is presented making use of compact sets and the LaSalle-Yoshizawa theorem. The analysis allows for the addition of the e-modification which guarantees boundedness of the NN weights in the absence of persistent excitation. The controller is demonstrated on the Generic Tiltrotor Simulator of Bell-Textron and NASA Ames R.C. The model inversion implementation is robustified with respect to unmodeled input dynamics, by adding dynamic nonlinear damping. A proof of boundedness of signals in the system is included. The effectiveness of the robustification is also demonstrated on the XV-15 tiltrotor. The SHL Perceptron NN provides a more powerful application, based on the universal approximation property of this type of NN. The SHL NN based architecture is also robustified with the dynamic nonlinear damping. A proof of boundedness extends the SHL NN augmentation with robustness to unmodeled actuator

  5. Architecture-Centric Software Quality Management

    NASA Astrophysics Data System (ADS)

    Maciaszek, Leszek A.

    Software quality is a multi-faceted concept defined using different attributes and models. Of all the various quality requirements, the quality of adaptiveness is by far the most critical. Based on this assumption, this paper offers an architecture-centric approach to production of measurably-adaptive systems. The paper uses the PCBMER (Presentation, Controller, Bean, Mediator, Entity, and Resource) meta-architecture to demonstrate how complexity of a software solution can be measured and kept under control in standalone applications. Meta-architectural extensions aimed at managing quality in integration development projects are also introduced. The DSM (Design Structure Matrix) method is used to explain our approach to measure the quality. The discussion is conducted against the background of the holonic approach to science (as the middle-ground between holism and reductionism).

  6. New computer architectures

    SciTech Connect

    Tiberghien, J.

    1984-01-01

    This book presents papers on supercomputers. Topics considered include decentralized computer architecture, new programming languages, data flow computers, reduction computers, parallel prefix calculations, structural and behavioral descriptions of digital systems, instruction sets, software generation, personal computing, and computer architecture education.

  7. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  8. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  9. Distributed multiport memory architecture

    NASA Technical Reports Server (NTRS)

    Kohl, W. H. (Inventor)

    1983-01-01

    A multiport memory architecture is disclosed for each of a plurality of task centers connected to a command and data bus. Each task center includes a memory and a plurality of devices which request direct memory access as needed. The memory includes an internal data bus and an internal address bus to which the devices are connected, and direct timing and control logic comprised of a 10-state ring counter for allocating memory devices by enabling AND gates connected to the request signal lines of the devices. The outputs of AND gates connected to the same device are combined by OR gates to form an acknowledgement signal that enables the devices to address the memory during the next clock period. The length of the ring counter may be effectively lengthened to any multiple of ten to allow for more direct memory access intervals in one repetitive sequence. One device is a network bus adapter which serially shifts onto the command and data bus, a data word (8 bits plus control and parity bits) during the next ten direct memory access intervals after it has been granted access. The NBA is therefore allocated only one access in every ten intervals, which is a predetermined interval for all centers. The ring counters of all centers are periodically synchronized by a DMA SYNC signal to ensure that all NBAs are able to function in synchronism for data transfer from one center to another.

  10. Genomics of local adaptation with gene flow.

    PubMed

    Tigano, Anna; Friesen, Vicki L

    2016-05-01

    Gene flow is a fundamental evolutionary force in adaptation that is especially important to understand as humans are rapidly changing both the natural environment and natural levels of gene flow. Theory proposes a multifaceted role for gene flow in adaptation, but it focuses mainly on the disruptive effect that gene flow has on adaptation when selection is not strong enough to prevent the loss of locally adapted alleles. The role of gene flow in adaptation is now better understood due to the recent development of both genomic models of adaptive evolution and genomic techniques, which both point to the importance of genetic architecture in the origin and maintenance of adaptation with gene flow. In this review, we discuss three main topics on the genomics of adaptation with gene flow. First, we investigate selection on migration and gene flow. Second, we discuss the three potential sources of adaptive variation in relation to the role of gene flow in the origin of adaptation. Third, we explain how local adaptation is maintained despite gene flow: we provide a synthesis of recent genomic models of adaptation, discuss the genomic mechanisms and review empirical studies on the genomics of adaptation with gene flow. Despite predictions on the disruptive effect of gene flow in adaptation, an increasing number of studies show that gene flow can promote adaptation, that local adaptations can be maintained despite high gene flow, and that genetic architecture plays a fundamental role in the origin and maintenance of local adaptation with gene flow.

  11. GTE: a new FFT based software to compute terrain correction on airborne gravity surveys in spherical approximation.

    NASA Astrophysics Data System (ADS)

    Capponi, Martina; Sampietro, Daniele; Sansò, Fernando

    2016-04-01

    The computation of the vertical attraction due to the topographic masses (Terrain Correction) is still a matter of study both in geodetic as well as in geophysical applications. In fact it is required in high precision geoid estimation by the remove-restore technique and it is used to isolate the gravitational effect of anomalous masses in geophysical exploration. This topographical effect can be evaluated from the knowledge of a Digital Terrain Model in different ways: e.g. by means of numerical integration, by prisms, tesseroids, polyedra or Fast Fourier Transform (FFT) techniques. The increasing resolution of recently developed digital terrain models, the increasing number of observation points due to extensive use of airborne gravimetry and the increasing accuracy of gravity data represent major issues nowadays for the terrain correction computation. Classical methods such as prism or point-mass approximations are indeed too slow while Fourier based techniques are usually too approximate for the required accuracy. In this work a new software, called Gravity Terrain Effects (GTE), developed in order to guarantee high accuracy and fast computation of terrain corrections, is presented. GTE has been designed expressly for geophysical applications, allowing the computation not only of the effect of topographic and bathymetric masses but also of those due to sedimentary layers or to the Earth crust-mantle discontinuity (the so called Moho). In the present contribution we summarize the basic theory of the software and its practical implementation. Basically the GTE software is based on a new algorithm which, by exploiting the properties of the Fast Fourier Transform, allows to quickly compute the terrain correction, in spherical approximation, at ground or airborne level. Some tests to prove its performances are also described showing GTE capability to compute highly accurate terrain corrections in a very short time. Results obtained for a real airborne survey with GTE

  12. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

    This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture. Simulation examples are presented. Numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. In this paper, we presented a state-predictor based direct adaptive tracking design methodology for multi-input dynamical systems, with partially known dynamics. Efficiency of the design was demonstrated using short period dynamics of an aircraft. Formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.

  13. World Ships - Architectures & Feasibility Revisited

    NASA Astrophysics Data System (ADS)

    Hein, A. M.; Pak, M.; Putz, D.; Buhler, C.; Reiss, P.

    A world ship is a concept for manned interstellar flight. It is a huge, self-contained and self-sustained interstellar vehicle. It travels at a fraction of a per cent of the speed of light and needs several centuries to reach its target star system. The well-known world ship concept by Alan Bond and Anthony Martin was intended to show its principal feasibility. However, several important issues haven't been addressed so far: the relationship between crew size and robustness of knowledge transfer, reliability, and alternative mission architectures. This paper addresses these gaps. Furthermore, it gives an update on target star system choice, and develops possible mission architectures. The derived conclusions are: a large population size leads to robust knowledge transfer and cultural adaptation. These processes can be improved by new technologies. World ship reliability depends on the availability of an automatic repair system, as in the case of the Daedalus probe. Star systems with habitable planets are probably farther away than systems with enough resources to construct space colonies. Therefore, missions to habitable planets have longer trip times and have a higher risk of mission failure. On the other hand, the risk of constructing colonies is higher than that of establishing an initial settlement on a habitable planet. Mission architectures with precursor probes have the potential to significantly reduce trip and colonization risk without being significantly more costly than architectures without. In summary, world ships remain an interesting concept, although they require a space colony-based civilization within our own solar system before becoming feasible.

  14. Advanced HF anti-jam network architecture

    NASA Astrophysics Data System (ADS)

    Jackson, E. M.; Horner, Robert W.; Cai, Khiem V.

    The Hughes HF2000 system was developed using a flexible architecture which utilizes a wideband RF front-end and extensive digital signal processing. The HF2000 antijamming (AJ) mode was field tested via an HF skywave path between Fullerton, CA and Carlsbad, CA (about 100 miles), and it was shown that reliable fast frequency-hopping data transmission is feasible at 2400 b/s without adaptive equalization. The necessary requirements of an HF communication network are discussed, and how the HF2000 AJ mode can be used to support those requirements is shown. The Hughes HF2000 AJ mode system architecture is presented.

  15. Grid Architecture 2

    SciTech Connect

    Taft, Jeffrey D.

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture.

  16. An FFT-based method for modeling protein folding and binding under crowding: benchmarking on ellipsoidal and all-atom crowders.

    PubMed

    Qin, Sanbo; Zhou, Huan-Xiang

    2013-10-01

    It is now well recognized that macromolecular crowding can exert significant effects on protein folding and binding stability. In order to calculate such effects in direct simulations of proteins mixed with bystander macromolecules, the latter (referred to as crowders) are usually modeled as spheres and the proteins represented at a coarse-grained level. Our recently developed postprocessing approach allows the proteins to be represented at the all-atom level but, for computational efficiency, has only been implemented for spherical crowders. Modeling crowder molecules in cellular environments and in vitro experiments as spheres may distort their effects on protein stability. Here we present a new method that is capable of treating aspherical crowders. The idea, borrowed from protein-protein docking, is to calculate the excess chemical potential of the proteins in crowded solution by fast Fourier transform (FFT). As the first application, we studied the effects of ellipsoidal crowders on the folding and binding free energies of all-atom proteins, and found, in agreement with previous direct simulations with coarse-grained protein models, that the aspherical crowders exert greater stabilization effects than spherical crowders of the same volume. Moreover, as demonstrated here, the FFT-based method has the important property that its computational cost does not increase strongly even when the level of detail in representing the crowders is increased all the way to all-atom, thus significantly accelerating realistic modeling of protein folding and binding in cell-like environments. PMID:24187527
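
    The core FFT trick borrowed from docking can be illustrated on toy occupancy grids (the grids, sizes, and hard-core criterion below are illustrative, not the paper's energy model): a single FFT cross-correlation yields, for every translation of the probe at once, the number of overlapping grid cells with the crowders.

```python
import numpy as np

# Correlate a "protein" occupancy grid with a "crowder" occupancy grid to get,
# for all translations simultaneously, the number of overlapping cells.
rng = np.random.default_rng(0)
n = 32
crowders = (rng.random((n, n, n)) < 0.05).astype(float)   # fixed crowder cells
protein = np.zeros((n, n, n)); protein[:4, :4, :4] = 1.0  # small rigid probe

# circular cross-correlation via FFT: overlaps[t] = sum_x protein[x]*crowders[x+t]
overlaps = np.fft.ifftn(np.conj(np.fft.fftn(protein)) * np.fft.fftn(crowders)).real
overlaps = np.rint(overlaps)

# for hard-core interactions, the clash-free fraction of placements relates to
# the excess chemical potential via exp(-beta * mu_excess)
free_fraction = np.mean(overlaps < 0.5)
print(free_fraction)
```

    Because the correlation cost is independent of the probe's shape, refining the crowder representation (ellipsoidal, all-atom) does not change the FFT's asymptotic cost, which matches the scaling property the abstract emphasizes.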

  17. FTS2000 network architecture

    NASA Technical Reports Server (NTRS)

    Klenart, John

    1991-01-01

    The network architecture of FTS2000 is graphically depicted. A map of network A topology is provided, with interservice nodes. Next, the four basic elements of the architecture are laid out. Then, the FTS2000 time line is reproduced. A list of equipment supporting FTS2000 dedicated transmissions is given. Finally, access alternatives are shown.

  18. Generic POCC architectures

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This document describes a generic POCC (Payload Operations Control Center) architecture based upon current POCC software practice, and several refinements to the architecture based upon object-oriented design principles and expected developments in teleoperations. The current-technology generic architecture is an abstraction based upon close analysis of the ERBS, COBE, and GRO POCC's. A series of three refinements is presented: these may be viewed as an approach to a phased transition to the recommended architecture. The third refinement constitutes the recommended architecture, which, together with associated rationales, will form the basis of the rapid synthesis environment to be developed in the remainder of this task. The document is organized into two parts. The first part describes the current generic architecture using several graphical as well as tabular representations or 'views.' The second part presents an analysis of the generic architecture in terms of object-oriented principles. On the basis of this discussion, refinements to the generic architecture are presented, again using a combination of graphical and tabular representations.

  19. Teaching American Indian Architecture.

    ERIC Educational Resources Information Center

    Winchell, Dick

    1991-01-01

    Reviews "Native American Architecture," by Nabokov and Easton, an encyclopedic work that examines technology, climate, social structure, economics, religion, and history in relation to house design and the "meaning" of space among tribes of nine regions. Describes this book's use in a college course on Native American architecture. (SV)

  20. Architectural Physics: Lighting.

    ERIC Educational Resources Information Center

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  1. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  2. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  3. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  4. Applying neuroscience to architecture.

    PubMed

    Eberhard, John P

    2009-06-25

    Architectural practice and neuroscience research use our brains and minds in much the same way. However, the link between neuroscience knowledge and architectural design--with rare exceptions--has yet to be made. The concept of linking these two fields is a challenge worth considering.

  5. The Technology of Architecture

    ERIC Educational Resources Information Center

    Reese, Susan

    2006-01-01

    This article discusses how career and technical education is helping students draw up plans for success in architectural technology. According to the College of DuPage (COD) in Glen Ellyn, Illinois, one of the two-year schools offering training in architectural technology, graduates have a number of opportunities available to them. They may work…

  6. The Simulation Intranet Architecture

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory. The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high-fidelity, full-physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  7. Methodology requirements for intelligent systems architecture

    NASA Technical Reports Server (NTRS)

    Grant, Terry; Colombano, Silvano

    1987-01-01

    The methodology required for the development of the 'intelligent system architecture' of distributed computer systems which integrate standard data processing capabilities with symbolic processing to provide powerful and highly autonomous adaptive processing capabilities must encompass three elements: (1) a design knowledge capture system, (2) computer-aided engineering, and (3) verification and validation metrics and tests. Emphasis must be put on the earliest possible definition of system requirements and the realistic definition of allowable system uncertainties. Methodologies must also address human factor issues.

  8. Adaptive stochastic cellular automata: Applications

    NASA Astrophysics Data System (ADS)

    Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.

    1990-09-01

    The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems studied are controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.
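
    The reinforcement mechanism underlying such stochastic learning automata can be sketched with the simplest case, a single automaton with two actions and a linear reward-inaction update (the environment's reward probabilities and the learning rate below are illustrative, not from the paper):

```python
import numpy as np

# Minimal stochastic learning automaton (linear reward-inaction scheme) on a
# two-armed bandit -- a toy stand-in for one cell of the CA controller.
rng = np.random.default_rng(1)
p = np.array([0.5, 0.5])        # action probabilities
reward_prob = [0.2, 0.8]        # assumed environment: action 1 is better
alpha = 0.05                    # learning rate

for _ in range(2000):
    a = rng.choice(2, p=p)
    if rng.random() < reward_prob[a]:   # on reward, shift p toward action a
        p = (1 - alpha) * p             # (on no reward, do nothing: "inaction")
        p[a] += alpha

print(p)
```

    In the cited work, many such stochastic units are arranged as a cellular automaton and the reward signal is supplied by an adaptive critic evaluating the pole-balancing performance.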

  9. Next Generation Mass Memory Architecture

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Stahle, M.; Lonsdorfer, U.; Binzer, N.

    2010-08-01

    Future mass memory units will have to cope with various demanding requirements driven by onboard instruments (optical and SAR) that generate a huge amount of data (>10 Tbit) at data rates >6 Gbps. For the downlink, data rates around 3 Gbps will be feasible using the latest Ka-band technology together with Variable Coding and Modulation (VCM) techniques. These high data rates and storage capacities need to be effectively managed; therefore, data structures and data management functions have to be improved and adapted to existing standards like the Packet Utilisation Standard (PUS). In this paper we present a highly modular and scalable architectural approach for mass memories in order to support a wide range of mission requirements.

  10. Architectural design for space tourism

    NASA Astrophysics Data System (ADS)

    Martinez, Vera

    2009-01-01

    The paper describes the main issues in designing an appropriately planned habitat for tourists in space. A study and analysis of the environments of existing space stations (ISS, Mir, Skylab) delineates positive and negative aspects of architectural design, and the features of architectural design for tourist needs are analysed and verified for suitability in a space habitat. A space tourism environment must offer a high degree of comfort and suggest correct behaviour to the tourists, for the single person as well as for the group. Two main aspects of architectural planning are needed: the design of the private sphere and the design of the public sphere. In defining the appearance of the environment, attention should be paid to main elements such as the materiality of the surfaces used, the principal shapes of the areas, and the degree of flexibility and adaptability of the environment to specific needs.

  11. Systolic Architectures For Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Hwang, J. N.; Vlontzos, J. A.; Kung, S. Y.

    1988-10-01

    This paper proposes a unidirectional ring systolic architecture for implementing hidden Markov models (HMMs). This array architecture maximizes the strength of VLSI in terms of intensive and pipelined computing and yet circumvents the limitation on communication. Both the scoring and learning phases of an HMM are formulated as a consecutive matrix-vector multiplication problem, which can be executed in a fully pipelined fashion (100% utilization efficiency) by using a unidirectional ring systolic architecture. By appropriately scheduling the algorithm, which combines the operations of the backward evaluation procedure and the reestimation algorithm at the same time, we can use this systolic HMM most efficiently. The systolic HMM can also be easily adapted to the left-to-right HMM by using bidirectional semi-global links, with significant time savings. This architecture can also incorporate the scaling scheme, with little extra effort in the computations of the forward and backward evaluation variables, to prevent the frequently encountered numerical underflow problems. We also discuss a possible implementation of the proposed architecture using the Inmos transputer (T-800) as the building block.
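
    The scaling scheme mentioned above is the standard per-step normalization of the forward variables; a minimal scalar sketch (toy two-state HMM, not the systolic formulation) shows how the scalers both prevent underflow and recover the log-likelihood:

```python
import numpy as np

# Scaled forward algorithm for a small HMM: normalizing alpha at every step
# keeps it in a safe numeric range, and the log of the scaling factors sums
# to the log-likelihood of the observation sequence.
def forward_log_likelihood(A, B, pi, obs):
    """A: NxN transition, B: NxM emission, pi: initial distribution,
    obs: sequence of observation indices."""
    alpha = pi * B[:, obs[0]]
    log_like = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()          # scaling factor at step t
        alpha /= c               # rescale to avoid underflow
        log_like += np.log(c)    # accumulate log P(obs) from the scalers
    return log_like

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
obs = [0, 1, 0, 0, 1]
print(forward_log_likelihood(A, B, pi, obs))
```

    In the systolic array, each alpha update above maps to one matrix-vector multiplication pass around the ring, which is why scaling costs little extra effort.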

  12. Fast notification architecture for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hahk

    2013-03-01

    In an emergency it is vital to transmit a message to users immediately after analysing the data in order to prevent disaster, and this article presents the deployment of a fast notification architecture for a wireless sensor network. The sensor nodes of the proposed architecture can monitor an emergency situation periodically and transmit the sensing data immediately to the sink node. The grade of a fire situation is decided according to a decision rule using the sensed values of temperature, CO, smoke density and rate of temperature increase. To estimate the grade of air pollution, sensing data such as dust, formaldehyde, NO2 and CO2 are applied to a given knowledge model. Since the sink node in the architecture has a ZigBee interface, it can transmit alert messages in real time, according to the analysed results received from the host server, to terminals equipped with a SIM card-type ZigBee module. The host server also notifies registered users who have cellular phones of the situation through the short message service of the cellular network. Thus, the proposed architecture can adapt to an emergency situation dynamically, compared with a conventional architecture using video processing. In the testbed, after air pollution and fire data were generated, the terminal received the message in less than 3 s. The test results show that this system can also be applied to buildings and public areas where many people gather, to prevent unexpected disasters in urban settings.
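
    A threshold-based decision rule over those four fire indicators might look like the sketch below; the thresholds and grade names are hypothetical, since the abstract does not give the actual rule:

```python
# Hypothetical fire-grading decision rule in the spirit of the paper's scheme:
# count how many sensed indicators exceed their (assumed) thresholds.
def fire_grade(temp_c, co_ppm, smoke_density, temp_rise_c_per_min):
    score = sum([
        temp_c > 60,                 # absolute temperature
        co_ppm > 50,                 # carbon monoxide concentration
        smoke_density > 0.15,        # obscuration fraction
        temp_rise_c_per_min > 5,     # temperature increasing rate
    ])
    return ["normal", "caution", "warning", "alarm", "evacuate"][score]

print(fire_grade(25, 5, 0.02, 0.1))    # all readings nominal
print(fire_grade(80, 120, 0.3, 12))    # every indicator tripped
```

    The sink node would evaluate such a rule on each periodic reading and forward only grade changes as alert messages.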

  13. Context Aware Middleware Architectures: Survey and Challenges.

    PubMed

    Li, Xin; Eckert, Martina; Martinez, José-Fernán; Rubio, Gregorio

    2015-01-01

    Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work. PMID:26307988

  16. Beethoven: architecture for media telephony

    NASA Astrophysics Data System (ADS)

    Keskinarkaus, Anja; Ohtonen, Timo; Sauvola, Jaakko J.

    1999-11-01

    This paper presents a new architecture and techniques for media-based telephony over wireless/wireline IP networks, called 'Beethoven'. The platform supports complex media transport and mobile conferencing for multi-user environments with non-uniform access. New techniques are presented to provide advanced multimedia call management over different media types and their presentation. The routing and distribution of the media are rendered over a standards-based protocol. Our approach offers a generic, distributed and object-oriented solution with interfaces where signal processing and unified messaging algorithms are embedded as instances of core classes. The platform services are divided into 'basic communication', 'conferencing' and 'media session'. The basic communication services form the platform core and support access from a scalable user interface to network end-points. Conferencing services take care of media filter adaptation, conversion, error resiliency, multi-party connection and event signaling, while the media session services offer resources for application-level communication between the terminals. The platform allows flexible attachment of any number of plug-in modules, and thus we use it both as a test bench for multiparty/multi-point conferencing and as an evaluation bench for signal coding algorithms. In tests, our architecture showed the ability to scale easily from a simple voice terminal to a complex multi-user conference sharing virtual data.

  17. A biologically inspired MANET architecture

    NASA Astrophysics Data System (ADS)

    Kershenbaum, Aaron; Pappas, Vasileios; Lee, Kang-Won; Lio, Pietro; Sadler, Brian; Verma, Dinesh

    2008-04-01

    Mobile Ad-Hoc Networks (MANETs), that do not rely on pre-existing infrastructure and that can adapt rapidly to changes in their environment, are coming into increasingly wide use in military applications. At the same time, the large computing power and memory available today even for small, mobile devices, allows us to build extremely large, sophisticated and complex networks. Such networks, however, and the software controlling them are potentially vulnerable to catastrophic failures because of their size and complexity. Biological networks have many of these same characteristics and are potentially subject to the same problems. But in successful organisms, these biological networks do in fact function well so that the organism can survive. In this paper, we present a MANET architecture developed based on a feature, called homeostasis, widely observed in biological networks but not ordinarily seen in computer networks. This feature allows the network to switch to an alternate mode of operation under stress or attack and then return to the original mode of operation after the problem has been resolved. We explore the potential benefits such an architecture has, principally in terms of the ability to survive radical changes in its environment using an illustrative example.

  18. Microcomponent sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K..; McDonald, C.E.

    1997-03-18

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 14 figs.

  19. Microcomponent sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; McDonald, Carolyn E.

    1997-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  20. Architecture for Verifiable Software

    NASA Technical Reports Server (NTRS)

    Reinholtz, William; Dvorak, Daniel

    2005-01-01

    Verifiable MDS Architecture (VMA) is a software architecture that facilitates the construction of highly verifiable flight software for NASA's Mission Data System (MDS), especially for smaller missions subject to cost constraints. More specifically, the purpose served by VMA is to facilitate aggressive verification and validation of flight software while imposing a minimum of constraints on overall functionality. VMA exploits the state-based architecture of the MDS and partitions verification issues into elements susceptible to independent verification and validation, in such a manner that scaling issues are minimized, so that relatively large software systems can be aggressively verified in a cost-effective manner.

  1. Robot Electronics Architecture

    NASA Technical Reports Server (NTRS)

    Garrett, Michael; Magnone, Lee; Aghazarian, Hrand; Baumgartner, Eric; Kennedy, Brett

    2008-01-01

    An electronics architecture has been developed to enable the rapid construction and testing of prototypes of robotic systems. This architecture is designed to be a research vehicle of great stability, reliability, and versatility. A system according to this architecture can easily be reconfigured (including expanded or contracted) to satisfy a variety of needs with respect to input, output, processing of data, sensing, actuation, and power. The architecture affords a variety of expandable input/output options that enable ready integration of instruments, actuators, sensors, and other devices as independent modular units. The separation of different electrical functions onto independent circuit boards facilitates the development of corresponding simple and modular software interfaces. As a result, both hardware and software can be made to expand or contract in modular fashion while expending a minimum of time and effort.

  2. An out-of-core high-resolution FFT algorithm for determining large-scale imperfections of surface potentials in crystals

    NASA Astrophysics Data System (ADS)

    Bakhos, M.; Vincent, A. P.; Yuen, D. A.

    2005-06-01

    We present a simple out-of-core algorithm for computing the Fast Fourier Transform (FFT) needed to determine the two-dimensional potential of surface crystals with large-scale features, like faults, at ultra-high resolution, with around 10^9 grid points. This algorithm represents a proof of concept that a simple, easy-to-code, out-of-core algorithm can be implemented and used to solve large-scale problems on low-cost hardware. The main novelties of our algorithm are: (1) elapsed and I/O times decrease with the number of single records (lines) being read; (2) only basic reading and writing routines are necessary for the out-of-core access. Our method can be easily extended to 3D and applied to many grand-challenge problems in science and engineering, such as fluid dynamics.
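
    The line-by-line structure such an algorithm exploits comes from the separability of the 2D FFT: transform each row, then each column, touching one record at a time. A toy sketch (a small memory-mapped grid standing in for the paper's out-of-core file; sizes are illustrative):

```python
import numpy as np
import tempfile, os

# Toy out-of-core 2D FFT: one row (record) is read, transformed in memory with
# a 1-D FFT, and written back; a second pass over columns completes the 2-D
# transform. The paper targets ~1e9-point grids; this demo uses 64x64.
n = 64
rng = np.random.default_rng(2)
data = rng.random((n, n))

path = os.path.join(tempfile.mkdtemp(), "grid.dat")
buf = np.memmap(path, dtype=np.complex128, mode="w+", shape=(n, n))
buf[:] = data

for i in range(n):                     # pass 1: FFT each row
    buf[i, :] = np.fft.fft(buf[i, :])
for j in range(n):                     # pass 2: FFT each column (a transpose
    buf[:, j] = np.fft.fft(buf[:, j])  # step would make these sequential reads)
buf.flush()

result = np.array(buf)                 # equals the full 2-D FFT of the grid
```

    A production version would insert an explicit out-of-core transpose between the passes so that both sweeps read the file sequentially, which is where the record-count-dependent I/O time the abstract mentions comes in.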

  3. Synthesis and operation of an FFT-decoupled fixed-order reversed-field pinch plasma control system based on identification data

    NASA Astrophysics Data System (ADS)

    Olofsson, K. Erik J.; Brunsell, Per R.; Witrant, Emmanuel; Drake, James R.

    2010-10-01

    Recent developments and applications of system identification methods for the reversed-field pinch (RFP) machine EXTRAP T2R have yielded plasma response parameters for decoupled dynamics. These data sets are fundamental for a real-time implementable fast Fourier transform (FFT) decoupled discrete-time fixed-order strongly stabilizing synthesis as described in this work. Robustness is assessed over the data set by bootstrap calculation of the worst-case H∞-gain distribution of the sensitivity transfer function. Output tracking and magnetohydrodynamic mode m = 1 tracking are considered in the same framework simply as two distinct weighted traces of a performance-channel output-covariance matrix as derived from the closed-loop discrete-time Lyapunov equation. The behaviour of the resulting multivariable controller is investigated with dedicated T2R experiments.
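
    Why an FFT can decouple such a system: for a circularly symmetric array of actuators and sensors, the interaction matrix is (approximately) circulant, and the DFT diagonalizes every circulant matrix, splitting the MIMO loop into independent SISO loops. A toy demonstration (an 8x8 circulant coupling matrix of my own choosing, not T2R's identified model):

```python
import numpy as np

# Build a symmetric circulant coupling matrix and show the DFT diagonalizes it.
n = 8
first_row = np.array([1.0, 0.4, 0.1, 0, 0, 0, 0.1, 0.4])   # symmetric coupling
C = np.array([np.roll(first_row, k) for k in range(n)])    # circulant matrix

F = np.fft.fft(np.eye(n))          # DFT matrix, F[j, k] = exp(-2i*pi*j*k/n)
D = F @ C @ np.conj(F.T) / n       # similarity transform F C F^{-1}

off_diag = D - np.diag(np.diag(D))
print(np.abs(off_diag).max())      # ~0: the transformed system is diagonal
```

    After this change of basis, each diagonal entry (Fourier mode) can be given its own fixed-order SISO controller, which is the structure the synthesis in the abstract exploits.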

  4. Flight Test Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach, and lessons learned of adaptive controls research.

  5. Operational Concepts for a Generic Space Exploration Communication Network Architecture

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Vaden, Karl R.; Jones, Robert E.; Roberts, Anthony M.

    2015-01-01

    This document is one of three. It describes the Operational Concept (OpsCon) for a generic space exploration communication architecture. The purpose of this particular document is to identify communication flows and data types. Two other documents accompany this document, a security policy profile and a communication architecture document. The operational concepts should be read first followed by the security policy profile and then the architecture document. The overall goal is to design a generic space exploration communication network architecture that is affordable, deployable, maintainable, securable, evolvable, reliable, and adaptable. The architecture should also require limited reconfiguration throughout system development and deployment. System deployment includes: subsystem development in a factory setting, system integration in a laboratory setting, launch preparation, launch, and deployment and operation in space.

  6. An Architecture for Continuous Data Quality Monitoring in Medical Centers.

    PubMed

    Endler, Gregor; Schwab, Peter K; Wahl, Andreas M; Tenschert, Johannes; Lenz, Richard

    2015-01-01

    In the medical domain, data quality is very important. Since requirements and data change frequently, continuous and sustainable monitoring and improvement of data quality is necessary. Working together with managers of medical centers, we developed an architecture for a data quality monitoring system. The architecture enables domain experts to adapt the system during runtime to match their specifications using a built-in rule system. It also allows arbitrarily complex analyses to be integrated into the monitoring cycle. We evaluate our architecture by matching its components to the well-known data quality methodology TDQM.
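
    The runtime-adaptable rule idea can be sketched as a registry of quality checks that domain experts extend while the system runs; the rule format, rule names, and record fields below are hypothetical, since the abstract does not specify the rule system:

```python
# Sketch of a runtime-extensible data-quality rule registry (illustrative only).
rules = []

def add_rule(name, check):
    """Register a quality rule; experts can call this while the system runs."""
    rules.append((name, check))

def monitor(record):
    """Return the names of all rules the record violates."""
    return [name for name, check in rules if not check(record)]

add_rule("has_patient_id", lambda r: bool(r.get("patient_id")))
add_rule("age_plausible", lambda r: 0 <= r.get("age", -1) <= 120)

print(monitor({"patient_id": "p1", "age": 42}))   # []
print(monitor({"age": 300}))                      # both rules violated
```

    In the described architecture, such a monitoring cycle would also invoke arbitrarily complex analyses registered alongside the simple predicate rules.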

  7. Renal adaptation during hibernation.

    PubMed

    Jani, Alkesh; Martin, Sandra L; Jain, Swati; Keys, Daniel; Edelstein, Charles L

    2013-12-01

    Hibernators periodically undergo profound physiological changes including dramatic reductions in metabolic, heart, and respiratory rates and core body temperature. This review discusses the effect of hypoperfusion and hypothermia observed during hibernation on glomerular filtration and renal plasma flow, as well as specific adaptations in renal architecture, vasculature, the renin-angiotensin system, and upregulation of possible protective mechanisms during the extreme conditions endured by hibernating mammals. Understanding the mechanisms of protection against organ injury during hibernation may provide insights into potential therapies for organ injury during cold storage and reimplantation during transplantation.

  8. Using natural variation to investigate the function of individual amino acids in the sucrose-binding box of fructan:fructan 6G-fructosyltransferase (6G-FFT) in product formation.

    PubMed

    Ritsema, Tita; Verhaar, Auke; Vijn, Irma; Smeekens, Sjef

    2005-07-01

    Enzymes of the glycosyl hydrolase family 32 are highly similar with respect to primary sequence but catalyze divergent reactions. Previously, the importance of the conserved sucrose-binding box in determining product specificity of onion fructan:fructan 6G-fructosyltransferase (6G-FFT) was established [Ritsema et al., 2004, Plant Mol. Biol. 54: 853-863]. Onion 6G-FFT synthesizes the complex fructan neo-series inulin by transferring fructose residues to either a terminal fructose or a terminal glucose residue. In the present study we have elucidated the molecular determinants of product specificity by substitution of individual amino acids of the sucrose binding box with amino acids that are present on homologous positions in other fructosyltransferases or vacuolar invertases. Substituting the presumed nucleophile Asp85 of the beta-fructosidase motif resulted in an inactive enzyme. 6G-FFT mutants S87N and S87D did not change substrate or product specificities, whereas mutants N84Y and N84G resulted in an inactive enzyme. Most interestingly, mutants N84S, N84A, and N84Q added fructose residues preferably to a terminal fructose and hardly to the terminal glucose. This resulted in the preferential production of inulin-type fructans. Combining mutations showed that amino acid 84 determines product specificity of 6G-FFT irrespective of the amino acid at position 87. PMID:16158237

  9. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  10. Vibrational testing of trabecular bone architectures using rapid prototype models.

    PubMed

    Mc Donnell, P; Liebschner, M A K; Tawackoli, Wafa; Mc Hugh, P E

    2009-01-01

    The purpose of this study was to investigate if standard analysis of the vibrational characteristics of trabecular architectures can be used to detect changes in the mechanical properties due to progressive bone loss. A cored trabecular specimen from a human lumbar vertebra was microCT scanned and a three-dimensional, virtual model in stereolithography (STL) format was generated. Uniform bone loss was simulated using a surface erosion algorithm. Rapid prototype (RP) replicas were manufactured from these virtualised models with 0%, 16% and 42% bone loss. Vibrational behaviour of the RP replicas was evaluated by performing a dynamic compression test through a frequency range using an electro-dynamic shaker. The acceleration and dynamic force responses were recorded and fast Fourier transform (FFT) analyses were performed to determine the response spectrum. Standard resonant frequency analysis and damping factor calculations were performed. The RP replicas were subsequently tested in compression beyond failure to determine their strength and modulus. It was found that the reductions in resonant frequency with increasing bone loss corresponded well with reductions in apparent stiffness and strength. This suggests that structural dynamics has the potential to be an alternative diagnostic technique for osteoporosis, although significant challenges must be overcome to determine the effect of the skin/soft tissue interface, the cortex and variabilities associated with in vivo testing. PMID:18555727
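    The resonant-frequency analysis described above can be sketched in a few lines: transform the recorded response to the frequency domain and locate the dominant peak. The signal below is synthetic (an assumed sampling rate and ring-down), not the paper's measured data.

```python
import numpy as np

# Hedged sketch: estimate the resonant frequency of a dynamic test from
# its response signal via an FFT, as in the paper's analysis.
# All signal parameters here are illustrative, not from the study.
fs = 5000.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f_res = 820.0                                # "true" resonance of the toy specimen
signal = np.sin(2 * np.pi * f_res * t) * np.exp(-3 * t)   # decaying ring-down
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)  # sensor noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
print(f"estimated resonant frequency: {peak:.1f} Hz")
```

    A downward shift of this peak between the 0%, 16% and 42% bone-loss replicas is the diagnostic signal the study reports.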

  11. The architectural relevance of cybernetics

    SciTech Connect

    Frazer, J.H.

    1993-12-31

    This title is taken from an article by Gordon Pask in Architectural Design September 1969. It raises a number of questions which this article attempts to answer. How did Gordon come to be writing for an architectural publication? What was his contribution to architecture? How does he now come to be on the faculty of a school of architecture? And what indeed is the architectural relevance of cybernetics? 12 refs.

  12. Fireplace adapters

    SciTech Connect

    Hunt, R.L.

    1983-12-27

An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the damper frame.

  13. Initial concepts for CELT adaptive optics

    NASA Astrophysics Data System (ADS)

    Dekany, Richard G.; Bauman, Brian J.; Gavel, Donald T.; Troy, Mitchell; Macintosh, Bruce A.; Britton, Matthew C.

    2003-02-01

The California Extremely Large Telescope (CELT) project has recently completed a 12-month conceptual design phase that has investigated major technology challenges in a number of Observatory subsystems, including adaptive optics (AO). The goal of this effort was not to adopt one or more specific AO architectures. Rather, it was to investigate the feasibility of adaptive optics correction of a 30-meter diameter telescope and to suggest realistic cost ceilings for various adaptive optics capabilities. We present here the key design issues uncovered during conceptual design and present two non-exclusive "baseline" adaptive optics concepts that are expected to be further developed during the following preliminary design phase. Further analysis, detailed engineering trade studies, and certain laboratory and telescope experiments must be performed, and key component technology prototypes demonstrated, prior to adopting one or more adaptive optics systems architectures for realization.

  14. Agent Architectures for Compliance

    NASA Astrophysics Data System (ADS)

    Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua

    A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.

  15. Avionics System Architecture Tool

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Hall, Ronald; Traylor, Marcus; Whitfield, Adrian

    2005-01-01

    Avionics System Architecture Tool (ASAT) is a computer program intended for use during the avionics-system-architecture- design phase of the process of designing a spacecraft for a specific mission. ASAT enables simulation of the dynamics of the command-and-data-handling functions of the spacecraft avionics in the scenarios in which the spacecraft is expected to operate. ASAT is built upon I-Logix Statemate MAGNUM, providing a complement of dynamic system modeling tools, including a graphical user interface (GUI), modeling checking capabilities, and a simulation engine. ASAT augments this with a library of predefined avionics components and additional software to support building and analyzing avionics hardware architectures using these components.

  16. Protein domain architectures.

    PubMed

    Mulder, Nicola J

    2010-01-01

    Proteins are composed of functional units, or domains, that can be found alone or in combination with other domains. Analysis of protein domain architectures and the movement of protein domains within and across different genomes provide clues about the evolution of protein function. The classification of proteins into families and domains is provided through publicly available tools and databases that use known protein domains to predict other members in new proteins sequences. Currently at least 80% of the main protein sequence databases can be classified using these tools, thus providing a large data set to work from for analyzing protein domain architectures. Each of the protein domain databases provide intuitive web interfaces for viewing and analyzing their domain classifications and provide their data freely for downloading. Some of the main protein family and domain databases are described here, along with their Web-based tools for analyzing domain architectures.

  17. Information architecture. Volume 3: Guidance

    SciTech Connect

    1997-04-01

The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline or de facto Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  18. Hybrid polarity SAR architecture

    NASA Astrophysics Data System (ADS)

    Raney, R. Keith

    2009-05-01

    A space-based synthetic aperture radar (SAR) designed to provide quantitative information on a global scale implies severe requirements to maximize coverage and to sustain reliable operational calibration. These requirements are best served by the hybrid-polarity architecture, in which the radar transmits in circular polarization, and receives on two orthogonal linear polarizations, coherently, retaining their relative phase. This paper summarizes key attributes of hybrid-polarity dual- and quadrature-polarized SARs, reviews the associated advantages, formalizes conditions under which the signal-to-noise ratio is conserved, and describes the evolution of this architecture from first principles.
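    Because the hybrid-polarity receiver records the two linear polarizations coherently, the four Stokes parameters of the backscattered field follow directly from the complex voltage pair. The sketch below illustrates this with arbitrary sample voltages; the sign convention for the fourth parameter varies between references.

```python
import numpy as np

# Hedged sketch: per-pixel Stokes parameters from coherently received
# horizontal and vertical complex voltages (E_H, E_V), as made possible
# by the hybrid-polarity architecture. The sample values are arbitrary.
E_H = np.array([1.0 + 0.5j, 0.2 - 0.1j])
E_V = np.array([0.3 - 0.2j, 0.8 + 0.4j])

S0 = np.abs(E_H) ** 2 + np.abs(E_V) ** 2     # total power
S1 = np.abs(E_H) ** 2 - np.abs(E_V) ** 2     # H/V power difference
S2 = 2 * np.real(E_H * np.conj(E_V))         # +45/-45 component
S3 = -2 * np.imag(E_H * np.conj(E_V))        # circular component (sign convention varies)
print(S0)
```

    For a fully polarized single-look sample, S0**2 equals S1**2 + S2**2 + S3**2; multi-look averaging reduces the right-hand side for partially polarized scenes.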

  19. 3D Architectural Videomapping

    NASA Astrophysics Data System (ADS)

    Catanese, R.

    2013-07-01

    3D architectural mapping is a video projection technique that can be done with a survey of a chosen building in order to realize a perfect correspondence between its shapes and the images in projection. As a performative kind of audiovisual artifact, the real event of the 3D mapping is a combination of a registered video animation file with a real architecture. This new kind of visual art is becoming very popular and its big audience success testifies new expressive chances in the field of urban design. My case study has been experienced in Pisa for the Luminara feast in 2012.

  20. Lunar architecture and urbanism

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    1992-01-01

    Human civilization and architecture have defined each other for over 5000 years on Earth. Even in the novel environment of space, persistent issues of human urbanism will eclipse, within a historically short time, the technical challenges of space settlement that dominate our current view. By adding modern topics in space engineering, planetology, life support, human factors, material invention, and conservation to their already renaissance array of expertise, urban designers can responsibly apply ancient, proven standards to the exciting new opportunities afforded by space. Inescapable facts about the Moon set real boundaries within which tenable lunar urbanism and its component architecture must eventually develop.

  1. A component simulator architecture

    NASA Astrophysics Data System (ADS)

    Bégin, M.-E.; Walsh, T.

    2002-07-01

This paper describes the current state of our new component simulator architecture, which is being developed at VEGA GmbH by the Technology Group within the Space Business Unit. It presents our overall component architecture and explains how it can be used by model developers and end-users. At the time of writing, it appears clear that a certain level of automation is required to increase the usability of the system. This automation is only briefly discussed here.

  2. Hadl: HUMS Architectural Description Language

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Adavi, V.; Agarwal, N.; Gullapalli, S.; Kumar, P.; Sundaram, P.

    2004-01-01

Specification of architectures is an important prerequisite for evaluation of architectures. With the growth of health usage and monitoring systems (HUMS) in commercial and military domains, the need for the design and evaluation of HUMS architectures has also increased. In this paper, we describe HADL, HUMS Architectural Description Language, that we have designed for this purpose. In particular, we describe the features of the language, illustrate them with examples, and show how we use it in designing domain-specific HUMS architectures. A companion paper contains details on our design methodology of HUMS architectures.

  3. Low Power Adder Based Auditory Filter Architecture

    PubMed Central

    Jayanthi, V. S.

    2014-01-01

    Cochlea devices are powered up with the help of batteries and they should possess long working life to avoid replacing of devices at regular interval of years. Hence the devices with low power consumptions are required. In cochlea devices there are numerous filters, each responsible for frequency variant signals, which helps in identifying speech signals of different audible range. In this paper, multiplierless lookup table (LUT) based auditory filter is implemented. Power aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using Mentor Graphics Model-Sim Simulator, and synthesized using Synopsys Design Compiler tool. The design was mapped to TSMC 65 nm technological node. The standard ASIC design methodology has been adapted to carry out the power analysis. The proposed FIR filter architecture has reduced the leakage power by 15% and increased its performance by 2.76%. PMID:25506073

  4. Low power adder based auditory filter architecture.

    PubMed

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlea devices are powered up with the help of batteries and they should possess long working life to avoid replacing of devices at regular interval of years. Hence the devices with low power consumptions are required. In cochlea devices there are numerous filters, each responsible for frequency variant signals, which helps in identifying speech signals of different audible range. In this paper, multiplierless lookup table (LUT) based auditory filter is implemented. Power aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using Mentor Graphics Model-Sim Simulator, and synthesized using Synopsys Design Compiler tool. The design was mapped to TSMC 65 nm technological node. The standard ASIC design methodology has been adapted to carry out the power analysis. The proposed FIR filter architecture has reduced the leakage power by 15% and increased its performance by 2.76%.
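    The multiplierless, LUT-based filtering idea in this and the preceding record can be sketched with a distributed-arithmetic FIR filter: every combination of one bit plane of the input window addresses a precomputed sum of coefficients, so the filter needs only table lookups, shifts, and adds. The coefficients and word width below are illustrative, not taken from the paper.

```python
from itertools import product

# Hedged sketch of a multiplierless LUT-based FIR filter (distributed
# arithmetic). Coefficients are arbitrary integers for illustration.
coeffs = [3, -1, 4, 2]                       # 4-tap FIR

# LUT: for each bit pattern of the input window, the sum of the
# coefficients whose corresponding input bit is set.
lut = {bits: sum(c for c, b in zip(coeffs, bits) if b)
       for bits in product((0, 1), repeat=len(coeffs))}

def fir_lut(samples, nbits=4):
    """Filter unsigned nbits-wide samples using only LUT reads and adds."""
    out, window = [], [0] * len(coeffs)
    for x in samples:
        window = [x] + window[:-1]           # shift in the new sample
        acc = 0
        for bit in range(nbits):             # one LUT access per bit plane
            addr = tuple((w >> bit) & 1 for w in window)
            acc += lut[addr] << bit          # shift-and-add accumulation
        out.append(acc)
    return out

print(fir_lut([1, 2, 3, 0]))                 # matches a direct convolution
```

    Summing one LUT output per bit plane, each weighted by its power of two, reproduces the full multiply-accumulate result without any multiplier, which is the hardware saving the paper exploits.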

  5. UAV Cooperation Architectures for Persistent Sensing

    SciTech Connect

    Roberts, R S; Kent, C A; Jones, E D

    2003-03-20

    With the number of small, inexpensive Unmanned Air Vehicles (UAVs) increasing, it is feasible to build multi-UAV sensing networks. In particular, by using UAVs in conjunction with unattended ground sensors, a degree of persistent sensing can be achieved. With proper UAV cooperation algorithms, sensing is maintained even though exceptional events, e.g., the loss of a UAV, have occurred. In this paper a cooperation technique that allows multiple UAVs to perform coordinated, persistent sensing with unattended ground sensors over a wide area is described. The technique automatically adapts the UAV paths so that on the average, the amount of time that any sensor has to wait for a UAV revisit is minimized. We also describe the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture. This architecture is designed to help simulate and operate distributed sensor networks where multiple UAVs are used to collect data.
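    The revisit-scheduling idea, minimizing how long any ground sensor waits for a UAV, can be sketched with a simple greedy policy: always fly to the longest-waiting sensor. This is an illustrative stand-in, not the paper's cooperation algorithm; the sensor names and unit flight times are invented.

```python
# Hypothetical sketch: a single UAV repeatedly visits whichever sensor
# has waited longest, bounding the worst-case revisit interval.
def greedy_revisit(sensors, steps):
    """sensors: dict mapping sensor name -> time since last visit."""
    wait = dict(sensors)
    visits = []
    for _ in range(steps):
        target = max(wait, key=wait.get)     # longest-waiting sensor
        visits.append(target)
        for s in wait:                       # one time step elapses
            wait[s] += 1
        wait[target] = 0                     # the visited sensor resets
    return visits

route = greedy_revisit({"A": 5, "B": 2, "C": 0}, steps=6)
print(route)
```

    With equal travel times the policy settles into a round-robin tour; the paper's multi-UAV version additionally adapts the paths when a UAV is lost or sensors are added.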

  6. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
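    The ideal linear (Hotelling) observer mentioned above has a compact closed form: its template is the data covariance inverse applied to the mean signal difference, and its detectability is the associated quadratic form. The tiny two-dimensional numbers below are made up purely to show the computation.

```python
import numpy as np

# Sketch of the Hotelling observer for a detection task:
# template w = K^{-1} (s_present - s_absent), detectability
# SNR^2 = delta_s^T K^{-1} delta_s. Values are illustrative only.
K = np.array([[2.0, 0.3],
              [0.3, 1.0]])                  # data covariance (assumed known)
delta_s = np.array([0.5, 0.2])              # mean difference between hypotheses

w = np.linalg.solve(K, delta_s)             # Hotelling template
snr2 = delta_s @ w                          # Hotelling detectability SNR^2
print(f"Hotelling SNR^2 = {snr2:.4f}")
```

    In the adaptive-SPECT setting, the scout data change K and delta_s for the follow-up acquisition, and candidate configurations can be ranked by the resulting SNR^2.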

  7. Adaptive Behavior for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2009-01-01

    The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.

  8. American School & University Architectural Portfolio 2000 Awards: Landscape Architecture.

    ERIC Educational Resources Information Center

    American School & University, 2000

    2000-01-01

Presents photographs and basic information on architectural design, costs, square footage, and principal designers of the award-winning school landscaping projects that competed in the American School & University Architectural Portfolio 2000. (GR)

  9. MWAHCA: a multimedia wireless ad hoc cluster architecture.

    PubMed

    Diaz, Juan R; Lloret, Jaime; Jimenez, Jose M; Sendra, Sandra

    2014-01-01

    Wireless Ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has been increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams when they have passed through a wireless ad hoc network. It requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to provide multimedia streams. Proposed architecture adapts the network wireless topology in order to improve the quality of audio and video transmissions. In order to achieve this goal, the architecture uses some information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters which are specialized in specific multimedia traffic. The real system performance study provided at the end of the paper will demonstrate the feasibility of the proposal.

  10. MWAHCA: A Multimedia Wireless Ad Hoc Cluster Architecture

    PubMed Central

    Diaz, Juan R.; Jimenez, Jose M.; Sendra, Sandra

    2014-01-01

    Wireless Ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has been increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams when they have passed through a wireless ad hoc network. It requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to provide multimedia streams. Proposed architecture adapts the network wireless topology in order to improve the quality of audio and video transmissions. In order to achieve this goal, the architecture uses some information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters which are specialized in specific multimedia traffic. The real system performance study provided at the end of the paper will demonstrate the feasibility of the proposal. PMID:24737996

  11. Geostar's system architectures

    NASA Technical Reports Server (NTRS)

    Lepkowski, Ronald J.

    1989-01-01

Geostar is currently constructing a radiodetermination satellite system to provide position fixes and vehicle surveillance services, and has proposed a digital land mobile satellite service to provide data, facsimile and digitized voice services to low cost mobile users. The different system architectures for these two systems are reviewed.

  12. INL Generic Robot Architecture

    SciTech Connect

    2005-03-30

    The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites and low-level proprietary control application programming interfaces (e.g. mobility, aria, aware, player, etc.).

  13. Emulating an MIMD architecture

    SciTech Connect

    Su Bogong; Grishman, R.

    1982-01-01

    As part of a research effort in parallel processor architecture and programming, the ultracomputer group at New York University has performed extensive simulation of parallel programs. To speed up these simulations, a parallel processor emulator, using the microprogrammable Puma computer system previously designed and built at NYU, has been developed. 8 references.

  14. [Architecture, budget and dignity].

    PubMed

    Morel, Etienne

    2012-01-01

    Drawing on its dynamic strengths, a psychiatric unit develops various projects and care techniques. In this framework, the institute director must make a number of choices with regard to architecture. Why renovate the psychiatry building? What financial investments are required? What criteria should be followed? What if the major argument was based on the respect of the patient's dignity?

  15. [Architecture and movement].

    PubMed

    Rivallan, Armel

    2012-01-01

    Leading an architectural project means accompanying the movement which it induces within the teams. Between questioning, uncertainty and fear, the organisational changes inherent to the new facility must be subject to constructive and ongoing exchanges. Ethics, safety and training are revised and the unit projects are sometimes modified.

  16. Making Connections through Architecture.

    ERIC Educational Resources Information Center

    Hollingsworth, Patricia

    1993-01-01

    The Center for Arts and Sciences (Oklahoma) developed an interdisciplinary curriculum for disadvantaged gifted children on styles of architecture, called "Discovering Patterns in the Built Environment." This article describes the content and processes used in the curriculum, as well as other programs of the center, such as teacher workshops,…

  17. Tutorial on architectural acoustics

    NASA Astrophysics Data System (ADS)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  18. Terra Harvest software architecture

    NASA Astrophysics Data System (ADS)

    Humeniuk, Dave; Klawon, Kevin

    2012-06-01

    Under the Terra Harvest Program, the DIA has the objective of developing a universal Controller for the Unattended Ground Sensor (UGS) community. The mission is to define, implement, and thoroughly document an open architecture that universally supports UGS missions, integrating disparate systems, peripherals, etc. The Controller's inherent interoperability with numerous systems enables the integration of both legacy and future UGS System (UGSS) components, while the design's open architecture supports rapid third-party development to ensure operational readiness. The successful accomplishment of these objectives by the program's Phase 3b contractors is demonstrated via integration of the companies' respective plug-'n'-play contributions that include controllers, various peripherals, such as sensors, cameras, etc., and their associated software drivers. In order to independently validate the Terra Harvest architecture, L-3 Nova Engineering, along with its partner, the University of Dayton Research Institute, is developing the Terra Harvest Open Source Environment (THOSE), a Java Virtual Machine (JVM) running on an embedded Linux Operating System. The Use Cases on which the software is developed support the full range of UGS operational scenarios such as remote sensor triggering, image capture, and data exfiltration. The Team is additionally developing an ARM microprocessor-based evaluation platform that is both energy-efficient and operationally flexible. The paper describes the overall THOSE architecture, as well as the design decisions for some of the key software components. Development process for THOSE is discussed as well.

  19. GNU debugger internal architecture

    SciTech Connect

    Miller, P.; Nessett, D.; Pizzi, R.

    1993-12-16

This document describes the internal architecture and implementation of the GNU debugger, gdb. Topics include inferior process management, command execution, symbol table management and remote debugging. Call graphs for specific functions are supplied. This document is not a complete description but offers a developer an overview which is the place to start before modification.

  20. Adaptive Computing.

    ERIC Educational Resources Information Center

    Harrell, William

    1999-01-01

    Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)

  1. Contour adaptation.

    PubMed

    Anstis, Stuart

    2013-01-01

It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only these contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.

  2. Commanding Constellations (Pipeline Architecture)

    NASA Technical Reports Server (NTRS)

    Ray, Tim; Condron, Jeff

    2003-01-01

    Providing ground command software for constellations of spacecraft is a challenging problem. Reliable command delivery requires a feedback loop; for a constellation there will likely be an independent feedback loop for each constellation member. Each command must be sent via the proper Ground Station, which may change from one contact to the next (and may be different for different members). Dynamic configuration of the ground command software is usually required (e.g. directives to configure each member's feedback loop and assign the appropriate Ground Station). For testing purposes, there must be a way to insert command data at any level in the protocol stack. The Pipeline architecture described in this paper can support all these capabilities with a sequence of software modules (the pipeline), and a single self-identifying message format (for all types of command data and configuration directives). The Pipeline architecture is quite simple, yet it can solve some complex problems. The resulting solutions are conceptually simple, and therefore, reliable. They are also modular, and therefore, easy to distribute and extend. We first used the Pipeline architecture to design a CCSDS (Consultative Committee for Space Data Systems) Ground Telecommand system (to command one spacecraft at a time with a fixed Ground Station interface). This pipeline was later extended to include gateways to any of several Ground Stations. The resulting pipeline was then extended to handle a small constellation of spacecraft. The use of the Pipeline architecture allowed us to easily handle the increasing complexity. This paper will describe the Pipeline architecture, show how it was used to solve each of the above commanding situations, and how it can easily be extended to handle larger constellations.
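    The core of the Pipeline idea, a chain of modules all consuming and producing one self-identifying message format, can be sketched in a few lines. The module names, message fields, and ground-station label below are hypothetical, chosen only to mirror the framing/routing stages the paper describes.

```python
# Hypothetical sketch of the Pipeline architecture: every module takes a
# self-identifying message (a dict with a "type" field) and returns one,
# passing through anything it does not handle.
def frame_command(msg):
    """Wrap raw command data into a transfer frame."""
    if msg["type"] == "command":
        return {"type": "frame", "payload": b"SYNC" + msg["data"]}
    return msg                               # e.g. configuration directives

def route_to_station(msg):
    """Hand a frame to the (assumed) currently assigned ground station."""
    if msg["type"] == "frame":
        return {"type": "radiated", "station": "GS-1", "payload": msg["payload"]}
    return msg

def run_pipeline(message, modules):
    for module in modules:
        message = module(message)
    return message

out = run_pipeline({"type": "command", "data": b"ABC"},
                   [frame_command, route_to_station])
print(out["type"], out["payload"])
```

    Because every module speaks the same message format, test data can be injected at any stage, and extending the system, a gateway per ground station, or one feedback loop per constellation member, means inserting or duplicating modules rather than redesigning the stack.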

  3. 11. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster General's Office Standard Plan 82, sheet 1. Lithograph on linen architectural drawing. April 1893 3 ELEVATIONS, 3 PLANS AND A PARTIAL SECTION - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  4. 12. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster Generals Office Standard Plan 82, sheet 2, April 1893. Lithograph on linen architectural drawing. DETAILS - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  5. ACOUSTICS IN ARCHITECTURAL DESIGN, AN ANNOTATED BIBLIOGRAPHY ON ARCHITECTURAL ACOUSTICS.

    ERIC Educational Resources Information Center

    DOELLE, LESLIE L.

    THE PURPOSE OF THIS ANNOTATED BIBLIOGRAPHY ON ARCHITECTURAL ACOUSTICS WAS--(1) TO COMPILE A CLASSIFIED BIBLIOGRAPHY, INCLUDING MOST OF THOSE PUBLICATIONS ON ARCHITECTURAL ACOUSTICS, PUBLISHED IN ENGLISH, FRENCH, AND GERMAN WHICH CAN SUPPLY A USEFUL AND UP-TO-DATE SOURCE OF INFORMATION FOR THOSE ENCOUNTERING ANY ARCHITECTURAL-ACOUSTIC DESIGN…

  6. A Simple Physical Optics Algorithm Perfect for Parallel Computing Architecture

    NASA Technical Reports Server (NTRS)

    Imbriale, W. A.; Cwik, T.

    1994-01-01

    A reflector antenna computer program based upon a simple discrete approximation of the radiation integral has proven to be extremely easy to adapt to the parallel computing architecture of the modest number of large-grain computing elements such as are used in the Intel iPSC and Touchstone Delta parallel machines.

  7. Human Symbol Manipulation within an Integrated Cognitive Architecture

    ERIC Educational Resources Information Center

    Anderson, John R.

    2005-01-01

    This article describes the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture (Anderson et al., 2004; Anderson & Lebiere, 1998) and its detailed application to the learning of algebraic symbol manipulation. The theory is applied to modeling the data from a study by Qin, Anderson, Silk, Stenger, & Carter (2004) in which children…

  8. India's Vernacular Architecture as a Reflection of Culture.

    ERIC Educational Resources Information Center

    Masalski, Kathleen Woods

    This paper contains the narrative for a slide presentation on the architecture of India. Through the narration, the geography and climate of the country and the social conditions of the Indian people are discussed. Roofs and windows are adapted for the hot, rainy climate, while the availability of building materials ranges from palm leaves to mud…

  9. An Experiment in Architectural Instruction.

    ERIC Educational Resources Information Center

    Dvorak, Robert W.

    1978-01-01

    Discusses the application of the PLATO IV computer-based educational system to a one-semester basic drawing course for freshman architecture, landscape architecture, and interior design students and relates student reactions to the experience. (RAO)

  10. An epigenetic toolkit allows for diverse genome architectures in eukaryotes.

    PubMed

    Maurer-Alcalá, Xyrus X; Katz, Laura A

    2015-12-01

    Genome architecture varies considerably among eukaryotes in terms of both size and structure (e.g. distribution of sequences within the genome, elimination of DNA during formation of somatic nuclei). The diversity in eukaryotic genome architectures and the dynamic processes are only possible due to the well-developed epigenetic toolkit, which probably existed in the Last Eukaryotic Common Ancestor (LECA). This toolkit may have arisen as a means of navigating the genomic conflict that arose from the expansion of transposable elements within the ancestral eukaryotic genome. This toolkit has been coopted to support the dynamic nature of genomes in lineages across the eukaryotic tree of life. Here we highlight how the changes in genome architecture in diverse eukaryotes are regulated by epigenetic processes, such as DNA elimination, genome rearrangements, and adaptive changes to genome architecture. The ability to epigenetically modify and regulate genomes has contributed greatly to the diversity of eukaryotes observed today.

  11. Biologically relevant neural network architectures for support vector machines.

    PubMed

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms including variants of maximum margin, 1-norm, 2-norm and ν-SVM is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling, four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural implementation in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences and bisymmetry is ubiquitous in motor systems, it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme.

  12. Controlling Material Reactivity Using Architecture.

    PubMed

    Sullivan, Kyle T; Zhu, Cheng; Duoss, Eric B; Gash, Alexander E; Kolesky, David B; Kuntz, Joshua D; Lewis, Jennifer A; Spadaccini, Christopher M

    2016-03-01

    3D-printing methods are used to generate reactive material architectures. Several geometric parameters are observed to influence the resultant flame propagation velocity, indicating that the architecture can be utilized to control reactivity. Two different architectures, channels and hurdles, are generated, and thin films of thermite are deposited onto the surface. The architecture offers an additional route to control, at will, the energy release rate in reactive composite materials. PMID:26669517

  13. Climate adaptation

    NASA Astrophysics Data System (ADS)

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  14. An intelligent CNC machine control system architecture

    SciTech Connect

    Miller, D.J.; Loucks, C.S.

    1996-10-01

    Intelligent, agile manufacturing relies on automated programming of digitally controlled processes. Currently, processes such as Computer Numerically Controlled (CNC) machining are difficult to automate because of highly restrictive controllers and poor software environments. It is also difficult to utilize sensors and process models for adaptive control, or to integrate machining processes with other tasks within a factory floor setting. As part of a Laboratory Directed Research and Development (LDRD) program, a CNC machine control system architecture based on object-oriented design and graphical programming has been developed to address some of these problems and to demonstrate automated agile machining applications using platform-independent software.

  15. Electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes: FFT-impedance spectroscopy of the growth process and magnetic properties

    PubMed Central

    2014-01-01

    The electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes has been investigated by fast Fourier transform-impedance spectroscopy (FFT-IS) in the frequency range from 75 Hz to 18.5 kHz. The impedance data could be fitted very well using an electric circuit equivalent model with a series resistance connected in series to a simple resistor-capacitor (RC) element and a Maxwell element. Based on the impedance data, the Co deposition in ultra-high aspect ratio InP membranes can be divided into two different Co deposition processes. The corresponding share of each process on the overall Co deposition can be determined directly from the transfer resistances of the two processes. The impedance data clearly show the beneficial impact of boric acid on the Co deposition and also indicate a diffusion limitation of boric acid in ultra-high aspect ratio InP membranes. The grown Co nanowires are polycrystalline with a very small grain size. They show a narrow hysteresis loop with a preferential orientation of the easy magnetization direction along the long nanowire axis due to the arising shape anisotropy of the Co nanowires. PMID:25050088
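
    The equivalent-circuit fit described above (a series resistance in series with a simple RC element and a Maxwell element) can be evaluated numerically. The component values and the exact placement of the Maxwell branch below are assumptions for illustration, not the fitted values from the study.

```python
import math

def impedance(f, Rs, R1, C1, R2, C2):
    # Assumed topology: Rs in series with a parallel R1-C1 element and a
    # Maxwell branch (R2 in series with C2). Valid over the measured band;
    # the series C2 makes Z diverge only toward DC, outside 75 Hz - 18.5 kHz.
    w = 2 * math.pi * f
    z_rc = R1 / (1 + 1j * w * R1 * C1)   # parallel RC element
    z_maxwell = R2 + 1 / (1j * w * C2)   # series R-C (Maxwell) branch
    return Rs + z_rc + z_maxwell
```

    Sweeping f over the stated 75 Hz to 18.5 kHz range and comparing |Z| and phase against FFT-IS data is how such transfer resistances would be extracted in practice.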

  16. Electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes: FFT-impedance spectroscopy of the growth process and magnetic properties

    NASA Astrophysics Data System (ADS)

    Gerngross, Mark-Daniel; Carstensen, Jürgen; Föll, Helmut

    2014-06-01

    The electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes has been investigated by fast Fourier transform-impedance spectroscopy (FFT-IS) in the frequency range from 75 Hz to 18.5 kHz. The impedance data could be fitted very well using an electric circuit equivalent model with a series resistance connected in series to a simple resistor-capacitor (RC) element and a Maxwell element. Based on the impedance data, the Co deposition in ultra-high aspect ratio InP membranes can be divided into two different Co deposition processes. The corresponding share of each process on the overall Co deposition can be determined directly from the transfer resistances of the two processes. The impedance data clearly show the beneficial impact of boric acid on the Co deposition and also indicate a diffusion limitation of boric acid in ultra-high aspect ratio InP membranes. The grown Co nanowires are polycrystalline with a very small grain size. They show a narrow hysteresis loop with a preferential orientation of the easy magnetization direction along the long nanowire axis due to the arising shape anisotropy of the Co nanowires.

  17. Architectural Adventures in Your Community

    ERIC Educational Resources Information Center

    Henn, Cynthia A.

    2007-01-01

    Due to architecture's complexity, it can be challenging to develop lessons for the students, and consequently, the teaching of architecture is frequently overlooked. Every community has an architectural history. For example, the community in which the author's students live has a variety of historic houses from when the community originated (the…

  18. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm to allow parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. The exploit involves performing matrix calculations on nVidia graphics cards. The graphical processor unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
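
    The classic Gerchberg-Saxton iteration that MGS builds on can be sketched in plain Python: alternate between the object and Fourier domains, keeping the current phases while replacing magnitudes with the measured ones. This is the textbook error-reduction loop, not JPL's MGS implementation; the signal, size, and seed are arbitrary, and a small recursive FFT stands in for the GPU transform.

```python
import cmath, random

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + t, even[k] - t
    return out

def ifft(X):
    n = len(X)
    return [v.conjugate() / n for v in fft([x.conjugate() for x in X])]

def gerchberg_saxton(obj_mag, fourier_mag, iters=100, seed=0):
    """Recover phases consistent with measured magnitudes in both domains."""
    rng = random.Random(seed)
    # Start from the object magnitudes with random phases.
    g = [m * cmath.exp(2j * cmath.pi * rng.random()) for m in obj_mag]
    for _ in range(iters):
        G = fft(g)
        # Impose measured Fourier magnitudes, keep current phases.
        G = [m * cmath.exp(1j * cmath.phase(v)) for m, v in zip(fourier_mag, G)]
        g = ifft(G)
        # Impose measured object magnitudes, keep current phases.
        g = [m * cmath.exp(1j * cmath.phase(v)) for m, v in zip(obj_mag, g)]
    return g
```

    The FFT inside the loop dominates the cost, which is exactly why mapping it onto SIMD stream processors pays off so well.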

  19. Generic robot architecture

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.

  20. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
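
    The DFT-IDFT overlap-and-save method at the core of these architectures can be sketched as follows. The block size N, filter taps, and the small recursive FFT are illustrative stand-ins for the report's hardware-oriented design; each length-N block reuses the last M-1 input samples and discards the first M-1 (circularly aliased) outputs.

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + t, even[k] - t
    return out

def ifft(X):
    n = len(X)
    return [v.conjugate() / n for v in fft([x.conjugate() for x in X])]

def overlap_save(x, h, N):
    """Filter x with FIR h using length-N DFT blocks (overlap-and-save)."""
    M = len(h)
    step = N - (M - 1)                   # new output samples per block
    H = fft(list(h) + [0.0] * (N - M))   # filter spectrum, computed once
    padded = [0.0] * (M - 1) + list(x)   # prepend M-1 zeros: initial "save"
    y, pos = [], 0
    while pos < len(x):
        seg = padded[pos:pos + N]
        seg += [0.0] * (N - len(seg))    # zero-pad the final short block
        c = ifft([a * b for a, b in zip(fft(seg), H)])
        y.extend(c[M - 1:])              # first M-1 outputs are aliased; drop
        pos += step
    return y[:len(x)]
```

    Splitting a long filter into several such subfilters and summing their outputs is what keeps the DFT-IDFT pairs small even for very high filter orders.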

  1. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently the verification of consistency of these diagrams is needed in order to identify errors in requirements at the early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check practicability (feasibility) of software architecture models.

  2. Instrumented Architectural Simulation System

    NASA Technical Reports Server (NTRS)

    Delagi, B. A.; Saraiya, N.; Nishimura, S.; Byrd, G.

    1987-01-01

    Simulation of systems at an architectural level can offer an effective way to study critical design choices if (1) the performance of the simulator is adequate to examine designs executing significant code bodies, not just toy problems or small application fragments, (2) the details of the simulation include the critical details of the design, (3) the view of the design presented by the simulator instrumentation leads to useful insights on the problems with the design, and (4) there is enough flexibility in the simulation system so that the asking of unplanned questions is not suppressed by the weight of the mechanics involved in making changes either in the design or its measurement. A simulation system with these goals is described together with the approach to its implementation. Its application to the study of a particular class of multiprocessor hardware system architectures is illustrated.

  3. Staged Event Architecture

    SciTech Connect

    Hoschek, Wolfgang; Berket, Karlo

    2005-05-30

    Sea is a framework for a Staged Event Architecture, designed around non-blocking asynchronous communication facilities that are decoupled from the threading model chosen by any given application. Components for IP networking and in-memory communication are provided. The Sea Java library encapsulates these concepts. Sea is used to easily build efficient and flexible low-level network clients and servers, and in particular as a basic communication substrate for Peer-to-Peer applications.

  4. Aerobot Autonomy Architecture

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Hall, Jeffery L.; Kulczycki, Eric A.; Cameron, Jonathan M.; Morfopoulos, Arin C.; Clouse, Daniel S.; Montgomery, James F.; Ansar, Adnan I.; Machuzak, Richard J.

    2009-01-01

    An architecture for autonomous operation of an aerobot (i.e., a robotic blimp) to be used in scientific exploration of planets and moons in the Solar system with an atmosphere (such as Titan and Venus) is undergoing development. This architecture is also applicable to autonomous airships that could be flown in the terrestrial atmosphere for scientific exploration, military reconnaissance and surveillance, and as radio-communication relay stations in disaster areas. The architecture was conceived to satisfy requirements to perform the following functions: a) Vehicle safing, that is, ensuring the integrity of the aerobot during its entire mission, including during extended communication blackouts. b) Accurate and robust autonomous flight control during operation in diverse modes, including launch, deployment of scientific instruments, long traverses, hovering or station-keeping, and maneuvers for touch-and-go surface sampling. c) Mapping and self-localization in the absence of a global positioning system. d) Advanced recognition of hazards and targets in conjunction with tracking of, and visual servoing toward, targets, all to enable the aerobot to detect and avoid atmospheric and topographic hazards and to identify, home in on, and hover over predefined terrain features or other targets of scientific interest. The architecture is an integrated combination of systems for accurate and robust vehicle and flight trajectory control; estimation of the state of the aerobot; perception-based detection and avoidance of hazards; monitoring of the integrity and functionality ("health") of the aerobot; reflexive safing actions; multi-modal localization and mapping; autonomous planning and execution of scientific observations; and long-range planning and monitoring of the mission of the aerobot. The prototype JPL aerobot (see figure) has been tested extensively in various areas in the California Mojave desert.

  5. Information systems definition architecture

    SciTech Connect

    Calapristi, A.J.

    1996-06-20

    The Tank Waste Remediation System (TWRS) Information Systems Definition architecture evaluated Information Management (IM) processes in several key organizations. The intent of the study is to identify improvements in TWRS IM processes that will enable better support to the TWRS mission, and accommodate changes in the TWRS business environment. The ultimate goals of the study are to reduce IM costs, manage the configuration of TWRS IM elements, and improve IM-related process performance.

  6. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kind of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.

  7. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
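
    The conjunction, disjunction, and negation combinators described above can be illustrated with a toy sketch. This is not CERA's actual pattern language, only a minimal functional analogue; the event names and the `before` temporal pattern are invented for the example.

```python
# A pattern is a predicate over a list of (timestamp, name) point events.

def occurs(name):
    # Matches when an event with the given name appears in the stream.
    return lambda events: any(n == name for _, n in events)

def before(first, second):
    # Hypothetical temporal pattern: some 'first' event precedes some 'second'.
    return lambda events: any(
        t1 < t2
        for t1, n1 in events for t2, n2 in events
        if n1 == first and n2 == second)

def conj(*pats):   # all sub-patterns must match
    return lambda events: all(p(events) for p in pats)

def disj(*pats):   # any sub-pattern matches
    return lambda events: any(p(events) for p in pats)

def neg(pat):      # pattern must be absent
    return lambda events: not pat(events)
```

    Because combinators return ordinary predicates, patterns compose recursively, which mirrors how CERA builds rich temporally extended patterns from simpler ones.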

  8. Adaptive-array Electron Cyclotron Emission diagnostics using data streaming in a Software Defined Radio system

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Hamasaki, M.; Fujisawa, A.; Nagashima, Y.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; the QUEST Team

    2016-04-01

    Measurement of the Electron Cyclotron Emission (ECE) spectrum is one of the most popular electron temperature diagnostics in nuclear fusion plasma research. A 2-dimensional ECE imaging system was developed with an adaptive-array approach. A radio-frequency (RF) heterodyne detection system with Software Defined Radio (SDR) devices and a phased-array receiver antenna was used to measure the phase and amplitude of the ECE wave. The SDR heterodyne system could continuously measure the phase and amplitude with sufficient accuracy and time resolution while the previous digitizer system could only acquire data at specific times. Robust streaming phase measurements for adaptive-arrayed continuous ECE diagnostics were demonstrated using Fast Fourier Transform (FFT) analysis with the SDR system. The emission field pattern was reconstructed using adaptive-array analysis. The reconstructed profiles were discussed using profiles calculated from coherent single-frequency radiation from the phased-array antenna.
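
    Extracting the phase and amplitude of a received tone, as the SDR heterodyne system does via FFT analysis, reduces to evaluating a single DFT bin. The sampling rate and tone parameters below are arbitrary illustration values, not those of the QUEST diagnostic.

```python
import cmath, math

def tone_amp_phase(samples, fs, f0):
    """Estimate amplitude and phase of a tone at f0 from real samples
    by correlating against one complex exponential (a single DFT bin)."""
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * f0 * k / fs)
              for k, s in enumerate(samples))
    X = 2 * acc / n          # scale so |X| equals the cosine amplitude
    return abs(X), cmath.phase(X)
```

    Repeating this per antenna element yields the complex weights needed for the adaptive-array reconstruction; the estimate is exact when the window holds an integer number of tone cycles.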

  9. Capital Architecture: Situating symbolism parallel to architectural methods and technology

    NASA Astrophysics Data System (ADS)

    Daoud, Bassam

    Capital Architecture is a symbol of a nation's global presence and the cultural and social focal point of its inhabitants. Since the advent of High-Modernism in Western cities, and subsequently decolonised capitals, civic architecture no longer seems to be strictly grounded in the philosophy that national buildings shape the legacy of government and the way a nation is regarded through its built environment. Amidst an exceedingly globalized architectural practice and with the growing concern of key heritage foundations over the shortcomings of international modernism in representing its immediate socio-cultural context, the contextualization of public architecture within its sociological, cultural and economic framework in capital cities became the key denominator of this thesis. Civic architecture in capital cities is essential to confront the challenges of symbolizing a nation and demonstrating the legitimacy of the government. In today's dominantly secular Western societies, governmental architecture, especially where the seat of political power lies, is the ultimate form of architectural expression in conveying a sense of identity and underlining a nation's status. Departing from these convictions, this thesis investigates the embodied symbolic power, the representative capacity, and the inherent permanence in contemporary architecture, and in its modes of production. Through a vast study on Modern architectural ideals and heritage -- in parallel to methodologies -- the thesis stimulates the future of large scale governmental building practices and aims to identify and index the key constituents that may respond to the lack of representation in civic architecture in capital cities.

  10. Toothbrush Adaptations.

    ERIC Educational Resources Information Center

    Exceptional Parent, 1987

    1987-01-01

    Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)

  11. A Distributed Prognostic Health Management Architecture

    NASA Technical Reports Server (NTRS)

    Bhaskar, Saha; Saha, Sankalita; Goebel, Kai

    2009-01-01

    This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine is higher than the computational power of one computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses beyond a nominal threshold, upon which it coordinates with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
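
    The particle-filtering formulation mentioned above can be illustrated with a minimal bootstrap filter for a scalar state observed in Gaussian noise. The noise levels, particle count, and single-variable setup are simplifying assumptions for the sketch, not the paper's PHM formulation.

```python
import math, random

def particle_filter(observations, n_particles=500, proc_sd=0.05, obs_sd=0.5, seed=1):
    """Bootstrap particle filter: propagate, weight, estimate, resample."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 10.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Propagate each particle with process noise.
        particles = [p + rng.gauss(0.0, proc_sd) for p in particles]
        # Weight by the Gaussian likelihood of the observation.
        weights = [math.exp(-0.5 * ((z - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Weighted mean is the state estimate; uncertainty lives in the cloud.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

    In the distributed setting described above, the propagate/weight/resample steps are what the networked CEs would partition among themselves once a monitored variable crosses its threshold.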

  12. Cognitive Architectures and Autonomy: A Comparative Review

    NASA Astrophysics Data System (ADS)

    Thórisson, Kristinn; Helgasson, Helgi

    2012-05-01

    One of the original goals of artificial intelligence (AI) research was to create machines with very general cognitive capabilities and a relatively high level of autonomy. It has taken the field longer than many had expected to achieve even a fraction of this goal; the community has focused on building specific, targeted cognitive processes in isolation, and as of yet no system exists that integrates a broad range of capabilities or presents a general solution to autonomous acquisition of a large set of skills. Among the reasons for this are the highly limited machine learning and adaptation techniques available, and the inherent complexity of integrating numerous cognitive and learning capabilities in a coherent architecture. In this paper we review selected systems and architectures built expressly to address integrated skills. We highlight principles and features of these systems that seem promising for creating generally intelligent systems with some level of autonomy, and discuss them in the context of the development of future cognitive architectures. Autonomy is a key property for any system to be considered generally intelligent, in our view; we use this concept as an organizing principle for comparing the reviewed systems. Features that remain largely unaddressed in present research, but seem nevertheless necessary for such efforts to succeed, are also discussed.

  13. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  14. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  15. Architectural Lessons: Look Back In Order To Move Forward

    NASA Astrophysics Data System (ADS)

    Huang, T.; Djorgovski, S. G.; Caltagirone, S.; Crichton, D. J.; Hughes, J. S.; Law, E.; Pilone, D.; Pilone, T.; Mahabal, A.

    2015-12-01

    True elegance in scalable and adaptable architecture is not about incorporating the latest and greatest technologies; it is measured by the architecture's ability to scale and adapt as its operating environment evolves over time. Architecture is the link that bridges people, process, policies, interfaces, and technologies. Architectural development begins by observing the relationships that really matter to the problem domain. It continues with the creation of a single, shared, evolving pattern language, which everyone contributes to and everyone can use [C. Alexander, 1979]. Architects are true artists. Like all masterpieces, the value and strength of an architecture are measured not by the volume of publications but by its ability to evolve. An architect must look back in order to move forward. This talk discusses some prior works, including an onboard data analysis system, a knowledgebase system, and a cloud-based Big Data platform, as enablers that help shape the new generation of Earth Science projects at NASA and EarthCube, where a community-driven architecture is the key to enabling data-intensive science. [C. Alexander, The Timeless Way of Building, Oxford University, 1979.]

  16. The path to adaptive microsystems

    NASA Astrophysics Data System (ADS)

    Zolper, John C.; Biercuk, Michael J.

    2006-05-01

    Scaling trends in microsystems are discussed frequently in the technical community, providing a short-term perspective on the future of integrated microsystems. This paper looks beyond the leading edge of technological development, focusing on new microsystem design paradigms that move far beyond today's systems based on static components. We introduce the concept of Adaptive Microsystems and outline a path to realizing these systems-on-a-chip. The role of DARPA in advancing future components and systems research is discussed, and specific DARPA efforts enabling and producing adaptive microsystems are presented. In particular, we discuss efforts underway in the DARPA Microsystems Technology Office (MTO) including programs in novel circuit architectures (3DIC), adaptive imaging and sensing (AFPA, VISA, MONTAGE, A-to-I) and reconfigurable RF/Microwave devices (SMART, TFAST, IRFFE).

  17. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  18. Integrating the services' imagery architectures

    NASA Astrophysics Data System (ADS)

    Mader, John F.

    1993-04-01

    Any military organization requiring imagery must deal with one or more of several architectures: the tactical architectures of the three military departments, the theater architectures, and their interfaces to a separate national architecture. A seamless, joint, integrated architecture must meet today's imagery requirements. The CIO's vision of 'the right imagery to the right people in the right format at the right time' would serve well as the objective of a joint, integrated architecture. A joint imagery strategy should be initially shaped by the four pillars of the National Military Strategy of the United States: strategic deterrence; forward presence; crisis response; and reconstitution. In a macro view, it must consist of a series of sub-strategies to include science and technology and research and development, maintenance of the imagery related industrial base, acquisition, resource management, and burden sharing. Common imagery doctrine must follow the imagery strategy. Most of all, control, continuity, and direction must be maintained with regard to organizations and systems development as the architecture evolves. These areas and more must be addressed to reach the long term goal of a joint, integrated imagery architecture. This will require the services and theaters to relinquish some sovereignty over at least systems development and acquisition. Nevertheless, the goal of a joint, integrated imagery architecture is feasible. The author presents arguments and specific recommendations to orient the imagery community in the direction of a joint, integrated imagery architecture.

  19. Emerging hierarchies in dynamically adapting webs

    NASA Astrophysics Data System (ADS)

    Katifori, Eleni; Graewer, Johannes; Magnasco, Marcelo; Modes, Carl

    Transport networks play a key role across four realms of eukaryotic life: slime molds, fungi, plants, and animals. In addition to the developmental algorithms that build them, many also employ adaptive strategies to respond to stimuli, damage, and other environmental changes. We model these adapting network architectures using a generic dynamical system on weighted graphs and find in simulation that these networks ultimately develop a hierarchical organization of the final weighted architecture accompanied by the formation of a system-spanning backbone. We quantify the hierarchical organization of the networks by developing an algorithm that decomposes the architecture into multiple scales and analyzes how the organization in each scale relates to that of the scale above and below it. The methodologies developed in this work are applicable to a wide range of systems including the slime mold Physarum polycephalum, human microvasculature, and force chains in granular media.
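    The adaptive dynamics described above can be illustrated with a toy current-reinforcement model. This is a hypothetical simplification of the paper's dynamical system, not the authors' code: flow is routed along the currently strongest path, used edges are reinforced, and all edges decay, so one route emerges as a backbone while the other withers.

```python
def best_path(weights, paths):
    """Pick the candidate path whose weakest edge has the highest weight."""
    return max(paths, key=lambda p: min(weights[e] for e in p))

def adapt(weights, paths, steps=50, gain=0.3, decay=0.1):
    """Reinforce edges carrying flow; decay every edge each step."""
    for _ in range(steps):
        used = best_path(weights, paths)
        for e in weights:
            flow = 1.0 if e in used else 0.0
            weights[e] += gain * flow - decay * weights[e]
    return weights

# Two parallel routes between source and sink; route A starts slightly ahead.
w = {"a1": 1.1, "a2": 1.1, "b1": 1.0, "b2": 1.0}
paths = [("a1", "a2"), ("b1", "b2")]
w = adapt(w, paths)
```

After a few dozen steps, route A's weights converge toward the fixed point gain/decay while route B's decay toward zero: the system-spanning backbone of the abstract, in miniature.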

  20. The EPOS ICT Architecture

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Bailo, Daniele

    2016-04-01

    The EPOS-PP Project 2010-2014 proposed an architecture and demonstrated feasibility with a prototype. Requirements based on use cases were collected and an inventory of assets (e.g. datasets, software, users, computing resources, equipment/detectors, laboratory services) (RIDE) was developed. The architecture evolved through three stages of refinement with much consultation both with the EPOS community, representing EPOS users and participants in geoscience, and with the wider ICT community, especially those working in research efforts such as the RDA (Research Data Alliance). The architecture consists of a central ICS (Integrated Core Services) consisting of a portal and catalog, the latter providing to end-users a 'map' of all EPOS resources (datasets, software, users, computing, equipment/detectors etc.). ICS is extended to ICS-d (distributed ICS) for certain services (such as visualisation software services or Cloud computing resources) and CES (Computational Earth Science) for specific simulation or analytical processing. ICS also communicates with TCS (Thematic Core Services) which represent European-wide portals to national and local assets, resources and services in the various specific domains (e.g. seismology, volcanology, geodesy) of EPOS. The EPOS-IP project 2015-2019 started October 2015. Two work-packages cover the ICT aspects; WP6 involves interaction with the TCS while WP7 concentrates on ICS including interoperation with ICS-d and CES offerings: in short, the ICT architecture. Based on the experience and results of EPOS-PP the ICT team held a pre-meeting in July 2015 and set out a project plan. The first major activity involved requirements (re-)collection with use cases and also updating the inventory of assets held by the various TCS in EPOS. The RIDE database of assets is currently being converted to CERIF (Common European Research Information Format - an EU Recommendation to Member States) to provide the basis for the EPOS-IP ICS Catalog.

  1. Teacher Adaptation to Open Learning Spaces

    ERIC Educational Resources Information Center

    Alterator, Scott; Deed, Craig

    2013-01-01

    The "open classroom" emerged as a reaction against the industrial-era enclosed and authoritarian classroom. Although contemporary school architecture continues to incorporate and express ideas of openness, more research is needed about how teachers adapt to new and different built contexts. Our purpose is to identify teacher reaction to…

  2. "Unwalling" the Classroom: Teacher Reaction and Adaptation

    ERIC Educational Resources Information Center

    Deed, Craig; Lesko, Thomas

    2015-01-01

    Modern open school architecture abstractly expresses ideas about choice, flexibility and autonomy. While open spaces express and authorise different teaching practice, these versions of school and classrooms present challenges to teaching routines and practice. This paper examines how teachers adapt as they move into new school buildings designed…

  3. Adaptive Modeling Language and Its Derivatives

    NASA Technical Reports Server (NTRS)

    Chemaly, Adel

    2006-01-01

    Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.

  4. Architecture for Teraflop Visualization

    SciTech Connect

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we combine human insight with computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  5. Architecture for robot intelligence

    NASA Technical Reports Server (NTRS)

    Peters, II, Richard Alan (Inventor)

    2004-01-01

    An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a DBAM that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.

  6. Mind and Language Architecture

    PubMed Central

    Logan, Robert K

    2010-01-01

    A distinction is made between the brain and the mind. The architecture of the mind and language is then described within a neo-dualistic framework. A model for the origin of language based on emergence theory is presented. The complexity of hominid existence due to tool making, the control of fire and the social cooperation that fire required gave rise to a new level of order in mental activity and triggered the simultaneous emergence of language and conceptual thought. The mind is shown to have emerged as a bifurcation of the brain with the emergence of language. The role of language in the evolution of human culture is also described. PMID:20922045

  7. Architecture, constraints, and behavior

    PubMed Central

    Doyle, John C.; Csete, Marie

    2011-01-01

    This paper aims to bridge progress in neuroscience involving sophisticated quantitative analysis of behavior, including the use of robust control, with other relevant conceptual and theoretical frameworks from systems engineering, systems biology, and mathematics. Familiar and accessible case studies are used to illustrate concepts of robustness, organization, and architecture (modularity and protocols) that are central to understanding complex networks. These essential organizational features are hidden during normal function of a system but are fundamental for understanding the nature, design, and function of complex biologic and technologic systems. PMID:21788505

  8. Etruscan Divination and Architecture

    NASA Astrophysics Data System (ADS)

    Magli, Giulio

    The Etruscan religion was characterized by divination methods, aimed at interpreting the will of the gods. These methods were revealed by the gods themselves and written in the books of the Etrusca Disciplina. The books are lost, but parts of them are preserved in the accounts of later Latin sources. According to such traditions divination was tightly connected with the Etruscan cosmovision of a Pantheon distributed in equally spaced, specific sectors of the celestial realm. We explore here the possible reflections of such issues in the Etruscan architectural remains.

  9. TROPIX Power System Architecture

    NASA Technical Reports Server (NTRS)

    Manner, David B.; Hickman, J. Mark

    1995-01-01

    This document contains results obtained in the process of performing a power system definition study of the TROPIX power management and distribution system (PMAD). Requirements derived from the PMAD's interaction with other spacecraft systems are discussed first. Since the design is dependent on the performance of the photovoltaics, there is a comprehensive discussion of the appropriate models for cells and arrays. A trade study of the array operating voltage and its effect on array bus mass is also presented. A system architecture is developed which makes use of a combination of high efficiency switching power converters and analog regulators. Mass and volume estimates are presented for all subsystems.

  10. Programmable bandwidth management in software-defined EPON architecture

    NASA Astrophysics Data System (ADS)

    Li, Chengjun; Guo, Wei; Wang, Wei; Hu, Weisheng; Xia, Ming

    2016-07-01

    This paper proposes a software-defined EPON architecture which replaces the hardware-implemented DBA module with a reprogrammable DBA module. The DBA module allows pluggable bandwidth allocation algorithms among multiple ONUs adaptive to traffic profiles and network states. We also introduce a bandwidth management scheme executed at the controller to manage the customized DBA algorithms for all data queues of ONUs. Our performance investigation verifies the effectiveness of this new EPON architecture, and numerical results show that software-defined EPONs can achieve less traffic delay and provide better support for service differentiation in comparison with traditional EPONs.
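    The "pluggable DBA" idea above can be sketched as a strategy registry at the controller. All names here (`Controller`, `plug`, the two toy allocators) are hypothetical; a real DBA operates on MPCP REPORT/GATE messages, not dictionaries.

```python
def fixed_share(requests, capacity):
    """Grant every ONU an equal share, capped at its request."""
    share = capacity // len(requests)
    return {onu: min(share, r) for onu, r in requests.items()}

def proportional(requests, capacity):
    """Grant bandwidth in proportion to each ONU's request."""
    total = sum(requests.values()) or 1
    return {onu: capacity * r // total for onu, r in requests.items()}

class Controller:
    """Holds swappable DBA algorithms; none are baked into hardware."""
    def __init__(self):
        self.algorithms = {"fixed": fixed_share, "proportional": proportional}
        self.active = "fixed"

    def plug(self, name, fn):
        """Register a customized DBA algorithm at runtime."""
        self.algorithms[name] = fn

    def allocate(self, requests, capacity):
        return self.algorithms[self.active](requests, capacity)

ctrl = Controller()
grants = ctrl.allocate({"onu1": 400, "onu2": 100}, capacity=600)
ctrl.active = "proportional"       # swap the algorithm without new hardware
grants2 = ctrl.allocate({"onu1": 400, "onu2": 100}, capacity=600)
```

Swapping `ctrl.active` (or calling `plug`) is the software-defined step: the allocation policy changes per traffic profile while the forwarding hardware stays fixed.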

  11. Architectures of small satellite programs in developing countries

    NASA Astrophysics Data System (ADS)

    Wood, Danielle; Weigel, Annalisa

    2014-04-01

    Global participation in space activity is growing as satellite technology matures and spreads. Countries in Africa, Asia and Latin America are creating or reinvigorating national satellite programs. These countries are building local capability in space through technological learning. This paper analyzes implementation approaches in small satellite programs within developing countries. The study addresses diverse examples of approaches used to master, adapt, diffuse and apply satellite technology in emerging countries. The work focuses on government programs that represent the nation and deliver services that provide public goods such as environmental monitoring. An original framework developed by the authors examines implementation approaches and contextual factors using the concept of Systems Architecture. The Systems Architecture analysis defines the satellite programs as systems within a context which execute functions via forms in order to achieve stakeholder objectives. These Systems Architecture definitions are applied to case studies of six satellite projects executed by countries in Africa and Asia. The architectural models used by these countries in various projects reveal patterns in the areas of training, technical specifications and partnership style. Based on these patterns, three Archetypal Project Architectures are defined which link the contextual factors to the implementation approaches. The three Archetypal Project Architectures lead to distinct opportunities for training, capability building and end user services.

  12. An implementation of SISAL for distributed-memory architectures

    SciTech Connect

    Beard, P.C.

    1995-06-01

    This thesis describes a new implementation of the implicitly parallel functional programming language SISAL, for massively parallel processor supercomputers. The Optimizing SISAL Compiler (OSC), developed at Lawrence Livermore National Laboratory, was originally designed for shared-memory multiprocessor machines and has been adapted to distributed-memory architectures. OSC has been relatively portable between shared-memory architectures, because they are architecturally similar, and OSC generates portable C code. However, distributed-memory architectures are not standardized -- each has a different programming model. Distributed-memory SISAL depends on a layer of software that provides a portable, distributed, shared-memory abstraction. This layer is provided by Split-C, a dialect of the C programming language developed at U.C. Berkeley, which has demonstrated good performance on distributed-memory architectures. Split-C provides important capabilities for good performance: support for program-specific distributed data structures, and split-phase memory operations. Distributed data structures help achieve good memory locality, while split-phase memory operations help tolerate the longer communication latencies inherent in distributed-memory architectures. The distributed-memory SISAL compiler and run-time system take advantage of these capabilities. The result of these efforts is a compiler that runs identically on the Thinking Machines Connection Machine (CM-5) and the Meiko Computing Surface (CS-2).
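    The split-phase pattern Split-C relies on can be illustrated outside C as well. The sketch below is an analogy, not Split-C: a "remote" read is initiated asynchronously, independent local work overlaps the communication latency, and the caller synchronizes only when the value is needed. The `remote_get` function and its latency figure are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def remote_get(store, key, latency=0.05):
    """Stand-in for a remote memory read with network latency."""
    time.sleep(latency)
    return store[key]

store = {"x": 41}
with ThreadPoolExecutor() as pool:
    handle = pool.submit(remote_get, store, "x")  # phase 1: initiate the read
    local = 1                                     # useful work overlaps latency
    result = handle.result() + local              # phase 2: sync, then use value
```

In Split-C the two phases are expressed with split-phase assignment and a sync statement; the benefit is the same: the processor is not idle for the full round-trip.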

  13. The ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Farris, Allen; Sommer, Heiko

    2004-09-01

    The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.
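    The Container/Component split above can be sketched as dependency injection. This is an illustration of the pattern, not ACS itself (ACS is CORBA-based middleware); the class names and the logging service are hypothetical.

```python
class Container:
    """Technical concerns live here: the component never builds its
    own logging, remote access, or serialization machinery."""
    def __init__(self):
        self.log_lines = []

    def logger(self, msg):
        self.log_lines.append(msg)

class CorrelatorComponent:
    """Functional code only; all services are injected by the container."""
    def __init__(self, services):
        self.services = services

    def process(self, samples):
        self.services.logger(f"processing {len(samples)} samples")
        return sum(samples) / len(samples)

container = Container()
comp = CorrelatorComponent(container)
mean = comp.process([1.0, 2.0, 3.0])   # logging handled by the container
```

Because the component only sees the services interface, the same functional code runs unchanged whether the container is a local test stub or full CORBA-backed middleware.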

  14. Architectures for intelligent machines

    NASA Technical Reports Server (NTRS)

    Saridis, George N.

    1991-01-01

    The theory of intelligent machines has been recently reformulated to incorporate new architectures that are using neural and Petri nets. The analytic functions of an intelligent machine are implemented by intelligent controls, using entropy as a measure. The resulting hierarchical control structure is based on the principle of increasing precision with decreasing intelligence. Each of the three levels of the intelligent control uses a different architecture, in order to satisfy the requirements of the principle: the organization level is modeled after a Boltzmann machine for abstract reasoning, task planning and decision making; the coordination level is composed of a number of Petri net transducers supervised, for command exchange, by a dispatcher, which also serves as an interface to the organization level; the execution level includes the sensory, navigation-planning, and control hardware, which interacts one-to-one with the appropriate coordinators, while a VME bus provides a channel for database exchange among the several devices. This system is currently implemented on a robotic transporter, designed for space construction at the CIRSSE laboratories at the Rensselaer Polytechnic Institute. The progress of its development is reported.
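    The "increasing precision with decreasing intelligence" principle can be sketched as a three-stage refinement. This toy pipeline is hypothetical (the tasks, tables, and command strings are invented for illustration) and omits the Boltzmann machine and Petri net machinery the paper actually uses.

```python
def organization(goal):
    """Most abstract, most 'intelligent' level: choose a task plan."""
    return ["navigate", "grasp"] if goal == "fetch" else []

def coordination(task):
    """Middle level: expand each task into device-level subtasks."""
    table = {"navigate": ["plan_path", "follow_path"],
             "grasp": ["open_gripper", "close_gripper"]}
    return table[task]

def execution(subtask):
    """Lowest level: precise command for one piece of hardware."""
    return f"cmd:{subtask}"

# Each level refines the one above: abstract goal -> tasks -> commands.
commands = [execution(s) for t in organization("fetch") for s in coordination(t)]
```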

  15. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.

  16. Data management system advanced architectures

    NASA Technical Reports Server (NTRS)

    Chevers, ED

    1991-01-01

    The topics relating to the Space Station Freedom (SSF) are presented in view graph form and include: (1) the data management system (DMS) concept; (2) DMS evolution rationale; (3) the DMS advanced architecture task; (4) DMS group support for Ames payloads; (5) DMS testbed development; (6) the DMS architecture task status; (7) real time multiprocessor testbed; (8) networked processor performance; and (9) the DMS advanced architecture task 1992 goals.

  17. Rutgers CAM2000 chip architecture

    NASA Technical Reports Server (NTRS)

    Smith, Donald E.; Hall, J. Storrs; Miyake, Keith

    1993-01-01

    This report describes the architecture and instruction set of the Rutgers CAM2000 memory chip. The CAM2000 combines features of Associative Processing (AP), Content Addressable Memory (CAM), and Dynamic Random Access Memory (DRAM) in a single chip package that is not only DRAM compatible but capable of applying simple massively parallel operations to memory. This document reflects the current status of the CAM2000 architecture and is continually updated to reflect the current state of the architecture and instruction set.

  18. Adaptive Development

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such, they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g., distributed vectored, hybrid, and electric-drive propulsion concepts).

  19. Demand Activated Manufacturing Architecture

    SciTech Connect

    Bender, T.R.; Zimmerman, J.J.

    2001-02-07

    Honeywell Federal Manufacturing & Technologies (FM&T) engineers John Zimmerman and Tom Bender directed separate projects within this CRADA; this Project Accomplishments Summary presents their reports separately. Zimmerman: In 1998 Honeywell FM&T partnered with the Demand Activated Manufacturing Architecture (DAMA) Cooperative Business Management Program to pilot the Supply Chain Integration Planning Prototype (SCIP). At the time, FM&T was developing an enterprise-wide supply chain management prototype called the Integrated Programmatic Scheduling System (IPSS) to improve the DOE's Nuclear Weapons Complex (NWC) supply chain. In the CRADA partnership, FM&T provided the IPSS technical and business infrastructure as a test bed for SCIP technology, and this would provide FM&T the opportunity to evaluate SCIP as the central schedule engine and decision support tool for IPSS. FM&T agreed to do the bulk of the work for piloting SCIP. In support of that aim, DAMA needed specific DOE Defense Programs opportunities to prove the value of its supply chain architecture and tools. In this partnership, FM&T teamed with Sandia National Labs (SNL), Division 6534, the other DAMA partner and developer of SCIP. FM&T tested SCIP in 1998 and 1999. Testing ended in 1999 when DAMA CRADA funding for FM&T ceased. Before entering the partnership, FM&T discovered that the DAMA SCIP technology had an array of applications in strategic, tactical, and operational planning and scheduling. At the time, FM&T planned to improve its supply chain performance by modernizing the NWC-wide planning and scheduling business processes and tools. The modernization took the form of a distributed client-server planning and scheduling system (IPSS) for planners and schedulers to use throughout the NWC on desktops through an off-the-shelf Web browser.
The planning and scheduling process within the NWC then, and today, is a labor-intensive paper-based method that plans and schedules more than 8,000 shipped parts

  20. Software synthesis using generic architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    A framework for synthesizing software systems based on abstracting software system designs and the design process is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture, as opposed to completely automating the design of systems. Our approach is illustrated with an implemented example of a generic tracking architecture that was customized in two different domains. We also describe how the designs produced using KASE compare to the original designs of the two systems, along with current work and plans for extending KASE to other application areas.

  1. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  2. Flight Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  3. Adaptive Attitude Control of the Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Muse, Jonathan

    2010-01-01

    An H(sub infinity)-NMA architecture for the Crew Launch Vehicle was developed in a state feedback setting. The minimal-complexity adaptive law was shown to improve baseline performance relative to a performance metric based on Crew Launch Vehicle design requirements for almost all of the Worst-on-Worst dispersion cases. The adaptive law was able to maintain stability for some dispersions that are unstable with the nominal control law. Due to the nature of the H(sub infinity)-NMA architecture, the augmented adaptive control signal has low bandwidth, which is a great benefit for a manned launch vehicle.

  4. 9. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) Annotated lithograph on paper. Standard plan used for construction of Commissary Sergeants Quarters, 1876. PLAN, FRONT AND SIDE ELEVATIONS, SECTION - Fort Myer, Commissary Sergeant's Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  5. The Architecture of Exoplanets

    NASA Astrophysics Data System (ADS)

    Hatzes, Artie P.

    2016-05-01

    Prior to the discovery of exoplanets our expectations of their architecture were largely driven by the properties of our solar system. We expected giant planets to lie in the outer regions and rocky planets in the inner regions. Planets should probably only occupy orbital distances 0.3-30 AU from the star. Planetary orbits should be circular, prograde and in the same plane. The reality of exoplanets has shattered these expectations. Jupiter-mass, Neptune-mass, Superearth, and even Earth-mass planets can orbit within 0.05 AU of their stars, sometimes with orbital periods of less than one day. Exoplanetary orbits can be eccentric, misaligned, and even retrograde. Radial velocity surveys gave the first hints that the occurrence rate increases with decreasing mass. This was put on a firm statistical basis with the Kepler mission, which clearly demonstrated that there were more Neptune- and Superearth-sized planets than Jupiter-sized planets. These are often in multiple, densely packed systems where the planets all orbit within 0.3 AU of the star, a result also suggested by radial velocity surveys. Exoplanets also exhibit diversity along the main sequence. Massive stars tend to have a higher frequency of planets (≈20-25%) that tend to be more massive (M ≈ 5-10 M_{Jup}). Giant planets around low-mass stars are rare, but these stars show an abundance of small (Neptune and Superearth) planets in multiple systems. Planet formation is also not restricted to single stars, as the Kepler mission has discovered several circumbinary planets. Although we have learned much about the architecture of planets over the past 20 years, we know little about the census of small planets at relatively large (a > 1 AU) orbital distances. We have yet to find a planetary system that is analogous to our own solar system. The question of how unique the properties of our own solar system are remains unanswered. Advancements in the detection methods of small planets over a wide range

  6. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.
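    The advantage of local thresholds over one global cutoff can be sketched in a few lines. This is an illustrative toy, not ADAPT's actual topological algorithm; the block-based scheme, the 50% relevance fraction, and the synthetic field are all assumptions:

```python
import numpy as np

def local_thresholds(field, block=8, rel=0.5):
    """Threshold each block at a fraction of its own local maximum
    (a stand-in for relevance-based local thresholds; the blocking
    scheme and the 50% fraction are illustrative assumptions)."""
    h, w = field.shape
    mask = np.zeros_like(field, dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = field[i:i + block, j:j + block]
            mask[i:i + block, j:j + block] = tile >= rel * tile.max()
    return mask

# A weak bump next to a strong one: a single global threshold keyed to
# the strong peak misses the weak feature; local thresholds keep both.
y, x = np.mgrid[0:16, 0:16]
field = (np.exp(-((x - 4)**2 + (y - 4)**2) / 4.0)
         + 5 * np.exp(-((x - 12)**2 + (y - 12)**2) / 4.0))
local = local_thresholds(field)
global_mask = field >= 0.5 * field.max()
print(local[4, 4], global_mask[4, 4])   # True False
```

    With a global threshold at half the field maximum, only the strong feature survives; the per-block thresholds recover the weak vortex-like bump as well.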

  7. MSAT network architecture

    NASA Technical Reports Server (NTRS)

    Davies, N. G.; Skerry, B.

    1990-01-01

    The Mobile Satellite (MSAT) communications system will support mobile voice and data services using circuit switched and packet switched facilities with interconnection to the public switched telephone network and private networks. Control of the satellite network will reside in a Network Control System (NCS) which is being designed to be extremely flexible to provide for the operation of the system initially with one multi-beam satellite, but with capability to add additional satellites which may have other beam configurations. The architecture of the NCS is described. The signalling system must be capable of supporting the protocols for the assignment of circuits for mobile public telephone and private network calls as well as identifying packet data networks. The structure of a straw-man signalling system is discussed.

  8. Planning in subsumption architectures

    NASA Technical Reports Server (NTRS)

    Chalfant, Eugene C.

    1994-01-01

    A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.

  9. BioArchitecture

    PubMed Central

    Gunning, Peter

    2012-01-01

    BioArchitecture is a term used to describe the organization and regulation of biological space. It applies to the principles which govern the structure of molecules, polymers and multiprotein complexes, organelles, membranes and their organization in the cytoplasm and the nucleus. It also covers the integration of cells into their three dimensional environment at the level of cell-matrix and cell-cell interactions, integration into tissue/organ structure and function, and finally into the structure of the organism. This review will highlight studies at all these levels which are providing a new way to think about the relationship between the organization of biological space and the function of biological systems. PMID:23267413

  10. The architecture of personality.

    PubMed

    Cervone, David

    2004-01-01

    This article presents a theoretical framework for analyzing psychological systems that contribute to the variability, consistency, and cross-situational coherence of personality functioning. In the proposed knowledge-and-appraisal personality architecture (KAPA), personality structures and processes are delineated by combining 2 principles: distinctions (a) between knowledge structures and appraisal processes and (b) among intentional cognitions with varying directions of fit, with the latter distinction differentiating among beliefs, evaluative standards, and aims. Basic principles of knowledge activation and use illuminate relations between knowledge and appraisal, yielding a synthetic account of personality structures and processes. Novel empirical data illustrate the heuristic value of the knowledge/appraisal distinction by showing how self-referent and situational knowledge combine to foster cross-situational coherence in appraisals of self-efficacy. PMID:14756593

  11. Functional Biomimetic Architectures

    NASA Astrophysics Data System (ADS)

    Levine, Paul M.

    N-substituted glycine oligomers, or 'peptoids,' are a class of sequence-specific foldamers composed of tertiary amide linkages, engendering proteolytic stability and enhanced cellular permeability. Peptoids are notable for their facile synthesis, sequence diversity, and ability to fold into distinct secondary structures. In an effort to establish new functional peptoid architectures, we utilize the copper-catalyzed azide-alkyne [3+2] cycloaddition (CuAAC) reaction to generate peptidomimetic assemblies bearing bioactive ligands that specifically target and modulate Androgen Receptor (AR) activity, a major therapeutic target for prostate cancer. Additionally, we explore chemical ligation protocols to generate semi-synthetic hybrid biomacromolecules capable of exhibiting novel structures and functions not accessible to fully biosynthesized proteins.

  12. Power Systems Control Architecture

    SciTech Connect

    James Davidson

    2005-01-01

    A diagram provided in the report depicts the complexity of the power systems control architecture used by the national power structure. It shows the structural hierarchy and the relationship of each system to the other systems interconnected to it. Each of these levels provides a different focus for vulnerability testing and has its own weaknesses. In evaluating each level, of prime concern is what vulnerabilities exist that provide a path into the system, either to cause the system to malfunction or to take control of a field device. An additional vulnerability to consider is whether the system can be compromised in such a manner that the attacker can obtain critical information about the system and the portion of the national power structure that it controls.

  13. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively, with particular attention to the design approach which utilizes a cache memory associated with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of a hit ratio. Memory management is envisioned as a virtual memory system implemented either through segmentation or paging. Addressing is discussed in terms of the various register designs adopted by current computers and those of advanced design.

  14. Mars Exploration Architecture

    NASA Technical Reports Server (NTRS)

    Jordan, James F.; Miller, Sylvia L.

    2000-01-01

    The architecture of NASA's program of robotic Mars exploration missions received an intense scrutiny during the summer months of 1998. We present here the results of that scrutiny, and describe a list of Mars exploration missions which are now being proposed by the nation's space agency. The heart of the new program architecture consists of missions which will return samples of Martian rocks and soil back to Earth for analysis. A primary scientific goal for these missions is to understand Mars as a possible abode of past or present life. The current level of sophistication for detecting markers of biological processes and fossil or extant life forms is much higher in Earth-based laboratories than possible with remotely deployed instrumentation, and will remain so for at least the next decade. Hence, bringing Martian samples back to Earth is considered the best way to search for the desired evidence. A Mars sample return mission takes approximately three years to complete. Transit from Earth to Mars requires almost a year. After almost a year at Mars, during which orbital and surface operations can take place and the correct return launch energy constraints are met, a Mars-to-Earth return flight can be initiated. This return leg also takes approximately one year. Opportunities to launch these 3-year sample return missions occur about every 2 years. The figure depicts schedules for flights to and from Mars for Earth launches in 2003, 2005, 2007 and 2009. Transits for flight angles, measured from the sun, of less than 180 deg and of more than 180 deg are both shown.

  15. Secure Storage Architectures

    SciTech Connect

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine; Koch, Scott M; Naughton, III, Thomas J; Pogge, James R; Scott, Stephen L; Shipman, Galen M; Sorrillo, Lawrence

    2015-01-01

    The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: - review of relevant parallel filesystem technologies to identify assets and gaps; - review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; - definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); - investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystems used for HPC, with particular attention to elements that contribute to creating secure storage. We outline the pieces of a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data. Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in Chapter 3.1. These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystems (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network and practical complications with dynamic additions to the storage network, e.g., LNET. A challenge for VM-based use cases will be to provide efficient IO forwarding of the parallel filesystem from the host to the guest (VM). 
There are promising options like para-virtualized filesystems to

  16. A Tool for Managing Software Architecture Knowledge

    SciTech Connect

    Babar, Muhammad A.; Gorton, Ian

    2007-08-01

    This paper describes a tool for managing architectural knowledge and rationale. The tool has been developed to support a framework for capturing and using architectural knowledge to improve the architecture process. This paper describes the main architectural components and features of the tool. The paper also provides examples of using the tool for supporting well-known architecture design and analysis methods.

  17. SpaceWire Architectures: Present and Future

    NASA Technical Reports Server (NTRS)

    Rakow, Glen Parker

    2006-01-01

    A viewgraph presentation on current and future SpaceWire architectures is shown. The topics include: 1) Current SpaceWire Architectures: Swift Data Flow; 2) Current SpaceWire Architectures: LRO Data Flow; 3) Current SpaceWire Architectures: JWST Data Flow; 4) Current SpaceWire Architectures; 5) Traditional Systems; 6) Future Systems; 7) Advantages; and 8) System Engineer Toolkit.

  18. Architectural Portfolio 2001: Main Winners.

    ERIC Educational Resources Information Center

    American School & University, 2001

    2001-01-01

    Presents descriptions and photographs of the following two American School and University Architectural Portfolio main winners for 2001: Chesterton, Indiana's Chesterton High School and Lied Library at the University of Nevada, Las Vegas. Included are each project's vital statistics, the architectural firm involved, and a list of designers. (GR)

  19. Dynamic Weather Routes Architecture Overview

    NASA Technical Reports Server (NTRS)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, the required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  20. Interior Design in Architectural Education

    ERIC Educational Resources Information Center

    Gurel, Meltem O.; Potthoff, Joy K.

    2006-01-01

    The domain of interiors constitutes a point of tension between practicing architects and interior designers. Design of interior spaces is a significant part of architectural profession. Yet, to what extent does architectural education keep pace with changing demands in rendering topics that are identified as pertinent to the design of interiors?…

  1. Full-Scale Flight Research Testbeds: Adaptive and Intelligent Control

    NASA Technical Reports Server (NTRS)

    Pahle, Joe W.

    2008-01-01

    This viewgraph presentation describes the adaptive and intelligent control methods used for aircraft survival. The contents include: 1) Motivation for Adaptive Control; 2) Integrated Resilient Aircraft Control Project; 3) Full-scale Flight Assets in Use for IRAC; 4) NASA NF-15B Tail Number 837; 5) Gen II Direct Adaptive Control Architecture; 6) Limited Authority System; and 7) 837 Flight Experiments. A simulated destabilization failure analysis along with experience and lessons learned are also presented.

  2. Space Elevators Preliminary Architectural View

    NASA Astrophysics Data System (ADS)

    Pullum, L.; Swan, P. A.

    Space Systems Architecture has been expanded into a process by the US Department of Defense for its large-scale system-of-systems development programs. This paper uses the steps in that process to establish a framework for developing Space Elevator systems and provides a methodology for managing complexity. This new approach to developing a family of systems is based upon three architectural views: Operational View (OV), Systems View (SV), and Technical Standards View (TV). The top-level view of the process establishes the stages for the development of the first Space Elevator and is called Architectural View - 1, Overview and Summary. This paper will show the guidelines and steps of the process while focusing upon components of the Space Elevator Preliminary Architecture View. This Preliminary Architecture View is presented as a draft starting point for the Space Elevator Project.

  3. Mission Architecture Comparison for Human Lunar Exploration

    NASA Technical Reports Server (NTRS)

    Geffre, Jim; Robertson, Ed; Lenius, Jon

    2006-01-01

    The Vision for Space Exploration outlines a bold new national space exploration policy that holds as one of its primary objectives the extension of human presence outward into the Solar System, starting with a return to the Moon in preparation for the future exploration of Mars and beyond. The National Aeronautics and Space Administration is currently engaged in several preliminary analysis efforts in order to develop the requirements necessary for implementing this objective in a manner that is both sustainable and affordable. Such analyses investigate various operational concepts, or mission architectures, by which humans can best travel to the lunar surface, live and work there for increasing lengths of time, and then return to Earth. This paper reports on a trade study conducted in support of NASA's Exploration Systems Mission Directorate investigating the relative merits of three alternative lunar mission architecture strategies. The three architectures use for reference a lunar exploration campaign consisting of multiple 90-day expeditions to the Moon's polar regions, a strategy which was selected for its high perceived scientific and operational value. The first architecture discussed incorporates the lunar orbit rendezvous approach employed by the Apollo lunar exploration program. This concept has been adapted from Apollo to meet the particular demands of a long-stay polar exploration campaign while assuring the safe return of crew to Earth. Lunar orbit rendezvous is also used as the baseline against which the other alternate concepts are measured. The first such alternative, libration point rendezvous, utilizes the unique characteristics of the cislunar libration point instead of a low altitude lunar parking orbit as a rendezvous and staging node. Finally, a mission strategy which does not incorporate rendezvous after the crew ascends from the Moon is also studied. 
In this mission strategy, the crew returns directly to Earth from the lunar surface, and is

  4. A neuro-fuzzy architecture for real-time applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song

    1992-01-01

    Neural networks and fuzzy expert systems perform the same task of functional mapping using entirely different approaches. Each approach has certain unique features. The ability to learn specific input-output mappings from large input/output data possibly corrupted by noise and the ability to adapt or continue learning are some important features of neural networks. Fuzzy expert systems are known for their ability to deal with fuzzy information and incomplete/imprecise data in a structured, logical way. Since both of these techniques implement the same task (that of functional mapping--we regard 'inferencing' as one specific category under this class), a fusion of the two concepts that retains their unique features while overcoming their individual drawbacks will have excellent applications in the real world. In this paper, we arrive at a new architecture by fusing the two concepts. The architecture has the trainability/adaptability (based on input/output observations) property of the neural networks and the architectural features that are unique to fuzzy expert systems. It also does not require specific information such as fuzzy rules, defuzzification procedure used, etc., though any such information can be integrated into the architecture. We show that this architecture can provide better performance than is possible from a single two or three layer feedforward neural network. Further, we show that this new architecture can be used as an efficient vehicle for hardware implementation of complex fuzzy expert systems for real-time applications. A numerical example is provided to show the potential of this approach.
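    The kind of fusion described here, fuzzy rule structure with neural-network-style trainable parameters, can be illustrated by a minimal zero-order Sugeno inference step written as a feedforward pass. This is a generic sketch, not the paper's architecture; the Gaussian memberships, rule count, and parameter values are illustrative assumptions:

```python
import numpy as np

# Rule parameters: in a neuro-fuzzy system all three arrays below are
# trainable from input/output data, just like neural-network weights.
centers = np.array([-1.0, 0.0, 1.0])       # antecedent centers
widths = np.array([0.5, 0.5, 0.5])         # antecedent widths
consequents = np.array([-1.0, 0.0, 1.0])   # zero-order rule outputs

def fuzzy_forward(x):
    firing = np.exp(-((x - centers) / widths) ** 2)  # membership layer
    weights = firing / firing.sum()                  # normalization layer
    return float(weights @ consequents)              # weighted-sum defuzzification

print(fuzzy_forward(0.0))  # symmetric firing around 0 -> output 0.0
```

    Because every layer is differentiable, the centers, widths, and consequents can be fit by gradient descent, giving the trainability of a neural network while keeping the rule-based structure of a fuzzy system.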

  5. A flexible architecture for advanced process control solutions

    NASA Astrophysics Data System (ADS)

    Faron, Kamyar; Iourovitski, Ilia

    2005-05-01

    Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half APC has evolved from a "good idea" and "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single variable solutions to sophisticated and cutting edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any and all cost-conscious manufacturers. There are studies suggesting integration costs as high as 60% of the total APC solution costs. Such cost prohibitive figures clearly diminish the return on APC investments. This has limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy the wide array of requirements from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions and most in-house developed solutions lack important attributes of scalability, flexibility, and adaptability and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned and delayed due to legacy systems and inadequate architectural design. Recent advancements (Service Oriented Architectures) in the software industry have delivered ideal technologies for delivering scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing system and business practices. In this publication we shall evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture to satisfy these requirements. 

  6. Connector adapter

    NASA Technical Reports Server (NTRS)

    Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)

    2007-01-01

    An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.

  7. Adaptive VFH

    NASA Astrophysics Data System (ADS)

    Odriozola, Iñigo; Lazkano, Elena; Sierra, Basi

    2011-10-01

    This paper investigates the improvement of the Vector Field Histogram (VFH) local planning algorithm for mobile robot systems. The Adaptive Vector Field Histogram (AVFH) algorithm has been developed to improve the effectiveness of the traditional VFH path planning algorithm, overcoming the side effects of using static parameters. This new algorithm permits the adaptation of planning parameters for the different types of areas in an environment. Genetic Algorithms are used to fit the best VFH parameters to each type of sector and, afterwards, every section in the map is labelled with the sector type which best represents it. The Player/Stage simulation platform has been chosen for running all sorts of tests and proving the new algorithm's adequacy. Even though there is still much work to be carried out, the developed algorithm showed good navigation properties and turned out to be smoother and more effective than the traditional VFH algorithm.
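    The core idea, a VFH-style polar histogram whose blocking threshold depends on the type of area the robot is in, can be sketched briefly. In AVFH the per-area parameters are tuned by a Genetic Algorithm; here they are hand-set, and the 36-bin layout, 5 m sensing range, and threshold values are illustrative assumptions:

```python
BINS = 36  # one polar-histogram bin per 10 degrees

def polar_histogram(obstacles):
    """obstacles: list of (bearing_deg, distance_m); closer -> larger value."""
    hist = [0.0] * BINS
    for ang, dist in obstacles:
        hist[int(ang % 360) // 10] += max(0.0, 1.0 - dist / 5.0)
    return hist

def free_sectors(hist, threshold):
    """Sectors whose obstacle density falls below the area's threshold."""
    return [i for i, h in enumerate(hist) if h < threshold]

# Area-dependent thresholds: a cluttered corridor tolerates higher bin
# values than open space before a sector is declared blocked.
params = {"open": 0.2, "corridor": 0.6}
hist = polar_histogram([(0, 1.0), (10, 3.0), (180, 0.5)])
print(len(free_sectors(hist, params["open"])),
      len(free_sectors(hist, params["corridor"])))   # 33 34
```

    The same sensor readings yield different sets of traversable sectors depending on the sector-type label, which is what lets the adaptive variant avoid the oscillation and over-caution of a single static parameter set.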

  8. Adaptive sampler

    DOEpatents

    Watson, B.L.; Aeby, I.

    1980-08-26

    An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.

  9. Adaptive sampler

    DOEpatents

    Watson, Bobby L.; Aeby, Ian

    1982-01-01

    An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
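    The principle behind both patents, frequency analysis driving a variable storage rate, can be sketched in software. This is a loose analogy, not the patented circuit: an FFT stands in for the bank of digital filters, and the 100 Hz band edge, decimation factors, and 10% energy ratio are assumptions:

```python
import numpy as np

def choose_decimation(block, fs=1000.0):
    """Return a decimation factor: keep every sample when significant
    high-frequency energy is present, keep 1-in-8 otherwise (the
    software analogue of the patent's variable-rate memory clock)."""
    spec = np.abs(np.fft.rfft(block)) ** 2          # band energies
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    hi = spec[freqs > 100.0].sum()                  # high-band energy
    return 1 if hi > 0.1 * spec.sum() else 8

t = np.arange(256) / 1000.0
slow = np.sin(2 * np.pi * 5 * t)                    # low-frequency block
fast = slow + 0.5 * np.sin(2 * np.pi * 300 * t)     # adds 300 Hz content
print(choose_decimation(slow), choose_decimation(fast))   # 8 1
```

    Blocks with only slow content are stored at one-eighth rate, while blocks containing fast transients are stored at full rate, compressing the data without losing the high-frequency events.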

  10. Adaptive antennas

    NASA Astrophysics Data System (ADS)

    Barton, P.

    1987-04-01

    The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal to noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
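    The direct (sample matrix inversion) approach mentioned above amounts to solving the Wiener-Hopf equation with an estimated covariance, w = R⁻¹s. A minimal numerical sketch follows; the 4-element half-wavelength array, the jammer at 40 degrees, and the power levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 2000                      # array elements, snapshots
d = 0.5                             # element spacing in wavelengths

def steering(theta_deg):
    """Array response (steering vector) for a plane wave at theta_deg."""
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(N))

s = steering(0.0)                   # wanted signal from broadside
j = steering(40.0)                  # jammer direction
X = (10 * j[:, None] * rng.standard_normal(M)    # strong jammer snapshots
     + 0.1 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))))
R = X @ X.conj().T / M              # sample covariance matrix
w = np.linalg.solve(R, s)           # SMI weights: w = R^-1 s (Wiener-Hopf)

def gain(theta_deg):
    return float(abs(w.conj() @ steering(theta_deg)))

print(gain(40.0) < 0.01 * gain(0.0))   # True: deep null on the jammer
```

    The solved weights keep gain toward the wanted direction while the inverse of the jammer-dominated covariance automatically places a deep null at 40 degrees, the behavior the article derives from the Wiener-Hopf expression.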

  11. Distributed adaptive simulation through standards-based integration of simulators and adaptive learning systems.

    PubMed

    Bergeron, Bryan; Cline, Andrew; Shipley, Jaime

    2012-01-01

    We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option to integrating simulators with traditional courseware. PMID:22356955

  12. Adaptive holographic implementation of a neural network

    NASA Astrophysics Data System (ADS)

    Downie, John D.; Hine, Butler P., III; Reid, Max B.

    1990-07-01

    A holographic implementation for neural networks is proposed and demonstrated as an alternative to the optical matrix-vector multiplier architecture. In comparison, the holographic architecture makes more efficient use of the system space-bandwidth product for certain types of neural networks. The principal network component is a thermoplastic hologram, used to provide both interconnection weights and beam redirection. Given the updatable nature of this type of hologram, adaptivity or network learning is possible in the optical system. Two networks with fixed weights are experimentally implemented and verified, and for one of these examples we demonstrate the advantage of the holographic implementation with respect to the matrix-vector processor.

  13. Strategic Adaptation of SCA for STRS

    NASA Technical Reports Server (NTRS)

    Quinn, Todd; Kacpura, Thomas

    2007-01-01

    The Space Telecommunication Radio System (STRS) architecture is being developed to provide a standard framework for future NASA space radios with greater interoperability and flexibility to meet new mission requirements. The space environment imposes unique operational requirements, with size, weight, and power constraints significantly more restrictive than those of terrestrial military communication systems. Because of the harsh radiation environment of space, on-board computing and processing resources are typically one or two generations behind current terrestrial technologies. Despite these differences, there are elements of the Software Communications Architecture (SCA) that can be adapted to facilitate the design and implementation of the STRS architecture.

  14. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor, and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. Hence, if a new component is added that supports the IMaterial interface, any instances of it can be used in the various GUI components that work with that interface.
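    The proxy-and-registry pattern described, in which GUI components populate themselves by querying an AssetManager for everything that implements an interface such as IMaterial, can be sketched in a hypothetical minimal form (the class bodies are invented placeholders):

```python
class AssetManager:
    """Registry of proxy objects; GUI components query it instead of
    holding concrete references (sketch of the described pattern)."""
    def __init__(self):
        self._proxies = []
    def register(self, proxy):
        self._proxies.append(proxy)
    def query(self, interface):
        return [p for p in self._proxies if isinstance(p, interface)]

class IMaterial:
    """Marker interface; concrete back-end implementations live server-side."""
    def describe(self):
        raise NotImplementedError

class PhongMaterialProxy(IMaterial):
    def describe(self):
        return "phong"

manager = AssetManager()
manager.register(PhongMaterialProxy())
# A material editor populates itself from whatever supports IMaterial:
print([m.describe() for m in manager.query(IMaterial)])
```

    Because the editor only asks for IMaterial, registering a new material proxy makes it appear in the editor with no changes to GUI code.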

  15. Investigation of multigauge architectures

    SciTech Connect

    Yang, C.

    1987-01-01

    Almost every computer architect dreams of achieving high system performance with low implementation costs. A multigauge machine can reconfigure its data-path width, provide parallelism, achieve better resource utilization, and sometimes can trade computational precision for increased speed. A simple experimental method is used here to capture the main characteristics of multigauging. The measurements indicate evidence of near-optimal speedups. Adapting these ideas in designing parallel processors incurs low costs and provides flexibility. Several operational aspects of designing a multigauge machine are discussed as well. Thus, this research reports the technical, economical, and operational feasibility studies of multigauging.

  16. Medicaid Information Technology Architecture: An Overview

    PubMed Central

    Friedman, Richard H.

    2006-01-01

    The Medicaid Information Technology Architecture (MITA) is a roadmap and toolkit for States to transform their Medicaid Management Information System (MMIS) into an enterprise-wide, beneficiary-centric system. MITA will enable State Medicaid agencies to align their information technology (IT) opportunities with their evolving business needs. It also addresses long-standing issues of interoperability, adaptability, and data sharing, including clinical data, across organizational boundaries by creating models based on nationally accepted technical standards. Perhaps most significantly, MITA allows State Medicaid Programs to actively participate in the DHHS Secretary's vision of a transparent health care market that utilizes electronic health records (EHRs), ePrescribing and personal health records (PHRs). PMID:17427840

  17. Planetary cubesats - mission architectures

    NASA Astrophysics Data System (ADS)

    Bousquet, Pierre W.; Ulamec, Stephan; Jaumann, Ralf; Vane, Gregg; Baker, John; Clark, Pamela; Komarek, Tomas; Lebreton, Jean-Pierre; Yano, Hajime

    2016-07-01

    Miniaturisation of technologies over the last decade has made cubesats a valid solution for deep space missions. For example, a spectacular set of 13 cubesats will be delivered in 2018 to a high lunar orbit within the frame of SLS' first flight, referred to as Exploration Mission-1 (EM-1). Each of them will autonomously perform valuable scientific or technological investigations. Other situations are encountered, such as the auxiliary landers/rovers and autonomous camera that will be carried in 2018 to asteroid 1999 JU3 (Ryugu) by JAXA's Hayabusa 2 probe, and will provide complementary scientific return to their mothership. In this case, cubesats depend on a larger spacecraft for deployment and other resources, such as telecommunication relay or propulsion. For both situations, we will describe in this paper how cubesats can be used as remote observatories (such as NEO detection missions) and as technology demonstrators, and how they can perform or contribute to all steps in the deep space exploration sequence: measurements during deep space cruise, body fly-bys, body orbiters, atmospheric probes (Jupiter probe, Venus atmospheric probes, ...), static landers, mobile landers (such as balloons, wheeled rovers, small body rovers, drones, penetrators, floating devices, ...), and sample return. We will elaborate on mission architectures for the most promising concepts, where cubesat-size devices offer an advantage in terms of affordability, feasibility, and increase of scientific return.

  18. Array processor architecture

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high-speed parallel array data processing architecture, fashioned under a computational-envelope approach, includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, whence the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors normally operating quite independently of each other in a multiprocessing fashion. For data-dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel-processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
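    The synchronization discipline described, independent execution punctuated by instructions that make every processor finish before any proceeds, is essentially a barrier. A small Python threading sketch of that pattern (a software analogy, not the patented hardware):

```python
import threading

n_workers = 4
barrier = threading.Barrier(n_workers)    # the "array synchronization instruction"
results = [0] * n_workers

def worker(rank):
    # Phase 1: each processor runs its own copy of the program independently.
    results[rank] = sum(range(rank * 100, (rank + 1) * 100))
    # All processors must finish phase 1 before any proceeds (barrier sync).
    barrier.wait()
    # Phase 2: a data-dependent step that needs every partial result.
    if rank == 0:
        results.append(sum(results[:n_workers]))

threads = [threading.Thread(target=worker, args=(r,)) for r in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[-1])   # total of 0..399
```

    The barrier guarantees that every partial sum is written before worker 0 reads them, mirroring the "finish one task before all proceed" rule in the abstract.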

  19. Lunar Exploration Architectures

    NASA Astrophysics Data System (ADS)

    Perino, Maria Antonietta

    International space exploration plans foresee in the next decades multiple robotic and human missions to the Moon and robotic missions to Mars, Phobos, and other destinations. Notably, the US has, since the announcement of the US space exploration vision by President G. W. Bush in 2004, made significant progress in the further definition of its exploration programme, focusing in the next decades in particular on human missions to the Moon. Given the highly demanding nature of these missions, different initiatives have recently been taken at the international level to discuss how the lunar exploration missions currently planned at the national level could fit into a coordinated roadmap and contribute to lunar exploration. Thales Alenia Space - Italia is leading three studies for the European Space Agency focused on the analysis of the transportation, in-space, and surface architectures required to meet ESA-provided stakeholder exploration objectives and requirements. The main result of this activity is the identification of European near-term priorities for exploration missions and European long-term priorities for capability and technology developments related to planetary exploration missions. This paper will present the main studies' results, drawing a European roadmap for exploration missions and for capability and technology developments related to lunar exploration infrastructure development, taking into account the strategic and programmatic indications for exploration coming from ESA as well as the international exploration context.

  20. Superconducting Bolometer Array Architectures

    NASA Technical Reports Server (NTRS)

    Benford, Dominic; Chervenak, Jay; Irwin, Kent; Moseley, S. Harvey; Shafer, Rick; Staguhn, Johannes; Wollack, Ed; Oegerle, William (Technical Monitor)

    2002-01-01

    The next generation of far-infrared and submillimeter instruments requires large arrays of detectors containing thousands of elements. These arrays will necessarily be multiplexed, and superconducting bolometer arrays are the most promising present prospect for these detectors. We discuss our current research into superconducting bolometer array technologies, which has recently resulted in the first multiplexed detections of submillimeter light and the first multiplexed astronomical observations. Prototype arrays containing 512 pixels are in production using the Pop-Up Detector (PUD) architecture, which can be extended easily to 1000-pixel arrays. Planar arrays of close-packed bolometers are being developed for the GBT (Green Bank Telescope) and for future space missions. For certain applications, such as a slewed far-infrared sky survey, feedhorn coupling of a large, sparsely-filled array of bolometers is desirable, and is being developed using photolithographic feedhorn arrays. Individual detectors have achieved a Noise Equivalent Power (NEP) of 10^-17 W/√Hz at 300 mK, but several orders of magnitude improvement are required and can be reached with existing technology. The testing of such ultralow-background detectors will prove difficult, as this requires optical loading of below 1 fW. Antenna-coupled bolometer designs have advantages for large-format array designs at low powers due to their mode selectivity.

  1. Integrating hospital information systems in healthcare institutions: a mediation architecture.

    PubMed

    El Azami, Ikram; Cherkaoui Malki, Mohammed Ouçamah; Tahon, Christian

    2012-10-01

    Many studies have examined the integration of information systems into healthcare institutions, leading to several standards in the healthcare domain (CORBAmed: Common Object Request Broker Architecture in Medicine; HL7: Health Level Seven International; DICOM: Digital Imaging and Communications in Medicine; and IHE: Integrating the Healthcare Enterprise). Due to the existence of a wide diversity of heterogeneous systems, three essential factors are necessary to fully integrate a system: data, functions and workflow. However, most of the previous studies have dealt with only one or two of these factors, and this makes the system integration unsatisfactory. In this paper, we propose a flexible, scalable architecture for Hospital Information Systems (HIS). Our main purpose is to provide a practical solution to ensure HIS interoperability so that healthcare institutions can communicate without being obliged to change their local information systems and without altering the tasks of the healthcare professionals. Our architecture is a mediation architecture with three levels: 1) a database level, 2) a middleware level and 3) a user interface level. The mediation is based on two central components: the Mediator and the Adapter. Using the XML format allows us to establish a structured, secured exchange of healthcare data. The notion of medical ontology is introduced to solve semantic conflicts and to unify the language used for the exchange. Our mediation architecture provides an effective, promising model that promotes the integration of hospital information systems that are autonomous, heterogeneous, semantically interoperable and platform-independent.
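    A minimal sketch of the Adapter/Mediator exchange step, using Python's standard XML tooling; the envelope schema shown here is invented for illustration and is not the paper's actual message format:

```python
import xml.etree.ElementTree as ET

def to_exchange_xml(local_record, system_id):
    """Hypothetical Adapter step: wrap a local HIS record in a common,
    structured XML envelope for the Mediator to route."""
    msg = ET.Element("exchange", attrib={"source": system_id})
    patient = ET.SubElement(msg, "patient")
    for field, value in local_record.items():
        ET.SubElement(patient, field).text = str(value)
    return ET.tostring(msg, encoding="unicode")

def from_exchange_xml(xml_text):
    """Mediator side: parse the envelope back into a neutral dict."""
    root = ET.fromstring(xml_text)
    patient = root.find("patient")
    return root.get("source"), {child.tag: child.text for child in patient}

xml_text = to_exchange_xml({"id": "1234", "ward": "cardiology"}, "HIS-A")
source, record = from_exchange_xml(xml_text)
print(source, record["ward"])
```

    Each local system keeps its own schema; only the Adapter knows how to map it into the shared envelope, which is what lets institutions interoperate without changing their local systems.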

  2. The architectural design of networks of protein domain architectures.

    PubMed

    Hsu, Chia-Hsin; Chen, Chien-Kuo; Hwang, Ming-Jing

    2013-08-23

    Protein domain architectures (PDAs), in which single domains are linked to form multiple-domain proteins, are a major molecular form used by evolution for the diversification of protein functions. However, the design principles of PDAs remain largely uninvestigated. In this study, we constructed networks to connect domain architectures that had grown out from the same single domain for every single domain in the Pfam-A database and found that there are three main distinctive types of these networks, which suggests that evolution can exploit PDAs in three different ways. Further analysis showed that these three different types of PDA networks are each adopted by different types of protein domains, although many networks exhibit the characteristics of more than one of the three types. Our results shed light on nature's blueprint for protein architecture and provide a framework for understanding architectural design from a network perspective.

  3. A new architecture for fast ultrasound imaging

    SciTech Connect

    Cruza, J. F.; Camacho, J.; Moreno, J. M.; Medina, L.

    2014-02-18

    Some ultrasound imaging applications, for example 3D imaging and automated inspection of large components, require a high frame rate. Since the signal-processing throughput of the system is the main bottleneck, parallel beamforming is required to achieve hundreds to thousands of images per second. Simultaneous A-scan line beamforming in all active channels is required to reach the intended high frame rate. To this purpose, a new parallel beamforming architecture is proposed that exploits the processing resources available in state-of-the-art FPGAs. The work aims at optimal resource usage, high scalability, and flexibility for different applications. To achieve these goals, the basic beamforming function is reformulated to fit the DSP-cell architecture of state-of-the-art FPGAs. This allows performing simultaneous dynamic focusing on multiple A-scan lines. Some realistic examples are analyzed, evaluating resource requirements and maximum operating frequency. For example, a 128-channel system with 128 scan lines, acquiring at 20 MSPS, can be built with 4 mid-range FPGAs, achieving up to 18000 frames per second, limited only by the maximum PRF. The gold-standard Synthetic Transmit Aperture method (also called the Total Focusing Method) can be carried out in real time at a processing rate of 140 high-resolution images per second (16 cm depth in steel).
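    The per-point operation being mapped onto the FPGA DSP cells is, at heart, delay-and-sum beamforming with dynamic focusing. A minimal NumPy sketch of that operation (the function name, geometry, and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def delay_and_sum(rf, fs, elem_x, focus, c=5900.0):
    """Dynamic-focus delay-and-sum for a single image point.

    rf      : (channels, samples) array of received echo signals
    fs      : sampling rate (Hz); elem_x: element x-positions (m)
    focus   : (x, z) image point (m); c: sound speed (m/s, steel here)
    """
    fx, fz = focus
    dists = np.sqrt((elem_x - fx) ** 2 + fz ** 2)   # element-to-point paths
    delays = (fz + dists) / c                       # transmit + receive travel time
    idx = np.clip(np.round(delays * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()    # coherent channel sum

# Synthetic check: place an echo on each channel at exactly the travel time
# expected from a scatterer at the focus; the beamformed sum is then coherent.
fs = 40e6
elem_x = (np.arange(8) - 3.5) * 0.3e-3
fx, fz = 0.0, 0.02
d = np.sqrt((elem_x - fx) ** 2 + fz ** 2)
idx = np.round(((fz + d) / 5900.0) * fs).astype(int)
rf = np.zeros((8, 4000))
rf[np.arange(8), idx] = 1.0
print(delay_and_sum(rf, fs, elem_x, (fx, fz)))
```

    Evaluating this sum for every image point and every scan line is the multiply-accumulate workload that the paper reformulates to match FPGA DSP cells.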

  4. Thermal Management Architecture for Future Responsive Spacecraft

    NASA Astrophysics Data System (ADS)

    Bugby, D.; Zimbeck, W.; Kroliczek, E.

    2009-03-01

    This paper describes a novel thermal design architecture that enables satellites to be conceived, configured, launched, and operationally deployed very quickly. The architecture has been given the acronym SMARTS for Satellite Modular and Reconfigurable Thermal System and it involves four basic design rules: modest radiator oversizing, maximum external insulation, internal isothermalization and radiator heat flow modulation. The SMARTS philosophy is being developed in support of the DoD Operationally Responsive Space (ORS) initiative which seeks to drastically improve small satellite adaptability, deployability, and design flexibility. To illustrate the benefits of the philosophy for a prototypical multi-paneled small satellite, the paper describes a SMARTS thermal control system implementation that uses: panel-to-panel heat conduction, intra-panel heat pipe isothermalization, radiator heat flow modulation via a thermoelectric cooler (TEC) cold-biased loop heat pipe (LHP) and maximum external multi-layer insulation (MLI). Analyses are presented that compare the traditional "cold-biasing plus heater power" passive thermal design approach to the SMARTS approach. Plans for a 3-panel SMARTS thermal test bed are described. Ultimately, the goal is to incorporate SMARTS into the design of future ORS satellites, but it is also possible that some aspects of SMARTS technology could be used to improve the responsiveness of future NASA spacecraft.

  5. Systolic architecture for hierarchical clustering

    SciTech Connect

    Ku, L.C.

    1984-01-01

    Several hierarchical clustering methods (including the single-linkage, complete-linkage, centroid, and absolute overlap methods) are reviewed. The absolute overlap clustering method is selected for the design of a systolic architecture, mainly due to its simplicity. Two versions of systolic architectures for the absolute overlap hierarchical clustering algorithm are proposed: a one-dimensional version that leads to the development of a two-dimensional version which fully takes advantage of the underlying data structure of the problems. The two-dimensional systolic architecture can achieve a time complexity of O(m + n), in comparison with a conventional computer implementation's time complexity of O(m²n).
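    The absolute overlap method itself is not detailed in the abstract; as a reference point, naive single-linkage agglomerative clustering (one of the reviewed methods) can be sketched as the kind of nested distance-comparison loop that a systolic array would pipeline:

```python
def single_linkage(points, dist):
    """Naive agglomerative single-linkage clustering: repeatedly merge the
    two clusters whose closest members are nearest. This sequential
    reference loop is the workload a systolic array would pipeline."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((d, sorted(clusters[i] + clusters[j])))
        del clusters[j]          # j > i, so delete j first
        del clusters[i]
        clusters.append(merges[-1][1])
    return merges

pts = [0.0, 0.1, 1.0, 1.1, 5.0]
tree = single_linkage(pts, lambda a, b: abs(a - b))
print(tree[0])   # first merge joins the closest pair
```

    The merge list records the dendrogram bottom-up; the systolic designs in the paper aim to replace the sequential pairwise scans with pipelined comparisons.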

  6. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; Call, C.J.; Birmingham, J.G.; McDonald, C.E.; Kurath, D.E.; Friedrich, M.

    1998-09-22

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 26 figs.

  7. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; Call, Charles J.; Birmingham, Joseph G.; McDonald, Carolyn Evans; Kurath, Dean E.; Friedrich, Michele

    1998-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  8. Telemedicine system interoperability architecture: concept description and architecture overview.

    SciTech Connect

    Craft, Richard Layne, II

    2004-05-01

    In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.

  9. The genomics of organismal diversification illuminated by adaptive radiations.

    PubMed

    Berner, Daniel; Salzburger, Walter

    2015-09-01

    Adaptive radiation is the rapid and extensive ecological diversification of an organismal lineage to generate both phenotypic disparity (divergence) and similarity (convergence). Demonstrating particularly clear evidence of the power of natural selection, adaptive radiations serve as outstanding systems for studying the mechanisms of evolution. We review how the first wave of genomic investigation across major archetypal adaptive radiations has started to shed light on the molecular basis of adaptive diversification. Notably, these efforts have not yet identified consistent features of genomic architecture that promote diversification. However, access to a pool of ancient adaptive variation via genetic exchange emerges as an important driver of adaptive radiation. We conclude by highlighting avenues for future research on adaptive radiations, including the discovery of 'adaptation genes' based on genome scans using replicate convergent populations.

  10. The IVOA Architecture

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Gaudet, S.; IVOA Technical Coordination Group

    2012-09-01

    Astronomy produces large amounts of data of many kinds, coming from various sources: science space missions, ground based telescopes, theoretical models, compilation of results, etc. These data and associated processing services are made available via the Internet by "providers", usually large data centres or smaller teams (see Figure 1). The "consumers", be they individual researchers, research teams or computer systems, access these services to do their science. However, inter-connection amongst all these services and between providers and consumers is usually not trivial. The Virtual Observatory (VO) is the necessary "middle layer" framework enabling interoperability between all these providers and consumers in a seamless and transparent manner. Like the web which enables end users and machines to access transparently documents and services wherever and however they are stored, the VO enables the astronomy community to access data and service resources wherever and however they are provided. Over the last decade, the International Virtual Observatory Alliance (IVOA) has been defining various standards to build the VO technical framework for the providers to share their data and services ("Sharing"), and to allow users to find ("Finding") these resources, to get them ("Getting") and to use them ("Using"). To enable these functionalities, the definition of some core astronomically-oriented standards ("VO Core") has also been necessary. This paper will present the official and current IVOA Architecture[1], describing the various building blocks of the VO framework (see Figure 2) and their relation to all existing and in-progress IVOA standards. Additionally, it will show examples of these standards in action, connecting VO "consumers" to VO "providers".

  11. Project Integration Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2008-01-01

    The Project Integration Architecture (PIA) is a distributed, object-oriented, conceptual, software framework for the generation, organization, publication, integration, and consumption of all information involved in any complex technological process in a manner that is intelligible to both computers and humans. In the development of PIA, it was recognized that in order to provide a single computational environment in which all information associated with any given complex technological process could be viewed, reviewed, manipulated, and shared, it is necessary to formulate all the elements of such a process on the most fundamental level. In this formulation, any such element is regarded as being composed of any or all of three parts: input information, some transformation of that input information, and some useful output information. Another fundamental principle of PIA is the assumption that no consumer of information, whether human or computer, can be assumed to have any useful foreknowledge of an element presented to it. Consequently, a PIA-compliant computing system is required to be ready to respond to any questions, posed by the consumer, concerning the nature of the proffered element. In colloquial terms, a PIA-compliant system must be prepared to provide all the information needed to place the element in context. To satisfy this requirement, PIA extends the previously established object-oriented- programming concept of self-revelation and applies it on a grand scale. To enable pervasive use of self-revelation, PIA exploits another previously established object-oriented-programming concept - that of semantic infusion through class derivation. By means of self-revelation and semantic infusion through class derivation, a consumer of information can inquire about the contents of all information entities (e.g., databases and software) and can interact appropriately with those entities. Other key features of PIA are listed.

  12. RASSP signal processing architectures

    NASA Astrophysics Data System (ADS)

    Shirley, Fred; Bassett, Bob; Letellier, J. P.

    1995-06-01

    This paper discusses the impact of simulation on choosing signal processing algorithms and architectures, drawing from the experiences of the Demonstration and Benchmark inter-company teams at Lockheed Sanders, Motorola, Hughes, and ISX.

  13. Dynamic Information Architecture System

    1997-02-12

    The Dynamic Information Architecture System (DIAS) is a flexible object-based software framework for concurrent, multidisciplinary modeling of arbitrary (but related) processes. These processes are modeled as interrelated actions caused by and affecting the collection of diverse real-world objects represented in a simulation. The DIAS architecture allows independent process models to work together harmoniously in the same frame of reference and provides a wide range of data ingestion and output capabilities, including Geographic Information System (GIS)-type map-based displays and photorealistic visualization of simulations in progress. In the DIAS implementation of the object-based approach, software objects carry within them not only the data which describe their static characteristics, but also the methods, or functions, which describe their dynamic behaviors. There are two categories of objects: (1) Entity objects, which have real-world counterparts and are the actors in a simulation, and (2) Software infrastructure objects, which make it possible to carry out the simulations. The Entity objects contain lists of Aspect objects, each of which addresses a single aspect of the Entity's behavior. For example, a DIAS Stream Entity representing a section of a river can have many aspects corresponding to its behavior in terms of hydrology (as a drainage system component), navigation (as a link in a waterborne transportation system), meteorology (in terms of moisture, heat, and momentum exchange with the atmospheric boundary layer), visualization (for photorealistic or map-type displays), etc. This makes it possible for each real-world object to exhibit any or all of its unique behaviors within the context of a single simulation.
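    The Entity/Aspect decomposition described above can be sketched in a few lines; the class names follow the abstract, but the state fields and behaviors are invented placeholders:

```python
class Aspect:
    """One facet of an Entity's behavior (hypothetical minimal form)."""
    def __init__(self, name, behave):
        self.name, self.behave = name, behave

class Entity:
    """Real-world object carrying its data plus a list of behavioral Aspects."""
    def __init__(self, name, **state):
        self.name, self.state, self.aspects = name, state, []
    def add_aspect(self, aspect):
        self.aspects.append(aspect)
    def simulate(self, aspect_name):
        for a in self.aspects:
            if a.name == aspect_name:
                return a.behave(self.state)
        raise KeyError(aspect_name)

# A Stream Entity with two of the aspects named in the abstract (toy behaviors):
stream = Entity("Stream", flow_m3s=120.0, depth_m=2.5)
stream.add_aspect(Aspect("hydrology", lambda s: s["flow_m3s"] * 0.9))
stream.add_aspect(Aspect("navigation", lambda s: s["depth_m"] > 2.0))
print(stream.simulate("navigation"))   # deep enough for waterborne traffic?
```

    Because each Aspect reads the same shared Entity state, independent process models stay in one frame of reference, which is the coordination the DIAS framework provides.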

  14. The Mothership Mission Architecture

    NASA Astrophysics Data System (ADS)

    Ernst, S. M.; DiCorcia, J. D.; Bonin, G.; Gump, D.; Lewis, J. S.; Foulds, C.; Faber, D.

    2015-12-01

    The Mothership is considered to be a dedicated deep space carrier spacecraft. It is currently being developed by Deep Space Industries (DSI) as a mission concept that enables broad participation in the scientific exploration of small bodies - the Mothership mission architecture. A Mothership shall deliver third-party nano-sats, experiments and instruments to Near Earth Asteroids (NEOs), comets or moons. The Mothership service includes delivery of nano-sats, communication to Earth and visuals of the asteroid surface and surrounding area. The Mothership is designed to carry about 10 nano-sats, based upon a variation of the Cubesat standard, with some flexibility on the specific geometry. The Deep Space Nano-Sat reference design is a 14.5 cm cube, which accommodates the same volume as a traditional 3U CubeSat. To reduce cost, the Mothership is designed as a secondary payload aboard launches to GTO. DSI is offering slots for nano-sats to individual customers. This enables organizations with relatively low operating budgets to closely examine an asteroid with highly specialized sensors of their own choosing and carry out experiments in the proximity of or on the surface of an asteroid, while the nano-sats can be built or commissioned by a variety of smaller institutions, companies, or agencies. While the overall Mothership mission will have a financial volume somewhere between a European Space Agency (ESA) S-class and M-class mission, for instance, it can be funded through a number of small and individual funding sources and programs, hence avoiding the processes associated with traditional space exploration missions. DSI has been able to identify a significant interest in the planetary science and nano-satellite communities.

  15. Parallel architectures and neural networks

    SciTech Connect

    Caianiello, E.R.

    1989-01-01

    This book covers parallel computer architectures and neural networks. Topics include: neural modeling, use of ADA to simulate neural networks, VLSI technology, implementation of Boltzmann machines, and analysis of neural nets.

  16. Transverse pumped laser amplifier architecture

    SciTech Connect

    Bayramian, Andrew James; Manes, Kenneth R.; Deri, Robert; Erlandson, Alvin; Caird, John; Spaeth, Mary L.

    2015-05-19

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  17. Architecture and the Information Revolution.

    ERIC Educational Resources Information Center

    Driscoll, Porter; And Others

    1982-01-01

    Traces how technological changes affect the architecture of the workplace. Traces these effects from the industrial revolution up through the computer revolution. Offers suggested designs for the computerized office of today and tomorrow. (JM)

  18. Transverse pumped laser amplifier architecture

    DOEpatents

    Bayramian, Andrew James; Manes, Kenneth; Deri, Robert; Erlandson, Al; Caird, John; Spaeth, Mary

    2013-07-09

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  19. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri Net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, steady-state, time-optimized performance. This simulator extends the ATAMM simulation capability from a homogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case with only one processor type. The simulator forms one tool in an ATAMM Integrated Environment, which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
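
    The abstract gives no implementation detail; purely as an illustrative sketch of the kind of dataflow-graph execution being simulated, the toy Python below fires a node once all of its input tokens have arrived and a functional unit of the required type is free. The graph, node names, unit types, and latencies are all hypothetical and are not drawn from the ATAMM tool itself.

```python
import heapq

def simulate(nodes, edges, units, horizon=100.0):
    """nodes: {name: (unit_type, latency)}; edges: list of (src, dst);
    units: {unit_type: count}. Source nodes (no inputs) fire once at t=0.
    Returns [(node, finish_time), ...] in completion order."""
    preds = {n: [s for s, d in edges if d == n] for n in nodes}
    succs = {n: [d for s, d in edges if s == n] for n in nodes}
    tokens = {n: 0 for n in nodes}      # input tokens accumulated so far
    free = dict(units)                  # free functional units per type
    done, events = [], []               # events: (finish_time, node, unit_type)
    ready = [n for n in nodes if not preds[n]]
    t = 0.0
    while True:
        # fire every ready node for which a unit of its type is free
        for n in list(ready):
            ut, lat = nodes[n]
            if free.get(ut, 0) > 0:
                free[ut] -= 1
                heapq.heappush(events, (t + lat, n, ut))
                ready.remove(n)
        if not events or t > horizon:
            break
        t, n, ut = heapq.heappop(events)  # advance to next completion
        free[ut] += 1
        done.append((n, t))
        for d in succs[n]:                # deliver output tokens
            tokens[d] += 1
            if tokens[d] == len(preds[d]):
                ready.append(d)
    return done

# hypothetical diamond graph A -> {B, C} -> D on a single shared unit:
# execution serializes, so A, B, C, D finish at t = 1, 2, 3, 4
schedule = simulate(
    {'A': ('cpu', 1.0), 'B': ('cpu', 1.0), 'C': ('cpu', 1.0), 'D': ('cpu', 1.0)},
    [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')],
    {'cpu': 1})
```

    With two units of type 'cpu', B and C would run concurrently instead, which is the kind of resource-dependent timing behavior such a simulator exposes.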

  20. 16-point discrete Fourier transform based on the Radix-2 FFT algorithm implemented into cyclone FPGA as the UHECR trigger for horizontal air showers in the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Szadkowski, Z.

    2006-05-01

    The extremely rare flux of UHECRs requires sophisticated detection techniques. Standard methods oriented toward typical events may not be sensitive enough to capture rare events that are crucial to resolving discrepancies in the current data or to confirming/rejecting new hypotheses. The triggers currently used in the water Cherenkov tanks of the Pierre Auger surface detector, which select events above amplitude thresholds or investigate the length of traces, are not optimized for horizontal and very inclined showers, which are interesting as potentially generated by neutrinos. Those showers could be triggered using their signatures: i.e., the curvature of the shower front, transformed into the rise time of traces, or a muon component giving an early peak for "old" showers. Currently available powerful and cost-effective FPGAs provide sufficient resources to implement new triggers not available in the past. The paper describes a proposed implementation of a 16-point discrete Fourier transform, based on the Radix-2 FFT algorithm, in the Altera Cyclone FPGA used in the 3rd generation of the surface detector trigger. All complex coefficients are calculated online in heavily pipelined routines. The register performance of ~200 MHz and the relatively low resource occupancy of ~2000 logic elements/channel at 10-bit resolution provide a powerful tool for triggering on events with characteristic traces in the frequency domain. The FFT code has been successfully merged into the code of the 1st-level surface detector trigger of the Pierre Auger Observatory and is planned to be tested in the real pampas environment.
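
    The FPGA implementation itself is not reproduced in the abstract; as a minimal software illustration of the radix-2 decimation-in-time recursion such a trigger is built on, the following Python sketch computes a 16-point transform (floating-point, whereas the trigger uses 10-bit fixed-point pipelined hardware):

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])  # transform of even-index samples
    odd = fft_radix2(x[1::2])   # transform of odd-index samples
    out = [0j] * n
    for k in range(n // 2):
        # twiddle factor combines the two half-size transforms (butterfly)
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

# 16-point transform of a unit impulse: a flat spectrum of ones
spectrum = fft_radix2([1.0] + [0.0] * 15)
```

    A 16-point radix-2 transform needs only four butterfly stages, which is why it maps naturally onto a short pipeline of FPGA logic.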

  1. The architecture of FAIM-1

    SciTech Connect

    Anderson, J.M.; Coates, W.S.; Davis, A.L.; Hon, R.W.; Robinson, I.N.; Robison, S.V.; Stevens, K.S.

    1987-01-01

    This article describes a symbolic multiprocessing system called FAIM-1. FAIM-1 is a highly concurrent, general-purpose, symbolic accelerator for parallel AI symbolic computation. The paramount goal of the FAIM project is to produce an architecture that can be scaled to a configuration capable of performance improvements of two to three orders of magnitude over conventional architectures. In the design of FAIM-1, prime consideration was given to programmability, performance, extensibility, fault tolerance, and the cost-effective use of technology.

  2. Look and Do Ancient Egypt. Teacher's Manual: Primary Program, Ancient Egypt Art & Architecture [and] Workbook: The Art and Architecture of Ancient Egypt [and] K-4 Videotape. History through Art and Architecture.

    ERIC Educational Resources Information Center

    Luce, Ann Campbell

    This resource contains a teaching manual, reproducible student workbook, and color teaching poster, which were designed to accompany a 2-part, 34-minute videotape, but may be adapted for independent use. Part 1 of the program, "The Old Kingdom," explains Egyptian beliefs concerning life after death as evidenced in art, architecture and the…

  3. How architecture wins technology wars.

    PubMed

    Morris, C R; Ferguson, C H

    1993-01-01

    Signs of revolutionary transformation in the global computer industry are everywhere. A roll call of the major industry players reads like a waiting list in the emergency room. The usual explanations for the industry's turmoil are at best inadequate. Scale, friendly government policies, manufacturing capabilities, a strong position in desktop markets, excellent software, top design skills--none of these is sufficient, either by itself or in combination, to ensure competitive success in information technology. A new paradigm is required to explain patterns of success and failure. Simply stated, success flows to the company that manages to establish proprietary architectural control over a broad, fast-moving, competitive space. Architectural strategies have become crucial to information technology because of the astonishing rate of improvement in microprocessors and other semiconductor components. Since no single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, customers insist on stitching together their own local systems solutions. Architectures impose order on the system and make the interconnections possible. The architectural controller is the company that controls the standard by which the entire information package is assembled. Microsoft's Windows is an excellent example of this. Because of the popularity of Windows, companies like Lotus must conform their software to its parameters in order to compete for market share. In the 1990s, proprietary architectural control is not only possible but indispensable to competitive success. What's more, it has broader implications for organizational structure: architectural competition is giving rise to a new form of business organization. PMID:10124636

  4. Effects of non-uniform windowing in a Rician-fading channel and simulation of adaptive automatic repeat request protocols

    NASA Astrophysics Data System (ADS)

    Kmiecik, Chris G.

    1990-06-01

    Two aspects of digital communication were investigated. In the first part, a Fast Fourier Transform (FFT)-based, M-ary frequency shift keying (FSK) receiver in a Rician-fading channel was analyzed to determine the benefits of non-uniform windowing of sampled received data. When a frequency offset occurs, non-uniform windowing provided better FFT magnitude separation. The improved dynamic range was balanced against a loss in detectability due to signal attenuation. With a large frequency offset, the improved magnitude separation outweighed the loss in detectability. An analysis was carried out to determine what frequency deviation is necessary for non-uniform windowing to outperform uniform windowing in a slow Rician-fading channel. Having established typical values of the probability of bit error, the second part of this thesis looked at improving throughput in a digital communications network by applying adaptive automatic repeat request (ARQ) protocols. The results of simulations of adaptive ARQ protocols with variable frame lengths are presented. By varying the frame length, improved throughput performance across all bit error rates was achieved.
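
    The thesis's receiver model is not given in the abstract; the Python sketch below, with hypothetical parameters (a 64-point block, a tone halfway between bins, and a Hamming window standing in for "non-uniform" windowing), reproduces the qualitative trade-off described: the window suppresses far-off spectral leakage while attenuating the detected peak.

```python
import cmath
import math

def dft_mag(x, k):
    """Magnitude of DFT bin k of a real sequence x."""
    n = len(x)
    return abs(sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n)))

N = 64
f = 10.5  # tone halfway between bins 10 and 11: worst-case leakage
tone = [math.cos(2 * math.pi * f * m / N) for m in range(N)]
hamming = [0.54 - 0.46 * math.cos(2 * math.pi * m / (N - 1)) for m in range(N)]
windowed = [t * w for t, w in zip(tone, hamming)]

rect_peak, ham_peak = dft_mag(tone, 10), dft_mag(windowed, 10)  # detectability
rect_far, ham_far = dft_mag(tone, 30), dft_mag(windowed, 30)    # distant leakage
# the Hamming window lowers the distant leakage (better magnitude separation)
# but also attenuates the peak (the loss in detectability the abstract weighs)
```

    Whether the separation gain outweighs the peak loss depends on the frequency offset, which is exactly the trade the thesis analyzes.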

  5. New CCD imagers for adaptive optics wavefront sensors

    NASA Astrophysics Data System (ADS)

    Schuette, Daniel R.; Reich, Robert K.; Prigozhin, Ilya; Burke, Barry E.; Johnson, Robert

    2014-08-01

    We report on two recently developed charge-coupled devices (CCDs) for adaptive optics wavefront sensing, both designed to provide exceptional sensitivity (low noise and high quantum efficiency) in high-frame-rate low-latency readout applications. The first imager, the CCID75, is a back-illuminated 16-port 160×160-pixel CCD that has been demonstrated to operate at frame rates above 1,300 fps with noise of < 3 e-. We will describe the architecture of this CCD that enables this level of performance, present and discuss characterization data, and review additional design features that enable unique operating modes for adaptive optics wavefront sensing. We will also present an architectural overview and initial characterization data of a recently designed variation on the CCID75 architecture, the CCID82, which incorporates an electronic shutter to support adaptive optics using Rayleigh beacons.

  6. Non-Linear Pattern Formation in Bone Growth and Architecture

    PubMed Central

    Salmon, Phil

    2014-01-01

    The three-dimensional morphology of bone arises through adaptation to its required engineering performance. Genetically and adaptively bone travels along a complex spatiotemporal trajectory to acquire optimal architecture. On a cellular, micro-anatomical scale, what mechanisms coordinate the activity of osteoblasts and osteoclasts to produce complex and efficient bone architectures? One mechanism is examined here – chaotic non-linear pattern formation (NPF) – which underlies in a unifying way natural structures as disparate as trabecular bone, swarms of birds flying, island formation, fluid turbulence, and others. At the heart of NPF is the fact that simple rules operating between interacting elements, and Turing-like interaction between global and local signals, lead to complex and structured patterns. The study of “group intelligence” exhibited by swarming birds or shoaling fish has led to an embodiment of NPF called “particle swarm optimization” (PSO). This theoretical model could be applicable to the behavior of osteoblasts, osteoclasts, and osteocytes, seeing them operating “socially” in response simultaneously to both global and local signals (endocrine, cytokine, mechanical), resulting in their clustered activity at formation and resorption sites. This represents problem-solving by social intelligence, and could potentially add further realism to in silico computer simulation of bone modeling. What insights has NPF provided to bone biology? One example concerns the genetic disorder juvenile Paget's disease or idiopathic hyperphosphatasia, where the anomalous parallel trabecular architecture characteristic of this pathology is consistent with an NPF paradigm by analogy with known experimental NPF systems. Here, coupling or “feedback” between osteoblasts and osteoclasts is the critical element. This NPF paradigm implies a profound link between bone regulation and its architecture: in bone the architecture is the regulation. The former is the
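
    The abstract names particle swarm optimization without details; below is a generic sketch of a standard global-best PSO update (the inertia and acceleration coefficients and the search bounds are arbitrary illustrative choices, nothing bone-specific). Each particle blends attraction toward its own best position (a "local signal") with attraction toward the swarm's best (a "global signal"), mirroring the dual signalling the abstract describes.

```python
import random

def pso(fitness, dim=2, particles=20, iters=100, seed=1):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [fitness(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                              # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])  # local pull
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))    # global pull
                pos[i][d] += vel[i][d]
            v = fitness(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# sphere function: global minimum 0 at the origin
best, val = pso(lambda p: sum(x * x for x in p))
```

    The clustered convergence of particles onto a shared optimum is the behavior the abstract proposes as an analogy for coordinated osteoblast/osteoclast activity.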

  7. Non-linear pattern formation in bone growth and architecture.

    PubMed

    Salmon, Phil

    2014-01-01

    The three-dimensional morphology of bone arises through adaptation to its required engineering performance. Genetically and adaptively bone travels along a complex spatiotemporal trajectory to acquire optimal architecture. On a cellular, micro-anatomical scale, what mechanisms coordinate the activity of osteoblasts and osteoclasts to produce complex and efficient bone architectures? One mechanism is examined here - chaotic non-linear pattern formation (NPF) - which underlies in a unifying way natural structures as disparate as trabecular bone, swarms of birds flying, island formation, fluid turbulence, and others. At the heart of NPF is the fact that simple rules operating between interacting elements, and Turing-like interaction between global and local signals, lead to complex and structured patterns. The study of "group intelligence" exhibited by swarming birds or shoaling fish has led to an embodiment of NPF called "particle swarm optimization" (PSO). This theoretical model could be applicable to the behavior of osteoblasts, osteoclasts, and osteocytes, seeing them operating "socially" in response simultaneously to both global and local signals (endocrine, cytokine, mechanical), resulting in their clustered activity at formation and resorption sites. This represents problem-solving by social intelligence, and could potentially add further realism to in silico computer simulation of bone modeling. What insights has NPF provided to bone biology? One example concerns the genetic disorder juvenile Paget's disease or idiopathic hyperphosphatasia, where the anomalous parallel trabecular architecture characteristic of this pathology is consistent with an NPF paradigm by analogy with known experimental NPF systems. Here, coupling or "feedback" between osteoblasts and osteoclasts is the critical element. This NPF paradigm implies a profound link between bone regulation and its architecture: in bone the architecture is the regulation. The former is the emergent

  9. Thin nearly wireless adaptive optical device

    NASA Technical Reports Server (NTRS)

    Knowles, Gareth J. (Inventor); Hughes, Eli (Inventor)

    2009-01-01

    A thin nearly wireless adaptive optical device capable of dynamically modulating the shape of a mirror in real time to compensate for atmospheric distortions and/or variations along an optical material is provided. The device includes an optical layer, a substrate, at least one electronic circuit layer with nearly wireless architecture, an array of actuators, power electronic switches, a reactive force element, and a digital controller. Actuators are aligned so that each axis of expansion and contraction intersects both substrate and reactive force element. Electronics layer with nearly wireless architecture, power electronic switches, and digital controller are provided within a thin-film substrate. The size and weight of the adaptive optical device is solely dominated by the size of the actuator elements rather than by the power distribution system.

  10. Thin, nearly wireless adaptive optical device

    NASA Technical Reports Server (NTRS)

    Knowles, Gareth (Inventor); Hughes, Eli (Inventor)

    2008-01-01

    A thin, nearly wireless adaptive optical device capable of dynamically modulating the shape of a mirror in real time to compensate for atmospheric distortions and/or variations along an optical material is provided. The device includes an optical layer, a substrate, at least one electronic circuit layer with nearly wireless architecture, an array of actuators, power electronic switches, a reactive force element, and a digital controller. Actuators are aligned so that each axis of expansion and contraction intersects both substrate and reactive force element. Electronics layer with nearly wireless architecture, power electronic switches, and digital controller are provided within a thin-film substrate. The size and weight of the adaptive optical device is solely dominated by the size of the actuator elements rather than by the power distribution system.

  11. Thin, nearly wireless adaptive optical device

    NASA Technical Reports Server (NTRS)

    Knowles, Gareth (Inventor); Hughes, Eli (Inventor)

    2007-01-01

    A thin, nearly wireless adaptive optical device capable of dynamically modulating the shape of a mirror in real time to compensate for atmospheric distortions and/or variations along an optical material is provided. The device includes an optical layer, a substrate, at least one electronic circuit layer with nearly wireless architecture, an array of actuators, power electronic switches, a reactive force element, and a digital controller. Actuators are aligned so that each axis of expansion and contraction intersects both substrate and reactive force element. Electronics layer with nearly wireless architecture, power electronic switches, and digital controller are provided within a thin-film substrate. The size and weight of the adaptive optical device is solely dominated by the size of the actuator elements rather than by the power distribution system.

  12. Adaptive holography for optical sensing applications

    NASA Astrophysics Data System (ADS)

    Residori, S.; Bortolozzo, U.; Peigné, A.; Molin, S.; Nouchi, P.; Dolfi, D.; Huignard, J. P.

    2016-03-01

    Adaptive holography is a promising method for high sensitivity phase modulation measurements in the presence of slow perturbations from the environment. The technique is based on the use of a nonlinear recombining medium, here an optically addressed spatial light modulator specifically realized to operate at 1.55 μm. Owing to the physical mechanisms involved, the interferometer adapts to slow phase variations within a range of 5-10 Hz, thus filtering out low frequency noise while transmitting higher frequency phase modulations. We present the basic principles of the adaptive interferometer and show that it can be used in association with a sensing fiber in order to detect phase modulations. Finally, a phase-OTDR architecture using the adaptive holographic interferometer is presented and shown to allow the detection of localized perturbations along the sensing fiber.

  13. Multicore Architecture-aware Scientific Applications

    SciTech Connect

    Srinivasa, Avinash

    2011-11-28

    Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large-scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on application performance, resulting in average speedups of as much as two to four times.

  14. Bipartite memory network architectures for parallel processing

    SciTech Connect

    Smith, W.; Kale, L.V. (Dept. of Computer Science)

    1990-01-01

    Parallel architectures are broadly classified as either shared memory or distributed memory architectures. In this paper, the authors propose a third family of architectures, called bipartite memory network architectures. In this architecture, processors and memory modules constitute a bipartite graph, where each processor is allowed to access a small subset of the memory modules, and each memory module allows access from a small set of processors. The architecture is particularly suitable for computations requiring dynamic load balancing. The authors explore the properties of this architecture by examining the Perfect Difference set based topology for the graph. Extensions of this topology are also suggested.
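
    The paper's construction is only named in the abstract; one plausible reading, sketched below under that assumption, connects processor p to the memory modules offset from p by the elements of a perfect difference set. With the hypothetical choice D = {0, 1, 3} mod 7 (each nonzero residue occurs exactly once as a difference of two elements of D), any two processors then share exactly one memory module:

```python
# Hypothetical sketch of a perfect-difference-set bipartite topology.
# D = (0, 1, 3) is a perfect difference set mod 7: every nonzero residue
# arises exactly once as (d2 - d1) mod 7 for d1, d2 in D.
D = (0, 1, 3)
N = 7  # 7 processors and 7 memory modules

# processor p is wired to memory modules (p + d) mod N for d in D
links = {p: {(p + d) % N for d in D} for p in range(N)}

# consequence of the difference-set property: any two distinct processors
# share exactly one memory module, so any pair can exchange data through
# a single shared module while each node keeps only 3 links
shared = {(p, q): links[p] & links[q] for p in range(N) for q in range(p + 1, N)}
```

    The appeal of such a topology is the balance it strikes: per-node fan-out stays small while the processor-to-processor "distance" through memory stays constant, which suits the dynamic load balancing mentioned in the abstract.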

  15. Architectural Analysis of Dynamically Reconfigurable Systems

    NASA Technical Reports Server (NTRS)

    Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly

    2010-01-01

    Topics include: the problem (increased flexibility of architectural styles decreases analyzability; behavior emerges and varies depending on the configuration; does the resulting system run according to the intended design; architectural decisions can impede or facilitate testing); a top-down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from the new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; CFS examples of abstract interfaces and testability; and a CFS example of opening some internal details.

  16. Adaptive heterogeneous multi-robot teams

    SciTech Connect

    Parker, L.E.

    1998-11-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail the experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.

  17. Adaptive synthetic vision

    NASA Astrophysics Data System (ADS)

    Julier, Simon J.; Brown, Dennis; Livingston, Mark A.; Thomas, Justin

    2006-05-01

    Through their ability to safely collect video and imagery from remote and potentially dangerous locations, UAVs have already transformed the battlespace. The effectiveness of this information can be greatly enhanced through synthetic vision. Given knowledge of the extrinsic and intrinsic parameters of the camera, synthetic vision superimposes spatially-registered computer graphics over the video feed from the UAV. This technique can be used to show many types of data such as landmarks, air corridors, and the locations of friendly and enemy forces. However, the effectiveness of a synthetic vision system strongly depends on the accuracy of the registration - if the graphics are poorly aligned with the real world they can be confusing, annoying, and even misleading. In this paper, we describe an adaptive approach to synthetic vision that modifies the way in which information is displayed depending upon the registration error. We describe an integrated software architecture that has two main components. The first component automatically calculates registration error based on information about the uncertainty in the camera parameters. The second component uses this information to modify, aggregate, and label annotations to make their interpretation as clear as possible. We demonstrate the use of this approach on some sample datasets.

  18. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  19. Selected Precepts in Lunar Architecture

    NASA Astrophysics Data System (ADS)

    Cohen, Marc M.

    2002-01-01

    This paper presents an overview of selected approaches to Lunar Architecture to describe the parameters of this design problem space. The paper identifies typologies of architecture based on Lunar site features, structural concepts and habitable functions. This paper develops an analysis of these architectures based on the NASA Habitats and Surface Construction Road Map (1997) in which there are three major types of surface construction: Class 1) Preintegrated, Class 2) Assembled, Deployed, Erected or Inflated, and Class 3) Use of In Situ materials and site characteristics. Class 1 Architectures include the following. The Apollo Program was intended to extend to landing a 14-day base in enhanced Lunar Excursion Modules. The Air Force was the first to propose preintegrated cylindrical modules landed on the Lunar surface. The University of Wisconsin proposed building a module and hub system on the surface. Madhu Thangavelu proposed assembling such a module and hub base in orbit and then landing it intact on the moon. Class 2 Architectures include: The NASA 90 Day Study proposed an inflatable sphere of about 20m diameter for a lunar habitat. Jenine Abarbanel of Colorado State University proposed rectangular inflatable habitats, with lunar regolith as ballast on the flat top. Class 3 Architectures include: William Simon proposed a lunar base bored into a crater rim. Alice Eichold proposed a base within a crater ring. The paper presents a comparative characterization and analysis of these and other example paradigms of proposed Lunar construction. It evaluates both the architectures and the NASA Habitats and Surface Construction Road Map for how well they correlate to one another.

  20. Ku-Band Data-Communication Adapter

    NASA Technical Reports Server (NTRS)

    Schadelbauer, Steve

    1995-01-01

    Data-communication adapter circuit on single printed-circuit board serves as general-purpose interface between personal computer and satellite communication system. Designed as direct interface with Ku-band data-communication system for payloads on space shuttle, also used with any radio-frequency transmission systems. Readily installed in almost any personal computer via widely used Industry Standard Architecture (ISA) bus.

  1. Exploration Space Suit Architecture: Destination Environmental-Based Technology Development

    NASA Technical Reports Server (NTRS)

    Hill, Terry R.

    2010-01-01

    This paper picks up where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars (Hill, Johnson, IEEEAC paper #1209) left off in the development of a space suit architecture that is modular in design and interfaces and could be reconfigured to meet the mission, or reconfigured during any given mission, depending on the tasks or destination. This paper will walk through the continued development of a space suit system architecture, and how it should evolve to meet the future exploration EVA needs of the United States space program. In looking forward to future US space exploration and determining how the work performed to date in the CxP would map to a future space suit architecture with maximum re-use of technology and functionality, a series of thought exercises and analyses has provided a strong indication that the CxP space suit architecture is well postured to provide a viable solution for future exploration missions. Through the destination environmental analysis that is presented in this paper, the modular architecture approach provides the lowest mass and lowest mission cost for the protection of the crew for any human mission outside of low Earth orbit. Some of the studies presented here provide a look at, and validation of, the non-environmental design drivers that will become ever more important the further away from Earth humans venture and the longer they are away. Additionally, the analysis demonstrates a logical clustering of design environments that allows a very focused approach to technology prioritization, development, and design that will maximize the return on investment independent of any particular program, and provide architecture and design solutions for space suit systems in time or ahead of being required for any particular manned flight program in the future. In discussing the new approach to space suit design and interface definition, the paper will show how the architecture is very adaptable to programmatic and funding changes with

  2. Real-time control system for adaptive resonator

    SciTech Connect

    Flath, L; An, J; Brase, J; Hurd, R; Kartz, M; Sawvel, R; Silva, D

    2000-07-24

    Sustained operation of high average power solid-state lasers currently requires an adaptive resonator to produce the optimal beam quality. We describe the architecture of a real-time adaptive control system for correcting intra-cavity aberrations in a heat capacity laser. Image data collected from a wavefront sensor are processed and used to control phase with a high-spatial-resolution deformable mirror. Our controller takes advantage of recent developments in low-cost, high-performance processor technology. A desktop-based computational engine and object-oriented software architecture replace the high-cost rack-mount embedded computers of previous systems.
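The sensor-to-mirror correction loop described above can be sketched as a simple iterative controller. The function and array shapes below are illustrative placeholders, not the paper's actual interfaces: a reconstructor matrix maps wavefront-sensor slopes to actuator space, and a leaky-integrator update drives the deformable-mirror commands toward a flat wavefront.

```python
import numpy as np

def adaptive_correction_step(wavefront_slopes, recon_matrix, mirror_cmd, gain=0.3):
    """One iteration of a wavefront-sensor -> deformable-mirror control loop.

    wavefront_slopes: measured sensor slopes (1-D array)
    recon_matrix:     reconstructor mapping slopes to actuator space
    mirror_cmd:       current actuator commands, updated by an integrator
    """
    residual = recon_matrix @ wavefront_slopes   # estimate residual phase error
    return mirror_cmd - gain * residual          # integrator update toward flat wavefront

# Toy usage: 8 sensor slopes driving 4 actuators with a random reconstructor
rng = np.random.default_rng(0)
R = rng.standard_normal((4, 8)) * 0.1
cmd = np.zeros(4)
for _ in range(10):
    slopes = rng.standard_normal(8)
    cmd = adaptive_correction_step(slopes, R, cmd)
```

In a real system the reconstructor would be calibrated from an actuator-to-sensor influence matrix and the loop would run at the sensor frame rate; the point here is only the shape of the closed loop.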

  3. On-board multispectral classification study. Volume 2: Supplementary tasks. [adaptive control

    NASA Technical Reports Server (NTRS)

    Ewalt, D.

    1979-01-01

    The operational tasks of the onboard multispectral classification study were defined. These tasks include: sensing characteristics for future space applications; information adaptive systems architectural approaches; data set selection criteria; and onboard functional requirements for interfacing with global positioning satellites.

  4. A framework for constructing adaptive and reconfigurable systems

    SciTech Connect

    Poirot, Pierre-Etienne; Nogiec, Jerzy; Ren, Shangping; /IIT, Chicago

    2007-05-01

    This paper presents a software approach to augmenting existing real-time systems with self-adaptation capabilities. In this approach, based on the control loop paradigm commonly used in industrial control, self-adaptation is decomposed into observing system events, inferring necessary changes based on a system's functional model, and activating appropriate adaptation procedures. The solution adopts an architectural decomposition that emphasizes independence and separation of concerns. It encapsulates observation, modeling and correction into separate modules to allow for easier customization of the adaptive behavior and flexibility in selecting implementation technologies.
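The observe / infer / activate decomposition described above can be sketched as a minimal event loop. All names and signatures here are hypothetical illustrations of the pattern, not the framework's API: observation updates a functional model, inference matches predicates against that model, and activation runs the corresponding adaptation procedure.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptationLoop:
    model: dict                                   # functional model of the system
    rules: list = field(default_factory=list)     # (predicate, correction) pairs

    def on_event(self, event: dict):
        """Observe an event, infer needed changes, activate corrections."""
        self.model.update(event)                  # observation updates the model
        fired = []
        for predicate, correction in self.rules:
            if predicate(self.model):             # inference against the model
                correction(self.model)            # activation of the procedure
                fired.append(correction.__name__)
        return fired

def throttle(model):                              # example adaptation procedure
    model["rate"] = model["rate"] // 2

loop = AdaptationLoop(model={"rate": 100, "queue": 0})
loop.rules.append((lambda m: m["queue"] > 50, throttle))
fired = loop.on_event({"queue": 80})              # overload event halves the rate
```

Keeping the predicate, the model, and the correction as separate objects mirrors the paper's separation of concerns: any one of them can be swapped without touching the other two.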

  5. Lunar Navigation Architecture Design Considerations

    NASA Technical Reports Server (NTRS)

    D'Souza, Christopher; Getchius, Joel; Holt, Greg; Moreau, Michael

    2009-01-01

    The NASA Constellation Program is aiming to establish a long-term presence on the lunar surface. The Constellation elements (Orion, Altair, Earth Departure Stage, and Ares launch vehicles) will require a lunar navigation architecture for navigation state updates during lunar-class missions. Orion in particular has baselined earth-based ground direct tracking as the primary source for much of its absolute navigation needs. However, due to the uncertainty in the lunar navigation architecture, the Orion program has had to make certain assumptions on the capabilities of such architectures in order to adequately scale the vehicle design trade space. The following paper outlines lunar navigation requirements, the Orion program assumptions, and the impacts of these assumptions to the lunar navigation architecture design. The selection of potential sites was based upon geometric baselines, logistical feasibility, redundancy, and abort support capability. Simulated navigation covariances mapped to entry interface flightpath-angle uncertainties were used to evaluate knowledge errors. A minimum ground station architecture was identified consisting of Goldstone, Madrid, Canberra, Santiago, Hartebeeshoek, Dongora, Hawaii, Guam, and Ascension Island (or the geometric equivalent).

  6. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    NASA Astrophysics Data System (ADS)

    Solomon, D.; van Dijk, A.

    The "2002 ESA Lunar Architecture Workshop" (June 3-16) ESTEC, Noordwijk, NL and V2_Lab, Rotterdam, NL) is the first-of-its-kind workshop for exploring the design of extra-terrestrial (infra) structures for human exploration of the Moon and Earth-like planets introducing 'architecture's current line of research', and adopting an architec- tural criteria. The workshop intends to inspire, engage and challenge 30-40 European masters students from the fields of aerospace engineering, civil engineering, archi- tecture, and art to design, validate and build models of (infra) structures for Lunar exploration. The workshop also aims to open up new physical and conceptual terrain for an architectural agenda within the field of space exploration. A sound introduc- tion to the issues, conditions, resources, technologies, and architectural strategies will initiate the workshop participants into the context of lunar architecture scenarios. In my paper and presentation about the development of the ideology behind this work- shop, I will comment on the following questions: * Can the contemporary architectural agenda offer solutions that affect the scope of space exploration? It certainly has had an impression on urbanization and colonization of previously sparsely populated parts of Earth. * Does the current line of research in architecture offer any useful strategies for com- bining scientific interests, commercial opportunity, and public space? What can be learned from 'state of the art' architecture that blends commercial and public pro- grammes within one location? * Should commercial 'colonisation' projects in space be required to provide public space in a location where all humans present are likely to be there in a commercial context? Is the wave in Koolhaas' new Prada flagship store just a gesture to public space, or does this new concept in architecture and shopping evolve the public space? 
* What can we learn about designing (infra-) structures on the Moon or any other

  7. Architectures for intelligent robots in the age of exploitation

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Sarkar, Saurabh; Mathur, Kovid; Tennety, Srinivas

    2009-01-01

    History shows that problems that cause human confusion often lead to inventions to solve them, which then leads to exploitation of the invention, creating a confusion-invention-exploitation cycle. Robotics, which started as a new type of universal machine implemented with a computer-controlled mechanism in the 1960s, has progressed through an Age of Over-expectation, a Time of Nightmare, and an Age of Realism, and is now entering the Age of Exploitation. The purpose of this paper is to propose an architecture for the modern intelligent robot in which sensors that permit adaptation to changes in the environment are combined with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. This ideal model may be compared to various controllers that have been implemented using Ethernet, CAN Bus and JAUS architectures and to modern, embedded, mobile computing architectures. Several prototypes and simulations are considered in view of peta-computing. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications.

  8. Learning, memory, and the role of neural network architecture.

    PubMed

    Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M

    2011-06-01

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
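The learning-versus-memory tradeoff in sequential function approximation described above can be illustrated with a toy experiment. This is not the study's actual setup; the network size, learning rate, and target functions are invented for illustration. A small two-layer network first learns one function, then a second, and we measure how much of the first it retains.

```python
import numpy as np

def train(net, xs, ys, lr=0.05, steps=2000):
    """Plain batch gradient descent on a one-hidden-layer tanh network."""
    W1, W2 = net
    for _ in range(steps):
        h = np.tanh(xs @ W1)                                  # hidden activations
        err = h @ W2 - ys                                     # prediction error
        W2 -= lr * h.T @ err / len(xs)                        # output-layer update
        W1 -= lr * xs.T @ ((err @ W2.T) * (1 - h**2)) / len(xs)  # backprop to input layer
    return W1, W2

def mse(net, xs, ys):
    W1, W2 = net
    return float(np.mean((np.tanh(xs @ W1) @ W2 - ys) ** 2))

rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, (64, 1))
f1, f2 = np.sin(3 * xs), np.cos(3 * xs)                       # two sequential tasks
net = (rng.standard_normal((1, 16)) * 0.5, rng.standard_normal((16, 1)) * 0.5)
net = train(net, xs, f1)                                      # acquire task 1
err_before = mse(net, xs, f1)
net = train(net, xs, f2)                                      # then learn task 2
err_after = mse(net, xs, f1)                                  # retention of task 1 degrades
```

Comparing `err_before` with `err_after` exposes the retention cost of sequential acquisition; the paper's contribution is relating such costs to error-landscape curvature across parallel and layered architectures.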

  9. An ICAI architecture for troubleshooting in complex, dynamic systems

    NASA Technical Reports Server (NTRS)

    Fath, Janet L.; Mitchell, Christine M.; Govindaraj, T.

    1990-01-01

    Ahab, an intelligent computer-aided instruction (ICAI) program, illustrates an architecture for simulator-based ICAI programs to teach troubleshooting in complex, dynamic environments. The architecture posits three elements of a computerized instructor: the task model, the student model, and the instructional module. The task model is a prescriptive model of expert performance that uses symptomatic and topographic search strategies to provide students with directed problem-solving aids. The student model is a descriptive model of student performance in the context of the task model. This student model compares the student and task models, critiques student performance, and provides interactive performance feedback. The instructional module coordinates information presented by the instructional media, the task model, and the student model so that each student receives individualized instruction. Concept and metaconcept knowledge that supports these elements is contained in frames and production rules, respectively. The results of an experimental evaluation are discussed. They support the hypothesis that training with an adaptive online system built using the Ahab architecture produces better performance than training using simulator practice alone, at least with unfamiliar problems. It is not sufficient to develop an expert strategy and present it to students using offline materials. The training is most effective if it adapts to individual student needs.
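The three-element instructor architecture described above can be sketched as follows. All rules, fault names, and function signatures here are hypothetical; they only illustrate how a task model, a student model, and an instructional module might divide the work.

```python
def expert_diagnosis(symptoms):
    """Task model: symptomatic search first, topographic search as fallback."""
    symptom_rules = {"no_power": "check_breaker", "overheat": "check_coolant"}
    for s in symptoms:
        if s in symptom_rules:
            return symptom_rules[s]          # direct symptom-to-action rule fired
    return "trace_topology"                  # no rule matched: topographic search

def critique(student_action, symptoms):
    """Student model: compare the student's step against the task model's step."""
    expected = expert_diagnosis(symptoms)
    return {"expected": expected, "correct": student_action == expected}

def instruct(history):
    """Instructional module: adapt feedback to accumulated performance."""
    misses = sum(1 for h in history if not h["correct"])
    return "show_worked_example" if misses >= 2 else "brief_hint"
```

The key point the sketch preserves is that the student model never judges in isolation: it always evaluates the student relative to the task model's prescription, which is what lets instruction adapt to individual needs.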

  10. Software Defined Radio Standard Architecture and its Application to NASA Space Missions

    NASA Technical Reports Server (NTRS)

    Andro, Monty; Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space-based platforms.

  11. Architecture assessment of HLLV candidates

    NASA Technical Reports Server (NTRS)

    Thompson, Walter E.

    1992-01-01

    Results of an architecture study of four Heavy Lift Launch Vehicle (HLLV) families are summarized, with attention given to civil, commercial, military and Space Exploration Initiative (SEI) applications. The Mars Exploration architecture is used as the SEI model baseline, and the architecture of each vehicle family is analyzed with respect to ground processing, launch operations, on-orbit operations, mission performance, and cost. For lunar missions, a 70-t earth-to-orbit (ETO) vehicle is shown to have definite cost advantages, with only small operational disadvantages, if the lunar program is small or medium in size. For Mars, a comparison of 150-t and 250-t ETO vehicles shows that little operational advantage is gained by going to the 250-t size.

  12. Transcriptomic Analysis Using Olive Varieties and Breeding Progenies Identifies Candidate Genes Involved in Plant Architecture.

    PubMed

    González-Plaza, Juan J; Ortiz-Martín, Inmaculada; Muñoz-Mérida, Antonio; García-López, Carmen; Sánchez-Sevilla, José F; Luque, Francisco; Trelles, Oswaldo; Bejarano, Eduardo R; De La Rosa, Raúl; Valpuesta, Victoriano; Beuzón, Carmen R

    2016-01-01

    Plant architecture is a critical trait in fruit crops that can significantly influence yield, pruning, planting density and harvesting. Little is known about how plant architecture is genetically determined in olive, where most of the existing varieties are traditional, with an architecture poorly suited for modern growing and harvesting systems. In the present study, we have carried out microarray analysis of meristematic tissue to compare expression profiles of olive varieties displaying differences in architecture, as well as seedlings from their cross pooled on the basis of their shared architecture-related phenotypes. The microarray used, previously developed by our group, has already been applied to identify candidate genes involved in regulating juvenile to adult transition in the shoot apex of seedlings. Varieties with distinct architecture phenotypes and individuals from segregating progenies displaying opposite architecture features were used to link phenotype to expression. Here, we identify 2252 differentially expressed genes (DEGs) associated with differences in plant architecture. Microarray results were validated by quantitative RT-PCR carried out on genes with functional annotation likely related to plant architecture. Twelve of these genes were further analyzed in individual seedlings of the corresponding pool. We also examined Arabidopsis mutants in putative orthologs of these targeted candidate genes, finding altered architecture for most of them. This supports a functional conservation between species and potential biological relevance of the candidate genes identified. This study is the first to identify genes associated with plant architecture in olive, and the results obtained could be of great help in future programs aimed at selecting phenotypes adapted to modern cultivation practices in this species.

  13. SME2EM: Smart mobile end-to-end monitoring architecture for life-long diseases.

    PubMed

    Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani

    2016-01-01

    Monitoring life-long diseases requires continuous measurements and recording of physical vital signs. Most of these diseases are manifested through unexpected and non-uniform occurrences and behaviors. It is impractical to keep patients in hospitals, health-care institutions, or even at home for long periods of time. Monitoring solutions based on smartphones combined with mobile sensors and wireless communication technologies are a potential candidate to support complete mobility-freedom, not only for patients, but also for physicians. However, existing monitoring architectures based on smartphones and modern communication technologies are not suitable to address some challenging issues, such as intensive and big data, resource constraints, data integration, and context awareness in an integrated framework. This manuscript provides a novel mobile-based end-to-end architecture for live monitoring and visualization of life-long diseases. The proposed architecture provides smartness features to cope with continuous monitoring, data explosion, dynamic adaptation, unlimited mobility, and constrained device resources. The integration of the architecture's components provides information about diseases' recurrences as soon as they occur to expedite taking necessary actions, and thus prevent severe consequences. Our architecture system is formally model-checked to automatically verify its correctness against designers' desirable properties at design time. Its components are fully implemented as Web services with respect to the SOA architecture to be easy to deploy and integrate, and supported by Cloud infrastructure and services to allow high scalability, availability of processes and data being stored and exchanged. The architecture's applicability is evaluated through concrete experimental scenarios on monitoring and visualizing states of epileptic diseases. The obtained theoretical and experimental results are very promising and efficiently satisfy the proposed

  15. Transcriptomic Analysis Using Olive Varieties and Breeding Progenies Identifies Candidate Genes Involved in Plant Architecture

    PubMed Central

    González-Plaza, Juan J.; Ortiz-Martín, Inmaculada; Muñoz-Mérida, Antonio; García-López, Carmen; Sánchez-Sevilla, José F.; Luque, Francisco; Trelles, Oswaldo; Bejarano, Eduardo R.; De La Rosa, Raúl; Valpuesta, Victoriano; Beuzón, Carmen R.

    2016-01-01

    Plant architecture is a critical trait in fruit crops that can significantly influence yield, pruning, planting density and harvesting. Little is known about how plant architecture is genetically determined in olive, where most of the existing varieties are traditional, with an architecture poorly suited for modern growing and harvesting systems. In the present study, we have carried out microarray analysis of meristematic tissue to compare expression profiles of olive varieties displaying differences in architecture, as well as seedlings from their cross pooled on the basis of their shared architecture-related phenotypes. The microarray used, previously developed by our group, has already been applied to identify candidate genes involved in regulating juvenile to adult transition in the shoot apex of seedlings. Varieties with distinct architecture phenotypes and individuals from segregating progenies displaying opposite architecture features were used to link phenotype to expression. Here, we identify 2252 differentially expressed genes (DEGs) associated with differences in plant architecture. Microarray results were validated by quantitative RT-PCR carried out on genes with functional annotation likely related to plant architecture. Twelve of these genes were further analyzed in individual seedlings of the corresponding pool. We also examined Arabidopsis mutants in putative orthologs of these targeted candidate genes, finding altered architecture for most of them. This supports a functional conservation between species and potential biological relevance of the candidate genes identified. This study is the first to identify genes associated with plant architecture in olive, and the results obtained could be of great help in future programs aimed at selecting phenotypes adapted to modern cultivation practices in this species. PMID:26973682

  17. Bioarchitecture: bioinspired art and architecture--a perspective.

    PubMed

    Ripley, Renee L; Bhushan, Bharat

    2016-08-01

    Art and architecture can be an obvious choice to pair with science though historically this has not always been the case. This paper is an attempt to interact across disciplines, define a new genre, bioarchitecture, and present opportunities for further research, collaboration and professional cooperation. Biomimetics, or the copying of living nature, is a field that is highly interdisciplinary, involving the understanding of biological functions, structures and principles of various objects found in nature by scientists. Biomimetics can lead to biologically inspired design, adaptation or derivation from living nature. As applied to engineering, bioinspiration is a more appropriate term, involving interpretation, rather than direct copying. Art involves the creation of discrete visual objects intended by their creators to be appreciated by others. Architecture is a design practice that makes a theoretical argument and contributes to the discourse of the discipline. Bioarchitecture is a blending of art/architecture and biomimetics/bioinspiration, and incorporates a bioinspired design from the outset in all parts of the work at all scales. Herein, we examine various attempts to date of art and architecture to incorporate bioinspired design into their practice, and provide an outlook and provocation to encourage collaboration among scientists and designers, with the aim of achieving bioarchitecture. This article is part of the themed issue 'Bioinspired hierarchically structured surfaces for green science'. PMID:27354727

  19. Study of heterogeneous and reconfigurable architectures in the communication domain

    NASA Astrophysics Data System (ADS)

    Feldkaemper, H. T.; Blume, H.; Noll, T. G.

    2003-05-01

    One of the most challenging design issues for next generations of (mobile) communication systems is fulfilling the computational demands while finding an appropriate trade-off between flexibility and implementation aspects, especially power consumption. Flexibility of modern architectures is desirable, e.g. concerning adaptation to new standards and reduction of the time-to-market of a new product. Typical target architectures for future communication systems include embedded FPGAs, dedicated macros, as well as programmable digital signal and control oriented processor cores, as each of these has its specific advantages. These will be integrated as a System-on-Chip (SoC). For such a heterogeneous architecture, a design space exploration and an appropriate partitioning play a crucial role. Using the example of a Viterbi decoder, as frequently used in communication systems, we show which costs in terms of ATE complexity arise when implementing typical components on different types of architecture blocks. A factor of about seven orders of magnitude separates a physically optimised implementation from an implementation on a programmable DSP kernel. An implementation on an embedded FPGA kernel lies between these two, representing an attractive compromise with high flexibility and low power consumption. Extending this comparison to further components, it is shown quantitatively that the cost ratio between different implementation alternatives is closely related to the operation to be performed. This information is essential for the appropriate partitioning of heterogeneous systems.

  20. ALLIANCE: An architecture for fault tolerant multi-robot cooperation

    SciTech Connect

    Parker, L.E.

    1995-02-01

    ALLIANCE is a software architecture that facilitates the fault tolerant cooperative control of teams of heterogeneous mobile robots performing missions composed of loosely coupled, largely independent subtasks. ALLIANCE allows teams of robots, each of which possesses a variety of high-level functions that it can perform during a mission, to individually select appropriate actions throughout the mission based on the requirements of the mission, the activities of other robots, the current environmental conditions, and the robot's own internal states. ALLIANCE is a fully distributed, behavior-based architecture that incorporates the use of mathematically modeled motivations (such as impatience and acquiescence) within each robot to achieve adaptive action selection. Since cooperative robotic teams usually work in dynamic and unpredictable environments, this software architecture allows the robot team members to respond robustly, reliably, flexibly, and coherently to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. The feasibility of this architecture is demonstrated in an implementation on a team of mobile robots performing a laboratory version of hazardous waste cleanup.
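The motivational action selection described above can be sketched in a toy form: a robot's impatience for a task grows while the task is unmet and is reset when a teammate is seen working on it, so tasks no one else covers eventually cross the activation threshold. The rates, threshold, and task names below are invented for illustration, not ALLIANCE's actual parameters.

```python
def select_task(motivations, impatience_rate, others_working, threshold=10.0):
    """One update of per-task motivation levels; returns the activated task, if any."""
    for task in motivations:
        if task in others_working:
            motivations[task] = 0.0                      # suppressed: teammate has it
        else:
            motivations[task] += impatience_rate[task]   # impatience grows over time
        if motivations[task] >= threshold:
            return task                                  # behavior set activates
    return None

motiv = {"collect": 0.0, "scout": 0.0}
rates = {"collect": 3.0, "scout": 5.0}
chosen = None
while chosen is None:
    # A teammate is observed scouting, so that motivation stays suppressed
    # and the uncovered "collect" task eventually wins.
    chosen = select_task(motiv, rates, others_working={"scout"})
```

Because selection depends only on locally observed activity, removing the scouting teammate would simply let the "scout" motivation grow again, which is the fault-tolerance mechanism the abstract describes.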

  1. Airport Surface Network Architecture Definition

    NASA Technical Reports Server (NTRS)

    Nguyen, Thanh C.; Eddy, Wesley M.; Bretmersky, Steven C.; Lawas-Grodek, Fran; Ellis, Brenda L.

    2006-01-01

    Currently, airport surface communications are fragmented across multiple types of systems. These communication systems for airport operations at most airports today are based on dedicated and separate architectures that cannot support system-wide interoperability and information sharing. The requirements placed upon the Communications, Navigation, and Surveillance (CNS) systems in airports are rapidly growing, and integration is urgently needed if the future vision of the National Airspace System (NAS) and the Next Generation Air Transportation System (NGATS) 2025 concept are to be realized. To address this and other problems, such as airport surface congestion, the Space Based Technologies Project's Surface ICNS Network Architecture team at NASA Glenn Research Center has assessed airport surface communications requirements, analyzed existing and future surface applications, and defined a set of architecture functions that will help design a scalable, reliable and flexible surface network architecture to meet the current and future needs of airport operations. This paper describes the systems approach or methodology to networking that was employed to assess airport surface communications requirements, analyze applications, and define the surface network architecture functions as the building blocks or components of the network. The systems approach used for defining these functions is relatively new to networking. It views the surface network, along with its environment (everything that the surface network interacts with or impacts), as a system. Associated with this system are sets of services that are offered by the network to the rest of the system. Therefore, the surface network is considered part of the larger system (such as the NAS), with interactions and dependencies between the surface network and its users, applications, and devices. The surface network architecture includes components such as addressing/routing, network management, network

  2. ATMTN: a telemammography network architecture.

    PubMed

    Sheybani, Ehsan O; Sankar, Ravi

    2002-12-01

    The National Cancer Institute (NCI) goal of reaching more than 80% of eligible women with mammography screening by the year 2000 remains a challenge. In fact, a recent medical report reveals that while other types of cancer are experiencing negative growth, breast cancer has been the only one with a positive growth rate over the last few years. This is primarily due to the fact that 1) the examination process is complex and lengthy and 2) it is not available to the majority of women who live in remote sites. Currently, for mammography screening, women have to go to doctors or cancer centers/hospitals annually, while high-risk patients may have to visit more often. One way to resolve these problems is through the use of advanced networking technologies and signal processing algorithms. On one hand, software modules can help detect, with high precision, true negatives (TN), while marking true positives (TP) for further investigation. Unavoidably, in this process some false negatives (FN) will be generated that are potentially life threatening; however, inclusion of the detection software improves TP detection and, hence, reduces FNs drastically. Since TNs are the majority of examinations on a randomly selected population, this first step reduces the load on radiologists by a tremendous amount. On the other hand, high-speed networking equipment can accelerate the required clinic-lab connection and make detection, segmentation, and image enhancement algorithms readily available to the radiologists. This will bring breast cancer care, caregivers, and facilities to the patients and expand diagnostics and treatment to remote sites. This research describes an asynchronous transfer mode telemammography network (ATMTN) architecture for real-time, online screening, detection, and diagnosis of breast cancer. ATMTN is a unique high-speed network integrated with automatic robust computer-assisted diagnosis-detection/digital signal processing (CAD

  3. Hybrid-Polarity SAR Architecture

    NASA Astrophysics Data System (ADS)

    Raney, R. K.; Freeman, A.

    2009-04-01

    A space-based synthetic aperture radar (SAR) designed to provide quantitative information on a global scale implies severe requirements to maximize coverage and to sustain reliable operational calibration. These requirements are best served by the hybrid-polarity architecture, in which the radar transmits in circular polarization, and receives on two orthogonal linear polarizations, coherently, retaining their relative phase. This paper reviews those advantages, summarizes key attributes of hybrid-polarity dual- and quadrature-polarized SARs including conditions under which the signal-to-noise ratio is conserved, and describes the evolution of this architecture from first principles.

  4. Frame architecture for video servers

    NASA Astrophysics Data System (ADS)

    Venkatramani, Chitra; Kienzle, Martin G.

    1999-11-01

    Video is inherently frame-oriented, and most applications, such as commercial video processing, require manipulating video in terms of frames. However, typical video servers treat videos as byte streams and perform random access based on approximate byte offsets supplied by the client. They do not provide a frame- or timecode-oriented API, which is essential for many applications. This paper describes a frame-oriented architecture for video servers. It also describes the implementation in the context of IBM's VideoCharger server. The latter part of the paper describes an application that uses the frame architecture and provides fast and slow-motion scanning capabilities to the server.
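    The abstract does not spell out the VideoCharger API, but the core idea of a frame-oriented layer on top of a byte-stream store can be sketched as a timecode-to-frame mapping plus an index of exact per-frame byte offsets (all names below are hypothetical illustrations, not the server's actual interface):

    ```python
    def timecode_to_frame(timecode, fps=30):
        """Convert an HH:MM:SS:FF timecode string to an absolute frame number."""
        hh, mm, ss, ff = (int(x) for x in timecode.split(":"))
        return ((hh * 60 + mm) * 60 + ss) * fps + ff

    class FrameIndex:
        """Maps frame numbers to exact byte ranges, replacing approximate byte seeks."""

        def __init__(self, frame_offsets):
            # frame_offsets[i] is the byte offset where frame i starts;
            # a final sentinel entry marks the end of the last frame.
            self.frame_offsets = frame_offsets

        def byte_range(self, frame_no):
            start = self.frame_offsets[frame_no]
            end = self.frame_offsets[frame_no + 1]
            return start, end
    ```

    A client could then request `byte_range(timecode_to_frame("00:01:00:15"))` and receive exact frame boundaries rather than guessing offsets.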

  5. Software design by reusing architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay; Nii, H. Penny

    1992-01-01

    Abstraction fosters reuse by providing a class of artifacts that can be instantiated or customized to produce a set of artifacts meeting different specific requirements. It is proposed that significant leverage can be obtained by abstracting software system designs and the design process. The result of such an abstraction is a generic architecture and a set of knowledge-based, customization tools that can be used to instantiate the generic architecture. An approach for designing software systems based on the above idea is described. The approach is illustrated through an implemented example, and the advantages and limitations of the approach are discussed.

  6. Bit-serial neuroprocessor architecture

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    2001-01-01

    A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The neuroprocessor architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up-table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a fixed pool of neurons to achieve the successive hidden layers of a recurrent network topology.
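    The time-multiplexing idea can be illustrated in software: a fixed pool of physical neurons is reused across passes to compute a layer wider than the pool. This is a behavioral sketch only; the pool size is an arbitrary choice here, and the `sigmoid` function stands in for the hardware's ROM look-up table.

    ```python
    import math

    POOL_SIZE = 4  # hypothetical number of physical neurons in the pool

    def sigmoid(x):
        # Stands in for the sigmoid activation ROM look-up table.
        return 1.0 / (1.0 + math.exp(-x))

    def run_layer(inputs, weights):
        """Evaluate one layer of len(weights) neurons using the fixed pool.

        Each pass through the loop below corresponds to one time-multiplexed
        use of the pool: up to POOL_SIZE output neurons are computed per pass,
        so a layer wider than the pool simply takes more passes.
        """
        outputs = []
        for start in range(0, len(weights), POOL_SIZE):  # one multiplexed pass
            for w_row in weights[start:start + POOL_SIZE]:
                acc = sum(w * x for w, x in zip(w_row, inputs))
                outputs.append(sigmoid(acc))
        return outputs
    ```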

  7. Archibabel: Tracing the Writing Architecture Project in Architectural Education

    ERIC Educational Resources Information Center

    Lappin, Sarah A.; Erk, Gül Kaçmaz; Martire, Agustina

    2015-01-01

    Though much recent scholarship has investigated the potential of writing in creative practice (including visual arts, drama, even choreography), there are few models in the literature which discuss writing in the context of architectural education. This article aims to address this dearth of pedagogical research, analysing the cross-disciplinary…

  8. Impact of Enterprise Architecture on Architecture Agility and Coherence

    ERIC Educational Resources Information Center

    Abaas, Kanari

    2009-01-01

    IT has permeated to the very roots of organizations and has an ever increasingly important role in the achievement of overall corporate objectives and business strategies. This paper presents an approach for evaluating the impact of existing Enterprise Architecture (EA) implementations. The paper answers questions such as: What are the challenges…

  9. Unstructured Adaptive Grid Computations on an Array of SMPs

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Pramanick, Ira; Sohn, Andrew; Simon, Horst D.

    1996-01-01

    Dynamic load balancing is necessary for parallel adaptive methods to solve unsteady CFD problems on unstructured grids. In this paper, we present such a dynamic load balancing framework, called JOVE. Results on a four-POWERnode POWER CHALLENGE array demonstrated that load balancing gives significant performance improvements over no load balancing for such adaptive computations. The parallel speedup of JOVE, implemented using MPI on the POWER CHALLENGE array, was significant, reaching as high as 31 on 32 processors. An implementation of JOVE that exploits an 'array of SMPs' architecture was also studied; this hybrid JOVE outperformed flat JOVE by up to 28% on the meshes and adaption models tested. With large, realistic meshes and actual flow-solver and adaption phases incorporated into JOVE, hybrid JOVE can be expected to yield a significant advantage over flat JOVE, especially as the number of processors is increased, thus demonstrating the scalability of the array-of-SMPs architecture.

  10. Self-Consistent Simulations of Inductively Coupled Discharges at Very Low Pressures Using a FFT Method for Calculating the Non-local Electron Conductivity for the General Case of a Non-Uniform Plasma

    NASA Astrophysics Data System (ADS)

    Polomarov, Oleg; Theodosiou, Constantine; Kaganovich, Igor

    2003-10-01

    A self-consistent system of equations for the kinetic description of non-local, non-uniform, nearly collisionless plasmas of low-pressure discharges is presented. The system consists of a non-local conductivity operator, and a kinetic equation for the electron distribution function (EEDF) averaged over fast electron bounce motions. A Fast Fourier Transform (FFT) method was applied to speed up the numerical simulations. The importance of accounting for the non-uniform plasma density profile in computing the current density profile and the EEDF is demonstrated. Effects of plasma non-uniformity on electron heating in an rf electric field have also been studied. An enhancement of the electron heating due to the bounce resonance between the electron bounce motion and the rf electric field has been observed. Additional information on the subject is posted in http://www.pppl.gov/pub_report/2003/PPPL-3814-abs.html and in http://arxiv.org/abs/physics/0211009
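    The abstract does not give the paper's actual conductivity operator, but the kind of speed-up an FFT method provides can be sketched generically: applying a translation-invariant non-local kernel K(x − x′) to a field via FFT-based convolution costs O(N log N) instead of the O(N²) of a direct sum. The kernel and field below are arbitrary illustrations.

    ```python
    import numpy as np

    def apply_nonlocal(kernel, field):
        """Apply a non-local convolution operator using FFTs.

        Zero-padding to twice the length makes the circular convolution of the
        padded arrays equal to the linear convolution of the originals.
        """
        n = len(field)
        K = np.fft.rfft(kernel, 2 * n)
        F = np.fft.rfft(field, 2 * n)
        return np.fft.irfft(K * F, 2 * n)[:n]

    x = np.linspace(0.0, 1.0, 256)
    kernel = np.exp(-np.abs(x) / 0.1)          # decaying non-local kernel (illustrative)
    field = np.sin(2 * np.pi * x)              # field profile (illustrative)
    direct = np.convolve(kernel, field)[:256]  # O(N^2) reference result
    fast = apply_nonlocal(kernel, field)       # O(N log N), same values
    ```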

  11. Space Generic Open Avionics Architecture (SGOAA): Overview

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1992-01-01

    A space generic open avionics architecture created for NASA is described. It will serve as the basis for entities in spacecraft core avionics, capable of being tailored by NASA for future space program avionics ranging from small vehicles such as Moon ascent/descent vehicles to large ones such as Mars transfer vehicles or orbiting stations. The standard consists of: (1) a system architecture; (2) a generic processing hardware architecture; (3) a six class architecture interface model; (4) a system services functional subsystem architectural model; and (5) an operations control functional subsystem architectural model.

  12. Expression of the 1-SST and 1-FFT genes and consequent fructan accumulation in Agave tequilana and A. inaequidens is differentially induced by diverse (a)biotic-stress related elicitors.

    PubMed

    Suárez-González, Edgar Martín; López, Mercedes G; Délano-Frier, John P; Gómez-Leyva, Juan Florencio

    2014-02-15

    The expression of genes coding for sucrose:sucrose 1-fructosyltransferase (1-SST; EC 2.4.1.99) and fructan:fructan 1-fructosyltransferase (1-FFT; EC 2.4.1.100), both fructan biosynthesizing enzymes, characterization by TLC and HPAEC-PAD, as well as the quantification of the fructo-oligosaccharides (FOS) accumulating in response to the exogenous application of sucrose, kinetin (cytokinin) or other plant hormones associated with (a)biotic stress responses were determined in two Agave species grown in vitro, domesticated Agave tequilana var. azul and wild A. inaequidens. It was found that elicitors such as salicylic acid (SA), and jasmonic acid methyl ester (MeJA) had the strongest effect on fructo-oligosaccharide (FOS) accumulation. The exogenous application of 1mM SA induced a 36-fold accumulation of FOS of various degrees of polymerization (DP) in stems of A. tequilana. Other treatments, such as 50mM abscisic acid (ABA), 8% Sucrose (Suc), and 1.0 mg L(-1) kinetin (KIN) also led to a significant accumulation of low and high DP FOS in this species. Conversely, treatment with 200 μM MeJA, which was toxic to A. tequilana, induced an 85-fold accumulation of FOS in the stems of A. inaequidens. Significant FOS accumulation in this species also occurred in response to treatments with 1mM SA, 8% Suc, and 10% polyethylene glycol (PEG). Maximum yields of 13.6 and 8.9 mg FOS per g FW were obtained in stems of A. tequilana and A. inaequidens, respectively. FOS accumulation in the above treatments was tightly associated with increased expression levels of either the 1-FFT or the 1-SST gene in tissues of both Agave species. PMID:23988562

  14. Pancreatic islet plasticity: Interspecies comparison of islet architecture and composition

    PubMed Central

    Steiner, Donald J.; Kim, Abraham; Miller, Kevin; Hara, Manami

    2010-01-01

    The pancreatic islet displays diverse patterns of endocrine cell arrangement. The prototypic islet, with insulin-secreting β-cells forming the core surrounded by other endocrine cells in the periphery, is largely based on studies of normal rodent islets. Recent reports on large animals, including humans, show a difference in islet architecture, in which the endocrine cells are randomly distributed throughout the islet. This particular species difference has raised concerns regarding the interpretation of data based on rodent studies to humans. On the other hand, further variations have been reported in marsupials and some nonhuman primates, which possess an inverted ratio of β-cells to other endocrine cells. This review discusses the striking plasticity of islet architecture and cellular composition among various species including changes in response to metabolic states within a single species. We propose that this plasticity reflects evolutionary acquired adaptation induced by altered physiological conditions, rather than inherent disparities between species. PMID:20657742

  15. Recurrent cerebellar architecture solves the motor-error problem.

    PubMed Central

    Porrill, John; Dean, Paul; Stone, James V.

    2004-01-01

    Current views of cerebellar function have been heavily influenced by the models of Marr and Albus, who suggested that the climbing fibre input to the cerebellum acts as a teaching signal for motor learning. It is commonly assumed that this teaching signal must be motor error (the difference between actual and correct motor command), but this approach requires complex neural structures to estimate unobservable motor error from its observed sensory consequences. We have proposed elsewhere a recurrent decorrelation control architecture in which Marr-Albus models learn without requiring motor error. Here, we prove convergence for this architecture and demonstrate important advantages for the modular control of systems with multiple degrees of freedom. These results are illustrated by modelling adaptive plant compensation for the three-dimensional vestibulo-ocular reflex. This provides a functional role for recurrent cerebellar connectivity, which may be a generic anatomical feature of projections between regions of cerebral and cerebellar cortex. PMID:15255096

  16. Software Architecture of Sensor Data Distribution In Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Lee, Charles; Alena, Richard; Stone, Thom; Ossenfort, John; Walker, Ed; Notario, Hugo

    2006-01-01

    Data from mobile and stationary sensors will be vital in planetary surface exploration. The distribution and collection of sensor data in an ad-hoc wireless network presents a challenge. Irregular terrain, mobile nodes, new associations with access points and repeaters with stronger signals as the network reconfigures to adapt to new conditions, signal fade and hardware failures can cause: a) Data errors; b) Out of sequence packets; c) Duplicate packets; and d) Drop out periods (when node is not connected). To mitigate the effects of these impairments, a robust and reliable software architecture must be implemented. This architecture must also be tolerant of communications outages. This paper describes such a robust and reliable software infrastructure that meets the challenges of a distributed ad hoc network in a difficult environment and presents the results of actual field experiments testing the principles and actual code developed.

  17. Neural network architecture for solving the algebraic matrix Riccati equation

    NASA Astrophysics Data System (ADS)

    Ham, Fredric M.; Collins, Emmanuel G.

    1996-03-01

    This paper presents a neurocomputing approach for solving the algebraic matrix Riccati equation. This approach is able to utilize a good initial condition to reduce the computation time in comparison to standard methods for solving the Riccati equation. The repeated solution of closely related Riccati equations appears in homotopy algorithms for solving certain problems in fixed-architecture control. Hence, the new approach has the potential to significantly speed up these algorithms. It also has potential applications in adaptive control. The structured neural network architecture is trained using error backpropagation based on a steepest-descent learning rule. An example is given which illustrates the advantage of utilizing a good initial condition (i.e., initial setting of the neural network synaptic weight matrix) in the structured neural network.
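    The steepest-descent idea behind such a network can be sketched numerically: descend on the squared Frobenius norm of the Riccati residual R(P) = AᵀP + PA − PBR⁻¹BᵀP + Q, starting from a supplied initial condition. This is a plain matrix-calculus analogue under assumed problem data, not the paper's neural implementation; the step size and iteration count are arbitrary choices.

    ```python
    import numpy as np

    def riccati_descent(A, B, Q, Rw, P0, lr=0.05, iters=1000):
        """Steepest descent on f(P) = 0.5 * ||R(P)||_F^2 for the CARE residual."""
        P = P0.copy()
        BRB = B @ np.linalg.inv(Rw) @ B.T
        for _ in range(iters):
            Res = A.T @ P + P @ A - P @ BRB @ P + Q   # Riccati residual R(P)
            Acl = A - BRB @ P                          # closed-loop matrix
            grad = Acl @ Res + Res @ Acl.T             # gradient of f via the
            P -= lr * grad                             # Frechet derivative of R
        return P

    # 1x1 example with known solution p* = sqrt(2) - 1:
    # -2p - p^2 + 1 = 0  =>  p = -1 + sqrt(2)
    A, B, Q, Rw = (np.array([[v]]) for v in (-1.0, 1.0, 1.0, 1.0))
    P = riccati_descent(A, B, Q, Rw, P0=np.array([[1.0]]))
    ```

    Starting closer to the true solution (a better P0) reduces the iterations needed, which mirrors the paper's point about exploiting a good initial condition.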

  18. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described, with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and for matrices of general structure are described, along with architectures realizing matrix-vector, matrix-matrix, and triple-matrix products. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed, with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  19. Electrooptical adaptive switching network for the hypercube computer

    NASA Technical Reports Server (NTRS)

    Chow, E.; Peterson, J.

    1988-01-01

    An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.
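    The abstract does not detail the hyperswitch's routing algorithm, but adaptive hypercube routing in general can be sketched: a message at node `cur` moves toward `dst` by flipping one differing address bit per hop, and an adaptive router chooses among whichever profitable dimensions are currently free. The greedy fallback below is an illustrative policy, not the hyperswitch's actual one.

    ```python
    def candidate_hops(cur, dst, dims):
        """All neighbors of `cur` that are one bit closer to `dst` in a `dims`-cube."""
        diff = cur ^ dst
        return [cur ^ (1 << d) for d in range(dims) if diff & (1 << d)]

    def route(src, dst, dims, blocked=frozenset()):
        """Greedy adaptive path: take the first profitable hop whose link is free.

        `blocked` is a set of (from_node, to_node) links currently unavailable.
        """
        path = [src]
        cur = src
        while cur != dst:
            options = [n for n in candidate_hops(cur, dst, dims)
                       if (cur, n) not in blocked]
            if not options:
                raise RuntimeError("all profitable links blocked")
            cur = options[0]
            path.append(cur)
        return path
    ```

    Because every hop fixes one differing bit, any path found has exactly popcount(src XOR dst) hops; adaptivity only changes which order the dimensions are traversed when links are busy.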

  20. Architectural Environment: A Resource Kit.

    ERIC Educational Resources Information Center

    J.B. Speed Art Museum, Louisville, KY.

    There are many ways to approach the investigation of architecture. One can look at structural form, climate and topography, the aesthetics of style and decoration, building function, historical factors, cultural meanings, or technology and techniques associated with construction. This resource kit touches upon a few of these approaches, ranging…