Science.gov

Sample records for adaptive fft architecture

  1. A High-Throughput, Adaptive FFT Architecture for FPGA-Based Space-Borne Data Processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Kayla; Zheng, Jason; He, Yutao; Shah, Biren

    2010-01-01

    Historically, computationally intensive data processing for space-borne instruments has relied heavily on ground-based computing resources. But with recent advances in the functional density of Field-Programmable Gate Arrays (FPGAs), there has been an increasing desire to shift more processing on board, thereby relaxing downlink data bandwidth requirements. Fast Fourier Transforms (FFTs) are common building blocks for data processing applications, and there is a growing need to increase the FFT block size. Many existing FFT architectures have mainly emphasized low power consumption or resource usage; but as the block size of the FFT grows, throughput is often the first thing compromised. In addition to power and resource constraints, space-borne digital systems are also limited to a small set of space-qualified memory elements, which typically lag behind their commercially available counterparts in capacity and bandwidth. This external-memory bandwidth limitation creates a bottleneck for a high-throughput FFT design with a large block size. In this paper, we present the Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture for a moderately large block size (32K), with attention to power consumption and resource usage as well as throughput. We also show that the architecture can be easily adapted to different FFT block sizes with different throughput and power requirements. The result is contained entirely within an FPGA, without relying on external memories. Implementation results are summarized.
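The radix-2 butterfly that such architectures replicate in hardware can be sketched in software; the following minimal recursive decimation-in-time FFT is illustrative only, not the MPWK-FFT design itself:

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor times odd half
        out[k] = even[k] + tw             # butterfly: sum output
        out[k + n // 2] = even[k] - tw    # butterfly: difference output
    return out

# An impulse at index 0 transforms to an all-ones spectrum.
spectrum = fft_radix2([1.0] + [0.0] * 7)
```

A hardware kernel implements exactly the loop body (one butterfly: a complex multiply, an add, and a subtract); the 32 parallel butterflies in the MPWK-FFT correspond to unrolling this loop 32 ways per pass.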

  2. High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Kayla; Zheng, Jason X.; He, Yutao; Shah, Biren N.

    2011-01-01

    Exponential growth in microelectronics technology such as field-programmable gate arrays (FPGAs) has enabled high-performance spaceborne instruments with increasing onboard data processing capabilities. As a commonly used digital signal processing (DSP) building block, the fast Fourier transform (FFT) has been of great interest in onboard data processing applications, which need to strike a reasonable balance between high performance (throughput, block size, etc.) and low resource usage (power, silicon footprint, etc.). It is also desirable for a single design to be reusable and adaptable to instruments with different requirements. The Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture was developed, which exploits both the high-throughput benefits of the parallel FFT structure and the low resource usage of Singleton's single-butterfly method. The result is a wide-kernel, multi-pass, adaptive FFT architecture. The 32K-point MPWK-FFT architecture includes 32 radix-2 butterflies, 64 FIFOs to store the real inputs, 64 FIFOs to store the imaginary inputs, complex twiddle-factor storage, and FIFO logic to route the outputs to the correct FIFO. The inputs are stored sequentially in the FIFOs, and the outputs of each butterfly are written sequentially, first into the even FIFO and then into the odd FIFO. Because of the order in which the outputs are written, the even FIFOs, at a depth of 768 each, are 1.5 times deeper than the odd FIFOs, at 512 each. The total memory needed for data storage, assuming each sample is 36 bits, is 2.95 Mbits. The twiddle factors are stored in internal ROM inside the FPGA for fast access; the total memory needed to store them is 589.9 Kbits. This FFT structure combines the high throughput of parallel FFT kernels and the low resource usage of multi-pass FFT kernels with the desired adaptability.
Space instrument missions that need onboard FFT capabilities such as the
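The quoted memory figures check out arithmetically under one plausible reading of the FIFO organization (32 even + 32 odd FIFOs per real/imaginary bank, and N/2 36-bit twiddle words; both are inferences, not statements from the abstract):

```python
# 32 butterflies feed 64 real + 64 imaginary FIFOs: per bank, 32 even
# FIFOs of depth 768 and 32 odd FIFOs of depth 512, at 36 bits/sample.
BITS_PER_SAMPLE = 36
data_bits = 2 * (32 * 768 + 32 * 512) * BITS_PER_SAMPLE  # real + imaginary banks
print(data_bits / 1e6)   # ≈ 2.95 Mbits, matching the quoted data-storage total

# Twiddle ROM for a 32K-point FFT: N/2 = 16384 complex factors at 36 bits each.
twiddle_bits = (32768 // 2) * BITS_PER_SAMPLE
print(twiddle_bits / 1e3)  # ≈ 589.8 Kbits, close to the quoted 589.9 Kbits
```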

  3. FFT Computation with Systolic Arrays, A New Architecture

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The use of the Cooley-Tukey algorithm for computing the 1-d FFT lends itself to a particular matrix factorization which suggests direct implementation by linearly connected systolic arrays. Here we present a new systolic architecture that embodies this algorithm. This implementation requires fewer processors and fewer memory cells than other recent implementations, while retaining all the advantages of systolic arrays. For the decimation-in-frequency case, word-serial data input allows continuous real-time operation without the need for a serial-to-parallel conversion device. No control or data-stream switching is necessary. Computer simulation of this architecture was done in the context of a 1024-point DFT with a fixed-point processor, and CMOS processor implementation has started.

  4. A low-cost PSoC architecture for long FFT

    NASA Astrophysics Data System (ADS)

    Lomoio, Pietro Angelo; Corsonello, Pasquale

    2013-05-01

    A system-level implementation of an FFT architecture for long data series is presented. It exploits the opportunities provided by the newest Programmable System-on-Chips (PSoCs) to perform such intensive algorithms. The proposed strategy relies on a balanced partitioning of computational effort between an embedded ARM processor and a purpose-designed FFT module based on a radix-2 algorithm. External memories are used to accommodate the large amount of complex data and twiddle coefficients. The embedded controller is programmed to handle the high-level management of the algorithm and the correct flow of data among peripherals, without the need for extra control logic. The proposed architecture can easily be reconfigured to change the input data length. When implemented on a Microsemi A2F500 SmartFusion FPGA chip, it consumes approximately 61% of the available logic resources to compute a 65536-point FFT.

  5. A Study on Adapting the Zoom FFT Algorithm to Automotive Millimetre Wave Radar

    NASA Astrophysics Data System (ADS)

    Kuroda, Hiroshi; Takano, Kazuaki

    The millimetre wave radar has been developed for automotive applications such as ACC (Adaptive Cruise Control) and CWS (Collision Warning System). The radar uses MMIC (Monolithic Microwave Integrated Circuit) devices for transmitting and receiving 76 GHz millimetre wave signals. The radar is an FSK (Frequency Shift Keying) monopulse type: it transmits two frequencies in a time-duplex manner and measures the distance and relative speed of targets. The monopulse feature detects the azimuth angle of targets without a scanning mechanism. The Zoom FFT (Fast Fourier Transform) algorithm, which analyses a portion of the frequency domain with fine resolution, has been adapted to the radar to discriminate multiple stationary targets. The Zoom FFT algorithm was evaluated in a test truck. The evaluation results show good performance in discriminating two stationary vehicles in the host lane and the adjacent lane.
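The zoom FFT's core steps (mix the band of interest down to baseband, low-pass and decimate, then take a short FFT at much finer frequency resolution) can be sketched as follows; the function name, the crude boxcar decimator, and the test frequencies are illustrative assumptions, not details from the paper:

```python
import cmath

def zoom_fft(x, f_center, fs, decimate, n_fft):
    """Shift f_center to DC, decimate, and transform the reduced-rate signal.
    Frequency resolution improves from fs/len(x) to fs/(decimate*n_fft)."""
    # Mix down: multiply by a complex exponential at -f_center.
    mixed = [v * cmath.exp(-2j * cmath.pi * f_center * n / fs)
             for n, v in enumerate(x)]
    # Crude low-pass + decimate: average each block of `decimate` samples.
    lo = [sum(mixed[i:i + decimate]) / decimate
          for i in range(0, decimate * n_fft, decimate)]
    # Short DFT of the decimated signal (O(n^2) direct form, for clarity).
    return [sum(lo[n] * cmath.exp(-2j * cmath.pi * k * n / n_fft)
                for n in range(n_fft))
            for k in range(n_fft)]

# A 1010 Hz tone sampled at 8 kHz, zoomed around 1000 Hz: after mixing, the
# tone sits at a 10 Hz offset, resolved by the zoomed ~7.8 Hz bin spacing.
fs, n = 8000.0, 1024
x = [cmath.exp(2j * cmath.pi * 1010.0 * t / fs) for t in range(n)]
bins = zoom_fft(x, 1000.0, fs, decimate=8, n_fft=128)
```

The same 128-point FFT that would give 62.5 Hz resolution at the full rate resolves 7.8 Hz steps inside the zoomed band, which is the property the radar uses to separate closely spaced stationary targets.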

  6. Implementation of Joint Pre-FFT Adaptive Array Antenna and Post-FFT Space Diversity Combining for Mobile ISDB-T Receiver

    NASA Astrophysics Data System (ADS)

    Pham, Dang Hai; Gao, Jing; Tabata, Takanobu; Asato, Hirokazu; Hori, Satoshi; Wada, Tomohisa

    In the application targeted here, four on-glass antenna elements are installed in an automobile to improve the reception quality of a mobile ISDB-T receiver. With regard to the directional characteristics of each antenna, we propose and implement a joint Pre-FFT adaptive array antenna and Post-FFT space diversity combining (AAA-SDC) scheme for the mobile ISDB-T receiver. By applying a joint hardware and software approach, a flexible platform is realized in which several system configuration schemes can be supported; the receiver can be reconfigured on the fly. Simulation results show that the AAA-SDC scheme drastically improves the performance of the mobile ISDB-T receiver, especially in the region of large Doppler shift. Experimental results from a field test also confirm that the proposed AAA-SDC scheme achieves an outstanding reception rate of up to 100% while moving at a speed of 80 km/h.

  7. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  8. High-resolution optical coherence tomography using self-adaptive FFT and array detection

    NASA Astrophysics Data System (ADS)

    Zhao, Yonghua; Chen, Zhongping; Xiang, Shaohua; Ding, Zhihua; Ren, Hongwu; Nelson, J. Stuart; Ranka, Jinendra K.; Windeler, Robert S.; Stentz, Andrew J.

    2001-05-01

    We developed a novel optical coherence tomography (OCT) system that utilized broadband continuum generation for high axial resolution and a high-numerical-aperture (NA) objective for high lateral resolution (<5 μm). The optimal focusing point was dynamically compensated during axial scanning so that it was kept at the position whose optical path length equals that of the reference arm. This gives a uniform focal spot size (<5 μm) at different depths. A new self-adaptive fast Fourier transform (FFT) algorithm was developed to digitally demodulate the interference fringes. The system employed a four-channel detector array for speckle reduction, which significantly improved the image's signal-to-noise ratio.

  9. Architecture for Adaptive Intelligent Systems

    NASA Technical Reports Server (NTRS)

    Hayes-Roth, Barbara

    1993-01-01

    We identify a class of niches to be occupied by 'adaptive intelligent systems (AISs)'. In contrast with niches occupied by typical AI agents, AIS niches present situations that vary dynamically along several key dimensions: different combinations of required tasks, different configurations of available resources, contextual conditions ranging from benign to stressful, and different performance criteria. We present a small class hierarchy of AIS niches that exhibit these dimensions of variability and describe a particular AIS niche, ICU (intensive care unit) patient monitoring, which we use for illustration throughout the paper. We have designed and implemented an agent architecture that supports all of these different kinds of adaptation by exploiting a single underlying theoretical concept: an agent dynamically constructs explicit control plans to guide its choices among situation-triggered behaviors. We illustrate the architecture and its support for adaptation with examples from Guardian, an experimental agent for ICU monitoring.

  10. Fast adaptive composite grid methods on distributed parallel architectures

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Quinlan, Daniel

    1992-01-01

    The fast adaptive composite (FAC) grid method is compared with its asynchronous variant (AFAC) under a variety of conditions, including vectorization and parallelization. Results are given for distributed-memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment are properties of the algorithm and not dependent on the peculiarities of any machine.

  11. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures itself to the problem being solved.

  12. The genetic architecture of climatic adaptation of tropical cattle.

    PubMed

    Porto-Neto, Laercio R; Reverter, Antonio; Prayaga, Kishore C; Chan, Eva K F; Johnston, David J; Hawken, Rachel J; Fordyce, Geoffry; Garcia, Jose Fernando; Sonstegard, Tad S; Bolormaa, Sunduimijid; Goddard, Michael E; Burrow, Heather M; Henshall, John M; Lehnert, Sigrid A; Barendse, William

    2014-01-01

    Adaptation of global food systems to climate change is essential to feed the world. Tropical cattle production, a mainstay of profitability for farmers in the developing world, is dominated by heat, lack of water, poor quality feedstuffs, parasites, and tropical diseases. In these systems European cattle suffer significant stock loss, and the cross-breeding of taurine × indicine cattle is unpredictable due to the dilution of adaptation to heat and tropical diseases. We explored the genetic architecture of ten traits of tropical cattle production using genome-wide association studies of 4,662 animals varying from 0% to 100% indicine. We show that nine of the ten have genetic architectures that include genes of major effect, and in one case, a single location that accounted for more than 71% of the genetic variation. One genetic region in particular had effects on parasite resistance, yearling weight, body condition score, coat colour and penile sheath score. This region, extending 20 Mb on BTA5, appeared to be under genetic selection possibly through maintenance of haplotypes by breeders. We found that the amount of genetic variation and the genetic correlations between traits did not depend upon the degree of indicine content in the animals. Climate change is expected to expand some conditions of the tropics to more temperate environments, which may impact negatively on global livestock health and production. Our results point to several important genes that have large effects on adaptation that could be introduced into more temperate cattle without detrimental effects on productivity. PMID:25419663

  14. L1 adaptive output-feedback control architectures

    NASA Astrophysics Data System (ADS)

    Kharisov, Evgeny

    This research focuses on the development of L1 adaptive output-feedback control. The objective is to extend the L1 adaptive control framework to a wider class of systems, as well as to obtain architectures that afford more straightforward tuning. We start by considering an existing L1 adaptive output-feedback controller for non-strictly-positive-real systems based on a piecewise-constant adaptation law. It is shown that L1 adaptive control architectures achieve decoupling of adaptation from control, which leads to time-delay and gain margins that are bounded away from zero in the presence of arbitrarily fast adaptation. Computed performance bounds provide quantifiable performance guarantees for both the system output and the control signal, in transient and in steady state. A noticeable feature of the L1 adaptive controller is that its output behavior can be made close to that of a linear time-invariant system. In particular, proper design of the lowpass filter can achieve an output response that almost scales for different step reference commands. This property is relevant to applications with a human operator in the loop (for example, control augmentation systems of piloted aircraft), since predictability of the system response is necessary for adequate performance of the operator. Next we present applications of the L1 adaptive output-feedback controller in two different fields of engineering: feedback control of human anesthesia, and ascent control of a NASA crew launch vehicle (CLV). The purpose of the feedback controller for anesthesia is to ensure that the patient's level of sedation during surgery follows a prespecified profile. The L1 controller is enabled by the anesthesiologist after he or she achieves a sufficient patient sedation level by introducing sedatives manually. This problem formulation requires a safe switching mechanism, which avoids controller initialization transients. For this purpose, we used an L1 adaptive controller with a special output-predictor initialization routine.

  15. Study on architecture and implementation of adaptive spatial information service

    NASA Astrophysics Data System (ADS)

    Yu, Zhuoyuan; Wang, Yingjie; Luo, Bin

    2007-06-01

    More and more geo-spatial information is being disseminated on the Internet via WebGIS architectures. Some of these online mapping applications, such as Google Maps, MapQuest, go2map, and mapbar, have become widely used in recent years. However, due to the limitations of web map technology and the transmission speed of large geo-spatial data over the Internet, most of these web map systems employ pyramid-indexed raster map modeling. This method shortens the server's response time but largely reduces the flexibility and visualization quality of the web map, making it difficult to adaptively change map contents or map styles for varying user demands. This paper proposes a new system architecture for adaptive web map service that integrates the latest network and web map technologies, such as SVG, Ajax, and user modeling. Its main advantages are as follows. First, it is user-customizable: users can design the map contents, styles and interfaces online by themselves. Second, it is more intelligent: it records user interactions with the system, analyzes user profiles, and predicts user behavior. Users' interests are inferred and tasks suggested based on the user models generated by the system; for instance, when a new user logs in, the nearest user model is matched and interactive suggestions are provided by the system. This is a more powerful and efficient way of sharing spatial information. The paper first discusses the main system architecture of the adaptive spatial information service, which consists of three parts: a user layer, a map application layer and a database layer. The user layer is distributed on the client side and includes the web map (SVG) browser, map renderer and map visualization component. The application layer includes the map application server, user interface generation, user analysis and user modeling. Based on user models, map content, style and user

  16. Adaptive resource allocation architecture applied to line tracking

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Pace, Donald W.

    2000-04-01

    Recent research has demonstrated the benefits of a multiple-hypothesis, multiple-model sonar line tracking solution, achieved at significant computational cost. We have developed an adaptive architecture that trades computational resources for algorithm complexity based on environmental conditions. A fuzzy-logic rule-based approach is applied to adaptively assign algorithmic resources to meet system requirements. The resources allocated by the fuzzy logic algorithm include (1) the number of hypotheses permitted (yielding multi-hypothesis and single-hypothesis modes), (2) the number of signal models to use (yielding an interacting multiple model capability), (3) a new-track likelihood for hypothesis generation, (4) track attribute evaluator activation (for signal-to-noise ratio, frequency bandwidth, and others), and (5) adaptive cluster threshold control. Algorithm allocation is driven by a comparison of current throughput rates to a desired real-time rate. The Fuzzy Logic Controlled (FLC) line tracker, a single-hypothesis line tracker, and a multiple-hypothesis line tracker are compared on real sonar data. System resource usage results demonstrate the utility of the FLC line tracker.

  17. An Evolutionarily Adaptive Neural Architecture for Social Reasoning

    PubMed Central

    Barbey, Aron K.; Krueger, Frank; Grafman, Jordan

    2009-01-01

    Recent progress in cognitive neuroscience highlights the involvement of the prefrontal cortex (PFC) in social cognition. Accumulating evidence demonstrates that representations within the lateral PFC enable people to coordinate their thoughts and actions with their intentions to support goal-directed social behavior. Despite the importance of this region in guiding social interactions, remarkably little is known about the functional organization and forms of social inference processed by the lateral PFC. Here we introduce a cognitive neuroscience framework for understanding the inferential architecture of the lateral PFC, drawing upon recent theoretical developments in evolutionary psychology and emerging neuroscience evidence about how this region may orchestrate behavior on the basis of evolutionarily adaptive social norms for obligatory, prohibited, and permissible courses of action. PMID:19782410

  18. Context adaptive binary arithmetic decoding on transport triggered architectures

    NASA Astrophysics Data System (ADS)

    Rouvinen, Joona; Jääskeläinen, Pekka; Rintaluoma, Tero; Silvén, Olli; Takala, Jarmo

    2008-02-01

    Video coding standards such as MPEG-4, H.264, and VC-1 define hybrid transform-based, block-motion-compensated techniques that employ almost the same coding tools. This observation has been a foundation for defining the MPEG Reconfigurable Multimedia Coding framework, which aims to facilitate multi-format codec design. The idea is to send a description of the codec with the bit stream and to reconfigure the coding tools accordingly on the fly. This kind of approach favors software solutions, and it poses a substantial challenge for implementers of mobile multimedia devices that aim at high energy efficiency. In particular, as high-definition formats come to be required of mobile multimedia devices, variable-length decoders are becoming a serious bottleneck. Even at current moderate mobile video bitrates, software-based variable-length decoders swallow a major portion of the resources of a mobile processor. In this paper we present a Transport Triggered Architecture (TTA) based programmable implementation of Context Adaptive Binary Arithmetic Coding (CABAC) decoding, which is used, e.g., in the main profile of H.264 and in JPEG2000. The solution can be used for other variable-length codes as well.

  19. A reconfigurable ASIP for high-throughput and flexible FFT processing in SDR environment

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao

    2014-04-01

    This paper presents a high-throughput, reconfigurable processor for fast Fourier transform (FFT) processing based on an SDR methodology. It adopts an application-specific instruction-set processor (ASIP) with a single-instruction, multiple-data (SIMD) architecture to exploit the parallelism of butterfly operations in the FFT algorithm. Moreover, a novel 3-dimensional multi-bank memory is proposed for parallel conflict-free accesses. The overall throughput and power efficiency are greatly enhanced by parallel and streamlined processing. A test chip supporting 64- to 2048-point FFTs was set up for experiments. Logic synthesis reveals a maximum clock frequency of 500 MHz and an area of 0.49 mm² for the processor's logic using a low-power 45-nm technology, and the dynamic power estimate is about 96.6 mW. Compared with previous works, our FFT ASIP achieves higher energy efficiency at relatively low area cost.
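The paper does not spell out its 3-dimensional multi-bank scheme here, but the kind of conflict-free parallel access it targets can be illustrated with a classic skewed bank-assignment trick (a sketch under that assumption, not the paper's actual mapping):

```python
def bank(i, n_banks):
    """Skewed bank assignment: both unit-stride and stride-n_banks bursts
    of n_banks addresses land in n_banks distinct banks (no conflicts)."""
    return (i % n_banks + i // n_banks) % n_banks

B = 8
# Unit-stride burst of B words -> B distinct banks.
assert len({bank(i, B) for i in range(B)}) == B
# Stride-B burst of B words (e.g. a later FFT stage) -> also B distinct banks.
assert len({bank(i * B, B) for i in range(B)}) == B
```

FFT stages alternate between unit-stride and power-of-two-stride access patterns, which is why a naive linear bank mapping conflicts on the strided stages and a skewed mapping does not.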

  20. FFT and cone-beam CT reconstruction on graphics hardware

    NASA Astrophysics Data System (ADS)

    Després, Philippe; Sun, Mingshan; Hasegawa, Bruce H.; Prevrhal, Sven

    2007-03-01

    Graphics processing units (GPUs) are increasingly used for general-purpose calculations. Their pipelined architecture can be exploited to accelerate various parallelizable algorithms. Medical imaging applications are inherently well suited to benefit from the development of GPU-based computational platforms. In this work we evaluate the potential of GPUs to improve the execution speed of two common medical imaging tasks, namely Fourier transforms and tomographic reconstructions. A two-dimensional fast Fourier transform (FFT) algorithm was implemented on the GPU and compared, in terms of execution speed, to two popular CPU-based FFT routines. Similarly, the Feldkamp, Davis and Kress (FDK) algorithm for cone-beam tomographic reconstruction was implemented on the GPU and its performance compared to a CPU version. Different reconstruction strategies were employed to assess the performance of various GPU memory layouts. For the specific hardware used, GPU implementations of the FFT were up to 20 times faster than their CPU counterparts, but slower than highly optimized CPU versions of the algorithm. Tomographic reconstructions were faster on the GPU by a factor of up to 30, allowing 256³-voxel reconstructions from 256 projections in about 20 seconds. Overall, GPUs are an attractive alternative to other imaging-dedicated computing hardware such as application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) in terms of cost, simplicity and versatility. With the development of simpler language extensions and programming interfaces, GPUs are likely to become essential tools in medical imaging.
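The 2-D FFT is separable, which is what makes it map well to GPU-style parallelism: independent 1-D transforms over all rows, then over all columns. A minimal pure-Python sketch of that decomposition (using a naive O(n²) DFT for clarity, not a GPU kernel):

```python
import cmath

def dft(x):
    """Direct 1-D DFT, O(n^2); stands in for each parallel 1-D transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fft2d(img):
    """2-D transform as row transforms followed by column transforms."""
    rows = [dft(r) for r in img]            # every row independently
    cols = list(zip(*rows))                 # transpose
    out = [dft(list(c)) for c in cols]      # every column independently
    return [list(r) for r in zip(*out)]     # transpose back

# A constant image concentrates all energy in the DC bin.
F = fft2d([[1.0] * 4 for _ in range(4)])
```

On a GPU each row (and then each column) transform runs in its own thread block, and the transpose in between is exactly where the memory-layout choices evaluated in the paper matter.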

  1. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C-based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them with the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool for teaching parallel programming. In this paper, we focus on some fundamental features of aCe C.

  2. Efficient Two-Dimensional-FFT Program

    NASA Technical Reports Server (NTRS)

    Miko, J.

    1992-01-01

    Program computes 64 x 64-point fast Fourier transform in less than 17 microseconds. The Optimized 64 x 64 Point Two-Dimensional Fast Fourier Transform combines the performance of real- and complex-valued one-dimensional fast Fourier transforms (FFTs) to compute the two-dimensional FFT and the coefficients of the power spectrum. Such coefficients are used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. Source code is written in C, 8086 assembly, and Texas Instruments TMS320C30 assembly languages.
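One standard way to combine real- and complex-valued 1-D transforms, in the spirit this program describes, is to pack two real sequences into a single complex transform and separate the spectra afterwards via conjugate symmetry; this sketch is illustrative, not the program's actual code:

```python
import cmath

def dft(x):
    """Direct 1-D DFT, O(n^2), used here only for verification."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def two_real_dfts(a, b):
    """Transform two real sequences with ONE complex DFT.
    Separation uses A[k] = (Z[k] + conj(Z[N-k]))/2 and
                    B[k] = (Z[k] - conj(Z[N-k]))/(2j)."""
    n = len(a)
    z = dft([complex(ar, br) for ar, br in zip(a, b)])  # pack a + j*b
    A = [(z[k] + z[-k % n].conjugate()) / 2 for k in range(n)]
    B = [(z[k] - z[-k % n].conjugate()) / 2j for k in range(n)]
    return A, B

# Two real rows of an image transformed at the cost of one complex FFT.
a, b = [1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]
A, B = two_real_dfts(a, b)
```

Applied row-pairwise across a 64 x 64 real image, this roughly halves the 1-D transform work of the row pass, which is the kind of saving that makes a 17-microsecond 2-D FFT plausible on fixed hardware.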

  3. Rice Root Architectural Plasticity Traits and Genetic Regions for Adaptability to Variable Cultivation and Stress Conditions.

    PubMed

    Sandhu, Nitika; Raman, K Anitha; Torres, Rolando O; Audebert, Alain; Dardou, Audrey; Kumar, Arvind; Henry, Amelia

    2016-08-01

    Future rice (Oryza sativa) crops will likely experience a range of growth conditions, and root architectural plasticity will be an important characteristic to confer adaptability across variable environments. In this study, the relationship between root architectural plasticity and adaptability (i.e. yield stability) was evaluated in two traditional × improved rice populations (Aus 276 × MTU1010 and Kali Aus × MTU1010). Forty contrasting genotypes were grown in direct-seeded upland and transplanted lowland conditions with drought and drought + rewatered stress treatments in lysimeter and field studies and a low-phosphorus stress treatment in a Rhizoscope study. Relationships among root architectural plasticity for root dry weight, root length density, and percentage lateral roots with yield stability were identified. Selected genotypes that showed high yield stability also showed a high degree of root plasticity in response to both drought and low phosphorus. The two populations varied in the soil depth effect on root architectural plasticity traits, none of which resulted in reduced grain yield. Root architectural plasticity traits were related to 13 (Aus 276 population) and 21 (Kali Aus population) genetic loci, which were contributed by both the traditional donor parents and MTU1010. Three genomic loci were identified as hot spots with multiple root architectural plasticity traits in both populations, and one locus for both root architectural plasticity and grain yield was detected. These results suggest an important role of root architectural plasticity across future rice crop conditions and provide a starting point for marker-assisted selection for plasticity. PMID:27342311

  4. Architecture and performance of astronomical adaptive optics systems

    NASA Technical Reports Server (NTRS)

    Bloemhof, E.

    2002-01-01

    In recent years the technological advances of adaptive optics have enabled a great deal of innovative science. In this lecture I review the system-level design of modern astronomical AO instruments, and discuss their current capabilities.

  5. Trajectory Optimization with Adaptive Deployable Entry and Placement Technology Architecture

    NASA Astrophysics Data System (ADS)

    Saranathan, H.; Saikia, S.; Grant, M. J.; Longuski, J. M.

    2014-06-01

    This paper compares the results of trajectory optimization for Adaptive Deployable Entry and Placement Technology (ADEPT) using different control methods. ADEPT addresses the limitations of current EDL technology in delivering heavy payloads to Mars.

  6. A Software Architecture for Adaptive Modular Sensing Systems

    PubMed Central

    Lyle, Andrew C.; Naish, Michael D.

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614

  7. A software architecture for adaptive modular sensing systems.

    PubMed

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614

  8. An integrated architecture of adaptive neural network control for dynamic systems

    SciTech Connect

    Ke, Liu; Tokar, R.; Mcvey, B.

    1994-07-01

    In this study, an integrated neural network control architecture for nonlinear dynamic systems is presented. Much of the recent work in the neural network control field uses no error feedback as a control input, which raises an adaptation problem. The integrated architecture in this paper combines feedforward control and error-feedback adaptive control using neural networks. The paper reveals the different internal functionality of these two kinds of neural network controllers for certain input styles, e.g., state feedback and error feedback. Feedforward neural network controllers with state feedback establish fixed control mappings that cannot adapt when model uncertainties are present. With error feedback, neural network controllers learn the slopes or gains with respect to the error feedback, yielding error-driven adaptive control systems. The results demonstrate that the two kinds of control schemes can be combined to realize their individual advantages. Testing with disturbances added to the plant shows good tracking and adaptation.
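The combination of feedforward control and error-driven adaptation can be illustrated with a toy scalar analogue (an assumed illustration with a hand-written adaptation law, not the paper's neural network controllers): a fixed feedforward term computed from a nominal plant model drives the setpoint, while an adaptive gain on the tracking error absorbs the mismatch between the nominal and the true plant.

```python
import numpy as np

# Nominal plant model: dy/dt = -y + u, so the steady-state feedforward is u_ff = y_ref.
# True plant: dy/dt = -1.5*y + u (model mismatch the feedback must absorb).
dt, T = 0.01, 10.0
steps = int(T / dt)
y, k = 0.0, 0.0            # plant state and adaptive feedback gain
eta = 2.0                  # adaptation rate
y_ref = 1.0                # constant setpoint
errs = []
for _ in range(steps):
    e = y_ref - y
    u_ff = y_ref           # feedforward from the nominal model's steady-state inverse
    u = u_ff + k * e       # feedforward plus adaptive error feedback
    k += eta * e * e * dt  # error-driven gain adaptation: grows while error persists
    y += (-1.5 * y + u) * dt   # integrate the *true* plant (forward Euler)
    errs.append(abs(e))

# With k = 0 the steady error would be 0.5/1.5 = 0.33; adaptation shrinks it.
assert k > 0.5
assert errs[-1] < 0.3
```

The feedforward term alone leaves a persistent offset because the true plant differs from the nominal model; the adapted gain reduces that offset, mirroring the paper's point that the two schemes complement each other.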

  9. Adaptive kinetic-fluid solvers for heterogeneous computing architectures

    NASA Astrophysics Data System (ADS)

    Zabelok, Sergey; Arslanbekov, Robert; Kolobov, Vladimir

    2015-12-01

    We show the feasibility and benefits of porting an adaptive multi-scale kinetic-fluid code to CPU-GPU systems. Challenges are due to the irregular data access for the adaptive Cartesian mesh, the vast difference in computational cost between kinetic and fluid cells, and the desire to evenly load all CPUs and GPUs during grid adaptation and algorithm refinement. Our Unified Flow Solver (UFS) combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. Using GPUs enables hybrid simulations of mixed rarefied-continuum flows with a million Boltzmann cells, each having a 24 × 24 × 24 velocity mesh. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using the discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) solver, and a mesoscopic solver based on the Lattice Boltzmann Method (LBM), all using the adaptive Cartesian mesh. Double-digit speedups on a single GPU and good scaling for multiple GPUs have been demonstrated.

  10. The genetic architecture of climatic adaptation in tropical cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adaptation of global food systems to climate change is essential to feed the world in the future. Tropical cattle production, an important mainstay of profitability for farmers in the developing world, is dominated by conditions of heat, lack of water, poor quality feedstuffs, parasites, and tropica...

  11. CZT vs FFT: Flexibility vs Speed

    SciTech Connect

    S. Sirin

    2003-10-01

    Bluestein's Fast Fourier Transform (FFT), commonly called the Chirp-Z Transform (CZT), is a little-known algorithm that offers engineers a high-resolution FFT combined with the ability to specify bandwidth. In the field of digital signal processing, engineers are always challenged to detect tones, frequencies, signatures, or some telltale sign that signifies a condition that must be indicated, ignored, or controlled. One of these challenges is to detect specific frequencies, for instance when looking for tones from telephones or detecting 60-Hz noise on power lines. The Goertzel algorithm described in Embedded Systems Programming, September 2002, offered a powerful tool toward finding specific frequencies faster than the FFT. Another challenge involves analyzing a range of frequencies, such as recording frequency response measurements, matching voice patterns, or displaying spectrum information on the face of an amateur radio. To meet this challenge most engineers use the well-known FFT. The CZT gives the engineer the flexibility to specify bandwidth and outputs real and imaginary frequency components from which the magnitude and phase can be computed. A description of the CZT and a discussion of the advantages and disadvantages of the CZT versus the FFT and Goertzel algorithms will be followed by situations in which the CZT would shine. The reader will find that the CZT is very useful, but that flexibility has a price.
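Bluestein's algorithm, which underlies the CZT, can be sketched as follows (a NumPy illustration of the standard formulation, not the article's code). The caller specifies the number of output points `m`, the contour start `a`, and the point-to-point ratio `w`, which is what gives the CZT its bandwidth flexibility; the identity nk = (n² + k² − (k−n)²)/2 turns the transform into a linear convolution with a chirp, computable with ordinary power-of-two FFTs.

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp-Z Transform via Bluestein's algorithm.

    Evaluates X[k] = sum_n x[n] * a**(-n) * w**(n*k) for k = 0..m-1.
    """
    n = len(x)
    k = np.arange(max(m, n))
    chirp = w ** (k ** 2 / 2.0)                     # chirp sequence w**(k^2/2)
    L = 1 << int(np.ceil(np.log2(n + m - 1)))       # FFT size for linear convolution
    # Premultiply the input by a**(-n) and the chirp
    xp = np.zeros(L, dtype=complex)
    xp[:n] = x * a ** (-np.arange(n)) * chirp[:n]
    # Convolution kernel 1/chirp over the index range -(n-1) .. m-1
    v = np.zeros(L, dtype=complex)
    v[:m] = 1.0 / chirp[:m]
    v[L - n + 1:] = 1.0 / chirp[n - 1:0:-1]
    # Linear convolution via FFT, then postmultiply by the chirp
    y = np.fft.ifft(np.fft.fft(xp) * np.fft.fft(v))
    return y[:m] * chirp[:m]

# With m = n, a = 1, w = exp(-2j*pi/n), the CZT reduces to the ordinary DFT
x = np.random.default_rng(1).standard_normal(32)
n = len(x)
X = czt(x, n, np.exp(-2j * np.pi / n), 1.0)
assert np.allclose(X, np.fft.fft(x))
```

Choosing `w = exp(-2j*pi*bw/(fs*m))` and `a = exp(2j*pi*f0/fs)` instead zooms the `m` output bins into an arbitrary band starting at `f0`, which is the bandwidth control the article highlights.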

  12. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.

  13. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
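The top-down/bottom-up combination can be sketched as follows (a simplified Python illustration with a fixed switching heuristic `alpha`; the paper's contribution is *predicting* the optimal switching point by regression rather than hard-coding a threshold like this):

```python
from collections import deque  # not needed for sets; kept for a classic-BFS variant

def hybrid_bfs(adj, source, alpha=4.0):
    """BFS that switches between top-down and bottom-up frontier expansion.

    adj: dict mapping each vertex to a list of neighbours (undirected graph).
    Switches to bottom-up when the frontier's outgoing edges outnumber the
    edges from unvisited vertices by the heuristic factor alpha.
    """
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        frontier_edges = sum(len(adj[v]) for v in frontier)
        unvisited = [v for v in adj if v not in level]
        unvisited_edges = sum(len(adj[v]) for v in unvisited)
        next_frontier = set()
        if frontier_edges * alpha > unvisited_edges:
            # Bottom-up: each unvisited vertex scans for a parent in the frontier
            for v in unvisited:
                if any(u in frontier for u in adj[v]):
                    next_frontier.add(v)
        else:
            # Top-down: frontier vertices push to their unvisited neighbours
            for u in frontier:
                for v in adj[u]:
                    if v not in level:
                        next_frontier.add(v)
        depth += 1
        for v in next_frontier:
            level[v] = depth
        frontier = next_frontier
    return level

# Small example: a path 0-1-2-3 plus a branch 1-4
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
assert hybrid_bfs(adj, 0) == {0: 0, 1: 1, 2: 2, 4: 2, 3: 3}
```

Top-down work scales with the frontier's edges, bottom-up with the unvisited set's edges, so the wrong choice at each level wastes work; that is why locating the switching point cheaply matters.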

  14. The Genetic Architecture of Adaptations to High Altitude in Ethiopia

    PubMed Central

    Alkorta-Aranburu, Gorka; Beall, Cynthia M.; Witonsky, David B.; Gebremedhin, Amha; Pritchard, Jonathan K.; Di Rienzo, Anna

    2012-01-01

    Although hypoxia is a major stress on physiological processes, several human populations have survived for millennia at high altitudes, suggesting that they have adapted to hypoxic conditions. This hypothesis was recently corroborated by studies of Tibetan highlanders, which showed that polymorphisms in candidate genes show signatures of natural selection as well as well-replicated association signals for variation in hemoglobin levels. We extended genomic analysis to two Ethiopian ethnic groups: Amhara and Oromo. For each ethnic group, we sampled low and high altitude residents, thus allowing genetic and phenotypic comparisons across altitudes and across ethnic groups. Genome-wide SNP genotype data were collected in these samples by using Illumina arrays. We find that variants associated with hemoglobin variation among Tibetans or other variants at the same loci do not influence the trait in Ethiopians. However, in the Amhara, SNP rs10803083 is associated with hemoglobin levels at genome-wide levels of significance. No significant genotype association was observed for oxygen saturation levels in either ethnic group. Approaches based on allele frequency divergence did not detect outliers in candidate hypoxia genes, but the most differentiated variants between high- and lowlanders have a clear role in pathogen defense. Interestingly, a significant excess of allele frequency divergence was consistently detected for genes involved in cell cycle control and DNA damage and repair, thus pointing to new pathways for high altitude adaptations. Finally, a comparison of CpG methylation levels between high- and lowlanders found several significant signals at individual genes in the Oromo. PMID:23236293

  15. Adaptive changes in the kinetochore architecture facilitate proper spindle assembly

    PubMed Central

    Magidson, Valentin; Paul, Raja; Yang, Nachen; Ault, Jeffrey G.; O’Connell, Christopher B.; Tikhonenko, Irina; McEwen, Bruce F.; Mogilner, Alex; Khodjakov, Alexey

    2015-01-01

    Mitotic spindle formation relies on the stochastic capture of microtubules at kinetochores. Kinetochore architecture affects the efficiency and fidelity of this process with large kinetochores expected to accelerate assembly at the expense of accuracy, and smaller kinetochores to suppress errors at the expense of efficiency. We demonstrate that upon mitotic entry, kinetochores in cultured human cells form large crescents that subsequently compact into discrete structures on opposite sides of the centromere. This compaction occurs only after the formation of end-on microtubule attachments. Live-cell microscopy reveals that centromere rotation mediated by lateral kinetochore-microtubule interactions precedes formation of end-on attachments and kinetochore compaction. Computational analyses of kinetochore expansion-compaction in the context of lateral interactions correctly predict experimentally-observed spindle assembly times with reasonable error rates. The computational model suggests that larger kinetochores reduce both errors and assembly times, which can explain the robustness of spindle assembly and the functional significance of enlarged kinetochores. PMID:26258631

  16. Dimensions of Usability: Cougaar, Aglets and Adaptive Agent Architecture (AAA)

    SciTech Connect

    Haack, Jereme N.; Cowell, Andrew J.; Gorton, Ian

    2004-06-20

    Research and development organizations are constantly evaluating new technologies in order to implement the next generation of advanced applications. At Pacific Northwest National Laboratory, agent technologies are perceived as an approach that can provide a competitive advantage in the construction of highly sophisticated software systems in a range of application areas. An important factor in selecting a successful agent architecture is the level of support it provides the developer with respect to developer support, examples of use, integration into the current workflow, and community support. Without such assistance, the developer must invest more effort into learning instead of applying the technology. Like many other applied research organizations, our staff are not dedicated to a single project and must acquire new skills as required, underlining the importance of being able to quickly become proficient. A project was instigated to evaluate three candidate agent toolkits across the dimensions of support they provide. This paper reports on the outcomes of this evaluation and provides insights into the agent technologies evaluated.

  17. Adaptation of pancreatic islet cyto-architecture during development

    NASA Astrophysics Data System (ADS)

    Striegel, Deborah A.; Hara, Manami; Periwal, Vipul

    2016-04-01

    Plasma glucose in mammals is regulated by hormones secreted by the islets of Langerhans embedded in the exocrine pancreas. Islets consist of endocrine cells, primarily α, β, and δ cells, which secrete glucagon, insulin, and somatostatin, respectively. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Varying demands and available nutrients during development produce changes in the local connectivity of β cells in an islet. We showed in earlier work that graph theory provides a framework for the quantification of the seemingly stochastic cyto-architecture of β cells in an islet. To quantify the dynamics of endocrine connectivity during development requires a framework for characterizing changes in the probability distribution on the space of possible graphs, essentially a Fokker-Planck formalism on graphs. With large-scale imaging data for hundreds of thousands of islets containing millions of cells from human specimens, we show that this dynamics can be determined quantitatively. Requiring that rearrangement and cell addition processes match the observed dynamic developmental changes in quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that there is a transient shift in preferred connectivity for β cells between 1–35 weeks and 12–24 months.

  18. Adaptation of pancreatic islet cyto-architecture during development.

    PubMed

    Striegel, Deborah A; Hara, Manami; Periwal, Vipul

    2016-01-01

    Plasma glucose in mammals is regulated by hormones secreted by the islets of Langerhans embedded in the exocrine pancreas. Islets consist of endocrine cells, primarily α, β, and δ cells, which secrete glucagon, insulin, and somatostatin, respectively. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Varying demands and available nutrients during development produce changes in the local connectivity of β cells in an islet. We showed in earlier work that graph theory provides a framework for the quantification of the seemingly stochastic cyto-architecture of β cells in an islet. To quantify the dynamics of endocrine connectivity during development requires a framework for characterizing changes in the probability distribution on the space of possible graphs, essentially a Fokker-Planck formalism on graphs. With large-scale imaging data for hundreds of thousands of islets containing millions of cells from human specimens, we show that this dynamics can be determined quantitatively. Requiring that rearrangement and cell addition processes match the observed dynamic developmental changes in quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that there is a transient shift in preferred connectivity for β cells between 1-35 weeks and 12-24 months. PMID:27063927
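The graph-theoretic quantification the authors describe can be illustrated with a toy model (assumed for illustration, far simpler than the paper's islet imaging data): occupied sites on a grid stand in for β cells, 4-neighbour adjacency defines edges, and the node-degree distribution is one of the topological characteristics that summarizes the cyto-architecture.

```python
import numpy as np

rng = np.random.default_rng(4)
side = 10
is_beta = rng.random((side, side)) < 0.5   # random beta-cell occupancy on a grid

def degree_distribution(occ):
    """Count 4-neighbour connections for each occupied site; return counts of degree 0..4."""
    degrees = []
    for i in range(occ.shape[0]):
        for j in range(occ.shape[1]):
            if not occ[i, j]:
                continue
            d = 0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < occ.shape[0] and 0 <= nj < occ.shape[1] and occ[ni, nj]:
                    d += 1
            degrees.append(d)
    return np.bincount(degrees, minlength=5)

hist = degree_distribution(is_beta)
assert hist.sum() == is_beta.sum()  # every beta cell counted exactly once
```

Tracking how such a distribution shifts across developmental time points is, in miniature, the kind of observable the paper's Fokker-Planck-on-graphs formalism is fit to.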

  19. An optimized, universal hardware-based adaptive correlation receiver architecture

    NASA Astrophysics Data System (ADS)

    Zhu, Zaidi; Suarez, Hernan; Zhang, Yan; Wang, Shang

    2014-05-01

    The traditional radar RF transceivers, similar to communication transceivers, have basic elements such as baseband waveform processing, IF/RF up-down conversion, transmitter power circuits, receiver front-ends, and antennas, which are shown in the upper half of Figure 1. For modern radars with diversified and sophisticated waveforms, we can frequently observe that the transceiver behaviors, especially nonlinear behaviors, depend on the waveform amplitudes, frequency contents, and instantaneous phases. Usually, it is a troublesome process to tune an RF transceiver to optimum when different waveforms are used. Another issue arises from the interference caused by the waveforms; for example, the range side-lobe (RSL) of a waveform, once the signals pass through the entire transceiver chain, may be further increased due to distortions. This study is inspired by two existing solutions from the commercial communication industry, digital pre-distortion (DPD) and adaptive channel estimation and interference mitigation (AIM), while combining these technologies into a single chip or board that can be inserted into the existing transceiver system. This device is named the RF Transceiver Optimizer (RTO). The lower half of Figure 1 shows the basic elements of the RTO. With the RTO, the digital baseband processing does not need to take into account the transceiver performance with diversified waveforms, such as the transmitter efficiency and chain distortion (and the intermodulation products caused by distortions). Neither does it need to be concerned with the pulse compression (or correlation receiver) process and the related mitigation. The focus is simply the information about the ground truth carried by the main peak of the correlation receiver outputs. The RTO can be considered an extension of the existing calibration process, with the benefits of being automatic, adaptive, and universal. Currently, the main techniques to implement the RTO are the digital pre- or -post

  20. A hybrid behavioural rule of adaptation and drift explains the emergent architecture of antagonistic networks

    PubMed Central

    Nuwagaba, S.; Zhang, F.; Hui, C.

    2015-01-01

    Ecological processes that can realistically account for network architectures are central to our understanding of how species assemble and function in ecosystems. Consumer species are constantly selecting and adjusting which resource species are to be exploited in an antagonistic network. Here we incorporate a hybrid behavioural rule of adaptive interaction switching and random drift into a bipartite network model. Predictions are insensitive to the model parameters and the initial network structures, and agree extremely well with the observed levels of modularity, nestedness and node-degree distributions for 61 real networks. Evolutionary and community assemblage histories only indirectly affect network structure by defining the size and complexity of ecological networks, whereas adaptive interaction switching and random drift carve out the details of network architecture at the faster ecological time scale. The hybrid behavioural rule of both adaptation and drift could well be the key processes for structure emergence in real ecological networks. PMID:25925104

  1. A hybrid behavioural rule of adaptation and drift explains the emergent architecture of antagonistic networks.

    PubMed

    Nuwagaba, S; Zhang, F; Hui, C

    2015-05-22

    Ecological processes that can realistically account for network architectures are central to our understanding of how species assemble and function in ecosystems. Consumer species are constantly selecting and adjusting which resource species are to be exploited in an antagonistic network. Here we incorporate a hybrid behavioural rule of adaptive interaction switching and random drift into a bipartite network model. Predictions are insensitive to the model parameters and the initial network structures, and agree extremely well with the observed levels of modularity, nestedness and node-degree distributions for 61 real networks. Evolutionary and community assemblage histories only indirectly affect network structure by defining the size and complexity of ecological networks, whereas adaptive interaction switching and random drift carve out the details of network architecture at the faster ecological time scale. The hybrid behavioural rule of both adaptation and drift could well be the key processes for structure emergence in real ecological networks. PMID:25925104

  2. A generic architecture for an adaptive, interoperable and intelligent type 2 diabetes mellitus care system.

    PubMed

    Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Schulz, Stefan

    2015-01-01

    Chronic diseases such as Type 2 Diabetes Mellitus (T2DM) constitute a big burden to the global health economy. T2DM Care Management requires a multi-disciplinary and multi-organizational approach. Because of different languages and terminologies, education, experiences, skills, etc., such an approach establishes a special interoperability challenge. The solution is a flexible, scalable, business-controlled, adaptive, knowledge-based, intelligent system following a systems-oriented, architecture-centric, ontology-based and policy-driven approach. The architecture of real systems is described using the basics and principles of the Generic Component Model (GCM). For representing the functional aspects of a system, the Business Process Modeling Notation (BPMN) is used. The system architecture obtained is presented using a GCM graphical notation, class diagrams and BPMN diagrams. The architecture-centric approach considers the compositional nature of the real-world system and its functionalities, guarantees coherence, and provides right inferences. The level of generality provided in this paper facilitates use case specific adaptations of the system. In that way, intelligent, adaptive and interoperable T2DM care systems can be derived from the presented model, as demonstrated in another publication. PMID:25980858

  3. VLSI Design of a Variable-Length FFT/IFFT Processor for OFDM-Based Communication Systems

    NASA Astrophysics Data System (ADS)

    Kuo, Jen-Chih; Wen, Ching-Hua; Lin, Chih-Hsiu; Wu, An-Yeu (Andy)

    2003-12-01

    The technique of orthogonal frequency division multiplexing (OFDM) is famous for its robustness against frequency-selective fading channels. This technique has been widely used in many wired and wireless communication systems. In general, the fast Fourier transform (FFT) and inverse FFT (IFFT) operations are used as the modulation/demodulation kernel in OFDM systems, and the sizes of the FFT/IFFT operations vary in different applications of OFDM systems. In this paper, we design and implement a variable-length prototype FFT/IFFT processor to cover different specifications of OFDM applications. The cached-memory FFT architecture is our suggested VLSI system architecture for the prototype FFT/IFFT processor, in consideration of low power consumption. We also implement the twiddle-factor butterfly processing element (PE) based on the coordinate rotation digital computer (CORDIC) algorithm, which avoids the use of a conventional multiplication-and-accumulation unit and instead evaluates the trigonometric functions using only add-and-shift operations. Finally, we implement a variable-length prototype FFT/IFFT processor in TSMC […] 1P4M CMOS technology. The simulation results show that the chip can perform ([…]–[…])-point FFT/IFFT operations up to a […] operating frequency, which can meet the speed requirements of most OFDM standards such as WLAN, ADSL, VDSL ([…]), DAB, and […]-mode DVB.
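The CORDIC principle used in the butterfly PE can be sketched in software (an illustrative Python model, not the chip's fixed-point datapath): a rotation by an arbitrary angle is decomposed into micro-rotations by atan(2⁻ⁱ), each needing only shifts and adds, with the constant aggregate gain K compensated once at the end.

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    """Rotate (x, y) by `angle` radians using only add/shift-style steps,
    as in a CORDIC twiddle-factor butterfly that needs no multiplier.
    Converges for |angle| <= sum(atan(2**-i)) ~ 1.743 rad."""
    # Precomputed elementary angles atan(2**-i) and the aggregate gain K
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0   # steer the residual angle z toward zero
        # In hardware, multiplying by 2**-i is a pure bit shift
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K               # undo the accumulated magnitude gain

# Rotating (1, 0) by pi/5 yields (cos(pi/5), sin(pi/5))
c, s = cordic_rotate(1.0, 0.0, math.pi / 5)
assert abs(c - math.cos(math.pi / 5)) < 1e-6
assert abs(s - math.sin(math.pi / 5)) < 1e-6
```

Applying this rotation to the complex sample pair inside a butterfly replaces the twiddle-factor multiply, which is the trade the paper's PE makes: more latency per butterfly, no multiplier in silicon.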

  4. Conservatism and novelty in the genetic architecture of adaptation in Heliconius butterflies

    PubMed Central

    Huber, B; Whibley, A; Poul, Y L; Navarro, N; Martin, A; Baxter, S; Shah, A; Gilles, B; Wirth, T; McMillan, W O; Joron, M

    2015-01-01

    Understanding the genetic architecture of adaptive traits has been at the centre of modern evolutionary biology since Fisher; however, evaluating how the genetic architecture of ecologically important traits influences their diversification has been hampered by the scarcity of empirical data. Now, high-throughput genomics facilitates the detailed exploration of variation in the genome-to-phenotype map among closely related taxa. Here, we investigate the evolution of wing pattern diversity in Heliconius, a clade of neotropical butterflies that have undergone an adaptive radiation for wing-pattern mimicry and are influenced by distinct selection regimes. Using crosses between natural wing-pattern variants, we used genome-wide restriction site-associated DNA (RAD) genotyping, traditional linkage mapping and multivariate image analysis to study the evolution of the architecture of adaptive variation in two closely related species: Heliconius hecale and H. ismenius. We implemented a new morphometric procedure for the analysis of whole-wing pattern variation, which allows visualising spatial heatmaps of genotype-to-phenotype association for each quantitative trait locus separately. We used the H. melpomene reference genome to fine-map variation for each major wing-patterning region uncovered, evaluated the role of candidate genes and compared genetic architectures across the genus. Our results show that, although the loci responding to mimicry selection are highly conserved between species, their effect size and phenotypic action vary throughout the clade. Multilocus architecture is ancestral and maintained across species under directional selection, whereas the single-locus (supergene) inheritance controlling polymorphism in H. numata appears to have evolved only once. Nevertheless, the conservatism in the wing-patterning toolkit found throughout the genus does not appear to constrain phenotypic evolution towards local adaptive optima. PMID:25806542

  5. Conservatism and novelty in the genetic architecture of adaptation in Heliconius butterflies.

    PubMed

    Huber, B; Whibley, A; Poul, Y L; Navarro, N; Martin, A; Baxter, S; Shah, A; Gilles, B; Wirth, T; McMillan, W O; Joron, M

    2015-05-01

    Understanding the genetic architecture of adaptive traits has been at the centre of modern evolutionary biology since Fisher; however, evaluating how the genetic architecture of ecologically important traits influences their diversification has been hampered by the scarcity of empirical data. Now, high-throughput genomics facilitates the detailed exploration of variation in the genome-to-phenotype map among closely related taxa. Here, we investigate the evolution of wing pattern diversity in Heliconius, a clade of neotropical butterflies that have undergone an adaptive radiation for wing-pattern mimicry and are influenced by distinct selection regimes. Using crosses between natural wing-pattern variants, we used genome-wide restriction site-associated DNA (RAD) genotyping, traditional linkage mapping and multivariate image analysis to study the evolution of the architecture of adaptive variation in two closely related species: Heliconius hecale and H. ismenius. We implemented a new morphometric procedure for the analysis of whole-wing pattern variation, which allows visualising spatial heatmaps of genotype-to-phenotype association for each quantitative trait locus separately. We used the H. melpomene reference genome to fine-map variation for each major wing-patterning region uncovered, evaluated the role of candidate genes and compared genetic architectures across the genus. Our results show that, although the loci responding to mimicry selection are highly conserved between species, their effect size and phenotypic action vary throughout the clade. Multilocus architecture is ancestral and maintained across species under directional selection, whereas the single-locus (supergene) inheritance controlling polymorphism in H. numata appears to have evolved only once. Nevertheless, the conservatism in the wing-patterning toolkit found throughout the genus does not appear to constrain phenotypic evolution towards local adaptive optima. PMID:25806542

  6. A CORDIC based FFT processor for MIMO channel emulator

    NASA Astrophysics Data System (ADS)

    Xiong, Yanwei; Zhang, Jianhua; Zhang, Ping

    2013-03-01

    With the advent of Multi-Input Multi-Output (MIMO) systems, system performance is highly dependent on an accurate representation of the channel conditions, which causes wireless channel emulation to become increasingly important. The conventional Finite Impulse Response (FIR) based emulator has high real-time performance, but its complexity rapidly becomes impractical for larger array sizes. The frequency-domain approach can avoid this problem and reduce the complexity for higher-order arrays. A complexity comparison between the time-domain and frequency-domain approaches is made in this paper. The Fast Fourier Transform (FFT), an important component of signal processing in the frequency domain, is briefly introduced, and an FPGA system architecture based on the CORDIC algorithm is proposed. The full design is implemented on Xilinx's Virtex-5.
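The time-domain versus frequency-domain trade-off comes down to the convolution theorem, sketched here with NumPy (an illustration, not the emulator's implementation): filtering through an FIR channel in the time domain and multiplying spectra of zero-padded signals give identical outputs, but the FFT route costs O(L log L) rather than the O(N·M) of direct convolution, which is why it wins for large MIMO arrays with many channel taps.

```python
import numpy as np

def fir_time_domain(x, h):
    """Direct FIR filtering (linear convolution), O(N*M)."""
    return np.convolve(x, h)

def fir_freq_domain(x, h):
    """Same filter applied in the frequency domain, O(L log L):
    zero-pad both signals to the linear-convolution length, multiply spectra."""
    L = len(x) + len(h) - 1
    X = np.fft.fft(x, L)
    H = np.fft.fft(h, L)
    return np.fft.ifft(X * H).real

rng = np.random.default_rng(2)
x = rng.standard_normal(256)   # input signal
h = rng.standard_normal(16)    # channel impulse response (FIR taps)
assert np.allclose(fir_time_domain(x, h), fir_freq_domain(x, h))
```

For an M×N MIMO emulator the saving multiplies across every transmit-receive channel pair, which is the complexity argument the paper quantifies.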

  7. Genetic architecture of adaptation to novel environmental conditions in a predominantly selfing allopolyploid plant.

    PubMed

    Volis, S; Ormanbekova, D; Yermekbayev, K; Abugalieva, S; Turuspekov, Y; Shulgina, I

    2016-06-01

    Genetic architecture of adaptation is traditionally studied in the context of local adaptation, viz. spatially varying conditions experienced by the species. However, anthropogenic changes in the natural environment pose a new context to this issue, that is, adaptation to an environment that is new for the species. In this study, we used crossbreeding to analyze genetic architecture of adaptation to conditions not currently experienced by the species but with high probability of encounter in the near future due to global climate change. We performed targeted interpopulation crossing using genotypes from two core and two peripheral Triticum dicoccoides populations and raised the parents and three generations of hybrids in a greenhouse under simulated desert conditions to analyze the genetic architecture of adaptation to these conditions and an effect of gene flow from plants having different origin. The hybrid (F1) fitness did not differ from that of the parents in crosses where both plants originated from the species core, but in crosses involving one parent from the species core and another one from the species periphery the fitness of F1 was consistently higher than that of the periphery-originated parent. Plant fitness in the next two generations (F2 and F3) did not differ from the F1, suggesting that effects of epistatic interactions between recombining and segregating alleles of genes contributing to fitness were minor or absent. The observed low importance of epistatic gene interactions in allopolyploid T. dicoccoides and low probability of hybrid breakdown appear to be the result of permanent fixation of heterozygosity and lack of intergenomic recombination in this species. At the same time, predominant but not complete selfing combined with an advantage of bivalent pairing of homologous chromosomes appears to maintain high genetic variability in T. dicoccoides, greatly enhancing its adaptive ability. PMID:26837272

  8. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme, comparing the obtained performance with alternative approaches. We then show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. Finally, we evaluate how the scheme scales for simulated plants with a high number of degrees of freedom (7 DOFs). PMID:22907270
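The feedback-error-learning idea above — using the feedback controller's output as the teaching signal for an adaptive feedforward model — can be illustrated with a minimal sketch. A 1-D toy plant and a single learned weight stand in for the LWPR/cerebellar machinery; the plant gain, feedback gain, and learning rate are illustrative assumptions:

```python
import numpy as np

# Hypothetical 1-D plant: y = 0.5 * u (the gain is unknown to the controller)
def plant(u):
    return 0.5 * u

K = 0.3      # low feedback gain (compliant control)
lr = 0.05    # learning rate for the feedforward model
w = 0.0      # feedforward weight, adapted on-line

rng = np.random.default_rng(0)
refs = rng.uniform(-1, 1, 2000)
fb_history = []
for r in refs:
    u_ff = w * r                 # learned inverse-model command
    e = r - plant(u_ff)          # residual tracking error
    u_fb = K * e                 # feedback correction
    y = plant(u_ff + u_fb)       # applied command
    # feedback-error learning: the feedback term is the training signal
    w += lr * u_fb * r
    fb_history.append(abs(u_fb))

# After learning, w approaches 2.0 (the inverse of the plant gain) and the
# feedback correction shrinks toward zero.
print(round(w, 2), np.mean(fb_history[-100:]) < np.mean(fb_history[:100]))
```

As the feedforward model absorbs the inverse dynamics, the low-gain feedback loop is left with only residual corrections, which is the property the abstract highlights for compliant control.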

  9. FFT-local gravimetric geoid computation

    NASA Technical Reports Server (NTRS)

    Nagy, Dezso; Fury, Rudolf J.

    1989-01-01

Model computations show that changes of the sampling interval introduce only 0.3-cm changes, whereas zero padding provides an improvement of more than 5 cm in the fast Fourier transform (FFT) generated geoid. For the Global Positioning System (GPS) survey of Franklin County, Ohio, the parameters selected as a result of the model computations allow a large reduction in local data requirements while still retaining cm-level accuracy when tapering and padding are applied. The results are shown in tables.
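The benefit of zero padding in FFT-based computation comes from the fact that an unpadded FFT product implements circular convolution, whose wrap-around contaminates the result near the edges. A small numerical sketch (toy data, not actual geoid kernels) shows the effect:

```python
import numpy as np

g = np.array([1.0, 2.0, 3.0, 4.0])        # gridded anomalies (toy data)
k = np.array([0.5, 0.25, 0.125, 0.0625])  # integration kernel (toy data)

# Unpadded FFT product: circular convolution, with wrap-around error
circ = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(k)))

# Zero padding to length >= len(g) + len(k) - 1 makes the result linear,
# eliminating the wrap-around (edge) error
n = len(g) + len(k) - 1
lin = np.real(np.fft.ifft(np.fft.fft(g, n) * np.fft.fft(k, n)))[:len(g)]

exact = np.convolve(g, k)[:len(g)]
print(np.allclose(lin, exact), np.allclose(circ, exact))  # True False
```

Tapering plays a complementary role by smoothing the data toward zero at the borders, so that the periodicity the FFT assumes does not introduce spurious discontinuities.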

  10. Adaptive mode transition control architecture with an application to unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Gutierrez Zea, Luis Benigno

In this thesis, an architecture for the adaptive mode transition control of unmanned aerial vehicles (UAVs) is presented. The proposed architecture consists of three levels: the highest level is occupied by mission planning routines that process information about the waypoints the vehicle must follow; the middle level uses a trajectory generation component to coordinate task execution and provide set points for the low-level stabilizing controllers; and the adaptive mode transition control algorithm resides at the lowest level of the hierarchy, consisting of a mode transition controller and an accompanying adaptation mechanism. The mode transition controller is composed of a mode transition manager, a set of local controllers, a set of active control models, a set point filter, a state filter, an automatic trimming mechanism and a dynamic compensation filter. Local controllers operate in local modes, and active control models operate in transitions between two local modes. The mode transition manager determines the actual mode of operation of the vehicle based on a set of mode membership functions and activates a local controller or an active control model accordingly. The adaptation mechanism uses an indirect adaptive control methodology to adapt the active control models. For this purpose, a set of plant models based on fuzzy neural networks is trained on input/output data from the vehicle and used to compute the sensitivity matrices that provide the linearized models required by the adaptation algorithms. The effectiveness of the approach is verified through software-in-the-loop simulations, hardware-in-the-loop simulations and flight testing.
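The role of the mode membership functions can be sketched as a convex blend of two local controllers across a scheduling variable. The gains, airspeed thresholds, and linear membership shape below are illustrative assumptions, not the thesis's actual design:

```python
import numpy as np

# Two hypothetical local control gains: hover mode and forward-flight mode
K_hover, K_fwd = 2.0, 0.8

def membership(v, lo=5.0, hi=15.0):
    """Degree of membership in forward flight (0..1) vs. airspeed v (m/s)."""
    return float(np.clip((v - lo) / (hi - lo), 0.0, 1.0))

def blended_gain(v):
    """Active control model: convex blend of the two local controllers."""
    m = membership(v)
    return (1.0 - m) * K_hover + m * K_fwd

# Pure hover, mid-transition, and pure forward flight
print(blended_gain(0.0), blended_gain(10.0), blended_gain(20.0))
```

At the endpoints only one local controller is active; in between, the active control model interpolates smoothly, which is what makes the transition itself a distinct, adaptable operating regime.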

  11. Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi

    2013-03-01

Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications; however, the automated detection of architectural distortion remains challenging in terms of sensitivity. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure, which selects its filter parameters according to the thickness of the gland structure. As post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis, and background mammary glands are removed based on the intensity of the output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index followed by binarization and labeling. False positives among the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
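An adaptive Gabor filter of the kind described — one whose scale and wavelength track the thickness of the gland structure — can be sketched as follows. The thickness-to-parameter mapping is a hypothetical stand-in for the paper's rule:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, wavelength):
    """Real-valued Gabor kernel at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def adaptive_gabor(ksize, thickness, theta):
    """Illustrative adaptation: thicker structures get a wider envelope
    and a proportionally longer wavelength."""
    return gabor_kernel(ksize, sigma=thickness,
                        theta=theta, wavelength=2 * thickness)

k = adaptive_gabor(15, thickness=3.0, theta=np.pi / 4)
print(k.shape)  # (15, 15)
```

In practice, a bank of such kernels over several orientations would be convolved with the mammogram and the maximum response per pixel retained, giving the orientation map used in the angle analysis.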

  12. Predicting neutron diffusion eigenvalues with a query-based adaptive neural architecture.

    PubMed

    Lysenko, M G; Wong, H I; Maldonado, G I

    1999-01-01

A query-based approach for adaptively retraining and restructuring a two-hidden-layer artificial neural network (ANN) has been developed for the speedy prediction of the fundamental mode eigenvalue of the neutron diffusion equation, a standard nuclear reactor core design calculation which normally requires the iterative solution of a large-scale system of nonlinear partial differential equations (PDEs). The approach developed focuses primarily upon the adaptive selection of training and cross-validation data and on artificial neural-network (ANN) architecture adjustments, with the objective of improving the accuracy and generalization properties of ANN-based neutron diffusion eigenvalue predictions. For illustration, the performance of a "bare bones" feedforward multilayer perceptron (MLP) is upgraded through a variety of techniques; namely, nonrandom initial training set selection, adjoint function input weighting, teacher-student membership and equivalence queries for generation of appropriate training data, and a dynamic node architecture (DNA) implementation. The global methodology is flexible in that it can "wrap around" any specific training algorithm selected for the static calculations (i.e., training iterations with a fixed training set and architecture). Finally, the improvements obtained are carefully contrasted against past works reported in the literature. PMID:18252578

  13. A Biomimetic Adaptive Algorithm and Low-Power Architecture for Implantable Neural Decoders

    PubMed Central

    Rapoport, Benjamin I.; Wattanapanitch, Woradorn; Penagos, Hector L.; Musallam, Sam; Andersen, Richard A.; Sarpeshkar, Rahul

    2010-01-01

    Algorithmically and energetically efficient computational architectures that operate in real time are essential for clinically useful neural prosthetic devices. Such devices decode raw neural data to obtain direct control signals for external devices. They can also perform data compression and vastly reduce the bandwidth and consequently power expended in wireless transmission of raw data from implantable brain-machine interfaces. We describe a biomimetic algorithm and micropower analog circuit architecture for decoding neural cell ensemble signals. The decoding algorithm implements a continuous-time artificial neural network, using a bank of adaptive linear filters with kernels that emulate synaptic dynamics. The filters transform neural signal inputs into control-parameter outputs, and can be tuned automatically in an on-line learning process. We provide experimental validation of our system using neural data from thalamic head-direction cells in an awake behaving rat. PMID:19964345
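The core of such a decoder — a bank of adaptive linear filters tuned on-line — can be approximated in discrete time by LMS adaptation. The synthetic firing rates and the hidden linear decoding rule below stand in for recorded neural data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_samples = 8, 5000

# Synthetic firing rates and a hidden linear decoding rule (a stand-in for
# head-direction tuning; the actual system uses recorded thalamic data)
rates = rng.normal(size=(n_samples, n_cells))
w_true = rng.normal(size=n_cells)
target = rates @ w_true

# On-line LMS adaptation of one linear decoding filter
w = np.zeros(n_cells)
mu = 0.01  # adaptation step size
for x, d in zip(rates, target):
    y = w @ x                  # decoded control parameter
    w += mu * (d - y) * x      # gradient step on the instantaneous error

print(np.allclose(w, w_true, atol=0.05))
```

The continuous-time analog version in the paper performs the same correlation-driven weight update with micropower circuits, with the filter kernels playing the role of synaptic dynamics.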

  14. Genomic architecture of adaptive color pattern divergence and convergence in Heliconius butterflies.

    PubMed

    Supple, Megan A; Hines, Heather M; Dasmahapatra, Kanchon K; Lewis, James J; Nielsen, Dahlia M; Lavoie, Christine; Ray, David A; Salazar, Camilo; McMillan, W Owen; Counterman, Brian A

    2013-08-01

    Identifying the genetic changes driving adaptive variation in natural populations is key to understanding the origins of biodiversity. The mosaic of mimetic wing patterns in Heliconius butterflies makes an excellent system for exploring adaptive variation using next-generation sequencing. In this study, we use a combination of techniques to annotate the genomic interval modulating red color pattern variation, identify a narrow region responsible for adaptive divergence and convergence in Heliconius wing color patterns, and explore the evolutionary history of these adaptive alleles. We use whole genome resequencing from four hybrid zones between divergent color pattern races of Heliconius erato and two hybrid zones of the co-mimic Heliconius melpomene to examine genetic variation across 2.2 Mb of a partial reference sequence. In the intergenic region near optix, the gene previously shown to be responsible for the complex red pattern variation in Heliconius, population genetic analyses identify a shared 65-kb region of divergence that includes several sites perfectly associated with phenotype within each species. This region likely contains multiple cis-regulatory elements that control discrete expression domains of optix. The parallel signatures of genetic differentiation in H. erato and H. melpomene support a shared genetic architecture between the two distantly related co-mimics; however, phylogenetic analysis suggests mimetic patterns in each species evolved independently. Using a combination of next-generation sequencing analyses, we have refined our understanding of the genetic architecture of wing pattern variation in Heliconius and gained important insights into the evolution of novel adaptive phenotypes in natural populations. PMID:23674305

  15. AdaRTE: adaptable dialogue architecture and runtime engine. A new architecture for health-care dialogue systems.

    PubMed

    Rojas-Barahona, L M; Giorgino, T

    2007-01-01

Spoken dialogue systems have been increasingly employed to provide ubiquitous automated access via telephone to information and services for the non-Internet-connected public, and they have been successfully applied in the health care context. Nevertheless, speech-based technology is not easy to implement because it requires a considerable development investment. The advent of VoiceXML for voice applications reduced the proliferation of incompatible dialogue interpreters, but introduced new complexity. In response to these issues, we designed an architecture for dialogue representation and interpretation, AdaRTE, which allows developers to lay out dialogue interactions through a high-level formalism that offers both declarative and procedural features. AdaRTE's aim is to provide a ground for deploying complex and adaptable dialogues while allowing experimentation with, and incremental adoption of, innovative speech technologies. It provides the dynamic behavior of Augmented Transition Networks and enables the generation of different backend formats such as VoiceXML. It is especially targeted at the health care context, where a framework for easy dialogue deployment could lower the barrier to a more widespread adoption of dialogue systems. PMID:17911878

  16. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance was benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10(exp 9) ops/sec, was interfaced directly to a three degree of freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microsec.

  17. Experimental demonstration of an adaptive architecture for direct spectral imaging classification.

    PubMed

    Dunlop-Gray, Matthew; Poon, Phillip K; Golish, Dathon; Vera, Esteban; Gehm, Michael E

    2016-08-01

Spectral imaging is a powerful tool for providing in situ material classification across a spatial scene. Typically, spectral imaging analyses are interested in classification, though often the classification is performed only after reconstruction of the spectral datacube. We present a computational spectral imaging system, the Adaptive Feature-Specific Spectral Imaging Classifier (AFSSI-C), which yields direct classification across the spatial scene without reconstruction of the source datacube. With a dual disperser architecture and a programmable spatial light modulator, the AFSSI-C measures specific projections of the spectral datacube which are generated by an adaptive Bayesian classification and feature design framework. We experimentally demonstrate multiple order-of-magnitude improvement of classification accuracy in low signal-to-noise ratio (SNR) environments when compared to legacy spectral imaging systems. PMID:27505794

  18. Designing a meta-level architecture in Java for adaptive parallelism by mobile software agents

    NASA Astrophysics Data System (ADS)

    Dominic, Stephen Victor

Adaptive parallelism refers to a parallel computation that runs on a pool of processors that may join or withdraw from a running computation. In this dissertation, a functional system of agents and agent behaviors for adaptive parallelism is developed. Software agents have the properties of robustness and have capacity for fault-tolerance. Adaptation and fault-tolerance emerge from the interaction of self-directed autonomous software agents for a parallel computation application. The multi-agent system can be considered an object-oriented system with a higher-level architectural component, i.e., a meta level for agent behavior. The meta-level object architecture is based on patterns of behavior and communication for mobile agents, which are developed to support cooperative problem solving in a distributed-heterogeneous computing environment. Although parallel processing is a suggested application domain for mobile agents implemented in the Java language, the development of robust agent behaviors implemented in an efficient manner is an active research area. Performance characteristics for three versions of a pattern recognition problem are used to demonstrate a linear speed-up with efficiency that is compared to research using a traditional client-server protocol in the C language. The best ideas from existing approaches to adaptive parallelism are used to create a single general-purpose paradigm that overcomes problems associated with node failure, the use of a single-centralized or shared resource, requirements for clients to actively join a computation, and a variety of other limitations that are associated with existing systems. The multi-agent system, and experiments, show how adaptation and parallelism can be exploited by a meta-architecture for a distributed-scientific application that is of particular interest to design of signal-processing ground stations. To a large extent the framework separates concern for algorithmic design from concern for where and

  19. Analog circuit design and implementation of an adaptive resonance theory (ART) neural network architecture

    NASA Astrophysics Data System (ADS)

    Ho, Ching S.; Liou, Juin J.; Georgiopoulos, Michael; Heileman, Gregory L.; Christodoulou, Christos G.

    1993-09-01

This paper presents an analog circuit implementation of an adaptive resonance theory neural network architecture, called the augmented ART-1 neural network (AART1-NN). The AART1-NN is a modification of the popular ART1-NN, developed by Carpenter and Grossberg, and it exhibits the same behavior as the ART1-NN. The AART1-NN is a real-time model and has the ability to classify an arbitrary set of binary input patterns into different clusters. The design of the AART1-NN model as an analog circuit is presented. The circuit is implemented using analog electronic components such as operational amplifiers, transistors, capacitors, and resistors. The implemented circuit is verified using the PSpice circuit simulator, running on Sun workstations. Results obtained from the PSpice circuit simulation compare favorably with simulation results produced by solving the differential equations numerically. The prototype system developed here can be used as a building block for larger AART1-NN architectures, as well as for other types of ART architectures that involve the AART1-NN model.
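The discrete ART-1 dynamics that such a circuit emulates can be sketched in a few lines (fast-learning variant; the vigilance and choice parameters are illustrative):

```python
import numpy as np

def art1(patterns, rho=0.6, beta=0.5):
    """Minimal ART-1 clustering of binary patterns with vigilance rho."""
    categories = []   # learned binary prototype vectors
    labels = []
    for p in patterns:
        I = np.asarray(p)
        # rank existing categories by the choice (bottom-up) function
        order = sorted(range(len(categories)),
                       key=lambda j: -np.sum(I & categories[j])
                       / (beta + categories[j].sum()))
        for j in order:
            match = np.sum(I & categories[j]) / I.sum()
            if match >= rho:                       # vigilance test passed
                categories[j] = I & categories[j]  # fast learning: intersect
                labels.append(j)
                break
        else:                                      # no resonance: new category
            categories.append(I.copy())
            labels.append(len(categories) - 1)
    return labels, categories

pats = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]
labels, cats = art1(pats)
print(labels)  # [0, 0, 1]
```

The first two patterns resonate with the same category (their overlap passes the vigilance test), while the third fails the test against every prototype and recruits a new cluster; the analog circuit realizes the same search-and-resonance cycle in continuous time.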

  20. The Telesupervised Adaptive Ocean Sensor Fleet (TAOSF) Architecture: Coordination of Multiple Oceanic Robot Boats

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Podnar, Gregg W.; Dolan, John M.; Stancliff, Stephen; Lin, Ellie; Hosler, Jeffrey C.; Ames, Troy J.; Higinbotham, John; Moisan, John R.; Moisan, Tiffany A.; Kulczycki, Eric A.

    2008-01-01

Earth science research must bridge the gap between the atmosphere and the ocean to foster understanding of Earth's climate and ecology. Ocean sensing is typically done with satellites, buoys, and crewed research ships. The limitations of these systems include the fact that satellites are often blocked by cloud cover, and buoys and ships have spatial coverage limitations. This paper describes a multi-robot science exploration software architecture and system called the Telesupervised Adaptive Ocean Sensor Fleet (TAOSF). TAOSF supervises and coordinates a group of robotic boats, the OASIS platforms, to enable in-situ study of phenomena in the ocean/atmosphere interface, as well as on the ocean surface and sub-surface. The OASIS platforms are extended-deployment autonomous ocean surface vehicles, whose development is funded separately by the National Oceanic and Atmospheric Administration (NOAA). TAOSF allows a human operator to effectively supervise and coordinate multiple robotic assets using a sliding autonomy control architecture, where the operating mode of the vessels ranges from autonomous control to teleoperated human control. TAOSF increases data-gathering effectiveness and science return while reducing demands on scientists for robotic asset tasking, control, and monitoring. The first field application chosen for TAOSF is the characterization of Harmful Algal Blooms (HABs). We discuss the overall TAOSF architecture, describe field tests conducted under controlled conditions using rhodamine dye as a HAB simulant, present initial results from these tests, and outline the next steps in the development of TAOSF.

  1. Development and Flight Testing of an Adaptable Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.; Taylor, B. Douglas; Brett, Rube R.

    2003-01-01

    Development and testing of an adaptable wireless health-monitoring architecture for a vehicle fleet is presented. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained adaptable expert system. The remote data acquisition unit has an eight channel programmable digital interface that allows the user discretion for choosing type of sensors; number of sensors, sensor sampling rate, and sampling duration for each sensor. The architecture provides framework for a tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In the framework, only analysis results are forwarded to the next level to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear.

  2. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    NASA Astrophysics Data System (ADS)

    Kim, Chul; Rassau, Alex; Lachowicz, Stefan; Lee, Mike Myung-Ok; Eshraghian, Kamran

    2006-12-01

This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, a 32-bit dedicated RISC processor for control, on-chip program/data memory, a data frame buffer, and a direct memory access (DMA) controller. Target applications include real-time communication and multimedia signal processing. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip in SystemC, used to determine the optimum hardware specification early in the design stage.

  3. Algorithms and architectures for adaptive least squares signal processing, with applications in magnetoencephalography

    SciTech Connect

    Lewis, P.S.

    1988-10-01

    Least squares techniques are widely used in adaptive signal processing. While algorithms based on least squares are robust and offer rapid convergence properties, they also tend to be complex and computationally intensive. To enable the use of least squares techniques in real-time applications, it is necessary to develop adaptive algorithms that are efficient and numerically stable, and can be readily implemented in hardware. The first part of this work presents a uniform development of general recursive least squares (RLS) algorithms, and multichannel least squares lattice (LSL) algorithms. RLS algorithms are developed for both direct estimators, in which a desired signal is present, and for mixed estimators, in which no desired signal is available, but the signal-to-data cross-correlation is known. In the second part of this work, new and more flexible techniques of mapping algorithms to array architectures are presented. These techniques, based on the synthesis and manipulation of locally recursive algorithms (LRAs), have evolved from existing data dependence graph-based approaches, but offer the increased flexibility needed to deal with the structural complexities of the RLS and LSL algorithms. Using these techniques, various array architectures are developed for each of the RLS and LSL algorithms and the associated space/time tradeoffs presented. In the final part of this work, the application of these algorithms is demonstrated by their employment in the enhancement of single-trial auditory evoked responses in magnetoencephalography. 118 refs., 49 figs., 36 tabs.
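A direct-estimator RLS recursion of the kind developed in the first part of this work can be sketched as follows (exponentially weighted form with forgetting factor λ; the test data are synthetic and noiseless):

```python
import numpy as np

def rls(xs, ds, lam=0.99, delta=100.0):
    """Recursive least squares; returns the final weight vector."""
    n = xs.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                 # inverse correlation estimate
    for x, d in zip(xs, ds):
        k = P @ x / (lam + x @ P @ x)     # gain vector
        e = d - w @ x                     # a priori estimation error
        w = w + k * e                     # weight update
        P = (P - np.outer(k, x @ P)) / lam
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
d = X @ w_true                            # desired signal (direct estimator)
print(np.round(rls(X, d), 3))             # recovers w_true
```

The O(n²) rank-one update of P per sample is exactly the part that the data-dependence-graph and LRA mapping techniques in this work target for systolic array implementation.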

  4. Development and Flight Testing of an Adaptive Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Taylor, B. Douglas; Brett, Rube R.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.

    2002-01-01

Ongoing development and testing of an adaptable vehicle health-monitoring architecture is presented. The architecture is being developed for a fleet of vehicles. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained expert system. The expert system is parameterized, which makes it adaptable to be trained both to a user's subjective reasoning and to existing quantitative analytic tools. Communication between all levels is done with wireless radio frequency interfaces. The remote data acquisition unit has an eight-channel programmable digital interface that allows the user discretion in choosing the type of sensors, the number of sensors, and the sampling rate and sampling duration for each sensor. The architecture provides the framework for a tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In the framework, only analysis results are forwarded to the next level, to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear. The flight tests were performed to validate the wireless radio frequency communication capabilities of the system, the hardware design, command and control, software operation, and data acquisition, storage and retrieval.

  5. Rice Root Architectural Plasticity Traits and Genetic Regions for Adaptability to Variable Cultivation and Stress Conditions1[OPEN

    PubMed Central

    Sandhu, Nitika; Raman, K. Anitha; Torres, Rolando O.; Audebert, Alain; Dardou, Audrey; Kumar, Arvind; Henry, Amelia

    2016-01-01

    Future rice (Oryza sativa) crops will likely experience a range of growth conditions, and root architectural plasticity will be an important characteristic to confer adaptability across variable environments. In this study, the relationship between root architectural plasticity and adaptability (i.e. yield stability) was evaluated in two traditional × improved rice populations (Aus 276 × MTU1010 and Kali Aus × MTU1010). Forty contrasting genotypes were grown in direct-seeded upland and transplanted lowland conditions with drought and drought + rewatered stress treatments in lysimeter and field studies and a low-phosphorus stress treatment in a Rhizoscope study. Relationships among root architectural plasticity for root dry weight, root length density, and percentage lateral roots with yield stability were identified. Selected genotypes that showed high yield stability also showed a high degree of root plasticity in response to both drought and low phosphorus. The two populations varied in the soil depth effect on root architectural plasticity traits, none of which resulted in reduced grain yield. Root architectural plasticity traits were related to 13 (Aus 276 population) and 21 (Kali Aus population) genetic loci, which were contributed by both the traditional donor parents and MTU1010. Three genomic loci were identified as hot spots with multiple root architectural plasticity traits in both populations, and one locus for both root architectural plasticity and grain yield was detected. These results suggest an important role of root architectural plasticity across future rice crop conditions and provide a starting point for marker-assisted selection for plasticity. PMID:27342311

  6. A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism

    PubMed Central

    Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan

    2013-01-01

    Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831

  7. A step towards developing adaptive robot-mediated intervention architecture (ARIA) for children with autism.

    PubMed

    Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R; Crittendon, Julie A; Warren, Zachary E; Sarkar, Nilanjan

    2013-03-01

    Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child's head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831

  8. Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture

    NASA Astrophysics Data System (ADS)

    Anandkumar, Janavikulam; Szu, Harold H.

    1995-04-01

    This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in a parallel fashion from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, it is then possible to choose the best one by various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of any specialized quantizers and encoders, the compression ratio numbers look encouraging, with preliminary results indicating compression ratios ranging from 40:1 to 15:1 at percentage rms distortions ranging from about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or the signal deviations that may occur due to either noise or arrhythmias, only the wavelet family that correlates best with that particular portion of the signal is chosen. Compression is achieved mainly because the chosen mother wavelet and its variations match the shape of the ECG and can transcribe the source efficiently with few wavelet coefficients. The adaptive template matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from the fine-tuning of the wavelet selection process, which is based on localized constraints such as the shape of the signal and its energy.
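    The thresholding idea described above (keep the coefficients that carry most of the segment's energy, zero the rest) can be sketched with a one-level Haar transform. The Haar wavelet and the single energy threshold are illustrative stand-ins for the paper's Daubechies library and QMF sub-band scheme:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar analysis: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """One-level Haar synthesis (inverse of haar_dwt)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress(x, keep_energy=0.999):
    """Zero the smallest coefficients while retaining a fixed fraction
    of total energy, mimicking the paper's thresholding rule."""
    a, d = haar_dwt(x)
    coeffs = np.concatenate([a, d])
    order = np.argsort(np.abs(coeffs))[::-1]            # largest first
    energy = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
    n_keep = int(np.searchsorted(energy, keep_energy)) + 1
    mask = np.zeros_like(coeffs, dtype=bool)
    mask[order[:n_keep]] = True
    coeffs = np.where(mask, coeffs, 0.0)
    half = len(a)
    return haar_idwt(coeffs[:half], coeffs[half:]), n_keep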

  9. Interference fiber ring perimeter with FFT analysis

    NASA Astrophysics Data System (ADS)

    Vasinek, Vladimir; Vitasek, Jan; Hejduk, Stanislav; Bocheza, Jiri; Latal, Jan; Koudelka, Petr

    2011-11-01

    Fiber-optic interferometers are highly sensitive instruments able to measure slight changes such as shape distortion and variations in temperature and electric field. A great advantage is that they are insensitive to ageing of the components from which they are composed, because it is not the optical signal intensity that is evaluated but the number of interference fringes. To monitor the movement of persons, and eventually to analyze changes in their state of motion, we developed a method based on analysis of the dynamic changes in the interferometric pattern. We used a Mach-Zehnder interferometer with conventional SM and PM fibers excited with a DFB laser at a wavelength of 1550 nm. It was terminated with an optical receiver containing an InGaAs PIN photodiode, whose output was brought into a measuring card module that performs an FFT of the received interferometer signal. The signal arises from the superposition of two waves passing through the interferometer arms. The optical fiber (SMF-28e or PM PANDA) in one arm is the reference; the second is positioned on a measuring slab with dimensions of 1×2 m. The movement of persons over the slab was monitored, the signal was processed with an FFT, and the frequency spectra, which arise from the dynamic changes of the interferometric pattern, were evaluated. The results show that individual subjects passing over the slab produce characteristic frequency spectra, which are distinctive for particular persons. The measured frequencies ranged from zero to 10 kHz. The stability of the interferometric patterns was evaluated both over time and across repeated identical experiments. Two kinds of balls (tennis and ping-pong) were used for repeatability measurements: the spectra obtained at repeated drops of the balls, striking the same place from the same elevation, were compared and the dispersion of the obtained frequency spectra was evaluated. These experiments were performed
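    The FFT stage of such a measuring card can be sketched as follows: sample the photodiode output, compute the magnitude spectrum, and locate the dominant frequency. The sample rate and the 440 Hz test tone are illustrative assumptions, not values from the paper:

```python
import numpy as np

fs = 20_000                           # Hz; covers the 0-10 kHz band used in the experiment
t = np.arange(0, 1.0, 1.0 / fs)
# Stand-in for the photodiode output: a 440 Hz disturbance plus noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))          # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum)]            # dominant disturbance frequency
```

    With a 1 s window the frequency resolution is 1 Hz, so the peak lands on the 440 Hz bin.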

  10. Hamstring Architectural and Functional Adaptations Following Long vs. Short Muscle Length Eccentric Training.

    PubMed

    Guex, Kenny; Degache, Francis; Morisod, Cynthia; Sailly, Matthieu; Millet, Gregoire P

    2016-01-01

    Most common preventive eccentric-based exercises, such as the Nordic hamstring, do not include any hip flexion. Thus, the elongation stress reached is lower than during the late swing phase of sprinting. The aim of this study was to assess the evolution of hamstring architectural (fascicle length and pennation angle) and functional (concentric and eccentric optimum angles and concentric and eccentric peak torques) parameters following a 3-week eccentric resistance program performed at long (LML) vs. short muscle length (SML). Both groups performed eight sessions of 3-5 × 8 slow maximal eccentric knee extensions on an isokinetic dynamometer: the SML group at 0° and the LML group at 80° of hip flexion. Architectural parameters were measured using ultrasound imaging and functional parameters using the isokinetic dynamometer. The fascicle length increased by 4.9% (p < 0.01, medium effect size) in the SML and by 9.3% (p < 0.001, large effect size) in the LML group. The pennation angle did not change (p = 0.83) in the SML and tended to decrease by 0.7° (p = 0.09, small effect size) in the LML group. The concentric optimum angle tended to decrease by 8.8° (p = 0.09, medium effect size) in the SML and by 17.3° (p < 0.01, large effect size) in the LML group. The eccentric optimum angle did not change (p = 0.19, small effect size) in the SML and tended to decrease by 10.7° (p = 0.06, medium effect size) in the LML group. The concentric peak torque did not change in the SML (p = 0.37) and the LML (p = 0.23) groups, whereas eccentric peak torque increased by 12.9% (p < 0.01, small effect size) and 17.9% (p < 0.001, small effect size) in the SML and the LML group, respectively. No group-by-time interaction was found for any parameters. A correlation was found between the training-induced change in fascicle length and the change in concentric optimum angle (r = -0.57, p < 0.01). These results suggest that performing eccentric exercises leads to several architectural and

  11. Hamstring Architectural and Functional Adaptations Following Long vs. Short Muscle Length Eccentric Training

    PubMed Central

    Guex, Kenny; Degache, Francis; Morisod, Cynthia; Sailly, Matthieu; Millet, Gregoire P.

    2016-01-01

    Most common preventive eccentric-based exercises, such as the Nordic hamstring, do not include any hip flexion. Thus, the elongation stress reached is lower than during the late swing phase of sprinting. The aim of this study was to assess the evolution of hamstring architectural (fascicle length and pennation angle) and functional (concentric and eccentric optimum angles and concentric and eccentric peak torques) parameters following a 3-week eccentric resistance program performed at long (LML) vs. short muscle length (SML). Both groups performed eight sessions of 3–5 × 8 slow maximal eccentric knee extensions on an isokinetic dynamometer: the SML group at 0° and the LML group at 80° of hip flexion. Architectural parameters were measured using ultrasound imaging and functional parameters using the isokinetic dynamometer. The fascicle length increased by 4.9% (p < 0.01, medium effect size) in the SML and by 9.3% (p < 0.001, large effect size) in the LML group. The pennation angle did not change (p = 0.83) in the SML and tended to decrease by 0.7° (p = 0.09, small effect size) in the LML group. The concentric optimum angle tended to decrease by 8.8° (p = 0.09, medium effect size) in the SML and by 17.3° (p < 0.01, large effect size) in the LML group. The eccentric optimum angle did not change (p = 0.19, small effect size) in the SML and tended to decrease by 10.7° (p = 0.06, medium effect size) in the LML group. The concentric peak torque did not change in the SML (p = 0.37) and the LML (p = 0.23) groups, whereas eccentric peak torque increased by 12.9% (p < 0.01, small effect size) and 17.9% (p < 0.001, small effect size) in the SML and the LML group, respectively. No group-by-time interaction was found for any parameters. A correlation was found between the training-induced change in fascicle length and the change in concentric optimum angle (r = −0.57, p < 0.01). These results suggest that performing eccentric exercises leads to several architectural and

  12. Helix-length compensation studies reveal the adaptability of the VS ribozyme architecture

    PubMed Central

    Lacroix-Labonté, Julie; Girard, Nicolas; Lemieux, Sébastien; Legault, Pascale

    2012-01-01

    Compensatory mutations in RNA are generally regarded as those that maintain base pairing, and their identification forms the basis of phylogenetic predictions of RNA secondary structure. However, other types of compensatory mutations can provide higher-order structural and evolutionary information. Here, we present a helix-length compensation study for investigating structure–function relationships in RNA. The approach is demonstrated for stem-loop I and stem-loop V of the Neurospora VS ribozyme, which form a kissing–loop interaction important for substrate recognition. To rapidly characterize the substrate specificity (kcat/KM) of several substrate/ribozyme pairs, a procedure was established for simultaneous kinetic characterization of multiple substrates. Several active substrate/ribozyme pairs were identified, indicating the presence of limited substrate promiscuity for stem Ib variants and helix-length compensation between stems Ib and V. 3D models of the I/V interaction were generated that are compatible with the kinetic data. These models further illustrate the adaptability of the VS ribozyme architecture for substrate cleavage and provide global structural information on the I/V kissing–loop interaction. By exploring higher-order compensatory mutations in RNA our approach brings a deeper understanding of the adaptability of RNA structure, while opening new avenues for RNA research. PMID:22086962

  13. Mapping the genomic architecture of adaptive traits with interspecific introgressive origin: a coalescent-based approach.

    PubMed

    Hejase, Hussein A; Liu, Kevin J

    2016-01-01

    Recent studies of eukaryotes including human and Neandertal, mice, and butterflies have highlighted the major role that interspecific introgression has played in adaptive trait evolution. A common question arises in each case: what is the genomic architecture of the introgressed traits? One common approach that can be used to address this question is association mapping, which looks for genotypic markers that have significant statistical association with a trait. It is well understood that sample relatedness can be a confounding factor in association mapping studies if not properly accounted for. Introgression and other evolutionary processes (e.g., incomplete lineage sorting) typically introduce variation among local genealogies, which can also differ from global sample structure measured across all genomic loci. In contrast, state-of-the-art association mapping methods assume fixed sample relatedness across the genome, which can lead to spurious inference. We therefore propose a new association mapping method called Coal-Map, which uses coalescent-based models to capture local genealogical variation alongside global sample structure. Using simulated and empirical data reflecting a range of evolutionary scenarios, we compare the performance of Coal-Map against EIGENSTRAT, a leading association mapping method in terms of its popularity, power, and type I error control. Our empirical data makes use of hundreds of mouse genomes for which adaptive interspecific introgression has recently been described. We found that Coal-Map's performance is comparable or better than EIGENSTRAT in terms of statistical power and false positive rate. Coal-Map's performance advantage was greatest on model conditions that most closely resembled empirically observed scenarios of adaptive introgression. These conditions had: (1) causal SNPs contained in one or a few introgressed genomic loci and (2) varying rates of gene flow - from high rates to very low rates where incomplete lineage

  14. A Fast Conformal Mapping Algorithm with No FFT

    NASA Astrophysics Data System (ADS)

    Luchini, P.; Manzo, F.

    1992-08-01

    An algorithm is presented for the computation of a conformal mapping discretized on a non-uniformly spaced point set, useful for the numerical solution of many problems of fluid dynamics. Most existing iterative techniques, both those having a linear and those having a quadratic type of convergence, rely on the fast Fourier transform ( FFT) algorithm for calculating a convolution integral which represents the most time-consuming phase of the computation. The FFT, however, definitely cannot be applied to a non-uniform spacing. The algorithm presented in this paper has been made possible by the construction of a calculation method for convolution integrals which, despite not using an FFT, maintains a computation time of the same order as that of the FFT. The new technique is successfully applied to the problem of conformally mapping a closely spaced cascade of airfoils onto a circle, which requires an exceedingly large number of points if it is solved with uniform spacing.
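    The paper's fast convolution for non-uniform spacing is not reproduced here; as background, the convolution theorem that makes the FFT attractive on uniform grids can be sketched by checking the O(N²) definition against the O(N log N) FFT route:

```python
import numpy as np

def circular_conv_direct(x, h):
    """O(N^2) circular convolution straight from the definition sum."""
    n = len(x)
    return np.array([sum(x[k] * h[(m - k) % n] for k in range(n))
                     for m in range(n)])

def circular_conv_fft(x, h):
    """O(N log N) circular convolution via the convolution theorem:
    pointwise multiplication of the DFTs, then an inverse DFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
```

    The FFT route requires the samples to lie on a uniform grid, which is exactly the restriction the paper's algorithm removes.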

  15. The Arab Vernacular Architecture and its Adaptation to Mediterranean Climatic Zones

    NASA Astrophysics Data System (ADS)

    Paz, Shlomit; Hamza, Efat

    2014-05-01

    Throughout history people have employed building strategies adapted to local climatic conditions in an attempt to achieve thermal comfort in their homes. In the Mediterranean climate, a mixed strategy developed - utilizing positive parameters (e.g. natural lighting), while at the same time addressing negative variables (e.g. high temperatures during summer). This study analyzes the adaptation of construction strategies of traditional Arab houses to Mediterranean climatic conditions. It is based on the assumption that the climate of the eastern Mediterranean led to development of unique architectural patterns. The way in which the inhabitants chose to build their homes was modest but creative in the context of climate awareness, with simple ideas. These were often instinctive responses to climate challenges. Nine traditional Arab houses, built from the mid-19th century to the beginning of the 20th century, were analyzed in three different regions in Israel: the "Meshulash" - an area in the center of the country, and the Lower and Upper Galilees (in the north). In each region three houses were examined. It is important to note that only a few houses from these periods still remain, particularly in light of new construction in many of the villages' core areas. Qualitative research methodologies included documentation of all the elements of these traditional houses which were assumed to be a result of climatic factors, such as - house position (direction), thickness of walls, thermal mass, ceiling height, location of windows, natural ventilation, exterior wall colors and shading strategies. Additionally, air temperatures and relative humidity were measured at selected dates throughout all seasons both inside and immediately outside the houses during morning, noon, evening and night-time hours. The documentation of the architectural elements and strategies demonstrate that climatic considerations were an integral part of the planning and construction process of these

  16. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
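    The Virtual Sensors idea (an ensemble regressor predicts one sensor from the others, and the spread of the ensemble's predictions serves as the uncertainty estimate) can be sketched as below. The bootstrap linear ensemble is an illustrative stand-in, not NASA's actual model:

```python
import numpy as np

def fit_ensemble(X, y, n_models=20, seed=0):
    """Bootstrap ensemble of least-squares linear models y ~ X."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([X, np.ones(len(X))])      # add a bias column
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))       # resample with replacement
        w, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        models.append(w)
    return np.array(models)

def predict(models, X):
    """Ensemble mean (the virtual sensor) and spread (its uncertainty)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    preds = Xb @ models.T                           # (n_samples, n_models)
    return preds.mean(axis=1), preds.std(axis=1)
```

    A fault would then be flagged when the measured sensor value leaves a band around the ensemble mean whose width scales with the ensemble spread.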

  17. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results good data gridding algorithms are essential; in practice truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to those of other methods.
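    At the core of these planar FFT methods is the evaluation of a 2-D convolution of gridded anomalies with a kernel. A minimal sketch, with zero-padding to suppress the circular wrap-around of the plain DFT, might look like this (the kernel here is generic, not the actual Stokes or Vening Meinesz kernel):

```python
import numpy as np

def grid_convolve(data, kernel):
    """Linear 2-D convolution of two equally sized grids via the FFT,
    zero-padded to twice the grid size so the DFT's circular
    wrap-around does not contaminate the result."""
    ny, nx = data.shape
    shape = (2 * ny, 2 * nx)
    F = np.fft.rfft2(data, shape) * np.fft.rfft2(kernel, shape)
    full = np.fft.irfft2(F, shape)
    return full[:ny, :nx]            # same-size corner of the linear convolution
```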

  18. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems.

    PubMed

    Albattat, Ali; Gruenwald, Benjamin C; Yucelen, Tansel

    2016-01-01

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections with each other. In addition to the theoretical findings, including rigorous system stability and boundedness analysis of the closed-loop dynamical system as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894

  19. Engine Fault Diagnosis using DTW, MFCC and FFT

    NASA Astrophysics Data System (ADS)

    Singh, Vrijendra; Meena, Narendra

    In this paper we use a combination of three algorithms, dynamic time warping (DTW), Mel frequency cepstral coefficients (MFCC), and the fast Fourier transform (FFT), for classifying various engine faults. DTW, MFCC, and FFT are usually used for automatic speech recognition. This paper introduces the DTW algorithm and the coefficients extracted from the Mel frequency cepstrum and FFT for automatic fault detection and identification (FDI) of internal combustion engines for the first time. The objective of the current work was to develop a new intelligent system able to predict the possible fault in a running engine at different workshops. We took various samples of engine faults, applied these algorithms, extracted features from them, and used a fuzzy rule-base approach for fault classification.
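    The DTW component can be sketched with the classic dynamic-programming recurrence (absolute-difference cost; the paper's feature extraction and fuzzy classifier are not reproduced here):

```python
import numpy as np

def dtw_distance(s, t):
    """Dynamic time warping distance between two 1-D sequences.
    D[i, j] holds the cheapest alignment cost of s[:i] with t[:j]."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

    Because DTW aligns sequences non-linearly in time, a fault signature is matched even when the engine runs at a slightly different speed than the reference recording.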

  20. Adapted Verbal Feedback, Instructor Interaction and Student Emotions in the Landscape Architecture Studio

    ERIC Educational Resources Information Center

    Smith, Carl A.; Boyer, Mark E.

    2015-01-01

    In light of concerns with architectural students' emotional jeopardy during traditional desk and final-jury critiques, the authors pursue alternative approaches intended to provide more supportive and mentoring verbal assessment in landscape architecture studios. In addition to traditional studio-based critiques throughout a semester, we provide…

  1. Interpolation And FFT Of Near-Field Antenna Measurements

    NASA Technical Reports Server (NTRS)

    Gatti, Mark S.; Rahmat-Samii, Yahya

    1990-01-01

    Bivariate Lagrange interpolation is applied to plane-polar measurement scans. The report discusses recent advances in the application of fast-Fourier-transform (FFT) techniques to measurements of the near radiation fields of antennas on a plane-polar grid. Attention is focused mainly on the use of such measurements to calculate far radiation fields, with additional discussion of the use of FFTs in the holographic diagnosis of distortions of antenna reflectors. An advantage of the scheme is that it speeds calculations, requiring fewer data and fewer manipulations of the data than other schemes used for this purpose.
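    A one-dimensional simplification of the Lagrange-interpolation step (resampling scattered measurement points onto a uniform grid before applying the FFT) might look like the sketch below; the bivariate plane-polar version interpolates in radius and angle analogously:

```python
import numpy as np

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange polynomial through the nodes (xs, ys)
    at the points x, by summing the cardinal basis polynomials."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = np.ones_like(x)
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total
```

    Interpolation through n + 1 nodes is exact for polynomials up to degree n, which is what makes a low-order local Lagrange scheme accurate on smoothly varying field data.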

  2. When History Repeats Itself: Exploring the Genetic Architecture of Host-Plant Adaptation in Two Closely Related Lepidopteran Species

    PubMed Central

    Alexandre, Hermine; Ponsard, Sergine; Bourguet, Denis; Vitalis, Renaud; Audiot, Philippe; Cros-Arteil, Sandrine; Streiff, Réjane

    2013-01-01

    The genus Ostrinia includes two allopatric maize pests across Eurasia, namely the European corn borer (ECB, O. nubilalis) and the Asian corn borer (ACB, O. furnacalis). A third species, the Adzuki bean borer (ABB, O. scapulalis), occurs in sympatry with both the ECB and the ACB. The ABB mostly feeds on native dicots, which probably correspond to the ancestral host plant type for the genus Ostrinia. This situation offers the opportunity to characterize the two presumably independent adaptations or preadaptations to maize that occurred in the ECB and ACB. In the present study, we aimed at deciphering the genetic architecture of these two adaptations to maize, a monocot host plant recently introduced into Eurasia. To this end, we performed a genome scan analysis based on 684 AFLP markers in 12 populations of ECB, ACB and ABB. We detected 2 outlier AFLP loci when comparing French populations of the ECB and ABB, and 9 outliers when comparing Chinese populations of the ACB and ABB. These outliers were different in both countries, and we found no evidence of linkage disequilibrium between any two of them. These results suggest that adaptation or preadaptation to maize relies on a different genetic architecture in the ECB and ACB. However, this conclusion must be considered in light of the constraints inherent to genome scan approaches and of the intricate evolution of adaptation and reproductive isolation in the Ostrinia spp. complex. PMID:23874914

  3. Automatic Synthesis of Cost Effective FFT/IFFT Cores for VLSI OFDM Systems

    NASA Astrophysics Data System (ADS)

    L'Insalata, Nicola E.; Saponara, Sergio; Fanucci, Luca; Terreni, Pierangelo

    This work presents an FFT/IFFT core compiler particularly suited to the VLSI implementation of OFDM communication systems. The tool employs an architecture template based on the pipelined cascade principle. The generated cores support run-time programmable length and transform-type selection, enabling seamless integration into multiple-mode and multiple-standard terminals. A distinctive feature of the tool is its accuracy-driven configuration engine, which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point) using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communications standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results of the generated macrocells are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and commercially available FPGA devices. When compared with other tools for automatic FFT core generation, the proposed environment produces macrocells with lower circuit complexity expressed as gate count and RAM/ROM bits, while keeping the same system-level performance in terms of throughput, transform size and numerical accuracy.
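    The accuracy-driven profiling idea (pick the smallest bit-width that meets a user-supplied accuracy budget) can be approximated by quantizing the FFT input to b fractional bits and measuring the SNR against a full-precision reference. This is a simplification: the compiler described above also quantizes the internal butterfly arithmetic, not just the input:

```python
import numpy as np

def quantize(x, bits):
    """Round to a fixed-point grid with `bits` fractional bits."""
    scale = 2.0 ** bits
    return np.round(x * scale) / scale

def fft_snr_db(x, bits):
    """SNR (dB) of the FFT of the quantized input vs. the exact FFT."""
    ref = np.fft.fft(x)
    err = np.fft.fft(quantize(x, bits)) - ref
    return 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 256)   # stand-in for one block of input samples
```

    Sweeping `bits` upward until `fft_snr_db` clears the accuracy budget mirrors the closed-loop optimization described in the abstract; each extra fractional bit buys roughly 6 dB of SNR.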

  4. Bone architecture adaptations after spinal cord injury: impact of long-term vibration of a constrained lower limb

    PubMed Central

    Dudley-Javoroski, S.; Petrie, M. A.; McHenry, C. L.; Amelon, R. E.; Saha, P. K.

    2015-01-01

    Summary This study examined the effect of a controlled dose of vibration upon bone density and architecture in people with spinal cord injury (who eventually develop severe osteoporosis). Very sensitive computed tomography (CT) imaging revealed no effect of vibration after 12 months, but other doses of vibration may still be useful to test. Introduction The purposes of this report were to determine the effect of a controlled dose of vibratory mechanical input upon individual trabecular bone regions in people with chronic spinal cord injury (SCI) and to examine the longitudinal bone architecture changes in both the acute and chronic state of SCI. Methods Participants with SCI received unilateral vibration of the constrained lower limb segment while sitting in a wheelchair (0.6g, 30 Hz, 20 min, three times weekly). The opposite limb served as a control. Bone mineral density (BMD) and trabecular micro-architecture were measured with high-resolution multi-detector CT. For comparison, one participant was studied from the acute (0.14 year) to the chronic state (2.7 years). Results Twelve months of vibration training did not yield adaptations of BMD or trabecular micro-architecture for the distal tibia or the distal femur. BMD and trabecular network length continued to decline at several distal femur sub-regions, contrary to previous reports suggesting a “steady state” of bone in chronic SCI. In the participant followed from acute to chronic SCI, BMD and architecture decline varied systematically across different anatomical segments of the tibia and femur. Conclusions This study supports that vibration training, using this study’s dose parameters, is not an effective antiosteoporosis intervention for people with chronic SCI. Using a high-spatial-resolution CT methodology and segmental analysis, we illustrate novel longitudinal changes in bone that occur after spinal cord injury. PMID:26395887

  5. An adaptable architecture for patient cohort identification from diverse data sources

    PubMed Central

    Bache, Richard; Miles, Simon; Taweel, Adel

    2013-01-01

    Objective We define and validate an architecture for systems that identify patient cohorts for clinical trials from multiple heterogeneous data sources. This architecture has an explicit query model capable of supporting temporal reasoning and expressing eligibility criteria independently of the representation of the data used to evaluate them. Method The architecture has the key feature that queries defined according to the query model are both pre- and post-processed, and this is used to address both structural and semantic heterogeneity. The process of extracting the relevant clinical facts is separated from the process of reasoning about them. A specific instance of the query model is then defined and implemented. Results We show that the specific instance of the query model has wide applicability. We then describe how it is used to access three diverse data warehouses to determine patient counts. Discussion Although the proposed architecture requires greater effort to implement the query model than would be the case for using just SQL and accessing a database management system directly, this effort is justified because it supports both temporal reasoning and heterogeneous data sources. The query model only needs to be implemented once no matter how many data sources are accessed. Each additional source requires only the implementation of a lightweight adaptor. Conclusions The architecture has been used to implement a specific query model that can express complex eligibility criteria and access three diverse data warehouses, thus demonstrating the feasibility of this approach in dealing with temporal reasoning and data heterogeneity. PMID:24064442

  6. Adaptive Software Architecture Based on Confident HCI for the Deployment of Sensitive Services in Smart Homes

    PubMed Central

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-01-01

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature. PMID:25815449

  7. Adaptive software architecture based on confident HCI for the deployment of sensitive services in Smart Homes.

    PubMed

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-01-01

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature. PMID:25815449

  8. A game-theoretic architecture for visible watermarking system of ACOCOA (adaptive content and contrast aware) technique

    NASA Astrophysics Data System (ADS)

    Tsai, Min-Jen; Liu, Jung

    2011-12-01

    Digital watermarking techniques have been developed to protect intellectual property. A digital watermarking system is basically judged on two characteristics: security robustness and image quality. In order to obtain a robust visible watermarking in practice, we present a novel watermarking algorithm named adaptive content and contrast aware (ACOCOA), which considers the host image content and watermark texture. In addition, we propose a powerful security architecture against attacks for visible watermarking systems which is based on a game-theoretic approach that provides an equilibrium condition solution for the decision maker by studying the effects of transmission power on intensity and perceptual efficiency. The experimental results demonstrate the feasibility of the proposed approach: it not only provides effectiveness and robustness for the watermarked images, but also allows the watermark encoder to obtain the best adaptive watermarking strategy under attacks.

  9. Architectural Models of Adaptive Hypermedia Based on the Use of Ontologies

    ERIC Educational Resources Information Center

    Souhaib, Aammou; Mohamed, Khaldi; Eddine, El Kadiri Kamal

    2011-01-01

    The domain of traditional hypermedia is revolutionized by the arrival of the concept of adaptation. Currently, the domain of AHS (adaptive hypermedia systems) is constantly growing. A major goal of current research is to provide a personalized educational experience that meets the needs specific to each learner (knowledge level, goals, motivation,…

  10. Saliency detection for videos using 3D FFT local spectra

    NASA Astrophysics Data System (ADS)

    Long, Zhiling; AlRegib, Ghassan

    2015-03-01

    Bottom-up spatio-temporal saliency detection identifies perceptually important regions of interest in video sequences. The center-surround model proves to be useful for visual saliency detection. In this work, we explore using 3D FFT local spectra as features for saliency detection within the center-surround framework. We develop a spectral location based decomposition scheme to divide a 3D FFT cube into two components, one related to temporal changes and the other related to spatial changes. Temporal saliency and spatial saliency are detected separately using features derived from each spectral component through a simple center-surround comparison method. The two detection results are then combined to yield a saliency map. We apply the same detection algorithm to different color channels (YIQ) and incorporate the results into the final saliency determination. The proposed technique is tested with the public CRCNS database. Both visual and numerical evaluations verify the promising performance of our technique.
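As a rough illustration of the block-wise 3D spectral center-surround idea, the sketch below scores each spatio-temporal cube by how much its local 3D FFT magnitude spectrum deviates from the average spectrum. This is a simplified stand-in, not the authors' algorithm: it uses non-overlapping cubes, a single channel, and a global average as the surround:

```python
import numpy as np

def block_spectra(video, k=8):
    """Split a (T, H, W) video into non-overlapping k*k*k cubes and return
    the 3D FFT magnitude spectrum of each cube."""
    T, H, W = video.shape
    t, h, w = T // k, H // k, W // k
    cubes = video[:t*k, :h*k, :w*k].reshape(t, k, h, k, w, k)
    cubes = cubes.transpose(0, 2, 4, 1, 3, 5)          # (t, h, w, k, k, k)
    return np.abs(np.fft.fftn(cubes, axes=(3, 4, 5)))

def center_surround_saliency(video, k=8):
    """Simplified center-surround comparison: each cube's saliency is the
    mean absolute difference between its local spectrum and the average
    spectrum over all cubes (the 'surround')."""
    spec = block_spectra(video, k)
    surround = spec.mean(axis=(0, 1, 2), keepdims=True)
    return np.abs(spec - surround).mean(axis=(3, 4, 5))  # (t, h, w)

rng = np.random.default_rng(0)
video = rng.standard_normal((16, 32, 32))
video[8:16, 16:24, 16:24] += 5.0     # a sustained bright patch in one cube
sal = center_surround_saliency(video, k=8)
print(sal.shape)                     # (2, 4, 4): one score per cube
```

The boosted cube stands out because its DC spectral bin deviates strongly from the surround average.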

  11. Real-Time, Polyphase-FFT, 640-MHz Spectrum Analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, George A.; Garyantes, Michael F.; Grimm, Michael J.; Charny, Bentsian; Brown, Randy D.; Wilck, Helmut C.

    1994-01-01

    Real-time polyphase-fast-Fourier-transform (polyphase-FFT) spectrum analyzer designed to aid in detection of multigigahertz radio signals in two 320-MHz-wide polarization channels. Spectrum analyzer divides total spectrum of 640 MHz into 33,554,432 frequency channels of about 20 Hz each. Size and cost of polyphase-coefficient memory substantially reduced and much of processing loss of windowed FFTs eliminated.
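The polyphase-FFT idea, splitting a prototype window into per-channel branches so that one M-point FFT produces M filtered channels with far less leakage than a plain windowed FFT, can be sketched as follows. This is a generic textbook channelizer with illustrative sizes, not the JPL design:

```python
import numpy as np

def polyphase_channelize(x, n_channels, taps_per_branch=8):
    """Critically sampled polyphase-FFT channelizer (illustrative sketch).
    A length M*P prototype window is split into M polyphase branches of P
    taps; each output block is the M-point FFT of the branch-filtered input,
    yielding one sample per frequency channel per input block."""
    M, P = n_channels, taps_per_branch
    h = np.hanning(M * P).reshape(P, M)      # prototype split into branches
    n_blocks = len(x) // M - (P - 1)
    out = np.empty((n_blocks, M), dtype=complex)
    for b in range(n_blocks):
        seg = x[b*M:(b + P)*M].reshape(P, M) # P consecutive M-sample blocks
        out[b] = np.fft.fft((seg * h).sum(axis=0))
    return out                               # shape (n_blocks, n_channels)

fs, M = 64_000, 32
t = np.arange(8192) / fs
x = np.cos(2 * np.pi * 6000 * t)             # tone at 6 kHz
Y = polyphase_channelize(x, M)
power = (np.abs(Y) ** 2).mean(axis=0)
print(power.argmax())                        # bin nearest 6 kHz (or its mirror)
```

With channel spacing fs/M = 2 kHz, the tone's energy lands in bin 3 (and its real-signal mirror, bin 29) while the prototype window suppresses leakage into neighboring channels.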

  12. Adaptation of the anelastic solver EULAG to high performance computing architectures.

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Ciżnicki, Miłosz; Kopta, Piotr; Kulczewski, Michał; Kurowski, Krzysztof; Piotrowski, Zbigniew; Rojek, Krzysztof; Rosa, Bogdan; Szustak, Łukasz; Wyrzykowski, Roman

    2014-05-01

    In recent years there has been widespread interest in employing heterogeneous and hybrid supercomputing architectures for geophysical research. A particularly promising application for modern supercomputing architectures is numerical weather prediction (NWP). Adapting traditional NWP codes to new machines based on multi- and many-core processors, such as GPUs, allows computational efficiency to be increased and energy consumption decreased. This offers a unique opportunity to develop simulations with finer grid resolutions and computational domains larger than ever before. Further, it makes it possible to extend the range of scales represented in the model, so that the accuracy of representation of the simulated atmospheric processes, and consequently the quality of weather forecasts, can be improved. A coalition of Polish scientific institutions launched a project aimed at adapting the EULAG fluid solver for future high-performance computing platforms. EULAG is currently being implemented as a new dynamical core of the COSMO Consortium weather prediction framework. The solver code combines features of stencil and pointwise computations. Its communication scheme consists of both halo-exchange subroutines and global reduction functions. Within the project, two main modules of EULAG, namely the MPDATA advection scheme and the iterative GCR elliptic solver, are analyzed and optimized. Relevant techniques have been chosen and applied to accelerate code execution on modern HPC architectures: stencil decomposition, block decomposition (with weighting analysis between computation and communication), reduction of inter-cache communication by partitioning cores into independent teams, cache reuse, and vectorization. Experiments matching the computational domain topology to the cluster topology are performed as well. The parallel formulation was extended from pure MPI to a hybrid MPI-OpenMP approach. Porting to GPU using CUDA directives is in progress.
Preliminary results of performance of the
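The halo-exchange pattern mentioned in the abstract can be emulated in a single process. This is a sketch only (the real solver exchanges halos via MPI): the domain is split into strips, each strip receives one row of ghost cells from its neighbors, and applying the stencil locally then reproduces the global result exactly.

```python
import numpy as np

def laplacian_interior(u):
    # 5-point stencil evaluated on the interior points of a 2-D field
    return (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4.0 * u[1:-1, 1:-1])

def laplacian_decomposed(u, n_sub=2):
    """Block decomposition with halo exchange, emulated in one process:
    each horizontal strip is extended by one row of its neighbor's data
    (the 'halo') before the stencil is applied locally."""
    H = u.shape[0]
    edges = np.linspace(0, H, n_sub + 1, dtype=int)
    parts = [laplacian_interior(u[max(lo - 1, 0):min(hi + 1, H)])
             for lo, hi in zip(edges[:-1], edges[1:])]
    return np.concatenate(parts, axis=0)

rng = np.random.default_rng(1)
u = rng.standard_normal((64, 48))
assert np.allclose(laplacian_decomposed(u, 4), laplacian_interior(u))
print("decomposed stencil matches global stencil")
```

In the MPI version each rank would own one strip and the halo rows would arrive via send/receive calls; the stencil arithmetic is unchanged.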

  13. The genomic architecture and association genetics of adaptive characters using a candidate SNP approach in boreal black spruce

    PubMed Central

    2013-01-01

    Background The genomic architecture of adaptive traits remains poorly understood in non-model plants. Various approaches can be used to bridge this gap, including the mapping of quantitative trait loci (QTL) in pedigrees, and genetic association studies in non-structured populations. Here we present results on the genomic architecture of adaptive traits in black spruce, which is a widely distributed conifer of the North American boreal forest. As an alternative to the usual candidate gene approach, a candidate SNP approach was developed for association testing. Results A genetic map containing 231 gene loci was used to identify QTL that were related to budset timing and to tree height assessed over multiple years and sites. Twenty-two unique genomic regions were identified, including 20 that were related to budset timing and 6 that were related to tree height. From results of outlier detection and bulk segregant analysis for adaptive traits using DNA pool sequencing of 434 genes, 52 candidate SNPs were identified and subsequently tested in genetic association studies for budset timing and tree height assessed over multiple years and sites. A total of 34 (65%) SNPs were significantly associated with budset timing, or tree height, or both. Although the percentages of explained variance (PVE) by individual SNPs were small, several significant SNPs were shared between sites and among years. Conclusions The sharing of genomic regions and significant SNPs between budset timing and tree height indicates pleiotropic effects. Significant QTLs and SNPs differed quite greatly among years, suggesting that different sets of genes for the same characters are involved at different stages in the tree’s life history. The functional diversity of genes carrying significant SNPs and low observed PVE further indicated that a large number of polymorphisms are involved in adaptive genetic variation. Accordingly, for undomesticated species such as black spruce with natural populations

  14. Unexpectedly low nitrogen acquisition and absence of root architecture adaptation to nitrate supply in a Medicago truncatula highly branched root mutant

    PubMed Central

    Bourion, Virginie

    2014-01-01

    To complement N2 fixation through symbiosis, legumes can efficiently acquire soil mineral N through adapted root architecture. However, root architecture adaptation to mineral N availability has been little studied in legumes. Therefore, this study investigated the effect of nitrate availability on root architecture in Medicago truncatula and assessed the N-uptake potential of a new highly branched root mutant, TR185. The effects of varying nitrate supply on both root architecture and N uptake were characterized in the mutant and in the wild type. Surprisingly, the root architecture of the mutant was not modified by variation in nitrate supply. Moreover, despite its highly branched root architecture, TR185 had a permanently N-starved phenotype. A transcriptome analysis was performed to identify genes differentially expressed between the two genotypes. This analysis revealed differential responses related to the nitrate acquisition pathway and confirmed that N starvation occurred in TR185. Changes in amino acid content and expression of genes involved in the phenylpropanoid pathway were associated with differences in root architecture between the mutant and the wild type. PMID:24706718

  15. Academic Accountability and University Adaptation: The Architecture of an Academic Learning Organization.

    ERIC Educational Resources Information Center

    Dill, David D.

    1999-01-01

    Discusses various adaptations in organizational structure and governance of academic learning institutions, using case studies of universities that are attempting to improve the quality of teaching and the learning process. Identifies five characteristics typical of such organizations: (1) a culture of evidence; (2) improved coordination of…

  16. Architecture for an Adaptive and Intelligent Tutoring System That Considers the Learner's Multiple Intelligences

    ERIC Educational Resources Information Center

    Hafidi, Mohamed; Bensebaa, Taher

    2015-01-01

    The majority of adaptive and intelligent tutoring systems (AITS) are dedicated to a specific domain, allowing them to offer accurate models of the domain and the learner. The analysis produced from traces left by the users is didactically very precise and specific to the domain in question. It allows one to guide the learner in case of difficulty…

  17. Elucidating the molecular architecture of adaptation via evolve and resequence experiments

    PubMed Central

    Long, Anthony; Liti, Gianni; Luptak, Andrej; Tenaillon, Olivier

    2016-01-01

    Evolve and resequence (E&R) experiments use experimental evolution to adapt populations to a novel environment, followed by next-generation sequencing. They enable molecular evolution to be monitored in real time at a genome-wide scale. We review the field of E&R experiments across diverse systems, ranging from simple non-living RNA to bacteria, yeast and complex multicellular Drosophila melanogaster. We explore how different evolutionary outcomes in these systems are largely consistent with common population genetics principles. Differences in outcomes across systems are largely explained by differences in starting population size, levels of pre-existing genetic variation, recombination rate, and adaptive landscape. We highlight emerging themes and inconsistencies that future experiments must address. PMID:26347030

  18. SSME to RS-25: Challenges of Adapting a Heritage Engine to a New Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.

    2015-01-01

    A key constituent of the NASA Space Launch System (SLS) architecture is the RS-25 engine, also known as the Space Shuttle Main Engine (SSME). This engine was selected largely due to the maturity and extensive experience gained through 30-plus years of service. However, while the RS-25 is a highly mature system, simply unbolting it from the Space Shuttle and mounting it on the new SLS vehicle is not a "plug-and-play" operation. In addition to numerous technical integration and operational details, there were also hardware upgrades needed. While the magnitude of effort is less than that needed to develop a new clean-sheet engine system, this paper describes some of the expected and unexpected challenges encountered to date on the path to the first flight of SLS.

  19. Adaptive functional specialisation of architectural design and fibre type characteristics in agonist shoulder flexor muscles of the llama, Lama glama.

    PubMed

    Graziotti, Guillermo H; Chamizo, Verónica E; Ríos, Clara; Acevedo, Luz M; Rodríguez-Menéndez, J M; Victorica, C; Rivero, José-Luis L

    2012-08-01

    Like other camelids, llamas (Lama glama) have the natural ability to pace (moving ipsilateral limbs in near synchronicity). But unlike the Old World camelids (bactrian and dromedary camels), they are well adapted for pacing at slower or moderate speeds in high-altitude habitats, having been described as good climbers and used as pack animals for centuries. In order to gain insight into skeletal muscle design and to ascertain its relationship with the llama's characteristic locomotor behaviour, this study examined the correspondence between architecture and fibre types in two agonist muscles involved in shoulder flexion (M. teres major - TM and M. deltoideus, pars scapularis - DS and pars acromialis - DA). Architectural properties were found to be correlated with fibre-type characteristics both in DS (long fibres, low pinnation angle, fast-glycolytic fibre phenotype with abundant IIB fibres, small fibre size, reduced number of capillaries per fibre and low oxidative capacity) and in DA (short fibres, high pinnation angle, slow-oxidative fibre phenotype with numerous type I fibres, very sparse IIB fibres, and larger fibre size, abundant capillaries and high oxidative capacity). This correlation suggests a clear division of labour within the M. deltoideus of the llama, DS being involved in rapid flexion of the shoulder joint during the swing phase of the gait, and DA in joint stabilisation during the stance phase. However, the architectural design of the TM muscle (longer fibres and lower fibre pinnation angle) was not strictly matched with its fibre-type characteristics (very similar to those of the postural DA muscle). This unusual design suggests a dual function of the TM muscle both in active flexion of the shoulder and in passive support of the limb during the stance phase, pulling the forelimb to the trunk. This functional specialisation seems to be well suited to a quadruped species that needs to increase ipsilateral stability of the limb during the support

  20. Adaptive functional specialisation of architectural design and fibre type characteristics in agonist shoulder flexor muscles of the llama, Lama glama

    PubMed Central

    Graziotti, Guillermo H; Chamizo, Verónica E; Ríos, Clara; Acevedo, Luz M; Rodríguez-Menéndez, J M; Victorica, C; Rivero, José-Luis L

    2012-01-01

    Like other camelids, llamas (Lama glama) have the natural ability to pace (moving ipsilateral limbs in near synchronicity). But unlike the Old World camelids (bactrian and dromedary camels), they are well adapted for pacing at slower or moderate speeds in high-altitude habitats, having been described as good climbers and used as pack animals for centuries. In order to gain insight into skeletal muscle design and to ascertain its relationship with the llama’s characteristic locomotor behaviour, this study examined the correspondence between architecture and fibre types in two agonist muscles involved in shoulder flexion (M. teres major – TM and M. deltoideus, pars scapularis – DS and pars acromialis – DA). Architectural properties were found to be correlated with fibre-type characteristics both in DS (long fibres, low pinnation angle, fast-glycolytic fibre phenotype with abundant IIB fibres, small fibre size, reduced number of capillaries per fibre and low oxidative capacity) and in DA (short fibres, high pinnation angle, slow-oxidative fibre phenotype with numerous type I fibres, very sparse IIB fibres, and larger fibre size, abundant capillaries and high oxidative capacity). This correlation suggests a clear division of labour within the M. deltoideus of the llama, DS being involved in rapid flexion of the shoulder joint during the swing phase of the gait, and DA in joint stabilisation during the stance phase. However, the architectural design of the TM muscle (longer fibres and lower fibre pinnation angle) was not strictly matched with its fibre-type characteristics (very similar to those of the postural DA muscle). This unusual design suggests a dual function of the TM muscle both in active flexion of the shoulder and in passive support of the limb during the stance phase, pulling the forelimb to the trunk. This functional specialisation seems to be well suited to a quadruped species that needs to increase ipsilateral stability of the limb during the

  1. The Architecture of Iron Microbial Mats Reflects the Adaptation of Chemolithotrophic Iron Oxidation in Freshwater and Marine Environments

    PubMed Central

    Chan, Clara S.; McAllister, Sean M.; Leavitt, Anna H.; Glazer, Brian T.; Krepski, Sean T.; Emerson, David

    2016-01-01

    Microbes form mats with architectures that promote efficient metabolism within a particular physicochemical environment; thus, studying mat structure helps us understand ecophysiology. Despite much research on chemolithotrophic Fe-oxidizing bacteria, Fe mat architecture has not been visualized because these delicate structures are easily disrupted. There are striking similarities between the biominerals that comprise freshwater and marine Fe mats, made by Beta- and Zetaproteobacteria, respectively. If these biominerals are assembled into mat structures with similar functional morphology, this would suggest that mat architecture is adapted to serve roles specific to Fe oxidation. To evaluate this, we combined light, confocal, and scanning electron microscopy of intact Fe microbial mats with experiments on sheath formation in culture, in order to understand mat developmental history and subsequently evaluate the connection between Fe oxidation and mat morphology. We sampled a freshwater sheath mat from Maine and marine stalk and sheath mats from Loihi Seamount hydrothermal vents, Hawaii. Mat morphology correlated to niche: stalks formed in steeper O2 gradients while sheaths were associated with low to undetectable O2 gradients. Fe-biomineralized filaments, twisted stalks or hollow sheaths, formed the highly porous framework of each mat. The mat-formers are keystone species, with nascent marine stalk-rich mats comprised of novel and uncommon Zetaproteobacteria. For all mats, filaments were locally highly parallel with similar morphologies, indicating that cells were synchronously tracking a chemical or physical cue. In the freshwater mat, cells inhabited sheath ends at the growing edge of the mat. Correspondingly, time lapse culture imaging showed that sheaths are made like stalks, with cells rapidly leaving behind an Fe oxide filament. The distinctive architecture common to all observed Fe mats appears to serve specific functions related to chemolithotrophic Fe

  2. The Architecture of Iron Microbial Mats Reflects the Adaptation of Chemolithotrophic Iron Oxidation in Freshwater and Marine Environments.

    PubMed

    Chan, Clara S; McAllister, Sean M; Leavitt, Anna H; Glazer, Brian T; Krepski, Sean T; Emerson, David

    2016-01-01

    Microbes form mats with architectures that promote efficient metabolism within a particular physicochemical environment; thus, studying mat structure helps us understand ecophysiology. Despite much research on chemolithotrophic Fe-oxidizing bacteria, Fe mat architecture has not been visualized because these delicate structures are easily disrupted. There are striking similarities between the biominerals that comprise freshwater and marine Fe mats, made by Beta- and Zetaproteobacteria, respectively. If these biominerals are assembled into mat structures with similar functional morphology, this would suggest that mat architecture is adapted to serve roles specific to Fe oxidation. To evaluate this, we combined light, confocal, and scanning electron microscopy of intact Fe microbial mats with experiments on sheath formation in culture, in order to understand mat developmental history and subsequently evaluate the connection between Fe oxidation and mat morphology. We sampled a freshwater sheath mat from Maine and marine stalk and sheath mats from Loihi Seamount hydrothermal vents, Hawaii. Mat morphology correlated to niche: stalks formed in steeper O2 gradients while sheaths were associated with low to undetectable O2 gradients. Fe-biomineralized filaments, twisted stalks or hollow sheaths, formed the highly porous framework of each mat. The mat-formers are keystone species, with nascent marine stalk-rich mats comprised of novel and uncommon Zetaproteobacteria. For all mats, filaments were locally highly parallel with similar morphologies, indicating that cells were synchronously tracking a chemical or physical cue. In the freshwater mat, cells inhabited sheath ends at the growing edge of the mat. Correspondingly, time lapse culture imaging showed that sheaths are made like stalks, with cells rapidly leaving behind an Fe oxide filament. The distinctive architecture common to all observed Fe mats appears to serve specific functions related to chemolithotrophic Fe

  3. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With ever higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which needs no physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, smaller area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple-access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
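The core CDMA mechanism the thesis builds on, orthogonal spreading codes letting several transmitter-receiver pairs share one channel simultaneously, can be shown in a few lines. This is a generic Walsh-code illustration, not the thesis' MAC protocol:

```python
import numpy as np

def walsh_codes(n):
    # Hadamard construction: n mutually orthogonal +/-1 codes of length n
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh_codes(8)                        # 8 transmitter-receiver pairs
bits = np.array([1, -1, -1, 1, 1, 1, -1, 1.0])  # one symbol per pair
channel = bits @ codes                        # all pairs transmit at once:
                                              # the medium sums the spread signals
decoded = (codes @ channel) / len(codes)      # each receiver correlates with
                                              # its own code to despread
print(np.allclose(decoded, bits))             # True: all symbols recovered
```

Because the code rows are orthogonal, each receiver's correlation cancels every other pair's contribution exactly, which is what allows concurrent use of the shared wireless channel.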

  4. SSME to RS-25: Challenges of Adapting a Heritage Engine to a New Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.

    2015-01-01

    Following the cancellation of the Constellation program and retirement of the Space Shuttle, NASA initiated the Space Launch System (SLS) program to provide next-generation heavy lift cargo and crew access to space. A key constituent of the SLS architecture is the RS-25 engine, also known as the Space Shuttle Main Engine (SSME). The RS-25 was selected to serve as the main propulsion system for the SLS core stage in conjunction with the solid rocket boosters. This selection was largely based on the maturity and extensive experience gained through 135 missions, 3000+ ground tests, and over a million seconds total accumulated hot-fire time. In addition, there were also over a dozen functional flight assets remaining from the Space Shuttle program that could be leveraged to support the first four flights. However, while the RS-25 is a highly mature system, simply unbolting it from the Space Shuttle boat-tail and installing it on the new SLS vehicle is not a "plug-and-play" operation. In addition to numerous technical integration details involving changes to significant areas such as the environments, interface conditions, technical performance requirements, operational constraints and so on, there were other challenges to be overcome in the area of replacing the obsolete engine control system (ECS). While the magnitude of accomplishing this effort was less than that needed to develop and field a new clean-sheet engine system, the path to the first flight of SLS has not been without unexpected challenges.

  5. Implementation of FFT Algorithm using DSP TMS320F28335 for Shunt Active Power Filter

    NASA Astrophysics Data System (ADS)

    Patel, Pinkal Jashvantbhai; Patel, Rajesh M.; Patel, Vinod

    2016-07-01

    This work presents simulation, analysis, and experimental verification of a Fast Fourier Transform (FFT) algorithm for a shunt active power filter based on a three-level inverter. Different types of filters can be used to eliminate harmonics in the power system. In this work, an FFT algorithm for reference current generation is discussed. The FFT control algorithm is verified using PSIM simulation results with a DLL block and C code. Simulation results are compared with experimental results for the FFT algorithm using the DSP TMS320F28335 in a shunt active power filter application.
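The reference-current generation step can be illustrated with a toy single-phase example (the numbers and the single-cycle rFFT approach are illustrative, not the paper's DSP implementation): the load current's FFT is taken over exactly one fundamental cycle, the fundamental bin is retained, and the residue, i.e. the harmonic content, becomes the compensation reference the filter must inject.

```python
import numpy as np

fs, f0, N = 12_800, 50, 256          # one 50 Hz cycle sampled at 256 points
t = np.arange(N) / fs

# Distorted load current: fundamental plus 5th and 7th harmonics
i_load = (10 * np.sin(2*np.pi*f0*t)
          + 2 * np.sin(2*np.pi*5*f0*t)
          + 1 * np.sin(2*np.pi*7*f0*t))

# FFT over exactly one fundamental cycle -> each harmonic lands on an exact bin
I = np.fft.rfft(i_load)
I_fund = np.zeros_like(I)
I_fund[1] = I[1]                     # keep only the fundamental (bin 1 = 50 Hz)
i_fund = np.fft.irfft(I_fund, N)

# The shunt filter's reference current is the harmonic residue to cancel
i_ref = i_load - i_fund
```

Synchronizing the FFT window to the fundamental period is what keeps the harmonics on exact bins; in a real filter a PLL typically provides that synchronization.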

  6. STS-29 MS Bagian during post landing egress exercises in JSC FFT mockup

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-29 Discovery, Orbiter Vehicle (OV) 103, Mission Specialist (MS) James P. Bagian practices post landing egress via overhead window W8 in the JSC full fuselage trainer (FFT) located in the Mockup and Integration Laboratory Bldg 9A. Bagian, wearing a navy blue launch and entry suit (LES) and launch and entry helmet (LEH), lowers himself down the side of the FFT using the sky genie. Technicians watch Bagian's progress from outside the FFT and inside the FFT at the open side hatch. Visible on Bagian's right side is the personal egress air pack (PEAP). Bagian, along with the engineers, is evaluating egress using the new crew escape system (CES) equipment (including parachute harness).

  7. Molecular views of damaged DNA: Adaptation of the Program DUPLEX to parallel architectures

    SciTech Connect

    Hingerty, B.E.; Crawford, O.H.; Broyde, S.; Wagner, R.A.

    1994-09-01

    The nucleic acids molecular mechanics program DUPLEX has been designed with useful features for surveying the potential energy surface of polynucleotides, especially ones that are modified by polycyclic aromatic carcinogens. The program features helpful strategies for addressing the multiple minimum problem: (1) the reduced variable domain of torsion angle space; (2) search strategies that emphasize large scale searches for smaller subunits, followed by building to larger units by a variety of strategies; (3) the use of penalty functions to aid the minimizer in locating selected structural types in first stage minimizations; penalty functions are released in terminal minimizations to yield final unrestrained minimum energy conformations. Predictive capability is illustrated by DNA modified by activated benzo[a]pyrenes. The first stage of adaptation to parallel computers is described.

  8. Human Behavior & Low Energy Architecture: Linking Environmental Adaptation, Personal Comfort, & Energy Use in the Built Environment

    NASA Astrophysics Data System (ADS)

    Langevin, Jared

    Truly sustainable buildings serve to enrich the daily sensory experience of their human inhabitants while consuming the least amount of energy possible; yet, building occupants and their environmentally adaptive behaviors remain a poorly characterized variable in even the most "green" building design and operation approaches. This deficiency has been linked to gaps between predicted and actual energy use, as well as to eventual problems with occupant discomfort, productivity losses, and health issues. Going forward, better tools are needed for considering the human-building interaction as a key part of energy efficiency strategies that promote good Indoor Environmental Quality (IEQ) in buildings. This dissertation presents the development and implementation of a Human and Building Interaction Toolkit (HABIT), a framework for the integrated simulation of office occupants' thermally adaptive behaviors, IEQ, and building energy use as part of sustainable building design and operation. Development of HABIT begins with an effort to devise more reliable methods for predicting individual occupants' thermal comfort, considered the driving force behind the behaviors of focus for this project. A long-term field study of thermal comfort and behavior is then presented, and the data it generates are used to develop and validate an agent-based behavior simulation model. Key aspects of the agent-based behavior model are described, and its predictive abilities are shown to compare favorably to those of multiple other behavior modeling options. Finally, the agent-based behavior model is linked with whole building energy simulation in EnergyPlus, forming the full HABIT program. The program is used to evaluate the energy and IEQ impacts of several occupant behavior scenarios in the simulation of a case study office building for the Philadelphia climate. Results indicate that more efficient local heating/cooling options may be paired with wider set point ranges to yield up to 24

  9. Range of motion, neuromechanical, and architectural adaptations to plantar flexor stretch training in humans.

    PubMed

    Blazevich, A J; Cannavan, D; Waugh, C M; Miller, S C; Thorlund, J B; Aagaard, P; Kay, A D

    2014-09-01

    The neuromuscular adaptations in response to muscle stretch training have not been clearly described. In the present study, changes in muscle (at fascicular and whole muscle levels) and tendon mechanics, muscle activity, and spinal motoneuron excitability were examined during standardized plantar flexor stretches after 3 wk of twice daily stretch training (4 × 30 s). No changes were observed in a nonexercising control group (n = 9); however, stretch training elicited a 19.9% increase in dorsiflexion range of motion (ROM) and a 28% increase in passive joint moment at end ROM (n = 12). Only a trend toward a decrease in passive plantar flexor moment during stretch (-9.9%; P = 0.15) was observed, and no changes in electromyographic amplitudes during ROM or at end ROM were detected. Decreases in H(max):M(max) (tibial nerve stimulation) were observed at plantar flexed (gastrocnemius medialis and soleus) and neutral (soleus only) joint angles, but not with the ankle dorsiflexed. Muscle and fascicle strain increased (12 vs. 23%) along with a decrease in muscle stiffness (-18%) during stretch to a constant target joint angle. Muscle length at end ROM increased (13%) without a change in fascicle length, fascicle rotation, tendon elongation, or tendon stiffness following training. A lack of change in maximum voluntary contraction moment and rate of force development at any joint angle was taken to indicate a lack of change in series compliance of the muscle-tendon unit. Thus, increases in end ROM were underpinned by increases in maximum tolerable passive joint moment (stretch tolerance) and both muscle and fascicle elongation rather than changes in volitional muscle activation or motoneuron pool excitability. PMID:24947023

  10. a Local Adaptive Approach for Dense Stereo Matching in Architectural Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2013-02-01

    In recent years, a demand for 3D models of various scales and precisions has been growing for a wide range of applications; among them, cultural heritage recording is a particularly important and challenging field. We outline an automatic 3D reconstruction pipeline, mainly focusing on dense stereo-matching which relies on a hierarchical, local optimization scheme. Our matching framework consists of a combination of robust cost measures, extracted via an intuitive cost aggregation support area and set within a coarse-to-fine strategy. The cost function is formulated by combining three individual costs: a cost computed on an extended census transformation of the images; the absolute difference cost, taking into account information from colour channels; and a cost based on the principal image derivatives. An efficient adaptive method of aggregating matching cost for each pixel is then applied, relying on linearly expanded cross skeleton support regions. Aggregated cost is smoothed via a 3D Gaussian function. Finally, a simple "winner-takes-all" approach extracts the disparity value with minimum cost. This keeps algorithmic complexity and system computational requirements acceptably low for high resolution images (or real-time applications), when compared to complex matching functions of global formulations. The stereo algorithm adopts a hierarchical scheme to accommodate high-resolution images and complex scenes. In a last step, a robust post-processing work-flow is applied to enhance the disparity map and, consequently, the geometric quality of the reconstructed scene. Successful results from our implementation, which combines pre-existing algorithms and novel considerations, are presented and evaluated on the Middlebury platform.
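The three-cost combination and winner-takes-all step described above can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' implementation: the census window, cost weights, grayscale input, and disparity convention are all assumptions, and the adaptive cross-skeleton aggregation is omitted.

```python
import numpy as np

def census_transform(img, r=1):
    """Binary census descriptor: compare each pixel with its neighbourhood."""
    h, w = img.shape
    desc = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            desc = (desc << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return desc

def hamming(a, b):
    """Per-pixel Hamming distance between census descriptors."""
    return np.vectorize(lambda v: bin(v).count("1"))(a ^ b).astype(np.float64)

def combined_cost(left, right, d, w_census=1.0, w_ad=0.3, w_grad=0.7):
    """Matching cost at disparity d: census + absolute difference + derivative
    terms (weights are illustrative assumptions, not the paper's values)."""
    right_shift = np.roll(right, d, axis=1)
    c_census = hamming(census_transform(left), census_transform(right_shift))
    c_ad = np.abs(left - right_shift)
    c_grad = np.abs(np.gradient(left, axis=1) - np.gradient(right_shift, axis=1))
    return w_census * c_census + w_ad * c_ad + w_grad * c_grad

def wta_disparity(left, right, max_d):
    """Winner-takes-all: pick the disparity with minimum combined cost."""
    costs = np.stack([combined_cost(left, right, d) for d in range(max_d + 1)])
    return costs.argmin(axis=0)
```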

  11. Peeping into genomic architecture by re-sequencing of Ochrobactrum intermedium M86 strain during laboratory adapted conditions.

    PubMed

    Gohil, Kushal N; Neurgaonkar, Priya S; Paranjpe, Aditi; Dastager, Syed G; Dharne, Mahesh S

    2016-06-01

    Advances in de novo sequencing technologies allow us to track deeper insights into microbial genomes for restructuring events during the course of their evolution inside and outside the host. Bacterial species belonging to the genus Ochrobactrum are being reported as emerging, opportunistic pathogens in this technology-driven era, probably due to insertion and deletion of genes. Ochrobactrum intermedium M86 was isolated in 2005 from a case of non-ulcer dyspepsia in a human stomach, and its first draft genome sequence followed in 2009. Here we report re-sequencing of the laboratory-adapted O. intermedium M86 strain in terms of gain and loss of genes. We also attempted a finer-scale genome sequence with 10 times greater genome coverage than the earlier one, followed by comparative evaluation on the Ion PGM and Illumina MiSeq platforms. Despite similarity at the genomic level, the lab-adapted strain mainly lacked genes encoding transposase proteins, insertion-element family proteins, and phage tail proteins on both chromosomes. Interestingly, a 5 kb indel absent in the original strain was detected in chromosome 2; it mapped to a phage integrase gene of Rhizobium spp. and may have been acquired and integrated through horizontal gene transfer, indicating the gene loss and gene gain phenomenon in this genus. The majority of indel fragments did not match known genes, indicating the need for further bioinformatic dissection of these fragments. Additionally, we report genes related to antibiotic resistance and heavy-metal tolerance in both the earlier and the re-sequenced strain. Though SNPs were detected, they did not span urease or flagellar genes. We also conclude that third-generation sequencing technologies might be useful for understanding genomic architecture and re-arrangement of genes in the genome, given their larger coverage, and can be used to trace evolutionary aspects in microbial systems. PMID:27222803

  12. Adaptive line enhancers for fast acquisition

    NASA Technical Reports Server (NTRS)

    Yeh, H.-G.; Nguyen, T. M.

    1994-01-01

    Three adaptive line enhancer (ALE) algorithms and architectures - namely, conventional ALE, ALE with double filtering, and ALE with coherent accumulation - are investigated for fast carrier acquisition in the time domain. The advantages of these algorithms are their simplicity, flexibility, robustness, and applicability to general situations including the Earth-to-space uplink carrier acquisition and tracking of the spacecraft. In the acquisition mode, these algorithms act as bandpass filters; hence, the carrier-to-noise ratio (CNR) is improved for fast acquisition. In the tracking mode, these algorithms simply act as lowpass filters to improve signal-to-noise ratio; hence, better tracking performance is obtained. It is not necessary to have a priori knowledge of the received signal parameters, such as CNR, Doppler, and carrier sweeping rate. The implementation of these algorithms is in the time domain (as opposed to the frequency domain, such as the fast Fourier transform (FFT)). The carrier frequency estimation can be updated in real time at each time sample (as opposed to the batch processing of the FFT). The carrier frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored.
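A conventional ALE of the kind described is essentially an LMS adaptive filter that predicts the narrowband (carrier) component of its input from a delayed copy; the delay decorrelates the broadband noise, so the filter output enhances the carrier. The sketch below is a minimal illustration under assumed tap count, decorrelation delay, and step size, not the flight implementation:

```python
import numpy as np

def ale_lms(x, n_taps=32, delay=8, mu=0.002):
    """Conventional adaptive line enhancer: an LMS filter predicts x[n] from
    a delayed tap vector. The carrier stays correlated across the delay and
    is passed; broadband noise decorrelates and is rejected."""
    w = np.zeros(n_taps)
    y = np.zeros_like(x)
    for n in range(delay + n_taps, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]  # delayed tap vector
        y[n] = w @ u                               # enhanced (filtered) output
        e = x[n] - y[n]                            # prediction error
        w += mu * e * u                            # LMS weight update
    return y
```

In acquisition the output behaves like a bandpass-filtered version of the input centred on the carrier, which is the CNR improvement the abstract refers to.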

  13. A Method for Finding Unknown Signals Using Reinforcement FFT Differencing

    SciTech Connect

    Charles R. Tolle; John W. James

    2009-12-01

    This note addresses a simple yet powerful method of discovering the spectral character of an unknown but intermittent signal buried in a background made up of a distribution of other signals. All the method requires is knowledge of when the unknown signal is present and when it is not, along with samples of the combined signal under both conditions. The method is based on reinforcing Fast Fourier Transform (FFT) power spectra when the signal of interest occurs and subtracting spectra when it does not. Several examples are presented. This method could be used to discover spectral components of unknown chemical species within spectral analysis instruments such as mass spectrometers, Fourier transform infrared (FTIR) spectrometers, and gas chromatographs. In addition, this method can be used to isolate device loading signatures on power transmission lines.
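The reinforce/subtract accumulation is simple enough to sketch directly. Frame length, FFT size, and the synthetic signal placement below are illustrative assumptions:

```python
import numpy as np

def reinforce_spectrum(frames, present, n_fft=256):
    """Accumulate FFT power spectra: add frames where the unknown signal is
    known to be present, subtract frames where it is absent. Background
    components cancel on average; the unknown signal's lines reinforce."""
    acc = np.zeros(n_fft // 2 + 1)
    for frame, flag in zip(frames, present):
        p = np.abs(np.fft.rfft(frame, n_fft)) ** 2
        acc += p if flag else -p
    return acc
```

With roughly equal counts of present/absent frames, the persistent background terms subtract out and the peak of the accumulated spectrum falls on the intermittent signal's line.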

  14. Efficient modelling of gravity effects due to topographic masses using the Gauss-FFT method

    NASA Astrophysics Data System (ADS)

    Wu, Leyuan

    2016-04-01

    We present efficient Fourier-domain algorithms for modelling gravity effects due to topographic masses. The well-known Parker's formula originally based on the standard fast Fourier transform (FFT) algorithm is modified by applying the Gauss-FFT method instead. Numerical precision of the forward and inverse Fourier transforms embedded in Parker's formula and its extended forms are significantly improved by the Gauss-FFT method. The topographic model is composed of two major aspects, the geometry and the density. Versatile geometric representations, including the mass line model, the mass prism model, the polyhedron model and smoother topographic models interpolated from discrete data sets using high-order splines or pre-defined by analytical functions, in combination with density distributions that vary both laterally and vertically in rather arbitrary ways following exponential or general polynomial functions, now can be treated in a consistent framework by applying the Gauss-FFT method. The method presented has been numerically checked by space-domain analytical and hybrid analytical/numerical solutions already established in the literature. Synthetic and real model tests show that both the Gauss-FFT method and the standard FFT method run much faster than space-domain solutions, with the Gauss-FFT method being superior in numerical accuracy. When truncation errors are negligible, the Gauss-FFT method can provide forward results almost identical to space-domain analytical or semi-numerical solutions in much less time.
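For context, the classical Parker forward model that the Gauss-FFT method refines can be sketched with the standard FFT as follows. This is a minimal sketch under simplifying assumptions (uniform density, constant observation height, truncated series, periodic boundary); the paper's contribution, replacing the standard forward and inverse transforms with the Gauss-FFT, is not reproduced here.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def parker_gravity(h, rho, dx, z0, n_terms=5):
    """Gravity effect of topography h(x, y) of uniform density rho above a
    reference plane, observed at height z0, via Parker's Fourier-domain
    series evaluated with the standard FFT:
        F[g](k) = 2*pi*G*rho * exp(-k*z0) * sum_n k^(n-1)/n! * F[h^n]."""
    ny, nx = h.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    k = np.hypot(*np.meshgrid(kx, ky))   # radial wavenumber, shape (ny, nx)
    spec = np.zeros((ny, nx), dtype=complex)
    fact = 1.0
    for n in range(1, n_terms + 1):
        fact *= n                        # running factorial n!
        spec += k ** (n - 1) / fact * np.fft.fft2(h ** n)
    spec *= 2 * np.pi * G * rho * np.exp(-k * z0)
    return np.real(np.fft.ifft2(spec))
```

A quick sanity check is the Bouguer slab: for constant topography of thickness t the series collapses to the DC term and the result is the flat-slab value 2*pi*G*rho*t everywhere.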

  15. Architectural and Biochemical Adaptations in Skeletal Muscle and Bone Following Rotator Cuff Injury in a Rat Model

    PubMed Central

    Sato, Eugene J.; Killian, Megan L.; Choi, Anthony J.; Lin, Evie; Choo, Alexander D.; Rodriguez-Soto, Ana E.; Lim, Chanteak T.; Thomopoulos, Stavros; Galatz, Leesa M.; Ward, Samuel R.

    2015-01-01

    Background: Injury to the rotator cuff can cause irreversible changes to the structure and function of the associated muscles and bones. The temporal progression and pathomechanisms associated with these adaptations are unclear. The purpose of this study was to investigate the time course of structural muscle and osseous changes in a rat model of a massive rotator cuff tear. Methods: Supraspinatus and infraspinatus muscle architecture and biochemistry and humeral and scapular morphological parameters were measured three days, eight weeks, and sixteen weeks after dual tenotomy with and without chemical paralysis via botulinum toxin A (BTX). Results: Muscle mass and physiological cross-sectional area increased over time in the age-matched control animals, decreased over time in the tenotomy+BTX group, and remained nearly the same in the tenotomy-alone group. Tenotomy+BTX led to increased extracellular collagen in the muscle. Changes in scapular bone morphology were observed in both experimental groups, consistent with reductions in load transmission across the joint. Conclusions: These data suggest that tenotomy alone interferes with normal age-related muscle growth. The addition of chemical paralysis yielded profound structural changes to the muscle and bone, potentially leading to impaired muscle function, increased muscle stiffness, and decreased bone strength. Clinical Relevance: Structural musculoskeletal changes occur after tendon injury, and these changes are severely exacerbated with the addition of neuromuscular compromise. PMID:25834081

  16. Adaptable dialog architecture and runtime engine (AdaRTE): a framework for rapid prototyping of health dialog systems.

    PubMed

    Rojas-Barahona, L M; Giorgino, T

    2009-04-01

    Spoken dialog systems have been increasingly employed to provide ubiquitous access via telephone to information and services for the non-Internet-connected public. They have been successfully applied in the health care context; however, speech technology requires a considerable development investment. The advent of VoiceXML reduced the proliferation of incompatible dialog formalisms, at the expense of adding even more complexity. This paper introduces a novel architecture for dialogue representation and interpretation, AdaRTE, which allows developers to lay out dialog interactions through a high-level formalism, offering both declarative and procedural features. AdaRTE's aim is to provide a ground for deploying complex and adaptable dialogs whilst allowing experimentation and incremental adoption of innovative speech technologies. It enhances augmented transition networks with dynamic behavior, and drives multiple back-end realizers, including VoiceXML. It has been especially targeted to the health care context, because of the great scale and the need for reducing the barrier to a widespread adoption of dialog systems. PMID:18799352

  17. An 8×8/4×4 Adaptive Hadamard Transform Based FME VLSI Architecture for 4K×2K H.264/AVC Encoder

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Liu, Jialiang; Zhang, Dexue; Zeng, Xiaoyang; Chen, Xinhua

    Fidelity Range Extension (FRExt) (i.e. High Profile) was added to the H.264/AVC recommendation in the second version. One of the features included in FRExt is the Adaptive Block-size Transform (ABT). In order to conform to the FRExt, a Fractional Motion Estimation (FME) architecture is proposed to support the 8×8/4×4 adaptive Hadamard Transform (8×8/4×4 AHT). The 8×8/4×4 AHT circuit contributes to higher throughput and encoding performance. In order to increase the utilization of SATD (Sum of Absolute Transformed Difference) Generator (SG) in unit time, the proposed architecture employs two 8-pel interpolators (IP) to time-share one SG. These two IPs can work in turn to provide the available data continuously to the SG, which increases the data throughput and significantly reduces the cycles that are needed to process one Macroblock. Furthermore, this architecture also exploits the linear feature of Hadamard Transform to generate the quarter-pel SATD. This method could help to shorten the long datapath in the second-step of two-iteration FME algorithm. Finally, experimental results show that this architecture could be used in the applications requiring different performances by adjusting the supported modes and operation frequency. It can support the real-time encoding of the seven-mode 4K×2K@24fps or six-mode 4K×2K@30fps video sequences.
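The 4×4 Hadamard SATD cost, and the linearity property the architecture exploits for the quarter-pel stage, can be illustrated as follows; this is a NumPy sketch of the arithmetic only, not the VLSI datapath:

```python
import numpy as np

# 4x4 Hadamard matrix (unnormalized), as used for SATD in H.264 encoders
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def transformed(residual):
    """2-D Hadamard transform of a 4x4 residual block (rows, then columns)."""
    return H4 @ residual @ H4.T

def satd4x4(residual):
    """Sum of Absolute Transformed Differences: transform, then sum magnitudes."""
    return np.abs(transformed(residual)).sum()
```

Because `transformed` is linear, the transform of a sum of residuals equals the sum of the transforms, which is the property that lets quarter-pel SATD values be derived from already-transformed half-pel results before the absolute-value step.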

  18. The TurboLAN project. Phase 1: Protocol choices for high speed local area networks. Phase 2: TurboLAN Intelligent Network Adapter Card, (TINAC) architecture

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The hardware and the software architecture of the TurboLAN Intelligent Network Adapter Card (TINAC) are described. A high level as well as detailed treatment of the workings of various components of the TINAC are presented. The TINAC is divided into the following four major functional units: (1) the network access unit (NAU); (2) the buffer management unit; (3) the host interface unit; and (4) the node processor unit.

  19. A reconfigurable multicarrier demodulator architecture

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Jamali, M. M.

    1991-01-01

    An architecture based on parallel and pipelined design approaches has been developed for the Frequency Division Multiple Access/Time Domain Multiplexed (FDMA/TDM) conversion system. The architecture has two main modules namely the transmultiplexer and the demodulator. The transmultiplexer has two pipelined modules. These are the shared multiplexed polyphase filter and the Fast Fourier Transform (FFT). The demodulator consists of carrier, clock, and data recovery modules which are interactive. Progress on the design of the MultiCarrier Demodulator (MCD) using commercially available chips and Application Specific Integrated Circuits (ASIC) and simulation studies using Viewlogic software will be presented at the conference.

  20. A finite element conjugate gradient FFT method for scattering

    NASA Technical Reports Server (NTRS)

    Collins, Jeffery D.; Zapp, John; Hsa, Chang-Yu; Volakis, John L.

    1990-01-01

    An extension of a two dimensional formulation is presented for a three dimensional body of revolution. With the introduction of a Fourier expansion of the vector electric and magnetic fields, a coupled two dimensional system is generated and solved via the finite element method. An exact boundary condition is employed to terminate the mesh and the fast Fourier transform (FFT) is used to evaluate the boundary integrals for low O(n) memory demand when an iterative solution algorithm is used. By virtue of the finite element method, the algorithm is applicable to structures of arbitrary material composition. Several improvements to the two dimensional algorithm are also described. These include: (1) modifications for terminating the mesh at circular boundaries without distorting the convolutionality of the boundary integrals; (2) the development of nonproprietary mesh generation routines for two dimensional applications; (3) the development of preprocessors for interfacing SDRC IDEAS with the main algorithm; and (4) the development of post-processing algorithms based on the public domain package GRAFIC to generate two and three dimensional gray level and color field maps.

  1. STS-33 EVA Prep and Post with Gregory, Blaha, Carter, Thornton, and Musgrave in FFT

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This video shows the crew in the airlock of the FFT, talking with technicians about the extravehicular activity (EVA) equipment. Thornton and Carter put on EVA suits and enter the airlock as the other crew members help with checklists.

  2. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism that has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  3. 1-FFT amino acids involved in high DP inulin accumulation in Viguiera discolor

    PubMed Central

    De Sadeleer, Emerik; Vergauwen, Rudy; Struyf, Tom; Le Roy, Katrien; Van den Ende, Wim

    2015-01-01

    Fructans are important vacuolar reserve carbohydrates with drought, cold, ROS and general abiotic stress mediating properties. They occur in 15% of all flowering plants and are believed to display health benefits as a prebiotic and dietary fiber. Fructans are synthesized by specific fructosyltransferases and classified based on the linkage type between fructosyl units. Inulins, one of these fructan types with β(2-1) linkages, are elongated by fructan:fructan 1-fructosyltransferases (1-FFT) using a fructosyl unit from a donor inulin to elongate the acceptor inulin molecule. The sequence identity of the 1-FFT of Viguiera discolor (Vd) and Helianthus tuberosus (Ht) is 91% although these enzymes produce distinct fructans. The Vd 1-FFT produces high degree of polymerization (DP) inulins by preferring the elongation of long chain inulins, in contrast to the Ht 1-FFT which prefers small molecules (DP3 or 4) as acceptor. Since higher DP inulins have interesting properties for industrial, food and medical applications, we report here on the influence of two amino acids on the high DP inulin production capacity of the Vd 1-FFT. Introducing the M19F and H308T mutations in the active site of the Vd 1-FFT greatly reduces its capacity to produce high DP inulin molecules. Both amino acids can be considered important to this capacity, although the double mutation had a much higher impact than the single mutations. PMID:26322058


  5. 2D-FFT implementation on FPGA for wavefront phase recovery from the CAFADIS camera

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, J. M.; Magdaleno Castelló, E.; Domínguez Conde, C.; Rodríguez Valido, M.; Marichal-Hernández, J. G.

    2008-07-01

    The CAFADIS camera is a new sensor patented by Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can measure the wavefront phase and the distance to the light source at the same time in a real-time process. It uses specialized hardware: Graphical Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs). Both kinds of hardware present architectures capable of handling the sensor output stream in a massively parallel fashion. FPGAs are faster than GPUs, which is why FPGA integer arithmetic is worth using instead of GPU floating-point arithmetic. GPUs must not be forgotten: as we have shown in previous papers, they are efficient enough to solve several AO problems for Extremely Large Telescopes (ELTs) in terms of processing-time requirements, and they show a widening gap in computing speed relative to CPUs, making them much more powerful for AO simulation than common software packages running on CPUs. This paper shows an FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera. This is done in two steps: estimation of the telescope pupil gradients from the telescope focus image, and then a novel 2D-FFT on the FPGA. Processing times are compared to our GPU implementation. In effect, we compare the two kinds of arithmetic mentioned above, helping to answer the question of the viability of FPGAs for AO in the ELTs.
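The Fourier-domain step of recovering a wavefront phase from measured pupil gradients can be sketched as below. This shows the general FFT-based least-squares integration technique in floating point, assuming periodic boundaries; the paper's FPGA fixed-point datapath and CAFADIS-specific gradient estimation are not reproduced.

```python
import numpy as np

def phase_from_gradients(gx, gy):
    """Least-squares wavefront reconstruction from x/y gradients via 2D-FFT:
    transform the gradients, divide by the spectral derivative operators,
    and inverse-transform. Gradients are per-sample finite derivatives and
    the grid is assumed periodic."""
    ny, nx = gx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx)        # spectral d/dx operator
    ky = 2j * np.pi * np.fft.fftfreq(ny)        # spectral d/dy operator
    KX, KY = np.meshgrid(kx, ky)
    denom = KX * np.conj(KX) + KY * np.conj(KY)
    denom[0, 0] = 1.0                           # piston (mean) is unobservable
    num = np.conj(KX) * np.fft.fft2(gx) + np.conj(KY) * np.fft.fft2(gy)
    return np.real(np.fft.ifft2(num / denom))
```

The recovered phase is defined only up to a constant (piston), so comparisons should be made after removing the mean.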

  6. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  7. Fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave and free-space-optics architecture with an adaptive diversity combining technique.

    PubMed

    Zhang, Junwen; Wang, Jing; Xu, Yuming; Xu, Mu; Lu, Feng; Cheng, Lin; Yu, Jianjun; Chang, Gee-Kung

    2016-05-01

    We propose and experimentally demonstrate a novel fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave (MMW) and free-space-optics (FSO) architecture using an adaptive combining technique. Both 60 GHz MMW and FSO links are demonstrated and fully integrated with optical fibers in a scalable and cost-effective backhaul system setup. Joint signal processing with an adaptive diversity combining technique (ADCT) is utilized at the receiver side based on a maximum ratio combining algorithm. Mobile backhaul transportation of 4-Gb/s 16 quadrature amplitude modulation orthogonal frequency-division multiplexing (QAM-OFDM) data is experimentally demonstrated and tested under various weather conditions synthesized in the lab. Performance improvement in terms of reduced error vector magnitude (EVM) and enhanced link reliability are validated under fog, rain, and turbulence conditions. PMID:27128036
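The maximum-ratio-combining step at the heart of the adaptive diversity technique can be sketched per symbol as below. Channel gains, noise levels, and the flat-fading single-carrier model are illustrative assumptions; the demonstrated system applies this per OFDM subcarrier after joint processing.

```python
import numpy as np

def mrc_combine(y_mmw, y_fso, h_mmw, h_fso, noise_var=1.0):
    """Maximum ratio combining of the two parallel links: weight each branch
    by its conjugate channel gain over the noise variance, so the branch
    with the better instantaneous SNR dominates, then normalize."""
    w1 = np.conj(h_mmw) / noise_var
    w2 = np.conj(h_fso) / noise_var
    return (w1 * y_mmw + w2 * y_fso) / (w1 * h_mmw + w2 * h_fso)
```

The combined post-detection SNR is the sum of the branch SNRs, which is why the combined estimate beats either link equalized on its own (e.g. when fog degrades the FSO branch).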

  8. Architecture as a Metaphor for Development of Culturally Adapted Minority Education Programs. Final Report of the Regional Study Award Project.

    ERIC Educational Resources Information Center

    Klein, Thomas W.

    The use of metaphor offers considerable promise within scientific disciplines for encouraging creative thinking, drawing attention to important concepts or principles in a discipline, and developing new methodologies. Ferguson's description of architectural practice as a metaphor for the development of social programs is used to demonstrate these…

  9. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  10. FFT applications to plane-polar near-field antenna measurements

    NASA Technical Reports Server (NTRS)

    Gatti, Mark S.; Rahmat-Samii, Yahya

    1988-01-01

    The four-point bivariate Lagrange interpolation algorithm was applied to near-field antenna data measured in a plane-polar facility. The results were sufficiently accurate to permit the use of the FFT (fast Fourier transform) algorithm to calculate the far-field patterns of the antenna. Good agreement was obtained between the far-field patterns as calculated by the Jacobi-Bessel and the FFT algorithms. The significant advantage in using the FFT is in the calculation of the principal plane cuts, which may be made very quickly. Also, the application of the FFT algorithm directly to the near-field data was used to perform surface holographic diagnosis of a reflector antenna. The effects due to the focusing of the emergent beam from the reflector, as well as the effects of the information in the wide-angle regions, are shown. The use of the plane-polar near-field antenna test range has therefore been expanded to include these useful FFT applications.
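Once the plane-polar samples have been interpolated onto a regular Cartesian grid, the far-field step reduces to a zero-padded 2-D FFT of the aperture field (the plane-wave spectrum). The sketch below illustrates only that FFT step under assumed sampling parameters; the interpolation and probe correction are omitted.

```python
import numpy as np

def far_field_pattern(e_near, dx, wavelength, pad=4):
    """Plane-wave spectrum of regularly sampled planar near-field data:
    a zero-padded 2-D FFT. Each spectral bin (kx, ky) inside the visible
    region |k_t| <= k0 = 2*pi/wavelength maps to a far-field direction."""
    n = pad * max(e_near.shape)                   # zero-pad for angular detail
    spectrum = np.fft.fftshift(np.fft.fft2(e_near, s=(n, n)))
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, dx))  # kx (= ky) axis
    return spectrum, k
```

For a uniformly illuminated aperture the pattern peaks at broadside, i.e. at the centre bin of the shifted spectrum.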

  11. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
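The polyphase front end that replaces the multiplicative window can be sketched as follows, in floating-point NumPy for clarity (the hardware described uses fixed-point arithmetic); the prototype filter design, tap counts, and channel count are illustrative assumptions:

```python
import numpy as np

def polyphase_fft(x, n_ch=16, taps_per_ch=8):
    """Polyphase-FFT channelizer: each output frame weights n_ch*taps_per_ch
    input samples by a prototype low-pass filter, folds them into n_ch
    polyphase branch sums, and FFTs the result. This replaces time-domain
    window preprocessing and reduces the scalloping loss of windowed FFTs."""
    n_taps = n_ch * taps_per_ch
    proto = (np.sinc(np.arange(n_taps) / n_ch - taps_per_ch / 2)
             * np.hamming(n_taps))                 # prototype low-pass filter
    n_frames = (len(x) - n_taps) // n_ch + 1
    out = np.empty((n_frames, n_ch), dtype=complex)
    for i in range(n_frames):
        seg = x[i * n_ch : i * n_ch + n_taps]
        # fold the weighted segment into n_ch polyphase branch sums
        pre = (seg * proto).reshape(taps_per_ch, n_ch).sum(axis=0)
        out[i] = np.fft.fft(pre)
    return out
```

A tone centred on channel c concentrates its energy in output bin c, with far lower leakage into neighbouring channels than an unwindowed FFT of the same length.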

  12. An analysis of the double-precision floating-point FFT on FPGAs.

    SciTech Connect

    Hemmert, K. Scott; Underwood, Keith Douglas

    2005-01-01

    Advances in FPGA technology have led to dramatic improvements in double precision floating-point performance. Modern FPGAs boast several GigaFLOPs of raw computing power. Unfortunately, this computing power is distributed across 30 floating-point units with over 10 cycles of latency each. The user must find two orders of magnitude more parallelism than is typically exploited in a single microprocessor; thus, it is not clear that the computational power of FPGAs can be exploited across a wide range of algorithms. This paper explores three implementation alternatives for the fast Fourier transform (FFT) on FPGAs. The algorithms are compared in terms of sustained performance and memory requirements for various FFT sizes and FPGA sizes. The results indicate that FPGAs are competitive with microprocessors in terms of performance and that the 'correct' FFT implementation varies based on the size of the transform and the size of the FPGA.

  13. Pipelined digital SAR azimuth correlator using hybrid FFT-transversal filter

    NASA Technical Reports Server (NTRS)

    Wu, C.; Liu, K. Y. (Inventor)

    1984-01-01

    A synthetic aperture radar system (SAR) having a range correlator is provided with a hybrid azimuth correlator which utilizes a block-pipelined fast Fourier transform (FFT). The correlator has a predetermined FFT transform size with delay elements for delaying SAR range correlated data so as to embed in the Fourier transform operation a corner-turning function as the range correlated SAR data is converted from the time domain to a frequency domain. The azimuth correlator is comprised of a transversal filter to receive the SAR data in the frequency domain, a generator for range migration compensation and azimuth reference functions, and an azimuth reference multiplier for correlation of the SAR data. Following the transversal filter is a block-pipelined inverse FFT used to restore azimuth correlated data in the frequency domain to the time domain for imaging.

  14. Optical ranging and communication method based on all-phase FFT

    NASA Astrophysics Data System (ADS)

    Li, Zening; Chen, Gang

    2014-10-01

    This paper describes an optical ranging and communication method based on the all-phase fast Fourier transform (FFT). The system is mainly designed for vehicle safety applications. In particular, the phase shift of the reflected orthogonal frequency division multiplexing (OFDM) symbol is measured to determine the signal time of flight, and the distance is then calculated from the time of flight. Several key factors affecting the phase measurement accuracy are studied. The all-phase FFT, which can reduce the effects of frequency offset, phase noise and inter-carrier interference (ICI), is applied to measure the OFDM symbol phase shift.
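
    The ranging principle, recovering a delay from the phase shift of a received signal, can be sketched with a plain FFT when the tone sits exactly on a bin; the paper's all-phase FFT relaxes exactly that on-bin requirement. All frequencies and delays below are arbitrary assumptions:

```python
import numpy as np

C = 3e8                    # propagation speed, m/s
fs, N = 1.0e6, 1000        # sample rate and FFT length (assumed)
k = 10                     # tone placed exactly on FFT bin k
f = k * fs / N             # 10 kHz
tau = 5e-6                 # true round-trip delay to recover, s

n = np.arange(N)
ref = np.cos(2 * np.pi * f * n / fs)           # transmitted reference
echo = np.cos(2 * np.pi * f * (n / fs - tau))  # delayed reflection

# The phase difference at bin k encodes the time of flight.
dphi = np.angle(np.fft.fft(ref)[k]) - np.angle(np.fft.fft(echo)[k])
tau_est = dphi / (2 * np.pi * f)
dist = C * tau_est / 2                         # one-way range: 750 m
```

    Off-bin tones leak phase error into neighboring bins, which is the effect the all-phase FFT is designed to suppress.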

  15. A FFT-based formulation for efficient mechanical fields computation in isotropic and anisotropic periodic discrete dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Bertin, N.; Upadhyay, M. V.; Pradalier, C.; Capolungo, L.

    2015-09-01

    In this paper, we propose a novel full-field approach based on the fast Fourier transform (FFT) technique to compute mechanical fields in periodic discrete dislocation dynamics (DDD) simulations for anisotropic materials: the DDD-FFT approach. By coupling the FFT-based approach to the discrete continuous model, the present approach benefits from the high computational efficiency of the FFT algorithm, while allowing for a discrete representation of dislocation lines. It is demonstrated that the computational time associated with the new DDD-FFT approach is significantly lower than that of current DDD approaches when large numbers of dislocation segments are involved, for both isotropic and anisotropic elasticity. Furthermore, for fine Fourier grids, the treatment of anisotropic elasticity comes at a computational cost similar to that of an isotropic simulation. Thus, the proposed approach paves the way towards achieving scale transition from DDD to mesoscale plasticity, especially due to the method's ability to incorporate inhomogeneous elasticity.
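
    The core pattern behind any FFT-based full-field solver — transform, divide by the operator's Fourier symbol, transform back — can be shown on a toy periodic problem. The sketch below is that generic pattern applied to a periodic Poisson equation, not the DDD-FFT mechanical formulation itself:

```python
import numpy as np

def solve_periodic_poisson(f, L=1.0):
    """Solve laplacian(u) = f with periodic boundary conditions via FFT."""
    N = f.shape[0]
    kf = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    KX, KY = np.meshgrid(kf, kf, indexing="ij")
    denom = -(KX**2 + KY**2)
    denom[0, 0] = 1.0            # avoid 0/0; the mean of u is fixed below
    u_hat = np.fft.fft2(f) / denom
    u_hat[0, 0] = 0.0            # set the arbitrary mean of u to zero
    return np.fft.ifft2(u_hat).real

# Manufactured solution: u = sin(2*pi*x) * cos(4*pi*y).
N = 64
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
f = -((2 * np.pi) ** 2 + (4 * np.pi) ** 2) * u_exact   # laplacian of u
u = solve_periodic_poisson(f)
```

    Because the solve reduces to pointwise division in Fourier space, its cost is dominated by the FFTs — the efficiency that the DDD-FFT approach inherits for the long-range elastic fields.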

  16. The Fun30 Chromatin Remodeler Fft3 Controls Nuclear Organization and Chromatin Structure of Insulators and Subtelomeres in Fission Yeast

    PubMed Central

    Khorosjutina, Olga; Persson, Jenna; Smialowska, Agata; Javerzat, Jean-Paul; Ekwall, Karl

    2015-01-01

    In eukaryotic cells, local chromatin structure and chromatin organization in the nucleus both influence transcriptional regulation. At the local level, the Fun30 chromatin remodeler Fft3 is essential for maintaining proper chromatin structure at centromeres and subtelomeres in fission yeast. Using genome-wide mapping and live cell imaging, we show that this role is linked to controlling nuclear organization of its targets. In fft3∆ cells, subtelomeres lose their association with the LEM domain protein Man1 at the nuclear periphery and move to the interior of the nucleus. Furthermore, genes in these domains are upregulated and active chromatin marks increase. Fft3 is also enriched at retrotransposon-derived long terminal repeat (LTR) elements and at tRNA genes. In cells lacking Fft3, these sites lose their peripheral positioning and show reduced nucleosome occupancy. We propose that Fft3 has a global role in mediating association between specific chromatin domains and the nuclear envelope. PMID:25798942

  17. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

    Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).

  18. Adaptive and Speculative Memory Consistency Support for Multi-core Architectures with On-Chip Local Memories

    NASA Astrophysics Data System (ADS)

    Vujic, Nikola; Alvarez, Lluc; Tallada, Marc Gonzalez; Martorell, Xavier; Ayguadé, Eduard

    Software caching has been shown to be a robust approach in multi-core systems with no hardware support for transparent data transfers between local and global memories. A software cache provides the user with a transparent view of the memory architecture and considerably improves the programmability of such systems. But this software approach can suffer from poor performance due to the considerable overhead of the software mechanisms that maintain memory consistency. This paper presents a set of alternatives to reduce their impact. A specific write-back mechanism is introduced, based on some degree of speculation regarding the number of threads actually modifying the same cache lines. A case study based on the Cell BE processor is described. Performance evaluation indicates that improvements due to the optimized software-cache structures combined with the proposed code optimizations translate into 20% to 40% speedup factors, compared to a traditional software cache approach.

  19. Mimicking the End Organ Architecture of Slowly Adapting Type I Afferents May Increase the Durability of Artificial Touch Sensors

    PubMed Central

    Lesniak, Daine R.; Gerling, Gregory J.

    2015-01-01

    In an effort to mimic the sensitivity and efficient information transfer of natural tactile afferents, recent work has combined force transducers and computational models of mechanosensitive afferents. Sensor durability, another feature important to sensor design, might similarly capitalize upon biological rules. In particular, gains in sensor durability might leverage insight from the compound end organ of the slowly adapting type I afferent, especially its multiple sites of spike initiation that reset each other. This work develops models of compound spiking sensors using a computational network of transduction functions and leaky integrate-and-fire models (together a spike encoder, the software element of a compound spiking sensor), informed by the output of an existing force transducer (hardware sensing elements of a compound spiking sensor). Individual force transducer failures are simulated with and without resetting between spike encoders to test the importance of both resetting and configuration on system durability. The results indicate that the resetting of adjacent spike encoders, upon the firing of a spike by any one, is an essential mechanism to maintain a stable overall response in the midst of transducer failure. Furthermore, results suggest that when resetting is enabled, the durability of a compound sensor is maximized when individual transducers are paired with spike encoders and multiple, paired units are employed. To explore these ideas more fully, use cases examine the design of a compound sensor to either reach a target lifetime with a set probability or determine how often to schedule maintenance to control the probability of failure. PMID:25705703

  20. Billiard simulation and FFT analysis of AAS oscillations in nanofabricated InGaAs

    NASA Astrophysics Data System (ADS)

    Koga, Takaaki; Faniel, Sebastien; Mineshige, Shunsuke; Matsuura, Toru; Sekine, Yoshiaki

    2010-03-01

    The gate-voltage-dependent amplitude of the magneto-conductance oscillation was analyzed using the FFT method. The obtained FFT spectrum was compared with the areal dependence of the occurrence and spin interference amplitude, calculated for Altshuler-Aronov-Spivak (AAS) type time-reversal pairs of interference paths on all possible classical trajectories obtained by extensive billiard simulations within the given structures. We have calculated generic spin interference (SI) curves as a function of the Rashba parameter α, for various values of the Dresselhaus parameter b41^6c6c [eVÅ^3]. The comparison between theory and experiment suggested that the value of b41^6c6c should be considerably reduced from 27 eVÅ^3, the generally accepted value from k.p theory.

  1. An interference monitor with real-time FFT spectral analysis for a radio observatory

    NASA Astrophysics Data System (ADS)

    Romalo, David N.; Dewdney, Peter E.; Landecker, Thomas L.; Ito, Mabo R.

    1989-08-01

    A system is described which uses a real-time FFT spectrum analyzer to monitor radio interference near 408 MHz occurring at a radio observatory. Direction of arrival, frequency, intensity, and time of occurrence are recorded under the control of a microcomputer. A sensitive receiver can be connected to any one of eight directional antennas to establish direction of arrival. The receiver output is digitized to 8 bits and analyzed by the FFT spectrum analyzer which has a real-time bandwidth of 0.5 MHz. A total bandwidth of 20 MHz is analyzed in segments of 0.4 MHz. The analyzer uses the modified periodogram method developed by Welch (1967), and a Kaiser-Bessel windowing function is applied to ensure low sidelobes. Dynamic range is 40 dB, and the interference monitor obtains high sensitivity to very weak interfering signals by time averaging.

  2. Non-uniform MR image reconstruction based on non-uniform FFT

    NASA Astrophysics Data System (ADS)

    Liang, Xiao-yun; Zeng, Wei-ming; Dong, Zhi-hua; Zhang, Zhi-jiang; Luo, Li-min

    2007-01-01

    A Non-Uniform Fast Fourier Transform (NUFFT) based method for non-Cartesian k-space data reconstruction is presented. Cartesian k-space data can be reconstructed directly with the 2D FFT, but Cartesian sampling has well-known drawbacks that have motivated non-Cartesian acquisition schemes, which are the focus of this work. The most straightforward approach for reconstructing non-Cartesian data is direct Fourier summation; however, its computational cost is usually much greater than that of an approach using the efficient FFT. Since the FFT requires data sampled on a uniform Cartesian grid in k-space, a NUFFT-based method is of great practical importance. Finally, experimental results comparing the method with an existing method are given.
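
    The direct Fourier summation contrasted with the FFT above can be written in a few lines; on uniformly spaced samples it reduces exactly to the FFT, which is why gridding/NUFFT methods try to recover FFT efficiency for non-uniform samples. This is a generic sketch, not the authors' reconstruction:

```python
import numpy as np

def direct_fourier_sum(positions, values, freqs):
    """O(N*M) summation F(f) = sum_j values[j] * exp(-2*pi*i*f*positions[j]).

    Works for arbitrary (non-Cartesian) sample positions, at quadratic
    cost; the FFT achieves O(N log N) but only for uniform positions.
    """
    return np.array([np.sum(values * np.exp(-2j * np.pi * f * positions))
                     for f in freqs])

rng = np.random.default_rng(0)
vals = rng.standard_normal(64)
uniform_x = np.arange(64) / 64          # uniform grid: direct sum == FFT
F_direct = direct_fourier_sum(uniform_x, vals, np.arange(64))
print(np.allclose(F_direct, np.fft.fft(vals)))   # True
```

    A NUFFT replaces the quadratic summation with interpolation onto an oversampled uniform grid followed by an ordinary FFT.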

  3. Perspectives in magnetic resonance: NMR in the post-FFT era

    NASA Astrophysics Data System (ADS)

    Hyberts, Sven G.; Arthanari, Haribabu; Robson, Scott A.; Wagner, Gerhard

    2014-04-01

    Multi-dimensional NMR spectra have traditionally been processed with the fast Fourier transformation (FFT). The availability of high field instruments, the complexity of spectra of large proteins, the narrow signal dispersion of some unstructured proteins, and the time needed to record the necessary increments in the indirect dimensions to exploit the resolution of the high-field instruments make this traditional approach unsatisfactory. New procedures need to be developed beyond uniform sampling of the indirect dimensions and reconstruction methods other than the straight FFT are necessary. Here we discuss approaches of non-uniform sampling (NUS) and suitable reconstruction methods. We expect that such methods will become standard for multi-dimensional NMR data acquisition with complex biological macromolecules and will dramatically enhance the power of modern biological NMR.

  4. Perspectives in magnetic resonance: NMR in the post-FFT era.

    PubMed

    Hyberts, Sven G; Arthanari, Haribabu; Robson, Scott A; Wagner, Gerhard

    2014-04-01

    Multi-dimensional NMR spectra have traditionally been processed with the fast Fourier transformation (FFT). The availability of high field instruments, the complexity of spectra of large proteins, the narrow signal dispersion of some unstructured proteins, and the time needed to record the necessary increments in the indirect dimensions to exploit the resolution of the high-field instruments make this traditional approach unsatisfactory. New procedures need to be developed beyond uniform sampling of the indirect dimensions and reconstruction methods other than the straight FFT are necessary. Here we discuss approaches of non-uniform sampling (NUS) and suitable reconstruction methods. We expect that such methods will become standard for multi-dimensional NMR data acquisition with complex biological macromolecules and will dramatically enhance the power of modern biological NMR. PMID:24656081

  5. Fitting FFT-derived spectra: Theory, tool, and application to solar radio spike decomposition

    SciTech Connect

    Nita, Gelu M.; Fleishman, Gregory D.; Gary, Dale E.; Marin, William; Boone, Kristine

    2014-07-10

    Spectra derived from fast Fourier transform (FFT) analysis of time-domain data intrinsically contain statistical fluctuations whose distribution depends on the number of accumulated spectra contributing to a measurement. The tail of this distribution, which is essential for separating the true signal from the statistical fluctuations, deviates noticeably from the normal distribution for a finite number of accumulations. In this paper, we develop a theory to properly account for the statistical fluctuations when fitting a model to a given accumulated spectrum. The method is implemented in software for the purpose of automatically fitting a large body of such FFT-derived spectra. We apply this tool to analyze a portion of a dense cluster of spikes recorded by our FASR Subsystem Testbed instrument during a record-breaking event that occurred on 2006 December 6. The outcome of this analysis is briefly discussed.

  6. Perspectives in Magnetic Resonance: NMR in the Post-FFT Era

    PubMed Central

    Hyberts, Sven G.; Arthanari, Haribabu; Robson, Scott A.; Wagner, Gerhard

    2014-01-01

    Multi-dimensional NMR spectra have traditionally been processed with the fast Fourier transformation (FFT). The availability of high field instruments, the complexity of spectra of large proteins, the narrow signal dispersion of some unstructured proteins, and the time needed to record the necessary increments in the indirect dimensions to exploit the resolution of the high-field instruments make this traditional approach unsatisfactory. New procedures need to be developed beyond uniform sampling of the indirect dimensions and reconstruction methods other than the straight FFT are necessary. Here we discuss approaches of non-uniform sampling (NUS) and suitable reconstruction methods. We expect that such methods will become standard for multi-dimensional NMR data acquisition with complex biological macromolecules and will dramatically enhance the power of modern biological NMR. PMID:24656081

  7. Parallel implementation of 3D FFT with volumetric decomposition schemes for efficient molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Imamura, Toshiyuki; Sugita, Yuji

    2016-03-01

    Three-dimensional Fast Fourier Transform (3D FFT) plays an important role in a wide variety of computer simulations and data analyses, including molecular dynamics (MD) simulations. In this study, we develop hybrid (MPI+OpenMP) parallelization schemes of 3D FFT based on two new volumetric decompositions, mainly for the particle mesh Ewald (PME) calculation in MD simulations. In one scheme, (1d_Alltoall), five all-to-all communications in one dimension are carried out, and in the other, (2d_Alltoall), one two-dimensional all-to-all communication is combined with two all-to-all communications in one dimension. 2d_Alltoall is similar to the conventional volumetric decomposition scheme. We performed benchmark tests of 3D FFT for the systems with different grid sizes using a large number of processors on the K computer in RIKEN AICS. The two schemes show comparable performances, and are better than existing 3D FFTs. The performances of 1d_Alltoall and 2d_Alltoall depend on the supercomputer network system and number of processors in each dimension. There is enough leeway for users to optimize performance for their conditions. In the PME method, short-range real-space interactions as well as long-range reciprocal-space interactions are calculated. Our volumetric decomposition schemes are particularly useful when used in conjunction with the recently developed midpoint cell method for short-range interactions, due to the same decompositions of real and reciprocal spaces. The 1d_Alltoall scheme of 3D FFT takes 4.7 ms to simulate one MD cycle for a virus system containing more than 1 million atoms using 32,768 cores on the K computer.
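
    The role of the all-to-all steps is easiest to see in a serial sketch: a 3D FFT is three sweeps of 1D FFTs, one axis at a time, and in a volumetric decomposition the data must be repartitioned between sweeps so that each rank holds complete pencils along the axis being transformed. A serial numpy illustration (the communication itself is omitted):

```python
import numpy as np

def fft3d_axis_by_axis(a):
    """3D FFT computed as three sweeps of 1D FFTs.

    In a parallel volumetric decomposition, an all-to-all exchange sits
    between sweeps; the arithmetic below is unchanged.
    """
    for axis in range(3):
        a = np.fft.fft(a, axis=axis)
    return a

grid = np.random.default_rng(1).standard_normal((8, 8, 8))
print(np.allclose(fft3d_axis_by_axis(grid), np.fft.fftn(grid)))   # True
```

    The 1d_Alltoall and 2d_Alltoall schemes differ only in how those repartitioning exchanges are grouped, not in the transform arithmetic.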

  8. Isotropic Spin Trap EPR Spectra Simulation by Fast Fourier Transform (FFT)

    NASA Astrophysics Data System (ADS)

    Laachir, S.; Moussetad, M.; Adhiri, R.; Fahli, A.

    2005-03-01

    The detection and investigation of free radicals forming in living systems became possible with the introduction of the spin-trap method. In this work, the electron spin resonance (ESR) spectra of DMPO/HO(.) and MGD-Fe-NO adducts are reproduced by simulation based on the fast Fourier transform (FFT). The calculated spectral parameters, such as the hyperfine coupling constants, agree reasonably well with the experimental data, and the results are discussed.

  9. FPGA-based design of FFT processor and optimization of window-adding

    NASA Astrophysics Data System (ADS)

    Kai, Pan; Song, Jie; Zhong, Qing

    2015-12-01

    A method of implementing the FFT based on an FPGA IP core is introduced in this paper. In addition, to address the spectrum leakage caused by truncation under non-integer-period sampling, an improved method of applying a window to the input signal to restrain the leakage is proposed. The design was simulated in the Matlab environment. The results show that the proposed method achieves good performance with some improvement.

  10. Numerical evaluation of the radiation from unbaffled, finite plates using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.

    1983-01-01

    An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.
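
    The angular-spectrum evaluation of Rayleigh's formula that the technique builds on can be sketched directly: FFT the aperture field, advance each plane-wave component by exp(i*kz*z), and inverse FFT. This is only the generic propagation step, not the paper's iteration for unbaffled plates; the grid and wavelength values are arbitrary assumptions:

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a sampled 2D field a distance z via the angular spectrum.

    Components with kx^2 + ky^2 > k^2 are evanescent and decay
    exponentially; the complex square root handles them automatically.
    """
    N = p0.shape[0]
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(N, dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))

# Sanity check: a uniform field is a single on-axis plane wave, so
# propagating it a distance z just multiplies it by exp(i*k*z).
wavelength, dx, z = 0.01, 0.005, 0.05
p0 = np.ones((32, 32), dtype=complex)
p1 = angular_spectrum_propagate(p0, dx, wavelength, z)
```

    The two FFTs replace the direct evaluation of Rayleigh's integral at every field point, which is the source of the paper's reported savings in computation time.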

  11. Application to induction motor faults diagnosis of the amplitude recovery method combined with FFT

    NASA Astrophysics Data System (ADS)

    Liu, Yukun; Guo, Liwei; Wang, Qixiang; An, Guoqing; Guo, Ming; Lian, Hao

    2010-11-01

    This paper presents a signal processing method - amplitude recovery method (abbreviated to ARM) - that can be used as the signal pre-processing for fast Fourier transform (FFT) in order to analyze the spectrum of the other-order harmonics rather than the fundamental frequency in stator currents and diagnose subtle faults in induction motors. In this situation, the ARM functions as a filter that can filter out the component of the fundamental frequency from three phases of stator currents of the induction motor. The filtering result of the ARM can be provided to FFT to do further spectrum analysis. In this way, the amplitudes of other-order frequencies can be extracted and analyzed independently. If the FFT is used without the ARM pre-processing and the components of other-order frequencies, compared to the fundamental frequency, are fainter, the amplitudes of other-order frequencies are not able easily to extract out from stator currents. The reason is when the FFT is used direct to analyze the original signal, all the frequencies in the spectrum analysis of original stator current signal have the same weight. The ARM is capable of separating the other-order part in stator currents from the fundamental-order part. Compared to the existent digital filters, the ARM has the benefits, including its stop-band narrow enough just to stop the fundamental frequency, its simple operations of algebra and trigonometry without any integration, and its deduction direct from mathematics equations without any artificial adjustment. The ARM can be also used by itself as a coarse-grained diagnosis of faults in induction motors when they are working. These features can be applied to monitor and diagnose the subtle faults in induction motors to guard them from some damages when they are in operation. The diagnosis application of ARM combined with FFT is also displayed in this paper with the experimented induction motor. The test results verify the rationality and feasibility of the

  12. High scalable implementation of SPME using parallel spherical cutoff three-dimensional FFT on the six-dimensional torus QCDOC supercomputer

    NASA Astrophysics Data System (ADS)

    Fang, Bin

    In order to model complex heterogeneous biophysical systems with non-trivial charge distributions, such as globular proteins in water, it is important to evaluate the long-range forces present in these systems accurately and efficiently. The Smooth Particle Mesh Ewald summation technique (SPME) is commonly employed to determine the long-range part of the electrostatic energy in large-scale molecular simulations. While the SPME technique does not give rise to a performance bottleneck in a single-processor or scalar computation, current implementations of SPME on massively parallel supercomputers become problematic at large processor counts, limiting the time and length scales that can be reached. This dissertation makes two contributions that improve both accuracy and efficiency on massively parallel computing platforms. First, a well-designed parallel framework of 3D complex-to-complex FFT and 3D real-to-complex FFT for the novel QCDOC supercomputer with its 6D-torus architecture is given. The efficiency of this framework was tested on up to 4096 processors. Second, a new modification of the SPME technique is exploited, inspired by the non-linear growth of the approximation error of the Euler exponential spline interpolation function. This fine-grained parallel implementation of SPME has been embedded into the MDoC package. Numerical tests of package performance on the up to 1024-processor QCDOC supercomputer residing at Brookhaven National Lab are presented for two systems of interest: beta-hairpin solvated in explicit water, a system which consists of 1112 water molecules and a 20-residue protein for a total of 3579 atoms, and HIV-1 protease solvated in explicit water, a system which consists of 8793 water molecules and a 198-residue protein for a total of 29508 atoms.

  13. Telescope Adaptive Optics Code

    SciTech Connect

    Phillion, D.

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
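
    The FFT phase-screen generation step mentioned above follows a standard recipe: shape complex white noise by the square root of the Kolmogorov power spectrum and inverse-FFT. The sketch below is a generic version with an approximate normalization, not the code's actual implementation; as the abstract notes, the lowest spatial frequencies come out under-represented and need augmentation (e.g. Karhunen-Loeve modes):

```python
import numpy as np

def kolmogorov_screen(N, dx, r0, seed=0):
    """FFT-based Kolmogorov phase screen (approximate scaling, illustrative).

    PSD ~ 0.023 * r0**(-5/3) * f**(-11/3); the piston (f = 0) term is
    removed, and the overall normalization here is only approximate.
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(N, dx)
    f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    f[0, 0] = np.inf                        # sends the piston PSD to zero
    psd = 0.023 * r0 ** (-5 / 3) * f ** (-11 / 3)
    noise = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    df = 1.0 / (N * dx)                     # frequency-grid spacing
    return np.fft.ifft2(noise * np.sqrt(psd) * df * N * N).real

phz = kolmogorov_screen(128, dx=0.1, r0=0.2)   # 12.8 m grid, r0 = 20 cm
```

    Because the FFT imposes periodicity at the grid scale, low-order modes (tip/tilt in particular) are under-sampled, which is why the code adds them back via the Karhunen-Loeve expansion.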

  14. Terrain-Adaptive Navigation Architecture

    NASA Technical Reports Server (NTRS)

    Helmick, Daniel M.; Angelova, Anelia; Matthies, Larry H.; Helmick, Daniel M.

    2008-01-01

    A navigation system for a Mars rover has been designed to deal with rough terrain and/or potential slip when evaluating and executing paths. The system also can be used for any off-road, autonomous vehicles. The system enables vehicles to autonomously navigate different terrain challenges including dry river channel systems, putative shorelines, and gullies emanating from canyon walls. Several of the technologies within this innovation increase the navigation system's capabilities compared to earlier rover navigation algorithms.

  15. AUTOMATIC GENERATION OF FFT FOR TRANSLATIONS OF MULTIPOLE EXPANSIONS IN SPHERICAL HARMONICS

    PubMed Central

    Mirkovic, Dragan; Pettitt, B. Montgomery; Johnsson, S. Lennart

    2009-01-01

    The fast multipole method (FMM) is an efficient algorithm for calculating electrostatic interactions in molecular simulations and a promising alternative to Ewald summation methods. Translation of multipole expansions in spherical harmonics is the most important operation of the fast multipole method and the fast Fourier transform (FFT) acceleration of this operation is among the fastest methods of improving its performance. The technique relies on highly optimized implementation of fast Fourier transform routines for the desired expansion sizes, which need to incorporate the knowledge of symmetries and zero elements in the input arrays. Here a method is presented for automatic generation of such highly optimized routines. PMID:19763233

  16. Non-uniform FFT for the finite element computation of the micromagnetic scalar potential

    NASA Astrophysics Data System (ADS)

    Exl, L.; Schrefl, T.

    2014-08-01

    We present a quasi-linearly scaling, first order polynomial finite element method for the solution of the magnetostatic open boundary problem by splitting the magnetic scalar potential. The potential is determined by solving a Dirichlet problem and evaluation of the single layer potential by a fast approximation technique based on Fourier approximation of the kernel function. The latter approximation leads to a generalization of the well-known convolution theorem used in finite difference methods. We address it by a non-uniform FFT approach. Overall, our method scales as O(M + N + N log N) for N nodes and M surface triangles. We confirm our approach by several numerical tests.

  17. Analysis of fixed point FFT for Fourier domain optical coherence tomography systems.

    PubMed

    Ali, Murtaza; Parlapalli, Renuka; Magee, David P; Dasgupta, Udayan

    2009-01-01

    Optical coherence tomography (OCT) is a new imaging modality gaining popularity in the medical community. Its applications include ophthalmology, gastroenterology, dermatology, etc. As the use of OCT increases, the need for portable, low-power devices also increases. Digital signal processors (DSPs) are well suited to meet the signal processing requirements of such a system. These processors usually operate in fixed precision. This paper analyzes the issues that a system implementer faces when implementing signal processing algorithms on a fixed-point processor. Specifically, we show the effect of different fixed-point precisions in the implementation of the FFT on the sensitivity of Fourier domain OCT systems. PMID:19965018
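
    The sensitivity impact of limited precision can be illustrated by quantizing the FFT input and measuring spectral SNR against a double-precision reference. This is a simplified model of input quantization only; real fixed-point DSP implementations also accumulate twiddle-factor and accumulator rounding, which the sketch ignores:

```python
import numpy as np

def quantize(x, bits):
    """Round to a signed fixed-point grid covering [-1, 1)."""
    scale = 2.0 ** (bits - 1)
    return np.round(np.clip(x, -1.0, 1.0 - 1.0 / scale) * scale) / scale

rng = np.random.default_rng(0)
n = np.arange(2048)
# Toy interferogram: two fringe frequencies plus noise, scaled into [-1, 1).
x = 0.5 * np.cos(2 * np.pi * 0.031 * n) + 0.2 * np.cos(2 * np.pi * 0.117 * n)
x = x + 0.01 * rng.standard_normal(n.size)
x /= np.max(np.abs(x)) * 1.01

X = np.fft.fft(x)

def fft_snr_db(bits):
    # Spectral SNR of the quantized input relative to the float reference.
    err = np.fft.fft(quantize(x, bits)) - X
    return 10 * np.log10(np.sum(np.abs(X) ** 2) / np.sum(np.abs(err) ** 2))

print(fft_snr_db(6), fft_snr_db(12))   # roughly 6 dB gained per added bit
```

    In an OCT system this noise floor translates directly into a sensitivity limit on the depth profile, which is the trade the paper quantifies.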

  18. Fault diagnosis method based on FFT-RPCA-SVM for Cascaded-Multilevel Inverter.

    PubMed

    Wang, Tianzhen; Qi, Jie; Xu, Hao; Wang, Yide; Liu, Lei; Gao, Diju

    2016-01-01

    Thanks to reduced switch stress, high quality of the load wave, easy packaging and good extensibility, the cascaded H-bridge multilevel inverter is widely used in wind power systems. To guarantee stable operation of the system, a new fault diagnosis method, based on the Fast Fourier Transform (FFT), Relative Principal Component Analysis (RPCA) and Support Vector Machine (SVM), is proposed for the H-bridge multilevel inverter. To avoid the influence of load variation on fault diagnosis, the output voltages of the inverter are chosen as the fault characteristic signals. To shorten the diagnosis time and improve the diagnostic accuracy, the main features of the fault characteristic signals are extracted by FFT. To further reduce the training time of the SVM, the feature vector is reduced based on RPCA, which yields a lower-dimensional feature space. The fault classifier is constructed via SVM. An experimental prototype of the inverter is built to test the proposed method. Compared to other fault diagnosis methods, the experimental results demonstrate the high accuracy and efficiency of the proposed method. PMID:26626623
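
    The FFT feature-extraction stage can be sketched as pulling normalized harmonic amplitudes out of the voltage spectrum; the RPCA reduction and SVM classifier are not reproduced here, and the sampling and fundamental frequencies below are assumptions:

```python
import numpy as np

def harmonic_features(v, fs, f0, nharm):
    """Amplitudes of the first nharm harmonics of f0, from an FFT of v."""
    N = len(v)
    amps = np.abs(np.fft.rfft(v)) * 2.0 / N
    bins = [round(h * f0 * N / fs) for h in range(1, nharm + 1)]
    return amps[bins]

fs, f0, N = 10_000, 50, 2000            # assumed sampling rate / fundamental
t = np.arange(N) / fs
# Synthetic output voltage: fundamental plus 3rd and 5th harmonics.
v = (1.0 * np.sin(2 * np.pi * f0 * t)
     + 0.30 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.10 * np.sin(2 * np.pi * 5 * f0 * t))
feats = harmonic_features(v, fs, f0, 5)
print(np.round(feats, 3))               # ~ [1.0, 0.0, 0.3, 0.0, 0.1]
```

    A fault that changes the harmonic balance shifts this feature vector, which is what the downstream RPCA/SVM stages classify.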

  19. High-performance FFT implementation on the BOPS ManArray parallel DSP

    NASA Astrophysics Data System (ADS)

    Pitsianis, Nikos P.; Pechanek, Gerald

    1999-11-01

    We present a high-performance implementation of the FFT algorithm on the BOPS ManArray parallel DSP processor. The ManArray we consider for this application consists of an array controller and 2 to 4 fully interconnected processing elements. To expose the parallelism inherent in the FFT algorithm, we use a factorization of the DFT matrix into Kronecker products, permutation and diagonal matrices. Our implementation utilizes the multiple levels of parallelism that are available on the ManArray. We use the special multiply-complex instruction, which calculates the product of two complex 32-bit fixed-point numbers in 2 cycles (pipelinable). Instruction-level parallelism is exploited via the indirect Very Long Instruction Word (iVLIW). With an iVLIW, in the same cycle a complex number is read from memory, another complex number is written to memory, a complex multiplication starts and another finishes, two complex additions or subtractions are done, and a complex number is exchanged with another processing element. Multiple local FFTs are executed in Single Instruction Multiple Data (SIMD) mode, and to avoid a costly data transposition we execute distributed FFTs in Synchronous Multiple Instructions Multiple Data (SMIMD) mode.
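The Kronecker-product factorization mentioned above is the standard matrix view of the Cooley-Tukey FFT. A small numerical check for N = 4 (illustrative only, not the ManArray code): the DFT matrix splits into an even/odd permutation, two blocks of small DFTs, a diagonal of twiddle factors, and a combining stage.

```python
import numpy as np

# Radix-2 Cooley-Tukey step for N = 4, written as the matrix factorization
#   F4 = (F2 (x) I2) . T . (I2 (x) F2) . P
# where P is the even/odd (stride) permutation and T holds the twiddle factors.
F2 = np.array([[1, 1], [1, -1]], dtype=complex)
I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]                       # picks x0, x2, x1, x3
T = np.diag([1, 1, 1, np.exp(-2j * np.pi / 4)])   # twiddles 1, 1, w^0, w^1

F4 = np.kron(F2, I2) @ T @ np.kron(I2, F2) @ P

# Must match the DFT matrix that np.fft.fft implements
dft4 = np.fft.fft(np.eye(4), axis=0)
assert np.allclose(F4, dft4)
```

On a machine like the one described, each factor maps to a different parallel resource: the Kronecker blocks become independent small FFTs across processing elements, and the diagonal becomes pipelined complex multiplies.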

  20. Robust FFT-based scale-invariant image registration with image gradients.

    PubMed

    Tzimiropoulos, Georgios; Argyriou, Vasileios; Zafeiriou, Stefanos; Stathaki, Tania

    2010-10-01

    We present a robust FFT-based approach to scale-invariant image registration. Our method relies on FFT-based correlation twice: once in the log-polar Fourier domain to estimate the scaling and rotation and once in the spatial domain to recover the residual translation. Previous methods based on the same principles are not robust. To equip our scheme with robustness and accuracy, we introduce modifications which tailor the method to the nature of images. First, we derive efficient log-polar Fourier representations by replacing image functions with complex gray-level edge maps. We show that this representation both captures the structure of salient image features and circumvents problems related to the low-pass nature of images, interpolation errors, border effects, and aliasing. Second, to recover the unknown parameters, we introduce the normalized gradient correlation. We show that, using image gradients to perform correlation, the errors induced by outliers are mapped to a uniform distribution for which our normalized gradient correlation features robust performance. Exhaustive experimentation with real images showed that, unlike any other Fourier-based correlation techniques, the proposed method was able to estimate translations, arbitrary rotations, and scale factors up to 6. PMID:20479492
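The translation-recovery step by FFT-based correlation is classical phase correlation, which can be sketched in a few lines. This is a minimal version without the paper's gradient maps and log-polar stage, recovering only a circular shift:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the circular shift taking image b to image a via the
    normalized cross-power spectrum (phase correlation)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, shift=(5, 12), axis=(0, 1))

dy, dx = phase_correlate(shifted, img)
assert (dy, dx) == (5, 12)
```

Applying the same correlation to log-polar resampled magnitude spectra turns rotation and scaling into translations, which is the first of the two correlation passes the abstract describes.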

  1. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and the software that embodies it. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  2. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of lower complexity and improved performance compared with certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent relative to the direct method.

  3. FFT-split-operator code for solving the Dirac equation in 2+1 dimensions

    NASA Astrophysics Data System (ADS)

    Mocken, Guido R.; Keitel, Christoph H.

    2008-06-01

    provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 474 937 No. of bytes in distributed program, including test data, etc.: 4 128 347 Distribution format: tar.gz Programming language: C++ Computer: Any, but SMP systems are preferred Operating system: Linux and MacOS X are actively supported by the current version. Earlier versions were also tested successfully on IRIX and AIX Number of processors used: Generally unlimited, but best scaling with 2-4 processors for typical problems RAM: 160 Megabytes minimum for the examples given here Classification: 2.7 External routines: FFTW Library [3,4], Gnu Scientific Library [5], bzip2, bunzip2 Nature of problem: The relativistic time evolution of wave functions according to the Dirac equation is a challenging numerical task. Especially for an electron in the presence of high intensity laser beams and/or highly charged ions, this type of problem is of considerable interest to atomic physicists. Solution method: The code employs the split-operator method [1,2], combined with fast Fourier transforms (FFT) for calculating any occurring spatial derivatives, to solve the given problem. An autocorrelation spectral method [6] is provided to generate a bound state for use as the initial wave function of further dynamical studies. Restrictions: The code in its current form is restricted to problems in two spatial dimensions. Otherwise it is only limited by CPU time and memory that one can afford to spend on a particular problem. Unusual features: The code features dynamically adapting position and momentum space grids to keep execution time and memory requirements as small as possible. It employs an object-oriented approach, and it relies on a Clifford algebra class library to represent the mathematical objects of the Dirac formalism which we employ. 
Besides that, it includes a feature (typically called "checkpointing") which allows the resumption of an interrupted calculation.
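The split-operator idea at the heart of the code can be illustrated in a much simpler setting. The sketch below evolves a 1-D Schrödinger wave packet (not the 2+1-dimensional Dirac equation the program solves), alternating half "kicks" by the potential in position space with a kinetic "drift" applied in Fourier space; grid and potential parameters are invented.

```python
import numpy as np

# Split-operator step for i d(psi)/dt = (-1/2 d^2/dx^2 + V) psi (atomic units):
# half kick in position space, full drift in momentum space via FFT, half kick.
N, L, dt = 256, 20.0, 0.01
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                               # harmonic potential

psi = np.exp(-0.5 * (x - 1.0) ** 2)          # displaced Gaussian packet
psi = psi.astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

half_kick = np.exp(-0.5j * V * dt)
drift = np.exp(-0.5j * k**2 * dt)
for _ in range(1000):
    psi = half_kick * np.fft.ifft(drift * np.fft.fft(half_kick * psi))

norm = np.sum(np.abs(psi) ** 2) * (L / N)
assert abs(norm - 1.0) < 1e-10               # unitary evolution preserves the norm
```

Because every factor is a pure phase and the FFT is unitary, the scheme conserves the norm to machine precision, which is one reason the split-operator/FFT combination is attractive for long relativistic propagations.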

  4. A comparison of direction finding results from an FFT peak identification technique with those from the music algorithm

    NASA Astrophysics Data System (ADS)

    Montbriand, L. E.

    1991-07-01

    A peak identification technique which uses the fast Fourier transform (FFT) algorithm is presented for unambiguously identifying up to three sources in signals received by the sampled aperture receiving array (SARA) of the Communications Research Center. The technique involves removing phase rotations resulting from the FFT and the data configuration and interpreting this result as the direction cosine distribution of the received signal. The locations and amplitudes of all peaks for one array arm are matched with those in a master list for a single source in order to identify actual sources. The identification of actual sources was found to be subject to the limitations of the FFT in that there was an inherent bias for the secondary and tertiary sources to appear at the side-lobe positions of the strongest source. There appears to be a limit in the ratio of the magnitude of a weaker source to that of the strongest source, below which it becomes too difficult to reliably identify true sources. For the SARA array this ratio is near -10 dB. Some of the data were also analyzed using the more complex MUSIC algorithm which yields a narrower directional peak for the sources than the FFT. For the SARA array, using ungroomed data, the largest side and grating lobes that the MUSIC algorithm produces are some 10 dB below the largest side and grating lobes that are produced using the FFT algorithm. Consequently the source-separation problem is less than that encountered using the FFT algorithm, but is not eliminated.
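The underlying idea, reading the FFT of an array snapshot as a direction-cosine spectrum, can be sketched for a generic half-wavelength uniform linear array. All parameters here are invented; this is not the SARA configuration, and only the strongest source is located:

```python
import numpy as np

# Snapshot of a uniform linear array with half-wavelength spacing:
# element n sees a sum of plane waves exp(j*pi*n*u), with u = sin(theta).
n = np.arange(64)
u1, u2 = 0.25, -0.5                       # true direction cosines
snapshot = np.exp(1j * np.pi * n * u1) + 0.4 * np.exp(1j * np.pi * n * u2)

# Zero-padded FFT over the aperture gives the direction-cosine spectrum
Npad = 4096
spectrum = np.abs(np.fft.fft(snapshot * np.hanning(64), Npad))
u_axis = 2 * np.fft.fftfreq(Npad)         # u = 2 * (digital frequency)

u_peak = u_axis[np.argmax(spectrum)]      # strongest source
assert abs(u_peak - u1) < 0.01
```

The side-lobe bias the abstract describes is visible in exactly this spectrum: a weak second source sitting near a side lobe of the strong one becomes hard to distinguish from the lobe itself, which is why the ratio limit (near -10 dB for SARA) arises.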

  5. FFT analysis on NDVI annual cycle and climatic regionality in Northeast Brazil

    NASA Astrophysics Data System (ADS)

    Negrón Juárez, Robinson I.; Liu, William T.

    2001-11-01

    By considering that the climate of Northeast Brazil (NEB) has distinct wet and dry seasons, the mixed radix fast Fourier transform (mrFFT) algorithm, developed at the National Aerospace Centre of the Netherlands, was applied to a monthly Normalized Difference Vegetation Index (NDVI) time series from July 1981 to June 1993, to generate phase, amplitude and mean NDVI data using a 1-year frequency in order to improve the analysis of its spatial variation. The NDVI mean values varied from >0.7, which occurred in northwest and southeast regions, to <0.3 in the northeast and 0.4 in the southwest regions of the NEB. The 90° phase month at its maximum amplitude occurred in August and was observed in both southeastern and northwestern coasts, located at 10.5°S-37.5°W and 4°S-46°W, respectively. It changed rapidly from August, June to May, moved inland and changed gradually from May through to April and from March to February, then moved towards the centre Dry Polygon area. Then it changed gradually from February to January and ended up in December, and moved further southwards. The annual cycle amplitude varied from <0.075 in northwest and southeast regions to >0.25 in the northeast region. By using spatial variations of phase, amplitude and mean NDVI values, 15 climate types were delineated for the NEB. The spatial distribution of climate types in the NEB delineated by the NDVI FFT analysis agreed mostly with the climatic types presented by Hargreaves (Precipitation dependability and potentials for agricultural production in Northeast Brazil. Empresa Brasileira de Pesquisa Agropecuária-EMBRAPA, Brazil, 1974), except regions with higher spatial variability and limited surface meteorological data. Among the three components: phase, amplitude and mean NDVI, the phase image, informing the initiation and duration of rainy season, was the most important component for climate-type delineation. Nevertheless, while the extreme values of amplitude, inferring a high wet

  6. STS-26 Commander Hauck during egress training in JSC's MAIL Bldg 9A FFT

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, Commander Frederick H. Hauck, wearing launch and entry suit (LES) and launch and entry helmet (LEH), egresses the Full Fuselage Trainer (FFT) via the new crew escape system (CES) slide inflated at the open side hatch. Technicians stand on either side of the slide ready to help Hauck to his feet when he reaches the bottom. The emergency egress training was held in JSC's Shuttle Mockup and Integration Laboratory (MAIL) Bldg 9A. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (LESs) and checked out the crew escape system (CES) slide and other CES configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options. The photograph was taken by Keith Meyers of the NEW YORK TIMES.

  7. STS-26 crew trains in JSC full fuselage trainer (FFT) shuttle mockup

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, crewmembers are briefed during a training exercise in the Shuttle Mockup and Integration Laboratory Bldg 9A. Seated outside the open side hatch of the full fuselage trainer (FFT) (left to right) are Mission Specialist (MS) George D. Nelson, Commander Frederick H. Hauck, and Pilot Richard O. Covey. Astronaut Steven R. Nagel (left), positioned in the open side hatch, briefs the crew on the pole escape system as he demonstrates some related equipment. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (launch and entry suits (LESs)) and checked out crew escape system (CES) configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options. The photograph was taken by Keith Meyers of the NEW YORK TIMES.

  8. STS-26 crew trains in JSC full fuselage trainer (FFT) shuttle mockup

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 Discovery, Orbiter Vehicle (OV) 103, crewmembers are briefed during a training exercise in the Shuttle Mockup and Integration Laboratory Bldg 9A. Seated outside the open side hatch of the full fuselage trainer (FFT) (left to right) are Mission Specialist (MS) George D. Nelson, Commander Frederick H. Hauck, and Pilot Richard O. Covey. Looking on at right are Astronaut Office Chief Daniel C. Brandenstein (standing) and astronaut James P. Bagian. During Crew Station Review (CSR) #3, the crew donned the new (navy blue) partial pressure suits (launch and entry suits (LESs)) and checked out crew escape system (CES) configurations to evaluate crew equipment and procedures related to emergency egress methods and proposed crew escape options.

  9. Mixed boundary conditions for FFT-based homogenization at finite strains

    NASA Astrophysics Data System (ADS)

    Kabel, Matthias; Fliegener, Sascha; Schneider, Matti

    2016-02-01

    In this article we introduce a Lippmann-Schwinger formulation for the unit cell problem of periodic homogenization of elasticity at finite strains incorporating arbitrary mixed boundary conditions. Such problems occur frequently, for instance when validating computational results with tensile tests, where the deformation gradient in loading direction is fixed, as is the stress in the corresponding orthogonal plane. Previous Lippmann-Schwinger formulations involving mixed boundary conditions can only describe tensile tests where the vector of applied force is proportional to a coordinate direction. Utilizing suitable orthogonal projectors we develop a Lippmann-Schwinger framework for arbitrary mixed boundary conditions. The resulting fixed point and Newton-Krylov algorithms preserve the positive characteristics of existing FFT algorithms. We demonstrate the power of the proposed methods with a series of numerical examples, including continuous fiber reinforced laminates and a complex nonwoven structure of a long fiber reinforced thermoplastic, resulting in a speed-up of some computations by a factor of 1000.
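For readers unfamiliar with FFT-based homogenization, the fixed-point ("basic") scheme underlying such Lippmann-Schwinger solvers is easiest to see in 1-D, where the exact answer is the harmonic mean. The sketch below is a minimal scalar-conduction analogue under pure strain control, not the finite-strain mixed-boundary-condition method of the paper:

```python
import numpy as np

# Basic fixed-point scheme for 1-D periodic thermal conduction: iterate on the
# polarization tau = (k - k0) e, applying the Fourier-space Green operator of a
# homogeneous reference medium k0, with the mean gradient E prescribed.
N = 128
k = np.where(np.arange(N) < N // 2, 1.0, 10.0)   # two-phase laminate
k0 = 0.5 * (k.min() + k.max())                   # reference medium
E = 1.0                                          # prescribed mean gradient

e = np.full(N, E)
for _ in range(400):
    tau = (k - k0) * e                 # polarization field
    tau_hat = np.fft.fft(tau)
    e_hat = -tau_hat / k0              # 1-D Green operator: Gamma0_hat = 1/k0
    e_hat[0] = E * N                   # enforce the mean gradient
    e = np.fft.ifft(e_hat).real

k_eff = np.mean(k * e) / E             # effective conductivity
assert abs(k_eff - 2 / (1 / 1.0 + 1 / 10.0)) < 1e-8   # harmonic mean
```

Mixed boundary conditions, the paper's contribution, amount to prescribing some components of the mean gradient and the complementary components of the mean flux, which is where the orthogonal projectors enter.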

  10. STS-29 MS Bagian during post landing egress exercises in JSC FFT mockup

    NASA Technical Reports Server (NTRS)

    1989-01-01

    STS-29 Discovery, Orbiter Vehicle (OV) 103, Mission Specialist (MS) James P. Bagian works his way down to 'safety' using a sky-genie device during post landing emergency egress exercises in JSC full fuselage trainer (FFT) located in the Mockup and Integration Laboratory Bldg 9A. Bagian, wearing orange launch and entry suit (LES) and launch and entry helmet (LEH), lowers himself using the sky genie after egressing from crew compartment overhead window W8. Fellow crewmembers and technicians watch Bagian's progress. Standing in navy blue LES is MS Robert C. Springer with MS James F. Buchli seated behind him on his right and Pilot John E. Blaha seated behind him on his left. Bagian is one of several astronauts who has been instrumental in developing the new crew escape system (CES) equipment (including parachute harness).

  11. Implementation of non-uniform FFT based Ewald summation in dissipative particle dynamics method

    NASA Astrophysics Data System (ADS)

    Wang, Yong-Lei; Laaksonen, Aatto; Lu, Zhong-Yuan

    2013-02-01

    The ENUF method, i.e., Ewald summation based on the non-uniform FFT technique (NFFT), is implemented in the dissipative particle dynamics (DPD) simulation scheme to calculate the electrostatic interactions quickly and accurately at the mesoscopic level. In a simple model electrolyte system, the suitable ENUF-DPD parameters, including the convergence parameter α, the NFFT approximation parameter p, and the cut-offs for real and reciprocal space contributions, are carefully determined. With these optimized parameters, the ENUF-DPD method shows excellent efficiency and scales as O(N log N). The ENUF-DPD method is further validated by investigating the effects of the charge fraction of the polyelectrolyte, the ionic strength and the counterion valency of added salts on polyelectrolyte conformations. The simulations in this paper, together with a separately published work on dendrimer-membrane complexes, show that the ENUF-DPD method is very robust and can be used to study charged complex systems at the mesoscopic level.

  12. Comparing precorrected-FFT and fast multipole algorithms for solving three-dimensional potential integral equations

    SciTech Connect

    White, J.; Phillips, J.R.; Korsmeyer, T.

    1994-12-31

    Mixed first- and second-kind surface integral equations with 1/r and ∂/∂n (1/r) kernels are generated by a variety of three-dimensional engineering problems. For such problems, Nyström-type algorithms cannot be used directly, but an expansion for the unknown, rather than for the entire integrand, can be assumed, and the product of the singular kernel and the unknown integrated analytically. Combining such an approach with a Galerkin or collocation scheme for computing the expansion coefficients is a general approach, but generates dense matrix problems. Recently developed fast algorithms for solving these dense matrix problems have been based on multipole-accelerated iterative methods, in which the fast multipole algorithm is used to rapidly compute the matrix-vector products in a Krylov-subspace based iterative method. Another approach to rapidly computing the dense matrix-vector products associated with discretized integral equations follows more along the lines of a multigrid algorithm, and involves projecting the surface unknowns onto a regular grid, then computing on the grid, and finally interpolating the results from the regular grid back to the surfaces. Here, the authors describe a precorrected-FFT approach which can replace the fast multipole algorithm for accelerating the dense matrix-vector product associated with discretized potential integral equations. The precorrected-FFT method, described below, is an order n log(n) algorithm, and is asymptotically slower than the order n fast multipole algorithm. However, initial experimental results indicate the method may have a significant constant-factor advantage for a variety of engineering problems.

  13. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. PMID:24038140

  14. Architecture & Environment

    ERIC Educational Resources Information Center

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  15. Analog implementation of radix-2, 16-FFT processor for OFDM receivers: non-linearity behaviours and system performance analysis

    NASA Astrophysics Data System (ADS)

    Mokhtarian, N.; Hodtani, G. A.

    2015-12-01

    Analog implementations of decoders have been widely studied in terms of circuit complexity, power and speed, and their integration with other analog blocks is an extension of analog decoding research. In the front-end blocks of orthogonal frequency-division multiplexing (OFDM) systems, combining an analog fast Fourier transform (FFT) with an analog decoder is attractive. In this article, the implementation of a 16-symbol FFT processor based on analog complementary metal-oxide-semiconductor current mirrors is presented at the circuit and system levels; the FFT is implemented using a butterfly diagram, where each node is realized with analog circuits. Implementation details include the effects of transistor mismatch and inherent noise, and the effects of circuit non-linearity on OFDM system performance. It is shown that not only can transistor inherent noise be measured, but transistor mismatch can also be treated as an input-referred noise source usable in system- and circuit-level studies. Simulations of a radix-2, 16-symbol FFT show that the proposed circuits consume very little power, and the impacts of noise, mismatch and non-linearity at each node of this processor are very small.

  16. Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy

    PubMed Central

    Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern

    2011-01-01

    This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize throughput of the computation. Moreover, the number of hardware multipliers and dividers is minimized to reduce the hardware costs. The proposed architecture is used as a custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective for expediting the computational speed while consuming low hardware resources for designing an embedded DHM system. PMID:22163688
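A minimum-squared-error unwrapper of the kind referenced above can be sketched in software with the classic transform-based least-squares approach (Ghiglia-Romero style): build a discrete Laplacian from wrapped phase differences, then invert it with a type-II DCT, which handles the Neumann boundary conditions. This is a software sketch of the algorithm family, not the paper's pipelined hardware.

```python
import numpy as np
from scipy.fft import dctn, idctn

def unwrap_ls(psi):
    """Least-squares phase unwrapping via a DCT-based Poisson solve."""
    wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi
    M, N = psi.shape
    dx = np.zeros_like(psi); dx[:-1, :] = wrap(np.diff(psi, axis=0))
    dy = np.zeros_like(psi); dy[:, :-1] = wrap(np.diff(psi, axis=1))
    rho = np.zeros_like(psi)               # discrete Laplacian of the solution
    rho += dx; rho[1:, :] -= dx[:-1, :]
    rho += dy; rho[:, 1:] -= dy[:, :-1]
    rho_hat = dctn(rho, type=2, norm='ortho')
    i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
    denom = 2 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2)
    denom[0, 0] = 1.0                      # avoid 0/0 at the zero mode
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0                    # unwrapped phase is set up to a constant
    return idctn(phi_hat, type=2, norm='ortho')

# Smooth test phase whose per-pixel gradients stay below pi
y, x = np.mgrid[0:64, 0:64]
true = 20.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 10.0 ** 2))
wrapped = np.angle(np.exp(1j * true))

recovered = unwrap_ls(wrapped)
diff = recovered - true
assert np.ptp(diff) < 1e-6                 # equal up to an additive constant
```

Every runtime operation here is a transform, an elementwise divide, and a few additions, which is why the structure maps naturally onto the pipelined, multiplier-lean hardware the abstract describes.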

  17. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  18. An FPGA Architecture for Extracting Real-Time Zernike Coefficients from Measured Phase Gradients

    NASA Astrophysics Data System (ADS)

    Moser, Steven; Lee, Peter; Podoleanu, Adrian

    2015-04-01

    Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them, and the large inverse matrix calculation needed for the linear fit. This paper presents an efficient parallel method for calculating Zernike coefficients from phase gradients produced by a Shack-Hartmann sensor, and its real-time implementation on an FPGA by pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8×8. Analysis of processor element internal word length requirements shows that 24-bit precision in precalculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient and an overall error of <1%. The design has been synthesized on a Xilinx Spartan-6 XC6SLX45 FPGA. The resource utilisation on this device is <3% of slice registers, <15% of slice LUTs, and approximately 48% of available DSP blocks, independent of the Shack-Hartmann grid size. Block RAM usage is <16% for Shack-Hartmann grid sizes up to 32×32.
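The pre-computation strategy, storing the (pseudo)inverse of the linear-fit matrix so that the runtime work reduces to one matrix-vector product, can be sketched with three low-order modes in Cartesian form. This is a simplified software model with invented coefficients, not the FPGA design:

```python
import numpy as np

# Recover Zernike-mode coefficients from sampled wavefront gradients by a
# linear least-squares fit with a precomputed pseudoinverse.
# Three low-order modes with simple analytic gradients:
#   tip     Z = x               -> (dZ/dx, dZ/dy) = (1, 0)
#   tilt    Z = y               -> (0, 1)
#   defocus Z = 2(x^2+y^2) - 1  -> (4x, 4y)
n = 8                                    # 8x8 Shack-Hartmann-style grid
y, x = (np.mgrid[0:n, 0:n] - (n - 1) / 2) / ((n - 1) / 2)
mask = (x**2 + y**2) <= 1.0              # unit pupil
xs, ys = x[mask], y[mask]

grad_modes = [
    (np.ones_like(xs), np.zeros_like(xs)),   # tip
    (np.zeros_like(xs), np.ones_like(xs)),   # tilt
    (4 * xs, 4 * ys),                        # defocus
]
A = np.column_stack([np.r_[gx, gy] for gx, gy in grad_modes])
A_pinv = np.linalg.pinv(A)               # precomputed once, stored in memory

true_c = np.array([0.7, -0.3, 0.2])      # invented wavefront coefficients
g = A @ true_c + 1e-6 * np.random.default_rng(3).standard_normal(A.shape[0])
c_hat = A_pinv @ g                       # runtime: one matrix-vector product
assert np.allclose(c_hat, true_c, atol=1e-4)
```

The FPGA design goes further by exploiting Zernike symmetries so that only subsections of `A_pinv` need to be stored, but the runtime arithmetic is the same matrix-vector product.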

  19. Neural Architectures for Control

    NASA Technical Reports Server (NTRS)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulation controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the Sun workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on a MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.

  20. Green Architecture

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Ho

    Today, the environment has become a main subject in many scientific disciplines and in industrial development because of global warming. This paper presents an analysis of the tendency of Green Architecture in France along three axes: Regulations and Approaches for Sustainable Architecture (Certificates and Standards), Renewable Materials (Green Materials), and Strategies (Equipment) of Sustainable Technology. The definition of 'Green Architecture' is given in the introduction, and the question of interdisciplinarity in technological development for 'Green Architecture' is raised in the conclusion.

  1. AFM tip characterization by using FFT filtered images of step structures.

    PubMed

    Yan, Yongda; Xue, Bo; Hu, Zhenjiang; Zhao, Xuesen

    2016-01-01

    The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results taken using an AFM. In this study, a geometrical model created by scanning a step structure with an AFM tip was developed. The tip was assumed to have a hemispherical cone shape. Profiles simulated by tips with different scanning radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of simulated profiles, it was found that low-frequency harmonics were more susceptible, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. PMID:26517548
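The tip-dilation geometry behind the method can be simulated directly: image a step with hemispherical tips of increasing radius and observe the edge broadening that the FFT harmonics encode. This is an illustrative model with invented dimensions, not the paper's procedure; it checks only the monotonic broadening that underlies the harmonic-amplitude relationship.

```python
import numpy as np

# A measured AFM profile is (approximately) the morphological dilation of the
# surface by the tip: the recorded height is the highest contact point.
x = np.arange(400)
surface = np.where(x < 200, 0.0, 40.0)        # ideal step of height 40

def image_with_tip(surface, R):
    """Dilate the surface with a circular (hemispherical) tip of radius R."""
    n = surface.size
    u = np.arange(-R, R + 1)
    tip = np.sqrt(R**2 - u.astype(float)**2) - R   # apex at 0, <= 0 elsewhere
    padded = np.pad(surface, R, mode='edge')
    out = np.full(n, -np.inf)
    for du, t in zip(u, tip):
        out = np.maximum(out, padded[R + du : R + du + n] + t)
    return out

def edge_width(profile, lo=4.0, hi=36.0):     # 10%-90% transition width
    return int(((profile > lo) & (profile < hi)).sum())

widths = [edge_width(image_with_tip(surface, R)) for R in (5, 10, 20)]
assert widths[0] < widths[1] < widths[2]      # larger tip, wider rounded edge
```

Because the rounding grows systematically with the radius, the low-frequency FFT harmonics of such profiles vary monotonically with tip size, which is the regularity the estimation method exploits.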

  2. Detection of apnea using a short-window FFT technique and an artificial neural network

    NASA Astrophysics Data System (ADS)

    Waldemark, Karina E.; Agehed, Kenneth I.; Lindblad, Thomas; Waldemark, Joakim T. A.

    1998-03-01

    Sleep apnea is characterized by frequent prolonged interruptions of breathing during sleep. This syndrome causes severe sleep disorders and is often responsible for the development of other conditions such as heart problems, high blood pressure and daytime fatigue. After diagnosis, sleep apnea is often successfully treated by applying continuous positive airway pressure (CPAP) to the mouth and nose. Although effective, the CPAP equipment is bulky and the attached mask causes considerable inconvenience for patients. This has raised interest in developing new techniques for the treatment of sleep apnea syndrome. Several studies have indicated that electrical stimulation of the hypoglossal nerve and the muscle of the tongue may be a useful method for treating patients with severe sleep apnea. In order to successfully prevent the occurrence of apnea, it is necessary to have a technique for early, fast, on-line detection or prediction of apnea events. This paper suggests using measurements of respiratory airflow (mouth temperature). The signal processing for this task includes a short-window FFT technique and an artificial back-propagation neural network to model or predict the occurrence of apneas. The results show that early detection of respiratory interruption is possible and that the delay time for this is small.
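The detection front end, short-window FFT power in the respiratory band, can be sketched on a toy airflow signal. All parameters (sampling rate, breathing frequency, window length, threshold) are invented, and a simple threshold stands in for the paper's neural network:

```python
import numpy as np

# Toy airflow signal: 0.25 Hz breathing that stops between t = 60 s and 90 s.
# A short-window FFT tracks respiratory-band power; a power collapse flags apnea.
fs = 10.0                                  # Hz
t = np.arange(0, 180, 1 / fs)
airflow = np.sin(2 * np.pi * 0.25 * t)
airflow[(t >= 60) & (t < 90)] = 0.0        # apnea episode
airflow += 0.05 * np.random.default_rng(4).standard_normal(t.size)

win = int(10 * fs)                         # 10 s analysis window
band_power = []
for start in range(0, t.size - win + 1, win):
    seg = airflow[start:start + win]
    spec = np.abs(np.fft.rfft(seg * np.hanning(win))) ** 2
    freqs = np.fft.rfftfreq(win, 1 / fs)
    band_power.append(spec[(freqs >= 0.1) & (freqs <= 0.5)].sum())

band_power = np.array(band_power)
apnea = band_power < 0.25 * np.median(band_power)
# Windows 6..8 cover 60-90 s and should be the flagged ones
assert list(np.nonzero(apnea)[0]) == [6, 7, 8]
```

Shortening the window trades frequency resolution for detection latency, which is the "short-window" design point the paper targets before handing the features to the neural network.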

  3. Ambient modal identification of a primary-secondary structure by Fast Bayesian FFT method

    NASA Astrophysics Data System (ADS)

    Au, Siu-Kui; Zhang, Feng-Liang

    2012-04-01

    The Mong Man Wai Building is a seven-story reinforced concrete structure on the campus of the City University of Hong Kong. On its roof, a two-story steel frame has recently been constructed to host a new wind tunnel laboratory. The roof frame and the main building form a primary-secondary structure. The dynamic characteristics of the resulting system are of interest from a structural dynamics point of view. This paper presents work on modal identification of the structure using ambient vibration measurements. An array of tri-axial acceleration data has been obtained using a number of setups to cover all locations of interest with a limited number of sensors. Modal identification is performed using a recently developed Fast Bayesian FFT method. In addition to the most probable modal properties, their posterior uncertainties can also be assessed using the method. The posterior uncertainty of a mode shape is assessed by the expected value of the Modal Assurance Criterion (MAC) between the most probable mode shape and a random mode shape consistent with the posterior distribution. The mode shapes of the overall structural system are obtained by assembling those from individual setups using a recently developed least-squares method. The identification results reveal a number of interesting features of the structural system and provide important information defining the baseline modal properties of the building. Practical interpretation of the statistics of modal parameters calculated in frequentist and Bayesian contexts is also discussed.
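    The Modal Assurance Criterion (MAC) used above to compare mode shapes is a normalized inner product between two mode-shape vectors. A minimal sketch (the mode shapes are made-up):

    ```python
    import numpy as np

    def mac(phi1, phi2):
        """MAC between two mode-shape vectors: 1 = same shape (up to scale), 0 = orthogonal."""
        num = np.abs(np.vdot(phi1, phi2)) ** 2
        return num / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)

    phi_a = np.array([0.2, 0.5, 0.9, 1.0])
    print(mac(phi_a, 3.0 * phi_a))   # scaling does not matter -> 1.0
    print(mac(phi_a, np.array([1.0, -0.9, 0.5, -0.2])))
    ```

    Because the MAC is scale-invariant, it is a natural measure for comparing a most-probable mode shape against random draws from a posterior distribution, as the record describes.
    
    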

  4. A precorrected-FFT method to accelerate the solution of the forward problem in magnetoencephalography

    NASA Astrophysics Data System (ADS)

    Tissari, Satu; Rahola, Jussi

    2003-02-01

    Accurate localization of brain activity recorded by magnetoencephalography (MEG) requires that the forward problem, i.e. the magnetic field caused by a dipolar source current in a homogeneous volume conductor, be solved precisely. We have used the Galerkin method with piecewise linear basis functions in the boundary element method to improve the solution of the forward problem. In addition, we have replaced the direct method, i.e. the LU decomposition, by a modern iterative method to solve the dense linear system of equations arising from the boundary element discretization. In this paper we describe a precorrected-FFT method which we have combined with the iterative method to accelerate the solution of the forward problem and to avoid explicit formation of the dense coefficient matrix. For example, with a triangular mesh of 18,000 triangles, the CPU time to solve the forward problem was decreased from 3.5 h to less than 5 min, and the computer memory requirements were decreased from 1.3 GB to 156 MB. The method makes it possible to quickly solve significantly larger problems on widely used workstations.

  5. The use of the FFT for the efficient solution of the problem of electromagnetic scattering by a body of revolution

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Mittra, Raj

    1990-01-01

    The enhancement of the computational efficiency of the body-of-revolution (BOR) scattering problem is discussed with a view to making it practical for solving large-body problems. The problem of EM scattering by a perfectly conducting BOR is considered, although the methods can be extended to multilayered dielectric bodies as well. Typically, the generation of the elements of the moment-method matrix consumes a major portion of the computational time. It is shown how this time can be significantly reduced by manipulating the expression for the matrix elements to permit efficient FFT computation. A technique for extracting the singularity of the Green function that appears within the integrands of the matrix diagonal is also presented, further enhancing the usefulness of the FFT. The computation time can thus be improved by at least an order of magnitude for large bodies in comparison to that of previous algorithms.

  6. Textural analyses of carbon fiber materials by 2D-FFT of complex images obtained by high frequency eddy current imaging (HF-ECI)

    NASA Astrophysics Data System (ADS)

    Schulze, Martin H.; Heuer, Henning

    2012-04-01

    Carbon fiber based materials are used in many lightweight applications in aeronautical, automotive, machine, and civil engineering. With increasing automation in the production process of CFRP laminates, manual optical inspection of each resin transfer molding (RTM) layer is not practicable. Because they are limited to surface inspection, optical systems cannot observe the quality parameters of multilayer three-dimensional materials. Imaging eddy-current (EC) NDT is the only suitable inspection method for non-resin materials in the textile state that allows inspection of surface and hidden layers in parallel. The HF-ECI method has the capability to measure layer displacements (misaligned angle orientations) and gap sizes in a multilayer carbon fiber structure. The EC technique uses the variation of the electrical conductivity of carbon-based materials to obtain material properties. Besides the determination of textural parameters such as layer orientation and gap sizes between rovings, the method can detect foreign polymer particles and fuzzy balls, and can visualize undulations. For all of these typical parameters, an imaging classification process chain based on a high-resolution directional EC-imaging device named EddyCus® MPECS and a 2D-FFT with adapted preprocessing algorithms has been developed.

  7. Multiple wall-reflection effect in adaptive-array differential-phase reflectometry on QUEST

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Fujisawa, A.; Nagashima, Y.; Hamasaki, M.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; QUEST Team

    2016-01-01

    A phased array antenna and Software-Defined Radio (SDR) heterodyne-detection systems have been developed for adaptive array approaches in reflectometry on QUEST. In the QUEST device, which acts as a large oversized cavity, a significant standing-wave (multiple wall-reflection) effect was observed, with distorted amplitude and phase evolution, even when the adaptive array analyses were applied. The distorted fields were analyzed by Fast Fourier Transform (FFT) in the wavenumber domain to treat separately the components with and without wall reflections. The differential phase evolution was properly recovered from the distorted field evolution by the FFT procedures. A frequency-derivative method has been proposed to overcome the multiple wall-reflection effect, and the SDR super-heterodyned components with the small frequency difference required for the derivative method were correctly obtained using the FFT analysis.

  8. Improving situation awareness using a hub architecture for friendly force tracking

    NASA Astrophysics Data System (ADS)

    Karkkainen, Anssi P.

    2010-04-01

    Situation Awareness (SA) is the perception of environmental elements within a volume of time and space, the comprehension of their meaning, and the projection of their future status. In a military environment, the most critical elements to be tracked are friendly and hostile forces. Poor knowledge of the locations of friendly forces easily leads to situations in which troops come under fire from their own side, or in which decisions in a command and control system are based on incorrect tracking. Thus, Friendly Force Tracking (FFT) is a vital part of building situation awareness. FFT is quite simple in theory: collected tracks are shared through the networks to all troops. In the real world, the situation is not so clear. Poor communication capabilities, lack of continuous connectivity, and a large number of users at different levels impose demanding requirements on FFT systems. In this paper, a simple architecture for Friendly Force Tracking is presented. The architecture is based on NFFI (NATO Friendly Force Information) hubs, which have two key features: an ability to forward tracking information and an ability to convert information into the desired format. The hub-based approach provides a lightweight and scalable solution which is able to use several types of communication media (GSM, tactical radios, TETRA, etc.). The system is also simple to configure and maintain. One main benefit of the proposed architecture is that it is independent of the message format: it communicates using NFFI messages, but national formats are also allowed.

  9. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Because it is a single, self-revealing architecture, single tools, for example a single graphical user interface, can be developed to span all applications. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, it becomes possible to transport information between those applications. Object encapsulation further allows information to become, in a sense, self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  10. Geometric super-resolution via log-polar FFT image registration and variable pixel linear reconstruction

    NASA Astrophysics Data System (ADS)

    Crabtree, Peter N.; Murray-Krezan, Jeremy

    2011-09-01

    Various image de-aliasing techniques and algorithms have been developed to improve the resolution of pixel-limited imagery acquired by an optical system having an undersampled point spread function. These techniques are sometimes referred to as multi-frame or geometric super-resolution, and are valuable tools because they maximize the imaging utility of current and legacy focal plane array (FPA) technology. This is especially true for infrared FPAs, which tend to have larger pixels than visible sensors. Geometric super-resolution relies on knowledge of subpixel frame-to-frame motion, which is used to assemble a set of low-resolution (LR) frames into one or more high-resolution (HR) frames. Log-polar FFT image registration provides a straightforward and relatively fast approach to estimating global affine motion, including translation, rotation, and uniform scale changes. This technique is also readily extended to provide subpixel translation estimates, and is explored for its potential combination with variable pixel linear reconstruction (VPLR) to apportion a sequence of LR frames onto an HR grid. The VPLR algorithm created for this work is described, and HR image reconstruction is demonstrated using calibrated 1/4-pixel microscan data. The HR image resulting from VPLR is also enhanced using Lucy-Richardson deconvolution to mitigate blurring effects due to the pixel spread function. To address non-stationary scenes, image warping, and variable lighting conditions, optical flow is also investigated for its potential to provide subpixel motion information. Initial results demonstrate that the particular optical flow technique studied is able to estimate shifts down to nearly 1/10th of a pixel, and possibly smaller. Algorithm performance is demonstrated and explored using laboratory data from visible cameras.
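    FFT phase correlation is the translation-estimation building block that, applied to log-polar resampled magnitude spectra, also yields rotation and scale. A minimal integer-shift sketch (the test image and shift are arbitrary assumptions; subpixel accuracy would require interpolating around the correlation peak):

    ```python
    import numpy as np

    def phase_correlate(a, b):
        """Estimate the (row, col) shift that maps image b onto image a."""
        F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        F /= np.abs(F) + 1e-12                 # keep only phase information
        corr = np.fft.ifft2(F).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap indices beyond N/2 around to negative shifts
        return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    shifted = np.roll(img, (5, -3), axis=(0, 1))
    print(phase_correlate(shifted, img))  # -> (5, -3)
    ```
    
    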

  11. A New Blind Adaptive Array Antenna Based on CMA Criteria for M-Ary/SS Signals Suitable for Software Defined Radio Architecture

    NASA Astrophysics Data System (ADS)

    Kozuma, Miho; Sasaki, Atsushi; Kamiya, Yukihiro; Fujii, Takeo; Umebayashi, Kenta; Suzuki, Yasuo

    M-ary/SS is a version of Direct Sequence/Spread Spectrum (DS/SS) that aims to improve spectral efficiency by employing orthogonal codes. However, due to the auto-correlation property of the orthogonal codes, it is impossible to detect the symbol timing by observing correlator outputs. Therefore, conventionally, a preamble has been inserted in M-ary/SS signals. In this paper, we propose a new blind adaptive array antenna for M-ary/SS systems that combines signals over the space axis without any preambles. It is an innovative approach for M-ary/SS. The performance is investigated through computer simulations.

  12. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charges come from the Einstein photoelectric conversion effect. Applying manufacturing design principles, we allow altering each working component by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, and the order of magnitude of the saving is inversely proportional to target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed with dual photon detector (PD) analog circuitry for change detection that decides whether to skip a frame or go forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at each bucket pixel level: the charge transport bias voltage steers charge toward neighboring buckets or, if not, to the ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor a powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing by FFT, thresholding of significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery done selectively by the CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, in new-frame selection, (i) the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data a la [Φ]M,N: M(t) = K(t) log N(t).

  13. PICNIC Architecture.

    PubMed

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services and the interfaces between them, and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub-views: software engineering, IT services engineering, security and data. The proposed architecture is founded in the mainstream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well-established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented in Open Source. PMID:16160218

  14. Architectural principles for the design of wide band image analysis systems

    SciTech Connect

    Bruning, U.; Giloi, W.K.; Liedtke, C.E.

    1983-01-01

    To match an image-analysis system appropriately to the multistage nature of image analysis, the system should have: (1) an overall system architecture made up of several dedicated SIMD coprocessors connected through a bottleneck-free, high-speed communication structure; (2) data-structure types in hardware; and (3) a conventional computer for executing operating-system functions and application programs. Coprocessors may exist specifically for local image processing, FFT, list processing, and vector processing in general. All functions must be transparent to the user. The architectural principles of such a system and the policies and mechanisms for its realization are exemplified. 4 references.

  15. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
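    GPU and cuFFT aside, the FFT trick that underlies such acceleration can be sketched on the CPU: on a regular grid, applying a stationary covariance/smoothing kernel is a convolution, which costs O(N log N) via the FFT instead of O(N²) directly. A small sketch with an assumed Gaussian kernel standing in for the covariance function:

    ```python
    import numpy as np

    n = 128
    rng = np.random.default_rng(1)
    field = rng.random((n, n))          # stand-in for a land surface temperature grid

    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    kernel = np.exp(-(x**2 + y**2) / (2 * 4.0**2))  # assumed Gaussian weights
    kernel /= kernel.sum()

    # circular convolution via FFT (kernel is centred, so shift it to the origin first)
    smooth = np.fft.ifft2(np.fft.fft2(field) *
                          np.fft.fft2(np.fft.ifftshift(kernel))).real
    print(smooth.shape, smooth.mean())
    ```

    Because the kernel sums to one, the circular convolution preserves the grid mean; the same FFT product is what a GPU library such as cuFFT evaluates in the accelerated pipeline.
    
    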

  16. Telescope Adaptive Optics Code

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical one. Secondly, it has the capability to simulate an adaptive optics control system; the default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave-optics capability to model the science-camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
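    A minimal FFT phase-screen generator of the kind described above: filter complex white noise by the square root of a Kolmogorov-like spectrum (~ k^(-11/3)) and inverse-transform. The grid size, Fried parameter, and overall scaling are assumptions, and the low-order (Karhunen-Loeve) correction the record mentions is omitted:

    ```python
    import numpy as np

    def kolmogorov_screen(n=256, r0_pix=20.0, seed=0):
        """One realization of an FFT-filtered Kolmogorov-like phase screen."""
        fx = np.fft.fftfreq(n)
        kx, ky = np.meshgrid(fx, fx)
        k = np.hypot(kx, ky)
        k[0, 0] = 1.0                    # avoid division by zero at the piston term
        psd = 0.023 * r0_pix**(-5.0 / 3.0) * k**(-11.0 / 3.0)
        psd[0, 0] = 0.0                  # remove piston
        rng = np.random.default_rng(seed)
        noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return np.fft.ifft2(noise * np.sqrt(psd)).real * n

    print(kolmogorov_screen().std())
    ```

    Because the FFT grid cannot represent spatial frequencies below 1/(grid size), such screens under-represent the low-order modes, which is exactly why the code adds them back via a Karhunen-Loeve expansion.
    
    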

  17. Space Telecommunications Radio Architecture (STRS)

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development, such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.

  18. IAIMS Architecture

    PubMed Central

    Hripcsak, George

    1997-01-01

    An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  19. IAIMS architecture.

    PubMed

    Hripcsak, G

    1997-01-01

    An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  20. An Architecture to Enable Future Sensor Webs

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Caffrey, Robert; Frye, Stu; Grosvenor, Sandra; Hess, Melissa; Chien, Steve; Sherwood, Rob; Davies, Ashley; Hayden, Sandra; Sweet, Adam

    2004-01-01

    A sensor web is a coherent set of distributed 'nodes', interconnected by a communications fabric, that collectively behave as a single dynamic observing system. A 'plug and play' mission architecture enables progressive mission autonomy and rapid assembly and thereby enables sensor webs. This viewgraph presentation addresses: Target mission messaging architecture; Strategy to establish architecture; Progressive autonomy with onboard sensor web; EO-1; Adaptive array antennas (smart antennas) for satellite ground stations.

  1. Reference architecture for space data systems

    NASA Technical Reports Server (NTRS)

    Shames, P.; Yamada, T.

    2003-01-01

    Architectures for terrestrial data systems that are built and managed by a single organization are inherently complex. In order to understand any large-scale system architecture, and to judge its applicability for its nominal task, a description of the system must be produced that exposes a number of distinct viewpoints. Within the CCSDS Architecture Working Group we have adapted the Reference Model for Open Distributed Processing to describe large, multi-national, space data systems.

  2. Complexity and Performance Results for Non FFT-Based Univariate Polynomial Multiplication

    NASA Astrophysics Data System (ADS)

    Chowdhury, Muhammad F. I.; Maza, Marc Moreno; Pan, Wei; Schost, Eric

    2011-11-01

    Today's parallel hardware architectures and computer memory hierarchies call for revisiting fundamental algorithms which were often designed with algebraic complexity as the main complexity measure and with sequential running time as the main performance counter. This study is devoted to two algorithms for univariate polynomial multiplication that are independent of the coefficient ring: the plain and the Toom-Cook univariate multiplications. We analyze their cache complexity and report on their parallel implementations in Cilk++ [1].
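    The "plain" (schoolbook) univariate multiplication studied here can be written coefficient-ring-agnostically, needing only addition and multiplication of coefficients (Toom-Cook instead splits each operand into parts and multiplies via evaluation/interpolation). A minimal sketch:

    ```python
    def plain_multiply(a, b):
        """Multiply polynomials given as low-to-high coefficient lists (schoolbook)."""
        if not a or not b:
            return []
        prod = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                # coefficient of x^(i+j) accumulates ai * bj
                prod[i + j] = prod[i + j] + ai * bj
        return prod

    # (1 + x)(1 + x) = 1 + 2x + x^2
    print(plain_multiply([1, 1], [1, 1]))  # -> [1, 2, 1]
    ```

    The double loop is O(n²) in coefficient operations; the paper's point is that on modern hardware, cache behavior and parallel scheduling, not just this operation count, determine performance.
    
    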

  3. Architectural Illusion.

    ERIC Educational Resources Information Center

    Doornek, Richard R.

    1990-01-01

    Presents a lesson plan developed around the work of architectural muralist Richard Haas. Discusses the significance of mural painting and gives key concepts for the lesson. Lists class activities for the elementary and secondary grades. Provides a photograph of the Haas mural on the Fountainbleau Hilton Hotel, 1986. (GG)

  4. Architectural Treasures.

    ERIC Educational Resources Information Center

    Pietropola, Anne

    1998-01-01

    Presents an art lesson for eighth-grade students in which they created their own architectural structures. Stresses a strong discipline-based introduction using slide shows of famous buildings, large metropolitan cities, and 35,000 years of homes. Reports that the lesson spanned two weeks. Includes a diagram and directions, and specifies materials. (CMK)

  5. Architectural Drafting.

    ERIC Educational Resources Information Center

    Davis, Ronald; Yancey, Bruce

    Designed to be used as a supplement to a two-book course in basic drafting, these instructional materials consisting of 14 units cover the process of drawing all working drawings necessary for residential buildings. The following topics are covered in the individual units: introduction to architectural drafting, lettering and tools, site…

  6. Architectural Tops

    ERIC Educational Resources Information Center

    Mahoney, Ellen

    2010-01-01

    The development of the skyscraper is an American story that combines architectural history, economic power, and technological achievement. Each city in the United States can be identified by the profile of its buildings. The design of the tops of skyscrapers was the inspiration for the students in the author's high-school ceramic class to develop…

  7. Energy Conservation through Architectural Design

    ERIC Educational Resources Information Center

    Thomson, Robert C., Jr.

    1977-01-01

    Describes a teaching unit designed to create in students an awareness of and an appreciation for the possibilities for energy conservation as they relate to architecture. It is noted that the unit can be adapted for use in many industrial programs and with different teaching methods due to the variety of activities that can be used. (Editor/TA)

  8. Shaping plant architecture.

    PubMed

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs.

  9. Shaping plant architecture

    PubMed Central

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. 
We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs.

  10. A new solver for the elastic normal contact problem using conjugate gradients, deflation, and an FFT-based preconditioner

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.

    2014-01-01

This paper presents our new solver BCCG+FAI for solving elastic normal contact problems. This is a comprehensible approach that is based on the Conjugate Gradients (CG) algorithm and that uses FFTs. A first novel aspect is the definition of the “FFT-based Approximate Inverse” preconditioner. The underlying idea is that the inverse matrix can be approximated well using a Toeplitz or block-Toeplitz form, which can be computed using the FFT of the original matrix elements. This preconditioner makes the total number of CG iterations effectively constant in 2D and very slowly increasing in 3D problems. A second novelty is how we deal with a prescribed total force. This uses a deflation technique in such a way that CG's convergence and finite termination properties are maintained. Numerical results show that this solver is more effective than existing CG-based strategies, such that it can compete with Multi-Grid strategies over a much larger problem range. In our opinion it could be the new method of choice because of its simple structure and elegant theory, and because robust performance is achieved independently of any problem specific parameters.

  11. Investigation of hidden periodic structures on SEM images of opal-like materials using FFT and IFFT.

    PubMed

    Stephant, Nicolas; Rondeau, Benjamin; Gauthier, Jean-Pierre; Cody, Jason A; Fritsch, Emmanuel

    2014-01-01

We have developed a method to use fast Fourier transformation (FFT) and inverse fast Fourier transformation (IFFT) to investigate hidden periodic structures on SEM images. We focused on samples of natural, play-of-color opals that diffract visible light and hence are periodically structured. Conventional sample preparation by hydrofluoric acid etch was not used; untreated, freshly broken surfaces were examined at low magnification relative to the expected period of the structural features, and the SEM was adjusted to get a very high number of pixels in the images. These SEM images were treated by software to calculate autocorrelation, FFT, and IFFT. We present how we adjusted SEM acquisition parameters for best results. We first applied our procedure on an SEM image on which the structure was obvious. Then, we applied the same procedure on a sample that must contain a periodic structure because it diffracts visible light, but on which no structure was visible on the SEM image. In both cases, we obtained clearly periodic patterns that allowed measurements of structural parameters. We also investigated how the irregularly broken surface interfered with the periodic structure to produce additional periodicity. We tested the limits of our methodology with the help of simulated images. PMID:24752811
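    A minimal numpy sketch of the FFT-filter-IFFT procedure, using a synthetic noisy image in place of an SEM micrograph (the lattice period, amplitude, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, period = 256, 16                      # image size and hidden lattice period (pixels)
y, x = np.mgrid[0:n, 0:n]
lattice = np.cos(2 * np.pi * x / period) + np.cos(2 * np.pi * y / period)
image = 0.2 * lattice + rng.normal(0.0, 1.0, (n, n))   # structure buried in noise

spectrum = np.fft.fft2(image)
power = np.abs(spectrum) ** 2
power[0, 0] = 0.0                        # ignore the DC (mean brightness) term

# Keep only the strongest harmonics and invert (IFFT) to expose the lattice.
threshold = np.partition(power.ravel(), -8)[-8]
restored = np.real(np.fft.ifft2(np.where(power >= threshold, spectrum, 0)))

kx = np.argmax(power[0, : n // 2])       # dominant horizontal spatial frequency
print("detected period:", n / kx, "pixels")
```

    The restored image correlates strongly with the hidden lattice even though the raw image looks like pure noise, which is the effect exploited for the opal samples.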

  12. Density measurement of yarn dyed woven fabrics based on dual-side scanning and the FFT technique

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Xin, Binjie; Wu, Xiangji

    2014-11-01

    The yarn density measurement, as part of fabric analysis, is very important for the textile manufacturing process and is traditionally based on single-side analysis. In this paper, a new method, suitable for yarn dyed woven fabrics, is developed, based on dual-side scanning and the fast Fourier transform (FFT) technique for yarn density measurement, instead of one-side image analysis. Firstly, the dual-side scanning method based on the Radon transform (RT) is used for the image registration of both side images of the woven fabric; a lab-used imaging system is established to capture the images of each side. Secondly, the merged image from the dual-side fabric images can be generated using three self-developed image fusion methods. Thirdly, the yarn density can be measured based on the merged image using FFT and inverse fast Fourier transform (IFFT) processing. The effects of yarn color and weave pattern on the density measurement have been investigated for the optimization of the proposed method. Our experimental results show that the proposed method works better than the conventional analysis method in terms of both the accuracy and robustness.
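    The FFT step of such a density measurement can be sketched on a simulated one-dimensional gray-level profile; the image width, resolution, and yarn density below are invented values, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
width_mm, pixels = 10.0, 1000
yarns_per_mm = 3.0                       # true warp density of the simulated fabric
x_mm = np.linspace(0.0, width_mm, pixels, endpoint=False)
# Gray-level profile across the (merged) fabric image: one bright stripe per yarn, plus noise.
profile = 0.5 + 0.5 * np.cos(2 * np.pi * yarns_per_mm * x_mm) + rng.normal(0.0, 0.3, pixels)

# The yarn density appears as the dominant peak of the FFT magnitude spectrum.
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
freqs = np.fft.rfftfreq(pixels, d=width_mm / pixels)    # cycles per mm
measured = freqs[np.argmax(spectrum)]
print(f"measured density: {measured:.2f} yarns/mm")
```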

  13. A comparative study on low-memory iterative solvers for FFT-based homogenization of periodic media

    NASA Astrophysics Data System (ADS)

    Mishra, Nachiketa; Vondřejc, Jaroslav; Zeman, Jan

    2016-09-01

    In this paper, we assess the performance of four iterative algorithms for solving non-symmetric rank-deficient linear systems arising in the FFT-based homogenization of heterogeneous materials defined by digital images. Our framework is based on the Fourier-Galerkin method with exact and approximate integrations that has recently been shown to generalize the Lippmann-Schwinger setting of the original work by Moulinec and Suquet from 1994. It follows from this variational format that the ensuing system of linear equations can be solved by general-purpose iterative algorithms for symmetric positive-definite systems, such as the Richardson, the Conjugate gradient, and the Chebyshev algorithms, that are compared here to the Eyre-Milton scheme - the most efficient specialized method currently available. Our numerical experiments, carried out for two-dimensional elliptic problems, reveal that the Conjugate gradient algorithm is the most efficient option, while the Eyre-Milton method performs comparably to the Chebyshev semi-iteration. The Richardson algorithm, equivalent to the still widely used original Moulinec-Suquet solver, exhibits the slowest convergence. Besides this, we hope that our study highlights the potential of the well-established techniques of numerical linear algebra to further increase the efficiency of FFT-based homogenization methods.
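    The reported ranking of the Richardson and Conjugate gradient iterations can be reproduced on a generic symmetric positive-definite system; the matrix below is synthetic and stands in for the actual homogenization operator:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
eigs = np.linspace(1.0, 100.0, n)          # prescribed spectrum, condition number 100
A = Q @ np.diag(eigs) @ Q.T                # SPD test matrix
b = rng.normal(size=n)
tol = 1e-8 * np.linalg.norm(b)

def richardson(A, b, omega, tol, maxit=100000):
    x = np.zeros_like(b)
    for k in range(1, maxit + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + omega * r                  # fixed-point update, rate (kappa-1)/(kappa+1)
    return x, maxit

def conjugate_gradient(A, b, tol, maxit=10000):
    x = np.zeros_like(b); r = b.copy(); p = r.copy(); rr = r @ r
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k
        rr, rr_old = r @ r, rr
        p = r + (rr / rr_old) * p
    return x, maxit

x_r, it_r = richardson(A, b, omega=2.0 / (1.0 + 100.0), tol=tol)  # optimal damping
x_c, it_c = conjugate_gradient(A, b, tol=tol)
print(f"Richardson: {it_r} iterations, CG: {it_c} iterations")
```

    Even with the optimal damping parameter, Richardson needs an order of magnitude more iterations than CG, consistent with the study's conclusion.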

  14. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  15. Modular robotic architecture

    NASA Astrophysics Data System (ADS)

    Smurlo, Richard P.; Laird, Robin T.

    1991-03-01

The development of control architectures for mobile systems is typically a task undertaken with each new application. These architectures address different operational needs and tend to be difficult to adapt to more than the problem at hand. The development of a flexible and extendible control system with evolutionary growth potential for use on mobile robots will help alleviate these problems and, if made widely available, will promote standardization and compatibility among systems throughout the industry. The Modular Robotic Architecture (MRA) is a generic control system that meets the above needs by providing developers with a standard set of software and hardware tools that can be used to design modular robots (MODBOTs) with nearly unlimited growth potential. The MODBOT itself is a generic creature that must be customized by the developer for a particular application. The MRA facilitates customization of the MODBOT by providing sensor, actuator, and processing modules that can be configured in almost any manner as demanded by the application. The Mobile Security Robot (MOSER) is an instance of a MODBOT that is being developed using the MRA. [Figure 1 (module labels omitted): Remote platform module configuration of the Mobile Security Robot (MOSER).]

  16. Adaptive Controller for Compact Fourier Transform Spectrometer with Space Applications

    NASA Astrophysics Data System (ADS)

    Keymeulen, D.; Yiu, P.; Berisford, D. F.; Hand, K. P.; Carlson, R. W.; Conroy, M.

    2014-12-01

    Here we present noise mitigation techniques developed as part of an adaptive controller for a very compact Compositional InfraRed Interferometric Spectrometer (CIRIS) implemented on a stand-alone field programmable gate array (FPGA) architecture with emphasis on space applications in high radiation environments such as Europa. CIRIS is a novel take on traditional Fourier Transform Spectrometers (FTS) and replaces linearly moving mirrors (characteristic of Michelson interferometers) with a constant-velocity rotating refractor to variably phase shift and alter the path length of incoming light. The design eschews a monochromatic reference laser typically used for sampling clock generation and instead utilizes constant time-sampling via internally generated clocks. This allows for a compact and robust device, making it ideal for spaceborne measurements in the near-IR to thermal-IR band (2-12 µm) on planetary exploration missions. The instrument's embedded microcontroller is implemented on a VIRTEX-5 FPGA and a PowerPC with the aim of sampling the instrument's detector and optical rotary encoder in order to construct interferograms. Subsequent onboard signal processing provides spectral immunity from the noise effects introduced by the compact design's removal of a reference laser and by the radiation encountered during space flight to destinations such as Europa. A variety of signal processing techniques including resampling, radiation peak removal, Fast Fourier Transform (FFT), spectral feature alignment, dispersion correction and calibration processes are applied to compose the sample spectrum in real-time with signal-to-noise-ratio (SNR) performance comparable to laser-based FTS designs in radiation-free environments. The instrument's FPGA controller is demonstrated with the FTS to characterize its noise mitigation techniques and highlight its suitability for implementation in space systems.
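    The resampling-plus-FFT stage can be sketched in numpy: a toy interferogram sampled uniformly in time, with the optical path difference (OPD) a nonlinear function of time, is interpolated onto a uniform OPD grid before the FFT. The refractor model and line position are invented:

```python
import numpy as np

n = 4096
t = np.linspace(0.0, 1.0, n, endpoint=False)            # uniform time samples (no reference laser)
opd = 0.5 * (1.0 - np.cos(np.pi * t))                   # OPD nonlinear in time (toy refractor model)
wavenumber = 200.0                                       # spectral line, cycles per OPD unit
interferogram = np.cos(2 * np.pi * wavenumber * opd)     # detector signal sampled in time

# Resample onto a uniform OPD grid, then FFT to recover the spectrum.
opd_uniform = np.linspace(opd.min(), opd.max(), n)
resampled = np.interp(opd_uniform, opd, interferogram)
spectrum = np.abs(np.fft.rfft(resampled))
freqs = np.fft.rfftfreq(n, d=opd_uniform[1] - opd_uniform[0])
print("recovered line at", freqs[np.argmax(spectrum)], "cycles per OPD unit")
```

    Without the resampling step the FFT of the raw time series would smear the line, which is one reason laser-free designs need this correction.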

  17. Architecture for autonomy

    NASA Astrophysics Data System (ADS)

    Broten, Gregory S.; Monckton, Simon P.; Collier, Jack; Giesbrecht, Jared

    2006-05-01

In 2002 Defence R&D Canada changed research direction from pure tele-operated land vehicles to general autonomy for land, air, and sea craft. The unique constraints of the military environment coupled with the complexity of autonomous systems drove DRDC to carefully plan a research and development infrastructure that would provide state-of-the-art tools without restricting research scope. DRDC's long term objectives for its autonomy program address disparate unmanned ground vehicle (UGV), unattended ground sensor (UGS), air (UAV), and subsea and surface (UUV and USV) vehicles operating together with minimal human oversight. Individually, these systems will range in complexity from simple reconnaissance mini-UAVs streaming video to sophisticated autonomous combat UGVs exploiting embedded and remote sensing. Together, these systems can provide low-risk, long-endurance battlefield services, assuming they can communicate and cooperate with manned and unmanned systems. A key enabling technology for this new research is a software architecture capable of meeting both DRDC's current and future requirements. DRDC built upon recent advances in the computing science field while developing its software architecture known as the Architecture for Autonomy (AFA). Although a well established practice in computing science, frameworks have only recently entered common use by unmanned vehicles. For industry and government, the complexity, cost, and time to re-implement stable systems often exceeds the perceived benefits of adopting a modern software infrastructure. Thus, most persevere with legacy software, adapting and modifying software when and wherever possible or necessary -- adopting strategic software frameworks only when no justifiable legacy exists. Conversely, academic programs with short one or two year projects frequently exploit strategic software frameworks but with little enduring impact. The open-source movement radically changes this picture. Academic frameworks

  18. Spectral domain analysis of conducting patches of arbitrary geometry in multilayer media using the CG-FFT method

    NASA Astrophysics Data System (ADS)

    Catedra, Manuel F.; Gago, Emilio

    1990-10-01

A conjugate-gradient fast-Fourier-transform (CG-FFT) scheme for analyzing finite flat metallic patches in multilayer structures is presented. Rooftop and razor-blade functions are considered as basis and testing functions, respectively. An equivalent periodic problem in both domains (real and spectral) is obtained and solved. Aliasing problems are avoided by applying a window to the Green's function. The spectral domain periodicity makes it feasible to take into account almost all the harmonics and to reduce the ripple in the computed current distributions. Nearly all the operations are performed in the spectral domain, including Green's function computations. Several results of convergence rates, current distributions and radar cross-section values are given and compare favorably with measurements or results obtained by other methods.
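    The aliasing at issue in CG-FFT schemes comes from the FFT computing circular rather than linear convolution; zero-padding both sequences to at least twice their length removes the wrap-around. A small numpy sketch with an invented kernel (this illustrates the aliasing mechanism, not the paper's windowing of the Green's function):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
current = rng.normal(size=n)            # basis-function coefficients
green = 1.0 / (1.0 + np.arange(n))      # slowly decaying discretized kernel

# Circular FFT convolution of the raw sequences aliases: the tail wraps around.
circular = np.real(np.fft.ifft(np.fft.fft(current) * np.fft.fft(green)))

# Zero-padding both sequences to >= 2n - 1 points yields the true linear convolution.
m = 2 * n
padded = np.real(np.fft.ifft(np.fft.fft(current, m) * np.fft.fft(green, m)))[:n]

exact = np.convolve(current, green)[:n]
print("max aliasing error (no padding):", np.abs(circular - exact).max())
print("max error with zero padding:   ", np.abs(padded - exact).max())
```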

  19. An efficient hybrid MLFMA-FFT solver for the volume integral equation in case of sparse 3D inhomogeneous dielectric scatterers

    SciTech Connect

De Zaeytijd, J.; Bogaert, I.; Franchois, A.

    2008-07-01

Electromagnetic scattering problems involving inhomogeneous objects can be numerically solved by applying a Method of Moments discretization to the volume integral equation. For electrically large problems, the iterative solution of the resulting linear system is expensive, both computationally and in memory use. In this paper, a hybrid MLFMA-FFT method is presented, which combines the fast Fourier transform (FFT) method and the High Frequency Multilevel Fast Multipole Algorithm (MLFMA) in order to reduce the cost of the matrix-vector multiplications needed in the iterative solver. The method represents the scatterers within a set of possibly disjoint identical cubic subdomains, which are meshed using a uniform cubic grid. This specific mesh allows for the application of FFTs to calculate the near interactions in the MLFMA and reduces the memory cost considerably, since the aggregation and disaggregation matrices of the MLFMA can be reused. Additional improvements to the general MLFMA framework, such as an extension of the FFT interpolation scheme of Sarvas et al. from the scalar to the vectorial case in combination with a more economical representation of the radiation patterns on the lowest level in vector spherical harmonics, are proposed and the choice of the subdomain size is discussed. The hybrid method performs better in terms of speed and memory use on large sparse configurations than both the FFT method and the HF MLFMA separately and it has lower memory requirements on general large problems. This is illustrated on a number of representative numerical test cases.

  20. Simulation of a Reconfigurable Adaptive Control Architecture

    NASA Astrophysics Data System (ADS)

    Rapetti, Ryan John

A set of algorithms and software components are developed to investigate the use of a priori models of damaged aircraft to improve control of similarly damaged aircraft. An addition to Model Predictive Control called state trajectory extrapolation is also developed to deliver good handling qualities in nominal and off-nominal aircraft. System identification algorithms are also used to improve model accuracy after a damage event. Simulations were run to demonstrate the efficacy of the algorithms and software components developed herein. The effect of model order on system identification convergence and performance is also investigated. A feasibility study for flight testing is also conducted. A preliminary hardware prototype was developed, as was the necessary software to integrate the avionics and ground station systems. Simulation results show significant improvement in both tracking and cross-coupling performance when a priori control models are used, and further improvement when identified models are used.

  1. Evolution of genome architecture.

    PubMed

    Koonin, Eugene V

    2009-02-01

    Charles Darwin believed that all traits of organisms have been honed to near perfection by natural selection. The empirical basis underlying Darwin's conclusions consisted of numerous observations made by him and other naturalists on the exquisite adaptations of animals and plants to their natural habitats and on the impressive results of artificial selection. Darwin fully appreciated the importance of heredity but was unaware of the nature and, in fact, the very existence of genomes. A century and a half after the publication of the "Origin", we have the opportunity to draw conclusions from the comparisons of hundreds of genome sequences from all walks of life. These comparisons suggest that the dominant mode of genome evolution is quite different from that of the phenotypic evolution. The genomes of vertebrates, those purported paragons of biological perfection, turned out to be veritable junkyards of selfish genetic elements where only a small fraction of the genetic material is dedicated to encoding biologically relevant information. In sharp contrast, genomes of microbes and viruses are incomparably more compact, with most of the genetic material assigned to distinct biological functions. However, even in these genomes, the specific genome organization (gene order) is poorly conserved. The results of comparative genomics lead to the conclusion that the genome architecture is not a straightforward result of continuous adaptation but rather is determined by the balance between the selection pressure, that is itself dependent on the effective population size and mutation rate, the level of recombination, and the activity of selfish elements. Although genes and, in many cases, multigene regions of genomes possess elaborate architectures that ensure regulation of expression, these arrangements are evolutionarily volatile and typically change substantially even on short evolutionary scales when gene sequences diverge minimally. Thus, the observed genome

  2. Lab architecture

    NASA Astrophysics Data System (ADS)

    Crease, Robert P.

    2008-04-01

There are few more dramatic illustrations of the vicissitudes of laboratory architecture than the contrast between Building 20 at the Massachusetts Institute of Technology (MIT) and its replacement, the Ray and Maria Stata Center. Building 20 was built hurriedly in 1943 as temporary housing for MIT's famous Rad Lab, the site of wartime radar research, and it remained a productive laboratory space for over half a century. A decade ago it was demolished to make way for the Stata Center, an architecturally striking building designed by Frank Gehry to house MIT's computer science and artificial intelligence labs (above). But in 2004 - just two years after the Stata Center officially opened - the building was criticized for being unsuitable for research and became the subject of still ongoing lawsuits alleging design and construction failures.

  3. Adaptive transfer functions

    SciTech Connect

Goulding, J.R.

    1991-01-01

    This paper details the approach and methodology used to build adaptive transfer functions in a feed-forward Back-Propagation neural network, and provides insight into the structure dependent properties of using non-scaled analog inputs. The results of using adaptive transfer functions are shown to outperform conventional architectures in the implementation of a mechanical power transmission gearbox design expert system knowledge base. 4 refs., 4 figs., 1 tab.

  4. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced that directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
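    The recursive least-squares indirect adaptive law can be sketched for a scalar discrete-time plant; the plant parameters, noise level, and forgetting factor below are invented for illustration and this is not the paper's flight control implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
# True (uncertain) discrete plant: y[k] = a*y[k-1] + b*u[k-1]; parameters unknown to the controller.
a_true, b_true = 0.85, 0.5
theta = np.zeros(2)                 # running estimate of [a, b]
P = np.eye(2) * 1000.0              # covariance; large value = low initial confidence
lam = 0.995                         # forgetting factor, discounts old data

y_prev = 0.0
for k in range(500):
    u = rng.normal()                               # persistently exciting input
    y = a_true * y_prev + b_true * u + 1e-3 * rng.normal()
    phi = np.array([y_prev, u])                    # regressor vector
    # Recursive least-squares update of theta and P
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    y_prev = y

print("estimated [a, b]:", theta)
```

    In the hybrid scheme, an estimate like this one would be fed back to update the model inversion controller as the plant dynamics change.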

  5. Adaptive network countermeasures.

    SciTech Connect

    McClelland-Bane, Randy; Van Randwyk, Jamie A.; Carathimas, Anthony G.; Thomas, Eric D.

    2003-10-01

    This report describes the results of a two-year LDRD funded by the Differentiating Technologies investment area. The project investigated the use of countermeasures in protecting computer networks as well as how current countermeasures could be changed in order to adapt with both evolving networks and evolving attackers. The work involved collaboration between Sandia employees and students in the Sandia - California Center for Cyber Defenders (CCD) program. We include an explanation of the need for adaptive countermeasures, a description of the architecture we designed to provide adaptive countermeasures, and evaluations of the system.

  6. gEMfitter: a highly parallel FFT-based 3D density fitting tool with GPU texture memory acceleration.

    PubMed

    Hoang, Thai V; Cavin, Xavier; Ritchie, David W

    2013-11-01

    Fitting high resolution protein structures into low resolution cryo-electron microscopy (cryo-EM) density maps is an important technique for modeling the atomic structures of very large macromolecular assemblies. This article presents "gEMfitter", a highly parallel fast Fourier transform (FFT) EM density fitting program which can exploit the special hardware properties of modern graphics processor units (GPUs) to accelerate both the translational and rotational parts of the correlation search. In particular, by using the GPU's special texture memory hardware to rotate 3D voxel grids, the cost of rotating large 3D density maps is almost completely eliminated. Compared to performing 3D correlations on one core of a contemporary central processor unit (CPU), running gEMfitter on a modern GPU gives up to 26-fold speed-up. Furthermore, using our parallel processing framework, this speed-up increases linearly with the number of CPUs or GPUs used. Thus, it is now possible to use routinely more robust but more expensive 3D correlation techniques. When tested on low resolution experimental cryo-EM data for the GroEL-GroES complex, we demonstrate the satisfactory fitting results that may be achieved by using a locally normalised cross-correlation with a Laplacian pre-filter, while still being up to three orders of magnitude faster than the well-known COLORES program. PMID:24060989
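    The translational part of such an FFT correlation search can be sketched in numpy: the best shift of a template within a map is scored for all translations at once via the correlation theorem. The blob, shift, and noise level are invented stand-ins for real cryo-EM data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
# Toy 2D "density map": a Gaussian blob at a known offset, plus noise.
y, x = np.mgrid[0:n, 0:n]
blob = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 20.0)   # the template
true_shift = (9, 14)                                             # (rows, cols)
density = np.roll(np.roll(blob, true_shift[0], axis=0), true_shift[1], axis=1)
density += 0.05 * rng.normal(size=(n, n))

# Cross-correlation of the template with the map for every translation, in one FFT pass.
corr = np.real(np.fft.ifft2(np.fft.fft2(density) * np.conj(np.fft.fft2(blob))))
best = np.unravel_index(np.argmax(corr), corr.shape)
print("best translation (rows, cols):", best)
```

    A brute-force scan would cost one full overlap sum per candidate shift; the FFT collapses the whole translational search into three transforms, which is what makes the GPU rotational search the remaining bottleneck.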

  7. Applications of the conjugate gradient FFT method in scattering and radiation including simulations with impedance boundary conditions

    NASA Technical Reports Server (NTRS)

    Barkeshli, Kasra; Volakis, John L.

    1991-01-01

    The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature which eliminates a need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus, eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
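    The matrix-free character of the CGFFT method can be sketched in numpy: the Toeplitz operator of a discretized convolution equation is applied through a padded FFT inside the CG loop, so the system matrix is never formed. The kernel below is invented, not an actual electromagnetic Green's function:

```python
import numpy as np

n = 128
kernel = 0.5 * np.exp(-np.arange(n) / 10.0)   # toy symmetric kernel: T[i, j] = kernel[|i - j|]
# Circulant embedding of the Toeplitz operator: first column [k0..k_{n-1}, 0, k_{n-1}..k1].
m = 2 * n
k_embed = np.concatenate([kernel, [0.0], kernel[:0:-1]])
K_hat = np.fft.fft(k_embed)

def apply_A(v):
    # Matrix-free product (I + T) v: Toeplitz part via padded FFT, O(n log n), no matrix stored.
    w = np.real(np.fft.ifft(K_hat * np.fft.fft(v, m)))[:n]
    return v + w

# Conjugate gradients using only apply_A.
b = np.ones(n)
x = np.zeros(n); r = b - apply_A(x); p = r.copy(); rr = r @ r
for k in range(1, 201):
    Ap = apply_A(p)
    alpha = rr / (p @ Ap)
    x = x + alpha * p
    r = r - alpha * Ap
    if np.linalg.norm(r) < 1e-10:
        break
    rr, rr_old = r @ r, rr
    p = r + (rr / rr_old) * p
print(f"converged in {k} CG iterations, residual {np.linalg.norm(r):.2e}")
```

    Storage is O(n) instead of the O(n^2) a dense system matrix would need, which is the memory advantage the abstract highlights.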

  8. Adaptive nonlinear flight control

    NASA Astrophysics Data System (ADS)

    Rysdyk, Rolf Theoduor

    1998-08-01

Research under supervision of Dr. Calise and Dr. Prasad at the Georgia Institute of Technology, School of Aerospace Engineering, has demonstrated the applicability of an adaptive controller architecture. The architecture successfully combines model inversion control with adaptive neural network (NN) compensation to cancel the inversion error. The tiltrotor aircraft provides a specifically interesting control design challenge. The tiltrotor aircraft is capable of converting from stable responsive fixed wing flight to unstable sluggish hover in helicopter configuration. It is desirable to provide the pilot with consistency in handling qualities through a conversion from fixed wing flight to hover. The linear model inversion architecture was adapted by providing frequency separation in the command filter and the error-dynamics, while not exciting the actuator modes. This design of the architecture provides for a model following setup with guaranteed performance. This in turn allowed for convenient implementation of guaranteed handling qualities. A rigorous proof of boundedness is presented making use of compact sets and the LaSalle-Yoshizawa theorem. The analysis allows for the addition of the e-modification which guarantees boundedness of the NN weights in the absence of persistent excitation. The controller is demonstrated on the Generic Tiltrotor Simulator of Bell-Textron and NASA Ames R.C. The model inversion implementation is robustified with respect to unmodeled input dynamics, by adding dynamic nonlinear damping. A proof of boundedness of signals in the system is included. The effectiveness of the robustification is also demonstrated on the XV-15 tiltrotor. The SHL Perceptron NN provides a more powerful application, based on the universal approximation property of this type of NN. The SHL NN based architecture is also robustified with the dynamic nonlinear damping. A proof of boundedness extends the SHL NN augmentation with robustness to unmodeled actuator

  9. Space Telecommunications Radio Architecture (STRS): Technical Overview

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space based platforms.

  10. Architecture-Centric Software Quality Management

    NASA Astrophysics Data System (ADS)

    Maciaszek, Leszek A.

Software quality is a multi-faceted concept defined using different attributes and models. Of all the various quality requirements, adaptiveness is by far the most critical. Based on this assumption, this paper offers an architecture-centric approach to the production of measurably-adaptive systems. The paper uses the PCBMER (Presentation, Controller, Bean, Mediator, Entity, and Resource) meta-architecture to demonstrate how complexity of a software solution can be measured and kept under control in standalone applications. Meta-architectural extensions aimed at managing quality in integration development projects are also introduced. The DSM (Design Structure Matrix) method is used to explain our approach to measure the quality. The discussion is conducted against the background of the holonic approach to science (as the middle-ground between holism and reductionism).

  11. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  12. Post and Lintel Architecture

    ERIC Educational Resources Information Center

    Daniel, Robert A.

    1973-01-01

    Author finds that children understand architectural concepts more readily when he refers to familiar non-architectural examples of them such as goal posts, chairs, tables, and playground equipment. (GB)

  13. New computer architectures

    SciTech Connect

    Tiberghien, J.

    1984-01-01

    This book presents papers on supercomputers. Topics considered include decentralized computer architecture, new programming languages, data flow computers, reduction computers, parallel prefix calculations, structural and behavioral descriptions of digital systems, instruction sets, software generation, personal computing, and computer architecture education.

  14. Middleware Architecture Evaluation for Dependable Self-managing Systems

    SciTech Connect

    Liu, Yan; Babar, Muhammad A.; Gorton, Ian

    2008-10-10

    Middleware provides infrastructure support for creating dependable software systems. A specific middleware implementation plays a critical role in determining the quality attributes that satisfy a system’s dependability requirements. Evaluating a middleware architecture at an early development stage can help to pinpoint critical architectural challenges and optimize design decisions. In this paper, we present a method and its application to evaluate middleware architectures, driven by emerging architecture patterns for developing self-managing systems. Our approach focuses on two key attributes of dependability, reliability and maintainability by means of fault tolerance and fault prevention. We identify the architectural design patterns necessary to build an adaptive self-managing architecture that is capable of preventing or recovering from failures. These architectural patterns and their impacts on quality attributes create the context for middleware evaluation. Our approach is demonstrated by an example application -- failover control of a financial application on an enterprise service bus.

  15. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  16. UMTS network architecture

    NASA Astrophysics Data System (ADS)

    Katoen, J. P.; Saiedi, A.; Baccaro, I.

    1994-05-01

    This paper proposes a Functional Architecture and a corresponding Network Architecture for the Universal Mobile Telecommunication System (UMTS). Procedures like call handling, location management, and handover are considered. The architecture covers the domestic, business, and public environments. Integration with existing and forthcoming networks for fixed communications is anticipated and the Intelligent Network (IN) philosophy is applied.

  17. Adaptive sensor fusion

    NASA Astrophysics Data System (ADS)

    Kadar, Ivan

    1995-07-01

    A perceptual reasoning system adaptively extracting, associating, and fusing information from multiple sources, at various levels of abstraction, is considered as the building block for the next generation of surveillance systems. A system architecture is presented which makes use of both centralized and distributed predetection fusion combined with intelligent monitor and control coupling both on-platform and off-board track and decision level fusion results. The goal of this system is to create a 'gestalt fused sensor system' whose information product is greater than the sum of the information products from the individual sensors, and whose performance is superior to that of either an individual sensor or a sub-group of combined sensors. The application of this architectural concept to the law enforcement arena (e.g. drug interdiction), utilizing multiple spatially and temporally diverse surveillance platforms and/or information sources, is used to illustrate the benefits of the adaptive perceptual reasoning system concept.

  18. GTE: a new FFT based software to compute terrain correction on airborne gravity surveys in spherical approximation.

    NASA Astrophysics Data System (ADS)

    Capponi, Martina; Sampietro, Daniele; Sansò, Fernando

    2016-04-01

    The computation of the vertical attraction due to topographic masses (the Terrain Correction) is still a matter of study in both geodetic and geophysical applications. It is required in high-precision geoid estimation by the remove-restore technique, and it is used to isolate the gravitational effect of anomalous masses in geophysical exploration. This topographic effect can be evaluated from a Digital Terrain Model in different ways: e.g. by means of numerical integration, by prisms, tesseroids, polyhedra, or Fast Fourier Transform (FFT) techniques. The increasing resolution of recently developed digital terrain models, the growing number of observation points due to the extensive use of airborne gravimetry, and the increasing accuracy of gravity data are nowadays major issues for terrain correction computation. Classical methods such as prism or point-mass approximations are too slow, while Fourier-based techniques are usually too approximate for the required accuracy. In this work a new software package, called Gravity Terrain Effects (GTE), developed to guarantee high accuracy and fast computation of terrain corrections, is presented. GTE has been designed expressly for geophysical applications, allowing the computation not only of the effect of topographic and bathymetric masses but also of those due to sedimentary layers or to the Earth's crust-mantle discontinuity (the so-called Moho). In the present contribution we summarize the basic theory of the software and its practical implementation. The GTE software is based on a new algorithm which, by exploiting the properties of the Fast Fourier Transform, quickly computes the terrain correction, in spherical approximation, at ground or airborne level. Some tests to prove its performance are also described, showing GTE's capability to compute highly accurate terrain corrections in a very short time. Results obtained for a real airborne survey with GTE
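    As a toy illustration of why FFT techniques pay off here, the sketch below (our own planar, condensed-layer model, not GTE's spherical algorithm; all names are hypothetical) evaluates the vertical attraction of a height grid by zero-padded FFT convolution and cross-checks it against the brute-force double sum it replaces.

    ```python
    import numpy as np

    G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
    RHO = 2670.0     # conventional topographic density [kg/m^3]

    def kernel(dy, dx, spacing, depth):
        """Vertical attraction at height `depth` above one condensed grid cell."""
        r2 = (dy * spacing) ** 2 + (dx * spacing) ** 2 + depth ** 2
        return G * RHO * spacing ** 2 * depth / r2 ** 1.5

    def terrain_effect_fft(h, spacing, depth):
        """Attraction of a surface layer of thickness h via FFT convolution."""
        n, m = h.shape
        hp = np.zeros((2 * n, 2 * m))
        hp[:n, :m] = h                      # zero padding avoids wrap-around
        kp = np.zeros((2 * n, 2 * m))
        for dy in range(-(n - 1), n):       # kernel stored with periodic indexing
            for dx in range(-(m - 1), m):
                kp[dy % (2 * n), dx % (2 * m)] = kernel(dy, dx, spacing, depth)
        g = np.fft.irfft2(np.fft.rfft2(hp) * np.fft.rfft2(kp), s=(2 * n, 2 * m))
        return g[:n, :m]

    def terrain_effect_direct(h, spacing, depth):
        """O(N^2) double sum, kept only to cross-check the FFT result."""
        n, m = h.shape
        g = np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                for k in range(n):
                    for l in range(m):
                        g[i, j] += h[k, l] * kernel(i - k, j - l, spacing, depth)
        return g
    ```

    The FFT version costs O(N log N) against O(N²) for the direct sum over N grid cells, which is the property GTE exploits at far larger scale and in spherical approximation.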

  19. Adaptation of gasdynamical codes to the modern supercomputers

    NASA Astrophysics Data System (ADS)

    Kaygorodov, P. V.

    2016-02-01

    During the last few decades, supercomputer architecture has changed significantly, and now it is impossible to achieve peak performance without adapting numerical codes to modern supercomputer architectures. In this paper, I share my experience in adapting astrophysical gasdynamical numerical codes to multi-node computing clusters with multi-CPU and multi-GPU nodes.

  20. Towards the Architecture of an Instructional Multimedia Database.

    ERIC Educational Resources Information Center

    Verhagen, Plin W.; Bestebreurtje, R.

    1994-01-01

    Discussion of multimedia databases in education focuses on the development of an adaptable database in The Netherlands that uses optical storage media to hold the audiovisual components. Highlights include types of applications; types of users; accessibility; adaptation; an object-oriented approach; levels of the database architecture; and…

  1. Unifying parametrized VLSI Jacobi algorithms and architectures

    NASA Astrophysics Data System (ADS)

    Deprettere, Ed F. A.; Moonen, Marc

    1993-11-01

    Implementing Jacobi algorithms in parallel VLSI processor arrays is a non-trivial task, in particular when the algorithms are parametrized with respect to size and the architectures are parametrized with respect to space-time trade-offs. The paper is concerned with an approach to implement several time-adaptive Jacobi-type algorithms on a parallel processor array, using only Cordic arithmetic and asynchronous communications, such that any degree of parallelism, ranging from single-processor up to full-size array implementation, is supported by a 'universal' processing unit. This result is attributed to a gracious interplay between algorithmic and architectural engineering.
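    The Cordic arithmetic mentioned above can be sketched in floating point (real hardware uses fixed-point shifts and adds; this hypothetical helper only shows the micro-rotation recurrence that such a processing unit iterates):

    ```python
    import math

    def cordic_rotate(x, y, angle, n_iter=32):
        """Rotate the vector (x, y) by `angle` radians using only
        shift-and-add style micro-rotations, as a CORDIC unit does."""
        # Aggregate gain of n_iter micro-rotations, corrected once at the end
        gain = 1.0
        for i in range(n_iter):
            gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
        for i in range(n_iter):
            d = 1.0 if angle >= 0.0 else -1.0   # steer residual angle to zero
            x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
            angle -= d * math.atan(2.0 ** (-i))
        return x * gain, y * gain
    ```

    Because every step is a shift, an add, and a table lookup of atan(2^-i), the same unit can serve any rotation the Jacobi sweeps require, which is what makes a single 'universal' processing element plausible.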

  2. World Ships - Architectures & Feasibility Revisited

    NASA Astrophysics Data System (ADS)

    Hein, A. M.; Pak, M.; Putz, D.; Buhler, C.; Reiss, P.

    A world ship is a concept for manned interstellar flight. It is a huge, self-contained and self-sustained interstellar vehicle. It travels at a fraction of a per cent of the speed of light and needs several centuries to reach its target star system. The well-known world ship concept by Alan Bond and Anthony Martin was intended to show its principal feasibility. However, several important issues have not been addressed so far: the relationship between crew size and robustness of knowledge transfer, reliability, and alternative mission architectures. This paper addresses these gaps. Furthermore, it gives an update on the choice of target star system and develops possible mission architectures. The derived conclusions are: a large population size leads to robust knowledge transfer and cultural adaptation, and these processes can be improved by new technologies. World ship reliability depends on the availability of an automatic repair system, as in the case of the Daedalus probe. Star systems with habitable planets are probably farther away than systems with enough resources to construct space colonies; therefore, missions to habitable planets have longer trip times and a higher risk of mission failure. On the other hand, the risk of constructing colonies is higher than that of establishing an initial settlement on a habitable planet. Mission architectures with precursor probes have the potential to significantly reduce trip and colonization risk without being significantly more costly than architectures without them. In summary, world ships remain an interesting concept, although they require a space-colony-based civilization within our own solar system before becoming feasible.

  3. An FFT-based method for modeling protein folding and binding under crowding: benchmarking on ellipsoidal and all-atom crowders.

    PubMed

    Qin, Sanbo; Zhou, Huan-Xiang

    2013-10-01

    It is now well recognized that macromolecular crowding can exert significant effects on protein folding and binding stability. In order to calculate such effects in direct simulations of proteins mixed with bystander macromolecules, the latter (referred to as crowders) are usually modeled as spheres and the proteins represented at a coarse-grained level. Our recently developed postprocessing approach allows the proteins to be represented at the all-atom level but, for computational efficiency, has only been implemented for spherical crowders. Modeling crowder molecules in cellular environments and in vitro experiments as spheres may distort their effects on protein stability. Here we present a new method that is capable of treating aspherical crowders. The idea, borrowed from protein-protein docking, is to calculate the excess chemical potential of the proteins in crowded solution by fast Fourier transform (FFT). As the first application, we studied the effects of ellipsoidal crowders on the folding and binding free energies of all-atom proteins, and found, in agreement with previous direct simulations with coarse-grained protein models, that the aspherical crowders exert greater stabilization effects than spherical crowders of the same volume. Moreover, as demonstrated here, the FFT-based method has the important property that its computational cost does not increase strongly even when the level of detail in representing the crowders is increased all the way to all-atom, thus significantly accelerating realistic modeling of protein folding and binding in cell-like environments. PMID:24187527
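    The FFT trick borrowed from docking can be sketched in a few lines. The example below is our own hard-core lattice toy model, not the authors' all-atom implementation: one FFT cross-correlation scores every translation of a rigid solute against a periodic crowder grid at once, and the clash-free fraction of placements gives an excess chemical potential in units of kT.

    ```python
    import numpy as np

    def excess_chemical_potential(solute, crowders):
        """Excess chemical potential (in kT) of inserting a rigid solute
        occupancy grid into a periodic crowder occupancy grid (hard-core).

        overlap[t] = sum_x solute[x] * crowders[x + t] for every shift t,
        computed in one pass via the cross-correlation theorem.
        """
        overlap = np.fft.irfftn(
            np.conj(np.fft.rfftn(solute)) * np.fft.rfftn(crowders),
            s=solute.shape,
        )
        allowed = np.count_nonzero(overlap < 0.5)   # clash-free placements
        frac = allowed / overlap.size
        return -np.log(frac)    # diverges if no placement is clash-free
    ```

    The cost is one forward and one inverse FFT regardless of how finely the solute and crowders are gridded, which mirrors the paper's point that refining the crowder representation barely increases the computation.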

  4. Grid Architecture 2

    SciTech Connect

    Taft, Jeffrey D.

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholders with insight about grid issues so as to enable superior decision making on their part. Doing so requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture as well.

  5. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  6. The Technology of Architecture

    ERIC Educational Resources Information Center

    Reese, Susan

    2006-01-01

    This article discusses how career and technical education is helping students draw up plans for success in architectural technology. According to the College of DuPage (COD) in Glen Ellyn, Illinois, one of the two-year schools offering training in architectural technology, graduates have a number of opportunities available to them. They may work…

  7. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  8. Robotic Intelligence Kernel: Architecture

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  9. Clinical document architecture.

    PubMed

    Heitmann, Kai

    2003-01-01

    The Clinical Document Architecture (CDA), a standard developed by the Health Level Seven organisation (HL7), is an ANSI approved document architecture for exchange of clinical information using XML. A CDA document is comprised of a header with associated vocabularies and a body containing the structural clinical information. PMID:15061557

  10. Generic POCC architectures

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This document describes a generic POCC (Payload Operations Control Center) architecture based upon current POCC software practice, and several refinements to the architecture based upon object-oriented design principles and expected developments in teleoperations. The current-technology generic architecture is an abstraction based upon close analysis of the ERBS, COBE, and GRO POCC's. A series of three refinements is presented: these may be viewed as an approach to a phased transition to the recommended architecture. The third refinement constitutes the recommended architecture, which, together with associated rationales, will form the basis of the rapid synthesis environment to be developed in the remainder of this task. The document is organized into two parts. The first part describes the current generic architecture using several graphical as well as tabular representations or 'views.' The second part presents an analysis of the generic architecture in terms of object-oriented principles. On the basis of this discussion, refinements to the generic architecture are presented, again using a combination of graphical and tabular representations.

  11. Emerging supercomputer architectures

    SciTech Connect

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream 'supercomputer' systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  12. Architectural Physics: Lighting.

    ERIC Educational Resources Information Center

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  13. FTS2000 network architecture

    NASA Technical Reports Server (NTRS)

    Klenart, John

    1991-01-01

    The network architecture of FTS2000 is graphically depicted. A map of the network A topology is provided, with interservice nodes. Next, the four basic elements of the architecture are laid out. Then, the FTS2000 timeline is reproduced. A list of equipment supporting FTS2000 dedicated transmissions is given. Finally, access alternatives are shown.

  14. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  15. Adaptive stochastic cellular automata: Applications

    NASA Astrophysics Data System (ADS)

    Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.

    1990-09-01

    The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems studied are controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.
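    The reinforcement update at the heart of such stochastic learning automata can be sketched as follows. This is the standard linear reward-inaction (L_R-I) scheme shown as a stand-in, not the paper's exact cellular-automata controller; the environment here is a simple bandit with known reward probabilities.

    ```python
    import random

    def train_learning_automaton(reward_prob, n_actions=2, steps=2000,
                                 lr=0.1, seed=0):
        """Linear reward-inaction (L_R-I) stochastic learning automaton.

        reward_prob[a] is the chance that action a is rewarded; the automaton
        should concentrate probability mass on the best action."""
        rng = random.Random(seed)
        p = [1.0 / n_actions] * n_actions
        for _ in range(steps):
            # Sample an action from the current probability vector
            a = rng.choices(range(n_actions), weights=p)[0]
            if rng.random() < reward_prob[a]:
                # Reward: shift probability toward the chosen action;
                # on penalty the vector is left unchanged (inaction)
                for j in range(n_actions):
                    p[j] = p[j] + lr * (1.0 - p[j]) if j == a else p[j] * (1.0 - lr)
        return p
    ```

    In the CA setting, one such automaton sits in each cell and the reinforcement signal comes from the critic rather than directly from the plant.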

  16. Architectural design for resilience

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Deters, Ralph; Zhang, W. J.

    2010-05-01

    Resilience has become a new nonfunctional requirement for information systems. Many design decisions have to be made at the architectural level in order to deliver an information system with the resilience property. This paper discusses the relationships between resilience and other architectural properties such as scalability, reliability, and consistency. A corollary is derived from the CAP theorem, and states that it is impossible for a system to have all three properties of consistency, resilience and partition-tolerance. We present seven architectural constraints for resilience. The constraints are elicited from good architectural practices for developing reliable and fault-tolerant systems and the state-of-the-art technologies in distributed computing. These constraints provide a comprehensive reference for architectural design towards resilience.

  17. The Simulation Intranet Architecture

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory (Holmes et al. 1998). The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high-fidelity, full-physics simulations in a high-performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved in the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include the Web Client, the Business Objects, and Data Persistence.

  18. Methodology requirements for intelligent systems architecture

    NASA Technical Reports Server (NTRS)

    Grant, Terry; Colombano, Silvano

    1987-01-01

    The methodology required for the development of the 'intelligent system architecture' of distributed computer systems which integrate standard data processing capabilities with symbolic processing to provide powerful and highly autonomous adaptive processing capabilities must encompass three elements: (1) a design knowledge capture system, (2) computer-aided engineering, and (3) verification and validation metrics and tests. Emphasis must be put on the earliest possible definition of system requirements and the realistic definition of allowable system uncertainties. Methodologies must also address human factor issues.

  19. A global distributed storage architecture

    NASA Technical Reports Server (NTRS)

    Lionikis, Nemo M.; Shields, Michael F.

    1996-01-01

    NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment, where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view are that it insulates the user from much of the complexity of our multi-platform, networked, computer and storage environment and hides its diverse underlying implementation details. In this paper, we will describe one of the fundamental services being built in our envisioned infrastructure; a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.

  20. Rational design of helical architectures

    PubMed Central

    Chakrabarti, Dwaipayan; Fejer, Szilard N.; Wales, David J.

    2009-01-01

    Nature has mastered the art of creating complex structures through self-assembly of simpler building blocks. Adapting such a bottom-up view provides a potential route to the fabrication of novel materials. However, this approach suffers from the lack of a sufficiently detailed understanding of the noncovalent forces that hold the self-assembled structures together. Here we demonstrate that nature can indeed guide us, as we explore routes to helicity with achiral building blocks driven by the interplay between two competing length scales for the interactions, as in DNA. By characterizing global minima for clusters, we illustrate several realizations of helical architecture, the simplest one involving ellipsoids of revolution as building blocks. In particular, we show that axially symmetric soft discoids can self-assemble into helical columnar arrangements. Understanding the molecular origin of such spatial organisation has important implications for the rational design of materials with useful optoelectronic applications.

  1. Architectural design for space tourism

    NASA Astrophysics Data System (ADS)

    Martinez, Vera

    2009-01-01

    The paper describes the main issues for the design of an appropriately planned habitat for tourists in space. Study and analysis of the environments of existing space stations (ISS, Mir, Skylab) delineate the positive and negative aspects of their architectural design, which are then checked for suitability against the features required for tourism. A space tourism environment must offer a high degree of comfort and suggest correct behavior to the tourists, both as individuals and as a group. Two main aspects of architectural planning are needed: the design of the private sphere and the design of the public sphere. To define the appearance of the environment, attention should be paid to main elements such as the materiality of the surfaces used, the principal shapes of the areas, and the degree of flexibility and adaptability of the environment to specific needs.

  2. Context Aware Middleware Architectures: Survey and Challenges

    PubMed Central

    Li, Xin; Eckert, Martina; Martinez, José-Fernán; Rubio, Gregorio

    2015-01-01

    Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work. PMID:26307988

  3. Context Aware Middleware Architectures: Survey and Challenges.

    PubMed

    Li, Xin; Eckert, Martina; Martinez, José-Fernán; Rubio, Gregorio

    2015-01-01

    Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work. PMID:26307988

  4. Fast notification architecture for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hahk

    2013-03-01

    In an emergency, since it is vital to transmit a message to users immediately after analysing the data to prevent disaster, this article presents the deployment of a fast notification architecture for a wireless sensor network. The sensor nodes of the proposed architecture can monitor an emergency situation periodically and transmit the sensing data immediately to the sink node. The grade of a fire situation is decided according to a decision rule using the sensed values of temperature, CO, smoke density, and rate of temperature increase. To estimate the grade of air pollution, on the other hand, sensing data such as dust, formaldehyde, NO2, and CO2 are applied to a given knowledge model. Since the sink node in the architecture has a ZigBee interface, it can transmit alert messages in real time, according to the analysed results received from the host server, to terminals equipped with a SIM-card-type ZigBee module. The host server also notifies registered users who have cellular phones of the situation through the short message service of the cellular network. Thus, the proposed architecture can adapt to an emergency situation dynamically, compared with conventional architectures based on video processing. In a testbed, after air pollution and fire data are generated, the terminal receives the message in less than 3 s. The test results show that this system can also be applied to buildings and public areas where many people gather together, to prevent unexpected disasters in urban settings.
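    A decision rule of the kind described can be sketched as follows. The thresholds and the grading scheme below are invented placeholders for illustration, not the values used in the article:

    ```python
    def fire_grade(temp_c, co_ppm, smoke_density, temp_rise_c_per_min):
        """Grade a fire situation from multi-sensor readings.

        Returns 0 (normal) through 3 (alarm). Each reading past its
        (illustrative) threshold raises the grade by one, capped at 3."""
        score = 0
        if temp_c > 50:
            score += 1
        if co_ppm > 30:
            score += 1
        if smoke_density > 0.10:
            score += 1
        if temp_rise_c_per_min > 8:
            score += 1
        return min(score, 3)
    ```

    Running such a rule on the sink node rather than shipping raw frames for video processing is what keeps the notification path fast.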

  5. Fractal Geometry of Architecture

    NASA Astrophysics Data System (ADS)

    Lorenz, Wolfgang E.

    In fractals, smaller parts and the whole are linked together. Fractals are self-similar, as those parts are, at least approximately, scaled-down copies of the rough whole. In architecture, such a concept has also been known for a long time: not only did architects of the twentieth century call for an overall idea that is mirrored in every single detail, but Gothic cathedrals and Indian temples also offer self-similarity. This study mainly focuses on the question of whether this concept of self-similarity makes architecture with fractal properties more diverse and interesting than Euclidean Modern architecture. The first part gives an introduction and explains fractal properties in various natural and architectural objects, presenting the underlying structure by computer-programmed renderings. In this connection, differences between the fractal architectural concept and true mathematical fractals are worked out to become aware of limits. This is the basis for dealing with the problem of whether fractal-like architecture, particularly facades, can be measured, so that different designs can be compared with each other under the aspect of fractal properties. Finally, the usability of the Box-Counting Method, an easy-to-use method for measuring fractal dimension, is analyzed with regard to architecture.
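    The box-counting method lends itself to a short sketch (a minimal implementation of the usual definition; the box sizes are illustrative). Cover a binary image, e.g. a thresholded facade photograph, with grids of boxes at several scales and fit the slope of log(count) against log(1/size):

    ```python
    import numpy as np

    def box_counting_dimension(image, sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate the box-counting (fractal) dimension of a 2-D binary image."""
        counts = []
        h, w = image.shape
        for s in sizes:
            count = 0
            for i in range(0, h, s):            # tile the image with s x s boxes
                for j in range(0, w, s):
                    if image[i:i + s, j:j + s].any():
                        count += 1              # box touches the structure
            counts.append(count)
        # Dimension is the slope of log(count) against log(1/size)
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope
    ```

    A filled region yields a slope near 2 and a straight line a slope near 1, with genuinely fractal facades expected to fall in between.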

  6. Beethoven: architecture for media telephony

    NASA Astrophysics Data System (ADS)

    Keskinarkaus, Anja; Ohtonen, Timo; Sauvola, Jaakko J.

    1999-11-01

    This paper presents a new architecture and techniques for media-based telephony over wireless/wireline IP networks, called `Beethoven'. The platform supports complex media transport and mobile conferencing for multi-user environments with non-uniform access. New techniques are presented to provide advanced multimedia call management over different media types and their presentation. The routing and distribution of the media are rendered over standards-based protocols. Our approach offers a generic, distributed, object-oriented solution with interfaces where signal processing and unified messaging algorithms are embedded as instances of core classes. The platform services are divided into `basic communication', `conferencing' and `media session'. The basic communication services form the platform core and support access from scalable user interfaces to network end-points. Conferencing services take care of media filter adaptation, conversion, error resiliency, multi-party connection and event signaling, while the media session services offer resources for application-level communication between the terminals. The platform allows flexible attachment of any number of plug-in modules, and thus we use it as a test bench for multiparty/multi-point conferencing and as an evaluation bench for signal coding algorithms. In tests, our architecture showed that it can easily be scaled from a simple voice terminal to a complex multi-user conference sharing virtual data.

  7. Mitigation of numerical Cerenkov radiation and instability using a hybrid finite difference-FFT Maxwell solver and a local charge conserving current deposit

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng; Xu, Xinlu; Tableman, Adam; Decyk, Viktor K.; Tsung, Frank S.; Fiuza, Frederico; Davidson, Asher; Vieira, Jorge; Fonseca, Ricardo A.; Lu, Wei; Silva, Luis O.; Mori, Warren B.

    2015-12-01

    A hybrid Maxwell solver for fully relativistic and electromagnetic (EM) particle-in-cell (PIC) codes is described. In this solver, the EM fields are solved in k space by performing an FFT in one direction, while using finite difference operators in the other direction(s). This solver eliminates the numerical Cerenkov radiation for particles moving in the preferred direction. Moreover, the numerical Cerenkov instability (NCI) induced by the relativistically drifting plasma and beam can be eliminated using this hybrid solver by applying strategies that are similar to those recently developed for pure FFT solvers. A current correction is applied for the charge conserving current deposit to ensure that Gauss's Law is satisfied. A theoretical analysis of the dispersion properties in vacuum and in a drifting plasma for the hybrid solver is presented, and compared with PIC simulations with good agreement obtained. This hybrid solver is applied to both 2D and 3D Cartesian and quasi-3D (in which the fields and current are decomposed into azimuthal harmonics) geometries. Illustrative results for laser wakefield accelerator simulation in a Lorentz boosted frame using the hybrid solver in the 2D Cartesian geometry are presented, and compared against results from 2D UPIC-EMMA simulation which uses a pure spectral Maxwell solver, and from OSIRIS 2D lab frame simulation using the standard Yee solver. Very good agreement is obtained which demonstrates the feasibility of using the hybrid solver for high fidelity simulation of relativistically drifting plasma with no evidence of the numerical Cerenkov instability.
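    The direction-split operator at the heart of the solver can be illustrated in a few lines. As an explicitly simplified assumption, the toy below applies the idea to a scalar Laplacian rather than the EM field solve: exact spectral (FFT) differentiation along x and a second-order centered finite difference along y. The current correction and PIC machinery of the actual code are not represented.

```python
import numpy as np

def hybrid_laplacian(f, dx, dy):
    """Toy hybrid operator: FFT (k-space) second derivative along x,
    centered finite difference along y, with periodic boundaries.
    Illustrative only; not the hybrid PIC code's EM field solver.
    """
    nx = f.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # angular wavenumbers
    # Exact spectral d2/dx2: multiply by -kx^2 in Fourier space.
    d2x = np.real(np.fft.ifft(-(kx**2)[:, None] * np.fft.fft(f, axis=0),
                              axis=0))
    # Second-order centered difference d2/dy2 with periodic wrap-around.
    d2y = (np.roll(f, -1, axis=1) - 2 * f + np.roll(f, 1, axis=1)) / dy**2
    return d2x + d2y
```

    Along the FFT direction the derivative is exact up to the grid Nyquist limit, which is why the preferred (drift) direction is treated spectrally in the paper's scheme.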

  8. Flight Test Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach, and lessons learned of the adaptive controls research.

  9. Assured Mission Support Space Architecture (AMSSA) study

    NASA Technical Reports Server (NTRS)

    Hamon, Rob

    1993-01-01

    The assured mission support space architecture (AMSSA) study was conducted with the overall goal of developing a long-term requirements-driven integrated space architecture to provide responsive and sustained space support to the combatant commands. Although derivation of an architecture was the focus of the study, there are three significant products from the effort. The first is a philosophy that defines the necessary attributes for the development and operation of space systems to ensure an integrated, interoperable architecture that, by design, provides a high degree of combat utility. The second is the architecture itself; based on an interoperable system-of-systems strategy, it reflects a long-range goal for space that will evolve as user requirements adapt to a changing world environment. The third product is the framework of a process that, when fully developed, will provide essential information to key decision makers for space systems acquisition in order to achieve the AMSSA goal. It is a categorical imperative that military space planners develop space systems that will act as true force multipliers. AMSSA provides the philosophy, process, and architecture that, when integrated with the DOD requirements and acquisition procedures, can yield an assured mission support capability from space to the combatant commanders. An important feature of the AMSSA initiative is the participation by every organization that has a role or interest in space systems development and operation. With continued community involvement, the concept of the AMSSA will become a reality. In summary, AMSSA offers a better way to think about space (philosophy) that can lead to the effective utilization of limited resources (process) with an infrastructure designed to meet the future space needs (architecture) of our combat forces.

  10. Architecture for Verifiable Software

    NASA Technical Reports Server (NTRS)

    Reinholtz, William; Dvorak, Daniel

    2005-01-01

    Verifiable MDS Architecture (VMA) is a software architecture that facilitates the construction of highly verifiable flight software for NASA's Mission Data System (MDS), especially for smaller missions subject to cost constraints. More specifically, the purpose served by VMA is to facilitate aggressive verification and validation of flight software while imposing a minimum of constraints on overall functionality. VMA exploits the state-based architecture of the MDS and partitions verification issues into elements susceptible to independent verification and validation, in such a manner that scaling issues are minimized, so that relatively large software systems can be aggressively verified in a cost-effective manner.

  11. Tagged token dataflow architecture

    SciTech Connect

    Arvind; Culler, D.E.

    1983-10-01

    The demand for large-scale multiprocessor systems has been substantial for many years. The technology for fabrication of such systems is available, but attempts to extend traditional architectures to this context have met with only mild success. The authors hold that fundamental aspects of the Von Neumann architecture prohibit its extension to multiprocessor systems; they pose dataflow architectures as an alternative. These two approaches are contrasted on issues of synchronization, memory latency, and the ability to share data without constraining parallelism. 12 references.

  12. Microcomponent sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; McDonald, C.E.

    1997-03-18

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 14 figs.

  13. Microcomponent sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; McDonald, Carolyn E.

    1997-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  14. Using natural variation to investigate the function of individual amino acids in the sucrose-binding box of fructan:fructan 6G-fructosyltransferase (6G-FFT) in product formation.

    PubMed

    Ritsema, Tita; Verhaar, Auke; Vijn, Irma; Smeekens, Sjef

    2005-07-01

    Enzymes of the glycosyl hydrolase family 32 are highly similar with respect to primary sequence but catalyze divergent reactions. Previously, the importance of the conserved sucrose-binding box in determining product specificity of onion fructan:fructan 6G-fructosyltransferase (6G-FFT) was established [Ritsema et al., 2004, Plant Mol. Biol. 54: 853-863]. Onion 6G-FFT synthesizes the complex fructan neo-series inulin by transferring fructose residues to either a terminal fructose or a terminal glucose residue. In the present study we have elucidated the molecular determinants of product specificity by substitution of individual amino acids of the sucrose binding box with amino acids that are present on homologous positions in other fructosyltransferases or vacuolar invertases. Substituting the presumed nucleophile Asp85 of the beta-fructosidase motif resulted in an inactive enzyme. 6G-FFT mutants S87N and S87D did not change substrate or product specificities, whereas mutants N84Y and N84G resulted in an inactive enzyme. Most interestingly, mutants N84S, N84A, and N84Q added fructose residues preferably to a terminal fructose and hardly to the terminal glucose. This resulted in the preferential production of inulin-type fructans. Combining mutations showed that amino acid 84 determines product specificity of 6G-FFT irrespective of the amino acid at position 87. PMID:16158237

  15. Look and Do Ancient Greece. Teacher's Manual: Primary Program, Greek Art & Architecture [and] Workbook: The Art and Architecture of Ancient Greece [and] K-4 Videotape. History through Art and Architecture.

    ERIC Educational Resources Information Center

    Luce, Ann Campbell

    This resource, containing a teacher's manual, reproducible student workbook, and a color teaching poster, is designed to accompany a 21-minute videotape program, but may be adapted for independent use. Part 1 of the program, "Greek Architecture," looks at elements of architectural construction as applied to Greek structures, and demonstrates Greek…

  16. Renal adaptation during hibernation

    PubMed Central

    Martin, Sandra L.; Jain, Swati; Keys, Daniel; Edelstein, Charles L.

    2013-01-01

    Hibernators periodically undergo profound physiological changes including dramatic reductions in metabolic, heart, and respiratory rates and core body temperature. This review discusses the effect of hypoperfusion and hypothermia observed during hibernation on glomerular filtration and renal plasma flow, as well as specific adaptations in renal architecture, vasculature, the renin-angiotensin system, and upregulation of possible protective mechanisms during the extreme conditions endured by hibernating mammals. Understanding the mechanisms of protection against organ injury during hibernation may provide insights into potential therapies for organ injury during cold storage and reimplantation during transplantation. PMID:24049148

  17. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  18. Flexible weapons architecture design

    NASA Astrophysics Data System (ADS)

    Pyant, William C., III

    Present-day air-delivered weapons are of a closed architecture, with little to no ability to tailor the weapon for the individual engagement. The closed architectures require weaponeers to make the target fit the weapon instead of fitting the individual weapons to a target. The concept of flexible weapons aims to modularize weapons design using an open architecture shell into which different modules are inserted to achieve the desired target fractional damage while reducing cost and civilian casualties. This thesis shows that the architecture design factors of damage mechanism, fusing, weapons weight, guidance, and propulsion are significant in enhancing weapon performance objectives, and would benefit from modularization. Additionally, this thesis constructs an algorithm that can be used to design a weapon set for a particular target class based on these modular components.

  19. Modular avionic architectures

    NASA Astrophysics Data System (ADS)

    Trujillo, Edward

    The author presents an analysis revealing some of the salient features of modular avionics. A decomposition of the modular avionics concept is performed, highlighting some of the key features of such architectures. Several layers of architecture can be found in such concepts, including those relating to software structure, communication, and supportability. Particular emphasis is placed on the layer relating to partitioning, which gives rise to those features of integration, modularity, and commonality. Where integration is the sharing of common tasks or items to gain efficiency and flexibility, modularity is the partitioning of a system into reconfigurable and maintainable items, and commonality is partitioning to maximize the use of identical items across the range of applications. Two architectures, MASA (Modular Avionics System Architecture) and Pave Pillar, are considered in particular.

  20. Robot Electronics Architecture

    NASA Technical Reports Server (NTRS)

    Garrett, Michael; Magnone, Lee; Aghazarian, Hrand; Baumgartner, Eric; Kennedy, Brett

    2008-01-01

    An electronics architecture has been developed to enable the rapid construction and testing of prototypes of robotic systems. This architecture is designed to be a research vehicle of great stability, reliability, and versatility. A system according to this architecture can easily be reconfigured (including expanded or contracted) to satisfy a variety of needs with respect to input, output, processing of data, sensing, actuation, and power. The architecture affords a variety of expandable input/output options that enable ready integration of instruments, actuators, sensors, and other devices as independent modular units. The separation of different electrical functions onto independent circuit boards facilitates the development of corresponding simple and modular software interfaces. As a result, both hardware and software can be made to expand or contract in modular fashion while expending a minimum of time and effort.

  1. Operational Concepts for a Generic Space Exploration Communication Network Architecture

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Vaden, Karl R.; Jones, Robert E.; Roberts, Anthony M.

    2015-01-01

    This document is one of three. It describes the Operational Concept (OpsCon) for a generic space exploration communication architecture. The purpose of this particular document is to identify communication flows and data types. Two other documents accompany this document, a security policy profile and a communication architecture document. The operational concepts should be read first followed by the security policy profile and then the architecture document. The overall goal is to design a generic space exploration communication network architecture that is affordable, deployable, maintainable, securable, evolvable, reliable, and adaptable. The architecture should also require limited reconfiguration throughout system development and deployment. System deployment includes: subsystem development in a factory setting, system integration in a laboratory setting, launch preparation, launch, and deployment and operation in space.

  2. Very large scale integration (VLSI) architectures for video signal processing

    NASA Astrophysics Data System (ADS)

    Pirsch, Peter; Gehrke, Winfried

    1995-04-01

    The paper presents an overview of architectures for VLSI implementations of video compression schemes as specified by the standardization committees of the ITU and ISO, focusing on programmable architectures. Programmable video signal processors are classified and specified as homogeneous or heterogeneous processor architectures. Architectures are presented for design examples reported in the literature. Heterogeneous processors outperform homogeneous processors because dedicated modules are adapted to the requirements of special subtasks. The majority of heterogeneous processors incorporate dedicated modules for highly regular, high-performance subtasks such as the DCT and block matching. By normalization to a fictive 1.0 micron CMOS process, typical linear relationships between silicon area and throughput rate have been determined for the different architectural styles. This relationship provides a figure of merit for silicon efficiency.

  3. CORDIC processor architectures

    NASA Astrophysics Data System (ADS)

    Boehme, Johann F.; Timmermann, D.; Hahn, H.; Hosticka, Bedrich J.

    1991-12-01

    As CORDIC algorithms receive more and more attention in elementary function evaluation and signal processing applications, the problem of their VLSI realization has attracted considerable interest. In this work we review the CORDIC fundamentals covering algorithm, architecture, and implementation issues. Various aspects of the CORDIC algorithm are investigated such as efficient scale factor compensation, redundant and non-redundant addition schemes, and convergence domain. Several CORDIC processor architectures and implementation examples are discussed.
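    The fundamentals reviewed here reduce to a short iteration: in rotation mode, CORDIC rotates a vector through a sequence of micro-angles atan(2^-i) using only shifts and adds, with one constant scale-factor compensation at the end. A minimal floating-point sketch (hardware realizations use fixed-point shift-add datapaths):

```python
import math

def cordic_sincos(theta, n_iter=32):
    """Rotation-mode CORDIC: return (cos theta, sin theta) for angles
    inside the convergence domain |theta| < ~1.74 rad.
    """
    # Micro-rotation angles atan(2^-i) and the aggregate scale factor K.
    angles = [math.atan(2.0**-i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K /= math.sqrt(1.0 + 2.0**(-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0          # steer residual angle to 0
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * angles[i]
    return x * K, y * K                      # scale-factor compensation
```

    The efficient scale-factor compensation schemes discussed in the paper avoid the final multiplications by folding K into the iteration itself.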

  4. Generic Distributed Simulation Architecture

    SciTech Connect

    Booker, C.P.

    1999-05-14

    A Generic Distributed Simulation Architecture is described that allows a simulation to be automatically distributed over a heterogeneous network of computers and executed with very little human direction. A prototype Framework is presented that implements the elements of the Architecture and demonstrates the feasibility of the concepts. It provides a basis for a future, improved Framework that will support legacy models. Because the Framework is implemented in Java, it may be installed on almost any modern computer system.

  5. Towards a Framework for Modeling Space Systems Architectures

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Skipper, Joseph

    2006-01-01

    Topics covered include: 1) Statement of the problem: a) Space system architecture is complex; b) Existing terrestrial approaches must be adapted for space; c) Need a common architecture methodology and information model; d) Need appropriate set of viewpoints. 2) Requirements on a space systems model. 3) Model Based Engineering and Design (MBED) project: a) Evaluated different methods; b) Adapted and utilized RASDS & RM-ODP; c) Identified useful set of viewpoints; d) Did actual model exchanges among selected subset of tools. 4) Lessons learned & future vision.

  6. Vibrational testing of trabecular bone architectures using rapid prototype models.

    PubMed

    Mc Donnell, P; Liebschner, M A K; Tawackoli, Wafa; Mc Hugh, P E

    2009-01-01

    The purpose of this study was to investigate if standard analysis of the vibrational characteristics of trabecular architectures can be used to detect changes in the mechanical properties due to progressive bone loss. A cored trabecular specimen from a human lumbar vertebra was microCT scanned and a three-dimensional, virtual model in stereolithography (STL) format was generated. Uniform bone loss was simulated using a surface erosion algorithm. Rapid prototype (RP) replicas were manufactured from these virtualised models with 0%, 16% and 42% bone loss. Vibrational behaviour of the RP replicas was evaluated by performing a dynamic compression test through a frequency range using an electro-dynamic shaker. The acceleration and dynamic force responses were recorded and fast Fourier transform (FFT) analyses were performed to determine the response spectrum. Standard resonant frequency analysis and damping factor calculations were performed. The RP replicas were subsequently tested in compression beyond failure to determine their strength and modulus. It was found that the reductions in resonant frequency with increasing bone loss corresponded well with reductions in apparent stiffness and strength. This suggests that structural dynamics has the potential to be an alternative diagnostic technique for osteoporosis, although significant challenges must be overcome to determine the effect of the skin/soft tissue interface, the cortex and variabilities associated with in vivo testing. PMID:18555727
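    The FFT step of the vibration analysis is straightforward to sketch: transform the sampled acceleration response and read off the dominant spectral peak as the resonant frequency. The damping-factor and strength calculations from the study are not reproduced in this minimal sketch.

```python
import numpy as np

def resonant_frequency(accel, dt):
    """Return the frequency (Hz) of the largest non-DC peak in the
    FFT magnitude spectrum of a uniformly sampled acceleration signal.
    """
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=dt)
    return freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
```

    In practice a frequency-response function (response over dynamic force) would be used rather than the raw response, but the peak-picking idea is the same.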

  7. Adaptive Gaussian pattern classification. Final report

    SciTech Connect

    Priebe, C.E.; Marchette, D.J.

    1988-08-01

    A massively parallel architecture for pattern classification is described. The architecture is based on the field of density estimation. It makes use of a variant of the adaptive-kernel estimator to approximate the distributions of the classes as a sum of Gaussian distributions. These Gaussians are learned using a moved-mean, moving-covariance learning scheme. A temporal ordering scheme is implemented using decay at the input level, allowing the network to learn to recognize sequences. The learning scheme requires a single pass through the data, giving the architecture the capability of real-time learning. The first part of the paper develops the adaptive-kernel estimator. The parallel architecture is then described, and issues relevant to implementation are discussed. Finally, applications to robotic sensor fusion, intended word recognition, and vision are described.
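    The single-pass, moved-mean/moving-covariance idea can be sketched with Welford-style running updates. As a deliberate simplification of the paper's scheme, the sketch below fits one 1-D Gaussian per class (rather than an adaptive-kernel sum of Gaussians) and classifies by maximum likelihood; all names are illustrative.

```python
import math

def fit_single_pass(samples_by_class):
    """One pass through the data: running (moved-mean, moving-variance)
    Gaussian estimate per class, echoing the real-time learning claim.
    """
    model = {}
    for label, xs in samples_by_class.items():
        n, mean, m2 = 0, 0.0, 0.0
        for x in xs:                 # Welford running update
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
        model[label] = (mean, m2 / n)
    return model

def classify(model, x):
    """Maximum-likelihood label under the fitted class Gaussians."""
    def loglik(mu, var):
        return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
    return max(model, key=lambda lbl: loglik(*model[lbl]))
```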

  8. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
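    The Hotelling-observer figure of merit used to score configurations is the standard one: template w = K^-1 Δs and detectability SNR² = Δsᵀ K⁻¹ Δs, where Δs is the mean data difference between signal-present and signal-absent and K is the data covariance. A sketch of that formula (the adaptive scout-then-reconfigure loop itself is not shown):

```python
import numpy as np

def hotelling_snr(mean_diff, cov):
    """Ideal linear (Hotelling) observer: return the template
    w = K^{-1} (s2 - s1) and the detectability SNR = sqrt(ds^T K^{-1} ds).
    """
    w = np.linalg.solve(cov, mean_diff)       # Hotelling template
    return w, float(np.sqrt(mean_diff @ w))   # template, detectability
```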

  9. Agent Architectures for Compliance

    NASA Astrophysics Data System (ADS)

    Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua

    A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.

  10. Avionics System Architecture Tool

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Hall, Ronald; Traylor, Marcus; Whitfield, Adrian

    2005-01-01

    Avionics System Architecture Tool (ASAT) is a computer program intended for use during the avionics-system-architecture- design phase of the process of designing a spacecraft for a specific mission. ASAT enables simulation of the dynamics of the command-and-data-handling functions of the spacecraft avionics in the scenarios in which the spacecraft is expected to operate. ASAT is built upon I-Logix Statemate MAGNUM, providing a complement of dynamic system modeling tools, including a graphical user interface (GUI), modeling checking capabilities, and a simulation engine. ASAT augments this with a library of predefined avionics components and additional software to support building and analyzing avionics hardware architectures using these components.

  11. Software Architecture Design Reasoning

    NASA Astrophysics Data System (ADS)

    Tang, Antony; van Vliet, Hans

    Despite recent advancements in software architecture knowledge management and design rationale modeling, industrial practice lags in adopting these methods. The lack of empirical proof, and of a practical process that practitioners can easily incorporate, are among the hindrances to adoption. In particular, a process to support systematic design reasoning is not available. To rectify this issue, we propose a design reasoning process to help architects cope with an architectural design environment where design concerns are cross-cutting and diversified. We use an industrial case study to validate that the design reasoning process can help improve the quality of software architecture design. The results indicate that associating design concerns and identifying design options are important steps in design reasoning.

  12. Advanced ground station architecture

    NASA Technical Reports Server (NTRS)

    Zillig, David; Benjamin, Ted

    1994-01-01

    This paper describes a new station architecture for NASA's Ground Network (GN). The architecture makes efficient use of emerging technologies to provide dramatic reductions in size, operational complexity, and operational and maintenance costs. The architecture, which is based on recent receiver work sponsored by the Office of Space Communications Advanced Systems Program, allows integration of both GN and Space Network (SN) modes of operation in the same electronics system. It is highly configurable through software and the use of charged coupled device (CCD) technology to provide a wide range of operating modes. Moreover, it affords modularity of features which are optional depending on the application. The resulting system incorporates advanced RF, digital, and remote control technology capable of introducing significant operational, performance, and cost benefits to a variety of NASA communications and tracking applications.

  13. Adaptive Behavior for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2009-01-01

    The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.

  14. Lunar architecture and urbanism

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    1992-01-01

    Human civilization and architecture have defined each other for over 5000 years on Earth. Even in the novel environment of space, persistent issues of human urbanism will eclipse, within a historically short time, the technical challenges of space settlement that dominate our current view. By adding modern topics in space engineering, planetology, life support, human factors, material invention, and conservation to their already renaissance array of expertise, urban designers can responsibly apply ancient, proven standards to the exciting new opportunities afforded by space. Inescapable facts about the Moon set real boundaries within which tenable lunar urbanism and its component architecture must eventually develop.

  15. Synergetics and architecture

    NASA Astrophysics Data System (ADS)

    Maslov, V. P.; Maslova, T. V.

    2008-03-01

    A series of phenomena pertaining to economics, quantum physics, language, literary criticism, and especially architecture is studied from the standpoint of synergetics (the study of self-organizing complex systems). It turns out that a whole series of concrete formulas describing these phenomena is identical across these different situations. This is the case for formulas relating to the Bose-Einstein distribution of particles and the distribution of words in a frequency dictionary. This also makes it possible to apply a "quantized" form of the Zipf law to the problem of the authorship of Quiet Flows the Don and to the "blending in" of new architectural structures in an existing environment.

  16. Information architecture. Volume 3: Guidance

    SciTech Connect

    1997-04-01

    The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline, or de facto, Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  17. Low Power Adder Based Auditory Filter Architecture

    PubMed Central

    Jayanthi, V. S.

    2014-01-01

    Cochlear devices are battery powered and should have a long working life so that they do not need to be replaced every few years; hence devices with low power consumption are required. A cochlear device contains numerous filters, each responsible for a band of the audible frequency range, which together help identify speech signals. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technology node. The standard ASIC design methodology was followed to carry out the power analysis. The proposed FIR filter architecture reduces leakage power by 15% and increases performance by 2.76%. PMID:25506073
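
    The multiplier-free LUT-plus-adder idea summarized above resembles classic distributed arithmetic, where a lookup table of partial coefficient sums replaces the per-tap multipliers. A minimal Python sketch of that general technique follows; it is not the paper's Verilog design, and `da_fir`, the tap count, and the bit width are illustrative:

```python
import numpy as np

def da_fir(x, coeffs, nbits=8):
    """Distributed-arithmetic FIR: a LUT of partial coefficient sums plus
    shift-and-add accumulation replaces the per-tap multipliers."""
    ntaps = len(coeffs)
    # One LUT entry per bit pattern of the tap inputs: sum of selected coeffs.
    lut = [sum(c for j, c in enumerate(coeffs) if (k >> j) & 1)
           for k in range(1 << ntaps)]
    xq = np.clip(np.asarray(x, dtype=int), 0, (1 << nbits) - 1)  # unsigned samples
    y, taps = [], [0] * ntaps
    for sample in xq:
        taps = [int(sample)] + taps[:-1]       # shift register of recent inputs
        acc = 0
        for b in range(nbits):                 # one LUT access per bit plane
            addr = sum(((taps[j] >> b) & 1) << j for j in range(ntaps))
            acc += lut[addr] * (1 << b)        # shift-and-add, no multiplier
        y.append(acc)
    return np.array(y)
```

    For unsigned integer inputs this matches direct convolution exactly; hardware versions serialize the bit planes so that each clock cycle needs only one LUT access and one adder.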

  18. UAV Cooperation Architectures for Persistent Sensing

    SciTech Connect

    Roberts, R S; Kent, C A; Jones, E D

    2003-03-20

    With the number of small, inexpensive Unmanned Air Vehicles (UAVs) increasing, it is feasible to build multi-UAV sensing networks. In particular, by using UAVs in conjunction with unattended ground sensors, a degree of persistent sensing can be achieved. With proper UAV cooperation algorithms, sensing is maintained even when exceptional events, e.g., the loss of a UAV, occur. In this paper a cooperation technique that allows multiple UAVs to perform coordinated, persistent sensing with unattended ground sensors over a wide area is described. The technique automatically adapts the UAV paths so that, on average, the amount of time any sensor has to wait for a UAV revisit is minimized. We also describe the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture. This architecture is designed to help simulate and operate distributed sensor networks in which multiple UAVs are used to collect data.

  19. MWAHCA: A Multimedia Wireless Ad Hoc Cluster Architecture

    PubMed Central

    Diaz, Juan R.; Jimenez, Jose M.; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure for transporting data over a great variety of environments. Real-time audio and video transmission has recently increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams after they have passed through a wireless ad hoc network; this requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to deliver multimedia streams. The proposed architecture adapts the wireless network topology to improve the quality of audio and video transmissions. To achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters that are specialized in specific multimedia traffic. The real-system performance study provided at the end of the paper demonstrates the feasibility of the proposal. PMID:24737996

  20. HADL: HUMS Architectural Description Language

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Adavi, V.; Agarwal, N.; Gullapalli, S.; Kumar, P.; Sundaram, P.

    2004-01-01

    Specification of architectures is an important prerequisite for their evaluation. With the growth of health usage and monitoring systems (HUMS) in commercial and military domains, the need for the design and evaluation of HUMS architectures has also increased. In this paper, we describe HADL, the HUMS Architectural Description Language, which we have designed for this purpose. In particular, we describe the features of the language, illustrate them with examples, and show how we use it in designing domain-specific HUMS architectures. A companion paper contains details of our design methodology for HUMS architectures.

  1. American School & University Architectural Portfolio 2000 Awards: Landscape Architecture.

    ERIC Educational Resources Information Center

    American School & University, 2000

    2000-01-01

    Presents photographs and basic information on architectural design, costs, square footage, and principal designers of the award-winning school landscaping projects that competed in the American School & University Architectural Portfolio 2000. (GR)

  2. Electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes: FFT-impedance spectroscopy of the growth process and magnetic properties

    NASA Astrophysics Data System (ADS)

    Gerngross, Mark-Daniel; Carstensen, Jürgen; Föll, Helmut

    2014-06-01

    The electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes has been investigated by fast Fourier transform-impedance spectroscopy (FFT-IS) in the frequency range from 75 Hz to 18.5 kHz. The impedance data could be fitted very well using an electric circuit equivalent model with a series resistance connected in series to a simple resistor-capacitor ( RC) element and a Maxwell element. Based on the impedance data, the Co deposition in ultra-high aspect ratio InP membranes can be divided into two different Co deposition processes. The corresponding share of each process on the overall Co deposition can be determined directly from the transfer resistances of the two processes. The impedance data clearly show the beneficial impact of boric acid on the Co deposition and also indicate a diffusion limitation of boric acid in ultra-high aspect ratio InP membranes. The grown Co nanowires are polycrystalline with a very small grain size. They show a narrow hysteresis loop with a preferential orientation of the easy magnetization direction along the long nanowire axis due to the arising shape anisotropy of the Co nanowires.

  3. Comparative analysis of feature extraction (2D FFT and wavelet) and classification (Lp metric distances, MLP NN, and HNeT) algorithms for SAR imagery

    NASA Astrophysics Data System (ADS)

    Sandirasegaram, Nicholas; English, Ryan

    2005-05-01

    The performance of several combinations of feature extraction and target classification algorithms is analyzed for Synthetic Aperture Radar (SAR) imagery using the standard Moving and Stationary Target Acquisition and Recognition (MSTAR) evaluation method. For feature extraction, 2D Fast Fourier Transform (FFT) is used to extract Fourier coefficients (frequency information) while 2D wavelet decomposition is used to extract wavelet coefficients (time-frequency information), from which subsets of characteristic in-class "invariant" coefficients are developed. Confusion matrices and Receiver Operating Characteristic (ROC) curves are used to evaluate and compare combinations of these characteristic coefficients with several classification methods, including Lp metric distances, a Multi Layer Perceptron (MLP) Neural Network (NN) and AND Corporation's Holographic Neural Technology (HNeT) classifier. The evaluation method examines the trade-off between correct detection rate and false alarm rate for each combination of feature-classifier systems. It also measures correct classification, misclassification and rejection rates for a 90% detection rate. Our analysis demonstrates the importance of feature and classifier selection in accurately classifying new target images.
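
    As a rough illustration of the pipeline described above, low-frequency 2D FFT magnitudes can serve as features and an Lp metric distance to class means as a classifier. The helper names and parameters below are illustrative; the actual MSTAR feature sets and classifiers are far more elaborate:

```python
import numpy as np

def fft_features(img, k=8):
    """Magnitudes of the k x k lowest-frequency 2D FFT coefficients."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    return np.abs(F[cy - k // 2:cy + k // 2, cx - k // 2:cx + k // 2]).ravel()

def lp_classify(feat, class_means, p=2.0):
    """Assign the class whose mean feature vector is nearest in Lp distance."""
    dists = {name: np.sum(np.abs(feat - m) ** p) ** (1.0 / p)
             for name, m in class_means.items()}
    return min(dists, key=dists.get)
```

    Class means would be built from training chips; a rejection rate, as in the evaluation above, follows from thresholding the winning distance.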

  4. Electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes: FFT-impedance spectroscopy of the growth process and magnetic properties

    PubMed Central

    2014-01-01

    The electrochemical growth of Co nanowires in ultra-high aspect ratio InP membranes has been investigated by fast Fourier transform-impedance spectroscopy (FFT-IS) in the frequency range from 75 Hz to 18.5 kHz. The impedance data could be fitted very well using an electric circuit equivalent model with a series resistance connected in series to a simple resistor-capacitor (RC) element and a Maxwell element. Based on the impedance data, the Co deposition in ultra-high aspect ratio InP membranes can be divided into two different Co deposition processes. The corresponding share of each process on the overall Co deposition can be determined directly from the transfer resistances of the two processes. The impedance data clearly show the beneficial impact of boric acid on the Co deposition and also indicate a diffusion limitation of boric acid in ultra-high aspect ratio InP membranes. The grown Co nanowires are polycrystalline with a very small grain size. They show a narrow hysteresis loop with a preferential orientation of the easy magnetization direction along the long nanowire axis due to the arising shape anisotropy of the Co nanowires. PMID:25050088

  5. A novel layer 1 virtual private network provisioning architecture in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Sun, Ting; Zhang, Jie; Chen, Xiuzhong; Zhao, Yongli; Han, Dahai; Gu, Wanyi; Ji, Yuefeng

    2009-11-01

    A novel multi-domain L1VPN provisioning architecture is proposed based on service plane of the adaptive multiservices provisioning platform. It can provide the inter-domain L1VPN services flexibly, and can establish different L1VPNs by analyzing different service characteristics and constraints. Moreover, the architecture we proposed was experimentally demonstrated in our AMSON testbed.

  6. Tutorial on architectural acoustics

    NASA Astrophysics Data System (ADS)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  7. 1989 Architectural Exhibition Winners.

    ERIC Educational Resources Information Center

    School Business Affairs, 1990

    1990-01-01

    Winners of the 1989 Architectural Exhibition sponsored annually by the ASBO International's School Facilities Research Committee include the Brevard Performing Arts Center (Melbourne, Florida), the Capital High School (Santa Fe, New Mexico), Gage Elementary School (Rochester, Minnesota), the Lakewood (Ohio) High School Natatorium, and three other…

  8. Emulating an MIMD architecture

    SciTech Connect

    Su Bogong; Grishman, R.

    1982-01-01

    As part of a research effort in parallel processor architecture and programming, the ultracomputer group at New York University has performed extensive simulation of parallel programs. To speed up these simulations, a parallel processor emulator, using the microprogrammable Puma computer system previously designed and built at NYU, has been developed. 8 references.

  9. Embedded instrumentation systems architecture

    NASA Astrophysics Data System (ADS)

    Visnevski, Nikita A.

    2007-04-01

    This paper describes the operational concept of the Embedded Instrumentation Systems Architecture (EISA) that is being developed for Test and Evaluation (T&E) applications. The architecture addresses such future T&E requirements as interoperability, flexibility, and non-intrusiveness. These are the ultimate requirements that support continuous T&E objectives. In this paper, we demonstrate that these objectives can be met by decoupling the Embedded Instrumentation (EI) system into an on-board and an off-board component. An on-board component is responsible for sampling, pre-processing, buffering, and transmitting data to the off-board component. The latter is responsible for aggregating, post-processing, and storing test data as well as providing access to the data via a clearly defined interface including such aspects as security, user authentication and access control. The power of the EISA architecture approach is in its inherent ability to support virtual instrumentation as well as enabling interoperability with such important T&E systems as Integrated Network-Enhanced Telemetry (iNET), Test and Training Enabling Architecture (TENA) and other relevant Department of Defense initiatives.

  10. System Building and Architecture.

    ERIC Educational Resources Information Center

    Robbie, Roderick G.

    The technical director of the Metropolitan Toronto School Boards Study of Educational Facilities (SEF) presents a description of the general theory and execution of the first SEF building system, and his views on the general principles of system building as they might affect architecture and the economy. (TC)

  11. Making Connections through Architecture.

    ERIC Educational Resources Information Center

    Hollingsworth, Patricia

    1993-01-01

    The Center for Arts and Sciences (Oklahoma) developed an interdisciplinary curriculum for disadvantaged gifted children on styles of architecture, called "Discovering Patterns in the Built Environment." This article describes the content and processes used in the curriculum, as well as other programs of the center, such as teacher workshops,…

  12. GNU debugger internal architecture

    SciTech Connect

    Miller, P.; Nessett, D.; Pizzi, R.

    1993-12-16

    This document describes the internal architecture and implementation of the GNU debugger, gdb. Topics include inferior process management, command execution, symbol table management, and remote debugging. Call graphs for specific functions are supplied. This document is not a complete description, but offers a developer an overview that is the place to start before modification.

  13. Test Architecture, Test Retrofit

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Davidson, Fred

    2009-01-01

    Just like buildings, tests are designed and built for specific purposes, people, and uses. However, both buildings and tests grow and change over time as the needs of their users change. Sometimes, they are also both used for purposes other than those intended in the original designs. This paper explores architecture as a metaphor for language…

  14. INL Generic Robot Architecture

    2005-03-30

    The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites and low-level proprietary control application programming interfaces (e.g. mobility, aria, aware, player, etc.).

  15. A Simple Physical Optics Algorithm Perfect for Parallel Computing Architecture

    NASA Technical Reports Server (NTRS)

    Imbriale, W. A.; Cwik, T.

    1994-01-01

    A reflector antenna computer program based upon a simple discrete approximation of the radiation integral has proven to be extremely easy to adapt to the parallel computing architecture of a modest number of large-grain computing elements, such as are used in the Intel iPSC and Touchstone Delta parallel machines.

  16. India's Vernacular Architecture as a Reflection of Culture.

    ERIC Educational Resources Information Center

    Masalski, Kathleen Woods

    This paper contains the narrative for a slide presentation on the architecture of India. Through the narration, the geography and climate of the country and the social conditions of the Indian people are discussed. Roofs and windows are adapted for the hot, rainy climate, while the availability of building materials ranges from palm leaves to mud…

  17. Human Symbol Manipulation within an Integrated Cognitive Architecture

    ERIC Educational Resources Information Center

    Anderson, John R.

    2005-01-01

    This article describes the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture (Anderson et al., 2004; Anderson & Lebiere, 1998) and its detailed application to the learning of algebraic symbol manipulation. The theory is applied to modeling the data from a study by Qin, Anderson, Silk, Stenger, & Carter (2004) in which children…

  18. Commanding Constellations (Pipeline Architecture)

    NASA Technical Reports Server (NTRS)

    Ray, Tim; Condron, Jeff

    2003-01-01

    Providing ground command software for constellations of spacecraft is a challenging problem. Reliable command delivery requires a feedback loop; for a constellation there will likely be an independent feedback loop for each constellation member. Each command must be sent via the proper Ground Station, which may change from one contact to the next (and may be different for different members). Dynamic configuration of the ground command software is usually required (e.g. directives to configure each member's feedback loop and assign the appropriate Ground Station). For testing purposes, there must be a way to insert command data at any level in the protocol stack. The Pipeline architecture described in this paper can support all these capabilities with a sequence of software modules (the pipeline), and a single self-identifying message format (for all types of command data and configuration directives). The Pipeline architecture is quite simple, yet it can solve some complex problems. The resulting solutions are conceptually simple, and therefore, reliable. They are also modular, and therefore, easy to distribute and extend. We first used the Pipeline architecture to design a CCSDS (Consultative Committee for Space Data Systems) Ground Telecommand system (to command one spacecraft at a time with a fixed Ground Station interface). This pipeline was later extended to include gateways to any of several Ground Stations. The resulting pipeline was then extended to handle a small constellation of spacecraft. The use of the Pipeline architecture allowed us to easily handle the increasing complexity. This paper will describe the Pipeline architecture, show how it was used to solve each of the above commanding situations, and how it can easily be extended to handle larger constellations.

  19. ACOUSTICS IN ARCHITECTURAL DESIGN, AN ANNOTATED BIBLIOGRAPHY ON ARCHITECTURAL ACOUSTICS.

    ERIC Educational Resources Information Center

    DOELLE, LESLIE L.

    The purpose of this annotated bibliography on architectural acoustics was (1) to compile a classified bibliography, including most of those publications on architectural acoustics, published in English, French, and German, which can supply a useful and up-to-date source of information for those encountering any architectural-acoustic design…

  20. 11. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster General's Office Standard Plan 82, sheet 1. Lithograph on linen architectural drawing. April 1893 3 ELEVATIONS, 3 PLANS AND A PARTIAL SECTION - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  1. 12. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster Generals Office Standard Plan 82, sheet 2, April 1893. Lithograph on linen architectural drawing. DETAILS - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  2. An Experiment in Architectural Instruction.

    ERIC Educational Resources Information Center

    Dvorak, Robert W.

    1978-01-01

    Discusses the application of the PLATO IV computer-based educational system to a one-semester basic drawing course for freshman architecture, landscape architecture, and interior design students and relates student reactions to the experience. (RAO)

  3. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, by performing the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these technologies to accelerate optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
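
    The phase-retrieval loop described above is FFT-dominated, which is why GPUs help so much. A minimal NumPy sketch of the classic Gerchberg-Saxton iteration that MGS builds on is shown below; MGS itself adds phase diversity and other modifications not shown here, and the function and parameter names are illustrative:

```python
import numpy as np

def gerchberg_saxton(focal_amp, pupil_mask, iters=300, seed=0):
    """Recover pupil-plane phase from a measured focal-plane amplitude by
    alternating FFT projections between the pupil and focal planes."""
    rng = np.random.default_rng(seed)
    field = pupil_mask * np.exp(1j * rng.uniform(-np.pi, np.pi, pupil_mask.shape))
    for _ in range(iters):
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))    # impose measured amplitude
        field = np.fft.ifft2(focal)
        field = pupil_mask * np.exp(1j * np.angle(field))   # impose pupil support
    return np.angle(field) * pupil_mask
```

    On a GPU the same loop maps onto batched 2D FFTs (e.g., via cuFFT), which is where the reported speedup comes from.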

  4. An intelligent CNC machine control system architecture

    SciTech Connect

    Miller, D.J.; Loucks, C.S.

    1996-10-01

    Intelligent, agile manufacturing relies on automated programming of digitally controlled processes. Currently, processes such as Computer Numerically Controlled (CNC) machining are difficult to automate because of highly restrictive controllers and poor software environments. It is also difficult to utilize sensors and process models for adaptive control, or to integrate machining processes with other tasks within a factory floor setting. As part of a Laboratory Directed Research and Development (LDRD) program, a CNC machine control system architecture based on object-oriented design and graphical programming has been developed to address some of these problems and to demonstrate automated agile machining applications using platform-independent software.

  5. Compositional Specification of Software Architecture

    NASA Technical Reports Server (NTRS)

    Penix, John; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes our experience using parameterized algebraic specifications to model properties of software architectures. The goal is to model the decomposition of requirements independent of the style used to implement the architecture. We begin by providing an overview of the role of architecture specification in software development. We then describe how architecture specifications are built up from component and connector specifications and give an overview of insights gained from a case study used to validate the method.

  6. Controlling Material Reactivity Using Architecture.

    PubMed

    Sullivan, Kyle T; Zhu, Cheng; Duoss, Eric B; Gash, Alexander E; Kolesky, David B; Kuntz, Joshua D; Lewis, Jennifer A; Spadaccini, Christopher M

    2016-03-01

    3D-printing methods are used to generate reactive material architectures. Several geometric parameters are observed to influence the resultant flame propagation velocity, indicating that the architecture can be utilized to control reactivity. Two different architectures, channels and hurdles, are generated, and thin films of thermite are deposited onto the surface. The architecture offers an additional route to control, at will, the energy release rate in reactive composite materials. PMID:26669517

  7. Adaptive building skin structures

    NASA Astrophysics Data System (ADS)

    Del Grosso, A. E.; Basso, P.

    2010-12-01

    The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face; the search for low-energy-consumption or even energy-harvesting green buildings is among them. This paper first presents a review of the above problems and technologies, which shows how their solution requires a multidisciplinary approach involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is a novel optimization process that guides the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.

  8. Cognitive Architectures for Multimedia Learning

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2006-01-01

    This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio's dual coding theory,…

  9. Adaptive-array Electron Cyclotron Emission diagnostics using data streaming in a Software Defined Radio system

    NASA Astrophysics Data System (ADS)

    Idei, H.; Mishra, K.; Yamamoto, M. K.; Hamasaki, M.; Fujisawa, A.; Nagashima, Y.; Hayashi, Y.; Onchi, T.; Hanada, K.; Zushi, H.; the QUEST team

    2016-04-01

    Measurement of the Electron Cyclotron Emission (ECE) spectrum is one of the most popular electron temperature diagnostics in nuclear fusion plasma research. A 2-dimensional ECE imaging system was developed with an adaptive-array approach. A radio-frequency (RF) heterodyne detection system with Software Defined Radio (SDR) devices and a phased-array receiver antenna was used to measure the phase and amplitude of the ECE wave. The SDR heterodyne system could continuously measure the phase and amplitude with sufficient accuracy and time resolution, while the previous digitizer system could only acquire data at specific times. Robust streaming phase measurements for adaptive-arrayed continuous ECE diagnostics were demonstrated using Fast Fourier Transform (FFT) analysis with the SDR system. The emission field pattern was reconstructed using adaptive-array analysis. The reconstructed profiles were discussed using profiles calculated from coherent single-frequency radiation from the phased-array antenna.
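
    The array reconstruction step described above amounts to phase-aligning and summing the per-element measurements. A minimal delay-and-sum sketch for a linear array is shown below; the paper's adaptive-array algorithm, geometry, and frequencies differ, and all names here are illustrative:

```python
import numpy as np

def delay_and_sum(amplitudes, phases, element_x, angles, freq, c=3e8):
    """Reconstruct an emission pattern from per-element amplitude and phase
    by phase-aligning and coherently summing across a linear array."""
    k = 2 * np.pi * freq / c                     # free-space wavenumber
    signals = amplitudes * np.exp(1j * phases)   # complex element signals
    # Steering vector for each look angle, summed coherently over elements.
    steer = np.exp(-1j * k * np.outer(np.sin(angles), element_x))
    return np.abs(steer @ signals)
```

    Scanning `angles` over the field of view yields a beam pattern whose peak marks the apparent source direction.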

  10. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
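
    The DFT-IDFT overlap-and-save building block that these architectures decompose is easy to sketch. A minimal Python version of plain overlap-save follows; the report's partitioning of long filters into frequency-domain subfilters (subconvolutions) is not shown, and `nfft` is illustrative:

```python
import numpy as np

def overlap_save(x, h, nfft=64):
    """Overlap-save FFT filtering: process x in blocks of nfft samples and
    keep only the outputs unaffected by circular wrap-around."""
    M = len(h)
    step = nfft - (M - 1)                 # new output samples per block
    H = np.fft.fft(h, nfft)
    x_pad = np.concatenate([np.zeros(M - 1), np.asarray(x, float),
                            np.zeros(step)])
    out = []
    for start in range(0, len(x), step):
        block = x_pad[start:start + nfft]
        if len(block) < nfft:             # zero-pad a short final block
            block = np.pad(block, (0, nfft - len(block)))
        y = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(y[M - 1:])             # first M-1 outputs are wrapped; drop
    return np.concatenate(out)[:len(x)]
```

    As the report notes, the DFT size is set by the desired reduction in processing rate, not the filter order; subfilter decomposition then removes the remaining limit on tap length.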

  11. Open architecture CNC system

    SciTech Connect

    Tal, J.; Lopez, A.; Edwards, J.M.

    1995-04-01

    In this paper, an alternative solution to the traditional CNC machine tool controller has been introduced. Software and hardware modules have been described and their incorporation in a CNC control system has been outlined. This type of CNC machine tool controller demonstrates that the technology is accessible and can be readily implemented into an open architecture machine tool controller. The benefit to the user is greater controller flexibility at an economically achievable cost. PC-based motion as well as non-motion features provide flexibility through a Windows environment. Upgrading this type of controller system through software revisions will keep the machine tool competitive with minimal effort. Software and hardware modules are mass produced, permitting competitive procurement and incorporation. Open architecture CNC systems provide diagnostics, thus enhancing maintainability and machine tool up-time. A major concern with traditional CNC systems has been operator training time, which can be greatly reduced by making use of Windows environment features.

  12. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language; however, the semantics of UML models is defined in natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. This verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  13. Instrumented Architectural Simulation System

    NASA Technical Reports Server (NTRS)

    Delagi, B. A.; Saraiya, N.; Nishimura, S.; Byrd, G.

    1987-01-01

    Simulation of systems at an architectural level can offer an effective way to study critical design choices if (1) the performance of the simulator is adequate to examine designs executing significant code bodies, not just toy problems or small application fragments, (2) the details of the simulation include the critical details of the design, (3) the view of the design presented by the simulator instrumentation leads to useful insights on the problems with the design, and (4) there is enough flexibility in the simulation system so that the asking of unplanned questions is not suppressed by the weight of the mechanics involved in making changes either in the design or its measurement. A simulation system with these goals is described together with the approach to its implementation. Its application to the study of a particular class of multiprocessor hardware system architectures is illustrated.

  14. Generic robot architecture

    SciTech Connect

    Bruemmer, David J; Few, Douglas A

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.
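
    The two-level separation the patent describes can be sketched in a few lines (the names and behaviors here are illustrative assumptions, not the patented interfaces): a hardware abstraction wraps a sensor module, a robot attribute isolates behaviors from that abstraction, and a behavior consumes only the attribute.

    ```python
    from abc import ABC, abstractmethod

    class RangeSensorAbstraction(ABC):
        """Hardware abstraction level: defines and monitors a range-sensor module."""
        @abstractmethod
        def read_ranges(self):
            """Return a list of range readings in metres."""

    class SimulatedLidar(RangeSensorAbstraction):
        """Concrete hardware abstraction for one (simulated) platform."""
        def read_ranges(self):
            return [2.0, 0.4, 3.1]

    class ObstructionAttribute:
        """Robot attribute: carries hardware information but isolates
        behaviors from the underlying hardware abstraction."""
        def __init__(self, sensor, threshold=0.5):
            self.sensor = sensor
            self.threshold = threshold
        def obstructed(self):
            return min(self.sensor.read_ranges()) < self.threshold

    def avoid_behavior(attribute):
        """Robot behavior built only from the attribute, never from hardware."""
        return "stop" if attribute.obstructed() else "go"

    print(avoid_behavior(ObstructionAttribute(SimulatedLidar())))  # "stop": 0.4 < 0.5
    ```

    Porting to a new platform then means writing only a new `RangeSensorAbstraction` subclass; attributes and behaviors are unchanged, which is the portability claim of the architecture.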

  15. Aerobot Autonomy Architecture

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Hall, Jeffery L.; Kulczycki, Eric A.; Cameron, Jonathan M.; Morfopoulos, Arin C.; Clouse, Daniel S.; Montgomery, James F.; Ansar, Adnan I.; Machuzak, Richard J.

    2009-01-01

    An architecture for autonomous operation of an aerobot (i.e., a robotic blimp) to be used in scientific exploration of planets and moons in the Solar system with an atmosphere (such as Titan and Venus) is undergoing development. This architecture is also applicable to autonomous airships that could be flown in the terrestrial atmosphere for scientific exploration, military reconnaissance and surveillance, and as radio-communication relay stations in disaster areas. The architecture was conceived to satisfy requirements to perform the following functions: a) Vehicle safing, that is, ensuring the integrity of the aerobot during its entire mission, including during extended communication blackouts. b) Accurate and robust autonomous flight control during operation in diverse modes, including launch, deployment of scientific instruments, long traverses, hovering or station-keeping, and maneuvers for touch-and-go surface sampling. c) Mapping and self-localization in the absence of a global positioning system. d) Advanced recognition of hazards and targets in conjunction with tracking of, and visual servoing toward, targets, all to enable the aerobot to detect and avoid atmospheric and topographic hazards and to identify, home in on, and hover over predefined terrain features or other targets of scientific interest. The architecture is an integrated combination of systems for accurate and robust vehicle and flight trajectory control; estimation of the state of the aerobot; perception-based detection and avoidance of hazards; monitoring of the integrity and functionality ("health") of the aerobot; reflexive safing actions; multi-modal localization and mapping; autonomous planning and execution of scientific observations; and long-range planning and monitoring of the mission of the aerobot. The prototype JPL aerobot (see figure) has been tested extensively in various areas in the California Mojave desert.

  16. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kind of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); and Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.

  17. Staged Event Architecture

    2005-05-30

    Sea is a framework for a Staged Event Architecture, designed around non-blocking asynchronous communication facilities that are decoupled from the threading model chosen by any given application. Components for IP networking and in-memory communication are provided. The Sea Java library encapsulates these concepts. Sea is used to easily build efficient and flexible low-level network clients and servers, and in particular serves as a basic communication substrate for peer-to-peer applications.

  18. Information systems definition architecture

    SciTech Connect

    Calapristi, A.J.

    1996-06-20

    The Tank Waste Remediation System (TWRS) Information Systems Definition architecture evaluated information management (IM) processes in several key organizations. The intent of the study is to identify improvements in TWRS IM processes that will enable better support of the TWRS mission and accommodate changes in the TWRS business environment. The ultimate goals of the study are to reduce IM costs, manage the configuration of TWRS IM elements, and improve IM-related process performance.

  19. Avionics Architecture Modelling Language

    NASA Astrophysics Data System (ADS)

    Alana, Elena; Naranjo, Hector; Valencia, Raul; Medina, Alberto; Honvault, Christophe; Rugina, Ana; Panunzia, Marco; Dellandrea, Brice; Garcia, Gerald

    2014-08-01

    This paper presents the ESA AAML (Avionics Architecture Modelling Language) study, which aimed at advancing the avionics engineering practices towards a model-based approach by (i) identifying and prioritising the avionics-relevant analyses, (ii) specifying the modelling language features necessary to support the identified analyses, and (iii) recommending/prototyping software tooling to demonstrate the automation of the selected analyses based on a modelling language and compliant with the defined specification.

  20. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
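
    A toy version of such a declarative pattern language (an illustrative sketch, not CERA's actual syntax or matching algorithm) can be built from three combinators over a window of point events:

    ```python
    def occurs(name):
        """Point-event pattern: true if the named event is in the window."""
        return lambda events: name in events

    def conj(*patterns):
        """Conjunction: all sub-patterns must match."""
        return lambda events: all(p(events) for p in patterns)

    def disj(*patterns):
        """Disjunction: any sub-pattern may match."""
        return lambda events: any(p(events) for p in patterns)

    def neg(pattern):
        """Negation: the sub-pattern must not match."""
        return lambda events: not pattern(events)

    # "overheat AND (fan_fail OR NOT coolant_ok)" -- patterns compose recursively
    anomaly = conj(occurs("overheat"),
                   disj(occurs("fan_fail"), neg(occurs("coolant_ok"))))

    print(anomaly({"overheat", "fan_fail", "coolant_ok"}))  # True
    print(anomaly({"overheat", "coolant_ok"}))              # False
    ```

    CERA additionally handles temporal extent, multiple input streams, and efficient run-time matching, all of which this sketch omits.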

  1. Quantifying Loopy Network Architectures

    PubMed Central

    Katifori, Eleni; Magnasco, Marcelo O.

    2012-01-01

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs. PMID:22701593
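
    One of the tree metrics mentioned, the Strahler order underlying the bifurcation ratios, is simple to state on the binary trees produced by the decomposition (the nested-tuple encoding below is an assumption for illustration):

    ```python
    def strahler(node):
        """Strahler order of a binary tree encoded as nested (left, right)
        tuples; any non-tuple value is a leaf (order 1)."""
        if not isinstance(node, tuple):
            return 1
        left, right = strahler(node[0]), strahler(node[1])
        # order rises only when two equal-order branches meet
        return left + 1 if left == right else max(left, right)

    # A balanced tree gains an order at each level; a caterpillar does not.
    print(strahler(((0, 0), (0, 0))))   # 3
    print(strahler((((0, 0), 0), 0)))   # 2
    ```

    Comparing such orders across levels gives the bifurcation ratios used in the paper to contrast model-generated and natural loopy networks.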

  2. Architecture of Chinese Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Cui, Chen-Zhou; Zhao, Yong-Heng

    2004-06-01

    The Virtual Observatory (VO) has been brought forward against the background of progress in astronomical technologies and information technologies, and its architecture design embodies the combination of the two. As an introduction to the VO, its principles and workflow are given first, followed by the latest progress on VO architecture. Based on Grid technology, a layered architecture model and a service-oriented architecture model are given for the Chinese Virtual Observatory. In the last part of the paper, some problems in architecture design are discussed in detail.

  3. Three-dimensional elastic wave modeling using a CG-FFT approach to the solution of a contrast-source stress-velocity integral-equation formulation

    NASA Astrophysics Data System (ADS)

    Yang, J.; Abubakar, A.

    2012-12-01

    The ability to accurately and efficiently simulate elastic wave scattering processes is very important in geophysical prospecting applications. A recently proposed formulation of an integral equation for solving three-dimensional elastic wave scattering problems is numerically implemented. The approach is formulated in terms of the stress tensor and particle velocity vector, where the symmetric tensors of rank two are decomposed into their omnidirectional and deviatoric constituents. Subsequently, this integral equation is used to obtain a contrast-source type integral equation. For solving these integral equations we employ a Conjugate Gradient Fast Fourier Transform (CG-FFT) scheme, which is based on quadrature formulas that provide (second-order) accurate approximations while retaining the convolution nature of the relevant integrals that make them amenable to efficient evaluation via Fast Fourier Transforms. As linear solvers we employ the Conjugate Gradient for Normal Residual (CGNR) scheme, which is always monotonically convergent but has a slow convergence rate, and the Bi-Conjugate Gradient Stabilized (BiCGSTAB) scheme, which is more efficient but less stable. The convergence rates of the iterative schemes are further improved through the use of a simple diagonal preconditioner. We show a number of numerical results that demonstrate the accuracy and efficiency of the implemented 3D elastic modeling approach. Numerical models include both simple synthetic models and classic seismic test models (such as the SEG/EAGE salt model and the Marmousi2 model). Excellent benchmark results against a Finite Difference Time Domain (FDTD) algorithm are also presented. These features suggest that the present numerical scheme may provide the basis for the so-called contrast-source inversion method.
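
    The CGNR iteration named above applies conjugate gradients to the normal equations A^T A x = A^T b. A minimal dense-matrix sketch follows (in the paper's scheme the products with A are convolutions evaluated via FFT; here they are plain matrix-vector products for illustration only):

    ```python
    def matvec(A, x):
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    def transpose(A):
        return [list(col) for col in zip(*A)]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def cgnr(A, b, iters=50, tol=1e-12):
        """Conjugate gradient on the normal equations A^T A x = A^T b."""
        At = transpose(A)
        x = [0.0] * len(A[0])
        r = list(b)              # residual b - A x
        z = matvec(At, r)        # residual of the normal equations
        p = list(z)
        for _ in range(iters):
            if dot(z, z) < tol ** 2:
                break            # converged
            Ap = matvec(A, p)
            alpha = dot(z, z) / dot(Ap, Ap)
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, Ap)]
            z_new = matvec(At, r)
            beta = dot(z_new, z_new) / dot(z, z)
            p = [zi + beta * pi for zi, pi in zip(z_new, p)]
            z = z_new
        return x

    x = cgnr([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
    print([round(v, 6) for v in x])
    ```

    The monotone decrease of the normal-equation residual is what the abstract refers to as the scheme being "always monotonically convergent"; the FFT enters only in how the operator products are evaluated, not in the iteration itself.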

  4. The path to adaptive microsystems

    NASA Astrophysics Data System (ADS)

    Zolper, John C.; Biercuk, Michael J.

    2006-05-01

    Scaling trends in microsystems are discussed frequently in the technical community, providing a short-term perspective on the future of integrated microsystems. This paper looks beyond the leading edge of technological development, focusing on new microsystem design paradigms that move far beyond today's systems based on static components. We introduce the concept of Adaptive Microsystems and outline a path to realizing these systems-on-a-chip. The role of DARPA in advancing future components and systems research is discussed, and specific DARPA efforts enabling and producing adaptive microsystems are presented. In particular, we discuss efforts underway in the DARPA Microsystems Technology Office (MTO) including programs in novel circuit architectures (3DIC), adaptive imaging and sensing (AFPA, VISA, MONTAGE, A-to-I) and reconfigurable RF/Microwave devices (SMART, TFAST, IRFFE).

  5. Capital Architecture: Situating symbolism parallel to architectural methods and technology

    NASA Astrophysics Data System (ADS)

    Daoud, Bassam

    Capital Architecture is a symbol of a nation's global presence and the cultural and social focal point of its inhabitants. Since the advent of High-Modernism in Western cities, and subsequently decolonised capitals, civic architecture no longer seems to be strictly grounded in the philosophy that national buildings shape the legacy of government and the way a nation is regarded through its built environment. Amidst an exceedingly globalized architectural practice and with the growing concern of key heritage foundations over the shortcomings of international modernism in representing its immediate socio-cultural context, the contextualization of public architecture within its sociological, cultural and economic framework in capital cities became the key denominator of this thesis. Civic architecture in capital cities is essential to confront the challenges of symbolizing a nation and demonstrating the legitimacy of the government. In today's dominantly secular Western societies, governmental architecture, especially where the seat of political power lies, is the ultimate form of architectural expression in conveying a sense of identity and underlining a nation's status. Departing from these convictions, this thesis investigates the embodied symbolic power, the representative capacity, and the inherent permanence in contemporary architecture, and in its modes of production. Through a vast study on Modern architectural ideals and heritage -- in parallel to methodologies -- the thesis stimulates the future of large scale governmental building practices and aims to identify and index the key constituents that may respond to the lack of representation in civic architecture in capital cities.

  6. Teacher Adaptation to Open Learning Spaces

    ERIC Educational Resources Information Center

    Alterator, Scott; Deed, Craig

    2013-01-01

    The "open classroom" emerged as a reaction against the industrial-era enclosed and authoritarian classroom. Although contemporary school architecture continues to incorporate and express ideas of openness, more research is needed about how teachers adapt to new and different built contexts. Our purpose is to identify teacher reaction to…

  7. The Elements Of Adaptive Neural Expert Systems

    NASA Astrophysics Data System (ADS)

    Healy, Michael J.

    1989-03-01

    The generalization properties of a class of neural architectures can be modelled mathematically. The model is a parallel predicate calculus based on pattern recognition and self-organization of long-term memory in a neural network. It may provide the basis for adaptive expert systems capable of inductive learning and rapid processing in a highly complex and changing environment.

  8. "Unwalling" the Classroom: Teacher Reaction and Adaptation

    ERIC Educational Resources Information Center

    Deed, Craig; Lesko, Thomas

    2015-01-01

    Modern open school architecture abstractly expresses ideas about choice, flexibility and autonomy. While open spaces express and authorise different teaching practice, these versions of school and classrooms present challenges to teaching routines and practice. This paper examines how teachers adapt as they move into new school buildings designed…

  9. Adaptive Modeling Language and Its Derivatives

    NASA Technical Reports Server (NTRS)

    Chemaly, Adel

    2006-01-01

    Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.

  10. Cognitive Architectures and Autonomy: A Comparative Review

    NASA Astrophysics Data System (ADS)

    Thórisson, Kristinn; Helgasson, Helgi

    2012-05-01

    One of the original goals of artificial intelligence (AI) research was to create machines with very general cognitive capabilities and a relatively high level of autonomy. It has taken the field longer than many had expected to achieve even a fraction of this goal; the community has focused on building specific, targeted cognitive processes in isolation, and as of yet no system exists that integrates a broad range of capabilities or presents a general solution to autonomous acquisition of a large set of skills. Among the reasons for this are the highly limited machine learning and adaptation techniques available, and the inherent complexity of integrating numerous cognitive and learning capabilities in a coherent architecture. In this paper we review selected systems and architectures built expressly to address integrated skills. We highlight principles and features of these systems that seem promising for creating generally intelligent systems with some level of autonomy, and discuss them in the context of the development of future cognitive architectures. Autonomy is a key property for any system to be considered generally intelligent, in our view; we use this concept as an organizing principle for comparing the reviewed systems. Features that remain largely unaddressed in present research, but seem nevertheless necessary for such efforts to succeed, are also discussed.

  11. Architectures for Time-domain Astronomy

    NASA Astrophysics Data System (ADS)

    Seaman, R.; Allan, A.; Pierfederici, F.; Williams, R.

    2009-09-01

    Wonder at the changing sky predates recorded history. Empirical studies of time-varying celestial phenomena date back to Galileo and Tycho. Telegrams conveying news of transient and recurrent events have been key astronomical infrastructure since the nineteenth century. Recent micro-lensing, supernova and gamma-ray burst studies have led to a succession of exciting discoveries, but massive new time-domain surveys will soon overwhelm our nineteenth-century transient response technologies. Meeting this challenge demands new autonomous architectures for astronomy. These architectures should reach from proposing new research, through experimental design and the scheduling of telescope operations, to the archiving and pipeline-processing of data to discover new transients, to the publishing of these events, through automated follow-up via robotic and ToO assets, and to the display and analysis of observational results. All will lead to adaptive adjustment of time-domain investigations. The IVOA VOEvent protocol provides an engine for purpose-built astronomical architectures.

  12. Fault-tolerant architectures for superconducting qubits

    NASA Astrophysics Data System (ADS)

    DiVincenzo, David P.

    2009-12-01

    In this short review, I draw attention to new developments in the theory of fault tolerance in quantum computation that may give concrete direction to future work in the development of superconducting qubit systems. The basics of quantum error-correction codes, which I will briefly review, have not significantly changed since their introduction 15 years ago. But an interesting picture has emerged of an efficient use of these codes that may put fault-tolerant operation within reach. It is now understood that two-dimensional surface codes, close relatives of the original toric code of Kitaev, can be adapted as shown by Raussendorf and Harrington to effectively perform logical gate operations in a very simple planar architecture, with error thresholds for fault-tolerant operation simulated to be 0.75%. This architecture uses topological ideas in its functioning, but it is not 'topological quantum computation'—there are no non-abelian anyons in sight. I offer some speculations on the crucial pieces of superconducting hardware that could be demonstrated in the next couple of years that would be clear stepping stones towards this surface-code architecture.

  13. A Distributed Prognostic Health Management Architecture

    NASA Technical Reports Server (NTRS)

    Bhaskar, Saha; Saha, Sankalita; Goebel, Kai

    2009-01-01

    This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, the complexity of the prognostic routine typically exceeds the computational power of a single computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses a nominal threshold, upon which they coordinate with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
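
    A minimal bootstrap particle filter conveys the predict-weight-resample cycle that both the diagnostic and prognostic tasks are formulated around (a generic scalar random-walk example as an illustrative assumption, not the paper's electrical power system models):

    ```python
    import math
    import random

    def particle_filter(observations, n=500, proc_sigma=0.5, obs_sigma=1.0, seed=1):
        """Bootstrap particle filter for a scalar random-walk state."""
        rng = random.Random(seed)
        particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
        estimates = []
        for y in observations:
            # predict: propagate particles through the process model
            particles = [p + rng.gauss(0.0, proc_sigma) for p in particles]
            # update: weight particles by the Gaussian observation likelihood
            weights = [math.exp(-0.5 * ((y - p) / obs_sigma) ** 2) for p in particles]
            total = sum(weights)
            weights = [w / total for w in weights]
            # state estimate: weighted posterior mean (uncertainty is explicit
            # in the particle cloud itself)
            estimates.append(sum(w * p for w, p in zip(weights, particles)))
            # resample to manage particle degeneracy
            particles = rng.choices(particles, weights=weights, k=n)
        return estimates

    print([round(e, 2) for e in particle_filter([0.2, 0.4, 0.9, 1.4, 2.0])])
    ```

    In the distributed setting described above, the expensive parts of this loop (likelihood evaluation and propagation over many particles) are what the networked CEs would partition among themselves.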

  14. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, and data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience, we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  15. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  16. Emerging hierarchies in dynamically adapting webs

    NASA Astrophysics Data System (ADS)

    Katifori, Eleni; Graewer, Johannes; Magnasco, Marcelo; Modes, Carl

    Transport networks play a key role across four realms of eukaryotic life: slime molds, fungi, plants, and animals. In addition to the developmental algorithms that build them, many also employ adaptive strategies to respond to stimuli, damage, and other environmental changes. We model these adapting network architectures using a generic dynamical system on weighted graphs and find in simulation that these networks ultimately develop a hierarchical organization of the final weighted architecture accompanied by the formation of a system-spanning backbone. We quantify the hierarchical organization of the networks by developing an algorithm that decomposes the architecture to multiple scales and analyzes how the organization in each scale relates to that of the scale above and below it. The methodologies developed in this work are applicable to a wide range of systems including the slime mold Physarum polycephalum, human microvasculature, and force chains in granular media.
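
    The flavor of such adaptation dynamics can be shown on the smallest possible "web": two parallel edges sharing a unit flow. The update rule below is a generic assumption, not the paper's exact model: flow reinforces conductance quadratically while a linear decay prunes underused edges, so a single backbone emerges.

    ```python
    def adapt(c1=0.55, c2=0.45, steps=400, decay=0.1, dt=0.1):
        """Two parallel edges share a unit flow in proportion to conductance.
        Euler steps of dC/dt = Q**2 - decay*C: flow reinforces the loaded
        edge, decay prunes the other, and a single backbone survives."""
        for _ in range(steps):
            q1 = c1 / (c1 + c2)   # fraction of flow carried by edge 1
            q2 = 1.0 - q1
            c1 += dt * (q1 ** 2 - decay * c1)
            c2 += dt * (q2 ** 2 - decay * c2)
        return c1, c2

    c1, c2 = adapt()
    print(round(c1, 3), round(c2, 3))
    ```

    Starting from a slight asymmetry, the initially stronger edge ends up carrying essentially all the flow; on a full graph the analogous instability is what drives the emergence of the system-spanning backbone and the hierarchy the authors quantify.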

  17. Architectural Lessons: Look Back In Order To Move Forward

    NASA Astrophysics Data System (ADS)

    Huang, T.; Djorgovski, S. G.; Caltagirone, S.; Crichton, D. J.; Hughes, J. S.; Law, E.; Pilone, D.; Pilone, T.; Mahabal, A.

    2015-12-01

    True elegance of scalable and adaptable architecture is not about incorporating the latest and greatest technologies. Its elegance is measured by its ability to scale and adapt as its operating environment evolves over time. Architecture is the link that bridges people, process, policies, interfaces, and technologies. Architectural development begins by observing the relationships which really matter to the problem domain. It continues with the creation of a single, shared, evolving pattern language, which everyone contributes to and everyone can use [C. Alexander, 1979]. Architects are the true artists. Like all masterpieces, the value and strength of an architecture are measured not by volumes of publications but by its ability to evolve. An architect must look back in order to move forward. This talk discusses some prior works, including an onboard data analysis system, a knowledge-base system, and a cloud-based Big Data platform, as enablers that help shape the new generation of Earth Science projects at NASA and EarthCube, where a community-driven architecture is the key to enabling data-intensive science. [C. Alexander, The Timeless Way of Building, Oxford University Press, 1979]

  18. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation method and the choice of mesh grid are both very important for obtaining correct results in wave-optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the mesh grid chosen on the target board directly influences the validity of the simulation results. An adaptive mesh-selection method based on wave characteristics is therefore proposed together with the introduced propagation method, so that appropriate mesh grids on the target board can be calculated to obtain satisfactory results. For a complex initial wave field, or for propagation through inhomogeneous media, the mesh grid can likewise be set rationally according to the above method. Finally, comparison with theoretical results shows that simulations using the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method further shows that the proposed method adapts to a wider range of Fresnel-number conditions; that is, it simulates propagation efficiently and correctly for distances from almost zero to infinity. It can therefore provide better support for wave-propagation applications such as atmospheric optics and laser propagation.
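
    A hedged sketch of wave-characteristic-driven grid selection follows. The Fresnel-number criterion and both spacing formulas are illustrative assumptions, not the authors' exact rule: in the near field the source-plane sampling is kept (angular-spectrum regime), while in the far field the grid is scaled to the diffraction-limited spot (direct-FFT regime).

    ```python
    def fresnel_number(aperture_radius, wavelength, distance):
        """Dimensionless Fresnel number N_F = a**2 / (wavelength * z)."""
        return aperture_radius ** 2 / (wavelength * distance)

    def choose_grid(aperture_radius, wavelength, distance, n=1024):
        """Pick a target-plane sample spacing from the propagation regime
        (a hypothetical criterion for illustration)."""
        nf = fresnel_number(aperture_radius, wavelength, distance)
        if nf >= 1.0:
            # near field: keep the source-plane sampling
            dx = 2 * aperture_radius / n
            method = "angular-spectrum"
        else:
            # far field: resolve the diffraction-limited spot
            dx = wavelength * distance / (n * 2 * aperture_radius)
            method = "direct-fft"
        return method, dx

    print(choose_grid(5e-3, 633e-9, 1.0))      # near-field case
    print(choose_grid(5e-3, 633e-9, 1000.0))   # far-field case
    ```

    The point of the sketch is only that the target-plane grid is computed from wave characteristics (aperture, wavelength, distance) rather than fixed by the propagation routine, which is the freedom the proposed method provides.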

  19. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  20. Adaptive Development

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next-generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads; as such, they are prone to wear due to the required tight running clearances during operation; and blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g., distributed vectored, hybrid, and electric drive propulsion concepts).

  1. The EPOS ICT Architecture

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Bailo, Daniele

    2016-04-01

    The EPOS-PP project (2010-2014) proposed an architecture and demonstrated feasibility with a prototype. Requirements based on use cases were collected, and an inventory of assets (e.g., datasets, software, users, computing resources, equipment/detectors, laboratory services) (RIDE) was developed. The architecture evolved through three stages of refinement, with much consultation both with the EPOS community, representing EPOS users and participants in geoscience, and with the wider ICT community, especially those working on research data such as the RDA (Research Data Alliance) community. The architecture consists of a central ICS (Integrated Core Services) comprising a portal and a catalog, the latter providing end-users a 'map' of all EPOS resources (datasets, software, users, computing, equipment/detectors, etc.). ICS is extended to ICS-d (distributed ICS) for certain services (such as visualisation software services or Cloud computing resources) and CES (Computational Earth Science) for specific simulation or analytical processing. ICS also communicates with TCS (Thematic Core Services), which represent European-wide portals to national and local assets, resources, and services in the various specific domains of EPOS (e.g., seismology, volcanology, geodesy). The EPOS-IP project (2015-2019) started in October 2015. Two work-packages cover the ICT aspects: WP6 involves interaction with the TCS, while WP7 concentrates on ICS, including interoperation with ICS-d and CES offerings -- in short, the ICT architecture. Based on the experience and results of EPOS-PP, the ICT team held a pre-meeting in July 2015 and set out a project plan. The first major activity involved requirements (re-)collection with use cases, along with updating the inventory of assets held by the various TCS in EPOS. The RIDE database of assets is currently being converted to CERIF (Common European Research Information Format - an EU Recommendation to Member States) to provide the basis for the EPOS-IP ICS Catalog. In

  2. Efficient Algorithm and Architecture of Critical-Band Transform for Low-Power Speech Applications

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Gan, Woon-Seng

    2007-12-01

    An efficient algorithm and its corresponding VLSI architecture for the critical-band transform (CBT) are developed to approximate the critical-band filtering of the human ear. The CBT consists of a constant-bandwidth transform in the lower frequency range and a Brown constant-Q transform (CQT) in the higher frequency range. The corresponding VLSI architecture is proposed to achieve significant power efficiency by reducing the computational complexity, using pipeline and parallel processing, and applying the supply-voltage scaling technique. A 21-band Bark-scale CBT processor with a sampling rate of 16 kHz is designed and simulated. Simulation results verify its suitability for performing short-time spectral analysis on speech. It provides a better fit to the human ear's critical-band analysis, requires significantly fewer computations, and is therefore more energy-efficient than other methods. With a 0.35 μm CMOS technology, it processes a 160-point speech frame in 4.99 milliseconds at 234 kHz. The power dissipation is 15.6 μW at 1.1 V, an 82.1% power reduction compared to a benchmark 256-point FFT processor.
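    As a rough illustration of what a 21-band Bark-scale analysis of a 160-point, 16 kHz speech frame produces, the sketch below groups plain FFT power-spectrum bins into Bark bands. This is only a stand-in for the paper's CBT (whose whole point is to mix constant-bandwidth and constant-Q transforms so that a full FFT is not needed); the Bark formula is the standard Traunmüller/Zwicker-style approximation, and all names are illustrative.

    ```python
    import numpy as np

    def hz_to_bark(f):
        # Traunmüller/Zwicker-style approximation of the Bark scale
        return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

    def critical_band_energies(frame, fs=16000, n_bands=21):
        """Group the power spectrum of one speech frame into Bark-scale bands.
        A plain-FFT stand-in for the CBT described in the abstract."""
        spec = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
        bands = np.minimum(hz_to_bark(freqs).astype(int), n_bands - 1)
        out = np.zeros(n_bands)
        np.add.at(out, bands, spec)     # sum each bin's power into its band
        return out

    frame = np.sin(2 * np.pi * 440.0 * np.arange(160) / 16000.0)  # 10 ms of A4
    e = critical_band_energies(frame)
    print(np.argmax(e))  # the 440 Hz tone lands in a low Bark band
    ```

    The 8 kHz Nyquist frequency maps to roughly Bark 21, which is why 21 bands cover the 16 kHz-sampled speech band.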

  3. Programmable bandwidth management in software-defined EPON architecture

    NASA Astrophysics Data System (ADS)

    Li, Chengjun; Guo, Wei; Wang, Wei; Hu, Weisheng; Xia, Ming

    2016-07-01

    This paper proposes a software-defined EPON architecture which replaces the hardware-implemented DBA module with a reprogrammable DBA module. The DBA module allows pluggable bandwidth allocation algorithms among multiple ONUs, adaptive to traffic profiles and network states. We also introduce a bandwidth management scheme, executed at the controller, to manage the customized DBA algorithms for all data queues of the ONUs. Our performance investigation verifies the effectiveness of this new EPON architecture, and numerical results show that software-defined EPONs can achieve lower traffic delay and provide better support for service differentiation in comparison with traditional EPONs.
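    The pluggable-DBA idea can be sketched as a controller whose allocation routine is an exchangeable function rather than fixed hardware. The class, the function names, and the two toy allocation policies below are illustrative assumptions, not the paper's algorithms.

    ```python
    def fixed_dba(requests, capacity):
        """Equal share regardless of demand."""
        share = capacity // len(requests)
        return {onu: share for onu in requests}

    def limited_dba(requests, capacity):
        """Grant min(request, fair share); hand leftover to the largest demands."""
        share = capacity // len(requests)
        grants = {onu: min(req, share) for onu, req in requests.items()}
        leftover = capacity - sum(grants.values())
        for onu, req in sorted(requests.items(), key=lambda kv: -kv[1]):
            extra = min(req - grants[onu], leftover)
            grants[onu] += extra
            leftover -= extra
        return grants

    class SdnEponController:
        """Holds a replaceable DBA function instead of a fixed hardware module."""
        def __init__(self, dba=fixed_dba):
            self.dba = dba
        def allocate(self, requests, capacity):
            return self.dba(requests, capacity)

    ctrl = SdnEponController()
    reqs = {"onu1": 100, "onu2": 700, "onu3": 300}
    print(ctrl.allocate(reqs, 900))   # equal shares: 300 each
    ctrl.dba = limited_dba            # re-program the DBA without new hardware
    print(ctrl.allocate(reqs, 900))   # demand-aware grants
    ```

    Swapping `ctrl.dba` at runtime is the software analogue of pushing a new DBA algorithm from the SDN controller down to the OLT.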

  4. Flight Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  5. Mind and Language Architecture

    PubMed Central

    Logan, Robert K

    2010-01-01

    A distinction is made between the brain and the mind. The architecture of the mind and language is then described within a neo-dualistic framework. A model for the origin of language based on emergence theory is presented. The complexity of hominid existence due to tool making, the control of fire and the social cooperation that fire required gave rise to a new level of order in mental activity and triggered the simultaneous emergence of language and conceptual thought. The mind is shown to have emerged as a bifurcation of the brain with the emergence of language. The role of language in the evolution of human culture is also described. PMID:20922045

  6. Architecture, constraints, and behavior

    PubMed Central

    Doyle, John C.; Csete, Marie

    2011-01-01

    This paper aims to bridge progress in neuroscience involving sophisticated quantitative analysis of behavior, including the use of robust control, with other relevant conceptual and theoretical frameworks from systems engineering, systems biology, and mathematics. Familiar and accessible case studies are used to illustrate concepts of robustness, organization, and architecture (modularity and protocols) that are central to understanding complex networks. These essential organizational features are hidden during normal function of a system but are fundamental for understanding the nature, design, and function of complex biologic and technologic systems. PMID:21788505

  7. Architecture for Teraflop Visualization

    SciTech Connect

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  8. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - a recursive layout computing system; and Parallel linear conflict-free subtree access.

  9. Etruscan Divination and Architecture

    NASA Astrophysics Data System (ADS)

    Magli, Giulio

    The Etruscan religion was characterized by divination methods, aimed at interpreting the will of the gods. These methods were revealed by the gods themselves and written in the books of the Etrusca Disciplina. The books are lost, but parts of them are preserved in the accounts of later Latin sources. According to such traditions divination was tightly connected with the Etruscan cosmovision of a Pantheon distributed in equally spaced, specific sectors of the celestial realm. We explore here the possible reflections of such issues in the Etruscan architectural remains.

  10. TROPIX Power System Architecture

    NASA Technical Reports Server (NTRS)

    Manner, David B.; Hickman, J. Mark

    1995-01-01

    This document contains results obtained in the process of performing a power system definition study of the TROPIX power management and distribution system (PMAD). Requirements derived from the PMAD's interaction with other spacecraft systems are discussed first. Since the design is dependent on the performance of the photovoltaics, there is a comprehensive discussion of the appropriate models for cells and arrays. A trade study of the array operating voltage and its effect on array bus mass is also presented. A system architecture is developed which makes use of a combination of high-efficiency switching power converters and analog regulators. Mass and volume estimates are presented for all subsystems.

  11. Architecture for robot intelligence

    NASA Technical Reports Server (NTRS)

    Peters, II, Richard Alan (Inventor)

    2004-01-01

    An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a DBAM that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.

  12. Detection Algorithms: FFT vs. KLT

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    Given the vast distances between the stars, we can anticipate that any received SETI signal will be exceedingly weak. How can we hope to extract (or even recognize) such signals buried well beneath the natural background noise with which they must compete? This chapter analyzes, compares, and contrasts the two dominant signal detection algorithms used by SETI scientists to recognize extremely weak candidate signals.
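    The FFT side of the comparison rests on a simple idea: incoherently averaging many power spectra shrinks the noise fluctuations while a persistent narrowband signal keeps accumulating in one bin, so a tone far below the per-sample noise floor eventually stands out. A minimal sketch (the frame size, frame count, and signal parameters are arbitrary choices for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_frames = 1024, 200
    tone_amp, tone_bin = 0.2, 100      # tone amplitude well below noise sigma = 1
    t = np.arange(n)

    # Average the power spectra of many noisy frames; the deterministic tone
    # adds coherently in its bin while the noise background flattens out.
    avg = np.zeros(n // 2 + 1)
    for _ in range(n_frames):
        frame = rng.normal(size=n) + tone_amp * np.sin(2 * np.pi * tone_bin * t / n)
        avg += np.abs(np.fft.rfft(frame)) ** 2
    avg /= n_frames

    print(int(np.argmax(avg)))  # prints 100: the hidden tone's bin is recovered
    ```

    The KLT, by contrast, adapts its basis to the data rather than assuming sinusoids, which is why the chapter treats the two as competing detection strategies.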

  13. Adaptive Attitude Control of the Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Muse, Jonathan

    2010-01-01

    An H(sub infinity)-NMA architecture for the Crew Launch Vehicle was developed in a state feedback setting. The minimal-complexity adaptive law was shown to improve baseline performance, relative to a performance metric based on Crew Launch Vehicle design requirements, for almost all of the Worst-on-Worst dispersion cases. The adaptive law was able to maintain stability for some dispersions that are unstable with the nominal control law. Due to the nature of the H(sub infinity)-NMA architecture, the augmented adaptive control signal has low bandwidth, which is a great benefit for a manned launch vehicle.

  14. An implementation of SISAL for distributed-memory architectures

    SciTech Connect

    Beard, P.C.

    1995-06-01

    This thesis describes a new implementation of the implicitly parallel functional programming language SISAL, for massively parallel processor supercomputers. The Optimizing SISAL Compiler (OSC), developed at Lawrence Livermore National Laboratory, was originally designed for shared-memory multiprocessor machines and has been adapted to distributed-memory architectures. OSC has been relatively portable between shared-memory architectures, because they are architecturally similar, and OSC generates portable C code. However, distributed-memory architectures are not standardized -- each has a different programming model. Distributed-memory SISAL depends on a layer of software that provides a portable, distributed, shared-memory abstraction. This layer is provided by Split-C, a dialect of the C programming language developed at U.C. Berkeley, which has demonstrated good performance on distributed-memory architectures. Split-C provides important capabilities for good performance: support for program-specific distributed data structures, and split-phase memory operations. Distributed data structures help achieve good memory locality, while split-phase memory operations help tolerate the longer communication latencies inherent in distributed-memory architectures. The distributed-memory SISAL compiler and run-time system take advantage of these capabilities. The result of these efforts is a compiler that runs identically on the Thinking Machines Connection Machine (CM-5) and the Meiko Computing Surface (CS-2).

  15. Architectures of small satellite programs in developing countries

    NASA Astrophysics Data System (ADS)

    Wood, Danielle; Weigel, Annalisa

    2014-04-01

    Global participation in space activity is growing as satellite technology matures and spreads. Countries in Africa, Asia and Latin America are creating or reinvigorating national satellite programs. These countries are building local capability in space through technological learning. This paper analyzes implementation approaches in small satellite programs within developing countries. The study addresses diverse examples of approaches used to master, adapt, diffuse and apply satellite technology in emerging countries. The work focuses on government programs that represent the nation and deliver services that provide public goods such as environmental monitoring. An original framework developed by the authors examines implementation approaches and contextual factors using the concept of Systems Architecture. The Systems Architecture analysis defines the satellite programs as systems within a context which execute functions via forms in order to achieve stakeholder objectives. These Systems Architecture definitions are applied to case studies of six satellite projects executed by countries in Africa and Asia. The architectural models used by these countries in various projects reveal patterns in the areas of training, technical specifications and partnership style. Based on these patterns, three Archetypal Project Architectures are defined which link the contextual factors to the implementation approaches. The three Archetypal Project Architectures lead to distinct opportunities for training, capability building and end user services.

  16. Adapting Animals.

    ERIC Educational Resources Information Center

    Wedman, John; Wedman, Judy

    1985-01-01

    The "Animals" program found on the Apple II and IIe system master disk can be adapted for use in the mathematics classroom. Instructions for making the necessary changes and suggestions for using it in lessons related to geometric shapes are provided. (JN)

  17. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  18. Adaptive homeostasis.

    PubMed

    Davies, Kelvin J A

    2016-06-01

    Homeostasis is a central pillar of modern Physiology. The term homeostasis was invented by Walter Bradford Cannon in an attempt to extend and codify the principle of 'milieu intérieur,' or a constant interior bodily environment, that had previously been postulated by Claude Bernard. Clearly, 'milieu intérieur' and homeostasis have served us well for over a century. Nevertheless, research on signal transduction systems that regulate gene expression, or that cause biochemical alterations to existing enzymes, in response to external and internal stimuli, makes it clear that biological systems are continuously making short-term adaptations both to set-points, and to the range of 'normal' capacity. These transient adaptations typically occur in response to relatively mild changes in conditions, to programs of exercise training, or to sub-toxic, non-damaging levels of chemical agents; thus, the terms hormesis, heterostasis, and allostasis are not accurate descriptors. Therefore, an operational adjustment to our understanding of homeostasis suggests that the modified term, Adaptive Homeostasis, may be useful especially in studies of stress, toxicology, disease, and aging. Adaptive Homeostasis may be defined as follows: 'The transient expansion or contraction of the homeostatic range in response to exposure to sub-toxic, non-damaging, signaling molecules or events, or the removal or cessation of such molecules or events.' PMID:27112802

  19. Architectures for intelligent machines

    NASA Technical Reports Server (NTRS)

    Saridis, George N.

    1991-01-01

    The theory of intelligent machines has recently been reformulated to incorporate new architectures that use neural and Petri nets. The analytic functions of an intelligent machine are implemented by intelligent controls, using entropy as a measure. The resulting hierarchical control structure is based on the principle of increasing precision with decreasing intelligence. Each of the three levels of the intelligent control uses a different architecture in order to satisfy the requirements of the principle: the organization level is modeled after a Boltzmann machine for abstract reasoning, task planning, and decision making; the coordination level is composed of a number of Petri net transducers supervised, for command exchange, by a dispatcher, which also serves as an interface to the organization level; and the execution level includes the sensory, navigation-planning, and control hardware, which interacts one-to-one with the appropriate coordinators, while a VME bus provides a channel for database exchange among the several devices. This system is currently implemented on a robotic transporter, designed for space construction at the CIRSSE laboratories at Rensselaer Polytechnic Institute. The progress of its development is reported.

  20. Autonomous droplet architectures.

    PubMed

    Jones, Gareth; King, Philip H; Morgan, Hywel; de Planque, Maurits R R; Zauner, Klaus-Peter

    2015-01-01

    The quintessential living element of all organisms is the cell-a fluid-filled compartment enclosed, but not isolated, by a layer of amphiphilic molecules that self-assemble at its boundary. Cells of different composition can aggregate and communicate through the exchange of molecules across their boundaries. The astounding success of this architecture is readily apparent throughout the biological world. Inspired by the versatility of nature's architecture, we investigate aggregates of membrane-enclosed droplets as a design concept for robotics. This will require droplets capable of sensing, information processing, and actuation. It will also require the integration of functionally specialized droplets into an interconnected functional unit. Based on results from the literature and from our own laboratory, we argue the viability of this approach. Sensing and information processing in droplets have been the subject of several recent studies, on which we draw. Integrating droplets into coherently acting units and the aspect of controlled actuation for locomotion have received less attention. This article describes experiments that address both of these challenges. Using lipid-coated droplets of Belousov-Zhabotinsky reaction medium in oil, we show here that such droplets can be integrated and that chemically driven mechanical motion can be achieved. PMID:25622015

  1. Modularity and mental architecture.

    PubMed

    Robbins, Philip

    2013-11-01

    Debates about the modularity of cognitive architecture have been ongoing for at least the past three decades, since the publication of Fodor's landmark book The Modularity of Mind. According to Fodor, modularity is essentially tied to informational encapsulation, and as such is only found in the relatively low-level cognitive systems responsible for perception and language. According to Fodor's critics in the evolutionary psychology camp, modularity simply reflects the fine-grained functional specialization dictated by natural selection, and it characterizes virtually all aspects of cognitive architecture, including high-level systems for judgment, decision making, and reasoning. Though both of these perspectives on modularity have garnered support, the current state of evidence and argument suggests that a broader skepticism about modularity may be warranted. WIREs Cogn Sci 2013, 4:641-649. doi: 10.1002/wcs.1255 PMID:26304269

  2. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: multicast communications (or "Multicasting"), 1 spacecraft to N ground receivers, N ground transmitters to 1 ground receiver via a spacecraft.

  3. Rutger's CAM2000 chip architecture

    NASA Technical Reports Server (NTRS)

    Smith, Donald E.; Hall, J. Storrs; Miyake, Keith

    1993-01-01

    This report describes the architecture and instruction set of the Rutgers CAM2000 memory chip. The CAM2000 combines features of Associative Processing (AP), Content Addressable Memory (CAM), and Dynamic Random Access Memory (DRAM) in a single chip package that is not only DRAM compatible but capable of applying simple massively parallel operations to memory. This document reflects the current status of the CAM2000 architecture and is continually updated to reflect the current state of the architecture and instruction set.

  4. Demand Activated Manufacturing Architecture

    SciTech Connect

    Bender, T.R.; Zimmerman, J.J.

    2001-02-07

    Honeywell Federal Manufacturing & Technologies (FM&T) engineers John Zimmerman and Tom Bender directed separate projects within this CRADA. This Project Accomplishments Summary contains their reports independently. Zimmerman: In 1998 Honeywell FM&T partnered with the Demand Activated Manufacturing Architecture (DAMA) Cooperative Business Management Program to pilot the Supply Chain Integration Planning Prototype (SCIP). At the time, FM&T was developing an enterprise-wide supply chain management prototype called the Integrated Programmatic Scheduling System (IPSS) to improve the DOE's Nuclear Weapons Complex (NWC) supply chain. In the CRADA partnership, FM&T provided the IPSS technical and business infrastructure as a test bed for SCIP technology, and this would provide FM&T the opportunity to evaluate SCIP as the central schedule engine and decision support tool for IPSS. FM&T agreed to do the bulk of the work for piloting SCIP. In support of that aim, DAMA needed specific DOE Defense Programs opportunities to prove the value of its supply chain architecture and tools. In this partnership, FM&T teamed with Sandia National Labs (SNL), Division 6534, the other DAMA partner and developer of SCIP. FM&T tested SCIP in 1998 and 1999. Testing ended in 1999 when DAMA CRADA funding for FM&T ceased. Before entering the partnership, FM&T discovered that the DAMA SCIP technology had an array of applications in strategic, tactical, and operational planning and scheduling. At the time, FM&T planned to improve its supply chain performance by modernizing the NWC-wide planning and scheduling business processes and tools. The modernization took the form of a distributed client-server planning and scheduling system (IPSS) for planners and schedulers to use throughout the NWC on desktops through an off-the-shelf WEB browser. 
The planning and scheduling process within the NWC then, and today, is a labor-intensive paper-based method that plans and schedules more than 8,000 shipped parts

  5. Improving nonlinear modeling capabilities of functional link adaptive filters.

    PubMed

    Comminiello, Danilo; Scarpiniti, Michele; Scardapane, Simone; Parisi, Raffaele; Uncini, Aurelio

    2015-09-01

    The functional link adaptive filter (FLAF) represents an effective solution for online nonlinear modeling problems. In this paper, we consider a FLAF-based architecture which separates the adaptation of linear and nonlinear elements, and we focus on the nonlinear branch to improve the modeling performance. In particular, we propose a new model that involves an adaptive combination of filters downstream of the nonlinear expansion. Such a combination leads to a cooperative behavior of the whole architecture, thus yielding a performance improvement, particularly in the presence of strong nonlinearities. An advanced architecture is also proposed, involving the adaptive combination of multiple filters on the nonlinear branch. The proposed models are assessed in different nonlinear modeling problems, in which their effectiveness and capabilities are shown. PMID:26057613
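    The general idea of adaptively combining a linear branch with a functional-link (nonlinearly expanded) branch can be sketched as two LMS filters mixed through a sigmoid-parameterized convex weight. This is a simplified illustration in the spirit of the FLAF architecture, not the paper's exact model; the trigonometric expansion, step sizes, and toy target system are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, mu, mu_a = 8, 0.01, 1.0          # buffer length and step sizes (illustrative)

    def expand(xbuf):
        # Trigonometric functional-link expansion of the input buffer
        return np.concatenate([xbuf, np.sin(np.pi * xbuf), np.cos(np.pi * xbuf)])

    w_lin = np.zeros(L)                  # linear branch weights
    w_fl = np.zeros(3 * L)               # functional-link branch weights
    a = 0.0                              # mixing parameter; weight = sigmoid(a)

    x = rng.uniform(-1, 1, 5000)
    err2 = []
    for i in range(L, len(x)):
        xbuf = x[i - L + 1:i + 1][::-1]              # x[i], x[i-1], ..., x[i-L+1]
        d = np.tanh(2.0 * xbuf[0]) + 0.5 * xbuf[1]   # mildly nonlinear toy target
        y1 = w_lin @ xbuf                            # linear branch output
        g = expand(xbuf)
        y2 = w_fl @ g                                # functional-link branch output
        lam = 1.0 / (1.0 + np.exp(-a))
        y = lam * y2 + (1 - lam) * y1                # convex combination
        e = d - y
        # LMS updates on both branches, gradient update on the mixing parameter
        w_lin += mu * (d - y1) * xbuf
        w_fl += mu * (d - y2) * g
        a += mu_a * e * (y2 - y1) * lam * (1 - lam)
        err2.append(e * e)

    print(np.mean(err2[:200]), np.mean(err2[-200:]))  # error shrinks as both adapt
    ```

    The convex mixing lets the architecture lean on whichever branch currently models the system better, which is the cooperative behavior the abstract describes.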

  6. Software synthesis using generic architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    A framework for synthesizing software systems, based on abstracting software system designs and the design process, is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture, as opposed to completely automating the design of systems. Our approach is illustrated using an implemented example of a generic tracking architecture that was customized in two different domains. We also describe how the designs produced using KASE compare to the original designs of the two systems, as well as current work and plans for extending KASE to other application areas.

  7. Full-Scale Flight Research Testbeds: Adaptive and Intelligent Control

    NASA Technical Reports Server (NTRS)

    Pahle, Joe W.

    2008-01-01

    This viewgraph presentation describes the adaptive and intelligent control methods used for aircraft survival. The contents include: 1) Motivation for Adaptive Control; 2) Integrated Resilient Aircraft Control Project; 3) Full-scale Flight Assets in Use for IRAC; 4) NASA NF-15B Tail Number 837; 5) Gen II Direct Adaptive Control Architecture; 6) Limited Authority System; and 7) 837 Flight Experiments. A simulated destabilization failure analysis is also presented, along with experience and lessons learned.

  8. Temperature-adaptive Circuits on Reconfigurable Analog Arrays

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo S.; Keymeulen, Didier; Ramesham, Rajeshuni; Neff, Joseph; Katkoori, Srinivas

    2006-01-01

    This paper describes a new reconfigurable analog array (RAA) architecture and integrated circuit (IC) used to map analog circuits that can adapt to extreme temperatures under programmable control. Algorithm-driven adaptation takes place on the RAA IC. The algorithms are implemented in a separate Field-Programmable Gate Array (FPGA) IC, co-located with the RAA in the extreme-temperature environment. The experiments demonstrate circuit adaptation over a wide temperature range, from an extremely low temperature of -180 C to a high of 120 C.

  9. 9. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) Annotated lithograph on paper. Standard plan used for construction of Commissary Sergeants Quarters, 1876. PLAN, FRONT AND SIDE ELEVATIONS, SECTION - Fort Myer, Commissary Sergeant's Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  10. The Architecture of Exoplanets

    NASA Astrophysics Data System (ADS)

    Hatzes, Artie P.

    2016-05-01

    Prior to the discovery of exoplanets, our expectations of their architecture were largely driven by the properties of our solar system. We expected giant planets to lie in the outer regions and rocky planets in the inner regions. Planets should probably only occupy orbital distances of 0.3-30 AU from the star. Planetary orbits should be circular, prograde, and in the same plane. The reality of exoplanets has shattered these expectations. Jupiter-mass, Neptune-mass, Superearth, and even Earth-mass planets can orbit within 0.05 AU of their stars, sometimes with orbital periods of less than one day. Exoplanetary orbits can be eccentric, misaligned, and even retrograde. Radial velocity surveys gave the first hints that the occurrence rate increases with decreasing mass. This was put on a firm statistical basis with the Kepler mission, which clearly demonstrated that there are more Neptune- and Superearth-sized planets than Jupiter-sized planets. These are often in multiple, densely packed systems where the planets all orbit within 0.3 AU of the star, a result also suggested by radial velocity surveys. Exoplanets also exhibit diversity along the main sequence. Massive stars tend to have a higher frequency of planets (≈20-25%) that tend to be more massive (M ≈ 5-10 M_{Jup}). Giant planets around low-mass stars are rare, but these stars show an abundance of small (Neptune and Superearth) planets in multiple systems. Planet formation is also not restricted to single stars, as the Kepler mission has discovered several circumbinary planets. Although we have learned much about the architecture of planets over the past 20 years, we know little about the census of small planets at relatively large (a > 1 AU) orbital distances. We have yet to find a planetary system that is analogous to our own solar system. The question of how unique the properties of our own solar system are remains unanswered. Advancements in the detection methods of small planets over a wide range

  11. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively, with particular attention to the design approach which utilizes a cache memory associated with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of a hit ratio. Memory management is envisioned as a virtual memory system implemented either through segmentation or paging. Addressing is discussed in terms of the various register designs adopted by current computers and those of advanced design.
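The hit-ratio analysis mentioned above follows the standard effective-access-time relation for a per-processor cache. The sketch below uses illustrative latency numbers, not figures from the study:

```python
def effective_access_time(hit_ratio, t_cache, t_main):
    """Average memory access time with a single-level cache in front of main memory."""
    return hit_ratio * t_cache + (1 - hit_ratio) * t_main

# Illustrative (assumed) latencies: 100 ns cache, 1000 ns main memory
for h in (0.80, 0.90, 0.95, 0.99):
    print(f"hit ratio {h:.2f}: {effective_access_time(h, 100, 1000):.0f} ns")
```

As the hit ratio approaches 1, the effective access time approaches the cache latency, which is why per-processor performance is framed in terms of the hit ratio.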

  12. Functional Biomimetic Architectures

    NASA Astrophysics Data System (ADS)

    Levine, Paul M.

    N-substituted glycine oligomers, or 'peptoids,' are a class of sequence-specific foldamers composed of tertiary amide linkages, engendering proteolytic stability and enhanced cellular permeability. Peptoids are notable for their facile synthesis, sequence diversity, and ability to fold into distinct secondary structures. In an effort to establish new functional peptoid architectures, we utilize the copper-catalyzed azide-alkyne [3+2] cycloaddition (CuAAC) reaction to generate peptidomimetic assemblies bearing bioactive ligands that specifically target and modulate Androgen Receptor (AR) activity, a major therapeutic target for prostate cancer. Additionally, we explore chemical ligation protocols to generate semi-synthetic hybrid biomacromolecules capable of exhibiting novel structures and functions not accessible to fully biosynthesized proteins.

  13. CONRAD Software Architecture

    NASA Astrophysics Data System (ADS)

    Guzman, J. C.; Bennett, T.

    2008-08-01

    The Convergent Radio Astronomy Demonstrator (CONRAD) is a collaboration between the computing teams of two SKA pathfinder instruments, MeerKAT (South Africa) and ASKAP (Australia). Our goal is to produce the required common software to operate, process and store the data from the two instruments. Both instruments are synthesis arrays composed of a large number of antennas (40 - 100) operating at centimeter wavelengths with wide-field capabilities. Key challenges are the processing of high volume of data in real-time as well as the remote mode of operations. Here we present the software architecture for CONRAD. Our design approach is to maximize the use of open solutions and third-party software widely deployed in commercial applications, such as SNMP and LDAP, and to utilize modern web-based technologies for the user interfaces, such as AJAX.

  14. Naval open systems architecture

    NASA Astrophysics Data System (ADS)

    Guertin, Nick; Womble, Brian; Haskell, Virginia

    2013-05-01

    For the past 8 years, the Navy has been working on transforming the acquisition practices of the Navy and Marine Corps toward Open Systems Architectures to open up our business, gain competitive advantage, improve warfighter performance, speed innovation to the fleet and deliver superior capability to the warfighter within a shrinking budget1. Why should Industry care? They should care because we in Government want the best Industry has to offer. Industry is in the business of pushing technology to greater and greater capabilities through innovation. Examples of innovations are on full display at this conference, such as exploring the impact of difficult environmental conditions on technical performance. Industry is creating the tools which will continue to give the Navy and Marine Corps important tactical advantages over our adversaries.

  15. Planning in subsumption architectures

    NASA Technical Reports Server (NTRS)

    Chalfant, Eugene C.

    1994-01-01

    A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.

  16. Power Systems Control Architecture

    SciTech Connect

    James Davidson

    2005-01-01

    A diagram provided in the report depicts the complexity of the power systems control architecture used by the national power structure. It shows the structural hierarchy and the relationship of each system to the other systems interconnected with it. Each of these levels provides a different focus for vulnerability testing and has its own weaknesses. In evaluating each level, of prime concern is what vulnerabilities exist that provide a path into the system, either to cause the system to malfunction or to take control of a field device. An additional vulnerability to consider is whether the system can be compromised in such a manner that the attacker can obtain critical information about the system and the portion of the national power structure that it controls.

  17. MSAT network architecture

    NASA Technical Reports Server (NTRS)

    Davies, N. G.; Skerry, B.

    1990-01-01

    The Mobile Satellite (MSAT) communications system will support mobile voice and data services using circuit switched and packet switched facilities with interconnection to the public switched telephone network and private networks. Control of the satellite network will reside in a Network Control System (NCS) which is being designed to be extremely flexible to provide for the operation of the system initially with one multi-beam satellite, but with capability to add additional satellites which may have other beam configurations. The architecture of the NCS is described. The signalling system must be capable of supporting the protocols for the assignment of circuits for mobile public telephone and private network calls as well as identifying packet data networks. The structure of a straw-man signalling system is discussed.

  18. Connector adapter

    NASA Technical Reports Server (NTRS)

    Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)

    2007-01-01

    An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.

  19. Adaptive sampler

    DOEpatents

    Watson, Bobby L.; Aeby, Ian

    1982-01-01

    An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
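In software terms, the patent's idea of clocking data into memory at a rate matched to its analyzed frequency content can be sketched as per-frame adaptive decimation. Everything below (frame size, band edges, the 10% significance threshold) is an illustrative assumption, not taken from the patent:

```python
import numpy as np

def adaptive_decimate(x, fs, frame=256, bands=(8, 32, 64), steps=(8, 4, 2, 1)):
    """Per-frame adaptive decimation: frames whose spectral content lies in lower
    frequency bands are stored at a lower rate (a software stand-in for the
    patent's variable-rate memory clock)."""
    out = []
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    for i in range(0, len(x) - frame + 1, frame):
        seg = x[i:i + frame]
        spec = np.abs(np.fft.rfft(seg))
        significant = freqs[spec > 0.1 * spec.max()]
        f_max = significant.max() if significant.size else 0.0
        step = steps[np.searchsorted(bands, f_max)]  # highest active band sets the rate
        out.append(seg[::step])
    return np.concatenate(out)

fs = 256
t = np.arange(0, 2, 1.0 / fs)
slow = np.sin(2 * np.pi * 2 * t[: len(t) // 2])    # 2 Hz half: stored at 1/8 rate
fast = np.sin(2 * np.pi * 100 * t[len(t) // 2:])   # 100 Hz half: stored at full rate
y = adaptive_decimate(np.concatenate([slow, fast]), fs)
print(len(y), len(t))  # compressed length vs. original length
```

A hardware implementation would use a bank of parallel digital filters and a variable-rate memory clock rather than an FFT, but the control decision, choosing a storage rate from the highest significant frequency present, is the same.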

  20. Adaptive sampler

    DOEpatents

    Watson, B.L.; Aeby, I.

    1980-08-26

    An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in each frequency region and for clocking the data into the memory in response to the variable rate memory clock.

  1. Modeling Virtual Organization Architecture with the Virtual Organization Breeding Methodology

    NASA Astrophysics Data System (ADS)

    Paszkiewicz, Zbigniew; Picard, Willy

    While Enterprise Architecture Modeling (EAM) methodologies become more and more popular, an EAM methodology tailored to the needs of virtual organizations (VO) is still to be developed. Among the most popular EAM methodologies, TOGAF has been chosen as the basis for a new EAM methodology taking into account the characteristics of VOs presented in this paper. In this new methodology, referred to as the Virtual Organization Breeding Methodology (VOBM), concepts developed within the ECOLEAD project, e.g. the concept of a Virtual Breeding Environment (VBE) or the VO creation schema, serve as fundamental elements. VOBM is a generic methodology that should be adapted to a given VBE. VOBM defines the structure of VBE and VO architectures in a service-oriented environment, as well as an architecture development method for virtual organizations (ADM4VO). Finally, a preliminary set of tools and methods for VOBM is given in this paper.

  2. FRACSAT: Automated design synthesis for future space architectures

    NASA Astrophysics Data System (ADS)

    Mackey, R.; Uckun, S.; Do, Minh; Shah, J.

    This paper describes the algorithmic basis and development of FRACSAT (FRACtionated Spacecraft Architecture Toolkit), a new approach to conceptual design, cost-benefit analysis, and detailed trade studies for space systems. It provides an automated capability for exploration of candidate spacecraft architectures, leading users to near-optimal solutions with respect to user-defined requirements, risks, and program uncertainties. FRACSAT utilizes a sophisticated planning algorithm (PlanVisioner) to perform a quasi-exhaustive search for candidate architectures, constructing candidates from an extensible model-based representation of space system components and functions. These candidates are then evaluated with emphasis on the business case, computing the expected design utility and system costs as well as risk, presenting the user with a greatly reduced selection of candidates. The user may further refine the search according to cost or benefit uncertainty, adaptability, or other performance metrics as needed.

  3. An adaptive strategy of classification for detecting hypoglycemia using only two EEG channels.

    PubMed

    Nguyen, Lien B; Nguyen, Anh V; Ling, Sai Ho; Nguyen, Hung T

    2012-01-01

    Hypoglycemia is the most common but highly feared side effect of the insulin therapy for patients with Type 1 Diabetes Mellitus (T1DM). Severe episodes of hypoglycemia can lead to unconsciousness, coma, and even death. The variety of hypoglycemic symptoms arises from the activation of the autonomous central nervous system and from reduced cerebral glucose consumption. In this study, electroencephalography (EEG) signals from five T1DM patients during an overnight clamp study were measured and analyzed. By applying a method of feature extraction using Fast Fourier Transform (FFT) and classification using neural networks, we establish that hypoglycemia can be detected non-invasively using EEG signals from only two channels. This paper demonstrates that a significant advantage can be achieved by implementing adaptive training. By adapting the classifier to a previously unseen person, the classification results can be improved from 60% sensitivity and 54% specificity to 75% sensitivity and 67% specificity. PMID:23366685
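The abstract does not give the exact feature-extraction settings; a minimal sketch of FFT-based band-power features from one EEG channel (the band edges and the mean-power choice are assumptions, not the authors' method) could look like:

```python
import numpy as np

def band_power_features(signal, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Mean FFT power in the classic delta/theta/alpha/beta EEG bands."""
    n = len(signal)
    power = np.abs(np.fft.rfft(signal)) ** 2 / n  # periodogram
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return np.array([power[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

# Synthetic 5 s channel at 128 Hz dominated by 10 Hz (alpha-band) activity
fs = 128
t = np.arange(0, 5, 1.0 / fs)
rng = np.random.default_rng(1)
ch1 = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
feats = band_power_features(ch1, fs)
print(feats)  # the alpha band (8-13 Hz) carries the most power
```

Features from the two channels would be concatenated and fed to the neural-network classifier; adaptive training then tunes that classifier on data from the previously unseen patient.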

  4. Secure Storage Architectures

    SciTech Connect

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine; Koch, Scott M; Naughton, III, Thomas J; Pogge, James R; Scott, Stephen L; Shipman, Galen M; Sorrillo, Lawrence

    2015-01-01

    The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: - review of relevant parallel filesystem technologies to identify assets and gaps; - review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; - definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); - investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystems used for HPC, with particular attention to elements that contribute to creating secure storage. We outline the pieces of a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data. Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in (Chapter 3.1). These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystems (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network and practical complications with dynamic additions to the storage network, e.g., LNET. A challenge for VM-based use cases will be to provide efficient IO forwarding of the parallel filesystem from the host to the guest (VM).
There are promising options like para-virtualized filesystems to

  5. Adaptive antennas

    NASA Astrophysics Data System (ADS)

    Barton, P.

    1987-04-01

    The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal to noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
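As a concrete illustration of the direct-solution approach, the sketch below forms sample-matrix-inversion (SMI) weights, w proportional to R^-1 s, for a uniform linear array. The array geometry, diagonal loading, and jammer scenario are illustrative assumptions, not taken from the survey:

```python
import numpy as np

def steering_vector(n_elems, theta, d=0.5):
    """Steering vector for a uniform linear array; d is element spacing in wavelengths."""
    k = np.arange(n_elems)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def smi_weights(snapshots, steering, diag_load=1e-3):
    """Sample matrix inversion: solve R w = s, with light diagonal loading for robustness."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    R += diag_load * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    w = np.linalg.solve(R, steering)
    return w / (steering.conj() @ w)  # normalize to unit response in the look direction

# 8-element array: look direction broadside (0 deg), jammer at 30 deg
n, n_snap = 8, 500
rng = np.random.default_rng(0)
jam = steering_vector(n, np.deg2rad(30))[:, None] * (
    rng.standard_normal((1, n_snap)) + 1j * rng.standard_normal((1, n_snap)))
noise = 0.1 * (rng.standard_normal((n, n_snap)) + 1j * rng.standard_normal((n, n_snap)))
w = smi_weights(jam + noise, steering_vector(n, 0.0))
# The adapted pattern keeps unit gain at broadside while placing a null on the jammer
print(abs(w.conj() @ steering_vector(n, np.deg2rad(30))))
```

SMI converges in far fewer snapshots than correlation steepest descent but requires forming and solving the sample covariance, which is the hardware-complexity/convergence-rate tradeoff noted above.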

  6. A Tool for Managing Software Architecture Knowledge

    SciTech Connect

    Babar, Muhammad A.; Gorton, Ian

    2007-08-01

    This paper describes a tool for managing architectural knowledge and rationale. The tool has been developed to support a framework for capturing and using architectural knowledge to improve the architecture process. This paper describes the main architectural components and features of the tool. The paper also provides examples of using the tool for supporting wellknown architecture design and analysis methods.

  7. SpaceWire Architectures: Present and Future

    NASA Technical Reports Server (NTRS)

    Rakow, Glen Parker

    2006-01-01

    A viewgraph presentation on current and future SpaceWire architectures is shown. The topics include: 1) Current SpaceWire Architectures: Swift Data Flow; 2) Current SpaceWire Architectures: LRO Data Flow; 3) Current SpaceWire Architectures: JWST Data Flow; 4) Current SpaceWire Architectures; 5) Traditional Systems; 6) Future Systems; 7) Advantages; and 8) System Engineer Toolkit.

  8. Software Architecture for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Shih, Jimmy S.

    1997-01-01

    The thesis objective is to design an autonomous spacecraft architecture to perform both deliberative and reactive behaviors. The Autonomous Small Planet In-Situ Reaction to Events (ASPIRE) project uses the architecture to integrate several autonomous technologies for a comet orbiter mission.

  9. Dynamic Weather Routes Architecture Overview

    NASA Technical Reports Server (NTRS)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview, presents the high level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required dataset, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  10. Perspectives on Architecture and Children.

    ERIC Educational Resources Information Center

    Taylor, Anne

    1989-01-01

    Describes a new system of architectural education known as Architectural Design Education. States that this system, developed by Anne Taylor and George Vlastos, introduces students to the problem-solving process, integrates creative activities with traditional disciplines, and enhances students' and teachers' ability to relate to their…

  11. Dataflow architecture for machine control

    SciTech Connect

    Lent, B.

    1989-01-01

    The author describes how to implement the latest control strategies using state-of-the-art control technology and computing principles. Provides all the basic definitions, taxonomy, and analysis of currently used architectures, including microprocessor communication schemes. This book describes in detail the analysis and implementation of the selected OR dataflow driven architecture in a grinding machine control system.

  12. Interior Design in Architectural Education

    ERIC Educational Resources Information Center

    Gurel, Meltem O.; Potthoff, Joy K.

    2006-01-01

    The domain of interiors constitutes a point of tension between practicing architects and interior designers. Design of interior spaces is a significant part of architectural profession. Yet, to what extent does architectural education keep pace with changing demands in rendering topics that are identified as pertinent to the design of interiors?…

  13. Distributed adaptive simulation through standards-based integration of simulators and adaptive learning systems.

    PubMed

    Bergeron, Bryan; Cline, Andrew; Shipley, Jaime

    2012-01-01

    We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option to integrating simulators with traditional courseware. PMID:22356955

  14. A neuro-fuzzy architecture for real-time applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song

    1992-01-01

    Neural networks and fuzzy expert systems perform the same task of functional mapping using entirely different approaches. Each approach has certain unique features. The ability to learn specific input-output mappings from large input/output data possibly corrupted by noise and the ability to adapt or continue learning are some important features of neural networks. Fuzzy expert systems are known for their ability to deal with fuzzy information and incomplete/imprecise data in a structured, logical way. Since both of these techniques implement the same task (that of functional mapping; we regard 'inferencing' as one specific category under this class), a fusion of the two concepts that retains their unique features while overcoming their individual drawbacks will have excellent applications in the real world. In this paper, we arrive at a new architecture by fusing the two concepts. The architecture has the trainability/adaptability (based on input/output observations) property of neural networks and the architectural features that are unique to fuzzy expert systems. It also does not require specific information such as fuzzy rules or the defuzzification procedure used, though any such information can be integrated into the architecture. We show that this architecture can provide better performance than is possible from a single two- or three-layer feedforward neural network. Further, we show that this new architecture can be used as an efficient vehicle for hardware implementation of complex fuzzy expert systems for real-time applications. A numerical example is provided to show the potential of this approach.

  15. A flexible architecture for advanced process control solutions

    NASA Astrophysics Data System (ADS)

    Faron, Kamyar; Iourovitski, Ilia

    2005-05-01

    Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half APC has evolved from a "good idea" and "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single variable solutions to sophisticated and cutting edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any and all cost conscious manufacturers. There are studies suggesting integration costs as high as 60% of the total APC solution costs. Such cost prohibitive figures clearly diminish the return on APC investments. This has limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy the wide array of requirements, from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions and most in-house developed solutions lack important attributes of scalability, flexibility, and adaptability and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned and delayed due to legacy systems and inadequate architectural design. Recent advancements (Service Oriented Architectures) in the software industry have delivered ideal technologies for delivering scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing systems and business practices. In this publication we shall evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture to satisfy these requirements.

  16. Mission Architecture Comparison for Human Lunar Exploration

    NASA Technical Reports Server (NTRS)

    Geffre, Jim; Robertson, Ed; Lenius, Jon

    2006-01-01

    The Vision for Space Exploration outlines a bold new national space exploration policy that holds as one of its primary objectives the extension of human presence outward into the Solar System, starting with a return to the Moon in preparation for the future exploration of Mars and beyond. The National Aeronautics and Space Administration is currently engaged in several preliminary analysis efforts in order to develop the requirements necessary for implementing this objective in a manner that is both sustainable and affordable. Such analyses investigate various operational concepts, or mission architectures, by which humans can best travel to the lunar surface, live and work there for increasing lengths of time, and then return to Earth. This paper reports on a trade study conducted in support of NASA's Exploration Systems Mission Directorate investigating the relative merits of three alternative lunar mission architecture strategies. The three architectures use for reference a lunar exploration campaign consisting of multiple 90-day expeditions to the Moon's polar regions, a strategy which was selected for its high perceived scientific and operational value. The first architecture discussed incorporates the lunar orbit rendezvous approach employed by the Apollo lunar exploration program. This concept has been adapted from Apollo to meet the particular demands of a long-stay polar exploration campaign while assuring the safe return of crew to Earth. Lunar orbit rendezvous is also used as the baseline against which the other alternate concepts are measured. The first such alternative, libration point rendezvous, utilizes the unique characteristics of the cislunar libration point instead of a low altitude lunar parking orbit as a rendezvous and staging node. Finally, a mission strategy which does not incorporate rendezvous after the crew ascends from the Moon is also studied.
In this mission strategy, the crew returns directly to Earth from the lunar surface, and is

  17. Strategic Adaptation of SCA for STRS

    NASA Technical Reports Server (NTRS)

    Quinn, Todd; Kacpura, Thomas

    2007-01-01

    The Space Telecommunication Radio System (STRS) architecture is being developed to provide a standard framework for future NASA space radios with greater degrees of interoperability and flexibility to meet new mission requirements. The space environment imposes unique operational requirements with restrictive size, weight, and power constraints that are significantly smaller than terrestrial-based military communication systems. With the harsh radiation environment of space, the computing and processing resources are typically one or two generations behind current terrestrial technologies. Despite these differences, there are elements of the SCA that can be adapted to facilitate the design and implementation of the STRS architecture.

  18. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    NASA Technical Reports Server (NTRS)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance; (3) Strategic Elements: (3a) Architectural Principles, (3b) Architecture Board, (3c) Architecture Compliance; (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level.

  19. Investigation of multigauge architectures

    SciTech Connect

    Yang, C.

    1987-01-01

    Almost every computer architect dreams of achieving high system performance with low implementation costs. A multigauge machine can reconfigure its data-path width, provide parallelism, achieve better resource utilization, and sometimes trade computational precision for increased speed. A simple experimental method is used here to capture the main characteristics of multigauging. The measurements indicate evidence of near-optimal speedups. Adapting these ideas to designing parallel processors incurs low costs and provides flexibility. Several operational aspects of designing a multigauge machine are discussed as well. Thus, this research reports the technical, economic, and operational feasibility studies of multigauging.

  20. 29 CFR 32.28 - Architectural standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accessibility prescribed by the General Services Administration under the Architectural Barriers Act at 41 CFR... FEDERAL FINANCIAL ASSISTANCE Accessibility § 32.28 Architectural standards. (a) Design and construction... usable by qualified handicapped individuals. (c) Standards for architectural accessibility....

  1. 29 CFR 32.28 - Architectural standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accessibility prescribed by the General Services Administration under the Architectural Barriers Act at 41 CFR... RECEIVING FEDERAL FINANCIAL ASSISTANCE Accessibility § 32.28 Architectural standards. (a) Design and... usable by qualified handicapped individuals. (c) Standards for architectural accessibility....

  2. 29 CFR 32.28 - Architectural standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accessibility prescribed by the General Services Administration under the Architectural Barriers Act at 41 CFR... RECEIVING FEDERAL FINANCIAL ASSISTANCE Accessibility § 32.28 Architectural standards. (a) Design and... usable by qualified handicapped individuals. (c) Standards for architectural accessibility....

  3. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. Hence, if a new component is added that supports the IMaterial interface, any instance of it can be used in the various GUI components that work with that interface.

  4. Adaptive Reactive Rich Internet Applications

    NASA Astrophysics Data System (ADS)

    Schmidt, Kay-Uwe; Stühmer, Roland; Dörflinger, Jörg; Rahmani, Tirdad; Thomas, Susan; Stojanovic, Ljiljana

Rich Internet Applications significantly raise the user experience compared with legacy page-based Web applications because of their highly responsive user interfaces. Although this is a tremendous advance, it does not solve the problem of the one-size-fits-all approach of current Web applications. So although Rich Internet Applications put the user in a position to interact seamlessly with the Web application, they do not adapt to the context in which the user is currently working. In this paper we address the on-the-fly personalization of Rich Internet Applications. We introduce the concept of ARRIAs: Adaptive Reactive Rich Internet Applications and elaborate on how they are able to adapt to the current working context the user is engaged in. An architecture for the ad hoc adaptation of Rich Internet Applications is presented as well as a holistic framework and tools for the realization of our on-the-fly personalization approach. We divided both the architecture and the framework into two levels: offline/design-time and online/run-time. For design-time we explain how to use ontologies in order to annotate Rich Internet Applications and how to use these annotations for conceptual Web usage mining. Furthermore, we describe how to create client-side executable rules from the semantic data mining results. We present our declarative lightweight rule language tailored to the needs of being executed directly on the client. Because of the event-driven nature of the user interfaces of Rich Internet Applications, we designed a lightweight rule language based on the event-condition-action paradigm. At run-time the interactions of a user are tracked directly on the client and in real-time a user model is built up. The user model then acts as input to and is evaluated by our client-side complex event processing and rule engine.
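
    The event-condition-action pattern that the client-side rule language is built on can be sketched in a few lines. The class and field names below are illustrative only, not the paper's actual rule syntax; the "user model" here is reduced to simple event counters.

```python
class EcaRule:
    """Minimal event-condition-action rule: fire `action` when `event`
    occurs and `condition` holds over the current user model."""
    def __init__(self, event, condition, action):
        self.event = event          # event name to match
        self.condition = condition  # predicate over the user model
        self.action = action        # callable run when both match

class RuleEngine:
    """Tracks interactions into a user model and evaluates ECA rules."""
    def __init__(self):
        self.rules = []
        self.user_model = {}        # built up from tracked interactions

    def add_rule(self, rule):
        self.rules.append(rule)

    def dispatch(self, event, payload):
        # record the interaction, then evaluate all matching rules
        self.user_model[event] = self.user_model.get(event, 0) + 1
        fired = []
        for rule in self.rules:
            if rule.event == event and rule.condition(self.user_model):
                rule.action(payload)
                fired.append(rule)
        return fired
```

    For instance, a personalization rule could highlight a menu entry only after the user has clicked it at least twice, by testing `m["click"] >= 2` in the condition.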

  5. Medicaid information technology architecture: an overview.

    PubMed

    Friedman, Richard H

    2006-01-01

    The Medicaid Information Technology Architecture (MITA) is a roadmap and tool-kit for States to transform their Medicaid Management Information System (MMIS) into an enterprise-wide, beneficiary-centric system. MITA will enable State Medicaid agencies to align their information technology (IT) opportunities with their evolving business needs. It also addresses long-standing issues of interoperability, adaptability, and data sharing, including clinical data, across organizational boundaries by creating models based on nationally accepted technical standards. Perhaps most significantly, MITA allows State Medicaid Programs to actively participate in the DHHS Secretary's vision of a transparent health care market that utilizes electronic health records (EHRs), ePrescribing and personal health records (PHRs). PMID:17427840

  6. Integrating hospital information systems in healthcare institutions: a mediation architecture.

    PubMed

    El Azami, Ikram; Cherkaoui Malki, Mohammed Ouçamah; Tahon, Christian

    2012-10-01

    Many studies have examined the integration of information systems into healthcare institutions, leading to several standards in the healthcare domain (CORBAmed: Common Object Request Broker Architecture in Medicine; HL7: Health Level Seven International; DICOM: Digital Imaging and Communications in Medicine; and IHE: Integrating the Healthcare Enterprise). Due to the existence of a wide diversity of heterogeneous systems, three essential factors are necessary to fully integrate a system: data, functions and workflow. However, most of the previous studies have dealt with only one or two of these factors and this makes the system integration unsatisfactory. In this paper, we propose a flexible, scalable architecture for Hospital Information Systems (HIS). Our main purpose is to provide a practical solution to insure HIS interoperability so that healthcare institutions can communicate without being obliged to change their local information systems and without altering the tasks of the healthcare professionals. Our architecture is a mediation architecture with 3 levels: 1) a database level, 2) a middleware level and 3) a user interface level. The mediation is based on two central components: the Mediator and the Adapter. Using the XML format allows us to establish a structured, secured exchange of healthcare data. The notion of medical ontology is introduced to solve semantic conflicts and to unify the language used for the exchange. Our mediation architecture provides an effective, promising model that promotes the integration of hospital information systems that are autonomous, heterogeneous, semantically interoperable and platform-independent. PMID:22086739
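
    The Mediator/Adapter pairing with a structured XML exchange can be sketched as follows. This is a minimal illustration under assumed names: `patientRecord` and the field names are hypothetical, not an actual HL7, DICOM or IHE schema, and the real architecture adds security and semantic (ontology-based) mediation that this omits.

```python
import xml.etree.ElementTree as ET

class Adapter:
    """Converts a local system's record into a canonical XML message,
    so the local system need not change its own data model."""
    def __init__(self, field_map):
        self.field_map = field_map   # local field name -> canonical name

    def to_canonical(self, record):
        msg = ET.Element("patientRecord")
        for local, canonical in self.field_map.items():
            if local in record:
                ET.SubElement(msg, canonical).text = str(record[local])
        return ET.tostring(msg, encoding="unicode")

class Mediator:
    """Routes canonical XML messages between registered systems."""
    def __init__(self):
        self.subscribers = []

    def register(self, handler):
        self.subscribers.append(handler)

    def publish(self, xml_message):
        for handler in self.subscribers:
            handler(xml_message)
```

    Each hospital system keeps its local schema (e.g. a French system's `nom` field) and only its adapter knows the mapping to the canonical form the mediator distributes.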

  7. Quality analysis of requantization transcoding architectures for H.264/AVC

    NASA Astrophysics Data System (ADS)

    Notebaert, Stijn; De Cock, Jan; De Schrijver, Davy; De Wolf, Koen; Van de Walle, Rik

    2006-08-01

    Reduction of the bitrate of video content is necessary in order to satisfy the different constraints imposed by networks and terminals. A fast and elegant solution for the reduction of the bitrate is requantization, which has been successfully applied on MPEG-2 bitstreams. Because of the improved intra prediction in the H.264/AVC specification, existing transcoding techniques are no longer suitable. In this paper we compare requantization transcoders for H.264/AVC bitstreams. The discussion is restricted to intra 4x4 macroblocks only, but the same techniques are also applicable to intra 16x16 macroblocks. Besides the open-loop transcoder and the transcoder with mode reuse, two architectures with drift compensation are described, one in the pixel domain and the other in the transform domain. Experimental results show that these architectures approach the quality of the full decode and recode architecture for low to medium bitrates. Because of the reduced computational complexity of these architectures, in particular the transform-domain compensation architecture, they are highly suitable for real-time adaptation of video content.
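
    Stripped of H.264/AVC specifics, open-loop requantization amounts to dequantizing coefficient levels with the original step size and requantizing with a coarser one. A minimal sketch with plain uniform quantizers; the actual codec uses per-frequency scaling matrices, and the compensated architectures add a drift-correction term that this deliberately omits.

```python
def requantize(levels, q1, q2):
    """Open-loop requantization: reconstruct each coefficient with the
    original step q1, then quantize with a coarser step q2 (q2 > q1
    lowers the bitrate at the cost of added quantization error)."""
    out = []
    for lvl in levels:
        coeff = lvl * q1                    # dequantize
        out.append(int(round(coeff / q2)))  # requantize more coarsely
    return out
```

    In the drift-compensated architectures, the error `coeff - out[-1] * q2` would additionally be fed back through the intra-prediction loop rather than discarded.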

  8. Chromosome Architecture and Genome Organization

    PubMed Central

    Bernardi, Giorgio

    2015-01-01

How the same DNA sequences can function in the three-dimensional architecture of the interphase nucleus, fold in the very compact structure of metaphase chromosomes and go precisely back to the original interphase architecture in the following cell cycle remains an unresolved question to this day. The strategy used to address this issue was to analyze the correlations between chromosome architecture and the compositional patterns of DNA sequences spanning a size range from a few hundred to a few thousand kilobases. This is a critical range that encompasses isochores, interphase chromatin domains and boundaries, and chromosomal bands. The solution rests on the following key points: 1) the transition from the looped domains and sub-domains of interphase chromatin to the 30-nm fiber loops of early prophase chromosomes goes through the unfolding into an extended chromatin structure (probably a 10-nm “beads-on-a-string” structure); 2) the architectural proteins of interphase chromatin, such as CTCF and cohesin sub-units, are retained in mitosis and are part of the discontinuous protein scaffold of mitotic chromosomes; 3) the conservation of the link between architectural proteins and their binding sites on DNA through the cell cycle explains the “mitotic memory” of interphase architecture and the reversibility of the interphase to mitosis process. The results presented here also lead to a general conclusion concerning the existence of correlations between the isochore organization of the genome and the architecture of chromosomes from interphase to metaphase. PMID:26619076

  9. A new architecture for fast ultrasound imaging

    SciTech Connect

    Cruza, J. F.; Camacho, J.; Moreno, J. M.; Medina, L.

    2014-02-18

Some ultrasound imaging applications, for example 3D imaging and automated inspection of large components, require a high frame rate. Since the signal-processing throughput of the system is the main bottleneck, parallel beamforming is required to achieve hundreds to thousands of images per second. Simultaneous A-scan line beamforming in all active channels is required to reach the intended high frame rate. To this purpose, a new parallel beamforming architecture is proposed that exploits the processing resources available in state-of-the-art FPGAs. The work aims at optimal resource usage, high scalability and flexibility for different applications. To achieve these goals, the basic beamforming function is reformulated to fit the DSP-cell architecture of state-of-the-art FPGAs. This allows simultaneous dynamic focusing on multiple A-scan lines. Some realistic examples are analyzed, evaluating resource requirements and maximum operating frequency. For example, a 128-channel system with 128 scan lines, acquiring at 20 MSPS, can be built with 4 mid-range FPGAs, achieving up to 18000 frames per second, limited only by the maximum PRF. The gold-standard Synthetic Transmit Aperture method (also called the Total Focusing Method) can be carried out in real time at a processing rate of 140 high-resolution images per second (16 cm depth in steel).
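
    The per-sample delay-and-sum focusing that such architectures map onto FPGA DSP cells can be sketched in software. This is a naive reference, not the paper's FPGA formulation: function and parameter names are assumptions, a plane-wave transmit from the array face is assumed for the transmit delay, and no apodization or interpolation between samples is applied.

```python
import numpy as np

def delay_and_sum(rf, fs, c, elem_x, focus_points):
    """Naive delay-and-sum beamformer with dynamic (per-point) focusing.

    rf           : (n_channels, n_samples) received RF data
    fs           : sampling frequency [Hz]
    c            : speed of sound [m/s]
    elem_x       : (n_channels,) element x-positions [m]
    focus_points : (n_points, 2) array of (x, z) focal points on one A-scan line
    """
    n_ch, n_samp = rf.shape
    out = np.zeros(len(focus_points))
    for i, (x, z) in enumerate(focus_points):
        t_tx = z / c                                      # plane-wave transmit delay
        t_rx = np.sqrt((elem_x - x) ** 2 + z ** 2) / c    # per-element receive delay
        idx = np.round((t_tx + t_rx) * fs).astype(int)    # nearest-sample selection
        valid = idx < n_samp                              # drop delays past the record
        out[i] = rf[np.arange(n_ch)[valid], idx[valid]].sum()
    return out
```

    The parallelism the paper exploits is visible here: every focal point on every scan line is an independent multiply-accumulate over the channels, which is exactly what an array of DSP cells computes concurrently.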

  10. FPGA architecture for a videowall image processor

    NASA Astrophysics Data System (ADS)

    Skarabot, Alessandro; Ramponi, Giovanni; Buriola, Luigi

    2001-05-01

This paper proposes an FPGA architecture for a videowall image processor. To create a videowall, a set of high-resolution displays is arranged to present a single large image or multiple smaller images. An image processor is needed to perform the appropriate format conversion for the required output configuration, and to properly enhance the image contrast. Input signals in either the interlaced or the progressive format must be managed. The image processor we propose is organized in two blocks: the first implements the deinterlacing task for a YCbCr input video signal, then converts the progressive YCbCr signal to the RGB data format and performs the optional contrast enhancement; the other performs the format conversion of the RGB data. Motion-adaptive vertico-temporal deinterlacing is used for the luminance signal Y; the color-difference signals Cb and Cr are instead processed by line-average deinterlacing. Image contrast enhancement is achieved via a modified Unsharp Masking technique and involves only the luminance Y. The format-conversion algorithm is bilinear interpolation employing the Warped Distance approach, performed on the RGB data. Two different subblocks are used in the system architecture, since the interpolation is performed column-wise and then row-wise.
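
    The baseline for the format-conversion step, plain separable bilinear resampling, can be sketched as follows. The warped-distance refinement described in the paper (which perturbs the interpolation distance near edges) is omitted here, and the function operates on a single channel; an RGB image would be processed per channel.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Separable bilinear resampling of a 2-D array to (out_h, out_w):
    interpolate along rows, then along columns, mirroring the
    column-wise-then-row-wise split of the two hardware subblocks."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)      # source row coordinates
    xs = np.linspace(0, in_w - 1, out_w)      # source column coordinates
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)         # clamp at the border
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                   # fractional distances
    wx = (xs - x0)[None, :]
    top = img[y0[:, None], x0[None, :]] * (1 - wx) + img[y0[:, None], x1[None, :]] * wx
    bot = img[y1[:, None], x0[None, :]] * (1 - wx) + img[y1[:, None], x1[None, :]] * wx
    return top * (1 - wy) + bot * wy
```

    The warped-distance variant would replace the fractional distances `wx`, `wy` with locally warped values computed from neighboring pixel differences, sharpening edges at negligible extra hardware cost.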

  11. A new architecture for fast ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Cruza, J. F.; Camacho, J.; Moreno, J. M.; Medina, L.

    2014-02-01

Some ultrasound imaging applications, for example 3D imaging and automated inspection of large components, require a high frame rate. Since the signal-processing throughput of the system is the main bottleneck, parallel beamforming is required to achieve hundreds to thousands of images per second. Simultaneous A-scan line beamforming in all active channels is required to reach the intended high frame rate. To this purpose, a new parallel beamforming architecture is proposed that exploits the processing resources available in state-of-the-art FPGAs. The work aims at optimal resource usage, high scalability and flexibility for different applications. To achieve these goals, the basic beamforming function is reformulated to fit the DSP-cell architecture of state-of-the-art FPGAs. This allows simultaneous dynamic focusing on multiple A-scan lines. Some realistic examples are analyzed, evaluating resource requirements and maximum operating frequency. For example, a 128-channel system with 128 scan lines, acquiring at 20 MSPS, can be built with 4 mid-range FPGAs, achieving up to 18000 frames per second, limited only by the maximum PRF. The gold-standard Synthetic Transmit Aperture method (also called the Total Focusing Method) can be carried out in real time at a processing rate of 140 high-resolution images per second (16 cm depth in steel).

  12. Surveillance and reconnaissance ground system architecture

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2001-12-01

Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a ground-segment structure based on the required operational functions and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called the MGS (Modular Ground Segment), and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations; easy adaptation to the evolution of these configurations; interoperability with NATO and multinational forces; security; multi-sensor, multi-platform capability; technical modularity; evolvability; and reduction of life-cycle cost. The general performances of the MGS are presented: type of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules, and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a single-sensor image exploitation system to a full image-intelligence center for multilevel exploitation of multiple sensors.

  13. Gaia Data Processing Architecture

    NASA Astrophysics Data System (ADS)

    O'Mullane, W.; Lammers, U.; Bailer-Jones, C.; Bastian, U.; Brown, A. G. A.; Drimmel, R.; Eyer, L.; Huc, C.; Katz, D.; Lindegren, L.; Pourbaix, D.; Luri, X.; Torra, J.; Mignard, F.; van Leeuwen, F.

    2007-10-01

Gaia is the European Space Agency's (ESA's) ambitious space astrometry mission, whose main objective is to map astrometrically and spectro-photometrically not less than 1000 million celestial objects in our galaxy with unprecedented accuracy. The announcement of opportunity (AO) for the data processing will be issued by ESA late in 2006. The Gaia Data Processing and Analysis Consortium (DPAC) has been formed recently and is preparing an answer to this AO. The satellite will downlink around 100 TB of raw telemetry data over a mission duration of 5-6 years. To achieve its required astrometric accuracy of a few tens of microarcseconds, a highly involved processing of this data is required. In addition to the main astrometric instrument, Gaia will host a radial-velocity spectrometer and two low-resolution dispersers for multi-color photometry. All instrument modules share a common focal plane consisting of a CCD mosaic about 1 m^2 in size and featuring close to 10^9 pixels. Each of the various instruments requires relatively complex processing while at the same time being interdependent. We describe the composition and structure of the DPAC and the envisaged overall architecture of the system. We shall delve further into the core processing, one of the nine so-called coordination units comprising the Gaia processing system.

  14. Superconducting Bolometer Array Architectures

    NASA Technical Reports Server (NTRS)

    Benford, Dominic; Chervenak, Jay; Irwin, Kent; Moseley, S. Harvey; Shafer, Rick; Staguhn, Johannes; Wollack, Ed; Oegerle, William (Technical Monitor)

    2002-01-01

The next generation of far-infrared and submillimeter instruments requires large arrays of detectors containing thousands of elements. These arrays will necessarily be multiplexed, and superconducting bolometer arrays are the most promising present prospect for these detectors. We discuss our current research into superconducting bolometer array technologies, which has recently resulted in the first multiplexed detections of submillimeter light and the first multiplexed astronomical observations. Prototype arrays containing 512 pixels are in production using the Pop-Up Detector (PUD) architecture, which can be extended easily to 1000-pixel arrays. Planar arrays of close-packed bolometers are being developed for the GBT (Green Bank Telescope) and for future space missions. For certain applications, such as a slewed far-infrared sky survey, feedhorn coupling of a large, sparsely-filled array of bolometers is desirable, and is being developed using photolithographic feedhorn arrays. Individual detectors have achieved a Noise Equivalent Power (NEP) of 10(exp -17) W/square root of Hz at 300 mK, but several orders of magnitude improvement are required and can be reached with existing technology. The testing of such ultralow-background detectors will prove difficult, as this requires optical loading below 1 fW. Antenna-coupled bolometer designs have advantages for large-format array designs at low powers due to their mode selectivity.

  15. Lunar Exploration Architectures

    NASA Astrophysics Data System (ADS)

    Perino, Maria Antonietta

The international space exploration plans foresee in the next decades multiple robotic and human missions to the Moon and robotic missions to Mars, Phobos and other destinations. Notably, the US has, since the announcement of the US space exploration vision by President G. W. Bush in 2004, made significant progress in the further definition of its exploration programme, focusing in the next decades in particular on human missions to the Moon. Given the highly demanding nature of these missions, different initiatives have recently been taken at the international level to discuss how the lunar exploration missions currently planned at the national level could fit into a coordinated roadmap and contribute to lunar exploration. Thales Alenia Space - Italia is leading 3 studies for the European Space Agency focused on the analysis of the transportation, in-space and surface architectures required to meet ESA-provided stakeholder exploration objectives and requirements. The main result of this activity is the identification of European near-term priorities for exploration missions and European long-term priorities for capability and technology developments related to planetary exploration missions. This paper presents the main study results, drawing a European roadmap for exploration missions and for capability and technology developments related to lunar exploration infrastructure, taking into account the strategic and programmatic indications for exploration coming from ESA as well as the international exploration context.

  16. Architectures for Nanostructured Batteries

    NASA Astrophysics Data System (ADS)

    Rubloff, Gary

    2013-03-01

Heterogeneous nanostructures offer profound opportunities for advancement in electrochemical energy storage, particularly with regard to power. However, their design and integration must balance ion transport, electron transport, and stability under charge/discharge cycling, involving fundamental physical, chemical and electrochemical mechanisms at nano length scales and across disparate time scales. In our group and in our DOE Energy Frontier Research Center (www.efrc.umd.edu) we have investigated single nanostructures and regular nanostructure arrays as batteries, electrochemical capacitors, and electrostatic capacitors to understand limiting mechanisms, using a variety of synthesis and characterization strategies. Primary lithiation pathways in heterogeneous nanostructures have been observed to include surface, interface, and both isotropic and anisotropic diffusion, depending on materials. Integrating current collection layers at the nano scale with active ion storage layers enhances power and can improve stability during cycling. For densely packed nanostructures as required for storage applications, we investigate both "regular" and "random" architectures consistent with transport requirements for spatial connectivity. Such configurations raise further important questions at the meso scale, such as dynamic ion and electron transport in narrow and tortuous channels, and the role of defect structures and their evolution during charge cycling. Supported as part of the Nanostructures for Electrical Energy Storage, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DESC0001160.

  17. Ajax Architecture Implementation Techniques

    NASA Astrophysics Data System (ADS)

    Hussaini, Syed Asadullah; Tabassum, S. Nasira; Baig, Tabassum, M. Khader

    2012-03-01

Today's rich Web applications use a mix of JavaScript and asynchronous communication with the application server. This mechanism is also known as Ajax: Asynchronous JavaScript and XML. The intent of Ajax is to exchange small pieces of data between the browser and the application server, and in doing so, use partial page refresh instead of reloading the entire Web page. AJAX (Asynchronous JavaScript and XML) is a powerful Web development model for browser-based Web applications. Technologies that form the AJAX model, such as XML, JavaScript, HTTP, and XHTML, are individually widely used and well known. However, AJAX combines these technologies to let Web pages retrieve small amounts of data from the server without having to reload the entire page. This capability makes Web pages more interactive and lets them behave like local applications. Web 2.0, enabled by the Ajax architecture, has given rise to a new level of user interactivity through web browsers. Many new and extremely popular Web applications have been introduced, such as Google Maps, Google Docs, Flickr, and so on. Ajax toolkits such as Dojo allow web developers to build Web 2.0 applications quickly and with little effort.

  18. Array processor architecture

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

A high-speed parallel array data processing architecture, fashioned under a computational envelope approach, includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors normally operating quite independently of each other in a multiprocessing fashion. For data-dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel-processing fashion on the next instruction. Even when functioning in the parallel-processing mode, however, the processors are not in lock-step but execute their own copy of the program individually unless or until another overall processor-array synchronization instruction is issued.
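
    The "all finish one task before all proceed" synchronization described above can be sketched with a thread barrier. Threads stand in for the processor array here, and all names are illustrative; between barriers each worker runs its own copy of the phase independently, exactly as the abstract describes.

```python
import threading

def run_phases(n_workers, phases):
    """Run each phase on every worker index; no worker starts phase k+1
    until all workers have finished phase k (array-wide synchronization)."""
    barrier = threading.Barrier(n_workers)
    results = [[] for _ in range(n_workers)]

    def worker(idx):
        for phase in phases:
            results[idx].append(phase(idx))  # independent execution
            barrier.wait()                   # synchronization instruction

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    A data-dependent phase (e.g. one that reads results produced by other workers in the previous phase) is safe here precisely because the barrier guarantees the previous phase has completed everywhere.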

  19. Planetary cubesats - mission architectures

    NASA Astrophysics Data System (ADS)

    Bousquet, Pierre W.; Ulamec, Stephan; Jaumann, Ralf; Vane, Gregg; Baker, John; Clark, Pamela; Komarek, Tomas; Lebreton, Jean-Pierre; Yano, Hajime

    2016-07-01

    Miniaturisation of technologies over the last decade has made cubesats a valid solution for deep space missions. For example, a spectacular set 13 cubesats will be delivered in 2018 to a high lunar orbit within the frame of SLS' first flight, referred to as Exploration Mission-1 (EM-1). Each of them will perform autonomously valuable scientific or technological investigations. Other situations are encountered, such as the auxiliary landers / rovers and autonomous camera that will be carried in 2018 to asteroid 1993 JU3 by JAXA's Hayabusas 2 probe, and will provide complementary scientific return to their mothership. In this case, cubesats depend on a larger spacecraft for deployment and other resources, such as telecommunication relay or propulsion. For both situations, we will describe in this paper how cubesats can be used as remote observatories (such as NEO detection missions), as technology demonstrators, and how they can perform or contribute to all steps in the Deep Space exploration sequence: Measurements during Deep Space cruise, Body Fly-bies, Body Orbiters, Atmospheric probes (Jupiter probe, Venus atmospheric probes, ..), Static Landers, Mobile landers (such as balloons, wheeled rovers, small body rovers, drones, penetrators, floating devices, …), Sample Return. We will elaborate on mission architectures for the most promising concepts where cubesat size devices offer an advantage in terms of affordability, feasibility, and increase of scientific return.

  20. Systolic architecture for hierarchical clustering

    SciTech Connect

    Ku, L.C.

    1984-01-01

Several hierarchical clustering methods (including the single-linkage, complete-linkage, centroid, and absolute-overlap methods) are reviewed. The absolute-overlap clustering method is selected for the design of a systolic architecture, mainly because of its simplicity. Two versions of systolic architectures for the absolute-overlap hierarchical clustering algorithm are proposed: a one-dimensional version that leads to the development of a two-dimensional version, which fully takes advantage of the underlying data structure of the problems. The two-dimensional systolic architecture can achieve a time complexity of O(m + n), in comparison with a conventional computer implementation's time complexity of O(m^2 n).
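
    As a sequential reference point for the systolic designs, naive agglomerative single-linkage clustering can be sketched as follows. Note this illustrates the hierarchical-clustering family generally, not the absolute-overlap method the paper selects, and it is deliberately unoptimized: the cubic cost of this kind of loop is exactly what a systolic array is designed to beat.

```python
def single_linkage(points, dist):
    """Naive agglomerative single-linkage clustering: repeatedly merge the
    two clusters whose closest members are nearest, recording each merge
    as (distance, cluster_a, cluster_b)."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: minimum pairwise distance between clusters
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((d, list(clusters[i]), list(clusters[j])))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges
```

    The recorded merge distances form the dendrogram: cutting it at a threshold yields the flat clustering at that level of the hierarchy.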

  1. System architectures for telerobotic research

    NASA Technical Reports Server (NTRS)

    Harrison, F. Wallace

    1989-01-01

Several activities are performed related to the definition and creation of telerobotic systems. The effort and investment required to create architectures for these complex systems can be enormous; however, the magnitude of the process can be reduced if structured design techniques are applied. A number of informal methodologies supporting certain aspects of the design process are available. More recently, prototypes of integrated tools supporting all phases of system design, from requirements analysis to code generation and hardware layout, have begun to appear. Activities related to the system architecture of telerobots are described, including current activities designed to provide a methodology for the comparison and quantitative analysis of alternative system architectures.

  2. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; Call, C.J.; Birmingham, J.G.; McDonald, C.E.; Kurath, D.E.; Friedrich, M.

    1998-09-22

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 26 figs.

  3. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; Call, Charles J.; Birmingham, Joseph G.; McDonald, Carolyn Evans; Kurath, Dean E.; Friedrich, Michele

    1998-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  4. Telemedicine system interoperability architecture: concept description and architecture overview.

    SciTech Connect

    Craft, Richard Layne, II

    2004-05-01

    In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.

  5. Adaptive immunity in the liver.

    PubMed

    Shuai, Zongwen; Leung, Miranda Wy; He, Xiaosong; Zhang, Weici; Yang, Guoxiang; Leung, Patrick Sc; Eric Gershwin, M

    2016-05-01

    The anatomical architecture of the human liver and the diversity of its immune components endow the liver with its physiological function of immune competence. Adaptive immunity is a major arm of the immune system that is organized in a highly specialized and systematic manner, thus providing long-lasting protection with immunological memory. Adaptive immunity consists of humoral immunity and cellular immunity. Cellular immunity is known to have a crucial role in controlling infection, cancer and autoimmune disorders in the liver. In this article, we will focus on hepatic virus infections, hepatocellular carcinoma and autoimmune disorders as examples to illustrate the current understanding of the contribution of T cells to cellular immunity in these maladies. Cellular immune suppression is primarily responsible for chronic viral infections and cancer. However, an uncontrolled auto-reactive immune response accounts for autoimmunity. Consequently, these immune abnormalities are ascribed to the quantitative and functional changes in adaptive immune cells and their subsets, innate immunocytes, chemokines, cytokines and various surface receptors on immune cells. A greater understanding of the complex orchestration of the hepatic adaptive immune regulators during homeostasis and immune competence is much needed to identify relevant targets for clinical intervention to treat immunological disorders in the liver. PMID:26996069

  6. Adaptive immunity in the liver

    PubMed Central

    Shuai, Zongwen; Leung, Miranda WY; He, Xiaosong; Zhang, Weici; Yang, Guoxiang; Leung, Patrick SC; Eric Gershwin, M

    2016-01-01

    The anatomical architecture of the human liver and the diversity of its immune components endow the liver with its physiological function of immune competence. Adaptive immunity is a major arm of the immune system that is organized in a highly specialized and systematic manner, thus providing long-lasting protection with immunological memory. Adaptive immunity consists of humoral immunity and cellular immunity. Cellular immunity is known to have a crucial role in controlling infection, cancer and autoimmune disorders in the liver. In this article, we will focus on hepatic virus infections, hepatocellular carcinoma and autoimmune disorders as examples to illustrate the current understanding of the contribution of T cells to cellular immunity in these maladies. Cellular immune suppression is primarily responsible for chronic viral infections and cancer. However, an uncontrolled auto-reactive immune response accounts for autoimmunity. Consequently, these immune abnormalities are ascribed to the quantitative and functional changes in adaptive immune cells and their subsets, innate immunocytes, chemokines, cytokines and various surface receptors on immune cells. A greater understanding of the complex orchestration of the hepatic adaptive immune regulators during homeostasis and immune competence is much needed to identify relevant targets for clinical intervention to treat immunological disorders in the liver. PMID:26996069

  7. Effects of non-uniform windowing in a Rician-fading channel and simulation of adaptive automatic repeat request protocols

    NASA Astrophysics Data System (ADS)

    Kmiecik, Chris G.

    1990-06-01

    Two aspects of digital communication were investigated. In the first part, a Fast Fourier Transform (FFT)-based, M-ary frequency-shift keying (FSK) receiver in a Rician-fading channel was analyzed to determine the benefits of non-uniform windowing of sampled received data. When a frequency offset occurs, non-uniform windowing provided better FFT magnitude separation. The improved dynamic range was balanced against a loss in detectability due to signal attenuation. With a large frequency offset, the improved magnitude separation outweighed the loss in detectability. An analysis was carried out to determine what frequency deviation is necessary for non-uniform windowing to outperform uniform windowing in a slow Rician-fading channel. Having established typical values of probability of bit errors, the second part of this thesis looked at improving throughput in a digital communications network by applying adaptive automatic repeat request (ARQ) protocols. The results of simulations of adaptive ARQ protocols with variable frame lengths are presented. By varying the frame length, improved throughput performance through all bit error rates was achieved.
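    The windowing trade-off described above can be illustrated with a small NumPy sketch (not the thesis's receiver model; the block size, offset, and comparison bin are chosen here for illustration): a tone falling half-way between FFT bins leaks heavily across the spectrum with a rectangular window, while a Hamming window suppresses that leakage at the cost of peak amplitude.

```python
import numpy as np

# Illustrative sketch: compare FFT magnitude separation for rectangular
# vs. Hamming windows when the tone falls half-way between FFT bins
# (a worst-case frequency offset). Parameters are assumptions, not the
# thesis's actual receiver configuration.
N = 64                      # FFT block size
n = np.arange(N)
f_bins = 10.5               # tone frequency in bins: half-bin offset
x = np.exp(2j * np.pi * f_bins * n / N)

def peak_to_leakage(window):
    """Ratio of the peak FFT magnitude to the leakage in a distant bin."""
    X = np.abs(np.fft.fft(x * window))
    return X.max() / X[30]  # bin 30 is far from the 10.5-bin tone

rect = peak_to_leakage(np.ones(N))
hamming = peak_to_leakage(np.hamming(N))
print(f"rectangular: {rect:.1f}, Hamming: {hamming:.1f}")
```

The Hamming window's ratio comes out markedly higher: better off-peak magnitude separation, bought with a lower absolute peak (the detectability loss the abstract mentions).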

  8. Look and Do Ancient Egypt. Teacher's Manual: Primary Program, Ancient Egypt Art & Architecture [and] Workbook: The Art and Architecture of Ancient Egypt [and] K-4 Videotape. History through Art and Architecture.

    ERIC Educational Resources Information Center

    Luce, Ann Campbell

    This resource contains a teaching manual, reproducible student workbook, and color teaching poster, which were designed to accompany a 2-part, 34-minute videotape, but may be adapted for independent use. Part 1 of the program, "The Old Kingdom," explains Egyptian beliefs concerning life after death as evidenced in art, architecture and the…

  9. The IVOA Architecture

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Gaudet, S.; IVOA Technical Coordination Group

    2012-09-01

    Astronomy produces large amounts of data of many kinds, coming from various sources: science space missions, ground based telescopes, theoretical models, compilation of results, etc. These data and associated processing services are made available via the Internet by "providers", usually large data centres or smaller teams (see Figure 1). The "consumers", be they individual researchers, research teams or computer systems, access these services to do their science. However, inter-connection amongst all these services and between providers and consumers is usually not trivial. The Virtual Observatory (VO) is the necessary "middle layer" framework enabling interoperability between all these providers and consumers in a seamless and transparent manner. Like the web which enables end users and machines to access transparently documents and services wherever and however they are stored, the VO enables the astronomy community to access data and service resources wherever and however they are provided. Over the last decade, the International Virtual Observatory Alliance (IVOA) has been defining various standards to build the VO technical framework for the providers to share their data and services ("Sharing"), and to allow users to find ("Finding") these resources, to get them ("Getting") and to use them ("Using"). To enable these functionalities, the definition of some core astronomically-oriented standards ("VO Core") has also been necessary. This paper will present the official and current IVOA Architecture[1], describing the various building blocks of the VO framework (see Figure 2) and their relation to all existing and in-progress IVOA standards. Additionally, it will show examples of these standards in action, connecting VO "consumers" to VO "providers".

  10. Project Integration Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2008-01-01

    The Project Integration Architecture (PIA) is a distributed, object-oriented, conceptual, software framework for the generation, organization, publication, integration, and consumption of all information involved in any complex technological process in a manner that is intelligible to both computers and humans. In the development of PIA, it was recognized that in order to provide a single computational environment in which all information associated with any given complex technological process could be viewed, reviewed, manipulated, and shared, it is necessary to formulate all the elements of such a process on the most fundamental level. In this formulation, any such element is regarded as being composed of any or all of three parts: input information, some transformation of that input information, and some useful output information. Another fundamental principle of PIA is the assumption that no consumer of information, whether human or computer, can be assumed to have any useful foreknowledge of an element presented to it. Consequently, a PIA-compliant computing system is required to be ready to respond to any questions, posed by the consumer, concerning the nature of the proffered element. In colloquial terms, a PIA-compliant system must be prepared to provide all the information needed to place the element in context. To satisfy this requirement, PIA extends the previously established object-oriented- programming concept of self-revelation and applies it on a grand scale. To enable pervasive use of self-revelation, PIA exploits another previously established object-oriented-programming concept - that of semantic infusion through class derivation. By means of self-revelation and semantic infusion through class derivation, a consumer of information can inquire about the contents of all information entities (e.g., databases and software) and can interact appropriately with those entities. Other key features of PIA are listed.

  11. RASSP signal processing architectures

    NASA Astrophysics Data System (ADS)

    Shirley, Fred; Bassett, Bob; Letellier, J. P.

    1995-06-01

    This paper discusses the impact of simulation on choosing signal processing algorithms and architectures, drawing from the experiences of the Demonstration and Benchmark inter-company teams at Lockheed Sanders, Motorola, Hughes, and ISX.

  12. Dynamic Information Architecture System

    SciTech Connect

    Christiansen, John

    1997-02-12

    The Dynamic Information Architecture System (DIAS) is a flexible object-based software framework for concurrent, multidisciplinary modeling of arbitrary (but related) processes. These processes are modeled as interrelated actions caused by and affecting the collection of diverse real-world objects represented in a simulation. The DIAS architecture allows independent process models to work together harmoniously in the same frame of reference and provides a wide range of data ingestion and output capabilities, including Geographic Information System (GIS) type map-based displays and photorealistic visualization of simulations in progress. In the DIAS implementation of the object-based approach, software objects carry within them not only the data which describe their static characteristics, but also the methods, or functions, which describe their dynamic behaviors. There are two categories of objects: (1) Entity objects which have real-world counterparts and are the actors in a simulation, and (2) Software infrastructure objects which make it possible to carry out the simulations. The Entity objects contain lists of Aspect objects, each of which addresses a single aspect of the Entity's behavior. For example, a DIAS Stream Entity representing a section of a river can have many aspects corresponding to its behavior in terms of hydrology (as a drainage system component), navigation (as a link in a waterborne transportation system), meteorology (in terms of moisture, heat, and momentum exchange with the atmospheric boundary layer), and visualization (for photorealistic visualization or map type displays), etc. This makes it possible for each real-world object to exhibit any or all of its unique behaviors within the context of a single simulation.

  13. Dynamic Information Architecture System

    1997-02-12

    The Dynamic Information Architecture System (DIAS) is a flexible object-based software framework for concurrent, multidisciplinary modeling of arbitrary (but related) processes. These processes are modeled as interrelated actions caused by and affecting the collection of diverse real-world objects represented in a simulation. The DIAS architecture allows independent process models to work together harmoniously in the same frame of reference and provides a wide range of data ingestion and output capabilities, including Geographic Information System (GIS) type map-based displays and photorealistic visualization of simulations in progress. In the DIAS implementation of the object-based approach, software objects carry within them not only the data which describe their static characteristics, but also the methods, or functions, which describe their dynamic behaviors. There are two categories of objects: (1) Entity objects which have real-world counterparts and are the actors in a simulation, and (2) Software infrastructure objects which make it possible to carry out the simulations. The Entity objects contain lists of Aspect objects, each of which addresses a single aspect of the Entity's behavior. For example, a DIAS Stream Entity representing a section of a river can have many aspects corresponding to its behavior in terms of hydrology (as a drainage system component), navigation (as a link in a waterborne transportation system), meteorology (in terms of moisture, heat, and momentum exchange with the atmospheric boundary layer), and visualization (for photorealistic visualization or map type displays), etc. This makes it possible for each real-world object to exhibit any or all of its unique behaviors within the context of a single simulation.

  14. The Mothership Mission Architecture

    NASA Astrophysics Data System (ADS)

    Ernst, S. M.; DiCorcia, J. D.; Bonin, G.; Gump, D.; Lewis, J. S.; Foulds, C.; Faber, D.

    2015-12-01

    The Mothership is considered to be a dedicated deep space carrier spacecraft. It is currently being developed by Deep Space Industries (DSI) as a mission concept that enables a broad participation in the scientific exploration of small bodies - the Mothership mission architecture. A Mothership shall deliver third-party nano-sats, experiments and instruments to Near Earth Asteroids (NEOs), comets or moons. The Mothership service includes delivery of nano-sats, communication to Earth and visuals of the asteroid surface and surrounding area. The Mothership is designed to carry about 10 nano-sats, based upon a variation of the CubeSat standard, with some flexibility on the specific geometry. The Deep Space Nano-Sat reference design is a 14.5 cm cube, which accommodates the same volume as a traditional 3U CubeSat. To reduce cost, Mothership is designed as a secondary payload aboard launches to GTO. DSI is offering slots for nano-sats to individual customers. This enables organizations with relatively low operating budgets to closely examine an asteroid with highly specialized sensors of their own choosing and carry out experiments in the proximity of or on the surface of an asteroid, while the nano-sats can be built or commissioned by a variety of smaller institutions, companies, or agencies. While the overall Mothership mission will have a financial volume somewhere between a European Space Agency (ESA) S-class and M-class mission, for instance, it can be funded through a number of small and individual funding sources and programs, hence avoiding the processes associated with traditional space exploration missions. DSI has been able to identify a significant interest in the planetary science and nano-satellite communities.

  15. Architecture and the Information Revolution.

    ERIC Educational Resources Information Center

    Driscoll, Porter; And Others

    1982-01-01

    Traces how technological changes affect the architecture of the workplace. Traces these effects from the industrial revolution up through the computer revolution. Offers suggested designs for the computerized office of today and tomorrow. (JM)

  16. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri Net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable steady state time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case for only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
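    The core dataflow idea the record relies on can be sketched generically (this is a toy firing-rule simulation with an assumed topology, not the ATAMM rules or the simulator itself): a node fires once a token is present on every input edge, consumes them, and emits a token on each output edge.

```python
from collections import deque

# Toy large-grained dataflow firing sketch. Graph topology and node names
# are illustrative assumptions, not from the ATAMM paper.
graph = {            # node -> list of downstream nodes
    "src": ["filt"],
    "filt": ["fft", "gain"],
    "fft": ["sink"],
    "gain": ["sink"],
    "sink": [],
}
inputs = {n: [] for n in graph}          # node -> list of upstream nodes
for node, outs in graph.items():
    for o in outs:
        inputs[o].append(node)

tokens = {(u, v): 0 for u, outs in graph.items() for v in outs}

def fire(node):
    """Consume one token per input edge, produce one per output edge."""
    for u in inputs[node]:
        tokens[(u, node)] -= 1
    for v in graph[node]:
        tokens[(node, v)] += 1

def ready(node):
    return inputs[node] and all(tokens[(u, node)] > 0 for u in inputs[node])

# Inject one token from the source and run the graph to quiescence.
for v in graph["src"]:
    tokens[("src", v)] += 1
fired = []
work = deque(n for n in graph if ready(n))
while work:
    node = work.popleft()
    if ready(node):
        fire(node)
        fired.append(node)
        work.extend(graph[node])
print(fired)
```

Note how "sink" fires only after both of its input edges carry a token; a steady-state ATAMM-style analysis would repeat this injection periodically and track per-resource timing.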

  17. Transverse pumped laser amplifier architecture

    DOEpatents

    Bayramian, Andrew James; Manes, Kenneth R.; Deri, Robert; Erlandson, Alvin; Caird, John; Spaeth, Mary L.

    2015-05-19

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  18. Transverse pumped laser amplifier architecture

    DOEpatents

    Bayramian, Andrew James; Manes, Kenneth; Deri, Robert; Erlandson, Al; Caird, John; Spaeth, Mary

    2013-07-09

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  19. Thin, nearly wireless adaptive optical device

    NASA Technical Reports Server (NTRS)

    Knowles, Gareth (Inventor); Hughes, Eli (Inventor)

    2008-01-01

    A thin, nearly wireless adaptive optical device capable of dynamically modulating the shape of a mirror in real time to compensate for atmospheric distortions and/or variations along an optical material is provided. The device includes an optical layer, a substrate, at least one electronic circuit layer with nearly wireless architecture, an array of actuators, power electronic switches, a reactive force element, and a digital controller. Actuators are aligned so that each axis of expansion and contraction intersects both substrate and reactive force element. Electronics layer with nearly wireless architecture, power electronic switches, and digital controller are provided within a thin-film substrate. The size and weight of the adaptive optical device is solely dominated by the size of the actuator elements rather than by the power distribution system.

  20. Thin, nearly wireless adaptive optical device

    NASA Technical Reports Server (NTRS)

    Knowles, Gareth (Inventor); Hughes, Eli (Inventor)

    2007-01-01

    A thin, nearly wireless adaptive optical device capable of dynamically modulating the shape of a mirror in real time to compensate for atmospheric distortions and/or variations along an optical material is provided. The device includes an optical layer, a substrate, at least one electronic circuit layer with nearly wireless architecture, an array of actuators, power electronic switches, a reactive force element, and a digital controller. Actuators are aligned so that each axis of expansion and contraction intersects both substrate and reactive force element. Electronics layer with nearly wireless architecture, power electronic switches, and digital controller are provided within a thin-film substrate. The size and weight of the adaptive optical device is solely dominated by the size of the actuator elements rather than by the power distribution system.

  1. Thin nearly wireless adaptive optical device

    NASA Technical Reports Server (NTRS)

    Knowles, Gareth J. (Inventor); Hughes, Eli (Inventor)

    2009-01-01

    A thin nearly wireless adaptive optical device capable of dynamically modulating the shape of a mirror in real time to compensate for atmospheric distortions and/or variations along an optical material is provided. The device includes an optical layer, a substrate, at least one electronic circuit layer with nearly wireless architecture, an array of actuators, power electronic switches, a reactive force element, and a digital controller. Actuators are aligned so that each axis of expansion and contraction intersects both substrate and reactive force element. Electronics layer with nearly wireless architecture, power electronic switches, and digital controller are provided within a thin-film substrate. The size and weight of the adaptive optical device is solely dominated by the size of the actuator elements rather than by the power distribution system.

  2. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  3. How architecture wins technology wars.

    PubMed

    Morris, C R; Ferguson, C H

    1993-01-01

    Signs of revolutionary transformation in the global computer industry are everywhere. A roll call of the major industry players reads like a waiting list in the emergency room. The usual explanations for the industry's turmoil are at best inadequate. Scale, friendly government policies, manufacturing capabilities, a strong position in desktop markets, excellent software, top design skills--none of these is sufficient, either by itself or in combination, to ensure competitive success in information technology. A new paradigm is required to explain patterns of success and failure. Simply stated, success flows to the company that manages to establish proprietary architectural control over a broad, fast-moving, competitive space. Architectural strategies have become crucial to information technology because of the astonishing rate of improvement in microprocessors and other semiconductor components. Since no single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, customers insist on stitching together their own local systems solutions. Architectures impose order on the system and make the interconnections possible. The architectural controller is the company that controls the standard by which the entire information package is assembled. Microsoft's Windows is an excellent example of this. Because of the popularity of Windows, companies like Lotus must conform their software to its parameters in order to compete for market share. In the 1990s, proprietary architectural control is not only possible but indispensable to competitive success. What's more, it has broader implications for organizational structure: architectural competition is giving rise to a new form of business organization. PMID:10124636

  4. Adaptive heterogeneous multi-robot teams

    SciTech Connect

    Parker, L.E.

    1998-11-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail the experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
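    The adaptive action selection the abstract describes can be sketched in miniature (the update rule, rates, and robot names below are illustrative assumptions in the spirit of behavior-based motivation, not ALLIANCE's actual model): a robot's motivation for an unclaimed task grows over time, a claimed task suppresses teammates' motivation, and a failed robot's tasks are silently reclaimed.

```python
# Toy behavior-based action selection: each robot grows "impatient" for
# unclaimed tasks; crossing a threshold claims the task. Robot "r1"
# fails mid-mission, and a teammate adaptively takes over.
THRESHOLD = 1.0

def simulate(robots, tasks, steps=20, rate=0.3):
    impatience = {(r, t): 0.0 for r in robots for t in tasks}
    claimed = {}                       # task -> robot currently doing it
    alive = set(robots)
    for step in range(steps):
        if step == 5:
            alive.discard("r1")        # r1 fails; its claims lapse
            claimed = {t: r for t, r in claimed.items() if r != "r1"}
        for r in sorted(alive):
            for t in tasks:
                if t in claimed:
                    impatience[(r, t)] = 0.0   # a teammate is on it
                else:
                    impatience[(r, t)] += rate # grow impatient
                    if impatience[(r, t)] >= THRESHOLD:
                        claimed[t] = r
    return claimed

print(simulate(["r1", "r2"], ["push_box"]))
```

After r1's simulated failure, r2's impatience for the now-unclaimed task rises until r2 claims it, mirroring the fault-tolerant takeover behavior the experiments demonstrate.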

  5. Non-Linear Pattern Formation in Bone Growth and Architecture

    PubMed Central

    Salmon, Phil

    2014-01-01

    The three-dimensional morphology of bone arises through adaptation to its required engineering performance. Genetically and adaptively bone travels along a complex spatiotemporal trajectory to acquire optimal architecture. On a cellular, micro-anatomical scale, what mechanisms coordinate the activity of osteoblasts and osteoclasts to produce complex and efficient bone architectures? One mechanism is examined here – chaotic non-linear pattern formation (NPF) – which underlies in a unifying way natural structures as disparate as trabecular bone, swarms of birds flying, island formation, fluid turbulence, and others. At the heart of NPF is the fact that simple rules operating between interacting elements, and Turing-like interaction between global and local signals, lead to complex and structured patterns. The study of “group intelligence” exhibited by swarming birds or shoaling fish has led to an embodiment of NPF called “particle swarm optimization” (PSO). This theoretical model could be applicable to the behavior of osteoblasts, osteoclasts, and osteocytes, seeing them operating “socially” in response simultaneously to both global and local signals (endocrine, cytokine, mechanical), resulting in their clustered activity at formation and resorption sites. This represents problem-solving by social intelligence, and could potentially add further realism to in silico computer simulation of bone modeling. What insights has NPF provided to bone biology? One example concerns the genetic disorder juvenile Paget's disease or idiopathic hyperphosphatasia, where the anomalous parallel trabecular architecture characteristic of this pathology is consistent with an NPF paradigm by analogy with known experimental NPF systems. Here, coupling or “feedback” between osteoblasts and osteoclasts is the critical element. This NPF paradigm implies a profound link between bone regulation and its architecture: in bone the architecture is the regulation. The former is the

  6. Non-linear pattern formation in bone growth and architecture.

    PubMed

    Salmon, Phil

    2014-01-01

    The three-dimensional morphology of bone arises through adaptation to its required engineering performance. Genetically and adaptively bone travels along a complex spatiotemporal trajectory to acquire optimal architecture. On a cellular, micro-anatomical scale, what mechanisms coordinate the activity of osteoblasts and osteoclasts to produce complex and efficient bone architectures? One mechanism is examined here - chaotic non-linear pattern formation (NPF) - which underlies in a unifying way natural structures as disparate as trabecular bone, swarms of birds flying, island formation, fluid turbulence, and others. At the heart of NPF is the fact that simple rules operating between interacting elements, and Turing-like interaction between global and local signals, lead to complex and structured patterns. The study of "group intelligence" exhibited by swarming birds or shoaling fish has led to an embodiment of NPF called "particle swarm optimization" (PSO). This theoretical model could be applicable to the behavior of osteoblasts, osteoclasts, and osteocytes, seeing them operating "socially" in response simultaneously to both global and local signals (endocrine, cytokine, mechanical), resulting in their clustered activity at formation and resorption sites. This represents problem-solving by social intelligence, and could potentially add further realism to in silico computer simulation of bone modeling. What insights has NPF provided to bone biology? One example concerns the genetic disorder juvenile Paget's disease or idiopathic hyperphosphatasia, where the anomalous parallel trabecular architecture characteristic of this pathology is consistent with an NPF paradigm by analogy with known experimental NPF systems. Here, coupling or "feedback" between osteoblasts and osteoclasts is the critical element. This NPF paradigm implies a profound link between bone regulation and its architecture: in bone the architecture is the regulation. The former is the emergent
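    The PSO mechanism the article invokes is easy to show concretely (a minimal textbook PSO on a toy objective; the coefficients and objective are standard illustrative choices, not from the article): each particle moves under its own best-so-far position (the "local" signal) and the swarm's best (the "global" signal).

```python
import random

# Minimal particle swarm optimization (PSO) sketch: particles respond
# simultaneously to a local signal (their personal best) and a global
# signal (the swarm best), echoing the local/global coupling of NPF.
random.seed(0)

def sphere(x, y):
    return x * x + y * y            # toy objective to minimize

n, iters = 20, 200
w, c1, c2 = 0.7, 1.5, 1.5           # inertia, cognitive, social weights
pos = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n)]
vel = [(0.0, 0.0)] * n
pbest = list(pos)                   # each particle's best-so-far
gbest = min(pos, key=lambda p: sphere(*p))

for _ in range(iters):
    for i in range(n):
        vx, vy = vel[i]
        x, y = pos[i]
        r1, r2 = random.random(), random.random()
        vx = w * vx + c1 * r1 * (pbest[i][0] - x) + c2 * r2 * (gbest[0] - x)
        vy = w * vy + c1 * r1 * (pbest[i][1] - y) + c2 * r2 * (gbest[1] - y)
        pos[i] = (x + vx, y + vy)
        vel[i] = (vx, vy)
        if sphere(*pos[i]) < sphere(*pbest[i]):
            pbest[i] = pos[i]
        if sphere(*pos[i]) < sphere(*gbest):
            gbest = pos[i]

print(sphere(*gbest))
```

The swarm converges on the minimum without any central controller, which is the "social intelligence" analogy the article draws for osteoblast/osteoclast coordination.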

  7. A framework for constructing adaptive and reconfigurable systems

    SciTech Connect

    Poirot, Pierre-Etienne; Nogiec, Jerzy; Ren, Shangping; /IIT, Chicago

    2007-05-01

    This paper presents a software approach to augmenting existing real-time systems with self-adaptation capabilities. In this approach, based on the control loop paradigm commonly used in industrial control, self-adaptation is decomposed into observing system events, inferring necessary changes based on a system's functional model, and activating appropriate adaptation procedures. The solution adopts an architectural decomposition that emphasizes independence and separation of concerns. It encapsulates observation, modeling and correction into separate modules to allow for easier customization of the adaptive behavior and flexibility in selecting implementation technologies.
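    The observe/infer/act decomposition described above can be sketched as three independent modules wired into a control loop (class names, the latency metric, and the "shed_load" action are illustrative assumptions, not the paper's API):

```python
# Sketch of a control-loop self-adaptation architecture: observation,
# modeling, and correction live in separate, swappable modules.
class Observer:
    """Collects system events (here, recorded latencies)."""
    def __init__(self):
        self.samples = []
    def record(self, latency_ms):
        self.samples.append(latency_ms)

class Model:
    """Infers a needed change from observations via a simple rule."""
    def __init__(self, limit_ms):
        self.limit_ms = limit_ms
    def infer(self, samples):
        if samples and sum(samples) / len(samples) > self.limit_ms:
            return "shed_load"
        return None

class Adaptor:
    """Activates the adaptation procedure chosen by the model."""
    def __init__(self):
        self.actions = []
    def apply(self, action):
        if action:
            self.actions.append(action)

# Wire the loop together and drive it with sample events.
obs, model, act = Observer(), Model(limit_ms=100), Adaptor()
for latency in (80, 120, 150):
    obs.record(latency)
    act.apply(model.infer(obs.samples))
print(act.actions)
```

Because each concern sits behind its own class, the inference rule or the adaptation procedure can be replaced without touching the observation code, which is the separation-of-concerns point the paper emphasizes.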

  8. Multicore Architecture-aware Scientific Applications

    SciTech Connect

    Srinivasa, Avinash

    2011-11-28

    Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on application performance, resulting in average speedups of as much as two to four times.

  9. On-board multispectral classification study. Volume 2: Supplementary tasks. [adaptive control

    NASA Technical Reports Server (NTRS)

    Ewalt, D.

    1979-01-01

    The operational tasks of the onboard multispectral classification study were defined. These tasks include: sensing characteristics for future space applications; information adaptive systems architectural approaches; data set selection criteria; and onboard functional requirements for interfacing with global positioning satellites.

  10. Selected reprints on dataflow and reduction architectures

    SciTech Connect

    Thakkar, S.S.

    1987-01-01

    This reprint collection looks at alternatives to the von Neumann architecture, namely dataflow and reduction architectures. It is organized into eight chapters that cover: different dataflow systems; the dataflow solution to multiprocessing; dataflow languages and dataflow graphs; functional programming languages and their implementation; uniprocessor architectures that provide support for reduction; parallel graph reduction machines; and hybrid multiprocessor architectures.

  11. Space-based RF signal classification using adaptive wavelet features

    SciTech Connect

    Caffrey, M.; Briles, S.

    1995-04-01

    RF signals are dispersed in frequency as they propagate through the ionosphere. For wide-band signals, this results in nonlinearly-chirped-frequency, transient signals in the VHF portion of the spectrum. This ionospheric dispersion provides a means of discriminating wide-band transients from other signals (e.g., continuous-wave carriers, burst communications, chirped-radar signals, etc.). The transient nature of these dispersed signals makes them candidates for wavelet feature selection. Rather than choosing a wavelet ad hoc, we adaptively compute an optimal mother wavelet via a neural network. Gaussian-weighted, linear frequency modulated (GLFM) wavelets are linearly combined by the network to generate our application-specific mother wavelet, which is optimized for its capacity to select features that discriminate between the dispersed signals and clutter (e.g., multiple continuous-wave carriers), not for its ability to represent the dispersed signal. The resulting mother wavelet is then used to extract features for a neural network classifier. The performance of the adaptive wavelet classifier is then compared to an FFT-based neural network classifier.
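    The FFT-based baseline mentioned at the end can be sketched in a few lines: take the magnitude spectrum of a windowed signal and use a handful of bins as classifier features. This is a generic illustration, not the paper's method; the chirp parameters and feature count are invented for the example.

```python
# Minimal sketch of FFT-based feature extraction for a transient
# classifier: window the signal, take the magnitude spectrum, and keep
# a few low-frequency bins as features. Parameters are illustrative.
import numpy as np

def fft_features(signal, n_features=8):
    """Return the first n_features normalized magnitude-spectrum bins."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    spectrum /= spectrum.max()
    return spectrum[:n_features]

fs = 1024                                            # samples per second
t = np.arange(fs) / fs
chirp = np.sin(2 * np.pi * (50 * t + 100 * t**2))    # chirped transient
carrier = np.sin(2 * np.pi * 60 * t)                 # continuous-wave clutter

print(fft_features(chirp).round(2))
print(fft_features(carrier).round(2))
```

    The two feature vectors differ markedly (the chirp spreads energy across bins while the carrier concentrates it in one), which is the property a downstream neural network classifier would exploit.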

  12. Architectural Analysis of Dynamically Reconfigurable Systems

    NASA Technical Reports Server (NTRS)

    Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly

    2010-01-01

    Topics include: the problem (increased flexibility of architectural styles decreases analyzability; behavior emerges and varies depending on the configuration; does the resulting system run according to the intended design; and architectural decisions can impede or facilitate testing); a top-down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from the new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; CFS examples of abstract interfaces and testability; and a CFS example of opening some internal details.

  13. Bipartite memory network architectures for parallel processing

    SciTech Connect

    Smith, W.; Kale, L.V. . Dept. of Computer Science)

    1990-01-01

    Parallel architectures are broadly classified as either shared memory or distributed memory architectures. In this paper, the authors propose a third family of architectures, called bipartite memory network architectures. In this architecture, processors and memory modules constitute a bipartite graph, where each processor is allowed to access a small subset of the memory modules, and each memory module allows access from a small set of processors. The architecture is particularly suitable for computations requiring dynamic load balancing. The authors explore the properties of this architecture by examining the Perfect Difference set based topology for the graph. Extensions of this topology are also suggested.
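    The Perfect Difference set idea behind the proposed topology can be checked mechanically: in a perfect difference set modulo n, every nonzero residue arises exactly once as a difference of two set elements, so connecting processor i to memory modules (i + d) mod n for each d in the set gives a very uniform bipartite graph. The sketch below is a generic illustration (the classic set {1, 2, 4} mod 7), not a construction from the paper.

```python
# Check the defining property of a perfect difference set: every
# nonzero residue mod n occurs exactly once as a difference a - b of
# distinct elements. {1, 2, 4} mod 7 is the classic planar example.
def is_perfect_difference_set(s, n):
    diffs = [(a - b) % n for a in s for b in s if a != b]
    return sorted(diffs) == list(range(1, n))

print(is_perfect_difference_set({1, 2, 4}, 7))   # True
print(is_perfect_difference_set({1, 2, 3}, 7))   # False: 1 and 6 repeat
```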

  14. Exploration Space Suit Architecture: Destination Environmental-Based Technology Development

    NASA Technical Reports Server (NTRS)

    Hill, Terry R.

    2010-01-01

    This paper picks up where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars (Hill, Johnson, IEEEAC paper #1209) left off in the development of a space suit architecture that is modular in design and interfaces, and that could be reconfigured to meet the mission, or during any given mission, depending on the tasks or destination. This paper will walk through the continued development of a space suit system architecture, and how it should evolve to meet the future exploration EVA needs of the United States space program. In looking forward to future US space exploration, and in determining how the work performed to date in the CxP would map to a future space suit architecture with maximum re-use of technology and functionality, a series of thought exercises and analyses have provided a strong indication that the CxP space suit architecture is well postured to provide a viable solution for future exploration missions. Through the destination environmental analysis presented in this paper, the modular architecture approach provides the lowest mass and lowest mission cost for the protection of the crew for any human mission outside of low Earth orbit. Some of the studies presented here provide a look at, and validation of, the non-environmental design drivers that will become ever-increasingly important the further from Earth humans venture and the longer they are away. Additionally, the analysis demonstrates a logical clustering of design environments that allows a very focused approach to technology prioritization, development and design that will maximize the return on investment independent of any particular program, and will provide architecture and design solutions for space suit systems in time or ahead of being required for any particular manned flight program in the future. Through this new approach to space suit design and interface definition, the discussion will show how the architecture is very adaptable to programmatic and funding changes with

  15. Architectures for statically scheduled dataflow

    SciTech Connect

    Lee, E.A.; Bier, J.C. )

    1990-12-01

    When dataflow program graphs can be statically scheduled, little run-time overhead (software or hardware) is necessary. This paper describes a class of parallel architectures consisting of von Neumann processors and one or more shared memories, where the order of shared-memory access is determined at compile time and enforced at run time. The architecture is extremely lean in hardware, yet for a set of important applications it can perform as well as any shared-memory architecture. Dataflow graphs can be mapped onto it statically. Furthermore, it supports shared data structures without the run-time overhead of I-structures. A software environment has been constructed that automatically maps signal processing applications onto a simulation of such an architecture, where the architecture is implemented using Motorola DSP96002 microcomputers. Static (compile-time) scheduling is possible for a subclass of dataflow program graphs where the firing pattern of actors is data independent. This model is suitable for digital signal processing and some other scientific computation. It supports recurrences, manifest iteration, and conditional assignment. However, it does not support true recursion, data-dependent iteration, or conditional evaluation. An effort is under way to weaken the constraints of the model to determine the implications for hardware design.
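    Why data-independent firing patterns make compile-time scheduling possible can be shown on a tiny synchronous-dataflow example. The two-actor graph below is an assumed illustration, not from the paper: for an edge A -> B where A produces 2 tokens per firing and B consumes 3, the balance equation 2*r_A = 3*r_B fixes the smallest repetition vector r = (3, 2), and therefore a schedule, entirely at compile time.

```python
# Toy synchronous-dataflow example: solve the balance equation for one
# edge A -> B (A produces `produce` tokens/firing, B consumes
# `consume`), then derive one valid static schedule.
from math import gcd

def repetitions(produce, consume):
    """Smallest (r_A, r_B) with produce*r_A == consume*r_B."""
    g = gcd(produce, consume)
    return consume // g, produce // g

r_a, r_b = repetitions(2, 3)
print(r_a, r_b)   # 3 2

# Build a schedule at "compile time": fire A until B has enough tokens.
schedule, tokens, fired_a, fired_b = [], 0, 0, 0
while fired_a < r_a or fired_b < r_b:
    if tokens < 3 and fired_a < r_a:
        schedule.append("A"); tokens += 2; fired_a += 1
    else:
        schedule.append("B"); tokens -= 3; fired_b += 1
print("".join(schedule))   # AABAB, and the buffer returns to empty
```

    Because the token counts never depend on data values, this schedule can be burned into the program once and enforced at run time with no dynamic scheduling hardware, which is the premise of the architecture described above.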

  16. Lunar Navigation Architecture Design Considerations

    NASA Technical Reports Server (NTRS)

    D'Souza, Christopher; Getchius, Joel; Holt, Greg; Moreau, Michael

    2009-01-01

    The NASA Constellation Program is aiming to establish a long-term presence on the lunar surface. The Constellation elements (Orion, Altair, Earth Departure Stage, and Ares launch vehicles) will require a lunar navigation architecture for navigation state updates during lunar-class missions. Orion in particular has baselined earth-based ground direct tracking as the primary source for much of its absolute navigation needs. However, due to the uncertainty in the lunar navigation architecture, the Orion program has had to make certain assumptions on the capabilities of such architectures in order to adequately scale the vehicle design trade space. The following paper outlines lunar navigation requirements, the Orion program assumptions, and the impacts of these assumptions to the lunar navigation architecture design. The selection of potential sites was based upon geometric baselines, logistical feasibility, redundancy, and abort support capability. Simulated navigation covariances mapped to entry interface flightpath-angle uncertainties were used to evaluate knowledge errors. A minimum ground station architecture was identified consisting of Goldstone, Madrid, Canberra, Santiago, Hartebeeshoek, Dongora, Hawaii, Guam, and Ascension Island (or the geometric equivalent).

  17. Architectures for intelligent robots in the age of exploitation

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Sarkar, Saurabh; Mathur, Kovid; Tennety, Srinivas

    2009-01-01

    History shows that problems that cause human confusion often lead to inventions to solve the problems, which then leads to exploitation of the inventions, creating a confusion-invention-exploitation cycle. Robotics, which started as a new type of universal machine implemented with a computer-controlled mechanism in the 1960's, has progressed through an Age of Over-expectation, a Time of Nightmare, and an Age of Realism, and is now entering the Age of Exploitation. The purpose of this paper is to propose an architecture for the modern intelligent robot in which sensors that permit adaptation to changes in the environment are combined with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. This ideal model may be compared to various controllers that have been implemented using Ethernet, CAN Bus and JAUS architectures, and to modern, embedded, mobile computing architectures. Several prototypes and simulations are considered in view of peta-computing. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications.

  18. An Architecture for Cross-Cloud System Management

    NASA Astrophysics Data System (ADS)

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
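    The homogenization idea, one management interface with a thin adapter per provider, can be sketched as follows. The provider classes and method names here are invented for illustration and do not correspond to any real vendor API.

```python
# Hypothetical sketch of cross-cloud homogenization: a single abstract
# management interface, with per-provider adapters hiding each
# provider's native API. Provider names and calls are illustrative.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Homogeneous management interface seen by the rest of the system."""
    @abstractmethod
    def start_instance(self, image: str) -> str: ...

class ProviderA(CloudProvider):
    def start_instance(self, image):
        return f"a-{image}"      # would call provider A's native API here

class ProviderB(CloudProvider):
    def start_instance(self, image):
        return f"b-{image}"      # would call provider B's native API here

def provision(providers, image):
    """Manage heterogeneous providers through one homogeneous call."""
    return [p.start_instance(image) for p in providers]

print(provision([ProviderA(), ProviderB()], "base"))  # ['a-base', 'b-base']
```

    The caller never sees which interface technology a provider uses, which is precisely what lets cross-cloud resource utilization stay uniform as providers are added.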

  19. Unstructured Adaptive Grid Computations on an Array of SMPs

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Pramanick, Ira; Sohn, Andrew; Simon, Horst D.

    1996-01-01

    Dynamic load balancing is necessary for parallel adaptive methods to solve unsteady CFD problems on unstructured grids. In this paper, we have presented such a dynamic load balancing framework, called JOVE. Results on a four-POWERnode POWER CHALLENGEarray demonstrated that load balancing gives significant performance improvements over no load balancing for such adaptive computations. The parallel speedup of JOVE, implemented using MPI on the POWER CHALLENGEarray, was significant, being as high as 31 for 32 processors. An implementation of JOVE that exploits an 'array of SMPs' architecture was also studied; this hybrid JOVE outperformed flat JOVE by up to 28% on the meshes and adaption models tested. With large, realistic meshes and actual flow-solver and adaption phases incorporated into JOVE, hybrid JOVE can be expected to yield significant advantages over flat JOVE, especially as the number of processors is increased, thus demonstrating the scalability of the array-of-SMPs architecture.

  20. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    NASA Astrophysics Data System (ADS)

    Solomon, D.; van Dijk, A.

    The "2002 ESA Lunar Architecture Workshop" (June 3-16, ESTEC, Noordwijk, NL and V2_Lab, Rotterdam, NL) is a first-of-its-kind workshop for exploring the design of extra-terrestrial (infra)structures for human exploration of the Moon and Earth-like planets, introducing 'architecture's current line of research' and adopting architectural criteria. The workshop intends to inspire, engage and challenge 30-40 European masters students from the fields of aerospace engineering, civil engineering, architecture, and art to design, validate and build models of (infra)structures for Lunar exploration. The workshop also aims to open up new physical and conceptual terrain for an architectural agenda within the field of space exploration. A sound introduction to the issues, conditions, resources, technologies, and architectural strategies will initiate the workshop participants into the context of lunar architecture scenarios. In my paper and presentation about the development of the ideology behind this workshop, I will comment on the following questions: * Can the contemporary architectural agenda offer solutions that affect the scope of space exploration? It certainly has had an impression on urbanization and colonization of previously sparsely populated parts of Earth. * Does the current line of research in architecture offer any useful strategies for combining scientific interests, commercial opportunity, and public space? What can be learned from 'state of the art' architecture that blends commercial and public programmes within one location? * Should commercial 'colonisation' projects in space be required to provide public space in a location where all humans present are likely to be there in a commercial context? Is the wave in Koolhaas' new Prada flagship store just a gesture to public space, or does this new concept in architecture and shopping evolve the public space?
* What can we learn about designing (infra-) structures on the Moon or any other

  1. Self-Consistent Simulations of Inductively Coupled Discharges at Very Low Pressures Using a FFT Method for Calculating the Non-local Electron Conductivity for the General Case of a Non-Uniform Plasma

    NASA Astrophysics Data System (ADS)

    Polomarov, Oleg; Theodosiou, Constantine; Kaganovich, Igor

    2003-10-01

    A self-consistent system of equations for the kinetic description of non-local, non-uniform, nearly collisionless plasmas of low-pressure discharges is presented. The system consists of a non-local conductivity operator and a kinetic equation for the electron energy distribution function (EEDF) averaged over fast electron bounce motions. A Fast Fourier Transform (FFT) method was applied to speed up the numerical simulations. The importance of accounting for the non-uniform plasma density profile in computing the current density profile and the EEDF is demonstrated. Effects of plasma non-uniformity on electron heating in the rf electric field have also been studied. An enhancement of the electron heating due to the bounce resonance between the electron bounce motion and the rf electric field has been observed. Additional information on the subject is posted at http://www.pppl.gov/pub_report/2003/PPPL-3814-abs.html and at http://arxiv.org/abs/physics/0211009
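    The computational payoff of the FFT here can be illustrated generically: applying a non-local, convolution-type operator directly costs O(N^2), while evaluating it through the FFT costs O(N log N) and gives the same result. The kernel and field below are random stand-ins, not the paper's conductivity operator.

```python
# Generic illustration of FFT acceleration for a non-local
# (convolution-type) operator: direct O(N^2) circular convolution
# versus the O(N log N) FFT route. The data are arbitrary stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N = 256
kernel = rng.standard_normal(N)   # stand-in for a non-local kernel
field = rng.standard_normal(N)    # stand-in for the field it acts on

# Direct circular convolution, O(N^2)
direct = np.array([sum(kernel[(i - j) % N] * field[j] for j in range(N))
                   for i in range(N)])

# FFT-based circular convolution, O(N log N): multiply in Fourier space
fast = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(field)))

print(np.allclose(direct, fast))
```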

  2. Software Defined Radio Standard Architecture and its Application to NASA Space Missions

    NASA Technical Reports Server (NTRS)

    Andro, Monty; Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has charted a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space based platforms.

  3. Expression of the 1-SST and 1-FFT genes and consequent fructan accumulation in Agave tequilana and A. inaequidens is differentially induced by diverse (a)biotic-stress related elicitors.

    PubMed

    Suárez-González, Edgar Martín; López, Mercedes G; Délano-Frier, John P; Gómez-Leyva, Juan Florencio

    2014-02-15

    The expression of genes coding for sucrose:sucrose 1-fructosyltransferase (1-SST; EC 2.4.1.99) and fructan:fructan 1-fructosyltransferase (1-FFT; EC 2.4.1.100), both fructan-biosynthesizing enzymes, was determined, together with the characterization by TLC and HPAEC-PAD and the quantification of the fructo-oligosaccharides (FOS) accumulating in response to the exogenous application of sucrose, kinetin (a cytokinin) or other plant hormones associated with (a)biotic stress responses, in two Agave species grown in vitro: domesticated Agave tequilana var. azul and wild A. inaequidens. It was found that elicitors such as salicylic acid (SA) and jasmonic acid methyl ester (MeJA) had the strongest effect on fructo-oligosaccharide (FOS) accumulation. The exogenous application of 1 mM SA induced a 36-fold accumulation of FOS of various degrees of polymerization (DP) in stems of A. tequilana. Other treatments, such as 50 mM abscisic acid (ABA), 8% sucrose (Suc), and 1.0 mg L(-1) kinetin (KIN), also led to a significant accumulation of low- and high-DP FOS in this species. Conversely, treatment with 200 μM MeJA, which was toxic to A. tequilana, induced an 85-fold accumulation of FOS in the stems of A. inaequidens. Significant FOS accumulation in this species also occurred in response to treatments with 1 mM SA, 8% sucrose, and 10% polyethylene glycol (PEG). Maximum yields of 13.6 and 8.9 mg FOS per g FW were obtained in stems of A. tequilana and A. inaequidens, respectively. FOS accumulation in the above treatments was tightly associated with increased expression levels of either the 1-FFT or the 1-SST gene in tissues of both Agave species. PMID:23988562

  4. Biologically relevant neural network architectures for support vector machines.

    PubMed

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms including variants of maximum margin, 1-norm, 2-norm and ν-SVM is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling, four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural implementation in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences and bisymmetry is ubiquitous in motor systems, it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme. PMID:24126252
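    For readers unfamiliar with the training rules being adapted, a plain linear soft-margin SVM trained by hinge-loss sub-gradient descent looks like the following. This is a textbook illustration of the algorithm family, not the paper's neural or CQM-based implementation; the data and hyperparameters are invented.

```python
# Generic linear soft-margin SVM via sub-gradient descent on the
# regularized hinge loss. A plain instance of the SVM training rules
# the paper adapts for neural computation; data are illustrative.
import numpy as np

def train_svm(X, y, lam=0.01, lr=0.05, epochs=500):
    """Return (w, b) minimizing hinge loss + (lam/2)*||w||^2."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:       # margin violated: hinge active
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only the regularizer acts
                w -= lr * lam * w
    return w, b

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_svm(X, y)
print((np.sign(X @ w + b) == y).all())   # separates the training set
```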

  5. Bioarchitecture: bioinspired art and architecture--a perspective.

    PubMed

    Ripley, Renee L; Bhushan, Bharat

    2016-08-01

    Art and architecture can be an obvious choice to pair with science, though historically this has not always been the case. This paper is an attempt to interact across disciplines, define a new genre, bioarchitecture, and present opportunities for further research, collaboration and professional cooperation. Biomimetics, or the copying of living nature, is a field that is highly interdisciplinary, involving the understanding of biological functions, structures and principles of various objects found in nature by scientists. Biomimetics can lead to biologically inspired design, adaptation or derivation from living nature. As applied to engineering, bioinspiration is a more appropriate term, involving interpretation rather than direct copying. Art involves the creation of discrete visual objects intended by their creators to be appreciated by others. Architecture is a design practice that makes a theoretical argument and contributes to the discourse of the discipline. Bioarchitecture is a blending of art/architecture and biomimetics/bioinspiration, and incorporates a bioinspired design from the outset in all parts of the work at all scales. Herein, we examine various attempts to date of art and architecture to incorporate bioinspired design into their practice, and provide an outlook and provocation to encourage collaboration among scientists and designers, with the aim of achieving bioarchitecture. This article is part of the themed issue 'Bioinspired hierarchically structured surfaces for green science'. PMID:27354727

  6. Architecture of a Generic Telescope Control and Monitoring System

    NASA Astrophysics Data System (ADS)

    Mohile, V.; Purkar, C.

    2009-09-01

    This paper focuses on a proposed architecture for a Generic Control and Monitoring System (CMS) which can be adapted for any telescope system. This architecture is largely based on an in-progress specification project that PSL is carrying out for IUCAA and NCRA. Historically, the communication link between the telescope and its users at IUCAA and NCRA has been unfriendly. It was also previously difficult to maintain, and there was no facility to add support for new features or new hardware on the fly. PSL is proposing a new, contemporary, open-source-software-based architecture, to be applied to both radio and optical telescopes, that resolves some of these issues. We present the high-level architecture and design of this CMS. Specifically, we have proposed developing a common GUI in a platform-independent, modular, secure and robust Java environment. This application, along with an Extensible Markup Language-Document Type Definition (XML-DTD) structure, can control the telescope as well as monitor its status. Thus, using the CMS, various users with different access levels can control and monitor different telescope systems. The CMS thus achieves its design objectives of being generic and not tightly coupled to the actual underlying hardware. In that way, it enables easy and flexible upgrades of the hardware.

  7. ALLIANCE: An architecture for fault tolerant multi-robot cooperation

    SciTech Connect

    Parker, L.E.

    1995-02-01

    ALLIANCE is a software architecture that facilitates the fault tolerant cooperative control of teams of heterogeneous mobile robots performing missions composed of loosely coupled, largely independent subtasks. ALLIANCE allows teams of robots, each of which possesses a variety of high-level functions that it can perform during a mission, to individually select appropriate actions throughout the mission based on the requirements of the mission, the activities of other robots, the current environmental conditions, and the robot's own internal states. ALLIANCE is a fully distributed, behavior-based architecture that incorporates the use of mathematically modeled motivations (such as impatience and acquiescence) within each robot to achieve adaptive action selection. Since cooperative robotic teams usually work in dynamic and unpredictable environments, this software architecture allows the robot team members to respond robustly, reliably, flexibly, and coherently to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. The feasibility of this architecture is demonstrated in an implementation on a team of mobile robots performing a laboratory version of hazardous waste cleanup.
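    The motivation mechanism can be illustrated with a toy version of impatience-driven action selection. This is a deliberately simplified sketch, not ALLIANCE's actual mathematical model; the tasks, threshold, and update rule are invented for the example.

```python
# Toy sketch of ALLIANCE-style motivational action selection: a
# robot's impatience for each unclaimed task grows over time, and the
# robot activates a behavior once impatience crosses a threshold.
# Values, tasks, and the update rule are invented for illustration.
def select_task(robot, tasks, claimed, impatience, threshold=3):
    for task in tasks:
        if task in claimed:
            continue                      # another robot is handling it
        impatience[robot, task] = impatience.get((robot, task), 0) + 1
        if impatience[robot, task] >= threshold:
            claimed.add(task)             # motivation crossed threshold
            return task
    return None

tasks = ["sweep", "scan"]
claimed, impatience = set(), {}
picked = [select_task("r1", tasks, claimed, impatience) for _ in range(4)]
print(picked)   # [None, None, 'sweep', 'scan']
```

    In the full architecture, acquiescence works symmetrically: a robot's motivation to keep a task decays when it makes no progress, letting a teammate's impatience take over, which is what yields the fault tolerance described above.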

  8. Transcriptomic Analysis Using Olive Varieties and Breeding Progenies Identifies Candidate Genes Involved in Plant Architecture.

    PubMed

    González-Plaza, Juan J; Ortiz-Martín, Inmaculada; Muñoz-Mérida, Antonio; García-López, Carmen; Sánchez-Sevilla, José F; Luque, Francisco; Trelles, Oswaldo; Bejarano, Eduardo R; De La Rosa, Raúl; Valpuesta, Victoriano; Beuzón, Carmen R

    2016-01-01

    Plant architecture is a critical trait in fruit crops that can significantly influence yield, pruning, planting density and harvesting. Little is known about how plant architecture is genetically determined in olive, where most of the existing varieties are traditional, with an architecture poorly suited for modern growing and harvesting systems. In the present study, we have carried out microarray analysis of meristematic tissue to compare expression profiles of olive varieties displaying differences in architecture, as well as seedlings from their cross pooled on the basis of their shared architecture-related phenotypes. The microarray used, previously developed by our group, has already been applied to identify candidate genes involved in regulating the juvenile-to-adult transition in the shoot apex of seedlings. Varieties with distinct architecture phenotypes and individuals from segregating progenies displaying opposite architecture features were used to link phenotype to expression. Here, we identify 2252 differentially expressed genes (DEGs) associated with differences in plant architecture. Microarray results were validated by quantitative RT-PCR carried out on genes with functional annotation likely related to plant architecture. Twelve of these genes were further analyzed in individual seedlings of the corresponding pool. We also examined Arabidopsis mutants in putative orthologs of these targeted candidate genes, finding altered architecture for most of them. This supports a functional conservation between species and the potential biological relevance of the candidate genes identified. This study is the first to identify genes associated with plant architecture in olive, and the results obtained could be of great help in future programs aimed at selecting phenotypes adapted to modern cultivation practices in this species. PMID:26973682

  9. Transcriptomic Analysis Using Olive Varieties and Breeding Progenies Identifies Candidate Genes Involved in Plant Architecture

    PubMed Central

    González-Plaza, Juan J.; Ortiz-Martín, Inmaculada; Muñoz-Mérida, Antonio; García-López, Carmen; Sánchez-Sevilla, José F.; Luque, Francisco; Trelles, Oswaldo; Bejarano, Eduardo R.; De La Rosa, Raúl; Valpuesta, Victoriano; Beuzón, Carmen R.

    2016-01-01

    Plant architecture is a critical trait in fruit crops that can significantly influence yield, pruning, planting density and harvesting. Little is known about how plant architecture is genetically determined in olive, where most of the existing varieties are traditional, with an architecture poorly suited for modern growing and harvesting systems. In the present study, we have carried out microarray analysis of meristematic tissue to compare expression profiles of olive varieties displaying differences in architecture, as well as seedlings from their cross pooled on the basis of their shared architecture-related phenotypes. The microarray used, previously developed by our group, has already been applied to identify candidate genes involved in regulating the juvenile-to-adult transition in the shoot apex of seedlings. Varieties with distinct architecture phenotypes and individuals from segregating progenies displaying opposite architecture features were used to link phenotype to expression. Here, we identify 2252 differentially expressed genes (DEGs) associated with differences in plant architecture. Microarray results were validated by quantitative RT-PCR carried out on genes with functional annotation likely related to plant architecture. Twelve of these genes were further analyzed in individual seedlings of the corresponding pool. We also examined Arabidopsis mutants in putative orthologs of these targeted candidate genes, finding altered architecture for most of them. This supports a functional conservation between species and the potential biological relevance of the candidate genes identified. This study is the first to identify genes associated with plant architecture in olive, and the results obtained could be of great help in future programs aimed at selecting phenotypes adapted to modern cultivation practices in this species. PMID:26973682

  10. SME2EM: Smart mobile end-to-end monitoring architecture for life-long diseases.

    PubMed

    Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani

    2016-01-01

    Monitoring life-long diseases requires continuous measurement and recording of physical vital signs. Most of these diseases are manifested through unexpected and non-uniform occurrences and behaviors. It is impractical to keep patients in hospitals, health-care institutions, or even at home for long periods of time. Monitoring solutions based on smartphones combined with mobile sensors and wireless communication technologies are a potential candidate to support complete mobility-freedom, not only for patients but also for physicians. However, existing monitoring architectures based on smartphones and modern communication technologies are not suitable to address some challenging issues, such as intensive and big data, resource constraints, data integration, and context awareness, in an integrated framework. This manuscript provides a novel mobile-based end-to-end architecture for live monitoring and visualization of life-long diseases. The proposed architecture provides smartness features to cope with continuous monitoring, data explosion, dynamic adaptation, unlimited mobility, and constrained device resources. The integration of the architecture's components provides information about disease recurrences as soon as they occur to expedite taking necessary actions, and thus prevent severe consequences. Our architecture is formally model-checked to automatically verify its correctness against designers' desirable properties at design time. Its components are fully implemented as Web services with respect to the SOA architecture, making them easy to deploy and integrate, and are supported by Cloud infrastructure and services to allow high scalability and availability of the processes and of the data being stored and exchanged. The architecture's applicability is evaluated through concrete experimental scenarios on monitoring and visualizing states of epileptic diseases. The obtained theoretical and experimental results are very promising and efficiently satisfy the proposed

  11. Architecture, modeling, and analysis of a plasma impedance probe

    NASA Astrophysics Data System (ADS)

    Jayaram, Magathi

    Variations in ionospheric plasma density can cause large amplitude and phase changes in the radio waves passing through this region. Ionospheric weather can have detrimental effects on several communication systems, including radars, navigation systems such as the Global Positioning System (GPS), and high-frequency communications. As a result, creating models of the ionospheric density is of paramount interest to scientists working in the field of satellite communication. Numerous empirical and theoretical models have been developed to study upper-atmosphere climatology and weather. Multiple measurements of plasma density over a region are of marked importance when creating these models. The lack of spatially distributed observations in the upper atmosphere is currently a major limitation in space weather research. A constellation of CubeSat platforms would be ideal for taking such distributed measurements. The use of miniaturized instruments that can be accommodated on small satellites, such as CubeSats, would be key to achieving these science goals for space weather. The accepted instrumentation techniques for measuring the electron density are Langmuir probes and the Plasma Impedance Probe (PIP). While Langmuir probes are able to provide higher-resolution measurements of relative electron density, Plasma Impedance Probes provide absolute electron density measurements irrespective of spacecraft charging. The central goal of this dissertation is to develop an integrated architecture for the PIP that will enable space weather research from CubeSat platforms. The proposed PIP chip integrates all of the major analog and mixed-signal components needed to perform swept-frequency impedance measurements. The design's primary innovation is the integration of matched Analog-to-Digital Converters (ADC) on a single chip for sampling the probe's current and voltage signals. A Fast Fourier Transform (FFT) is performed by an off-chip Field-Programmable Gate Array (FPGA).
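    The swept-frequency measurement described above amounts to dividing the spectra of the two matched ADC channels: Z(f) = V(f)/I(f). A minimal sketch of that post-processing step, with synthetic data and hypothetical function names (the dissertation's actual FPGA pipeline is not shown here):

    ```python
    import numpy as np

    def impedance_spectrum(v_samples, i_samples, fs):
        """Estimate complex impedance Z(f) = V(f) / I(f) from matched
        ADC samples of the probe's voltage and current channels."""
        n = len(v_samples)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        V = np.fft.rfft(v_samples)
        I = np.fft.rfft(i_samples)
        # Only divide in bins where the current spectrum is non-negligible.
        Z = np.divide(V, I, out=np.zeros_like(V), where=np.abs(I) > 1e-12)
        return freqs, Z

    # Sanity check with a purely resistive 50-ohm "plasma": Z should be ~50.
    fs = 1e6
    t = np.arange(1024) / fs
    v = np.cos(2 * np.pi * 50e3 * t)
    i = v / 50.0
    freqs, Z = impedance_spectrum(v, i, fs)
    peak = int(np.argmax(np.abs(np.fft.rfft(i))))
    print(abs(Z[peak]))  # ~50.0
    ```

    In the real instrument this division would be repeated at each step of the frequency sweep; the resistor example only verifies the spectral arithmetic.
    
    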

  12. Electrooptical adaptive switching network for the hypercube computer

    NASA Technical Reports Server (NTRS)

    Chow, E.; Peterson, J.

    1988-01-01

    An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.

  13. Laser guide star adaptive optics: Present and future

    SciTech Connect

    Olivier, S.S.; Max, C.E.

    1993-03-01

    Feasibility demonstrations using one to two meter telescopes have confirmed the utility of laser beacons as wavefront references for adaptive optics systems. Laser beacon architectures suitable for the new generation of eight and ten meter telescopes are presently under study. This paper reviews the concept of laser guide star adaptive optics and the progress that has been made by groups around the world implementing such systems. A description of the laser guide star program at LLNL and some experimental results is also presented.

  14. ATMTN: a telemammography network architecture.

    PubMed

    Sheybani, Ehsan O; Sankar, Ravi

    2002-12-01

    One of the goals of the National Cancer Institute (NCI), reaching more than 80% of eligible women with mammography screening by the year 2000, remains a challenge. In fact, a recent medical report reveals that while other types of cancer are experiencing negative growth, breast cancer has been the only one with a positive growth rate over the last few years. This is primarily because 1) the examination process is complex and lengthy, and 2) it is not available to the majority of women who live in remote sites. Currently, for mammography screening, women have to go to doctors or cancer centers/hospitals annually, while high-risk patients may have to visit more often. One way to resolve these problems is through advanced networking technologies and signal processing algorithms. On one hand, software modules can help detect true negatives (TN) with high precision while marking true positives (TP) for further investigation. Unavoidably, this process generates some false negatives (FN) that are potentially life-threatening; however, inclusion of the detection software improves TP detection and hence reduces FNs drastically. Since TNs are the majority of examinations in a randomly selected population, this first step greatly reduces the load on radiologists. On the other hand, high-speed networking equipment can accelerate the required clinic-lab connection and make detection, segmentation, and image-enhancement algorithms readily available to radiologists. This will bring breast cancer care, caregivers, and facilities to the patients and extend diagnostics and treatment to remote sites. This research describes an asynchronous transfer mode telemammography network (ATMTN) architecture for real-time, online screening, detection, and diagnosis of breast cancer. ATMTN is a unique high-speed network integrated with automatic, robust computer-assisted diagnosis-detection/digital signal processing (CAD

  15. Airport Surface Network Architecture Definition

    NASA Technical Reports Server (NTRS)

    Nguyen, Thanh C.; Eddy, Wesley M.; Bretmersky, Steven C.; Lawas-Grodek, Fran; Ellis, Brenda L.

    2006-01-01

    Currently, airport surface communications are fragmented across multiple types of systems. These communication systems for airport operations at most airports today are based on dedicated and separate architectures that cannot support system-wide interoperability and information sharing. The requirements placed upon the Communications, Navigation, and Surveillance (CNS) systems in airports are rapidly growing, and integration is urgently needed if the future vision of the National Airspace System (NAS) and the Next Generation Air Transportation System (NGATS) 2025 concept are to be realized. To address this and other problems such as airport surface congestion, the Space Based Technologies Project's Surface ICNS Network Architecture team at NASA Glenn Research Center has assessed airport surface communications requirements, analyzed existing and future surface applications, and defined a set of architecture functions that will help design a scalable, reliable and flexible surface network architecture to meet the current and future needs of airport operations. This paper describes the systems approach, or methodology, to networking that was employed to assess airport surface communications requirements, analyze applications, and define the surface network architecture functions as the building blocks or components of the network. The systems approach used for defining these functions is relatively new to networking. It views the surface network, along with its environment (everything that the surface network interacts with or impacts), as a system. Associated with this system are sets of services that are offered by the network to the rest of the system. Therefore, the surface network is considered part of the larger system (such as the NAS), with interactions and dependencies between the surface network and its users, applications, and devices. The surface network architecture includes components such as addressing/routing, network management, network

  16. ADAPTATION AND ADAPTABILITY, THE BELLEFAIRE FOLLOWUP STUDY.

    ERIC Educational Resources Information Center

    ALLERHAND, MELVIN E.; AND OTHERS

    A RESEARCH TEAM STUDIED INFLUENCES, ADAPTATION, AND ADAPTABILITY IN 50 POORLY ADAPTING BOYS AT BELLEFAIRE, A REGIONAL CHILD CARE CENTER FOR EMOTIONALLY DISTURBED CHILDREN. THE TEAM ATTEMPTED TO GAUGE THE SUCCESS OF THE RESIDENTIAL TREATMENT CENTER IN TERMS OF THE PSYCHOLOGICAL PATTERNS AND ROLE PERFORMANCES OF THE BOYS DURING INDIVIDUAL CASEWORK…

  17. Bit-serial neuroprocessor architecture

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    2001-01-01

    A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The neuroprocessor architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up-table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a fixed pool of neurons to achieve the successive hidden layers of a recurrent network topology.
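    The time-multiplexing idea in this abstract — reusing one fixed pool of physical neurons to evaluate successive layers, with weights streamed from the synaptic-weight RAM — can be sketched in software. This is an illustrative model only; the pool size, layer shapes, and function names below are assumptions, not the patented design:

    ```python
    import numpy as np

    def run_multiplexed(x, layer_weights, pool_size):
        """Evaluate successive layers by time-multiplexing a fixed pool of
        `pool_size` neurons: each layer is processed in groups no larger
        than the pool, with that group's weights loaded from the weight
        store (here, a list of matrices standing in for the synaptic RAM)."""
        state = x
        for W in layer_weights:                       # one pass per layer
            out = np.empty(W.shape[0])
            for start in range(0, W.shape[0], pool_size):
                group = W[start:start + pool_size]    # weights for this group
                # Sigmoid activation (the hardware uses a ROM look-up table).
                out[start:start + pool_size] = 1.0 / (1.0 + np.exp(-group @ state))
            state = out                                # neuron state registers
        return state

    # Two 8-neuron layers evaluated with a physical pool of only 4 neurons.
    rng = np.random.default_rng(0)
    layers = [rng.normal(size=(8, 8)), rng.normal(size=(8, 8))]
    y = run_multiplexed(rng.normal(size=8), layers, pool_size=4)
    print(y.shape)  # (8,)
    ```

    The trade-off mirrors the hardware one: a smaller pool means fewer physical neurons but more sequential passes per layer.
    
    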

  18. Software design by reusing architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay; Nii, H. Penny

    1992-01-01

    Abstraction fosters reuse by providing a class of artifacts that can be instantiated or customized to produce a set of artifacts meeting different specific requirements. It is proposed that significant leverage can be obtained by abstracting software system designs and the design process. The result of such an abstraction is a generic architecture and a set of knowledge-based customization tools that can be used to instantiate the generic architecture. An approach for designing software systems based on this idea is described. The approach is illustrated through an implemented example, and the advantages and limitations of the approach are discussed.

  19. Frame architecture for video servers

    NASA Astrophysics Data System (ADS)

    Venkatramani, Chitra; Kienzle, Martin G.

    1999-11-01

    Video is inherently frame-oriented, and most applications, such as commercial video processing, require manipulating video in terms of frames. However, typical video servers treat videos as byte streams and perform random access based on approximate byte offsets supplied by the client. They do not provide the frame- or timecode-oriented API that is essential for many applications. This paper describes a frame-oriented architecture for video servers. It also describes the implementation in the context of IBM's VideoCharger server. The latter part of the paper describes an application that uses the frame architecture and provides fast and slow-motion scanning capabilities to the server.
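    The gap the paper identifies — clients guessing approximate byte offsets versus a server that resolves frames and timecodes exactly — comes down to keeping a per-frame index. A minimal sketch under assumed names (this is not the VideoCharger API):

    ```python
    class FrameIndex:
        """Map frame numbers and timecodes to exact byte offsets, so a
        client can seek by frame instead of guessing a byte offset."""

        def __init__(self, frame_offsets, fps):
            self.offsets = frame_offsets  # byte offset of each frame's start
            self.fps = fps

        def offset_of_frame(self, n):
            return self.offsets[n]

        def offset_of_timecode(self, hh, mm, ss, ff):
            # Non-drop-frame timecode: frames count linearly at `fps`.
            frame = ((hh * 60 + mm) * 60 + ss) * self.fps + ff
            return self.offsets[frame]

    # Variable-size frames: offsets are the running sum of frame lengths.
    sizes = [1200, 800, 950, 700, 1100]
    offsets = [0]
    for s in sizes[:-1]:
        offsets.append(offsets[-1] + s)

    idx = FrameIndex(offsets, fps=30)
    print(idx.offset_of_frame(2))              # 2000
    print(idx.offset_of_timecode(0, 0, 0, 3))  # 2950
    ```

    With variable frame sizes (as in compressed video), no arithmetic on byte offsets alone can be exact, which is why the server, not the client, must own this table.
    
    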

  20. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    The Universal Real-Time Robotic Controller and Simulator (URRCS) is a highly parallel computing architecture for the control and simulation of robot motion. It is the result of an extensive algorithmic study of the different kinematic and dynamic computational problems arising in the control and simulation of robot motion, a study that led to the development of a class of efficient parallel algorithms for these problems. URRCS represents an algorithmically specialized architecture, in the sense that it is capable of exploiting the common properties of this class of parallel algorithms. The system has both MIMD and SIMD capabilities. It is regarded as a processor attached to the bus of an external host processor, as part of the bus memory.