Science.gov

Sample records for fpga design framework

  1. A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations

    PubMed Central

    Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2008-01-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
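
    A common way to write such channel kinetics is as a gating-variable update; for example, a two-state AMPA scheme follows ds/dt = alpha*[T]*(1 - s) - beta*s. The plain-C, floating-point sketch below illustrates that update under assumed (illustrative) rate constants and time step; the abstract does not give the exact equations, and the paper's FPGA datapath would evaluate an equivalent update in fixed point, with exponential and division terms handled by the factoring approach it describes.

    ```c
    /* Floating-point reference sketch of a two-state AMPA gating update
       (illustrative constants; not the authors' fixed-point implementation). */
    #include <stdio.h>

    static double ampa_step(double s, double T, double alpha, double beta, double dt)
    {
        /* forward-Euler update of the open-channel fraction s */
        double ds = alpha * T * (1.0 - s) - beta * s;
        return s + dt * ds;
    }

    int main(void)
    {
        double s = 0.0;                          /* open fraction */
        const double alpha = 1.1, beta = 0.19;   /* 1/(mM*ms) and 1/ms, illustrative */
        const double dt = 0.01;                  /* ms */
        for (int i = 0; i < 100; ++i) {
            double T = (i < 50) ? 1.0 : 0.0;     /* 0.5 ms transmitter pulse of 1 mM */
            s = ampa_step(s, T, alpha, beta, dt);
        }
        printf("open fraction after pulse: %f\n", s);
        return 0;
    }
    ```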

  2. Building a multi-FPGA-based emulation framework to support networks-on-chip design and verification

    NASA Astrophysics Data System (ADS)

    Liu, Yangfan; Liu, Peng; Jiang, Yingtao; Yang, Mei; Wu, Kejun; Wang, Weidong; Yao, Qingdong

    2010-10-01

    In this article, we present a highly scalable, flexible hardware-based network-on-chip (NoC) emulation framework, through which NoCs built upon various types of network topologies, routing algorithms, switching protocols and flow control schemes can be explored, compared, and validated with injected or self-generated traffic from both real-life and synthetic applications. This high degree of scalability and flexibility is achieved due to the field programmable gate array (FPGA) design choices made at both functional and physical levels. At the functional level, a NoC system to be emulated can be partitioned into two parts: (i) the processing cores and (ii) the network. Each part is mapped onto a different FPGA so that when there is any change to be made to any one of these parts, only the corresponding FPGA needs to be reconfigured and the rest of the FPGAs will be left untouched. At the physical level, two levels of interconnects are adopted to mimic NoC on-chip communications: high bandwidth and low latency parallel on-board wires, and high-speed serial multigigabit transceivers available in FPGAs. The latter is particularly important as it helps the proposed NoC emulation platform scale well with the size increase of the NoCs.

  3. FPGA design and implementation of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Yang, Zhihui; Zhou, Gang

    2015-12-01

    In this paper, we choose four different variances of 1, 3, 6 and 12 to conduct the FPGA design with three kinds of Gaussian filtering algorithms: implementing the Gaussian filter with a Gaussian filter template, approximating the Gaussian filter with mean filtering, and approximating the Gaussian filter with IIR filtering. Through waveform simulation and synthesis, we obtain the processing results on the experimental image and the FPGA resource consumption of the three methods. We take the result of the Gaussian filter in MATLAB as the standard to obtain the result error. By comparing the FPGA resources and the errors of the FPGA implementation methods, we identify the best FPGA design for a Gaussian filter. Conclusions can be drawn from the results obtained. When the variance is small, the FPGA resources are sufficient to implement the Gaussian filter with a Gaussian filter template, which is the best choice. But when the variance is so large that the FPGA resources are insufficient, the Gaussian filter can be approximated with mean filtering or IIR filtering.
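
    As a rough illustration of the first method (a direct template), the sketch below builds a normalized 1-D Gaussian kernel for one of the variances studied; the kernel radius, data type and lack of quantization are assumptions, and an FPGA template would hold fixed-point coefficients. Larger variances enlarge the template, which is why the mean-filter and IIR approximations become attractive when resources run out.

    ```c
    /* Build a normalized 1-D Gaussian template for a given variance
       (illustrative sizes; an FPGA template would quantize these coefficients). */
    #include <math.h>
    #include <stdio.h>

    #define RADIUS 9   /* half-width; roughly 3*sigma in practice */

    int main(void)
    {
        double var = 6.0;                  /* one of the variances studied: 1, 3, 6, 12 */
        double k[2 * RADIUS + 1], sum = 0.0;
        for (int i = -RADIUS; i <= RADIUS; ++i) {
            k[i + RADIUS] = exp(-(double)(i * i) / (2.0 * var));
            sum += k[i + RADIUS];
        }
        for (int i = 0; i < 2 * RADIUS + 1; ++i) {
            k[i] /= sum;                   /* normalize so coefficients sum to 1 */
            printf("%0.4f ", k[i]);
        }
        printf("\n");
        return 0;
    }
    ```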

  4. OpenACC to FPGA: A Framework for Directive-based High-Performance Reconfigurable Computing

    SciTech Connect

    Lee, Seyong; Kim, Jungwon; Vetter, Jeffrey S

    2016-01-01

    This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into OpenCL code, which is further compiled into an FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from underlying architectures, which facilitates portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.
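
    For context, the sketch below shows the kind of directive-annotated C input such a framework consumes: a generic OpenACC vector addition. It is not one of the paper's benchmarks and uses only standard OpenACC clauses, not the proposed pragma extensions.

    ```c
    /* Generic OpenACC C input (not one of the paper's benchmarks): the compiler
       chain described above would translate this into OpenCL and then into an
       FPGA configuration via the Altera Offline OpenCL compiler. */
    void vadd(const float *restrict a, const float *restrict b,
              float *restrict c, int n)
    {
        /* standard OpenACC directive; the tools decide how to pipeline/unroll */
        #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }
    ```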

  5. FPGA Design Practices for I&C in Nuclear Power Plants

    SciTech Connect

    Bobrek, Miljko; Wood, Richard Thomas; Bouldin, Donald; Waterman, Michael E

    2009-01-01

    Safe FPGA design practices can be classified into three major groups covering board-level and FPGA logic-level design practices, FPGA design entry methods, and FPGA design methodology. This paper presents the most common hardware and software design practices that are acceptable in safety-critical FPGA systems. It also proposes an FPGA-specific design life cycle including design entry, FPGA synthesis, place and route, and validation and verification.

  6. Pipelined CPU Design with FPGA in Teaching Computer Architecture

    ERIC Educational Resources Information Center

    Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon

    2012-01-01

    This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…

  7. FPGA design and implementation for EIT data acquisition.

    PubMed

    Yue, Xicai; McLeod, Chris

    2008-10-01

    OXBACT-5 was designed to meet the challenges involved in working in the intensive care hospital environment focussed particularly on thoracic imaging of patients with respiratory distress and chronic heart failure (CHF). The FPGA-based wireless LAN linked multi-channel EIT data acquisition system (DAS) providing 16 programmable excitation current channels and 64 voltage measurement channels is presented. It contains function modules of a PCI bus interface, direct digital synthesizers, dual-port memory blocks, digital demodulation and all the command and control logic in the FPGA. The whole EIT data acquisition system is fully programmable and reconfigurable from the host PC. The excitation frequency, excitation patterns, the measuring sequence and the gain of each measurement channel can be set from the host PC before each measurement. The demodulation is implemented in the FPGA chip to reduce the data rate between the DAS and the host PC. In addition, measurement process management is achieved in this FPGA chip. Complemented by analogue devices such as ADCs, DACs, analogue buffers and analogue multiplexers, the new FPGA-based EIT DAS system is implemented in a very compact way for bedside use in intensive care units of hospitals. It is intended for applications such as continuous respiration monitoring with data collection at 25 frames per second. Image reconstruction times depend on the choice of 2D or 3D imaging algorithms and the available processing power.
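
    The demodulation moved into the FPGA is, in essence, a coherent I/Q multiply-accumulate against reference waveforms at the excitation frequency. The floating-point sketch below illustrates that operation with an assumed sample rate, excitation frequency and test signal; the actual system performs the equivalent fixed-point computation on ADC samples inside the FPGA.

    ```c
    /* Coherent I/Q demodulation sketch: recover amplitude and phase of one
       measurement channel by multiply-accumulating against sine/cosine references. */
    #include <math.h>
    #include <stdio.h>

    #define N 1024                        /* samples in one measurement frame (assumed) */

    int main(void)
    {
        const double PI   = 3.14159265358979323846;
        const double fs   = 1.0e6;        /* ADC sample rate, Hz (assumed) */
        const double fexc = 62.5e3;       /* excitation frequency, Hz (assumed, whole cycles in N) */
        double I = 0.0, Q = 0.0;
        for (int n = 0; n < N; ++n) {
            double t = n / fs;
            double v = 0.8 * cos(2.0 * PI * fexc * t + 0.3);  /* stand-in for one ADC channel */
            I += v * cos(2.0 * PI * fexc * t);                /* multiply-accumulate vs. references */
            Q += v * sin(2.0 * PI * fexc * t);
        }
        printf("amplitude %.3f  phase %.3f rad\n",
               2.0 * sqrt(I * I + Q * Q) / N, atan2(-Q, I));
        return 0;
    }
    ```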

  8. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.
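
    The per-pixel work behind such a beamformer kernel is delay-and-sum: fetch each channel's RF sample at its round-trip delay and accumulate. The plain-C sketch below shows that inner loop under a simplified geometry (single transmit event, no apodization or interpolation); the function name, array sizes and delay model are assumptions, not the authors' OpenCL kernel code.

    ```c
    /* Simplified delay-and-sum for one image pixel (illustrative geometry). */
    #include <math.h>

    #define NCH   128     /* receive channels, as in the paper's configuration */
    #define NSAMP 4096    /* RF samples per channel (assumed) */

    /* rf: channel-domain RF data; elem_x: element lateral positions (m);
       (px, pz): pixel position (m); c: sound speed (m/s); fs: sample rate (Hz). */
    float das_pixel(const float rf[NCH][NSAMP], const float elem_x[NCH],
                    float px, float pz, float c, float fs)
    {
        float acc = 0.0f;
        for (int ch = 0; ch < NCH; ++ch) {
            /* transmit path approximated by the pixel depth, receive path by the
               distance from the pixel back to this element */
            float dx = px - elem_x[ch];
            float rx = sqrtf(dx * dx + pz * pz);
            int idx = (int)((pz + rx) / c * fs + 0.5f);   /* nearest-sample delay */
            if (idx >= 0 && idx < NSAMP)
                acc += rf[ch][idx];
        }
        return acc;
    }
    ```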

  9. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  10. REALIZATION OF A CUSTOM DESIGNED FPGA BASED EMBEDDED CONTROLLER.

    SciTech Connect

    SEVERINO,F.; HARVEY, M.; HAYES, T.; HOFF, L.; ODDO, P.; SMITH, K.S.

    2007-10-15

    As part of the Low Level RF (LLRF) upgrade project at Brookhaven National Laboratory's Collider-Accelerator Department (BNL C-AD), we have recently developed and tested a prototype high performance embedded controller. This controller is a custom designed PMC module employing a Xilinx V4FX60 FPGA with a PowerPC405 embedded processor, and a wide variety of on board peripherals (DDR2 SDRAM, FLASH, Ethernet, PCI, multi-gigabit serial transceivers, etc.). The controller is capable of running either an embedded version of LINUX or VxWorks, the standard operating system for RHIC front end computers (FECs). We have successfully demonstrated functionality of this controller as a standard RHIC FEC and tested all on board peripherals. We now have the ability to develop complex, custom digital controllers within the framework of the standard RHIC control system infrastructure. This paper will describe various aspects of this development effort, including the basic hardware, functional capabilities, the development environment, kernel and system integration, and plans for further development.

  11. Design of transient light signal simulator based on FPGA

    NASA Astrophysics Data System (ADS)

    Kang, Jing; Chen, Rong-li; Wang, Hong

    2014-11-01

    A design scheme for a transient light signal simulator based on a Field Programmable Gate Array (FPGA) is proposed in this paper. Based on the characteristics of transient light signals and measured feature points of optical intensity signals, a fitted curve was created in MATLAB. The wave data was then stored in a programmed AT29C1024 memory chip using a SUPERPRO programmer. The control logic was realized inside one EP3C16 FPGA chip. Data readout, data stream caching and a constant-current buck regulator for powering high-brightness LEDs were all controlled by the FPGA. A 12-bit multiplying CMOS digital-to-analog converter (DAC), the DAC7545, and an OPA277 amplifier were used to convert digital signals to voltage signals. A voltage-controlled current source consisting of an NPN transistor and an operational amplifier controlled the LED array dimming to simulate the transient light signal. An LM3405A, a 1 A constant-current buck regulator for powering LEDs, was used to simulate the strong background signal in space. Experimental results showed that the scheme can stably satisfy the design requirements as a transient light signal simulator.

  12. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have more advantages than traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we design a high-speed CMOS imaging system based on an FPGA. The core control chip of this system is the XC6SL75T, and we take advantage of a CameraLink interface and an AM41V4 CMOS image sensor to transmit and acquire image data. The AM41V4 is a 4-megapixel, high-speed, 500 frames per second CMOS image sensor with a global shutter and 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The CameraLink interface adopts the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light of objects is captured by the CMOS detectors, which convert the light to electronic signals and send them to the FPGA. The FPGA processes the received data and transmits it through the CameraLink interface, configured in full mode, to a host computer equipped with acquisition cards. The PC then stores, visualizes and processes the images. The structure and principle of the system are explained in this paper, and the hardware and software design of the system is introduced. The FPGA provides the drive clock for the CMOS sensor. The CMOS data is converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents the row transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.

  13. Discrete wavelet transform FPGA design using MatLab/Simulink

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Vera, A.; Meyer-Baese, A.; Pattichis, M.; Perry, R.

    2006-04-01

    Design of current DSP applications using state-of-the-art multi-million-gate devices requires a broad foundation of engineering skills ranging from knowledge of hardware-efficient DSP algorithms to CAD design tools. The requirement of short time-to-market, however, requires replacing traditional HDL-based designs with a MatLab/Simulink-based design flow. This not only allows the over 1 million MatLab users to design FPGAs but also to bypass the hardware design engineer, leading to a significant reduction in development time. Critical with this design flow, however, are: (1) quality-of-results, (2) sophistication of the Simulink block library, (3) compile time, (4) cost and availability of development boards, and (5) cost, functionality, and ease-of-use of the FPGA vendor provided design tools.

  14. Design of high-resolution digital microscope eyepiece based on FPGA

    NASA Astrophysics Data System (ADS)

    Cai, Jin; Chen, Enguo; Liu, Peng; Yu, Feihong

    2012-10-01

    The paper presents a low-cost and portable digital microscope eyepiece based on a Field Programmable Gate Array (FPGA). A 1.3-million-pixel CMOS (Complementary Metal Oxide Semiconductor) sensor is used as the imaging sensor. To get higher performance, the image pre-processing is completed in hardware. After that, image data are transmitted into the frame buffer through a transmission channel constructed from a FIFO and a DMA controller. The display controller gets the data from the frame buffer and sends them to the DVI/HDMI transmitter, which encodes the data with TMDS. All the control logic is realized inside one EP2C20 FPGA chip based on a SoPC (System on a Programmable Chip) framework, with a Nios II processor core as the control center. The design makes full use of FPGA parallel and pipeline processing technology to achieve hardware and software co-design, which completes high-resolution image acquisition, caching and display. The maximum resolution of the real-time preview reaches SXGA (1280 x 1024) with a frame rate of up to 15 fps. The system also integrates an SD (Secure Digital) card interface, which captures BMP format files onto the SD card.

  15. Design for Review - Applying Lessons Learned to Improve the FPGA Review Process

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Li, Kenneth E.

    2014-01-01

    Flight Field Programmable Gate Array (FPGA) designs are required to be independently reviewed. This paper provides recommendations to Flight FPGA designers to properly prepare their designs for review in order to facilitate the review process, and reduce the impact of the review time in the overall project schedule.

  16. Version control friendly project management system for FPGA designs

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.

    2016-09-01

    In complex FPGA designs, usage of a version control system is a necessity. It is especially important in the case of designs developed by many developers or even by many teams. The standard development mode offered by most FPGA vendors, however, is the GUI-based project mode. It is very convenient for a single developer, who can easily experiment with project settings, browse and modify the source hierarchy, and compile and test the design. Unfortunately, the project configuration is stored in files which are not suited for use with a Version Control System (VCS). Another important problem in big FPGA designs is the reuse of IP cores. Even though there are standard solutions like IEEE 1685-2014, they suffer from some limitations that are particularly significant for complex systems (e.g. only simple types are allowed for IP-core ports, and it is not possible to use parametrized instances of IP cores). Additionally, the overhead associated with packaging of IP cores is significant and not justified for simple reusable blocks. This paper presents a system aimed at storing the whole design in a VCS-oriented form. The hierarchy of sources is described with textual "extended project (EPRJ) files" which are fully controlled by the user and may also be put in a VCS. IP blocks may be easily added to the project just by including the accompanying EPRJ file. Both absolute and relative file paths may be used, which allows a flexible directory structure. The sources of locally developed IP blocks may be stored in directories located inside the main source tree, while sources of independently developed blocks, using separate VCS repositories, may be located outside that tree. The environment allows splitting the design into smaller parts, which are synthesized independently. That reduces the time needed to recompile the whole design if only a few blocks are modified. The system creates the standard project, which can be used for convenient interactive work with the design. After the

  17. Design of extensible meteorological data acquisition system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhang, Wen; Liu, Yin-hua; Zhang, Hui-jun; Li, Xiao-hui

    2015-02-01

    In order to compensate for the tropospheric refraction error generated in the process of satellite navigation and positioning, temperature, humidity and air pressure have to be used in the relevant models to calculate the value of this error. The FPGA XC6SLX16 was used as the core processor, and the integrated silicon pressure sensor MPX4115A and the digital temperature-humidity sensor SHT75 were used as the basic meteorological parameter detection devices. The core processor was used to control the real-time sampling of the ADC AD7608 and to acquire the serial output data of the SHT75. The data was stored in the BRAM of the XC6SLX16 and used to generate standard meteorological parameters in NMEA format. The whole design was based on the Altium hardware platform and the ISE software platform. The system was described in VHDL and schematic diagrams to realize the correct detection of temperature, humidity and air pressure. The 8-channel synchronous sampling of the AD7608 and the programmable external resources of the FPGA laid the foundation for adding further analog or digital meteorological signals. The designed meteorological data acquisition system features low cost, high performance and easy expansion.

  18. Trident: An FPGA Compiler Framework for Floating-Point Algorithms.

    SciTech Connect

    Tripp J. L.; Peterson, K. D.; Poznanovic, J. D.; Ahrens, C. M.; Gokhale, M.

    2005-01-01

    Trident is a compiler for floating point algorithms written in C, producing circuits in reconfigurable logic that exploit the parallelism available in the input description. Trident automatically extracts parallelism and pipelines loop bodies using conventional compiler optimizations and scheduling techniques. Trident also provides an open framework for experimentation, analysis, and optimization of floating point algorithms on FPGAs and the flexibility to easily integrate custom floating point libraries.

  19. FPGA-based floating-point datapath design for geometry processing

    NASA Astrophysics Data System (ADS)

    Xing, Shanzhen; Yu, William W.

    1998-10-01

    Geometry processing comprises a great many computationally intensive floating-point operations. Real-time graphics systems generally use application-specific, custom-designed parallel hardware to provide the high-performance computation power. When designing a graphics engine on an FPGA-based configurable computing system, cost-effectiveness is important. This paper investigates and proposes a cost-effective FPGA-based floating-point datapath for geometry processing. It is designed to be a basic building block for FPGA-based geometry processors. The implemented datapath operates at a frequency of 6.25 MHz and has an average floating-point operation time of 10.2 microseconds.

  20. A Design of Low Frequency Time-Code Receiver Based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Li, Guo-Dong; Xu, Lin-Sheng

    2006-06-01

    The hardware of a low frequency time-code receiver which was designed with an FPGA (field programmable gate array) and a DSP (digital signal processor) is introduced. The method of realizing time synchronization for the receiver system is described. The software developed for the DSP and FPGA is expounded, and the results of test and simulation are presented. The design is characterized by high accuracy, good reliability, fair extensibility, etc.

  1. Single Event Analysis and Fault Injection Techniques Targeting Complex Designs Implemented in Xilinx-Virtex Family Field Programmable Gate Array (FPGA) Devices

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth; Kim, Hak

    2014-01-01

    An informative session regarding SRAM FPGA basics. We present a framework for fault injection techniques applied to Xilinx Field Programmable Gate Arrays (FPGAs), introduce an overlooked time component that shows fault injection to be impractical for most real designs as a stand-alone characterization tool, and demonstrate procedures that benefit from fault injection error analysis.

  2. FPGA-Based Efficient Hardware/Software Co-Design for Industrial Systems with Consideration of Output Selection

    NASA Astrophysics Data System (ADS)

    Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.

    2016-05-01

    This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) that is used to validate a systematic sensor selection framework. The systematic sensor selection framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates a significant speed-up in the required execution time when compared to its software-based counterpart model.

  3. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration plays a crucial role in several important application domains. With computational requirements increasing as the algorithms become more complex, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for a blind iterative deconvolution method based on the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascading and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are carried out. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
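
    For reference, one Richardson-Lucy iteration has the multiplicative form est_{k+1} = est_k · ((obs / (est_k ⊛ psf)) ⊛ psf_flipped), where ⊛ denotes convolution. The 1-D, direct-convolution sketch below (sizes and names assumed) shows that update; the system described above implements the 2-D, FFT/IFFT-based form in dedicated hardware modules.

    ```c
    /* One 1-D Richardson-Lucy iteration with direct "same-size" convolution
       (illustrative sizes; the paper's system works in 2-D with FFT/IFFT modules). */
    #define N 64           /* signal length (illustrative) */
    #define K 5            /* PSF length, odd (illustrative) */

    static void convolve(const double *x, const double *h, double *y)
    {
        for (int i = 0; i < N; ++i) {
            y[i] = 0.0;
            for (int j = 0; j < K; ++j) {
                int idx = i - j + K / 2;     /* true convolution, zero-padded borders */
                if (idx >= 0 && idx < N)
                    y[i] += x[idx] * h[j];
            }
        }
    }

    /* est: current estimate (updated in place); obs: blurred observation;
       psf: normalized point spread function. */
    void rl_iteration(double *est, const double *obs, const double *psf)
    {
        double blurred[N], ratio[N], corr[N], psf_flip[K];
        convolve(est, psf, blurred);
        for (int i = 0; i < N; ++i)
            ratio[i] = obs[i] / (blurred[i] + 1e-12);   /* guard against divide-by-zero */
        for (int j = 0; j < K; ++j)
            psf_flip[j] = psf[K - 1 - j];               /* correlation = conv with flipped PSF */
        convolve(ratio, psf_flip, corr);
        for (int i = 0; i < N; ++i)
            est[i] *= corr[i];                          /* multiplicative update */
    }
    ```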

  4. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through the Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. The current experiment shows that the system can achieve high-quality video conversion with a minimal board size.

  5. An FPGA hardware/software co-design towards evolvable spiking neural networks for robotics application.

    PubMed

    Johnston, S P; Prasad, G; Maguire, L; McGinnity, T M

    2010-12-01

    This paper presents an approach that permits the effective hardware realization of a novel Evolvable Spiking Neural Network (ESNN) paradigm on Field Programmable Gate Arrays (FPGAs). The ESNN possesses a hybrid learning algorithm that consists of a Spike Timing Dependent Plasticity (STDP) mechanism fused with a Genetic Algorithm (GA). The design and implementation direction utilizes the latest advancements in FPGA technology to provide a partitioned hardware/software co-design solution. The approach achieves the maximum FPGA flexibility obtainable for the ESNN paradigm. The algorithm was applied as an embedded intelligent system robotic controller to solve an autonomous navigation and obstacle avoidance problem.

  6. Design and implementation of an FPGA-based timing pulse programmer for pulsed-electron paramagnetic resonance applications.

    PubMed

    Sun, Li; Savory, Joshua J; Warncke, Kurt

    2013-08-01

    The design, construction and implementation of a field-programmable gate array (FPGA)-based pulse programmer for pulsed-electron paramagnetic resonance (EPR) experiments is described. The FPGA pulse programmer offers advantages in design flexibility and cost over previous pulse programmers that are based on commercial digital delay generators, logic pattern generators, and application-specific integrated circuit (ASIC) designs. The FPGA pulse programmer features a novel transition-based algorithm and command protocol that is optimized for the timing structure required for most pulsed magnetic resonance experiments. The algorithm was implemented using a Spartan-6 FPGA (Xilinx), which provides an easily accessible and cost-effective solution for FPGA interfacing. An auxiliary board was designed for the FPGA-instrument interface, which buffers the FPGA outputs to meet increased power and capacitive load requirements. Device specifications include: nanosecond pulse formation (transition edge rise/fall times, ≤3 ns), low jitter (≤150 ps), a large number of channels (16 implemented; 48 available), and long pulse duration (no limit). The hardware and software for the device were designed for facile reconfiguration to match user experimental requirements and constraints. Operation of the device is demonstrated and benchmarked by applications to 1-D electron spin echo envelope modulation (ESEEM) and 2-D hyperfine sublevel correlation (HYSCORE) experiments. The FPGA approach is transferable to applications in nuclear magnetic resonance (NMR; magnetic resonance imaging, MRI), and to pulse perturbation and detection bandwidths in spectroscopies up through the optical range.
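
    The abstract does not spell out the transition-based command protocol, but the general idea of describing pulses by their edges can be sketched as a list of (time, output-word) entries that the timing hardware plays back, changing the channel outputs only at transition times. The host-side illustration below is hypothetical: the structure name, tick width and example sequence are all assumptions, not the instrument's actual protocol.

    ```c
    /* Hypothetical host-side transition list: outputs change only at listed times. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t tick;      /* time of the transition, in FPGA clock ticks */
        uint16_t outputs;   /* state of the 16 output channels after this transition */
    } transition_t;

    int main(void)
    {
        /* a two-pulse sequence on channel 0 (times illustrative) */
        const transition_t seq[] = {
            {    0, 0x0001 },   /* first pulse up    */
            {    4, 0x0000 },   /* first pulse down  */
            {  100, 0x0001 },   /* second pulse up   */
            {  108, 0x0000 },   /* second pulse down */
        };
        for (size_t i = 0; i < sizeof seq / sizeof seq[0]; ++i)
            printf("t=%llu ticks -> outputs=0x%04x\n",
                   (unsigned long long)seq[i].tick, (unsigned)seq[i].outputs);
        return 0;
    }
    ```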

  7. Design and implementation of an FPGA-based timing pulse programmer for pulsed-electron paramagnetic resonance applications

    PubMed Central

    Sun, Li; Savory, Joshua J.; Warncke, Kurt

    2014-01-01

    The design, construction and implementation of a field-programmable gate array (FPGA)-based pulse programmer for pulsed-electron paramagnetic resonance (EPR) experiments is described. The FPGA pulse programmer offers advantages in design flexibility and cost over previous pulse programmers that are based on commercial digital delay generators, logic pattern generators, and application-specific integrated circuit (ASIC) designs. The FPGA pulse programmer features a novel transition-based algorithm and command protocol that is optimized for the timing structure required for most pulsed magnetic resonance experiments. The algorithm was implemented using a Spartan-6 FPGA (Xilinx), which provides an easily accessible and cost-effective solution for FPGA interfacing. An auxiliary board was designed for the FPGA-instrument interface, which buffers the FPGA outputs to meet increased power and capacitive load requirements. Device specifications include: nanosecond pulse formation (transition edge rise/fall times, ≤3 ns), low jitter (≤150 ps), a large number of channels (16 implemented; 48 available), and long pulse duration (no limit). The hardware and software for the device were designed for facile reconfiguration to match user experimental requirements and constraints. Operation of the device is demonstrated and benchmarked by applications to 1-D electron spin echo envelope modulation (ESEEM) and 2-D hyperfine sublevel correlation (HYSCORE) experiments. The FPGA approach is transferable to applications in nuclear magnetic resonance (NMR; magnetic resonance imaging, MRI), and to pulse perturbation and detection bandwidths in spectroscopies up through the optical range. PMID:25076864

  8. DESIGN AND ANALYSIS OF AN FPGA-BASED ACTIVE FEEDBACK DAMPING SYSTEM

    SciTech Connect

    Xie, Zaipeng; Schulte, Mike; Deibele, Craig Edmond

    2010-01-01

    The Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory is a high-intensity proton-based accelerator that produces neutron beams for neutron-scattering research. As the most powerful pulsed neutron source in the world, the SNS accelerator has experienced an unprecedented beam instability that has a wide bandwidth (0 to 300 MHz) and fast growth time (10 to 100 s). In this paper, we propose and analyze several FPGA-based designs for an active feedback damping system. This signal processing system is the first FPGA-based design for active feedback damping of wideband instabilities in high-intensity accelerators. It can effectively mitigate instabilities in high-intensity proton beams, reduce radiation, and boost the accelerator's luminosity performance. Unlike existing systems, which are designed using analog components, our FPGA-based active feedback damping system offers programmability while maintaining high performance. To meet the system throughput and latency requirements, our proposed designs are guided by detailed analysis of resource and performance tradeoffs. These designs are mapped onto a reconfigurable platform that includes Xilinx Virtex-II Pro FPGAs and high-speed analog-to-digital and digital-to-analog converters. Our results show that our FPGA-based active feedback damping system can provide increased flexibility and improved signal processing performance that are not feasible with existing analog systems.

  9. A Test Methodology for Determining Space-Readiness of Xilinx SRAM-Based FPGA Designs

    SciTech Connect

    Quinn, Heather M; Graham, Paul S; Morgan, Keith S; Caffrey, Michael P

    2008-01-01

    Using reconfigurable, static random-access memory (SRAM) based field-programmable gate arrays (FPGAs) for space-based computation has been an exciting area of research for the past decade. Since both the circuit and the circuit's state are stored in radiation-tolerant memory, both could be altered by the harsh space radiation environment. Both the circuit and the circuit's state can be protected by triple-modular redundancy (TMR), but applying TMR to FPGA user designs is often an error-prone process. Faulty application of TMR could cause the FPGA user circuit to output incorrect data. This paper will describe a three-tiered methodology for testing FPGA user designs for space-readiness. We will describe the standard approach to testing FPGA user designs using a particle accelerator, as well as two methods using fault injection and a modeling tool. While accelerator testing is the current 'gold standard' for pre-launch testing, we believe the use of fault injection and modeling tools allows for easy, cheap and uniform access for discovering errors early in the design process.
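
    The core of TMR is a bitwise 2-of-3 majority vote across three copies of a module; a one-line software illustration is given below. In the FPGA itself this is a small combinational voter inserted by the TMR tooling (often per register or per signal), not C code.

    ```c
    #include <stdint.h>

    /* Bitwise 2-of-3 majority vote: each output bit takes the value that at
       least two of the three redundant copies agree on, so a single upset
       in any one copy is masked. */
    static inline uint32_t tmr_vote(uint32_t r0, uint32_t r1, uint32_t r2)
    {
        return (r0 & r1) | (r1 & r2) | (r0 & r2);
    }
    /* e.g. tmr_vote(0xF0F0, 0xF0F1, 0xF0F0) == 0xF0F0: the flipped bit is outvoted */
    ```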

  10. Design of an FPGA-based electronic flow regulator (EFR) for spacecraft propulsion system

    NASA Astrophysics Data System (ADS)

    Manikandan, J.; Jayaraman, M.; Jayachandran, M.

    2011-02-01

    This paper describes a scheme for electronically regulating the flow of propellant to the thruster from a high-pressure storage tank used in spacecraft applications. Precise delivery of propellant to the thrusters ensures propulsion system operation at best efficiency by maximizing propellant and power utilization for the mission. The proposed field programmable gate array (FPGA) based electronic flow regulator (EFR) is used to ensure precise flow of propellant to the thrusters from the high-pressure storage tank. This paper presents the hardware and software design of the electronic flow regulator and the implementation of the regulation logic on an FPGA. The motivation for the proposed FPGA-based electronic flow regulation lies in the disadvantages of the conventional approach of using analog circuits. Digital flow regulation overcomes its analog equivalent because digital circuits are highly flexible, are less affected by noise, give repeatable and accurate performance, interface more easily to computers, allow data storage, and have a lower failure rate. An FPGA has certain advantages over an ASIC and a microprocessor/micro-controller, which motivated us to opt for an FPGA-based electronic flow regulator. Also, since the control algorithm is implemented in software, it can be modified without changing the hardware. This scheme is simple enough to adopt for a wide range of applications where flow is to be regulated for efficient operation. The proposed scheme is based on a space-qualified re-configurable field programmable gate array (FPGA) and a hybrid micro circuit (HMC). A graphical user interface (GUI) based application software is also developed for debugging, monitoring and controlling the electronic flow regulator from the PC COM port.

  11. Design and tuning of FPGA implementations of neural networks

    NASA Astrophysics Data System (ADS)

    Clare, Peter J. C.; Gulley, J. W.; Hickman, Duncan; Smith, Moira I.

    1997-06-01

    Artificial neural network (ANN) algorithms are applicable in a variety of roles for image processing in infrared search and track (IRST) systems. Achieving a high throughput is a key objective in developing ANNs for processing large numbers of pixels at high frame rates. Previous work has investigated the use of a neural core supported by configurable logic to achieve a versatile technology applicable to a variety of systems. The implementation of multi-layer perceptron (MLP) ANNs, using field programmable gate array (FPGA) technology to ensure upgradability and reconfigurability, is the focus of this research. Approximations to the MLP algorithms are needed to ensure that a high throughput can be achieved with a sufficiently low gate count.

  12. Embedded EMD algorithm within an FPGA-based design to classify nonlinear SDOF systems

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan D.; Pei, Jin-Song; Wright, Joseph P.; Tull, Monte P.

    2010-04-01

    Compared with traditional microprocessor-based systems, rapidly advancing field-programmable gate array (FPGA) technology offers a more powerful, efficient and flexible hardware platform. An FPGA and microprocessor (i.e., hardware and software) co-design is developed to classify three types of nonlinearities (including linear, hardening and softening) of a single-degree-of-freedom (SDOF) system subjected to free vibration. This significantly advances the team's previous work on using FPGAs for wireless structural health monitoring. The classification is achieved by embedding two important algorithms - empirical mode decomposition (EMD) and backbone curve analysis. Design considerations to embed EMD in FPGA and microprocessor are discussed. In particular, the implementation of cubic spline fitting and the challenges encountered using both hardware and software environments are discussed. The backbone curve technique is fully implemented within the FPGA hardware and used to extract instantaneous characteristics from the uniformly distributed data sets produced by the EMD algorithm as presented in a previous SPIE conference by the team. An off-the-shelf high-level abstraction tool along with the MATLAB/Simulink environment is utilized to manage the overall FPGA and microprocessor co-design. Given the limited computational resources of an embedded system, we strive for a balance between the maximization of computational efficiency and minimization of resource utilization. The value of this study lies well beyond merely programming existing algorithms in hardware and software. Among others, extensive and intensive judgment is exercised involving experiences and insights with these algorithms, which renders processed instantaneous characteristics of the signals that are well-suited for wireless transmission.

  13. Effectiveness of Internal vs. External SEU Scrubbing Mitigation Strategies in a Xilinx FPGA: Design, Test, and Analysis

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Poivey C.; Petrick, D.; Espinosa, D.; Lesea, Austin; LaBel, K. A.; Friendlich, M; Kim, H; Phan, A.

    2008-01-01

    We compare two scrubbing mitigation schemes for Xilinx FPGA devices. The design of the scrubbers is briefly discussed along with an examination of mitigation limitations. Proton and Heavy Ion data are then presented and analyzed.

  14. Design and FPGA implementation of real-time automatic image enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Dong, GuoWei; Hou, ZuoXun; Tang, Qi; Pan, Zheng; Li, Xin

    2016-11-01

    In order to improve image processing quality and boost the processing rate, this paper proposes a real-time automatic image enhancement algorithm. It is based on the histogram equalization algorithm and the piecewise linear enhancement algorithm, and it calculates the relationship between the histogram and the piecewise linear function by analyzing the histogram distribution for adaptive image enhancement. Furthermore, the corresponding FPGA processing modules are designed to implement the methods. In particular, high-performance parallel pipelined technology and the inherent parallel processing ability of the modules are exploited to ensure the real-time processing ability of the complete system. Simulations and experiments show that the FPGA hardware implementation of the algorithm has low hardware cost, high real-time performance, and good processing performance in different scenes. The algorithm can effectively improve image quality and has broad prospects in the image processing field.
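
    The histogram-equalization stage referred to above maps each grey level through the normalized cumulative histogram. A minimal 8-bit software sketch is shown below; function and buffer names are assumptions, the piecewise-linear stage is omitted, and the FPGA modules compute the same histogram and mapping in pipelined hardware.

    ```c
    #include <stdint.h>
    #include <stddef.h>

    /* Global histogram equalization of an 8-bit image: build the grey-level
       histogram, form the cumulative distribution, and remap every pixel
       through the normalized CDF. */
    void hist_equalize(uint8_t *img, size_t npix)
    {
        size_t hist[256] = {0};
        uint8_t lut[256];
        unsigned long long cdf = 0;
        if (npix == 0) return;
        for (size_t i = 0; i < npix; ++i)
            hist[img[i]]++;                             /* histogram */
        for (int v = 0; v < 256; ++v) {
            cdf += hist[v];                             /* running cumulative count */
            lut[v] = (uint8_t)((cdf * 255ULL) / npix);  /* normalized mapping */
        }
        for (size_t i = 0; i < npix; ++i)
            img[i] = lut[img[i]];                       /* apply the mapping */
    }
    ```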

  15. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task consisting of typically three design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from the instruction set but also from an architecture description including pipelining behavior, which allows design and development tool consistency across all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like a constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operations were the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement that a TVP offers compared with traditional RISC or PDSP designs.

  16. Design of miniature hybrid target recognition system with combination of FPGA+DSP

    NASA Astrophysics Data System (ADS)

    Luo, Shishang; Li, Xiujian; Jia, Hui; Hu, Wenhua; Nie, Yongming; Chang, Shengli

    2010-10-01

    With advantages of flexibility, high bandwidth, high spatial resolution and high-speed parallel operation, opto-electronic hybrid target recognition systems can be applied in many civil and military areas, such as video surveillance, intelligent navigation and robot vision. A miniature opto-electronic hybrid target recognition system based on FPGA+DSP is designed, which employs only a single Fourier lens. With the precise timing control of the FPGA and image pretreatment by the DSP, the system performs both the Fourier transform and the inverse Fourier transform in an all-optical process, which improves recognition speed and reduces the system volume remarkably. We analyzed the system performance, and a method to achieve scale-invariant pattern recognition was proposed on the basis of extensive experiments.

  17. Fault Tolerance Implementation within SRAM Based FPGA Designs based upon Single Event Upset Occurrence Rates

    NASA Technical Reports Server (NTRS)

    Berg, Melanie

    2006-01-01

    Emerging technology is enabling the design community to consistently expand the amount of functionality that can be implemented within Integrated Circuits (ICs). As the number of gates placed within an FPGA increases, the complexity of the design can grow exponentially. Consequently, the ability to create reliable circuits has become an incredibly difficult task. In order to ease the complexity of design completion, the commercial design community has developed a very rigid (but effective) design methodology based on synchronous circuit techniques. In order to create faster, smaller and lower power circuits, transistor geometries and core voltages have decreased. In environments that contain ionizing energy, such a combination will increase the probability of Single Event Upsets (SEUs) and will consequently affect the state space of a circuit. In order to combat the effects of radiation, the aerospace community has developed several "Hardened by Design" (fault tolerant) design schemes. This paper will address design mitigation schemes targeted for SRAM-based FPGA CMOS devices. Because some mitigation schemes may be overzealous (too much power, area, complexity, etc.), the designer should be conscious that system requirements can ease the amount of mitigation necessary for acceptable operation. Therefore, various degrees of Fault Tolerance will be demonstrated along with an analysis of its effectiveness.

  18. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (Charge-Coupled Device) has been widely applied in various fields and plays an important role in modern sensing systems, so researching a real-time image acquisition and display scheme based on a CCD device has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges and problems of the system are analyzed and solutions are put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and an analog front end (AFE) processes the CCD image signal, including amplification, filtering, noise elimination and correlated double sampling (CDS). An AD9945 produced by ADI Corporation converts the analog signal into a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was completed, and real-time display of the images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its indicators meet the actual project requirements.

  19. FPGA-based Upgrade to RITS-6 Control System, Designed with EMP Considerations

    SciTech Connect

    Harold D. Anderson, John T. Williams

    2009-07-01

    -of-nanoseconds delay to propagate across the FPGA. This paper discusses the design, installation, and testing of the proposed system upgrade, including failure statistics and modifications to the original design.

  20. FPGA-based data processing module design of on-board radiometric calibration in visible/near infrared bands

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Li, Chenyang; Yue, Tao; Liu, Na; Jiang, Linjun; Sun, Yue; Li, Mingyan

    2015-12-01

    FPGA technology has long been applied to on-board radiometric calibration data processing; however, the degree of integration of the FPGA programs has not been high. For example, some sensors compress remote sensing images and transfer them to a ground station to calculate the calibration coefficients, which affects the timeliness of on-board radiometric calibration. This paper designs an integrated flow chart for on-board radiometric calibration. The FPGA-based radiometric calibration data processing modules are built using System Generator. The paper focuses on analyzing the calculation accuracy of the FPGA-based two-point method and verifies the feasibility of this method. Calibration data were acquired with a hardware platform built from an integrating sphere, a CMOS camera (Canon 60D), ASD spectrometers and an optical filter (center wavelength: 690 nm, bandwidth: 45 nm). The platform can simulate single-band on-board radiometric calibration data acquisition in the visible/near-infrared band. The calibration coefficients were then calculated from the acquired data using the FPGA modules. Experimental results show that the camera linearity is above 99%, meeting the experimental requirement. Compared with MATLAB, the calculation accuracy of the FPGA two-point method is as follows: the error in the gain value is 0.0053%; the error in the offset value is 0.00038719%. These results meet the experimental accuracy requirements.
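
    The two-point method evaluated above amounts to solving DN = gain × L + offset from two known radiance levels and the mean digital numbers recorded at each. The sketch below shows that arithmetic with illustrative numbers (not the paper's measurements); the FPGA modules perform the same computation, and the small gain/offset errors quoted above measure its accuracy against MATLAB.

    ```c
    /* Two-point radiometric calibration: solve DN = gain * L + offset from two
       radiance levels (values below are illustrative only). */
    #include <stdio.h>

    int main(void)
    {
        double L1 = 10.0, L2 = 60.0;      /* integrating-sphere radiance levels (assumed units) */
        double DN1 = 412.0, DN2 = 1987.0; /* mean digital numbers recorded at each level */
        double gain = (DN2 - DN1) / (L2 - L1);
        double offset = DN1 - gain * L1;
        printf("gain = %.4f DN per radiance unit, offset = %.4f DN\n", gain, offset);
        return 0;
    }
    ```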

  1. FPGA-based L-band pulse Doppler radar design and implementation

    NASA Astrophysics Data System (ADS)

    Savci, Kubilay

    As its name implies, RADAR (Radio Detection and Ranging) is an electromagnetic sensor used for detecting and locating targets from their return signals. Radar systems propagate electromagnetic energy from the antenna, part of which is intercepted by an object. Objects reradiate a portion of this energy, which is captured by the radar receiver. The received signal is then processed for information extraction. Radar systems are widely used for surveillance, air security, navigation, weather hazard detection, as well as remote sensing applications. In this work, an FPGA-based L-band pulse Doppler radar prototype, which is used for target detection, localization and velocity calculation, has been built, and a general-purpose pulse Doppler radar processor has been developed. This radar is a ground-based stationary monopulse radar, which transmits a short pulse with a certain pulse repetition frequency (PRF). Return signals from the target are processed and information about their location and velocity is extracted. Discrete components are used for the transmitter and receiver chain. The hardware solution is based on a Xilinx Virtex-6 ML605 FPGA board, responsible for the control of the radar system and the digital signal processing of the received signal, which involves Constant False Alarm Rate (CFAR) detection and pulse Doppler processing. The algorithm is implemented in MATLAB/SIMULINK using the Xilinx System Generator for DSP tool. The field programmable gate array (FPGA) implementation of the radar system provides the flexibility of changing parameters such as the PRF and pulse length, so it can be used with different radar configurations as well. A VHDL design has been developed for a 1 Gbit Ethernet connection to transfer the digitized return signal and detection results to a PC. A-Scope software has been developed in the C# programming language to display time-domain radar signals and detection results on the PC. Data are processed both in the FPGA chip and on the PC. The FPGA uses fixed
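
    One of the processing steps named above, CFAR detection, can be illustrated with a simple cell-averaging (CA-CFAR) routine: each range cell is compared against a threshold scaled from the mean power of its surrounding training cells, with guard cells excluded around the cell under test. The sketch below uses assumed window sizes and a generic scale factor; the radar's actual System Generator implementation is not detailed in the abstract.

    ```c
    /* Cell-averaging CFAR sketch (window sizes and scale factor are assumptions). */
    #include <stddef.h>

    #define TRAIN 8   /* training cells on each side */
    #define GUARD 2   /* guard cells on each side */

    /* power: squared-magnitude range cells; n: number of cells;
       detections: set to 1 where a target is declared; alpha: threshold scale. */
    void ca_cfar(const double *power, size_t n, int *detections, double alpha)
    {
        for (size_t i = 0; i < n; ++i) {
            double noise = 0.0;
            int count = 0;
            for (int k = -(TRAIN + GUARD); k <= TRAIN + GUARD; ++k) {
                if (k >= -GUARD && k <= GUARD) continue;      /* skip CUT and guard cells */
                long j = (long)i + k;
                if (j < 0 || j >= (long)n) continue;
                noise += power[j];
                ++count;
            }
            double threshold = alpha * (count ? noise / count : 0.0);
            detections[i] = (count > 0 && power[i] > threshold);
        }
    }
    ```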

  2. Statechart-based design controllers for FPGA partial reconfiguration

    NASA Astrophysics Data System (ADS)

    Łabiak, Grzegorz; Wegrzyn, Marek; Rosado Muñoz, Alfredo

    2015-09-01

    Statechart diagrams and the UML technique can be a vital part of early conceptual modeling. At present there is not much support in hardware design methodologies for the reconfiguration features of reprogrammable devices. The authors try to bridge the gap between the imprecise UML model and a formal HDL description. The key concept in the authors' proposal is to describe the behavior of the digital controller by statechart diagrams and to map some parts of the behavior into reprogrammable logic by means of groups of states which form sequential automata. The whole process is illustrated by an example with experimental results.

  3. Application in DSP/FPGA design of Matlab/Simulink

    NASA Astrophysics Data System (ADS)

    Liu, Yong-mei; Guan, Yong; Zhang, Jie; Wu, Min-hua; Wu, Lin-wei

    2012-12-01

    As an off-line simulation tool, the modular modelling method of Matlab/Simulink has the features of high efficiency and visualization. In order to realize the fast design and simulation of prototype systems, the new method of SignalWAVe/Simulink mixed modelling is presented, and a Reed-Solomon encoder-decoder model is built. The Reed-Solomon encoder-decoder model is simulated in Simulink. Further, the C language program and the model's .out executable file are created by the SignalWAVe RTW Options module, which completes the hardware co-simulation. The simulation result conforms to the theoretical analysis, thus proving the validity and feasibility of this method.

  4. FPGA Coprocessor Design for an Onboard Multi-Angle Spectro-Polarimetric Imager

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.

    2010-01-01

    A multi-angle spectro-polarimetric imager (MSPI) is an advanced camera system currently under development at JPL for possible future consideration on a satellite-based Aerosol-Cloud-Environment (ACE) interaction study. The light in the optical system is subjected to a complex modulation designed to make the overall system robust against many instrumental artifacts that have plagued such measurements in the past. This scheme involves two photoelastic modulators that are beating in a carefully selected pattern against each other. In order to properly sample this modulation pattern, each of the proposed nine cameras in the system needs to read out its imager array about 1,000 times per second. The onboard processing required to compress this data involves least-squares fits (LSFs) of Bessel functions to data from every pixel in real time, thus requiring an onboard computing system with advanced data processing capabilities in excess of those commonly available for space flight. As a potential solution to meet the MSPI onboard processing requirements, an LSF algorithm was developed on the Xilinx Virtex-4FX60 field programmable gate array (FPGA). In addition to configurable hardware capability, this FPGA includes PowerPC405 microprocessors, which together enable a combination hardware/software processing system. A laboratory demonstration was carried out based on a hardware/software co-designed processing architecture that includes hardware-based data collection and least-squares fitting (computationally intensive), and software-based transcendental function computation (algorithmically complex) on the FPGA. Initial results showed that these calculations can be handled using a combination of the Virtex-4 PowerPC core and the hardware fabric.

  5. Design of a system based on DSP and FPGA for video recording and replaying

    NASA Astrophysics Data System (ADS)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper presents a video recording and replaying system built on an architecture of a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system achieves encoding, recording, decoding and replaying of the Video Graphics Array (VGA) signals that are displayed on a monitor during the navigation of airplanes and ships. In this architecture, the DSP is the main processor and carries out the large amount of complex calculation required for digital signal processing, while the FPGA is a coprocessor that preprocesses the video signals and implements the logic control of the system. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer in the system. This transfer mode avoids a data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) hard disk, which offers high-speed data access without relying on a computer. The main functions of the logic on the FPGA are described, and screenshots of the behavioral simulation are provided in this paper. In the DSP program design, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM without CPU intervention, so that the CPU's high computing performance is fully exploited and its time is saved. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways and means of achieving high code performance are briefly presented. The data processing capability of the system is satisfactory, and the smoothness of the replayed video is acceptable. Owing to its design flexibility and reliable operation, the system based on DSP and FPGA
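
    One of the building blocks mentioned above, buffering streaming data on its way to the EDMA engine, is conventionally realized as a FIFO. A generic synchronous FIFO is sketched below in VHDL purely as an illustration of the idea; the depth, width and port names are assumptions, not details of the described system, in which the FIFO is an external chip on the EMIF bus:

      library ieee;
      use ieee.std_logic_1164.all;
      use ieee.numeric_std.all;

      entity sync_fifo is
        generic (DEPTH : natural := 16; WIDTH : natural := 8);
        port (
          clk, rst     : in  std_logic;
          wr_en, rd_en : in  std_logic;
          din          : in  std_logic_vector(WIDTH-1 downto 0);
          dout         : out std_logic_vector(WIDTH-1 downto 0);
          empty, full  : out std_logic
        );
      end entity;

      architecture rtl of sync_fifo is
        type mem_t is array (0 to DEPTH-1) of std_logic_vector(WIDTH-1 downto 0);
        signal mem            : mem_t;
        signal wr_ptr, rd_ptr : natural range 0 to DEPTH-1 := 0;
        signal count          : natural range 0 to DEPTH   := 0;
      begin
        process (clk)
          variable do_wr, do_rd : boolean;
        begin
          if rising_edge(clk) then
            if rst = '1' then
              wr_ptr <= 0; rd_ptr <= 0; count <= 0;
            else
              do_wr := wr_en = '1' and count < DEPTH;
              do_rd := rd_en = '1' and count > 0;
              if do_wr then
                mem(wr_ptr) <= din;
                wr_ptr      <= (wr_ptr + 1) mod DEPTH;
              end if;
              if do_rd then
                dout   <= mem(rd_ptr);
                rd_ptr <= (rd_ptr + 1) mod DEPTH;
              end if;
              -- occupancy bookkeeping handles simultaneous read and write
              if do_wr and not do_rd then
                count <= count + 1;
              elsif do_rd and not do_wr then
                count <= count - 1;
              end if;
            end if;
          end if;
        end process;
        empty <= '1' when count = 0 else '0';
        full  <= '1' when count = DEPTH else '0';
      end architecture;

    An on-chip equivalent like this one is the usual choice whenever the buffering can be kept inside the FPGA instead of on a dedicated FIFO chip.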

  6. Reliability concerns with logical constants in Xilinx FPGA designs

    SciTech Connect

    Quinn, Heather M; Graham, Paul; Morgan, Keith; Ostler, Patrick; Allen, Greg; Swift, Gary; Tseng, Chen W

    2009-01-01

    In Xilinx Field Programmable Gate Arrays, logical constants, which ground unused inputs and provide constants for designs, are implemented in SEU-susceptible logic. In the past, these logical constants have been shown to cause the user circuit to output bad data and were not resettable through off-line reconfiguration. In the more recent devices, logical constants are less problematic, though mitigation should still be considered for high-reliability applications. In conclusion, we have presented a number of reliability concerns with logical constants in the Xilinx Virtex family. There are two main categories of logical constants: implicit and explicit. In all of the Virtex devices, the implicit logical constants are implemented using half latches, which in the most recent devices are several orders of magnitude smaller than configuration bit cells. Explicit logical constants are implemented exclusively using constant LUTs in the Virtex-I and Virtex-II, and use a combination of constant LUTs and architectural posts to the ground plane in the Virtex-4. We have also presented mitigation methods and options for these devices. While SEUs in implicit and some types of explicit logical constants can cause data corruption, the chance of failure from these components is now much smaller than it was in the Virtex-I device. Therefore, in many cases mitigation might not be necessary, except in extremely high-reliability situations.

  7. FPGA Verification Accelerator (FVAX)

    NASA Technical Reports Server (NTRS)

    Oh, Jane; Burke, Gary

    2008-01-01

    Is verification acceleration possible? Increasing the visibility of the internal nodes of the FPGA results in much faster debug time, and forcing internal signals directly allows a problem condition to be set up very quickly. Is this all? No; this is part of a comprehensive effort to improve the JPL FPGA design and V&V process.

  8. Design of belief propagation based on FPGA for the multistereo CAFADIS camera.

    PubMed

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel, pipelined architecture that implements the algorithm without external memory. Although the BRAM usage of the device increases considerably, we can maintain the real-time constraints by exploiting the very high signal processing capability available through parallelism and by accessing several memories simultaneously. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm.

  9. FPGA-based real-time phase measuring profilometry algorithm design and implementation

    NASA Astrophysics Data System (ADS)

    Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng

    2016-11-01

    Phase measuring profilometry (PMP) has been widely used in many fields, such as computer-aided verification (CAV) and flexible manufacturing systems (FMS). High-frame-rate (HFR), real-time, vision-based feedback control will be a common demand in the near future. However, the instruction time delay in a computer caused by numerous repetitive operations greatly limits the efficiency of data processing. An FPGA has the advantages of a pipelined architecture and parallel execution, and is well suited to the PMP algorithm. In this paper, we design a fully pipelined hardware architecture for PMP. The functions of the hardware architecture include rectification, phase calculation, phase shifting, and stereo matching. Experiments verified the performance of this method, and the factors that may influence the computation accuracy were analyzed.
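
    For reference, the phase calculation stage of PMP recovers a wrapped phase from N phase-shifted fringe images. In the widely used four-step variant (phase shifts of 0, pi/2, pi and 3pi/2; the shift count is an assumption here, since the abstract does not state it), the per-pixel formula is, in LaTeX notation:

      \phi(x,y) = \arctan\!\left( \frac{I_4(x,y) - I_2(x,y)}{I_1(x,y) - I_3(x,y)} \right)

    On an FPGA this arctangent is usually evaluated with a pipelined CORDIC or a lookup table, which is what makes a fully pipelined, one-result-per-clock implementation feasible.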

  10. Design of Low-Cost FPGA Hardware for Real-time ICA-Based Blind Source Separation Algorithm

    NASA Astrophysics Data System (ADS)

    Charoensak, Charayaphan; Sattar, Farook

    2005-12-01

    Blind source separation (BSS) of independent sources from their convolutive mixtures is a problem in many real-world multisensor applications. In this paper, we propose and implement an efficient FPGA hardware architecture for the realization of a real-time BSS. The architecture can be implemented using a low-cost FPGA (field programmable gate array). The architecture offers a good balance between hardware requirements (gate count and minimum clock speed) and separation performance. The FPGA design implements the modified Torkkola's BSS algorithm for audio signals based on the ICA (independent component analysis) technique. Here, the separation is performed by implementing noncausal filters, instead of the typical causal filters, within the feedback network. This reduces the required length of the unmixing filters as well as provides better separation and faster convergence. A description of the hardware as well as a discussion of some issues regarding the practical hardware realization are presented. Results of various FPGA simulations as well as real-time testing of the final hardware design in a real environment are given.

  11. Design and implementation of low power clock gated 64-bit ALU on ultra scale FPGA

    NASA Astrophysics Data System (ADS)

    Gupta, Ashutosh; Murgai, Shruti; Gulati, Anmol; Kumar, Pradeep

    2016-03-01

    A 64-bit energy-efficient arithmetic and logic unit (ALU) using a negative-latch-based clock gating technique is designed in this paper. The 64-bit ALU is built from multiplexer-based full adder cells and is controlled by a gated clock. A negative-latch-based circuit is used to generate the gated clock, which then controls the multiplexer-based 64-bit ALU. The circuit has been synthesized on a Kintex FPGA through Xilinx ISE Design Suite 14.7 using 28 nm technology in Verilog HDL, simulated on ModelSim 10.3c, and verified using SystemVerilog on QuestaSim in a UVM environment. We achieved 74.07%, 92.93% and 95.53% reductions in total clock power, 89.73%, 91.35% and 92.85% reductions in I/O power, 67.14%, 62.84% and 74.34% reductions in dynamic power, and 25.47%, 29.05% and 46.13% reductions in total supply power at 20 MHz, 200 MHz and 2 GHz, respectively. The power has been calculated using the XPower Analyzer tool of Xilinx ISE Design Suite 14.3.
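
    The gating element described here is conceptually a latch-and-AND clock gating cell: the enable is captured by a latch that is transparent while the clock is low (a negative latch), so the enable can only change during the low phase and the gated clock stays glitch-free. A minimal sketch follows, written in VHDL for consistency with the other sketches in this listing (the cited design itself is in Verilog, and the port names here are assumptions):

      library ieee;
      use ieee.std_logic_1164.all;

      entity clock_gate is
        port (
          clk  : in  std_logic;   -- free-running clock
          en   : in  std_logic;   -- gating enable from control logic
          gclk : out std_logic    -- gated clock driving the ALU registers
        );
      end entity;

      architecture rtl of clock_gate is
        signal en_latched : std_logic := '0';
      begin
        -- negative latch: transparent while clk = '0', holds while clk = '1',
        -- so en cannot glitch the gated clock during the high phase
        process (clk, en)
        begin
          if clk = '0' then
            en_latched <= en;
          end if;
        end process;

        gclk <= clk and en_latched;
      end architecture;

    On FPGAs the same effect is often obtained more safely through the dedicated enable inputs of the global clock buffers rather than LUT-based gating, but the latch-and-AND structure above is the textbook form of the technique.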

  12. Design of an MR image processing module on an FPGA chip

    NASA Astrophysics Data System (ADS)

    Li, Limin; Wyrwicz, Alice M.

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
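
    The address generation idea referred to above can be pictured as follows: a 2D FFT is a pass of row FFTs followed by a pass of column FFTs, and instead of physically transposing the intermediate image, the second pass simply reads and writes the same buffer with a stride equal to the image width. A simplified VHDL sketch of such an address generator is given below; the real unit in the paper also has to handle further details (for example FFT sample ordering), and the sizes and port names here are assumptions:

      library ieee;
      use ieee.std_logic_1164.all;
      use ieee.numeric_std.all;

      entity agu_2dfft is
        generic (LOG2N : natural := 7);            -- e.g. a 128 x 128 image
        port (
          clk, rst : in  std_logic;
          col_pass : in  std_logic;                -- '0' = row-FFT pass, '1' = column-FFT pass
          advance  : in  std_logic;                -- step to the next sample
          addr     : out unsigned(2*LOG2N-1 downto 0)
        );
      end entity;

      architecture rtl of agu_2dfft is
        signal outer, inner : unsigned(LOG2N-1 downto 0) := (others => '0');
      begin
        -- outer counts lines (rows or columns); inner counts samples within a line
        process (clk)
        begin
          if rising_edge(clk) then
            if rst = '1' then
              outer <= (others => '0');
              inner <= (others => '0');
            elsif advance = '1' then
              if inner = to_unsigned(2**LOG2N - 1, LOG2N) then
                inner <= (others => '0');
                outer <= outer + 1;
              else
                inner <= inner + 1;
              end if;
            end if;
          end if;
        end process;

        -- row pass:    addr = outer * N + inner  (unit stride)
        -- column pass: addr = inner * N + outer  (stride N, no physical transpose)
        addr <= (outer & inner) when col_pass = '0' else (inner & outer);
      end architecture;

    Reading with a stride of N costs nothing extra in block RAM, which is consistent with the block RAM and clock-cycle savings the abstract reports.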

  13. Design of an MR image processing module on an FPGA chip

    PubMed Central

    Li, Limin; Wyrwicz, Alice M.

    2015-01-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. PMID:25909646

  14. Design of an MR image processing module on an FPGA chip.

    PubMed

    Li, Limin; Wyrwicz, Alice M

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.

  15. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph; Mortensen, Dale

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. An FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.

  16. Fault-Tolerant Sequencer Using FPGA-Based Logic Designs for Space Applications

    DTIC Science & Technology

    2013-12-01

    Comparison of FPGA switch technologies (from [25]) b. Flash-Based FPGAs: Flash-based FPGAs use a switch matrix formed of floating gate ... output may randomly oscillate between the two values. The behavior also may not be consistent, and a floating signal may cause the gate to produce an ... input causes a floating output in the associated gate, which appears as a floating input to the next gate. Unlike a physical circuit in the FPGA

  17. Design Architecture and Initial Results from an FPGA Based Digital Receiver for Multistatic Meteor Measurements

    NASA Astrophysics Data System (ADS)

    Palo, Scott; Vaudrin, Cody

    Defined by a minimal RF front-end followed by an analog-to-digital converter (ADC) and controlled by a reconfigurable logic device (FPGA), the digital receiver will replace conventional heterodyning analog receivers currently in use by the COBRA meteor radar. A basic hardware overview touches on the major digital receiver components, theory of operation and data handling strategies. We address concerns within the community regarding the implementation of digital receivers in small-scale scientific radars, and outline the numerous benefits with a focus on reconfigurability. From a remote sensing viewpoint, having complete visibility into a band of the EM spectrum allows an experiment designer to focus on parameter estimation rather than hardware limitations. Finally, we show some basic multistatic receiver configurations enabled through GPS time synchronization. Currently, the digital receiver is configured to facilitate range and radial velocity determination of meteors in the MLT region for use with the COBRA meteor radar. Initial measurements from data acquired at Platteville, Colorado and Tierra del Fuego in Argentina will be presented. We show an improvement in detection rates compared to conventional analog systems. Scientific justification for a digital receiver is clearly made by the presentation of RTI plots created using data acquired from the receiver. These plots reveal an interesting phenomenon concerning vacillating power structures in a select number of meteor trails.

  18. AES Cardless Automatic Teller Machine (ATM) Biometric Security System Design Using FPGA Implementation

    NASA Astrophysics Data System (ADS)

    Ahmad, Nabihah; Rifen, A. Aminurdin M.; Helmy Abd Wahab, Mohd

    2016-11-01

    An Automated Teller Machine (ATM) is an electronic banking outlet that allows bank customers to complete banking transactions without the aid of any bank official or teller. Several problems are associated with the use of ATM cards, such as card cloning, card damage, card expiry, card skimming, the cost of issuance and maintenance, and access to the customer's account by third parties. The aim of this project is to free the user from the card by replacing it with a biometric security system for accessing the bank account, using the Advanced Encryption Standard (AES) algorithm. The project is implemented on a Field Programmable Gate Array (FPGA) DE2-115 board with a Cyclone IV device, a fingerprint scanner, and a Multi-Touch Liquid Crystal Display (LCD) Second Edition (MTL2), using the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL). The project uses 128-bit AES; the implementation achieves a throughput of around 19.016 Gbps and utilizes around 520 slices. This design offers secure banking transactions with low area and high performance, and is well suited to space-restricted environments with small amounts of RAM or ROM where either encryption or decryption is performed.

  19. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  20. Design and Implementation of High Frequency Ultrasound Pulsed-Wave Doppler Using FPGA

    PubMed Central

    Hu, Chang-hong; Zhou, Qifa; Shung, K. Kirk

    2009-01-01

    The development of a field-programmable gate array (FPGA)-based pulsed-wave Doppler processing approach in the pure digital domain is reported in this paper. After the ultrasound signals are digitized, directional Doppler frequency shifts are obtained with a digital down-converter followed by a low-pass filter. A Doppler spectrum is then calculated using the complex fast Fourier transform core inside the FPGA. With this approach, a pulsed-wave Doppler implementation core with reconfigurable and real-time processing capability is achieved. PMID:18986909

  1. ELPSA as a Lesson Design Framework

    ERIC Educational Resources Information Center

    Lowrie, Tom; Patahuddin, Sitti Maesuri

    2015-01-01

    This paper offers a framework for a mathematics lesson design that is consistent with the way we learn about, and discover, most things in life. In addition, the framework provides a structure for identifying how mathematical concepts and understanding are acquired and developed. This framework is called ELPSA and represents five learning…

  2. Talking about Multimedia: A Layered Design Framework.

    ERIC Educational Resources Information Center

    Taylor, Josie; Sumner, Tamara; Law, Andrew

    1997-01-01

    Describes a layered analytical framework for discussing design and educational issues that can be shared by the many different stakeholders involved in educational multimedia design and deployment. Illustrates the framework using a detailed analysis of the Galapagos Pilot project of the Open University science faculty which examines the processes…

  3. Hardware and Software Design of FPGA-based PCIe Gen3 interface for APEnet+ network interconnect system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2015-12-01

    In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.

  4. The role of the asymptotic dynamics in the design of FPGA-based hardware implementations of gIF-type neural networks.

    PubMed

    Rostro-Gonzalez, Horacio; Cessac, Bruno; Girau, Bernard; Torres-Huitzil, Cesar

    2011-01-01

    This paper presents a numerical analysis of the role of asymptotic dynamics in the design of hardware-based implementations of the generalised integrate-and-fire (gIF) neuron models. These proposed implementations are based on extensions of the discrete-time spiking neuron model, which was introduced by Soula et al., and have been implemented on Field Programmable Gate Array (FPGA) devices using fixed-point arithmetic. Mathematical studies conducted by Cessac have evidenced the existence of three main regimes (neural death, periodic and chaotic regimes) in the activity of such neuron models. These activity regimes are characterised in hardware by considering a precision analysis in the design of an architecture for an FPGA-based implementation. The proposed approach, although based on gIF neuron models and FPGA hardware, can be extended to more complex neuron models as well as to different in silico implementations.
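
    To give a flavour of what such a fixed-point, discrete-time implementation looks like, the sketch below updates a single leaky integrate-and-fire style neuron once per clock: the membrane value is multiplied by a leak factor, the synaptic input is added, and a spike resets the state. The fixed-point format, the constants and the reset behaviour are illustrative assumptions, not the gIF parameterisation analysed in the paper:

      library ieee;
      use ieee.std_logic_1164.all;
      use ieee.numeric_std.all;

      entity gif_neuron is
        port (
          clk, rst : in  std_logic;
          i_syn    : in  signed(17 downto 0);   -- synaptic input, Q10 fixed point (assumed)
          spike    : out std_logic
        );
      end entity;

      architecture rtl of gif_neuron is
        constant GAMMA : signed(17 downto 0) := to_signed(1003, 18);  -- leak factor ~0.98 in Q10
        constant THETA : signed(17 downto 0) := to_signed(1024, 18);  -- threshold 1.0 in Q10
        signal   v     : signed(17 downto 0) := (others => '0');
      begin
        process (clk)
          variable v_leak : signed(35 downto 0);
        begin
          if rising_edge(clk) then
            if rst = '1' then
              v     <= (others => '0');
              spike <= '0';
            elsif v >= THETA then
              -- fire and reset the membrane state
              spike <= '1';
              v     <= i_syn;
            else
              spike  <= '0';
              v_leak := v * GAMMA;                              -- Q10 * Q10 = Q20
              v      <= resize(shift_right(v_leak, 10), 18) + i_syn;
            end if;
          end if;
        end process;
      end architecture;

    The precision analysis discussed in the paper then amounts to choosing the widths and fixed-point positions of the state, leak and input so that the hardware trajectory stays in the same dynamical regime (neural death, periodic or chaotic) as the reference model.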

  5. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.

  6. Architectural design for a low cost FPGA-based traffic signal detection system in vehicles

    NASA Astrophysics Data System (ADS)

    López, Ignacio; Salvador, Rubén; Alarcón, Jaime; Moreno, Félix

    2007-05-01

    In this paper we propose an architecture for an embedded traffic signal detection system. The development of Advanced Driver Assistance Systems (ADAS) is one of the major trends in automotive research nowadays. Examples of past and ongoing projects in the field are CHAMELEON ("Pre-Crash Application all around the vehicle", IST 1999-10108), PREVENT (Preventive and Active Safety Applications, FP6-507075, http://www.prevent-ip.org/) and AVRT in the US (Advanced Vision-Radar Threat Detection (AVRT): A Pre-Crash Detection and Active Safety System). A major interest can be observed in systems for real-time analysis of complex driving scenarios, evaluating risk and anticipating collisions. The system will use a low-cost CCD camera on the dashboard facing the road. The images will be processed by an Altera Cyclone family FPGA. The board performs median and Sobel filtering of the incoming frames at PAL rate, and analyzes them for several categories of signals. The result is conveyed to the driver. The scarce resources provided by the hardware require an architecture designed for optimal use. The system will use a combination of neural networks and an adapted blackboard architecture. Several neural networks will be used in sequence for image analysis, by reconfiguring a single, generic hardware neural network in the FPGA. This generic network is optimized for speed, in order to allow several executions within the frame rate. The sequence will follow the execution cycle of the blackboard architecture. The global blackboard architecture being developed and the hardware architecture for the generic, reconfigurable FPGA perceptron will be explained in this paper. The project is still at an early stage. However, some hardware implementation results are already available and will be offered in the paper.
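
    The generic, reconfigurable hardware perceptron mentioned above boils down to a time-multiplexed multiply-accumulate followed by an activation. A minimal VHDL sketch of one such neuron, fed one (input, weight) pair per clock, is shown below; the bit widths, port names and hard-threshold activation are illustrative assumptions, not the architecture of the cited project:

      library ieee;
      use ieee.std_logic_1164.all;
      use ieee.numeric_std.all;

      entity perceptron_mac is
        port (
          clk, rst : in  std_logic;
          load     : in  std_logic;             -- start a new neuron evaluation
          valid    : in  std_logic;             -- one (input, weight) pair per cycle
          x_in     : in  signed(8 downto 0);    -- input / pixel value
          w_in     : in  signed(8 downto 0);    -- weight fetched from a reloadable memory
          last     : in  std_logic;             -- marks the final pair of this neuron
          fire     : out std_logic              -- hard-threshold activation output
        );
      end entity;

      architecture rtl of perceptron_mac is
        signal acc : signed(31 downto 0) := (others => '0');
      begin
        process (clk)
          variable sum : signed(31 downto 0);
        begin
          if rising_edge(clk) then
            if rst = '1' or load = '1' then
              acc  <= (others => '0');
              fire <= '0';
            elsif valid = '1' then
              sum := acc + resize(x_in * w_in, 32);  -- one synapse per clock
              acc <= sum;
              if last = '1' then
                -- step activation; a sigmoid lookup table could replace this compare
                if sum > 0 then
                  fire <= '1';
                else
                  fire <= '0';
                end if;
              end if;
            end if;
          end if;
        end process;
      end architecture;

    Reconfiguring the network between analysis steps then amounts to reloading the weight memory and the neuron schedule, which is much cheaper than reprogramming the FPGA fabric itself.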

  7. Intelligent Frameworks for Instructional Design.

    ERIC Educational Resources Information Center

    Spector, J. Michael; And Others

    Many researchers are attempting to develop automated instructional development systems to guide subject matter experts through the lengthy and difficult process of courseware development. Because the targeted users often lack instructional design expertise, a great deal of emphasis has been placed on the use of artificial intelligence (AI) to…

  8. Intelligent Frameworks for Instructional Design.

    ERIC Educational Resources Information Center

    Spector, J. Michael; And Others

    1992-01-01

    Presents a taxonomy describing various uses of artificial intelligence techniques in automated instructional development systems. Instructional systems development is discussed in relation to the design of computer-based instructional courseware; two systems being developed at the Air Force Armstrong Laboratory are reviewed; and further research…

  9. A Parameterized Design Framework for Hardware Implementation of Particle Filters

    DTIC Science & Technology

    2008-03-01

    explore different design options for implementing two different particle filtering applications on field-programmable gate arrays (FPGAs), and we present... associated results on trade-offs between area (FPGA resource requirements) and execution speed. Index Terms: Field programmable gate arrays, Parallel... programmable gate arrays (FPGAs) is proposed to enable comprehensive design space exploration of the whole system with attention to the interaction

  10. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the cocktail party problem prevail across many fields, creating a need for blind source separation (BSS). These fields include array processing, communications, medical and speech signal processing, wireless communication, audio, acoustics and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms, which are useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals in software, and of their implementation in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm was the one requiring the least complexity and the fewest resources while effectively separating the mixed sources; this turned out to be the EASI algorithm, which was then implemented on FPGA hardware to analyze its performance in real time.

  11. Initial Multidisciplinary Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Ozoroski, L. P.; Geiselhart, K. A.; Padula, S. L.; Li, W.; Olson, E. D.; Campbell, R. L.; Shields, E. W.; Berton, J. J.; Gray, J. S.; Jones, S. M.; Naiman, C. G.; Seidel, J. A.; Moore, K. T.; Naylor, B. A.; Townsend, S.

    2010-01-01

    Within the Supersonics (SUP) Project of the Fundamental Aeronautics Program (FAP), an initial multidisciplinary design & analysis framework has been developed. A set of low- and intermediate-fidelity discipline design and analysis codes were integrated within a multidisciplinary design and analysis framework and demonstrated on two challenging test cases. The first test case demonstrates an initial capability to design for low boom and performance. The second test case demonstrates rapid assessment of a well-characterized design. The current system has been shown to greatly increase the design and analysis speed and capability, and many future areas for development were identified. This work has established a state-of-the-art capability for immediate use by supersonic concept designers and systems analysts at NASA, while also providing a strong base to build upon for future releases as more multifidelity capabilities are developed and integrated.

  12. Design of an oximeter based on LED-LED configuration and FPGA technology.

    PubMed

    Stojanovic, Radovan; Karadaglic, Dejan

    2013-01-04

    A fully digital photoplethysmographic (PPG) sensor and actuator has been developed. The sensing circuit uses one Light Emitting Diode (LED) for emitting light into human tissue and one LED for detecting the light reflected from the tissue. A Field Programmable Gate Array (FPGA) is used to control the LEDs and determine the PPG signal and blood oxygen saturation (SpO2). Configurations with two LEDs and four LEDs are developed for measuring the PPG signal and SpO2, and an N-LED configuration is proposed for multichannel SpO2 measurements. The approach results in better spectral sensitivity, increased and adjustable resolution, reduced noise, small size, low cost and low power consumption.

  13. Structural Analysis in a Conceptual Design Framework

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Robinson, Jay H.; Eldred, Lloyd B.

    2012-01-01

    Supersonic aircraft designers must shape the outer mold line of the aircraft to improve multiple objectives, such as mission performance, cruise efficiency, and sonic-boom signatures. Conceptual designers have demonstrated an ability to assess these objectives for a large number of candidate designs. Other critical objectives and constraints, such as weight, fuel volume, aeroelastic effects, and structural soundness, are more difficult to address during the conceptual design process. The present research adds both static structural analysis and sizing to an existing conceptual design framework. The ultimate goal is to include structural analysis in the multidisciplinary optimization of a supersonic aircraft. Progress towards that goal is discussed and demonstrated.

  14. FPGA Vision Data Architecture

    NASA Technical Reports Server (NTRS)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  15. The effect of structural design parameters on FPGA-based feed-forward space-time trellis coding-orthogonal frequency division multiplexing channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-08-01

    Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer decide on the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing the best implementation under these conditions. The effect of the number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.

  16. FPGA-based design and implementation of arterial pulse wave generator using piecewise Gaussian-cosine fitting.

    PubMed

    Wang, Lu; Xu, Lisheng; Zhao, Dazhe; Yao, Yang; Song, Dan

    2015-04-01

    Because arterial pulse waves contain vital information related to the condition of the cardiovascular system, considerable attention has been devoted to the study of pulse waves in recent years. Accurate acquisition is essential to investigate arterial pulse waves. However, at the stage of developing equipment for acquiring and analyzing arterial pulse waves, specific pulse signals may be unavailable for debugging and evaluating the system under development. To produce test signals that reflect specific physiological conditions, in this paper, an arterial pulse wave generator has been designed and implemented using a field programmable gate array (FPGA), which can produce the desired pulse waves according to the feature points set by users. To reconstruct a periodic pulse wave from the given feature points, a method known as piecewise Gaussian-cosine fitting is also proposed in this paper. Using a test database that contains four types of typical pulse waves with each type containing 25 pulse wave signals, the maximum residual error of each sampling point of the fitted pulse wave in comparison with the real pulse wave is within 8%. In addition, the function for adding baseline drift and three types of noises is integrated into the developed system because the baseline occasionally wanders, and noise needs to be added for testing the performance of the designed circuits and the analysis algorithms. The proposed arterial pulse wave generator can be considered as a special signal generator with a simple structure, low cost and compact size, which can also provide flexible solutions for many other related research purposes.

  17. Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale.

    PubMed

    Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason

    2016-10-01

    With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft's FPGA deployment in its Bing search engine and Intel's $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems, like Apache Spark and Hadoop, to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster.

  18. Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian

    2015-12-01

    As countries pay increasing attention to sea transportation and trade safety, the requirements on the efficiency and accuracy of moving ship tracking are becoming higher. Therefore, an FPGA-based onboard design for moving ship tracking is proposed, which uses the Adaptive Inter Frame Difference (AIFD) method to track ships of different speeds. Because the frame difference (FD) method is simple but computationally heavy, it is well suited to parallel implementation on an FPGA. However, the frame intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship appears very small (depicted by only dozens of pixels) and moves slowly; with invariant FIs the tracking accuracy of FD is unsatisfactory and the calculation is highly redundant. We therefore adapt FD by adaptively extracting key frames for moving ship tracking. A Xilinx Kintex-7 series FPGA development board is used for simulation. The experiments show that, compared with the traditional FD method, the proposed one achieves higher accuracy in moving ship tracking and meets the requirement of real-time tracking at high image resolution.
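
    The basic operation being adapted here is the per-pixel frame difference test: subtract the co-located pixel of the previous key frame from the current one and compare the absolute difference against a threshold. A minimal VHDL sketch of that test is given below (the threshold value, bit widths and port names are assumptions, not the parameters used in the paper):

      library ieee;
      use ieee.std_logic_1164.all;
      use ieee.numeric_std.all;

      entity frame_diff is
        generic (THRESH : natural := 20);         -- motion threshold (assumed value)
        port (
          clk       : in  std_logic;
          pix_valid : in  std_logic;
          pix_cur   : in  unsigned(7 downto 0);   -- pixel from the current key frame
          pix_prev  : in  unsigned(7 downto 0);   -- co-located pixel from the previous key frame
          moving    : out std_logic               -- '1' when the pixel is flagged as motion
        );
      end entity;

      architecture rtl of frame_diff is
      begin
        process (clk)
          variable diff : integer range -255 to 255;
        begin
          if rising_edge(clk) then
            if pix_valid = '1' then
              diff := to_integer(pix_cur) - to_integer(pix_prev);
              if abs(diff) > THRESH then
                moving <= '1';
              else
                moving <= '0';
              end if;
            end if;
          end if;
        end process;
      end architecture;

    The adaptive part of the method then lies in choosing which frames count as key frames (the effective frame interval), which is what lets slow-moving, few-pixel ship targets accumulate a detectable difference.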

  19. Effect of framework design on crown failure.

    PubMed

    Bonfante, Estevam A; da Silva, Nelson R F A; Coelho, Paulo G; Bayardo-González, Daniel E; Thompson, Van P; Bonfante, Gerson

    2009-04-01

    This study evaluated the effect of core-design modification on the characteristic strength and failure modes of glass-infiltrated alumina (In-Ceram) (ICA) compared with porcelain fused to metal (PFM). Premolar crowns of a standard design (PFMs and ICAs) or with a modified framework design (PFMm and ICAm) were fabricated, cemented on dies, and loaded until failure. The crowns were loaded at 0.5 mm/min using a 6.25 mm tungsten-carbide ball at the central fossa. Fracture load values were recorded and fracture analysis of representative samples were evaluated using scanning electron microscopy. Probability Weibull curves with two-sided 90% confidence limits were calculated for each group and a contour plot of the characteristic strength was obtained. Design modification showed an increase in the characteristic strength of the PFMm and ICAm groups, with PFM groups showing higher characteristic strength than ICA groups. The PFMm group showed the highest characteristic strength among all groups. Fracture modes of PFMs and of PFMm frequently reached the core interface at the lingual cusp, whereas ICA exhibited bulk fracture through the alumina core. Core-design modification significantly improved the characteristic strength for PFM and for ICA. The PFM groups demonstrated higher characteristic strength than both ICA groups combined.

  20. Designing Educational Software with Students through Collaborative Design Games: The We!Design&Play Framework

    ERIC Educational Resources Information Center

    Triantafyllakos, George; Palaigeorgiou, George; Tsoukalas, Ioannis A.

    2011-01-01

    In this paper, we present a framework for the development of collaborative design games that can be employed in participatory design sessions with students for the design of educational applications. The framework is inspired by idea generation theory and the design games literature, and guides the development of board games which, through the use…

  1. Design exploration and verification platform, based on high-level modeling and FPGA prototyping, for fast and flexible digital communication in physics experiments

    NASA Astrophysics Data System (ADS)

    Magazzù, G.; Borgese, G.; Costantino, N.; Fanucci, L.; Incandela, J.; Saponara, S.

    2013-02-01

    In many research fields such as high energy physics (HEP), astrophysics, nuclear medicine or space engineering with harsh operating conditions, the use of fast and flexible digital communication protocols is becoming more and more important. The possibility to have a smart and tested top-down design flow for the design of a new protocol for control/readout of front-end electronics is very useful. To this aim, and to reduce development time, costs and risks, this paper describes an innovative design/verification flow applied, as an example case study, to a new communication protocol called FF-LYNX. After the description of the main FF-LYNX features, the paper presents: the definition of a parametric SystemC-based Integrated Simulation Environment (ISE) for high-level protocol definition and validation; the setup of figures of merit to drive the design space exploration; the use of ISE for early analysis of the achievable performances when adopting the new communication protocol and its interfaces for a new (or upgraded) physics experiment; the design of VHDL IP cores for the TX and RX protocol interfaces; their implementation on an FPGA-based emulator for functional verification; and finally the modification of the FPGA-based emulator for testing the ASIC chipset which implements the rad-tolerant protocol interfaces. For every step, significant results will be shown to underline the usefulness of this design and verification approach, which can be applied to any new digital protocol development for smart detectors in physics experiments.

  2. FPGA design of box-constrained DCD-based detector for large-scale MIMO systems

    NASA Astrophysics Data System (ADS)

    Quan, Zhi; Zakharov, Yuriy

    2016-07-01

    This paper proposes an improved architecture of a low-complexity box-constrained multiple-input multiple-output (MIMO) detector which is based on the dichotomous coordinate descent (DCD) algorithm. This architecture allows a simple field-programmable gate-array implementation of the detector and explores the parallel implementation to reduce the number of clock cycles required in the design. We investigate the proposed design and compare its detection performance, hardware resources, and convergence speed with that of known designs. It is shown that the proposed design provides improvement in the detection performance compared to the minimum mean square error (MMSE) detector. The numerical results also show that the proposed architecture requires as few as 184, 210, and 223 slices for 16 × 16, 64 × 64, and 128 × 128 MIMO systems, respectively, which is significantly less than that required by known designs of the MMSE detector. By comparing the serial and parallel implementations of the box-constrained detector, we show that the parallel implementation requires 15% fewer clock cycles.

  3. FPGA implemented testbed in 8-by-8 and 2-by-2 OFDM-MIMO channel estimation and design of baseband transceiver.

    PubMed

    Ramesh, S; Seshasayanan, R

    2016-01-01

    In this study, a baseband OFDM-MIMO system with channel estimation and timing synchronization is designed and implemented using FPGA technology. The system is prototyped according to the IEEE 802.11a standard, with signals transmitted and received over a bandwidth of 20 MHz. With QPSK modulation, the system can achieve a throughput of 24 Mbps. Furthermore, the LS algorithm is implemented and the estimation of a frequency-selective fading channel is demonstrated. For coarse timing estimation, the MNC scheme is examined and implemented. First of all, the whole system is modeled in MATLAB and a floating-point model is established. A fixed-point model is then created with the help of Simulink and Xilinx's System Generator for DSP. The system is subsequently synthesized and implemented within Xilinx's ISE tools and targeted to a Xilinx Virtex 5 board. In addition, a hardware co-simulation is devised to reduce the processing time when computing the BER of the fixed-point model. This work is a first step towards further investigation and design of novel channel estimation techniques for applications in fourth-generation (4G) mobile communication systems.
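
    For reference, the LS channel estimate referred to above divides the received pilot symbol by the known transmitted pilot on each subcarrier; in LaTeX notation (with Y(k) the received pilot, X(k) the transmitted pilot and N_sc the number of subcarriers, symbols assumed here):

      \hat{H}_{\mathrm{LS}}(k) = \frac{Y(k)}{X(k)}, \qquad k = 0, 1, \dots, N_{\mathrm{sc}} - 1

    Its simplicity (one complex division per subcarrier) is what makes it attractive for a fixed-point FPGA implementation, at the cost of higher noise sensitivity than MMSE-type estimators.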

  4. FPGA Material for the Undergraduate School

    NASA Astrophysics Data System (ADS)

    Yawata, Kazushi

    A set of digital electronics educational material introducing the FPGA is developed, and a syllabus is designed for the physics laboratory class in the undergraduate school. The material is developed with the Spartan-3 (Xilinx) device. The syllabus covers the design procedure using ISE with Verilog HDL, a discussion of how the FPGA realizes circuits, based on the generated RTL and logic-level circuit diagrams, and observations with an oscilloscope.

  5. FPGA implementation of image enhancement techniques

    NASA Astrophysics Data System (ADS)

    Kumar, Karan; Jain, Aditya; Srivastava, Atul Kumar

    2009-06-01

    The objective of this paper is the design, modeling, simulation and synthesis of four image enhancement techniques on an FPGA. Image enhancement algorithms can be classified into point processing techniques, in which the operation is performed at the pixel level, and spatial filtering techniques, in which the operation is performed within a neighborhood of a pixel. The algorithms for all the techniques are studied and hardware circuits are realized for them. The hardware logic is then modeled in Matlab Simulink using the Xilinx System Generator blockset and synthesized onto a Virtex-4 xc4vsx35-10ff668 FPGA chip. Using the hardware co-simulation feature of the FPGA kit, the developed algorithms are validated.
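
    As an illustration of the point processing class mentioned above, the simplest such operation is the image negative, s = (L - 1) - r, applied independently to every pixel as it streams through the FPGA. A minimal VHDL sketch follows (8-bit pixels assumed; this is a generic example, not one of the four techniques evaluated in the paper):

      library ieee;
      use ieee.std_logic_1164.all;
      use ieee.numeric_std.all;

      entity point_negate is
        port (
          clk       : in  std_logic;
          valid_in  : in  std_logic;
          pix_in    : in  unsigned(7 downto 0);
          pix_out   : out unsigned(7 downto 0);
          valid_out : out std_logic
        );
      end entity;

      architecture rtl of point_negate is
      begin
        process (clk)
        begin
          if rising_edge(clk) then
            valid_out <= valid_in;
            if valid_in = '1' then
              pix_out <= 255 - pix_in;   -- image negative: s = (L-1) - r with L = 256
            end if;
          end if;
        end process;
      end architecture;

    Spatial filtering techniques differ only in that they need line buffers to assemble the pixel neighborhood before the per-pixel arithmetic is applied.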

  6. Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator

    SciTech Connect

    Akerstedt, Henrik; Muschter, Steffen; Drake, Gary; Anderson, Kelby; Bohm, Christian; Oreglia, Mark; Tang, Fukun

    2015-10-01

    The Tile Calorimeter at ATLAS [1] is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector. However, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability the upgraded system will be highly redundant. Here the ASICs will be replaced with Kintex-7 FPGAs from Xilinx. This, in addition to the use of multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Due to the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent. The circuitry and firmware must therefore be designed for sufficiently high reliability using redundancy and radiation-tolerant components. Within a year, a hybrid demonstrator including the new readout system will be installed in one slice of the ATLAS Tile Calorimeter. This will allow the proposed upgrade to be thoroughly evaluated well before the planned 2023 deployment in all slices, especially with regard to long-term reliability. Different firmware strategies, together with their integration in the demonstrator, are presented in the context of high-reliability protection against hardware malfunction and radiation-induced errors.

  7. Research on the design of surface acquisition system of active lap based on FPGA and FX2LP

    NASA Astrophysics Data System (ADS)

    Zhao, Hongshen; Li, Xiaojin; Fan, Bin; Zeng, Zhige

    2014-08-01

    In order to study the dynamic surface shape changes of an active lap during processing, this paper introduces a dynamic surface shape acquisition system for the active lap based on an FPGA and USB communication. The system consists of a high-precision micro-displacement sensor array, an acquisition circuit board and a PC; the acquisition circuit board comprises six FPGA-based sub-boards and a hub-board based on an FPGA and USB communication. Each sub-board is responsible for data acquisition from a number of independent sensor channels. The hub-board creates encoder-simulation signals that provide the active lap deformation control system with location information, sends synchronization information to latch the sensor data in all of the sub-boards at the same instant, addresses the sub-boards to gather the sensor data from each sub-board one by one, and transmits all the sensor data together with the location information to the host computer via the USB chip FX2LP. Experimental results show that the system is capable of determining the location and speed of the active lap, and that surface-shape control and dynamic surface data acquisition at a given location during processing are implemented.

  8. An OER Architecture Framework: Needs and Design

    ERIC Educational Resources Information Center

    Khanna, Pankaj; Basak, P. C.

    2013-01-01

    This paper describes an open educational resources (OER) architecture framework that would bring significant improvements in a well-structured and systematic way to the educational practices of distance education institutions of India. The OER architecture framework is articulated with six dimensions: pedagogical, technological, managerial,…

  9. Decomposition of MATLAB script for FPGA implementation of real time simulation algorithms for LLRF system in European XFEL

    NASA Astrophysics Data System (ADS)

    Bujnowski, K.; Pucyk, P.; Pozniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The European XFEL project uses the LLRF system for stabilization of the vector sum of the RF field in 32 superconducting cavities. Dedicated, high-performance photonics, electronics and software were built. To provide high system availability, an appropriate test environment and diagnostics were designed. A real-time simulation subsystem was designed, based on dedicated FPGA electronics and robust simulation models implemented in VHDL. The paper presents an architecture of the system framework that allows easy and flexible conversion of MATLAB language structures directly into an FPGA-implementable grid of simple, parameterized DSP processors. The decomposition of the MATLAB grammar is described, as well as the optimization process and FPGA implementation issues.

  10. RSA Power Analysis Obfuscation: A Dynamic FPGA Architecture

    DTIC Science & Technology

    2012-03-01

    research provides a VHDL-coded dynamic architecture for synthesis on a Xilinx Virtex-5 FPGA. This architecture provides two-way communication... Component Under Test (CUT) is the dynamic RSA implementation. This dynamic hardware is synthesized from VHDL onto a Xilinx Virtex-5 FPGA. The built-in... The hardware platform used for this research is the Xilinx Virtex-5 FX FPGA. VHDL code is synthesized using the Xilinx design suite and downloaded

  11. Novel integrated design framework for radio frequency quadrupoles

    NASA Astrophysics Data System (ADS)

    Jolly, Simon; Easton, Matthew; Lawrie, Scott; Letchford, Alan; Pozimski, Jürgen; Savage, Peter

    2014-01-01

    A novel design framework for Radio Frequency Quadrupoles (RFQs), developed as part of the design of the FETS RFQ, is presented. This framework integrates several previously disparate steps in the design of RFQs, including the beam dynamics design, mechanical design, electromagnetic, thermal and mechanical modelling and beam dynamics simulations. Each stage of the design process is described in detail, including the various software options and reasons for the final software suite selected. Results are given for each of these steps, describing how each stage affects the overall design process, with an emphasis on the resulting design choices for the FETS RFQ.

  12. Building a Framework for Engineering Design Experiences in High School

    ERIC Educational Resources Information Center

    Denson, Cameron D.; Lammi, Matthew

    2014-01-01

    In this article, Denson and Lammi put forth a conceptual framework that will help promote the successful infusion of engineering design experiences into high school settings. When considering a conceptual framework of engineering design in high school settings, it is important to consider the complex issue at hand. For the purposes of this…

  13. A New Mathematical Framework for Design Under Uncertainty

    DTIC Science & Technology

    2016-05-05

    DARPA HR0011-14-1-0060 (Final Report): A new mathematical framework for design under uncertainty (9/8/14-12/7/15). PI: George Karniadakis, Brown... mathematically rigorous methods to combine these disparate information sources into a viable framework for the purpose of design and optimization. The

  14. A Design Framework for Online Teacher Professional Development Communities

    ERIC Educational Resources Information Center

    Liu, Katrina Yan

    2012-01-01

    This paper provides a design framework for building online teacher professional development communities for preservice and inservice teachers. The framework is based on a comprehensive literature review on the latest technology and epistemology of online community and teacher professional development, comprising four major design factors and three…

  15. A FPGA Implementation of JPEG Baseline Encoder for Wearable Devices.

    PubMed

    Li, Yuecheng; Jia, Wenyan; Luan, Bo; Mao, Zhi-Hong; Zhang, Hong; Sun, Mingui

    2015-04-01

    In this paper, an efficient field-programmable gate array (FPGA) implementation of the JPEG baseline image compression encoder is presented for wearable devices in health and wellness applications. In order to gain flexibility in developing FPGA-specific software and to balance real-time performance against resource utilization, a High-Level Synthesis (HLS) tool is utilized in our system design. An optimized dataflow configuration with a padding scheme simplifies the timing control for data transfer. Our experiments with a system-on-chip multi-sensor system have verified our FPGA implementation with respect to real-time performance, computational efficiency, and FPGA resource utilization.

  16. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    PubMed

    Otero-Muras, Irene; Banga, Julio R

    2017-04-12

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.

  17. Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale

    PubMed Central

    Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason

    2017-01-01

    With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft’s FPGA deployment in its Bing search engine and Intel’s $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster. PMID:28317049

  18. Interior Design Education within a Human Ecological Framework

    ERIC Educational Resources Information Center

    Kaup, Migette L.; Anderson, Barbara G.; Honey, Peggy

    2007-01-01

    An education based in human ecology can greatly benefit interior designers as they work to understand and improve the human condition. Design programs housed in colleges focusing on human ecology can improve the interior design profession by taking advantage of their home base and emphasizing the human ecological framework in the design curricula.…

  19. A framework for robust flight control design using constrained optimization

    NASA Technical Reports Server (NTRS)

    Palazoglu, A.; Yousefpor, M.; Hess, R. A.

    1992-01-01

    An analytical framework is described for the design of feedback control systems to meet specified performance criteria in the presence of structured and unstructured uncertainty. Attention is focused upon the linear time invariant, single-input, single-output problem for the purposes of exposition. The framework provides for control of the degree of the stabilizing compensator or controller.

  20. Virtual Reality Hypermedia Design Frameworks for Science Instruction.

    ERIC Educational Resources Information Center

    Maule, R. William; Oh, Byron; Check, Rosa

    This paper reports on a study that conceptualizes a research framework to aid software design and development for virtual reality (VR) computer applications for instruction in the sciences. The framework provides methodologies for the processing, collection, examination, classification, and presentation of multimedia information within hyperlinked…

  1. FPGA development for high altitude subsonic parachute testing

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.; Gromov, Konstantin G.; Konefat, Edward H.

    2005-01-01

    This paper describes a rapid, top down requirements-driven design of a Field Programmable Gate Array (FPGA) used in an Earth qualification test program for a new Mars subsonic parachute. The FPGA is used to process and control storage of telemetry data from multiple sensors throughout launch, ascent, deployment and descent phases of the subsonic parachute test.

  2. FPGA development for high altitude subsonic parachute testing

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.; Konefat, Edward H.; Gromovt, Konstantin

    2005-01-01

    This paper describes a rapid, top down requirements-driven design of an FPGA used in an Earth qualification test program for a new Mars subsonic parachute. The FPGA is used to process and store data from multiple sensors at multiple rates during launch, ascent, deployment and descent phases of the subsonic parachute test.

  3. A Design Framework for Syllabus Generator

    ERIC Educational Resources Information Center

    Abdous, M'hammed; He, Wu

    2008-01-01

    A well-designed syllabus provides students with a roadmap for an engaging and successful learning experience, whereas a poorly designed syllabus impedes communication between faculty and students, increases student anxiety and potential complaints, and reduces overall teaching effectiveness. In an effort to facilitate, streamline, and improve…

  4. ICE: A Scalable, Low-Cost FPGA-Based Telescope Signal Processing and Networking System

    NASA Astrophysics Data System (ADS)

    Bandura, K.; Bender, A. N.; Cliche, J. F.; de Haan, T.; Dobbs, M. A.; Gilbert, A. J.; Griffin, S.; Hsyu, G.; Ittah, D.; Parra, J. Mena; Montgomery, J.; Pinsonneault-Marotte, T.; Siegel, S.; Smecher, G.; Tang, Q. Y.; Vanderlinde, K.; Whitehorn, N.

    We present an overview of the ‘ICE’ hardware and software framework that implements large arrays of interconnected field-programmable gate array (FPGA)-based data acquisition, signal processing and networking nodes economically. The system was conceived for application to radio, millimeter and sub-millimeter telescope readout systems that have requirements beyond typical off-the-shelf processing systems, such as careful control of interference signals produced by the digital electronics, and clocking of all elements in the system from a single precise observatory-derived oscillator. A new generation of telescopes operating at these frequency bands and designed with a vastly increased emphasis on digital signal processing to support their detector multiplexing technology or high-bandwidth correlators — data rates exceeding a terabyte per second — are becoming common. The ICE system is built around a custom FPGA motherboard that makes use of an Xilinx Kintex-7 FPGA and ARM-based co-processor. The system is specialized for specific applications through software, firmware and custom mezzanine daughter boards that interface to the FPGA through the industry-standard FPGA mezzanine card (FMC) specifications. For high density applications, the motherboards are packaged in 16-slot crates with ICE backplanes that implement a low-cost passive full-mesh network between the motherboards in a crate, allow high bandwidth interconnection between crates and enable data offload to a computer cluster. A Python-based control software library automatically detects and operates the hardware in the array. Examples of specific telescope applications of the ICE framework are presented, namely the frequency-multiplexed bolometer readout systems used for the South Pole Telescope (SPT) and Simons Array and the digitizer, F-engine, and networking engine for the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX

  5. A design framework for exploratory geovisualization in epidemiology

    PubMed Central

    Robinson, Anthony C.

    2009-01-01

    This paper presents a design framework for geographic visualization based on iterative evaluations of a toolkit designed to support cancer epidemiology. The Exploratory Spatio-Temporal Analysis Toolkit (ESTAT), is intended to support visual exploration through multivariate health data. Its purpose is to provide epidemiologists with the ability to generate new hypotheses or further refine those they may already have. Through an iterative user-centered design process, ESTAT has been evaluated by epidemiologists at the National Cancer Institute (NCI). Results of these evaluations are discussed, and a design framework based on evaluation evidence is presented. The framework provides specific recommendations and considerations for the design and development of a geovisualization toolkit for epidemiology. Its basic structure provides a model for future design and evaluation efforts in information visualization. PMID:20390052

  6. A Framework for the Design of Service Systems

    NASA Astrophysics Data System (ADS)

    Tan, Yao-Hua; Hofman, Wout; Gordijn, Jaap; Hulstijn, Joris

    We propose a framework for the design and implementation of service systems, especially to design controls for long-term sustainable value co-creation. The framework is based on the software support tool e3-control. To illustrate the framework we use a large-scale case study, the Beer Living Lab, for simplification of customs procedures in international trade. The BeerLL shows how value co-creation can be achieved by reduction of administrative burden in international beer export due to electronic customs. Participants in the BeerLL are Heineken, IBM and Dutch Tax & Customs.

  7. Public Key FPGA Software

    SciTech Connect

    Hymel, Ross

    2013-07-25

    The Public Key (PK) FPGA software performs asymmetric authentication using the 163-bit Elliptic Curve Digital Signature Algorithm (ECDSA) on an embedded FPGA platform. A digital signature is created on user-supplied data, and communication with a host system is performed via a Serial Peripheral Interface (SPI) bus. The software includes all components necessary for signing, including a custom random number generator for key creation and SHA-256 for data hashing.
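
    The signing flow described above (hash the user-supplied data, then produce an ECDSA signature) can be sketched in Python over a tiny textbook curve. This is a toy illustration only: the curve below (y^2 = x^3 + 2x + 2 over GF(17), group order 19) is nowhere near secure and is not the 163-bit binary curve used by the actual software; only the SHA-256 hashing step mirrors the component named in the record:

        import hashlib, random

        # Toy short-Weierstrass curve y^2 = x^3 + a*x + b over GF(p);
        # base point G has prime order n (textbook example, NOT secure).
        p, a, b = 17, 2, 2
        G, n = (5, 1), 19

        def point_add(P, Q):
            """Add two curve points; None represents the point at infinity."""
            if P is None:
                return Q
            if Q is None:
                return P
            (x1, y1), (x2, y2) = P, Q
            if x1 == x2 and (y1 + y2) % p == 0:
                return None                                   # P + (-P) = infinity
            if P == Q:
                lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p
            else:
                lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
            x3 = (lam * lam - x1 - x2) % p
            return (x3, (lam * (x1 - x3) - y1) % p)

        def scalar_mult(k, P):
            """Double-and-add computation of k*P."""
            R = None
            while k:
                if k & 1:
                    R = point_add(R, P)
                P = point_add(P, P)
                k >>= 1
            return R

        def hash_to_int(msg):
            return int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n

        def sign(d, msg):
            """Produce an ECDSA signature (r, s) of msg under private key d."""
            h = hash_to_int(msg)
            while True:
                k = random.randrange(1, n)                    # fresh per-signature nonce
                r = scalar_mult(k, G)[0] % n
                s = pow(k, n - 2, n) * (h + r * d) % n
                if r and s:
                    return r, s

        def verify(Q, msg, sig):
            """Check an (r, s) signature against public key Q = d*G."""
            r, s = sig
            h, w = hash_to_int(msg), pow(s, n - 2, n)
            P = point_add(scalar_mult(h * w % n, G), scalar_mult(r * w % n, Q))
            return P is not None and P[0] % n == r

        d = 3                                                 # toy private key
        Q = scalar_mult(d, G)                                 # corresponding public key
        sig = sign(d, b"user-supplied data")
        print(sig, verify(Q, b"user-supplied data", sig))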

  8. Achieving Equivalence: A Transnational Curriculum Design Framework

    ERIC Educational Resources Information Center

    Clarke, Angela; Johal, Terry; Sharp, Kristen; Quinn, Shayna

    2016-01-01

    Transnational education is now essential to university international development strategies. As a result, tertiary educators are expected to engage with the complexities of diverse cultural contexts, different delivery modes, and mixed student cohorts to design quality learning experiences for all. To support this transition we developed a…

  9. Towards a Framework for Professional Curriculum Design

    ERIC Educational Resources Information Center

    Winch, Christopher

    2015-01-01

    Recent reviews of vocational qualifications in England have noted problems with their restricted nature. However, the underlying issue of how to conceptualise professional agency in curriculum design has not been properly addressed, either by the Richard or the Whitehead reviews. Drawing on comparative work in England and Europe it is argued that…

  10. Design and implementation of a multiband digital filter using FPGA to extract the ECG signal in the presence of different interference signals.

    PubMed

    Aboutabikh, Kamal; Aboukerdah, Nader

    2015-07-01

    In this paper, we propose a practical way to synthesize and filter an ECG signal in the presence of four types of interference signals: (1) those arising from power networks with a fundamental frequency of 50 Hz, (2) those arising from respiration, having a frequency range from 0.05 to 0.5 Hz, (3) muscle signals with a frequency of 25 Hz, and (4) white noise present within the ECG signal band. This was done by implementing a multiband digital filter (seven bands) of type FIR Multiband Least Squares using a digital programmable device (Cyclone II EP2C70F896C6 FPGA, Altera), which was placed on an education and development board (DE2-70, Terasic). This filter was designed using the VHDL language in the Quartus II 9.1 design environment. The proposed method depends on Direct Digital Frequency Synthesizers (DDFS) designed to synthesize the ECG signal and various interference signals. So that the synthetic ECG would be closer to actual ECG signals after filtering, we designed a single multiband digital filter instead of using three separate digital filters (LPF, HPF, BSF); thus all interference signals were removed with a single digital filter. The multiband digital filter results were studied using a digital oscilloscope to characterize input and output signals in the presence of differing sinusoidal interference signals and white noise.
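
    The same kind of multiband least-squares FIR design can be prototyped in software before the coefficients are ported to the FPGA. A hedged Python sketch using scipy.signal.firls with illustrative band edges; the sampling rate, tap count and exact band layout below are assumptions, not the values used in the paper:

        import numpy as np
        from scipy.signal import firls, freqz

        fs = 500.0        # assumed sampling rate in Hz
        numtaps = 501     # odd tap count required by the least-squares design

        # Band edges (Hz) and desired gains: pass the ECG band while suppressing
        # respiration drift (<0.5 Hz), 25 Hz muscle artefact and 50 Hz mains.
        bands   = [0.0, 0.5,  0.7, 23.0,  24.0, 26.0,  27.0, 48.0,  49.0, 51.0,  52.0, fs / 2]
        desired = [0.0, 0.0,  1.0, 1.0,   0.0,  0.0,   1.0,  1.0,   0.0,  0.0,   0.0,  0.0]

        taps = firls(numtaps, bands, desired, fs=fs)

        # Inspect the attenuation at the interference frequencies.
        w, h = freqz(taps, worN=8192, fs=fs)
        for f in (0.2, 10.0, 25.0, 50.0):
            idx = np.argmin(np.abs(w - f))
            print(f"{f:5.1f} Hz: {20 * np.log10(abs(h[idx]) + 1e-12):6.1f} dB")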

  11. Optoelectronic data acquisition system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Xin; Liu, Chunyang; Song, De; Tong, Zhiguo; Liu, Xiangqing

    2015-11-01

    An optoelectronic data acquisition system is designed based on an FPGA. An EP1C3T144C8 FPGA chip, from the Altera Cyclone family, is used as the centre of logic control; an XTP2046 chip is used as the A/D converter; a host computer that communicates with the data acquisition system through an RS-232 serial interface is used as the display device; and a photoresistor is used as the photosensor. Verilog HDL is used to write the FPGA logic control code. Simulation in ModelSim proves that the timing sequence is correct. Tests with the actual hardware circuit indicate that the system meets the design requirements, with fast response and stable operation.

  12. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FFP) is described. The FFP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.

  13. Learning Experience as Transaction: A Framework for Instructional Design

    ERIC Educational Resources Information Center

    Parrish, Patrick E.; Wilson, Brent G.; Dunlap, Joanna C.

    2011-01-01

    This article presents a framework for understanding learning experience as an object for instructional design--as an object for design as well as research and understanding. Compared to traditional behavioral objectives or discrete cognitive skills, the object of experience is more holistic, requiring simultaneous attention to cognition, behavior,…

  14. A concept ideation framework for medical device design.

    PubMed

    Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar

    2015-06-01

    Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems.

  15. A Comprehensive Learning Event Design Using a Communication Framework

    ERIC Educational Resources Information Center

    Bower, Robert L.

    1975-01-01

    A learning event design for accountability uses a communications framework. The example given is a slide presentation on the invasion of Cuba during the Spanish-American War. Design components include introduction, objectives, media, involvement plans, motivation, bibliography, recapitulation, involvement sheets, evaluation, stimulus-response…

  16. ADC and TDC implemented using FPGA

    SciTech Connect

    Wu, Jinyuan; Hansen, Sten; Shi, Zonghan; /Fermilab

    2007-11-01

    Several tests of FPGA devices programmed as analog waveform digitizers are discussed. The ADC uses the ramping-comparing scheme. A multi-channel ADC can be implemented with only a few resistors and capacitors as external components. Periodic logic levels are shaped by a passive RC network to generate exponential ramps. The FPGA differential input buffers are used as comparators to compare the ramps with the input signals. The times at which these ramps cross the input signals are digitized by time-to-digital-converters (TDCs) implemented within the FPGA. The TDC portion of the logic alone has potentially a broad range of HEP/nuclear science applications. A 96-channel TDC card using FPGAs as TDCs, being designed for the Fermilab MIPP electronics upgrade project, is discussed. A deserializer circuit based on the multisampling circuit used in the TDC, the 'Digital Phase Follower' (DPF), is also documented.
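
    The ramping-comparing conversion reduces to a time measurement: the exponential ramp from the RC network crosses the input voltage at a time that encodes that voltage, and the TDC digitizes the crossing time. A small Python sketch of the underlying relationship, with the RC values, logic swing and TDC resolution chosen only for illustration:

        import math

        VDD = 2.5            # logic-high level driving the RC ramp (V), illustrative
        R, C = 10e3, 1e-9    # assumed ramp network: 10 kOhm, 1 nF -> tau = 10 us
        TAU = R * C

        def crossing_time(v_in):
            """Time at which the rising ramp V(t) = VDD*(1 - exp(-t/tau)) reaches v_in."""
            return -TAU * math.log(1.0 - v_in / VDD)

        def adc_code(v_in, tdc_lsb=100e-12):
            """Convert the crossing time to a TDC code with the given time resolution."""
            return int(round(crossing_time(v_in) / tdc_lsb))

        for v in (0.25, 1.0, 2.0):
            t = crossing_time(v)
            print(f"Vin = {v:4.2f} V -> crossing at {t * 1e6:6.3f} us, TDC code {adc_code(v)}")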

  17. Design and Performance Frameworks for Constructing Problem-Solving Simulations

    PubMed Central

    Stevens, Ron; Palacio-Cayetano, Joycelin

    2003-01-01

    Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks ideally would be guided less by the strengths/limitations of the presentation media and more by cognitive analyses detailing the goals of the tasks, the needs and abilities of students, and the resulting decision outcomes needed by different audiences. This article describes a problem-solving environment and associated theoretical framework for investigating how students select and use strategies as they solve complex science problems. A framework is first described for designing on-line problem spaces that highlights issues of content, scale, cognitive complexity, and constraints. While this framework was originally designed for medical education, it has proven robust and has been successfully applied to learning environments from elementary school through medical school. Next, a similar framework is detailed for collecting student performance and progress data that can provide evidence of students' strategic thinking and that could potentially be used to accelerate student progress. Finally, experimental validation data are presented that link strategy selection and use with other metrics of scientific reasoning and student achievement. PMID:14506505

  18. Embedding Educational Design Pattern Frameworks into Learning Management Systems

    NASA Astrophysics Data System (ADS)

    Derntl, Michael; Calvo, Rafael A.

    Educational design patterns describe reusable solutions to the design of learning tasks and environments. While there are many projects producing patterns, there are few approaches dealing with supporting the instructor/user in instantiating and running those patterns on learning management systems (LMS). This paper aims to make a leap forward in this direction by presenting two different methods of embedding design pattern frameworks into LMS: (1) Supplying custom LMS components as part of the design patterns, and (2) Configuring existing LMS components based on design patterns. Descriptions of implementations and implications of these methods are provided.

  19. Project Assessment Framework through Design (PAFTD) - A Project Assessment Framework in Support of Strategic Decision Making

    NASA Technical Reports Server (NTRS)

    Depenbrock, Brett T.; Balint, Tibor S.; Sheehy, Jeffrey A.

    2014-01-01

    Research and development organizations that push the innovation edge of technology frequently encounter challenges when attempting to identify an investment strategy and to accurately forecast the cost and schedule performance of selected projects. Fast moving and complex environments require managers to quickly analyze and diagnose the value of returns on investment versus allocated resources. Our Project Assessment Framework through Design (PAFTD) tool facilitates decision making for NASA senior leadership to enable more strategic and consistent technology development investment analysis, beginning at implementation and continuing through the project life cycle. The framework takes an integrated approach by leveraging design principles of useability, feasibility, and viability and aligns them with methods employed by NASA's Independent Program Assessment Office for project performance assessment. The need exists to periodically revisit the justification and prioritization of technology development investments as changes occur over project life cycles. The framework informs management rapidly and comprehensively about diagnosed internal and external root causes of project performance.

  20. Direct digital synthesis: some options for FPGA implementation

    NASA Astrophysics Data System (ADS)

    Dick, Chris H.; Harris, Fred J.

    1999-08-01

    Direct digital synthesizers (DDS), or numerically controlled oscillators, are a functional requirement of virtually every digital communications system, including modems and software defined radios. Frequency synthesis is commonly realized using application specific standard parts or as software on a DSP processor. With ever increasing amounts of digital signal processing being realized using field programmable gate array (FPGA) based hardware platforms, it is fruitful to explore various DDS architectures and evaluate the many possible architecture/performance tradeoffs with a view to FPGA implementation. This paper describes three DDS architectures and presents several designs that illustrate DDS performance and highlight design considerations for FPGA implementation.
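
    All of the DDS architectures referred to above share the phase-accumulator principle: an N-bit accumulator advances by a frequency tuning word (FTW) every clock, and its top bits index a sine lookup table, giving f_out = FTW * f_clk / 2^N. A short Python sketch of that core; the word widths and table size are illustrative choices, not those of any design in the paper:

        import math

        PHASE_BITS = 32                  # accumulator width
        LUT_ADDR_BITS = 10               # sine table of 2**10 entries
        OUT_BITS = 12                    # DAC word width
        SINE_LUT = [int((2**(OUT_BITS - 1) - 1) * math.sin(2 * math.pi * i / 2**LUT_ADDR_BITS))
                    for i in range(2**LUT_ADDR_BITS)]

        def tuning_word(f_out, f_clk):
            """Frequency tuning word so that f_out = FTW * f_clk / 2**PHASE_BITS."""
            return int(round(f_out * 2**PHASE_BITS / f_clk))

        def dds_samples(f_out, f_clk, n):
            """Generate n output samples of the numerically controlled oscillator."""
            ftw, acc = tuning_word(f_out, f_clk), 0
            for _ in range(n):
                yield SINE_LUT[acc >> (PHASE_BITS - LUT_ADDR_BITS)]   # phase truncation
                acc = (acc + ftw) & (2**PHASE_BITS - 1)               # modulo-2^N accumulator

        print(list(dds_samples(f_out=1e6, f_clk=100e6, n=8)))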

  1. A clothing modeling framework for uniform and armor system design

    NASA Astrophysics Data System (ADS)

    Man, Xiaolin; Swan, Colby C.; Rahmatalla, Salam

    2006-05-01

    In the analysis and design of military uniforms and body armor systems it is helpful to quantify the effects of the clothing/armor system on a wearer's physical performance capabilities. Toward this end, a clothing modeling framework for quantifying the mechanical interactions between a given uniform or body armor system design and a specific wearer performing defined physical tasks is proposed. The modeling framework consists of three interacting modules: (1) a macroscale fabric mechanics/dynamics model; (2) a collision detection and contact correction module; and (3) a human motion module. In the proposed framework, the macroscopic fabric model is based on a rigorous large deformation continuum-degenerated shell theory representation. The collision and contact module enforces non-penetration constraints between the fabric and human body and computes the associated contact forces between the two. The human body is represented in the current framework, as an assemblage of overlapping ellipsoids that undergo rigid body motions consistent with human motions while performing actions such as walking, running, or jumping. The transient rigid body motions of each ellipsoidal body segment in time are determined using motion capture technology. The integrated modeling framework is then exercised to quantify the resistance that the clothing exerts on the wearer during the specific activities under consideration. Current results from the framework are presented and its intended applications are discussed along with some of the key challenges remaining in clothing system modeling.

  2. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2005-07-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 covers the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  3. A Framework to Design and Optimize Chemical Flooding Processes

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2006-08-31

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 covers the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  4. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2004-11-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 covers the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  5. A Proposed Conceptual Framework for Curriculum Design in Physical Fitness.

    ERIC Educational Resources Information Center

    Miller, Peter V.; Beauchamp, Larry S.

    A physical fitness curriculum, designed to provide cumulative benefits in a sequential pattern, is based upon a framework of a conceptual structure. The curriculum's ultimate goal is the achievement of greater physiological efficiency through a holistic approach that would strengthen circulatory-respiratory, mechanical, and neuro-muscular…

  6. Sustainable Supply Chain Design by the P-Graph Framework

    EPA Science Inventory

    The present work proposes a computer-aided methodology for designing sustainable supply chains in terms of sustainability metrics by resorting to the P-graph framework. The methodology is an outcome of the collaboration between the Office of Research and Development (ORD) of the ...

  7. TARDIS: An Automation Framework for JPL Mission Design and Navigation

    NASA Technical Reports Server (NTRS)

    Roundhill, Ian M.; Kelly, Richard M.

    2014-01-01

    Mission Design and Navigation at the Jet Propulsion Laboratory has implemented an automation framework tool to assist in orbit determination and maneuver design analysis. This paper describes the lessons learned from previous automation tools and how they have been implemented in this tool. In addition this tool has revealed challenges in software implementation, testing, and user education. This paper describes some of these challenges and invites others to share their experiences.

  8. Proposal of ROS-compliant FPGA component for low-power robotic systems

    NASA Astrophysics Data System (ADS)

    Li, Rong; Quan, Lei; Cai, YouLin

    2015-12-01

    In recent years, robots have been required to be autonomous, and their software has become sophisticated. Robots suffer from insufficient processing performance, since they cannot be equipped with a high-performance microprocessor due to battery-powered operation. On the other hand, FPGA devices can accelerate specific functions in a robot system without increasing power consumption by implementing customized circuits. However, it is difficult to introduce FPGA devices into a robot due to the large development cost of an FPGA circuit compared to software. Therefore, in this study, we propose an FPGA component technology, compliant with ROS (Robot Operating System), for easy integration of an FPGA into robots. As a case study, we designed a ROS-compliant FPGA component for image labeling using the Xilinx Zynq platform. The developed ROS-compliant FPGA component performs 1.7 times faster than the ordinary ROS software component.
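
    In ROS terms, such a component looks like an ordinary node that subscribes to an image topic and publishes the labeling result, with the heavy computation delegated to the programmable logic. A hedged rospy sketch of that wrapper; fpga_label() and the topic names are hypothetical stand-ins, not an API from the paper:

        #!/usr/bin/env python
        import rospy
        from sensor_msgs.msg import Image

        def fpga_label(image_msg):
            # Hypothetical placeholder: hand the frame to the FPGA labeling circuit
            # (e.g. via a memory-mapped driver) and return the labeled image message.
            return image_msg

        class LabelingNode(object):
            def __init__(self):
                self.pub = rospy.Publisher('labeled_image', Image, queue_size=1)
                rospy.Subscriber('image_raw', Image, self.callback, queue_size=1)

            def callback(self, msg):
                self.pub.publish(fpga_label(msg))   # offloaded work happens in hardware

        if __name__ == '__main__':
            rospy.init_node('fpga_labeling_component')
            LabelingNode()
            rospy.spin()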

  9. Electronic-generated holograms by FPGA and monochromatic LCD

    NASA Astrophysics Data System (ADS)

    Castillo-Atoche, A.; Pérez-Cortés, M.; López, M. A.; Ortiz-Gutiérrez, M.

    2006-02-01

    The majority of holograms are made using interference of light or as computer-generated holograms. In this work we propose a real-time technique to generate digital holograms with a VLSI digital component, specifically an FPGA, and a liquid crystal device. The digital design with an FPGA offers great advantages thanks to the parallel processing enabled by its flexible structure, high integration and speed. The design was verified using the MATLAB/Simulink platform and the Xilinx System Generator.

  10. CROC FPGA Firmware

    SciTech Connect

    2009-12-01

    The CROC FPGA firmware code controls the operation of the CROC hardware, primarily determining the location of neutron events and discriminating against false triggers by examining the outputs of multiple analog comparators. A number of statistical algorithms are encoded within the firmware to achieve reliable operation. Other communication and control functions are also part of the firmware.

  11. A framework for the design of ambulance sirens.

    PubMed

    Catchpole, K; McKeown, D

    2007-08-01

    Ambulance sirens are essential for assisting the safe and rapid arrival of an ambulance at the scene of an emergency. In this study, the parameters upon which sirens may be designed were examined and a framework for emergency vehicle siren design was proposed. Validity for the framework was supported through acoustic measurements and the evaluation of ambulance transit times over 240 emergency runs using two different siren systems. Modifying existing siren sounds to add high frequency content would improve vehicle penetration, detectability and sound localization cues, and mounting the siren behind the radiator grill, rather than on the light bar or under the wheel arch, would provide less unwanted noise while maintaining or improving the effective distance in front of the vehicle. Ultimately, these considerations will benefit any new attempt to design auditory warnings for the emergency services.

  12. Study of a Fine Grained Threaded Framework Design

    NASA Astrophysics Data System (ADS)

    Jones, C. D.

    2012-12-01

    Traditionally, HEP experiments exploit the multiple cores in a CPU by having each core process one event. However, future PC designs are expected to use CPUs which double the number of processing cores at the same rate as the cost of memory falls by a factor of two. This effectively means the amount of memory per processing core will remain constant. This is a major challenge for LHC processing frameworks since the LHC is expected to deliver more complex events (e.g. greater pileup events) in the coming years while the LHC experiment's frameworks are already memory constrained. Therefore in the not so distant future we may need to be able to efficiently use multiple cores to process one event. In this presentation we will discuss a design for an HEP processing framework which can allow very fine grained parallelization within one event as well as supporting processing multiple events simultaneously while minimizing the memory footprint of the job. The design is built around the libdispatch framework created by Apple Inc. (a port for Linux is available) whose central concept is the use of task queues. This design also accommodates the reality that not all code will be thread safe and therefore allows one to easily mark modules or sub parts of modules as being thread unsafe. In addition, the design efficiently handles the requirement that events in one run must all be processed before starting to process events from a different run. After explaining the design we will provide measurements from simulating different processing scenarios where the processing times used for the simulation are drawn from processing times measured from actual CMS event processing.
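
    The task-queue idea can be mimicked in plain Python: modules that are thread safe run concurrently for one event on a worker pool, while modules marked thread unsafe are funneled through a single-thread executor that plays the role of a serial queue. A minimal sketch under those assumptions; it is not the framework's actual code:

        from concurrent.futures import ThreadPoolExecutor, wait

        parallel_pool = ThreadPoolExecutor(max_workers=4)   # thread-safe modules run concurrently
        serial_queue = ThreadPoolExecutor(max_workers=1)    # stand-in for a serial task queue

        def tracker(event):
            return ('tracks', event['id'])

        def calorimeter(event):
            return ('clusters', event['id'])

        def legacy_module(event):                            # module marked thread unsafe
            return ('legacy', event['id'])

        def process_event(event):
            """Run one event: safe modules in parallel, unsafe ones serialized."""
            safe = [parallel_pool.submit(m, event) for m in (tracker, calorimeter)]
            unsafe = [serial_queue.submit(legacy_module, event)]
            wait(safe + unsafe)
            return [f.result() for f in safe + unsafe]

        events = [{'id': i} for i in range(3)]
        # Several events in flight at once, each internally parallelized.
        with ThreadPoolExecutor(max_workers=2) as event_pool:
            print(list(event_pool.map(process_event, events)))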

  13. Autonomous Lawnmower using FPGA implementation.

    NASA Astrophysics Data System (ADS)

    Ahmad, Nabihah; Lokman, Nabill bin; Helmy Abd Wahab, Mohd

    2016-11-01

    Nowadays, various types of robots have been invented for multiple purposes. These robots have special capabilities that surpass human ability and can operate in extreme environments which humans cannot endure. In this paper, an autonomous robot is built to imitate a human cutting grass. A Field Programmable Gate Array (FPGA) is used to control the movements and to process all data and information. Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) is used to describe the hardware using the Quartus II software. The robot has the ability to avoid obstacles using an ultrasonic sensor and uses two DC motors for its movement, which includes moving forward, moving backward, and turning left and right. The movement or path of the automatic lawn mower is based on a path planning technique. Four Global Positioning System (GPS) points are set to create a boundary, to ensure that the lawn mower operates within the area given by the user. Every action of the lawn mower is controlled by the Cyclone II FPGA development board with the help of the sensors. Furthermore, SketchUp software was used to design the structure of the lawn mower. The autonomous lawn mower was able to operate efficiently and smoothly return to its coordinated path after passing an obstacle. It uses 25% of the total pins available on the board and 31% of the total Digital Signal Processing (DSP) blocks.

  14. Recommendations for the optimum design of pultruded frameworks

    NASA Astrophysics Data System (ADS)

    Mottram, J. T.

    1994-09-01

    For the optimum choice of pultruded beam members in frameworks there is a need to have a greater understanding of framework behavior under load. Research on the lateral-torsional buckling of a symmetric I-section has shown how much the resistance may be affected by the loading position and the support boundary conditions. By changing the warping at the connections from free, as assumed in the USA design manual, to fixed, as may be achieved with practical connection designs it is shown that there is a potential doubling in the buckling resistance. In addition, practical connections have some initial stiffness and moment resistance, thus the connections behave in a semirigid manner. This connection behavior makes inappropriate the present procedure for choosing beam sections on the basis of limiting deflection for a simply supported member. It is proposed that research be conducted to establish the potential of semirigid design, as now being used with structural steelwork. Results from such research should provide the first stage in the process for the optimum design of frameworks.

  15. SysSon - A Framework for Systematic Sonification Design

    NASA Astrophysics Data System (ADS)

    Vogt, Katharina; Goudarzi, Visda; Holger Rutz, Hanns

    2015-04-01

    SysSon is a research approach for introducing sonification systematically to a scientific community where it is not yet commonly used - e.g., in climate science. Thereby, both technical and socio-cultural barriers have to be overcome. The approach was further developed with climate scientists, who participated in contextual inquiries, usability tests and a collaborative design workshop. These extensive user tests resulted in our final software framework. As a frontend, a graphical user interface allows climate scientists to parametrize standard sonifications with their own data sets. Additionally, an interactive shell allows users competent in sound design to code new sonifications. The framework is a standalone desktop application, available as open source (for details see http://sysson.kug.ac.at/), and works with data in NetCDF format.

  16. A Connectivity Framework for Social Information Systems Design in Healthcare

    PubMed Central

    Kuziemsky, Craig E.; Andreev, Pavel; Benyoucef, Morad; O'Sullivan, Tracey; Jamaly, Syam

    2016-01-01

    Social information systems (SISs) will play a key role in healthcare systems’ transformation into collaborative patient-centered systems that support care delivery across the entire continuum of care. SISs enable the development of collaborative networks and facilitate relationships to integrate people and processes across time and space. However, we believe that a “connectivity” issue, which refers to the scope and extent of system requirements for a SIS, is a significant challenge of SIS design. This paper’s contribution is the development of the Social Information System Connectivity Framework for supporting SIS design in healthcare. The framework has three parts. First, it defines the structure of a SIS as a set of social triads. Second, it identifies six dimensions that represent the behaviour of a SIS. Third, it proposes the Social Information System Connectivity Factor as our approximation of the extent of connectivity and degree of complexity in a SIS. PMID:28269869

  17. FPGA developments for the SPARTA project: Part 2

    NASA Astrophysics Data System (ADS)

    Goodsell, S. J.; Geng, D.; Fedrigo, E.; Soenke, C.; Donaldson, R.; Saunter, C. D.; Myers, R. M.; Basden, A. G.; Dipper, N. A.

    2006-06-01

    The European Southern Observatory (ESO) and Durham University's Centre for Advanced Instrumentation (CfAI) continue to progress the design of a next generation Adaptive Optics (AO) Real-Time Control System (RTCS). This common flexible platform, labelled SPARTA ('Standard Platform for Adaptive optics Real-Time Applications'), will control the AO systems for a set of 2nd generation VLT instrumentation, and will scale to implement the initial AO systems for the European Extremely Large Telescope (E-ELT). Durham has used Field Programmable Gate Arrays (FPGA) to design a front-end Wavefront Sensor (WFS) Processing Unit (WPU) for SPARTA. FPGA devices have been used to offload the highly parallel, computationally intensive WFS processing task from the system processors to increase the obtainable control loop frequency and reduce the computational latency in the control system. The FPGA device reduces WFS frames to gradient vectors before passing the data to the system processors. The FPGA allows the processors to deal with other tasks such as wavefront reconstruction, telemetry and real-time data recording, allowing for more complex adaptive control algorithms to be executed. Durham has designed, coded, implemented and tested an FPGA core incorporating the VITA 17.1 standard serial Front Panel Data Port (sFPDP) protocol to allow a data transfer rate of 2.5 Gbps from the WFS Controller to the SPARTA platform. This paper overviews the SPARTA WPU requirements and design, the sFPDP FPGA core, and the platform's implementation phase.
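
    Reducing WFS frames to gradient vectors is essentially a centre-of-gravity calculation per subaperture. A NumPy sketch of that reduction; the subaperture size and frame layout are illustrative assumptions rather than SPARTA parameters:

        import numpy as np

        SUB = 8   # assumed subaperture size in pixels (SUB x SUB)

        def wfs_gradients(frame):
            """Reduce a Shack-Hartmann frame to x/y centroid offsets per subaperture."""
            ny, nx = frame.shape[0] // SUB, frame.shape[1] // SUB
            coords = np.arange(SUB) - (SUB - 1) / 2.0       # pixel offsets from subaperture centre
            grads = np.zeros((ny, nx, 2))
            for j in range(ny):
                for i in range(nx):
                    spot = frame[j*SUB:(j+1)*SUB, i*SUB:(i+1)*SUB].astype(float)
                    total = spot.sum() or 1.0               # avoid division by zero
                    grads[j, i, 0] = (spot.sum(axis=0) * coords).sum() / total   # x centroid
                    grads[j, i, 1] = (spot.sum(axis=1) * coords).sum() / total   # y centroid
            return grads.reshape(-1, 2)                     # one gradient vector per subaperture

        frame = np.random.poisson(5.0, size=(64, 64))
        print(wfs_gradients(frame).shape)                   # (64, 2) for an 8x8 subaperture grid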

  18. Integration framework for design information of electromechanical systems

    NASA Astrophysics Data System (ADS)

    Qureshi, Sohail Mehboob

    The objective of this research is to develop a framework that can be used to provide an integrated view of electromechanical system design information. The framework is intended to provide a platform where various standard and pseudo standard information models such as STEP and IBIS can be integrated to provide an integrated view of design information beyond just part numbers, CAD drawings, or some specific geometry. A database application can make use of this framework to provide reuse of design information fragments including geometry, function, behavior, design procedures, performance specification, design rationale, project management, product characteristics, and configuration and version. An "Integration Core Model" is developed to provide the basis for the integration framework, and also facilitate integration of product and process data for the purpose of archiving integrated design history. There are two major subdivisions of the integration core model: product core model providing the high level structure needed to associate process information to the product data, and process core model providing the generic process information that is needed to capture and organize process information. The process core model is developed using a hybrid of structure-oriented and process-oriented approaches to process modeling. Using such a scheme the process core model is able to represent information such as hierarchies of processes, logical and temporal relationships between various design activities, and relationships between activities and the product data at various levels of abstraction. Based upon the integration core model, an integration methodology is developed to provide a systematic way of integrating various information models. Mapping theorems have been developed to methodically point out the problems that may be encountered during the integration of two information models. The integration core model is validated through a case study. Design information

  19. Design Principles for Covalent Organic Frameworks in Energy Storage Applications.

    PubMed

    Alahakoon, Sampath B; Thompson, Christina M; Occhialini, Gino; Smaldone, Ronald Alexander

    2017-03-16

    Covalent organic frameworks (COFs) are an exciting class of microporous materials that have been explored as energy storage materials for more than a decade. This review discusses the efforts to develop these materials for applications in gas and electrical power storage. It also discusses some of the design strategies for developing the gas sorption properties of COFs, as well as mechanistic studies on their formation.

  20. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. Aimed at this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on an FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by glitches on the clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single Altera Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is thus suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
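
    The multiplier-saving step of splitting the 2-D convolution into two 1-D passes relies on the separability of the Gaussian kernel. A NumPy sketch confirming that row and column filtering with a 1-D kernel reproduces the full 2-D convolution (kernel size and sigma are arbitrary illustrative values):

        import numpy as np
        from scipy.ndimage import convolve, convolve1d

        def gaussian_kernel_1d(sigma, radius):
            x = np.arange(-radius, radius + 1, dtype=float)
            k = np.exp(-x**2 / (2.0 * sigma**2))
            return k / k.sum()

        sigma, radius = 1.5, 4
        k1d = gaussian_kernel_1d(sigma, radius)
        k2d = np.outer(k1d, k1d)                     # equivalent 2-D kernel

        image = np.random.rand(64, 64)
        full_2d = convolve(image, k2d, mode='nearest')
        separable = convolve1d(convolve1d(image, k1d, axis=0, mode='nearest'),
                               k1d, axis=1, mode='nearest')

        # The two results agree to floating-point precision; the separable path needs
        # 2*(2*radius+1) multiplies per pixel instead of (2*radius+1)**2.
        print(np.max(np.abs(full_2d - separable)))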

  1. Design and applications of a multimodality image data warehouse framework.

    PubMed

    Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.

  2. A Portable Laser Photoacoustic Methane Sensor Based on FPGA

    PubMed Central

    Wang, Jianwei; Wang, Huili; Liu, Xianyong

    2016-01-01

    A portable laser photoacoustic sensor for methane (CH4) detection based on a field-programmable gate array (FPGA) is reported. A tunable distributed feedback (DFB) diode laser in the 1654 nm wavelength range is used as an excitation source. The photoacoustic signal processing was implemented by a FPGA device. A small resonant photoacoustic cell is designed. The minimum detection limit (1σ) of 10 ppm for methane is demonstrated. PMID:27657079

  3. A Portable Laser Photoacoustic Methane Sensor Based on FPGA.

    PubMed

    Wang, Jianwei; Wang, Huili; Liu, Xianyong

    2016-09-21

    A portable laser photoacoustic sensor for methane (CH₄) detection based on a field-programmable gate array (FPGA) is reported. A tunable distributed feedback (DFB) diode laser in the 1654 nm wavelength range is used as an excitation source. The photoacoustic signal processing was implemented by a FPGA device. A small resonant photoacoustic cell is designed. The minimum detection limit (1σ) of 10 ppm for methane is demonstrated.

  4. High Precision Digital Frequency Signal Source Based on FPGA

    NASA Astrophysics Data System (ADS)

    Yanbin, SHI; Jian, GUO; Ning, CUI

    The realization method of DDS technology is introduced, and its superior technical characteristics are analyzed in this paper. Based on these characteristics, a high-precision digital frequency signal source based on an FPGA is designed. The simulation results indicate that, compared with a traditional signal source, this type of signal source, realized with the FPGA+DDS approach, has many merits such as high precision and fast switching speed, which suits the development trend of test equipment.

  5. A Human Factors Framework for Payload Display Design

    NASA Technical Reports Server (NTRS)

    Dunn, Mariea C.; Hutchinson, Sonya L.

    1998-01-01

    During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.

  6. Step-by-Step Design of an FPGA-Based Digital Compensator for DC/DC Converters Oriented to an Introductory Course

    ERIC Educational Resources Information Center

    Zumel, P.; Fernandez, C.; Sanz, M.; Lazaro, A.; Barrado, A.

    2011-01-01

    In this paper, a short introductory course to introduce field-programmable gate array (FPGA)-based digital control of dc/dc switching power converters is presented. Digital control based on specific hardware has been at the leading edge of low-medium power dc/dc switching converters in recent years. Besides industry's interest in this topic, from…

  7. When Playing Meets Learning: Methodological Framework for Designing Educational Games

    NASA Astrophysics Data System (ADS)

    Linek, Stephanie B.; Schwarz, Daniel; Bopp, Matthias; Albert, Dietrich

    Game-based learning builds upon the idea of using the motivational potential of video games in the educational context. Thus, the design of educational games has to address optimizing enjoyment as well as optimizing learning. Within the EC-project ELEKTRA a methodological framework for the conceptual design of educational games was developed. Thereby state-of-the-art psycho-pedagogical approaches were combined with insights of media-psychology as well as with best-practice game design. This science-based interdisciplinary approach was enriched by enclosed empirical research to answer open questions on educational game-design. Additionally, several evaluation-cycles were implemented to achieve further improvements. The psycho-pedagogical core of the methodology can be summarized by the ELEKTRA's 4Ms: Macroadaptivity, Microadaptivity, Metacognition, and Motivation. The conceptual framework is structured in eight phases which have several interconnections and feedback-cycles that enable a close interdisciplinary collaboration between game design, pedagogy, cognitive science and media psychology.

  8. Deterministic Design Optimization of Structures in OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were compared and are reported here.
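
    The analysis-and-optimization pattern described here follows OpenMDAO's usual structure: components are added to a model, design variables, an objective and constraints are declared, and a driver runs the optimization. A minimal hedged sketch on a toy sizing problem, assuming a recent OpenMDAO 3.x release; this is not one of the structural components from the paper:

        import openmdao.api as om

        prob = om.Problem()
        # Toy sizing problem: minimize the mass of a bar while keeping stress below a limit.
        prob.model.add_subsystem('mass',
                                 om.ExecComp('m = rho * area * length',
                                             rho=2700.0, length=1.0),
                                 promotes=['*'])
        prob.model.add_subsystem('stress',
                                 om.ExecComp('sigma = force / area', force=1.0e5),
                                 promotes=['*'])

        prob.model.add_design_var('area', lower=1e-5, upper=1e-2, ref=1e-3)
        prob.model.add_objective('m')
        prob.model.add_constraint('sigma', upper=250e6, ref=250e6)   # 250 MPa allowable

        prob.driver = om.ScipyOptimizeDriver()
        prob.driver.options['optimizer'] = 'SLSQP'

        prob.setup()
        prob.set_val('area', 1e-3)
        prob.run_driver()
        print(prob.get_val('area'), prob.get_val('m'), prob.get_val('sigma'))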

  9. From OO to FPGA :

    SciTech Connect

    Kou, Stephen; Palsberg, Jens; Brooks, Jeffrey

    2012-09-01

    Consumer electronics today such as cell phones often have one or more low-power FPGAs to assist with energy-intensive operations in order to reduce overall energy consumption and increase battery life. However, current techniques for programming FPGAs require people to be specially trained to do so. Ideally, software engineers can more readily take advantage of the benefits FPGAs offer by being able to program them using their existing skills, a common one being object-oriented programming. However, traditional techniques for compiling object-oriented languages are at odds with today's FPGA tools, which support neither pointers nor complex data structures. Open until now is the problem of compiling an object-oriented language to an FPGA in a way that harnesses this potential for huge energy savings. In this paper, we present a new compilation technique that feeds into an existing FPGA tool chain and produces FPGAs with up to almost an order of magnitude in energy savings compared to a low-power microprocessor while still retaining comparable performance and area usage.

  10. An enhanced BSIM modeling framework for self-heating aware circuit design

    NASA Astrophysics Data System (ADS)

    Schleyer, M.; Leuschner, S.; Baumgartner, P.; Mueller, J.-E.; Klar, H.

    2014-11-01

    This work proposes a modeling framework to enhance the industry-standard BSIM4 MOSFET models with capabilities for coupled electro-thermal simulations. An automated simulation environment extracts thermal information from model data as provided by the semiconductor foundry. The standard BSIM4 model is enhanced with a Verilog-A based wrapper module, adding thermal nodes which can be connected to a thermal-equivalent RC network. The proposed framework allows a fully automated extraction process based on the netlist of the top-level design and the model library. A numerical analysis tool is used to control the extraction flow and to obtain all required parameters. The framework is used to model self-heating effects on a fully integrated class A/AB power amplifier (PA) designed in a standard 65 nm CMOS process. The PA is driven with +30 dBm output power, leading to an average temperature rise of approximately 40 °C over ambient temperature.
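
    The electro-thermal coupling added by such a wrapper can be pictured as a fixed-point loop: device power heats a thermal network, the temperature rise shifts temperature-dependent parameters, and the electrical solution is recomputed until both agree. A simplified Python sketch of that steady-state iteration; the derating model and numbers are illustrative assumptions, not BSIM4 equations:

        T_AMB = 25.0        # ambient temperature, degC
        R_TH = 40.0         # thermal resistance device -> ambient, K/W (assumed)

        def device_power(t_celsius):
            """Toy electrical model: dissipated power falls slightly as the device heats up."""
            p0, alpha = 1.0, 1.5e-3          # W at 25 degC, fractional derating per K (assumed)
            return p0 / (1.0 + alpha * (t_celsius - T_AMB))

        def solve_self_heating(tol=1e-6, max_iter=100):
            """Fixed-point iteration between the electrical and thermal solutions."""
            t = T_AMB
            for _ in range(max_iter):
                p = device_power(t)              # electrical solve at the current temperature
                t_new = T_AMB + R_TH * p         # thermal solve: dT = P * Rth
                if abs(t_new - t) < tol:
                    return t_new, p
                t = t_new
            return t, p

        temperature, power = solve_self_heating()
        print(f"steady state: {temperature:.2f} degC at {power:.3f} W dissipation")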

  11. A computational molecular design framework for crosslinked polymer networks

    PubMed Central

    Eslick, J.C.; Ye, Q.; Park, J.; Topp, E.M.; Spencer, P.; Camarda, K.V.

    2013-01-01

    Crosslinked polymers are important in a very wide range of applications including dental restorative materials. However, currently used polymeric materials experience limited durability in the clinical oral environment. Researchers in the dental polymer field have generally used a time-consuming experimental trial-and-error approach to the design of new materials. The application of computational molecular design (CMD) to crosslinked polymer networks has the potential to facilitate development of improved polymethacrylate dental materials. CMD uses quantitative structure property relations (QSPRs) and optimization techniques to design molecules possessing desired properties. This paper describes a mathematical framework which provides tools necessary for the application of CMD to crosslinked polymer systems. The novel parts of the system include the data structures used, which allow for simple calculation of structural descriptors, and the formulation of the optimization problem. A heuristic optimization method, Tabu Search, is used to determine candidate monomers. Use of a heuristic optimization algorithm makes the system more independent of the types of QSPRs used, and more efficient when applied to combinatorial problems. A software package has been created which provides polymer researchers access to the design framework. A complete example of the methodology is provided for polymethacrylate dental materials. PMID:23904665

  12. Synergy: A language and framework for robot design

    NASA Astrophysics Data System (ADS)

    Katragadda, Lalitesh Kumar

    Due to escalation in complexity, capability and application, robot design is increasingly difficult. A design environment can automate many design tasks, relieving the designer's burden. Prior to robot development, designers compose a robot from existing or custom developed components, simulate performance, optimize configuration and parameters, and write software for the robot. Robot designers customize these facets to the robot using a variety of software ranging from spreadsheets to C code to CAD tools. Valuable resources are expended, and very little of this expertise and development is reusable. This research begins with the premise that a language to comprehensively represent robots is lacking and that the aforementioned design tasks can be automated once such a language exists. This research proposes and demonstrates the following thesis: "A language to represent robots, along with a framework to generate simulations, optimize designs and generate control software, increases the effectiveness of design." Synergy is the software developed in this research to reflect this philosophy. Synergy was prototyped and demonstrated in the context of lunar rover design, a challenging real-world problem with multiple requirements and a broad design space. Synergy was used to automatically optimize robot parameters and select parts to generate effective designs, while meeting constraints of the embedded components and sub-systems. The generated designs are superior in performance and consistency when compared to designs by teams of designers using the same knowledge. Using a single representation, multiple designs are generated for four distinct lunar exploration objectives. Synergy uses the same representation to auto-generate landing simulations and simultaneously generate control software for the landing. Synergy consists of four software agents. A database and spreadsheet agent compiles the design and component information, generating component interconnections and

  13. Ecohydrology frameworks for green infrastructure design and ecosystem service provision

    NASA Astrophysics Data System (ADS)

    Pavao-Zuckerman, M.; Knerl, A.; Barron-Gafford, G.

    2014-12-01

    Urbanization is a dominant form of landscape change that affects the structure and function of ecosystems and alters control points in biogeochemical and hydrologic cycles. Green infrastructure (GI) has been proposed as a solution to many urban environmental challenges and may be a way to manage biogeochemical control points. Despite this promise, there has been relatively limited empirical focus to evaluate the efficacy of GI, relationships between design and function, and the ability of GI to provide ecosystem services in cities. This work has been driven by the goals of adapting GI approaches to dryland cities and harvesting rain and storm water to provide ecosystem services related to storm water management and urban heat island mitigation, as well as other co-benefits. We will present a modification of ecohydrologic theory for guiding the design and function of green infrastructure for dryland systems that highlights how GI functions in the context of the Trigger-Transfer-Reserve-Pulse (TTRP) dynamic framework. Here we also apply this TTRP framework to observations of established street-scape green infrastructure in Tucson, AZ, and an experimental installation of green infrastructure basins on the campus of Biosphere 2 (Oracle, AZ) where we have been measuring plant performance and soil biogeochemical functions. We found variable sensitivity of microbial activity, soil respiration, N-mineralization, photosynthesis and respiration that was mediated both by elements of basin design (soil texture and composition, choice of surface mulches) and by antecedent precipitation inputs and soil moisture conditions. The adapted TTRP framework and field studies suggest that there are strong connections between design and function that have implications for stormwater management and ecosystem service provision in dryland cities.

  14. A Framework for Designing Scaffolds That Improve Motivation and Cognition

    PubMed Central

    Belland, Brian R.; Kim, ChanMin; Hannafin, Michael J.

    2013-01-01

    A problematic, yet common, assumption among educational researchers is that when teachers provide authentic, problem-based experiences, students will automatically be engaged. Evidence indicates that this is often not the case. In this article, we discuss (a) problems with ignoring motivation in the design of learning environments, (b) problem-based learning and scaffolding as one way to help, (c) how scaffolding has strayed from what was originally equal parts motivational and cognitive support, and (d) a conceptual framework for the design of scaffolds that can enhance motivation as well as cognitive outcomes. We propose guidelines for the design of computer-based scaffolds to promote motivation and engagement while students are solving authentic problems. Remaining questions and suggestions for future research are then discussed. PMID:24273351

  15. Seed Design Framework for Mapping SOLiD Reads

    NASA Astrophysics Data System (ADS)

    Noé, Laurent; Gîrdea, Marta; Kucherov, Gregory

    The advent of high-throughput sequencing technologies constituted a major advance in genomic studies, offering new prospects in a wide range of applications. We propose a rigorous and flexible algorithmic solution to mapping SOLiD color-space reads to a reference genome. The solution relies on an advanced method of seed design that uses, on the one hand, a faithful probabilistic model of read matches and, on the other hand, a novel seeding principle especially adapted to read mapping. Our method can handle both lossy and lossless frameworks and is able to distinguish, at the level of seed design, between SNPs and reading errors. We illustrate our approach by several seed designs and demonstrate their efficiency.

  16. Multigrid shallow water equations on an FPGA

    NASA Astrophysics Data System (ADS)

    Jeffress, Stephen; Duben, Peter; Palmer, Tim

    2015-04-01

    A novel computing technology for multigrid shallow water equations is investigated. As power consumption begins to constrain traditional supercomputing advances, weather and climate simulators are exploring alternative technologies that achieve efficiency gains through massively parallel and low power architectures. In recent years FPGA implementations of reduced complexity atmospheric models have shown accelerated speeds and reduced power consumption compared to multi-core CPU integrations. We continue this line of research by designing an FPGA dataflow engine for a multigrid version of the 2D shallow water equations. The multigrid algorithm couples grids of variable resolution to improve accuracy. We show that a significant reduction of precision in the floating point representation of the fine grid variables allows greater parallelism and thus improved overall performance while maintaining accurate integrations. Preliminary designs have been constructed by software emulation. Results of the hardware implementation will be presented at the conference.

  17. FPGA Based Reconfigurable ATM Switch Test Bed

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.; Jones, Robert E.

    1998-01-01

    Various issues associated with "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) Network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract shared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: Advances in FPGA make hardware emulation feasible for performance evaluation, hardware emulation can provide several orders of magnitude speed-up over software simulation; due to the complexity of the hardware synthesis process, development in emulation is much more difficult than simulation and requires knowledge in both networks and digital design.

  18. Real-time panoramic infrared imaging system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhang, Hao-Jun; Shen, Yong-Ge

    2010-11-01

    During the past decades, a signal processing architecture based on an FPGA, a conventional DSP processor and a host computer has been popular for infrared and other electro-optical systems. With increasing processing requirements, this architecture starts to show its limitations in several respects. This paper elaborates an FPGA-based solution for a panoramic imaging system as our first step in upgrading the processing module to a System-on-Chip (SoC) solution. Firstly, we compare this new architecture with the traditional one to show its superiority, mainly in video processing ability, reduction of the development workload and miniaturization of the system architecture. Afterwards, this paper provides an in-depth description of the imaging system, including the system architecture and its function, and addresses several related issues, followed by the future development. FPGAs have developed rapidly during the past years, not only in silicon devices but also in design flows and tools. In the end, we briefly present our future system development and introduce new design tools that make up for the limitations of the traditional FPGA design methodology. The advanced design flow through Simulink and Xilinx System Generator (Sysgen) is elaborated, which enables engineers to develop sophisticated DSP algorithms and implement them in an FPGA more efficiently. It is believed that this new design approach can shorten the system design cycle by allowing rapid prototyping and refinement of the design process.

  19. An FPGA-based reconfigurable DDC algorithm

    NASA Astrophysics Data System (ADS)

    Juszczyk, B.; Kasprowicz, G.

    2016-09-01

    This paper describes the implementation of a reconfigurable digital down converter in an FPGA structure. The system is designed to work with quadrature signals. One of the main criteria of the project was to provide a wide range of reconfiguration in order to cover a broad range of applications. Potential applications include: software defined radio receivers, passive noise radars and measurement data compression. This document contains a general system overview, a short description of the hardware used in the project and the gateware implementation.

  20. Screen Design Guidelines for Motivation in Interactive Multimedia Instruction: A Survey and Framework for Designers.

    ERIC Educational Resources Information Center

    Lee, Sung Heum; Boling, Elizabeth

    1999-01-01

    Identifies guidelines from the literature relating to screen design and design of interactive instructional materials. Describes two types of guidelines--those aimed at enhancing motivation and those aimed at preventing loss of motivation--for typography, graphics, color, and animation and audio. Proposes a framework for considering motivation in…

  1. A Robust Control Design Framework for Substructure Models

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    1994-01-01

    A framework for designing control systems directly from substructure models and uncertainties is proposed. The technique is based on combining a set of substructure robust control problems by an interface stiffness matrix which appears as a constant gain feedback. Variations of uncertainties in the interface stiffness are treated as a parametric uncertainty. It is shown that multivariable robust control can be applied to generate centralized or decentralized controllers that guarantee performance with respect to uncertainties in the interface stiffness, reduced component modes and external disturbances. The technique is particularly suited for large, complex, and weakly coupled flexible structures.

  2. New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth A.; Pellish, Jonathan

    2016-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Subsequently, fail-safe and mitigation strategies will differ per FPGA type. In this session several design approaches for safe systems will be presented. It will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies; so that users can select appropriate mitigation and perform the required trade for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  3. New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth; Pellish, Jonathan

    2015-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Subsequently, fail-safe and mitigation strategies will differ per FPGA type. In this session several design approaches for safe systems will be presented. It will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies; so that users can select appropriate mitigation and perform the required trade for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  4. New Developments in FPGA Devices: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth; Pellish, Jonathan

    2016-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Subsequently, fail-safe and mitigation strategies will differ per FPGA type. In this session several design approaches for safe systems will be presented. It will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies; so that users can select appropriate mitigation and perform the required trade for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  5. FPGA Implementation of Reed-Solomon Decoder for IEEE 802.16 WiMAX Systems using Simulink-Sysgen Design Environment

    SciTech Connect

    Bobrek, Miljko; Albright, Austin P

    2012-01-01

    This paper presents an FPGA implementation of a Reed-Solomon decoder for use in IEEE 802.16 WiMAX systems. The decoder is based on the RS(255,239) code, and is additionally shortened and punctured according to the WiMAX specifications. A Simulink model based on the Sysgen library of Xilinx blocks was used for simulation and hardware implementation. Finally, simulation results and hardware implementation performance are presented.
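
    For readers who want to experiment with the underlying code on the software side, the sketch below exercises RS(255,239) (16 parity bytes, correcting up to 8 byte errors per block) using the third-party Python package reedsolo; the package choice and the injected error positions are illustrative assumptions, and the WiMAX shortening and puncturing described above are not modeled.

        # RS(255,239): 16 parity symbols, corrects up to 8 byte errors per block.
        # Uses the third-party 'reedsolo' package (pip install reedsolo) as a
        # software stand-in; WiMAX shortening/puncturing is not modeled here.
        from reedsolo import RSCodec

        rsc = RSCodec(16)                    # nsym = 255 - 239 = 16
        message = bytes(range(239))          # one full information block
        codeword = rsc.encode(message)       # 255-byte codeword

        corrupted = bytearray(codeword)
        for i in (5, 60, 100, 200):          # inject 4 byte errors (<= 8 correctable)
            corrupted[i] ^= 0xFF

        result = rsc.decode(bytes(corrupted))
        decoded = result[0] if isinstance(result, tuple) else result
        assert bytes(decoded) == message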

  6. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. A general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  7. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. In this paper a general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  8. Microgravity isolation system design: A modern control analysis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Many acceleration-sensitive, microgravity science experiments will require active vibration isolation from the manned orbiters on which they will be mounted. The isolation problem, especially in the case of a tethered payload, is a complex three-dimensional one that is best suited to modern-control design methods. These methods, although more powerful than their classical counterparts, can nonetheless go only so far in meeting the design requirements for practical systems. Once a tentative controller design is available, it must still be evaluated to determine whether or not it is fully acceptable, and to compare it with other possible design candidates. Realistically, such evaluation will be an inherent part of a necessary iterative design process. In this paper, an approach is presented for applying complex mu-analysis methods to a closed-loop vibration isolation system (experiment plus controller). An analysis framework is presented for evaluating nominal stability, nominal performance, robust stability, and robust performance of active microgravity isolation systems, with emphasis on the effective use of mu-analysis methods.

  9. Design framework for entanglement-distribution switching networks

    NASA Astrophysics Data System (ADS)

    Drost, Robert J.; Brodsky, Michael

    2016-09-01

    The distribution of quantum entanglement appears to be an important component of applications of quantum communications and networks. The ability to centralize the sourcing of entanglement in a quantum network can provide for improved efficiency and enable a variety of network structures. A necessary feature of an entanglement-sourcing network node comprising several sources of entangled photons is the ability to reconfigurably route the generated pairs of photons to network neighbors depending on the desired entanglement sharing of the network users at a given time. One approach to such routing is the use of a photonic switching network. The requirements for an entanglement distribution switching network are less restrictive than for typical conventional applications, leading to design freedom that can be leveraged to optimize additional criteria. In this paper, we present a mathematical framework defining the requirements of an entanglement-distribution switching network. We then consider the design of such a switching network using a number of 2 × 2 crossbar switches, addressing the interconnection of these switches and efficient routing algorithms. In particular, we define a worst-case loss metric and consider 6 × 6, 8 × 8, and 10 × 10 network designs that optimize both this metric and the number of crossbar switches composing the network. We pay particular attention to the 10 × 10 network, detailing novel results proving the optimality of the proposed design. These optimized network designs have great potential for use in practical quantum networks, thus advancing the concept of quantum networks toward reality.

  10. An Integrated Framework Advancing Membrane Protein Modeling and Design

    PubMed Central

    Weitzner, Brian D.; Duran, Amanda M.; Tilley, Drew C.; Elazar, Assaf; Gray, Jeffrey J.

    2015-01-01

    Membrane proteins are critical functional molecules in the human body, constituting more than 30% of open reading frames in the human genome. Unfortunately, a myriad of difficulties in overexpression and reconstitution into membrane mimetics severely limit our ability to determine their structures. Computational tools are therefore instrumental to membrane protein structure prediction, consequently increasing our understanding of membrane protein function and their role in disease. Here, we describe a general framework facilitating membrane protein modeling and design that combines the scientific principles for membrane protein modeling with the flexible software architecture of Rosetta3. This new framework, called RosettaMP, provides a general membrane representation that interfaces with scoring, conformational sampling, and mutation routines that can be easily combined to create new protocols. To demonstrate the capabilities of this implementation, we developed four proof-of-concept applications for (1) prediction of free energy changes upon mutation; (2) high-resolution structural refinement; (3) protein-protein docking; and (4) assembly of symmetric protein complexes, all in the membrane environment. Preliminary data show that these algorithms can produce meaningful scores and structures. The data also suggest needed improvements to both sampling routines and score functions. Importantly, the applications collectively demonstrate the potential of combining the flexible nature of RosettaMP with the power of Rosetta algorithms to facilitate membrane protein modeling and design. PMID:26325167

  11. 3D FFTs on a Single FPGA.

    PubMed

    Humphries, Benjamin; Zhang, Hansen; Sheng, Jiayi; Landaverde, Raphael; Herbordt, Martin C

    2014-05-01

    The 3D FFT is critical in many physical simulations and image processing applications. On FPGAs, however, the 3D FFT was thought to be inefficient relative to other methods such as convolution-based implementations of multi-grid. We find the opposite: a simple design, operating at a conservative frequency, takes 4μs for 16³, 21μs for 32³, and 215μs for 64³ single precision data points. The first two of these compare favorably with the 25μs and 29μs obtained running on a current Nvidia GPU. Some broader significance is that this is a critical piece in implementing a large scale FPGA-based MD engine: even a single FPGA is capable of keeping the FFT off of the critical path for a large fraction of possible MD simulations.
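
    A quick CPU reference point for the same problem sizes can be produced with NumPy; the timings it prints are machine-dependent and are not the figures reported above.

        # Single-precision 3D FFTs at the sizes discussed above, timed on the CPU.
        import time
        import numpy as np

        for n in (16, 32, 64):
            data = (np.random.rand(n, n, n) + 1j * np.random.rand(n, n, n)).astype(np.complex64)
            t0 = time.perf_counter()
            np.fft.fftn(data)                        # 3D FFT over all three axes
            dt = time.perf_counter() - t0
            print(f"{n}^3 points: {dt * 1e6:.1f} us on this CPU")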

  12. FPGA Flash Memory High Speed Data Acquisition

    NASA Technical Reports Server (NTRS)

    Gonzalez, April

    2013-01-01

    The purpose of this research is to design and implement a VHDL ONFI Controller module for a Modular Instrumentation System. The goal of the Modular Instrumentation System is to have a low-power device that stores data and sends the data at a low speed to a processor. Such a system gives an advantage over purchased binary IP by allowing NASA to re-use and modify the memory controller module. To accomplish the performance criteria of a low-power system, an in-house auxiliary board (Flash/ADC board), an FPGA development kit, a debug board, and a modular instrumentation board are jointly used for the data acquisition. The Flash/ADC board contains four 1 MSPS input channels and an Open NAND Flash memory module with an analog-to-digital converter. The ADC, data bits, and control line signals from the board are sent to a Microsemi/Actel FPGA development kit for VHDL programming of the flash memory WRITE, READ, READ STATUS, ERASE, and RESET operation waveforms using Libero software. The debug board is used for verification of the analog input signal and communicates via a serial interface with the modular instrumentation. The scope of the new controller module was to find and develop an ONFI controller, with the debug board layout designed and completed for manufacture. Successful flash memory operation waveform test routines were completed, simulated, and tested to work on the FPGA board. Through connection of the Flash/ADC board with the FPGA, it was found that the device specifications were not being met, with Vdd reaching only half of its voltage. Further testing showed that it was the manufactured Flash/ADC board that contained a misalignment with the ONFI memory module traces. The errors proved to be too great to fix within the time limit set for the project.

  13. Onboard FPGA-based SAR processing for future spaceborne systems

    NASA Technical Reports Server (NTRS)

    Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce; Rogez, Francois; Rosen, Paul; Shah, Biren; Taft, Stephanie

    2004-01-01

    We present a real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne systems. In particular, we discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.

  14. INSTITUTIONALIZING SAFEGUARDS-BY-DESIGN: HIGH-LEVEL FRAMEWORK

    SciTech Connect

    Trond Bjornard PhD; Joseph Alexander; Robert Bean; Brian Castle; Scott DeMuth, Ph.D.; Phillip Durst; Michael Ehinger; Prof. Michael Golay, Ph.D.; Kevin Hase, Ph.D.; David J. Hebditch, DPhil; John Hockert, Ph.D.; Bruce Meppen; James Morgan; Jerry Phillips, Ph.D., PE

    2009-02-01

    participation in facility design options analysis in the conceptual design phase to enhance intrinsic features, among others. The SBD process is unlikely to be broadly applied in the absence of formal requirements to do so, or compelling evidence of its value. Neither exists today. A formal instrument to require the application of SBD is needed and would vary according to both the national and regulatory environment. Several possible approaches to implementation of the requirements within the DOE framework are explored in this report. Finally, there are numerous barriers to the implementation of SBD, including the lack of a strong safeguards culture, intellectual property concerns, the sensitive nature of safeguards information, and the potentially divergent or conflicting interests of participants in the process. In terms of SBD implementation in the United States, there are no commercial nuclear facilities that are under IAEA safeguards. Efforts to institutionalize SBD must address these issues. Specific work in FY09 could focus on the following: finalizing the proposed SBD process for use by DOE and performing a pilot application on a DOE project in the planning phase; developing regulatory options for mandating SBD; further development of safeguards-related design guidance, principles and requirements; development of a specific SBD process tailored to the NRC environment; and development of an engagement strategy for the IAEA and other international partners.

  15. Analysis and System Design Framework for Infrared Spatial Heterodyne Spectrometers

    SciTech Connect

    Cooke, B.J.; Smith, B.W.; Laubscher, B.E.; Villeneuve, P.V.; Briles, S.D.

    1999-04-05

    The authors present a preliminary analysis and design framework developed for the evaluation and optimization of infrared, Imaging Spatial Heterodyne Spectrometer (SHS) electro-optic systems. Commensurate with conventional interferometric spectrometers, SHS modeling requires an integrated analysis environment for rigorous evaluation of system error propagation due to detection process, detection noise, system motion, retrieval algorithm and calibration algorithm. The analysis tools provide for optimization of critical system parameters and components including: (1) optical aperture, f-number, and spectral transmission, (2) SHS interferometer grating and Littrow parameters, and (3) image plane requirements as well as cold shield, optical filtering, and focal-plane dimensions, pixel dimensions and quantum efficiency, (4) SHS spatial and temporal sampling parameters, and (5) retrieval and calibration algorithm issues.

  16. FPGA-based Hyperspectral Covariance Coprocessor for Size, Weight, and Power Constrained Platforms

    NASA Astrophysics Data System (ADS)

    Kusinsky, David Alan

    Hyperspectral imaging (HSI) is a method of remote sensing that collects many two-dimensional images of the same physical scene. Each image corresponds to a single wavelength band in the electromagnetic spectrum. The number of bands imaged by an HSI sensor can be several hundred, and therefore a large amount of data is produced. This data must be handled by the platform on which the HSI sensor resides, either through onboard processing or relaying elsewhere. Hence, the platform plays an important role in defining the capabilities of the entire remote sensing system. Size, weight, and power (SWaP) are important factors in the design of any remote sensing platform. These remote sensing platforms, such as Unmanned Air Vehicles and microsatellites, are continually decreasing in size. This creates a need for remote sensing and image processing hardware that consumes less area, weight, and power, while delivering processing performance. The purpose of this research is to design and characterize an FPGA-based hardware coprocessor that parallelizes the calculation of covariance, a time-consuming step common in hyperspectral image processing. The goal is to deploy such a coprocessor on a remote sensing platform. The coprocessor is implemented using a Xilinx ML605 evaluation board. The hardware used includes the Xilinx Virtex-6 FPGA, DDR3 memory, and PCIe interface. An implementation to accelerate the covariance calculation was created, and the OpenCPI open source framework was adopted to enable DDR3 memory and PCIe capabilities and ease coprocessor testing. The coprocessor's performance is evaluated using several metrics: total power (Watts), processing energy (Joules), floating point operations per Watt (FLOPS/W), and floating point operations per Watt-kg (FLOPS/(W·kg)). The coprocessor is compared to a CPU-based processing platform and shown to have an overall SWaP advantage. Coprocessor FLOPS/W and FLOPS/(W·kg) performance is 2X and 2.75X that of the CPU-based platform
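
    The quantity being accelerated is the band-covariance matrix of the hyperspectral cube, which on the CPU reduces to a single matrix computation over pixels; the cube dimensions below are illustrative, not those of any particular sensor.

        # Band covariance of a hyperspectral cube (bands x rows x cols), CPU reference.
        import numpy as np

        bands, rows, cols = 128, 200, 200                   # illustrative cube size
        cube = np.random.rand(bands, rows, cols).astype(np.float32)

        pixels = cube.reshape(bands, -1)                    # one spectrum per column
        cov = np.cov(pixels)                                # bands x bands covariance matrix
        print(cov.shape)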

  17. SEU mitigation strategies for SRAM-based FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Pei; Zhang, Jian

    2011-08-01

    The type of Field Programmable Gate Array (FPGA) technology and device family used in a design is a key factor for system reliability. Though antifuse-based FPGAs are widely used in aerospace because of their high reliability, current antifuse-based FPGA devices are expensive and leave no room for mistakes or changes since they are not reprogrammable. Substitutes for antifuse-based FPGAs are needed in aerospace design; they should be both reprogrammable and highly resilient to Single Event Upset (SEU) effects. SRAM-based FPGAs are widely and systematically used in complex embedded digital systems, in both single-chip industrial and commercial applications. They are reprogrammable and high in density because of the smaller SRAM cells and logic structures. But SRAM-based FPGAs are especially sensitive to cosmic radiation because the configuration information is stored in SRAM memory. The ideal FPGA for aerospace use should be a high-density SRAM-based device that is also insensitive to cosmic-radiation-induced SEUs. Therefore, in order to enable the use of SRAM-based FPGAs in safety critical applications, new techniques and strategies are essential to mitigate the SEU errors in such devices. In order to improve the reliability of SRAM-based FPGAs, which are very sensitive to SEU errors, techniques such as reconfiguration and Triple Module Redundancy (TMR) are widely used in aerospace electronic systems to mitigate SEU and Single Event Functional Interrupt (SEFI) errors. Compared to reconfiguration and triplication, scrubbing and partial reconfiguration utilize fewer or even no internal resources of the FPGA. What's more, the detection and repair process can detect and correct SEU errors in the configuration memory of the FPGA without affecting or interrupting the proper working of the system, whereas reconfiguration would terminate the operation of the FPGA. This paper presents a payload system realized on a Xilinx Virtex-4 FPGA which mitigates SEU effects in the

  18. Framework for Implementing Engineering Senior Design Capstone Courses and Design Clinics

    ERIC Educational Resources Information Center

    Franchetti, Matthew; Hefzy, Mohamed Samir; Pourazady, Mehdi; Smallman, Christine

    2012-01-01

    Senior design capstone projects for engineering students are essential components of an undergraduate program that enhances communication, teamwork, and problem solving skills. Capstone projects with industry are well established in management, but not as heavily utilized in engineering. This paper outlines a general framework that can be used by…

  19. Architectural Design and the Learning Environment: A Framework for School Design Research

    ERIC Educational Resources Information Center

    Gislason, Neil

    2010-01-01

    This article develops a theoretical framework for studying how instructional space, teaching and learning are related in practice. It is argued that a school's physical design can contribute to the quality of the learning environment, but several non-architectural factors also determine how well a given facility serves as a setting for teaching…

  20. FPGA Implementation of Heart Rate Monitoring System.

    PubMed

    Panigrahy, D; Rakshit, M; Sahu, P K

    2016-03-01

    This paper describes a field programmable gate array (FPGA) implementation of a system that calculates the heart rate from the Electrocardiogram (ECG) signal. After heart rate calculation, tachycardia, bradycardia or a normal heart rate can easily be detected. ECG is a diagnostic tool routinely used to assess the electrical activity and muscular function of the heart. Heart rate is calculated by detecting the R peaks from the ECG signal. Providing a portable, continuous heart rate monitoring system for patients using ECG requires dedicated hardware. An FPGA provides easy testability and allows faster implementation and verification of a new design. We have proposed a five-stage methodology using basic VHDL blocks like addition, multiplication and data conversion (real to fixed point and vice versa). Our proposed heart rate calculation (R-peak detection) method has been validated using the first channel of the 48 ECG records of the MIT-BIH arrhythmia database. It shows an accuracy of 99.84%, a sensitivity of 99.94% and a positive predictive value of 99.89%. Our proposed method outperforms other well-known methods in the case of pathological ECG signals and was successfully implemented in an FPGA.
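
    The final heart-rate and rhythm step can be sketched in a few lines once R-peak times are available; the peak instants and the 60/100 bpm classification bounds below are illustrative, not taken from the paper.

        # Heart rate and rhythm label from detected R-peak times (in seconds).
        import numpy as np

        r_peaks = np.array([0.80, 1.62, 2.45, 3.26, 4.08])   # illustrative R-peak instants
        rr = np.diff(r_peaks)                                  # R-R intervals
        heart_rate = 60.0 / rr.mean()                          # beats per minute

        if heart_rate > 100.0:
            rhythm = "tachycardia"
        elif heart_rate < 60.0:
            rhythm = "bradycardia"
        else:
            rhythm = "normal"
        print(f"{heart_rate:.1f} bpm -> {rhythm}")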

  1. Region-Oriented Placement Algorithm for Coarse-Grained Power-Gating FPGA Architecture

    NASA Astrophysics Data System (ADS)

    Li, Ce; Dong, Yiping; Watanabe, Takahiro

    An FPGA plays an essential role in industrial products due to its fast, stable and flexible features. But the power consumption of FPGAs used in portable devices is one of the critical issues. The top-down hierarchical design method is commonly used in both ASIC and FPGA design. But, in the case where plural modules are integrated in an FPGA and some of them might be in sleep mode, the current FPGA architecture cannot be fully effective. In this paper, a coarse-grained power-gating FPGA architecture is proposed in which the whole area of an FPGA is partitioned into several regions and the power supply is controlled for each region, so that modules in sleep mode can be effectively powered off. We also propose a region-oriented FPGA placement algorithm fitted to this hierarchical user design, based on VPR [1]. Simulation results show that the proposed method could reduce the power consumption of an FPGA by 38% on average by setting unused modules or regions to sleep mode.

  2. A computational framework to empower probabilistic protein design

    PubMed Central

    Fromer, Menachem; Yanover, Chen

    2008-01-01

    Motivation: The task of engineering a protein to perform a target biological function is known as protein design. A commonly used paradigm casts this functional design problem as a structural one, assuming a fixed backbone. In probabilistic protein design, positional amino acid probabilities are used to create a random library of sequences to be simultaneously screened for biological activity. Clearly, certain choices of probability distributions will be more successful in yielding functional sequences. However, since the number of sequences is exponential in protein length, computational optimization of the distribution is difficult. Results: In this paper, we develop a computational framework for probabilistic protein design following the structural paradigm. We formulate the distribution of sequences for a structure using the Boltzmann distribution over their free energies. The corresponding probabilistic graphical model is constructed, and we apply belief propagation (BP) to calculate marginal amino acid probabilities. We test this method on a large structural dataset and demonstrate the superiority of BP over previous methods. Nevertheless, since the results obtained by BP are far from optimal, we thoroughly assess the paradigm using high-quality experimental data. We demonstrate that, for small scale sub-problems, BP attains identical results to those produced by exact inference on the paradigmatic model. However, quantitative analysis shows that the distributions predicted significantly differ from the experimental data. These findings, along with the excellent performance we observed using BP on the smaller problems, suggest potential shortcomings of the paradigm. We conclude with a discussion of how it may be improved in the future. Contact: fromer@cs.huji.ac.il PMID:18586717
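
    The core of the probabilistic formulation above, positional amino acid probabilities drawn from a Boltzmann distribution over free energies, can be illustrated directly; the energies and temperature factor below are made-up numbers for a single position, not values from the paper.

        # Positional amino acid probabilities: p_i proportional to exp(-E_i / kT).
        import numpy as np

        energies = {"ALA": -1.2, "VAL": -0.8, "LEU": -1.5, "PHE": 0.3}   # illustrative kcal/mol
        kT = 0.593                                                       # ~RT at 298 K, kcal/mol

        names = list(energies)
        e = np.array([energies[a] for a in names])
        weights = np.exp(-(e - e.min()) / kT)     # shift by the minimum for numerical stability
        probs = weights / weights.sum()
        for name, p in zip(names, probs):
            print(f"{name}: {p:.3f}")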

  3. FPGA Boot Loader and Scrubber

    NASA Technical Reports Server (NTRS)

    Wade, Randall S.; Jones, Bailey

    2009-01-01

    A computer program loads configuration code into a Xilinx field-programmable gate array (FPGA), reads back and verifies that code, reloads the code if an error is detected, and monitors the performance of the FPGA for errors in the presence of radiation. The program consists mainly of a set of VHDL files (wherein "VHDL" signifies "VHSIC Hardware Description Language" and "VHSIC" signifies "very-high-speed integrated circuit").

  4. Reusable rocket engine intelligent control system framework design, phase 2

    NASA Technical Reports Server (NTRS)

    Nemeth, ED; Anderson, Ron; Ols, Joe; Olsasky, Mark

    1991-01-01

    Elements of an advanced functional framework for reusable rocket engine propulsion system control are presented for the Space Shuttle Main Engine (SSME) demonstration case. Functional elements of the baseline functional framework are defined in detail. The SSME failure modes are evaluated and specific failure modes identified for inclusion in the advanced functional framework diagnostic system. Active control of the SSME start transient is investigated, leading to the identification of a promising approach to mitigating start transient excursions. Key elements of the functional framework are simulated and demonstration cases are provided. Finally, the advanced functional framework for control of reusable rocket engines is presented.

  5. Small Microprocessor for ASIC or FPGA Implementation

    NASA Technical Reports Server (NTRS)

    Kleyner, Igor; Katz, Richard; Blair-Smith, Hugh

    2011-01-01

    A small microprocessor, suitable for use in applications in which high reliability is required, was designed to be implemented in either an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). The design is based on commercial microprocessor architecture, making it possible to use available software development tools and thereby to implement the microprocessor at relatively low cost. The design features enhancements, including trapping during execution of illegal instructions. The internal structure of the design yields relatively high performance, with a significant decrease, relative to other microprocessors that perform the same functions, in the number of microcycles needed to execute macroinstructions. The problem meant to be solved in designing this microprocessor was to provide a modest level of computational capability in a general-purpose processor while adding as little as possible to the power demand, size, and weight of a system into which the microprocessor would be incorporated. As designed, this microprocessor consumes very little power and occupies only a small portion of a typical modern ASIC or FPGA. The microprocessor operates at a rate of about 4 million instructions per second with clock frequency of 20 MHz.

  6. Reduced Design Load Basis for Ultimate Blade Loads Estimation in Multidisciplinary Design Optimization Frameworks

    NASA Astrophysics Data System (ADS)

    Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth

    2016-09-01

    The aim is to provide a fast and reliable approach to estimating ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large number of computationally expensive simulations, which cannot be efficiently run at each cost function evaluation of an MDO process. This work describes a method that allows integrating the calculation of the blade load envelopes inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and a deterministic reduced DLB. Ultimate loads extracted from the two DLBs with the two blade designs are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap and that can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.
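
    The envelope extraction itself reduces to taking extremes over load cases and time; the sketch below does this for a single load channel with synthetic time series, where the load-case names, numbers, and choice of reduced subset are placeholders rather than an actual DLB.

        # Ultimate load from a full set of load cases vs. a reduced subset.
        import numpy as np

        rng = np.random.default_rng(0)
        # One synthetic root-moment time series (kNm) per load case.
        full_dlb = {f"dlc_{i}": rng.normal(0.0, 1.0 + 0.1 * i, 6000) for i in range(12)}
        reduced_dlb = {k: full_dlb[k] for k in ("dlc_3", "dlc_7", "dlc_11")}  # cheap subset

        def ultimate_load(dlb):
            """Largest absolute load over all cases and all time steps."""
            return max(np.abs(series).max() for series in dlb.values())

        print("full DLB   :", round(ultimate_load(full_dlb), 2))
        print("reduced DLB:", round(ultimate_load(reduced_dlb), 2))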

  7. A Hierarchical Biology Concept Framework: A Tool for Course Design

    PubMed Central

    Khodor, Julia; Halme, Dina Gould; Walker, Graham C.

    2004-01-01

    A typical undergraduate biology curriculum covers a very large number of concepts and details. We describe the development of a Biology Concept Framework (BCF) as a possible way to organize this material to enhance teaching and learning. Our BCF is hierarchical, places details in context, nests related concepts, and articulates concepts that are inherently obvious to experts but often difficult for novices to grasp. Our BCF is also cross-referenced, highlighting interconnections between concepts. We have found our BCF to be a versatile tool for design, evaluation, and revision of course goals and materials. There has been a call for creating Biology Concept Inventories, multiple-choice exams that test important biology concepts, analogous to those in physics, astronomy, and chemistry. We argue that the community of researchers and educators must first reach consensus about not only what concepts are important to test, but also how the concepts should be organized and how that organization might influence teaching and learning. We think that our BCF can serve as a catalyst for community-wide discussion on organizing the vast number of concepts in biology, as a model for others to formulate their own BCFs and as a contribution toward the creation of a comprehensive BCF. PMID:15257339

  8. An Instructional Design Framework for Fostering Student Engagement in Online Learning Environments

    ERIC Educational Resources Information Center

    Czerkawski, Betul C.; Lyman, Eugene W.

    2016-01-01

    Many approaches, models and frameworks exist when designing quality online learning environments. These approaches assist and guide instructional designers through the process of analysis, design, development, implementation and evaluation of instructional processes. Some of these frameworks are concerned with student participation, some with…

  9. A programmable controller based on CAN field bus embedded microprocessor and FPGA

    NASA Astrophysics Data System (ADS)

    Cai, Qizhong; Guo, Yifeng; Chen, Wenhei; Wang, Mingtao

    2008-10-01

    A new kind of programmable controller (PLC) is introduced in this paper. An advanced embedded microprocessor and a Field-Programmable Gate Array (FPGA) device are applied in the PLC system. The PLC system structure is presented: it includes a 32-bit Advanced RISC Machines (ARM) embedded microprocessor as the control core, an FPGA as the control-arithmetic coprocessor, and a CAN bus as the standard data communication protocol connecting the host controller and its various extension modules. The circuits and working principle are given in detail, including the I/O interface circuit between the ARM and the FPGA and the interface circuit between the ARM and the FPGA coprocessor, and the interface circuit diagrams between the various modules are provided. In addition, it is described how the ladder-chart program controls the transfer of information to the control-arithmetic part in the FPGA coprocessor. Through nearly two months of operation, the PLC has met the basic design requirements.

  10. Rethinking modeling framework design: object modeling system 3.0

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Object Modeling System (OMS) is a framework for environmental model development, data provisioning, testing, validation, and deployment. It provides a bridge for transferring technology from the research organization to the program delivery agency. The framework provides a consistent and efficie...

  11. Design and implementation of an algorithm for creating templates for the purpose of iris biometric authentication through the analysis of textures implemented on a FPGA

    NASA Astrophysics Data System (ADS)

    Giacometto, F. J.; Vilardy, J. M.; Torres, C. O.; Mattos, L.

    2011-01-01

    Problems related to security in access control are currently being addressed and, as a consequence, applications have been developed that work with characteristics unique to individuals, such as biometric features. Working with biometric images, such as the liveness of the iris captured in both the pattern of the retinal images and the blood vessels, has become important worldwide. This paper presents an FPGA implementation of an algorithm for creating templates for biometric authentication from ocular features, in which the object of study is the texture pattern of the iris, which is unique to each individual. The authentication is based on processes such as edge extraction methods, the segmentation principles of John Daugman and Libor Masek, and normalization to obtain the templates necessary for searching for matches in a database and then obtaining the expected authentication results.

  12. A low-power wave union TDC implemented in FPGA

    SciTech Connect

    Wu, Jinyuan; Shi, Yanchen; Zhu, Douglas; /Illinois Math. Sci. Acad.

    2011-10-01

    A low-power time-to-digital converter (TDC) for an application inside a vacuum has been implemented based on the Wave Union TDC scheme in a low-cost field programmable gate array (FPGA) device. Bench top tests have shown that a time measurement resolution better than 30 ps (standard deviation of time differences between two channels) is achieved. Special firmware design practices are taken to reduce power consumption. The measurements indicate that with 32 channels fitting in the FPGA device, the power consumption on the FPGA core voltage is approximately 9.3 mW/channel and the total power consumption including both core and I/O banks is less than 27 mW/channel.
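
    The resolution figure quoted above is the standard deviation of time differences between two channels measuring the same edges, which can be checked on timestamp data in a couple of lines; the per-channel jitter injected below is synthetic, chosen only to land near the reported figure.

        # Two-channel TDC resolution: std of the difference between channels.
        import numpy as np

        rng = np.random.default_rng(1)
        true_edges = np.arange(10_000) * 1.0e-6                       # one edge per microsecond
        ch_a = true_edges + rng.normal(0.0, 21e-12, true_edges.size)  # assumed per-channel jitter
        ch_b = true_edges + rng.normal(0.0, 21e-12, true_edges.size)

        resolution = np.std(ch_a - ch_b)
        print(f"two-channel resolution: {resolution * 1e12:.1f} ps")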

  13. A Multi-Gigabit Parallel Demodulator and Its FPGA Implementation

    NASA Astrophysics Data System (ADS)

    Lin, Changxing; Zhang, Jian; Shao, Beibei

    This letter presents the architecture of a multi-gigabit parallel demodulator suitable for demodulating high-order QAM modulated signals and easy to implement on an FPGA platform. The parallel architecture is based on a frequency-domain implementation of the matched filter and timing phase correction. A parallel FIFO-based delete-keep algorithm is proposed for timing synchronization, while a reduced-constellation phase-frequency-detector-based parallel decision-feedback PLL is designed for carrier synchronization. A fully pipelined parallel adaptive blind equalization algorithm is also proposed. Their parallel implementation structures suitable for an FPGA platform are investigated. In a demonstration of a 2 Gbps demodulator for 16QAM modulation, the architecture is implemented and validated on a Xilinx V6 FPGA platform with a performance loss of less than 2 dB.
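
    The frequency-domain matched filter at the heart of the parallel architecture amounts to multiplying the signal spectrum by the conjugate spectrum of the pulse and transforming back; the short NumPy sketch below illustrates the operation with a made-up pulse shape and noise level.

        # Frequency-domain matched filtering: correlation as a product of spectra.
        import numpy as np

        rng = np.random.default_rng(2)
        pulse = np.array([1, 1, -1, 1, -1, -1, 1, 1], dtype=float)    # illustrative symbol shape
        signal = np.concatenate([np.zeros(20), pulse, np.zeros(20)])
        signal += 0.2 * rng.normal(size=signal.size)                  # additive noise

        n = signal.size
        spectrum = np.fft.fft(signal) * np.conj(np.fft.fft(pulse, n))
        matched = np.fft.ifft(spectrum).real                          # correlation output

        print("peak at sample", int(np.argmax(matched)))              # ~ pulse start (20)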

  14. A digital pulsar backend based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Jin-Tao; Chen, Lan; Han, Jin-Lin; Esamdin, Ali; Wu, Ya-Jun; Li, Zhi-Xuan; Hao, Long-Fei; Zhang, Xiu-Zhong

    2017-01-01

    A digital pulsar backend based on a Field Programmable Gate Array (FPGA) is developed. It is designed for incoherent de-dispersion of pulsar observations and has a maximum bandwidth of 512 MHz. The channel bandwidth is fixed to 1 MHz, and the highest time resolution is 10 μs. Testing observations were carried out using the Urumqi 25-m telescope administered by Xinjiang Astronomical Observatory and the Kunming 40-m telescope administered by Yunnan Observatories, targeting PSR J0332+5434 in the L band and PSR J0437–4715 in the S band, respectively. The successful observation of PSR J0437–4715 demonstrates its ability to observe millisecond pulsars.
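
    Incoherent de-dispersion works by delaying each narrow frequency channel according to the cold-plasma dispersion law before summing over frequency; the sketch below computes those per-channel delays for 512 channels of 1 MHz, with the dispersion measure and band edge chosen as illustrative values rather than the actual observing setup.

        # Per-channel dispersion delays for incoherent de-dispersion.
        import numpy as np

        DM = 26.8                    # pc cm^-3, illustrative dispersion measure
        K_DM = 4.148808              # ms GHz^2 cm^3 pc^-1, dispersion constant
        f_top = 1.450                # GHz, assumed top of the observing band
        channels = f_top - 0.001 * np.arange(512)      # 512 channels, 1 MHz wide

        # Delay of each channel relative to the highest-frequency channel.
        delay_ms = K_DM * DM * (channels ** -2.0 - f_top ** -2.0)
        print(f"max delay across the band: {delay_ms.max():.2f} ms")

        # De-dispersion then shifts each channel's time series by
        # round(delay_ms / sample_time) samples before summing over frequency.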

  15. FPGA based fast synchronous serial multi-wire links synchronization

    NASA Astrophysics Data System (ADS)

    Pozniak, Krzysztof T.

    2013-10-01

    The paper discusses a synchronization method for multi-wire, constant-latency serial links by means of pseudo-random number generators. The solution was designed for various families of FPGA circuits. The synchronization algorithm and the functional structure of the parameterized transmitter and receiver modules are described. The modules were realized in the VHDL language in a behavioral form.

  16. Reconfigurable Computing for Embedded Systems, FPGA Devices and Software Components

    DTIC Science & Technology

    2007-11-02

    Graham Bardouleau and James Kulp, Mercury Computer Systems. This paper describes the approach taken at Mercury to develop such a middleware and framework that supports the execution...

  17. Development and Application of a Systems Engineering Framework to Support Online Course Design and Delivery

    ERIC Educational Resources Information Center

    Bozkurt, Ipek; Helm, James

    2013-01-01

    This paper develops a systems engineering-based framework to assist in the design of an online engineering course. Specifically, the purpose of the framework is to provide a structured methodology for the design, development and delivery of a fully online course, either brand new or modified from an existing face-to-face course. The main strength…

  18. Unified Simulation and Analysis Framework for Deep Space Navigation Design

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan; Chuang, Jason; Olsen, Carrie

    2013-01-01

    As the technology that enables advanced deep space autonomous navigation continues to develop and the requirements for such capability continue to grow, there is a clear need for a modular, expandable simulation framework. This tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems as well as to develop system requirements, in order to determine its effect on the sizing of the integrated vehicle. The development of such a framework is built upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and possible state measurements and observations to feed into the simulation implementation structure. These models also allow a common environment for the capture of an increasingly complex operational architecture, involving multiple spacecraft, ground stations, and communication networks. In order to address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions amongst spacecraft. This paper describes the development of this framework, and the modeling processes used to capture a deep space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. This developed package shows the capability of the modeling framework, including its modularity, analysis capabilities, and its unification back to the overall system requirements and definition.

  19. STRS SpaceWire FPGA Module

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.

    2011-01-01

    An FPGA module leverages the previous work from Goddard Space Flight Center (GSFC) relating to NASA's Space Telecommunications Radio System (STRS) project. The STRS SpaceWire FPGA Module is written in the Verilog Register Transfer Level (RTL) language, and it encapsulates an unmodified GSFC core (which is written in VHDL). The module has the necessary inputs/outputs (I/Os) and parameters to integrate seamlessly with the SPARC I/O FPGA Interface module (also developed for the STRS operating environment, OE). Software running on the SPARC processor can access the configuration and status registers within the SpaceWire module. This allows software to control and monitor the SpaceWire functions, but it is also used to give software direct access to what is transmitted and received through the link. SpaceWire data characters can be sent/received through the software interface, as well as through the dedicated interface on the GSFC core. Similarly, SpaceWire time codes can be sent/received through the software interface or through a dedicated interface on the core. This innovation is designed for plug-and-play integration in the STRS OE. The SpaceWire module simplifies the interfaces to the GSFC core, and synchronizes all I/O to a single clock. An interrupt output (with optional masking) identifies time-sensitive events within the module. Test modes were added to allow internal loopback of the SpaceWire link and internal loopback of the client-side data interface.

  20. Design Framework for an Adaptive MOOC Enhanced by Blended Learning: Supplementary Training and Personalized Learning for Teacher Professional Development

    ERIC Educational Resources Information Center

    Gynther, Karsten

    2016-01-01

    The research project has developed a design framework for an adaptive MOOC that complements the MOOC format with blended learning. The design framework consists of a design model and a series of learning design principles which can be used to design in-service courses for teacher professional development. The framework has been evaluated by…

  1. Performance analysis and acceleration of cross-correlation computation using FPGA implementation for digital signal processing

    NASA Astrophysics Data System (ADS)

    Selma, R.

    2016-09-01

    The paper describes a comparison of the cross-correlation computation speed of the most commonly used computation platforms (CPU, GPU) with an FPGA-based design. It also describes the structure of the cross-correlation unit implemented for testing purposes. A speedup was achieved using the FPGA-based design, varying between 16 and 5400 times compared to CPU computations and between 3 and 175 times compared to GPU computations.
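
    For readers who want to reproduce the CPU side of such a comparison, the hedged Python sketch below times a direct (time-domain) cross-correlation, which mirrors the multiply-accumulate structure an FPGA pipeline parallelizes, against an FFT-based one. The signal lengths and the use of SciPy are assumptions for illustration only; the paper's own measurements were made on dedicated hardware.

      import time
      import numpy as np
      from scipy.signal import correlate

      rng = np.random.default_rng(0)
      x = rng.standard_normal(1 << 14)   # e.g. a captured signal buffer
      y = rng.standard_normal(1 << 10)   # reference template

      t0 = time.perf_counter()
      c_direct = correlate(x, y, mode='full', method='direct')  # O(N*M) multiply-accumulate
      t1 = time.perf_counter()
      c_fft = correlate(x, y, mode='full', method='fft')        # FFT-based equivalent
      t2 = time.perf_counter()

      assert np.allclose(c_direct, c_fft)
      print(f"direct: {t1 - t0:.4f} s, FFT: {t2 - t1:.4f} s")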

  2. A fast and accurate FPGA based QRS detection system.

    PubMed

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting the beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time in comparison to the software-based approach.
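
    The sketch below illustrates, in Python, the kind of adaptive thresholding the abstract describes, where the detection threshold is derived from the median of the eight most recently detected peaks. The scaling factor, refractory period, and local-maximum test are assumptions added to make the example self-contained; they are not taken from the paper.

      import numpy as np
      from collections import deque

      def detect_qrs(ecg, fs, init_threshold=0.6, refractory_s=0.2):
          # Adaptive-threshold peak detector: the threshold tracks the median of the
          # last 8 detected peak amplitudes (scaling factor 0.5 is assumed).
          peaks = []
          recent = deque(maxlen=8)
          threshold = init_threshold
          refractory = int(refractory_s * fs)
          last_peak = -refractory
          for i in range(1, len(ecg) - 1):
              local_max = ecg[i] > ecg[i - 1] and ecg[i] >= ecg[i + 1]
              if local_max and ecg[i] > threshold and i - last_peak > refractory:
                  peaks.append(i)
                  recent.append(ecg[i])
                  threshold = 0.5 * np.median(recent)
                  last_peak = i
          return peaks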

  3. Analysis of an innovative user threshold programmable photoreceiver monolithically integrated in a multitechnology field programmable gate array (MT-FPGA)

    NASA Astrophysics Data System (ADS)

    Mal, Prosenjit; Bhadri, Prashant R.; Beyette, Fred R., Jr.

    2004-10-01

    In the past decade, Field Programmable Gate Arrays (FPGAs) have significantly influenced the landscape of the electronic industry, in particular in the areas of semiconductor manufacturing, CAD tool design, and a wide range of digital logic applications. Research efforts in the FPGA community have primarily concentrated on improving the reconfigurability or programmability of present-day architectures for digital applications. However, the digital nature of FPGA technologies limits their applicability to the many applications that depend on analog circuitry, photonic, and RF-based technologies. As with any ASIC design, the turn-around time between design iterations may be several months, which is prohibitively long for multi-technology test-bed systems where the system designer depends on a rapid prototyping/experimentation environment that allows for optimization of processing algorithms and system architecture. Therefore, we developed an innovative FPGA architecture that merges conventional FPGA technology with mixed-signal and other multi-technology devices. In this paper we discuss the Multi-Technology FPGA (MT-FPGA) architecture, which gives the user a flexible rapid prototyping environment and the benefits of a conventional FPGA in a mixed-signal domain. We substantiate this concept by implementing the architecture in a TSMC 0.35 μm process and discussing the results of a variable-threshold optical receiver circuit suitable for photonic information processing.

  4. The FPGA realization of a real-time Bayer image restoration algorithm with better performance

    NASA Astrophysics Data System (ADS)

    Ma, Huaping; Liu, Shuang; Zhou, Jiangyong; Tang, Zunlie; Deng, Qilin; Zhang, Hongliu

    2014-11-01

    As FPGA implementations of Bayer color interpolation algorithms have become widespread, better performance, real-time processing, and lower resource consumption have become the users' main goals. In order to achieve high-speed, high-quality Bayer image restoration with low resource consumption, the color reconstruction is designed and optimized in this article from both the interpolation algorithm and its FPGA realization. The hardware realization is then completed on an FPGA development platform, and real-time, high-fidelity image processing with low resource consumption is achieved in the embedded image acquisition system.
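
    As background on what a Bayer color interpolation stage computes, the Python sketch below performs plain bilinear demosaicing of an RGGB mosaic using small convolution kernels. It is a minimal software reference, assuming an RGGB pattern and float input; the optimized interpolation used in the article's FPGA design is not specified here.

      import numpy as np
      from scipy.ndimage import convolve

      def demosaic_bilinear_rggb(raw):
          # raw: H x W float array holding an RGGB Bayer mosaic.
          H, W = raw.shape
          r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1
          b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1
          g_mask = 1 - r_mask - b_mask
          k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
          k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
          g = convolve(raw * g_mask, k_g, mode='mirror')   # average of the 4 axial neighbours
          r = convolve(raw * r_mask, k_rb, mode='mirror')  # axial/diagonal averaging
          b = convolve(raw * b_mask, k_rb, mode='mirror')
          return np.dstack([r, g, b])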

  5. MicroBlaze implementation of GPS/INS integrated system on Virtex-6 FPGA.

    PubMed

    Bhogadi, Lokeswara Rao; Gottapu, Sasi Bhushana Rao; Konala, Vvs Reddy

    2015-01-01

    The emphasis of this paper is on the MicroBlaze implementation of a GPS/INS integrated system on a Virtex-6 field programmable gate array (FPGA). Issues related to position accuracy, FPGA resource usage in terms of slices, DSP48s and block random access memory, computation time, latency, and power consumption are presented. An improved design of a loosely coupled GPS/INS integrated system is described in this paper. The inertial navigation solution and Kalman filter computations are provided by the MicroBlaze on the Virtex-6 FPGA. The navigation solutions are processed in real time and updated at a rate of 100 Hz.
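
    A loosely coupled integration of the kind described above blends a high-rate inertial prediction with low-rate GPS fixes through a Kalman filter. The one-dimensional Python sketch below shows that predict/update split; the state model, noise covariances, and update rates are illustrative assumptions and are far simpler than the filter running on the MicroBlaze.

      import numpy as np

      dt = 0.01                                 # 100 Hz navigation rate
      F = np.array([[1, dt], [0, 1]])           # [position, velocity] transition
      B = np.array([[0.5 * dt**2], [dt]])       # INS acceleration input
      H = np.array([[1.0, 0.0]])                # GPS measures position only
      Q = 1e-4 * np.eye(2)                      # process noise (assumed)
      R = np.array([[4.0]])                     # GPS position variance (assumed)

      def predict(x, P, accel):
          # INS mechanization step, run every navigation cycle.
          x = F @ x + B * accel
          P = F @ P @ F.T + Q
          return x, P

      def gps_update(x, P, z):
          # Correction step, run whenever a GPS position fix arrives.
          y = np.array([[z]]) - H @ x
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P
          return x, P

      x, P = np.zeros((2, 1)), np.eye(2)
      for _ in range(100):                      # one second of INS propagation
          x, P = predict(x, P, accel=0.1)       # assumed constant acceleration
      x, P = gps_update(x, P, z=0.05)           # one GPS position fix (metres)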

  6. US Army Research Laboratory Visualization Framework Design Document

    DTIC Science & Technology

    2016-01-01

    mechanism. The framework provides for automated discovery of probes by the visualization without prior knowledge of the probes. This report documents... rebroadcast discovery messages from other agents, enabling discovery of multiple modules through a single configuration agent.

  7. Evidence-Based mHealth Chronic Disease Mobile App Intervention Design: Development of a Framework

    PubMed Central

    Peeples, Malinda M; Anthony Kouyaté, Robin C

    2016-01-01

    Background Mobile technology offers new capabilities that can help to drive important aspects of chronic disease management at both an individual and population level, including the ability to deliver real-time interventions that can be connected to a health care team. A framework that supports both development and evaluation is needed to understand the aspects of mHealth that work for specific diseases, populations, and in the achievement of specific outcomes in real-world settings. This framework should incorporate design structure and process, which are important to translate clinical and behavioral evidence, user interface, experience design and technical capabilities into scalable, replicable, and evidence-based mobile health (mHealth) solutions to drive outcomes. Objective The purpose of this paper is to discuss the identification and development of an app intervention design framework, and its subsequent refinement through development of various types of mHealth apps for chronic disease. Methods The process of developing the framework was conducted between June 2012 and June 2014. Informed by clinical guidelines, standards of care, clinical practice recommendations, evidence-based research, best practices, and translated by subject matter experts, a framework for mobile app design was developed and the refinement of the framework across seven chronic disease states and three different product types is described. Results The result was the development of the Chronic Disease mHealth App Intervention Design Framework. This framework allowed for the integration of clinical and behavioral evidence for intervention and feature design. The application to different diseases and implementation models guided the design of mHealth solutions for varying levels of chronic disease management. Conclusions The framework and its design elements enable replicable product development for mHealth apps and may provide a foundation for the digital health industry to

  8. A design thinking framework for healthcare management and innovation.

    PubMed

    Roberts, Jess P; Fisher, Thomas R; Trowbridge, Matthew J; Bent, Christine

    2016-03-01

    The business community has learned the value of design thinking as a way to innovate in addressing people's needs--and health systems could benefit enormously from doing the same. This paper lays out how design thinking applies to healthcare challenges and how systems might utilize this proven and accessible problem-solving process. We show how design thinking can foster new approaches to complex and persistent healthcare problems through human-centered research, collective and diverse teamwork and rapid prototyping. We introduce the core elements of design thinking for a healthcare audience and show how it can supplement current healthcare management, innovation and practice.

  9. ROSE: The Design of a General Tool for the Independent Optimization of Object-Oriented Frameworks

    SciTech Connect

    Davis, K.; Philip, B.; Quinlan, D.

    1999-05-18

    ROSE represents a programmable preprocessor for the highly aggressive optimization of C++ object-oriented frameworks. A fundamental feature of ROSE is that it preserves the semantics, the implicit meaning, of the object-oriented framework's abstractions throughout the optimization process, permitting the framework's abstractions to be recognized and optimizations to capitalize upon the added value of the framework's true meaning. In contrast, a C++ compiler only sees the semantics of the C++ language and thus is severely limited in what optimizations it can introduce. The use of the semantics of the framework's abstractions avoids program analysis that would be incapable of recapturing the framework's full semantics from the C++ language implementation of the application or framework; for example, no level of program analysis within the C++ compiler could be expected to recognize the use of adaptive mesh refinement and introduce optimizations based upon such information. Since ROSE is programmable, additional specialized program analysis is possible, which then complements the semantics of the framework's abstractions. Enabling an optimization mechanism to use the high-level semantics of the framework's abstractions together with a programmable level of program analysis (e.g. dependence analysis), at the level of the framework's abstractions, allows for the design of high-performance object-oriented frameworks with uniquely tailored, sophisticated optimizations far beyond the limits of contemporary serial FORTRAN 77, C or C++ compiler technology. In short, faster, more highly aggressive optimizations are possible. The resulting optimizations are literally driven by the framework's definition of its abstractions. Since the abstractions within a framework are of third-party design, the optimizations are similarly of third-party design, specifically independent of the compiler and the applications that use the framework. The interface to ROSE is

  10. "Light Green Doesn't Mean Hydrology!": Toward a Visual-Rhetorical Framework for Interface Design.

    ERIC Educational Resources Information Center

    Spinuzzi, Clay

    2001-01-01

    Examines metaphor's limitations as a visual-rhetorical framework for designing, evaluating, and critiquing user interfaces. Outlines an alternate framework for visual rhetoric, that of genre ecologies, and discusses how it avoids some of the limitations of metaphor. Uses an empirical study of computer users to illustrate the genre-ecology…

  11. Evaluating a Professional Development Framework to Empower Chemistry Teachers to Design Context-Based Education

    ERIC Educational Resources Information Center

    Stolk, Machiel Johan; Bulte, Astrid; De Jong, Onno; Pilot, Albert

    2012-01-01

    Even experienced chemistry teachers require professional development when they are encouraged to become actively engaged in the design of new context-based education. This study briefly describes the development of a framework consisting of goals, learning phases, strategies and instructional functions, and how the framework was translated into a…

  12. An Integration of "Backwards Planning" Unit Design with the "Two-Step" Lesson Planning Framework

    ERIC Educational Resources Information Center

    Jones, Karrie A.; Vermette, Paul J.; Jones, Jennifer L.

    2009-01-01

    Planning engaging and effective lessons for middle and high school learners is one of the fundamental components of successful secondary teaching (Skowron 2001; Butt, 2006). While Wiggins & McTighe (1998) have set forth a framework for "backwards planning" in unit design, this article provides a framework for employing backwards planning in…

  13. Investigating the Reading Practices of EFL Yemeni Students Using the Learning by Design Framework

    ERIC Educational Resources Information Center

    Bhooth, Abdullah Mohammad; Azman, Hazita; Ismail, Kemboja

    2015-01-01

    This article investigates the reading practices of 45 EFL Yemeni students using the "learning by design" framework. The framework organizes the teaching and learning of literacy into four processes: experiencing, conceptualising, analysing, and applying. Quantitative and qualitative methods were used to collect data on a sample of…

  14. Adventure Learning and Learner-Engagement: Frameworks for Designers and Educators

    ERIC Educational Resources Information Center

    Henrickson, Jeni; Doering, Aaron

    2013-01-01

    There is a recognized need for theoretical frameworks that can guide designers and educators in the development of engagement-rich learning experiences that incorporate emerging technologies in pedagogically sound ways. This study investigated one such promising framework, adventure learning (AL). Data were gathered via surveys, interviews, direct…

  15. A Conceptual Framework for Educational Design at Modular Level to Promote Transfer of Learning

    ERIC Educational Resources Information Center

    Botma, Yvonne; Van Rensburg, G. H.; Coetzee, I. M.; Heyns, T.

    2015-01-01

    Students bridge the theory-practice gap when they apply in practice what they have learned in class. A conceptual framework was developed that can serve as foundation to design for learning transfer at modular level. The framework is based on an adopted and adapted systemic model of transfer of learning, existing learning theories, constructive…

  16. A Graphics Design Framework to Visualize Multi-Dimensional Economic Datasets

    ERIC Educational Resources Information Center

    Chandramouli, Magesh; Narayanan, Badri; Bertoline, Gary R.

    2013-01-01

    This study implements a prototype graphics visualization framework to visualize multidimensional data. This graphics design framework serves as a "visual analytical database" for visualization and simulation of economic models. One of the primary goals of any kind of visualization is to extract useful information from colossal volumes of…

  17. Towards a Theory-Based Design Framework for an Effective E-Learning Computer Programming Course

    ERIC Educational Resources Information Center

    McGowan, Ian S.

    2016-01-01

    Built on Dabbagh (2005), this paper presents a four component theory-based design framework for an e-learning session in introductory computer programming. The framework, driven by a body of exemplars component, emphasizes the transformative interaction between the knowledge building community (KBC) pedagogical model, a mixed instructional…

  18. A Framework for the Design and Integration of Collaborative Classroom Games

    ERIC Educational Resources Information Center

    Echeverria, Alejandro; Garcia-Campo, Cristian; Nussbaum, Miguel; Gil, Francisca; Villalta, Marco; Amestica, Matias; Echeverria, Sebastian

    2011-01-01

    The progress registered in the use of video games as educational tools has not yet been successfully transferred to the classroom. In an attempt to close this gap, a framework was developed that assists in the design and classroom integration of educational games. The framework addresses both the educational dimension and the ludic dimension. The…

  19. Partial reconfiguration of concurrent logic controllers implemented in FPGA devices

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Grobelna, Iwona; Stefanowicz, Łukasz

    2016-12-01

    Reconfigurable systems have recently been used in many domains. Although the concept of multi-context logic controllers is relatively new, the subject is receiving a lot of attention, especially in industry. This work constitutes a stepping stone in the design of reconfigurable logic controllers implemented in FPGA devices. An approach to designing logic controllers oriented toward subsequent partial reconfiguration is proposed. A case study of a milling machine is used as an illustration.

  20. Designing School Accountability Systems: Towards a Framework and Process.

    ERIC Educational Resources Information Center

    Gong, Brian

    This document presents three different views of accountability to address state needs as their departments of education design, improve, or review their state accountability and reporting systems. The first of three sections presents the system-design decision process as a linear sequence of ten steps from defining the purposes of the…

  1. An Exposition of Current Mobile Learning Design Guidelines and Frameworks

    ERIC Educational Resources Information Center

    Teall, Ed; Wang, Minjuan; Callaghan, Vic; Ng, Jason W. P.

    2014-01-01

    As mobile devices with wireless access become more readily available, learning delivered via mobile devices of all types must be designed to ensure successful learning. This paper first examines three questions related to the design of mobile learning: 1) what mobile learning (m-learning) guidelines can be identified in the current literature, 2)…

  2. Adapting the Mathematical Task Framework to Design Online Didactic Objects

    ERIC Educational Resources Information Center

    Bowers, Janet; Bezuk, Nadine; Aguilar, Karen

    2011-01-01

    Designing didactic objects involves imagining how students can conceive of specific mathematical topics and then imagining what types of classroom discussions could support these mental constructions. This study investigated whether it was possible to design Java applets that might serve as didactic objects to support online learning where…

  3. A Framework for Web 2.0 Learning Design

    ERIC Educational Resources Information Center

    Bower, Matt; Hedberg, John G.; Kuswara, Andreas

    2010-01-01

    This paper describes an approach to conceptualising and performing Web 2.0-enabled learning design. Based on the Technological, Pedagogical and Content Knowledge model of educational practice, the approach conceptualises Web 2.0 learning design by relating Anderson and Krathwohl's Taxonomy of Learning, Teaching and Assessing, and different types…

  4. Design and Performance Frameworks for Constructing Problem-Solving Simulations

    ERIC Educational Resources Information Center

    Stevens, Rons; Palacio-Cayetano, Joycelin

    2003-01-01

    Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks…

  5. Active FPGA Security Through Decoy Circuits

    DTIC Science & Technology

    2006-03-01

    FPGA and is reported in the units provided by the FPGA software that converts a circuit schematic and/or VHDL code to an FPGA programming file. Power...described by truth or state tables and by Boolean Equations, in a gate-level representation, and in existing VHDL code are provided. The method for...The following is the VHDL code for a Combination Lock with eight states and three inputs. -- original state machine code from Doug Hodson’s -- L

  6. Computing Models for FPGA-Based Accelerators

    PubMed Central

    Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt

    2011-01-01

    Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152

  7. Presence+Experience: A Framework for the Purposeful Design of Presence in Online Courses

    ERIC Educational Resources Information Center

    Dunlap, Joanna C.; Verma, Geeta; Johnson, Heather Lynn

    2016-01-01

    In this article, we share a framework for the purposeful design of presence in online courses. Instead of developing something new, we looked at two models that have helped us with previous instructional design projects, providing us with some assurance that the design decisions we were making were fundamentally sound. As we began to work with the…

  8. Design of Mobile Augmented Reality in Health Care Education: A Theory-Driven Framework

    PubMed Central

    Lilienthal, Anneliese; Shluzas, Lauren Aquino; Masiello, Italo; Zary, Nabil

    2015-01-01

    Background Augmented reality (AR) is increasingly used across a range of subject areas in health care education as health care settings partner to bridge the gap between knowledge and practice. As the first contact with patients, general practitioners (GPs) are important in the battle against a global health threat, the spread of antibiotic resistance. AR has potential as a practical tool for GPs to combine learning and practice in the rational use of antibiotics. Objective This paper was driven by learning theory to develop a mobile augmented reality education (MARE) design framework. The primary goal of the framework is to guide the development of AR educational apps. This study focuses on (1) identifying suitable learning theories for guiding the design of AR education apps, (2) integrating learning outcomes and learning theories to support health care education through AR, and (3) applying the design framework in the context of improving GPs’ rational use of antibiotics. Methods The design framework was first constructed with the conceptual framework analysis method. Data were collected from multidisciplinary publications and reference materials and were analyzed with directed content analysis to identify key concepts and their relationships. Then the design framework was applied to a health care educational challenge. Results The proposed MARE framework consists of three hierarchical layers: the foundation, function, and outcome layers. Three learning theories—situated, experiential, and transformative learning—provide foundational support based on differing views of the relationships among learning, practice, and the environment. The function layer depends upon the learners’ personal paradigms and indicates how health care learning could be achieved with MARE. The outcome layer analyzes different learning abilities, from knowledge to the practice level, to clarify learning objectives and expectations and to avoid teaching pitched at the wrong level

  9. FPGA Simulation Engine for Customized Construction of Neural Microcircuits.

    PubMed

    Blair, Hugh T; Cong, Jason; Wu, Di

    2013-04-01

    In this paper we describe an FPGA-based platform for high-performance and low-power simulation of neural microcircuits composed of integrate-and-fire (IAF) neurons. Based on high-level synthesis, our platform uses design templates to map hierarchies of neuron models to logic fabrics. This approach bypasses high design complexity and enables easy optimization and design space exploration. We demonstrate the benefits of our platform by simulating a variety of neural microcircuits that perform oscillatory path integration, which evidence suggests may be a critical building block of the navigation system inside a rodent's brain. Experiments show that our FPGA simulation engine for oscillatory neural microcircuits can achieve up to 39× speedup compared to software benchmarks on a commodity CPU, and 232× energy reduction compared to an embedded ARM core.
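
    For orientation, the Python sketch below simulates a single forward-Euler integrate-and-fire neuron of the kind the platform maps to logic. All parameters (membrane time constant, resistance, thresholds, time step) are generic textbook assumptions, not values from the paper's design templates.

      import numpy as np

      def simulate_iaf(I, dt=1e-4, tau=0.02, R=1e7,
                       v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
          # Leaky integrate-and-fire: tau * dv/dt = -(v - v_rest) + R * I(t).
          v = v_rest
          spikes, trace = [], np.empty(len(I))
          for t, i_in in enumerate(I):
              v += (-(v - v_rest) + R * i_in) * dt / tau
              if v >= v_thresh:        # threshold crossing -> spike and reset
                  spikes.append(t * dt)
                  v = v_reset
              trace[t] = v
          return trace, spikes

      # 200 ms of constant 2 nA input drives regular spiking
      trace, spike_times = simulate_iaf(np.full(2000, 2e-9))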

  10. A sampling design framework for monitoring secretive marshbirds

    USGS Publications Warehouse

    Johnson, D.H.; Gibbs, J.P.; Herzog, M.; Lor, S.; Niemuth, N.D.; Ribic, C.A.; Seamans, M.; Shaffer, T.L.; Shriver, W.G.; Stehman, S.V.; Thompson, W.L.

    2009-01-01

    A framework for a sampling plan for monitoring marshbird populations in the contiguous 48 states is proposed here. The sampling universe is the breeding habitat (i.e. wetlands) potentially used by marshbirds. Selection protocols would be implemented within large geographical strata, such as Bird Conservation Regions. Site selection will be done using a two-stage cluster sample. Primary sampling units (PSUs) would be land areas, such as legal townships, and would be selected by a procedure such as systematic sampling. Secondary sampling units (SSUs) will be wetlands or portions of wetlands in the PSUs. SSUs will be selected by a randomized, spatially balanced procedure. For analysis, the use of a variety of methods is encouraged as a means of increasing confidence in the conclusions that may be reached. Additional effort will be required to work out details and implement the plan.

  11. Single Event Testing on Complex Devices: Test Like You Fly versus Test-Specific Design Structures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2014-01-01

    We present a framework for evaluating complex digital systems targeted for harsh radiation environments such as space. The focus is limited to analyzing the single event upset (SEU) susceptibility of designs implemented inside Field Programmable Gate Array (FPGA) devices. Tradeoffs between application-specific and test-specific test structures are provided.

  12. A preliminary report of designing removable partial denture frameworks using a specifically developed software package.

    PubMed

    Han, Jing; Wang, Yong; Lü, Peijun

    2010-01-01

    This article reports on a method to digitally survey and build virtual patterns for removable partial denture (RPD) frameworks using a new three-dimensional (3D) computer-aided design/computer-assisted manufacturing (CAD/CAM) software package developed specifically for RPD design. The procedure included obtaining 3D data from partially dentate casts, deciding on the path of insertion, and modeling the shape of the components of the frameworks digitally. The completed model data were stored as stereolithography (STL) files, which are commonly used in transferring CAD/CAM models to rapid prototyping technologies. Finally, metal RPD frameworks were fabricated using a selective laser melting technique.

  13. Photoelectric radar servo control system based on ARM+FPGA

    NASA Astrophysics Data System (ADS)

    Wu, Kaixuan; Zhang, Yue; Li, Yeqiu; Dai, Qin; Yao, Jun

    2016-01-01

    In order to meet the requirements for a smaller, faster, and more responsive photoelectric radar servo control system, we propose a servo controller built around an ARM + FPGA architecture. The parallel processing capability of the FPGA is used for encoder feedback data, PWM carrier modulation, A/B quadrature decoding, and similar tasks, while the ARM embedded system provides a high-speed implementation of the PID algorithm. In actual experiments, the closed-loop speed response of the system reaches 2000 cycles/s, and on a high-precision turntable shaft the PID algorithm achieves servo position control with an accuracy of ±1 encoder count. The article first carries out an in-depth study of the embedded servo control system hardware to select ARM and FPGA chips that meet the pre-measured performance requirements: the ARM chip chosen is Samsung's S3C2440 of the ARM7 architecture and the FPGA is Xilinx's XC3S400. The ARM and FPGA communicate over an SPI bus, which saves a large number of pins and eases later system upgrades. The system acquires speed data through the photoelectric encoder, transfers it via the FPGA to the ARM, converts it into the corresponding position and velocity data in a timely manner, and generates the corresponding PWM waveform to control motor rotation by comparing the measured position and velocity data with the preset values. The schematics and PCB of the photoelectric radar servo control system were drawn and produced according to the system requirements. A PID algorithm is then used to control the servo system: the speed data obtained from the photoelectric encoder are converted into position and speed data via a high-speed digital PID algorithm and coordinate models. Finally, a
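
    The PID position loop described above can be summarized by the short Python sketch below: the error between the commanded and measured encoder position drives proportional, integral, and derivative terms whose sum is clamped to the PWM duty range. The gains, loop rate, and clamping limits are placeholders, not the values used in the ARM implementation.

      class PID:
          def __init__(self, kp, ki, kd, dt, out_min=-1.0, out_max=1.0):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_err = 0.0
              self.out_min, self.out_max = out_min, out_max

          def update(self, setpoint, measurement):
              err = setpoint - measurement
              self.integral += err * self.dt
              derivative = (err - self.prev_err) / self.dt
              self.prev_err = err
              out = self.kp * err + self.ki * self.integral + self.kd * derivative
              return max(self.out_min, min(self.out_max, out))  # clamp to PWM duty range

      # 2 kHz loop, matching the closed-loop rate quoted above; gains are illustrative
      pid = PID(kp=0.8, ki=0.2, kd=0.01, dt=1 / 2000)
      duty = pid.update(setpoint=1000, measurement=993)          # encoder counts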

  14. Evaluating system for SRAM-based FPGA single event upset rate

    NASA Astrophysics Data System (ADS)

    Wang, Yunlong; Bao, Bin

    2016-09-01

    This paper takes static random-access-memory (SRAM) based field-programmable gate arrays (FPGAs) as the research object. Attention is focused on the configuration memory of this kind of FPGA, and the research is devoted to the contents of the configuration memory and to the configuration circuit that manages those contents. A single event upset (SEU) occurring in the configuration memory does not necessarily lead to a functional failure. A dynamic SEU is an SEU that occurs in the configuration memory and does cause a functional failure. This paper introduces a test method for the dynamic SEU rate of SRAM-based FPGAs, based on designing an FPGA with a self-test function.

  15. Optimal Aeroacoustic Shape Design Using the Surrogate Management Framework

    DTIC Science & Technology

    2004-02-09

    wish to thank the IMA for providing a forum for collaboration, as well as Charles Audet and Petros Koumoutsakos for valuable discussions. The authors...17] N. Hansen, D. Müller, and P. Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation...P. Koumoutsakos. Optimal aeroacoustic shape design using approximation modeling. Annual Research Briefs, Center for Turbulence Research, Stanford

  16. A Framework for Promoting Learning in IS Design and Implementation

    ERIC Educational Resources Information Center

    Small, Adrian; Sice, Petia; Venus, Tony

    2008-01-01

    Purpose: The purpose of this paper is to set out an argument for a way to design, implement and manage IS with an emphasis on first, the learning that can be created through undertaking the approach, and second, the learning that may be created through using the IS that was implemented. The paper proposes joining two areas of research namely,…

  17. Toward a More Flexible Web-Based Framework for Multidisciplinary Design

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Salas, A. O.

    1999-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.

  18. Capstone Dichotomies: A Proposed Framework for Characterizing Capstone Design Experiences

    DTIC Science & Technology

    2015-03-18

    discipline has freedom in how they achieve these outcomes, so long as it is a deliberate and traceable approach back to the desired outcomes. This freedom...allows each discipline to tailor their capstone design experience to those appropriate to their domains. When students are developed fully within a...single discipline program that also offers their capstone, the structure promotes alignment of the student, instructor, and advisor expectations. However

  19. The Modern Design of Experiments: A Technical and Marketing Framework

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    A new wind tunnel testing process under development at NASA Langley Research Center, called Modern Design of Experiments (MDOE), differs from conventional wind tunnel testing techniques on a number of levels. Chief among these is that MDOE focuses on the generation of adequate prediction models rather than high-volume data collection. Some cultural issues attached to this and other distinctions between MDOE and conventional wind tunnel testing are addressed in this paper.

  20. Design and Implementation of Telemedicine based on Java Media Framework

    NASA Astrophysics Data System (ADS)

    Xiong, Fengguang; Jia, Zhiyan

    After analyzing the importance of telemedicine and the problems it faces, this paper proposes a telemedicine system based on JMF to design and implement the capture, compression, storage, transmission, reception, and playback of medical audio and video. The telemedicine system can solve existing problems such as medical information not being shared, high platform dependence, software incompatibilities, and so on. Experimental data show that the system has low hardware cost, that its data are easy to transmit and store, and that it is portable and powerful.

  1. A framework for designing a healthcare outcome data warehouse.

    PubMed

    Parmanto, Bambang; Scotch, Matthew; Ahmad, Sjarif

    2005-09-06

    Many healthcare processes involve a series of patient visits or a series of outcomes. The modeling of outcomes associated with these types of healthcare processes is different from and not as well understood as the modeling of standard industry environments. For this reason, the typical multidimensional data warehouse designs that are frequently seen in other industries are often not a good match for data obtained from healthcare processes. Dimensional modeling is a data warehouse design technique that uses a data structure similar to the easily understood entity-relationship (ER) model but is sophisticated in that it supports high-performance data access. In the context of rehabilitation services, we implemented a slight variation of the dimensional modeling technique to make a data warehouse more appropriate for healthcare. One of the key aspects of designing a healthcare data warehouse is finding the right grain (scope) for different levels of analysis. We propose three levels of grain that enable the analysis of healthcare outcomes from highly summarized reports on episodes of care to fine-grained studies of progress from one treatment visit to the next. These grains allow the database to support multiple levels of analysis, which is imperative for healthcare decision making.

  2. Application of FPGA technology to performance limitations in radiation therapy

    NASA Astrophysics Data System (ADS)

    DeMarco, John J.; Smathers, J. B.; Solberg, Tim D.; Casselman, Steve

    1996-10-01

    The field programmable gate array (FPGA) is a promising technology for increasing computation performance by providing for the design of custom chips through programmable logic blocks. This technology was used to implement and test a hardware random number generator (RNG) against four software algorithms. The custom hardware consists of a Sun SBus-based board (EVC) which has been designed around a Xilinx FPGA. A timing analysis indicates the Sun/EVC hardware generator computes 1 × 10^6 random numbers approximately 50 times faster than the multiplicative congruential algorithm. The hardware and software RNGs were also compared using a Monte Carlo photon transport algorithm. For this comparison the Sun/EVC generator produces a performance increase of approximately 2.0 versus the software generators. This comparison is based upon 1 × 10^5 photon histories.
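
    As a reference point for the software side of such a benchmark, the Python sketch below implements a multiplicative congruential generator (Lehmer/MINSTD constants) and uses it to sample exponentially distributed photon step lengths, the basic operation of a Monte Carlo photon transport loop. The constants and the unit mean free path are standard textbook choices, not the specific algorithms benchmarked in the paper.

      import math

      class MCG:
          # Multiplicative congruential generator: x_{n+1} = (a * x_n) mod m.
          M = 2**31 - 1
          A = 16807
          def __init__(self, seed=1):
              self.state = seed % self.M or 1   # state must stay non-zero
          def next_uniform(self):
              self.state = (self.A * self.state) % self.M
              return self.state / self.M        # uniform in (0, 1)

      # Exponentially distributed free-path lengths: s = -mfp * ln(u)
      rng = MCG(seed=12345)
      steps = [-1.0 * math.log(rng.next_uniform()) for _ in range(5)]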

  3. The Skills Framework for the Information Age: Engaging Stakeholders in Curriculum Design

    ERIC Educational Resources Information Center

    von Konsky, Brian R.; Miller, Charlynn; Jones, Asheley

    2016-01-01

    This paper reports on a research project, examining the role of the Skills Framework for the Information Age (SFIA) in Information and Communications Technology (ICT) curriculum design and management. A goal was to investigate how SFIA informs a top-down approach to curriculum design, beginning with a set of skills that define a particular career…

  4. Designing Online Management Education Courses Using the Community of Inquiry Framework

    ERIC Educational Resources Information Center

    Weyant, Lee E.

    2013-01-01

    Online learning has grown as a program delivery option for many colleges and programs of business. The Community of Inquiry (CoI) framework, consisting of three interrelated elements--social presence, cognitive presence, and teaching presence--provides a model to guide business faculty in their online course design. The course design of an online…

  5. The Customer Flow Toolkit: A Framework for Designing High Quality Customer Services.

    ERIC Educational Resources Information Center

    New York Association of Training and Employment Professionals, Albany.

    This document presents a toolkit to assist staff involved in the design and development of New York's one-stop system. Section 1 describes the preplanning issues to be addressed and the intended outcomes that serve as the framework for creation of the customer flow toolkit. Section 2 outlines the following strategies to assist in designing local…

  6. A Framework for the Design of Computer-Assisted Simulation Training for Complex Police Situations

    ERIC Educational Resources Information Center

    Söderström, Tor; Åström, Jan; Anderson, Greg; Bowles, Ron

    2014-01-01

    Purpose: The purpose of this paper is to report progress concerning the design of a computer-assisted simulation training (CAST) platform for developing decision-making skills in police students. The overarching aim is to outline a theoretical framework for the design of CAST to facilitate police students' development of search techniques in…

  7. Serious Games for Higher Education: A Framework for Reducing Design Complexity

    ERIC Educational Resources Information Center

    Westera, W.; Nadolski, R. J.; Hummel, H. G. K.; Wopereis, I. G. J. H.

    2008-01-01

    Serious games open up many new opportunities for complex skills learning in higher education. The inherent complexity of such games, though, requires large efforts for their development. This paper presents a framework for serious game design, which aims to reduce the design complexity at conceptual, technical and practical levels. The approach…

  8. Towards a Conceptual Framework of GBL Design for Engagement and Learning of Curriculum-Based Content

    ERIC Educational Resources Information Center

    Jabbar, Azita Iliya Abdul; Felicia, Patrick

    2016-01-01

    This paper aims to show best practices of GBL design for engagement. It intends to show how teachers can implement GBL in a collaborative, comprehensive and systematic way, in the classrooms, and probably outside the classrooms, based on empirical evidence and theoretical framework designed accordingly. This paper presents the components needed to…

  9. A Framework for the Flexible Content Packaging of Learning Objects and Learning Designs

    ERIC Educational Resources Information Center

    Lukasiak, Jason; Agostinho, Shirley; Burnett, Ian; Drury, Gerrard; Goodes, Jason; Bennett, Sue; Lockyer, Lori; Harper, Barry

    2004-01-01

    This paper presents a platform-independent method for packaging learning objects and learning designs. The method, entitled a Smart Learning Design Framework, is based on the MPEG-21 standard, and uses IEEE Learning Object Metadata (LOM) to provide bibliographic, technical, and pedagogical descriptors for the retrieval and description of learning…

  10. Evaluating a Professional Development Framework to Empower Chemistry Teachers to Design Context-Based Education

    NASA Astrophysics Data System (ADS)

    Stolk, Machiel Johan; Bulte, Astrid; De Jong, Onno; Pilot, Albert

    2012-07-01

    Even experienced chemistry teachers require professional development when they are encouraged to become actively engaged in the design of new context-based education. This study briefly describes the development of a framework consisting of goals, learning phases, strategies and instructional functions, and how the framework was translated into a professional development programme intended to empower teachers to design context-based chemistry education. The programme consists of teaching a pre-developed context-based unit, followed by teachers designing an outline of a new context-based unit. The study investigates the process of teacher empowerment during the implementation of the programme. Data were obtained from meetings, classroom discussions and observations. The findings indicated that teachers became empowered to design new context-based units provided they had sufficient time and resources. The contribution of the framework to teacher empowerment is discussed.

  11. RIPOSTE: a framework for improving the design and analysis of laboratory-based research.

    PubMed

    Masca, Nicholas Gd; Hensor, Elizabeth Ma; Cornelius, Victoria R; Buffa, Francesca M; Marriott, Helen M; Eales, James M; Messenger, Michael P; Anderson, Amy E; Boot, Chris; Bunce, Catey; Goldin, Robert D; Harris, Jessica; Hinchliffe, Rod F; Junaid, Hiba; Kingston, Shaun; Martin-Ruiz, Carmen; Nelson, Christopher P; Peacock, Janet; Seed, Paul T; Shinkins, Bethany; Staples, Karl J; Toombs, Jamie; Wright, Adam Ka; Teare, M Dawn

    2015-05-07

    Lack of reproducibility is an ongoing problem in some areas of the biomedical sciences. Poor experimental design and a failure to engage with experienced statisticians at key stages in the design and analysis of experiments are two factors that contribute to this problem. The RIPOSTE (Reducing IrreProducibility in labOratory STudiEs) framework has been developed to support early and regular discussions between scientists and statisticians in order to improve the design, conduct and analysis of laboratory studies and, therefore, to reduce irreproducibility. This framework is intended for use during the early stages of a research project, when specific questions or hypotheses are proposed. The essential points within the framework are explained and illustrated using three examples (a medical equipment test, a macrophage study and a gene expression study). Sound study design minimises the possibility of bias being introduced into experiments and leads to higher quality research with more reproducible results.

  12. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design

    PubMed Central

    Srinivasan, Srikant; Broderick, Scott R.; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B.; Saxena, Surendra K.; LeBeau, James M.; Rajan, Krishna

    2015-01-01

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives. PMID:26681142
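
    A minimal sketch of the graph-construction step, assuming a hypothetical set of per-element attribute vectors (atomic radius, electronegativity, valence electron count) and an arbitrary similarity cutoff, is given below in Python with networkx. The real study uses a far richer descriptor set and a more sophisticated mapping; this only illustrates the idea of linking elements whose attribute vectors lie close together.

      import numpy as np
      import networkx as nx

      # Hypothetical attribute vectors: [atomic radius (angstrom), electronegativity, valence electrons]
      elements = {
          "Co": [1.25, 1.88, 9],
          "Ni": [1.24, 1.91, 10],
          "Al": [1.43, 1.61, 3],
          "W":  [1.39, 2.36, 6],
          "Ti": [1.47, 1.54, 4],
      }

      names = list(elements)
      X = np.array([elements[e] for e in names], dtype=float)
      X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each attribute

      G = nx.Graph()
      G.add_nodes_from(names)
      cutoff = 1.5                                   # assumed similarity threshold
      for i in range(len(names)):
          for j in range(i + 1, len(names)):
              d = float(np.linalg.norm(X[i] - X[j]))
              if d < cutoff:                         # connect elements with similar attributes
                  G.add_edge(names[i], names[j], weight=d)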

  13. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design.

    PubMed

    Srinivasan, Srikant; Broderick, Scott R; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B; Saxena, Surendra K; LeBeau, James M; Rajan, Krishna

    2015-12-18

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives.

  14. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design

    NASA Astrophysics Data System (ADS)

    Srinivasan, Srikant; Broderick, Scott R.; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B.; Saxena, Surendra K.; Lebeau, James M.; Rajan, Krishna

    2015-12-01

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives.

  15. A design framework for teleoperators with kinesthetic feedback

    NASA Technical Reports Server (NTRS)

    Hannaford, Blake

    1989-01-01

    The application of a hybrid two-port model to teleoperators with force and velocity sensing at the master and slave is presented. The interfaces between human operator and master, and between environment and slave, are ports through which the teleoperator is designed to exchange energy between the operator and the environment. By computing or measuring the input-output properties of this two-port network, the hybrid two-port model of an actual or simulated teleoperator system can be obtained. It is shown that the hybrid model (as opposed to other two-port forms) leads to an intuitive representation of ideal teleoperator performance and applies to several teleoperator architectures. Thus measured values of the h matrix, or values computed from a simulation, can be used to compare performance with the ideal. The frequency-dependent h matrix is computed from a detailed SPICE model of an actual system, and the method is applied to a proposed architecture.

  16. Wire Position Monitoring with FPGA based Electronics

    SciTech Connect

    Eddy, N.; Lysenko, O.; /Fermilab

    2009-01-01

    This fall the first Tesla-style cryomodule cooldown test is being performed at Fermilab. The Instrumentation Department is preparing the electronics to handle the data from a set of wire position monitors (WPMs). For simulation purposes a prototype pipe with a WPM has been developed and built. The system is based on the measurement of signals induced in the pickups by a 320 MHz signal carried by a wire through the WPM. The wire is stretched along the pipe with a tensioning load of 9.07 kg. The WPM consists of four 50 Ω striplines spaced 90° apart. An FPGA-based digitizer scans the WPM and transmits the data to a PC via a VME interface. The data acquisition is based on the PC running LabView. In order to increase the accuracy and convenience of the measurements, some modifications were required. The first is the implementation of an average and decimation filter algorithm in the integrator operation in the FPGA. The second is the development of an alternative tool for WPM measurements on the PC. The paper describes how these modifications were performed and presents test results of the new design. The latest cryomodule generation has a single chain of seven WPMs (placed in critical positions: at each end, at the three posts and between the posts) to monitor cold mass displacement during cooldown. The system was developed in Italy in collaboration with DESY. Similar developments have taken place at Fermilab in the frame of cryomodule construction for SCRF research. This fall a preliminary cryomodule cooldown test is being performed. In order to prepare an appropriate electronic system for the test, a prototype pipe with a WPM has been developed and built (figure 1). The system is based on the measurement of signals induced in the pickups by a 320 MHz signal carried by a wire through the WPM. The 0.5 mm diameter Cu wire is stretched along the pipe with a tensioning load of 9.07 kg and has a length of 1.1 m. The WPM consists of four 50 Ω striplines spaced 90° apart. An FPGA based
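
    The averaging-and-decimation stage mentioned above has a simple software analogue: accumulate a block of samples, output their mean, and drop the rest. The Python sketch below shows that behaviour; the decimation factor and the synthetic input are assumptions used only to make the example runnable.

      import numpy as np

      def average_and_decimate(samples, factor=16):
          # Block-average 'factor' consecutive samples and emit one output per block.
          n = (len(samples) // factor) * factor
          blocks = np.asarray(samples[:n], dtype=float).reshape(-1, factor)
          return blocks.mean(axis=1)

      # e.g. smooth a noisy pickup-amplitude stream before the position calculation
      decimated = average_and_decimate(np.random.randn(1024) + 5.0, factor=16)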

  17. A framework for the design, implementation, and evaluation of interprofessional education.

    PubMed

    Pardue, Karen T

    2015-01-01

    The growing emphasis on teamwork and care coordination within health care delivery is sparking interest in interprofessional education (IPE) among nursing and health profession faculty. Faculty often lack firsthand IPE experience, which hinders pedagogical reform. This article proposes a theoretically grounded framework for the design, implementation, and evaluation of IPE. Supporting literature and practical advice are interwoven. The proposed framework guides faculty in the successful creation and evaluation of collaborative learning experiences.

  18. Design theoretic analysis of three system modeling frameworks.

    SciTech Connect

    McDonald, Michael James

    2007-05-01

    This paper analyzes three simulation architectures from the context of modeling scalability to address System of System (SoS) and Complex System problems. The paper first provides an overview of the SoS problem domain and reviews past work in analyzing model and general system complexity issues. It then identifies and explores the issues of vertical and horizontal integration as well as coupling and hierarchical decomposition as the system characteristics and metrics against which the tools are evaluated. In addition, it applies Nam Suh's Axiomatic Design theory as a construct for understanding coupling and its relationship to system feasibility. Next it describes the application of MATLAB, Swarm, and Umbra (three modeling and simulation approaches) to modeling swarms of Unmanned Flying Vehicle (UAV) agents in relation to the chosen characteristics and metrics. Finally, it draws general conclusions for analyzing model architectures that go beyond those analyzed. In particular, it identifies decomposition along phenomena of interaction and modular system composition as enabling features for modeling large heterogeneous complex systems.

  19. Designing smart analytical data services for a personal health framework.

    PubMed

    Koumakis, Lefteris; Kondylakis, Haridimos; Chatzimina, Maria; Iatraki, Galatia; Argyropaidas, Panagiotis; Kazantzaki, Eleni; Tsiknakis, Manolis; Kiefer, Stephan; Marias, Kostas

    2016-01-01

    Information in the healthcare domain, and in particular personal health record information, is heterogeneous by nature. Clinical, lifestyle, and environmental data and personal preferences are stored and managed within such platforms. As a result, significant information derived from such diverse data is difficult to deliver, especially to non-IT users such as patients, physicians, or managers. Another issue related to the management and analysis is the ever-increasing data volume, which makes efficient data visualization and analysis methods mandatory. The objective of this work is to present the architectural design for seamless integration and intelligent analysis of distributed and heterogeneous clinical information in the PHR context, as the result of a requirements elicitation process in the iManageCancer project. This systemic approach aims to assist healthcare professionals to orient themselves in the dispersed information space and enhance their decision-making capabilities, and to encourage patients to take an active role by managing their health information and interacting with healthcare professionals.

  20. A multiobjective optimization framework for multicontaminant industrial water network design.

    PubMed

    Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge

    2011-07-01

    The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonistic objectives are considered: F(1), the freshwater flow-rate at the network entrance; F(2), the water flow-rate at the inlet of regeneration units; and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by the Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented, followed by an innovative strategy based on the global equivalent cost (GEC) in freshwater, which turns out to be more efficient for choosing a good network from a practical point of view.
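
    As a hedged illustration of the MCDM step mentioned above, the following sketch ranks candidate networks with a classical TOPSIS procedure over the three objectives; the candidate values and the equal weights are invented for illustration and are not taken from the paper.

        import numpy as np

        def topsis_rank(matrix, weights, benefit):
            """Rank alternatives (rows) over criteria (columns) with classical TOPSIS.
            benefit[j] is True if larger values of criterion j are better."""
            m = np.asarray(matrix, dtype=float)
            norm = m / np.linalg.norm(m, axis=0)           # vector-normalize each criterion
            v = norm * weights                             # apply criterion weights
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_best = np.linalg.norm(v - ideal, axis=1)
            d_worst = np.linalg.norm(v - anti, axis=1)
            closeness = d_worst / (d_best + d_worst)
            return np.argsort(-closeness)                  # best alternative first

        # Hypothetical Pareto solutions: [freshwater flow, regenerated flow, interconnections]
        candidates = [[50.0, 30.0, 12], [60.0, 20.0, 10], [70.0, 10.0, 8]]
        weights = np.array([1 / 3, 1 / 3, 1 / 3])
        benefit = np.array([False, False, False])          # all three objectives are minimized
        print(topsis_rank(candidates, weights, benefit))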

  1. FPGA-core defibrillator using wavelet-fuzzy ECG arrhythmia classification.

    PubMed

    Nambakhsh, Mohammad; Tavakoli, Vahid; Sahba, Nima

    2008-01-01

    An electrocardiogram (ECG) feature extraction and classification system has been developed and evaluated using Altera's Quartus II 7.1. In the wavelet domain, QRS complexes were detected and each complex was used to locate the peaks of the individual waves. A fuzzy classifier block then used these features to classify ECG beats. Three types of arrhythmias and abnormalities were detected using the procedure. The completed algorithm was embedded into a Field Programmable Gate Array (FPGA). The prototype was tested with software-generated signals, with test scenarios covering several kinds of ECG signals from the MIT-BIH database. For the purpose of feeding signals into the FPGA, software was designed to read signal files and send them through the LPT port of a computer connected to the FPGA. The results show that the proposed prototype can perform real-time monitoring of the ECG signal for arrhythmia detection. We also implemented the algorithm on a sequential device, an AVR microcontroller with a 16 MHz clock, for the same purpose. The external clock of the FPGA is 50 MHz and, by utilizing the Phase-Locked Loop (PLL) component inside the device, it was possible to increase the clock of internal blocks up to 1.2 GHz. The final results compare the speed and resource usage of both devices, and show that, at the cost of more resource usage, the FPGA provides a higher computation speed because it can compute most parts of the algorithm in parallel.
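
    The wavelet-domain QRS detection idea can be conveyed with a toy sketch (not the authors' FPGA implementation): a crude Haar-like detail filter emphasizes the sharp QRS slopes, and an adaptive threshold with a refractory period picks the beats. The filter length, threshold, and synthetic test signal are assumptions for illustration.

        import numpy as np

        def detect_qrs(ecg, fs):
            """Toy QRS detector: a Haar-like detail filter emphasizes sharp QRS slopes,
            then an adaptive threshold with a 200 ms refractory period picks the beats."""
            ecg = np.asarray(ecg, dtype=float)
            scale = max(1, int(0.025 * fs))                      # ~25 ms support
            kernel = np.r_[np.ones(scale), -np.ones(scale)]      # crude wavelet detail filter
            detail = np.abs(np.convolve(ecg, kernel, mode="same"))
            thr = 0.5 * detail.max()
            refractory = int(0.2 * fs)
            peaks, last = [], -refractory
            for i, v in enumerate(detail):
                if v > thr and i - last >= refractory:
                    peaks.append(i)
                    last = i
            return peaks

        # Synthetic test: 5 s of low noise with sharp "beats" every second at fs = 360 Hz.
        fs = 360
        rng = np.random.default_rng(0)
        ecg = 0.02 * rng.standard_normal(5 * fs)
        ecg[::fs] += 1.0
        print(detect_qrs(ecg, fs))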

  2. a Novel Framework for Incorporating Sustainability Into Biomass Feedstock Design

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, G.; Negri, C.

    2012-12-01

    There is a strong societal need to evaluate and understand the sustainability of biofuels, especially due to the significant increases in production mandated by many countries, including the United States. Biomass feedstock production is an important contributor to environmental, social and economic impacts from biofuels. We present a systems approach where the agricultural, urban, energy and environmental sectors are considered as components of a single system and environmental liabilities are used as recoverable resources for biomass feedstock production. A geospatial analysis evaluating marginal land and degraded water resources to improve feedstock productivity with concomitant environmental restoration was conducted for the major corn-producing states in the US. The extent and availability of these resources were assessed, and geospatial techniques were used to identify promising opportunities to implement this approach. Utilizing different sources of marginal land (roadway buffers, contaminated land) could result in a 7-fold increase in land availability for feedstock production and provide ecosystem services such as water quality improvement and carbon sequestration. Spatial overlap between degraded water and marginal land resources was found to be as high as 98% and could maintain sustainable feedstock production on marginal lands through the supply of water and nutrients. Multi-objective optimization was used to quantify the tradeoffs between net revenue, improvements in water quality and carbon sequestration at the farm scale using this design. Results indicated that there is an initial opportunity where land that is marginally productive for row crops and of marginal value for conservation purposes could be used to grow bioenergy crops such that water quality and carbon sequestration benefits are obtained.

  3. Three-dimensional finite element analysis of zirconia all-ceramic cantilevered fixed partial dentures with different framework designs.

    PubMed

    Miura, Shoko; Kasahara, Shin; Yamauchi, Shinobu; Egusa, Hiroshi

    2017-03-17

    The purposes of this study were: to perform stress analyses using three-dimensional finite element analysis methods; to analyze the mechanical stress of different framework designs; and to investigate framework designs that will provide for the long-term stability of both cantilevered fixed partial dentures (FPDs) and abutment teeth. An analysis model was prepared for three-unit cantilevered FPDs assuming a missing mandibular first molar. Four types of framework design (Design 1, basic type; Design 2, framework width expanded buccolingually by 2 mm; Design 3, framework height expanded by 0.5 mm to the occlusal surface side from the end abutment to the connector area; and Design 4, a combination of Designs 2 and 3) were created. Two types of framework material (yttrium-oxide partially stabilized zirconia and a high precious noble metal gold alloy) and two types of abutment material (dentin and brass) were used. Among the framework designs, Design 1 exhibited the highest maximum principal stress value for both zirconia and gold alloy. In the abutment tooth, Design 3 exhibited the highest maximum principal stress value for all abutment teeth. In the present study, Design 4 (the design with expanded framework height and width) could contribute to preventing the concentration of stress and protecting abutment teeth.

  4. A framework design for the mHealth system for self-management promotion.

    PubMed

    Jia, Guifeng; Yang, Pan; Zhou, Jie; Zhang, Hengyi; Lin, Chengyu; Chen, Jin; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    Mobile health (mHealth) technology has been proposed to alleviate the lack of sufficient medical resources for personal healthcare. However, usage difficulties and compliance issues relating to this technology restrict the effect of mHealth system-supported self-management. In this study, an mHealth framework is introduced to overcome these drawbacks and improve the outcome of self-management. We implemented a set of ease of use principles in the mHealth design and employed the quantitative Fogg Behavior Model to enhance users' execution ability. The framework was realized in a prototype design for the mHealth system, which consists of medical apparatuses, mobile applications and a health management server. The system is able to monitor the physiological status in an unconstrained manner with simplified operations, while supervising the healthcare plan. The results suggest that the present framework design is accessible for ordinary users and effective in improving users' execution ability in self-management.

  5. A framework for analyzing interdisciplinary tasks: implications for student learning and curricular design.

    PubMed

    Gouvea, Julia Svoboda; Sawtelle, Vashti; Geller, Benjamin D; Turpen, Chandra

    2013-06-01

    The national conversation around undergraduate science instruction is calling for increased interdisciplinarity. As these calls increase, there is a need to consider the learning objectives of interdisciplinary science courses and how to design curricula to support those objectives. We present a framework that can help support interdisciplinary design research. We developed this framework in an introductory physics for life sciences majors (IPLS) course for which we designed a series of interdisciplinary tasks that bridge physics and biology. We illustrate how this framework can be used to describe the variation in the nature and degree of interdisciplinary interaction in tasks, to aid in redesigning tasks to better align with interdisciplinary learning objectives, and finally, to articulate design conjectures that posit how different characteristics of these tasks might support or impede interdisciplinary learning objectives. This framework will be useful for both curriculum designers and education researchers seeking to understand, in more concrete terms, what interdisciplinary learning means and how integrated science curricula can be designed to support interdisciplinary learning objectives.

  6. A framework for landscape ecological design of new patches in the rural landscape.

    PubMed

    Lafortezza, R; Brown, R D

    2004-10-01

    This study developed a comprehensive framework to incorporate landscape ecological principles into the landscape planning and design process, with a focus on the design of new patches in the rural landscape. The framework includes two interrelated phases: patch analyst (PA) and patch designer (PD). The patch analyst augments the process of landscape inventory and analysis. It distinguishes nodes (associated with potential habitat patches) from links (associated with corridors and stepping stones between habitats). For natural vegetation patches, characteristics such as size, shape, and spatial arrangement have been used to develop analytical tools that distinguish between nodes and links. The patch designer uses quantitative information and analytical tools to recommend locations, shapes, sizes, and composition of introduced patches. The framework has been applied to the development of a new golf course in the rural Mediterranean landscape of Apulia, Southern Italy. Fifty new patches of Mediterranean maquis (24 patches) and garrigue (26 patches) have been designed and located in the golf course, raising the overall natural vegetation area to 70 ha (60% of total property). The framework has potential for use in a wide variety of landscape planning, design, and management projects.

  7. Effect of framework design on fracture resistance of zirconium oxide posterior fixed partial dentures

    PubMed Central

    Salimi, Hadi; Mosharraf, Ramin; Savabi, Omid

    2012-01-01

    Introduction: The effect of framework design modifications in all-ceramic systems is not fully understood. The aim of this investigation was to evaluate the effect of different framework designs on the fracture resistance of zirconium oxide posterior fixed partial dentures (FPDs). Materials and Methods: Thirty-two posterior zirconia FPD cores were manufactured to replace a second premolar. The specimens were divided into four groups: I, 3 × 3 mm connector and standard design; II, 3 × 3 mm connector and modified design; III, 4 × 4 mm connector and standard design; and IV, 4 × 4 mm connector and modified design. After storage for one week in artificial saliva and thermocycling (2000 cycles, 5-55°C), the specimens were loaded in a universal testing machine at a constant cross-head speed of 0.5 mm/min until failure occurred. The Weibull, Kruskal-Wallis, and Mann-Whitney tests were used for statistical analysis (α = 0.05). Results: The mean fracture resistance of the groups with the 4 × 4 mm connector was significantly higher than that of the groups with the 3 × 3 mm connector (P < 0.001). Although the fracture resistance of the modified frameworks was increased (1.1 times), it was not significantly different from that of the anatomic specimens (P = 0.327). Conclusions: The fracture resistance of the zirconia posterior fixed partial dentures was significantly affected by the connector size; it was not affected by the framework modification. PMID:23559956

  8. An FPGA Implementation to Detect Selective Cationic Antibacterial Peptides

    PubMed Central

    Polanco González, Carlos; Nuño Maganda, Marco Aurelio; Arias-Estrada, Miguel; del Rio, Gabriel

    2011-01-01

    Exhaustive prediction of physicochemical properties of peptide sequences is used in different areas of biological research. One example is the identification of selective cationic antibacterial peptides (SCAPs), which may be used in the treatment of different diseases. Due to the discrete nature of peptide sequences, the physicochemical property calculation is considered a high-performance computing problem. A competitive solution for this class of problems is to embed algorithms into dedicated hardware. In the present work we present the adaptation, design and implementation of an algorithm for SCAPs prediction on a Field Programmable Gate Array (FPGA) platform. Four physicochemical property codes useful in the identification of peptide sequences with potential selective antibacterial activity were implemented on an FPGA board. The speed-up gained in a single-copy implementation was up to 108 times compared, cycle for cycle, with a single Intel processor. The inherent scalability of our design allows for replication of this code onto multiple FPGA cards, and consequently further improvements in speed are possible. Our results describe the first embedded SCAPs prediction solution and constitute the grounds to efficiently perform the exhaustive analysis of the sequence-physicochemical property relationship of peptides. PMID:21738652
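
    One physicochemical property commonly screened for cationic peptides is the net charge near physiological pH; the sketch below computes a crude net-charge estimate per sequence. The residue charge table, the example sequences, and the threshold are illustrative assumptions and are not the four property codes used by the authors.

        # Crude net-charge estimate at pH ~7: count basic and acidic residues.
        POSITIVE = {"K": 1.0, "R": 1.0, "H": 0.1}   # His only partially protonated (assumption)
        NEGATIVE = {"D": -1.0, "E": -1.0}

        def net_charge(sequence):
            """Approximate net charge of a peptide sequence from side-chain counts."""
            charge = 0.0
            for residue in sequence.upper():
                charge += POSITIVE.get(residue, 0.0)
                charge += NEGATIVE.get(residue, 0.0)
            return charge

        def looks_cationic(sequence, threshold=2.0):
            """Flag sequences whose estimated net charge exceeds an illustrative threshold."""
            return net_charge(sequence) >= threshold

        print(net_charge("KWKLFKKIGAVLKVL"))   # example lysine-rich sequence
        print(looks_cationic("DDEEDDEE"))      # acidic example, expected False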

  9. An FPGA implementation to detect selective cationic antibacterial peptides.

    PubMed

    Polanco González, Carlos; Nuño Maganda, Marco Aurelio; Arias-Estrada, Miguel; del Rio, Gabriel

    2011-01-01

    Exhaustive prediction of physicochemical properties of peptide sequences is used in different areas of biological research. One example is the identification of selective cationic antibacterial peptides (SCAPs), which may be used in the treatment of different diseases. Due to the discrete nature of peptide sequences, the physicochemical property calculation is considered a high-performance computing problem. A competitive solution for this class of problems is to embed algorithms into dedicated hardware. In the present work we present the adaptation, design and implementation of an algorithm for SCAPs prediction on a Field Programmable Gate Array (FPGA) platform. Four physicochemical property codes useful in the identification of peptide sequences with potential selective antibacterial activity were implemented on an FPGA board. The speed-up gained in a single-copy implementation was up to 108 times compared, cycle for cycle, with a single Intel processor. The inherent scalability of our design allows for replication of this code onto multiple FPGA cards, and consequently further improvements in speed are possible. Our results describe the first embedded SCAPs prediction solution and constitute the grounds to efficiently perform the exhaustive analysis of the sequence-physicochemical property relationship of peptides.

  10. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so ROSE frees the modeler to develop a library of standard modeling processes such as Design of Experiments, optimizers, parameter studies, and sensitivity studies which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
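
    The separation of the execution process from the model can be sketched with a tiny hypothetical API in which a reusable process (here, a parameter study) operates on any object exposing run(inputs); the names and structure below are invented for illustration and are not the actual ROSE API.

        from typing import Callable, Dict, Iterable, List

        class Model:
            """Minimal model interface: anything exposing run(inputs) -> outputs."""
            def __init__(self, fn: Callable[[Dict[str, float]], Dict[str, float]]):
                self._fn = fn

            def run(self, inputs: Dict[str, float]) -> Dict[str, float]:
                return self._fn(inputs)

        def parameter_study(model: Model, name: str, values: Iterable[float],
                            base: Dict[str, float]) -> List[Dict[str, float]]:
            """A reusable execution process: sweep one input and collect the outputs."""
            results = []
            for v in values:
                case = dict(base, **{name: v})
                results.append({**case, **model.run(case)})
            return results

        # Example: the same generic process applied to an arbitrary model.
        thrust_model = Model(lambda x: {"thrust": x["mdot"] * x["ve"]})
        for row in parameter_study(thrust_model, "mdot", [1.0, 2.0, 3.0], {"ve": 3000.0}):
            print(row)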

  11. FPGA implementation of hardware processing modules as coprocessors in brain-machine interfaces.

    PubMed

    Wang, Dong; Hao, Yaoyao; Zhu, Xiaoping; Zhao, Ting; Wang, Yiwen; Chen, Yaowu; Chen, Weidong; Zheng, Xiaoxiang

    2011-01-01

    Real-time computation, portability and flexibility are crucial for practical brain-machine interface (BMI) applications. In this work, we propose Hardware Processing Modules (HPMs) as a method for accelerating BMI computation. Two HPMs have been developed. One is a field-programmable gate array (FPGA) implementation of spike sorting based on a probabilistic neural network (PNN), and the other is an FPGA implementation of neural ensemble decoding based on a Kalman filter (KF). These two modules were configured under the same framework and tested with real data from motor cortex recordings in rats performing a lever-pressing task for water rewards. Due to the parallelism of the FPGA, the computation time was reduced by several dozen times, while the results are almost the same as those from MATLAB implementations. Such HPMs provide a high-performance coprocessor for neural signal computation.
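
    The KF decoding module maps binned firing rates to kinematics with the standard Kalman predict/update recursion; the sketch below shows that recursion in NumPy. The matrices and dimensions are random placeholders rather than parameters fitted to the rat recordings.

        import numpy as np

        def kalman_decode(firing_rates, A, C, Q, R, x0, P0):
            """Standard Kalman filter recursion: state x (kinematics) evolves as x' = A x + w,
            observations z (binned firing rates) follow z = C x + v."""
            x, P = x0.copy(), P0.copy()
            estimates = []
            for z in firing_rates:
                # Predict
                x = A @ x
                P = A @ P @ A.T + Q
                # Update
                S = C @ P @ C.T + R
                K = P @ C.T @ np.linalg.inv(S)
                x = x + K @ (z - C @ x)
                P = (np.eye(len(x)) - K @ C) @ P
                estimates.append(x.copy())
            return np.array(estimates)

        # Placeholder dimensions: 2 kinematic states, 8 neurons, 20 time bins.
        rng = np.random.default_rng(0)
        A, C = np.eye(2), rng.standard_normal((8, 2))
        Q, R = 0.01 * np.eye(2), 0.1 * np.eye(8)
        z = rng.standard_normal((20, 8))
        print(kalman_decode(z, A, C, Q, R, np.zeros(2), np.eye(2)).shape)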

  12. A framework for development of an intelligent system for design and manufacturing of stamping dies

    NASA Astrophysics Data System (ADS)

    Hussein, H. M. A.; Kumar, S.

    2014-07-01

    An integration of computer aided design (CAD), computer aided process planning (CAPP) and computer aided manufacturing (CAM) is required for the development of an intelligent system to design and manufacture stamping dies in sheet metal industries. In this paper, a framework for the development of an intelligent system for the design and manufacturing of stamping dies is proposed. In the proposed framework, the intelligent system is structured in the form of various expert system modules for the different activities of die design and manufacturing. All system modules are integrated with each other. The proposed system takes its input in the form of a CAD file of the sheet metal part, and the system modules then automate all tasks related to the design and manufacturing of stamping dies. The modules are coded in Visual Basic (VB) and developed on the platform of AutoCAD software.

  13. A framework-based approach to designing simulation-augmented surgical education and training programs.

    PubMed

    Cristancho, Sayra M; Moussa, Fuad; Dubrowski, Adam

    2011-09-01

    The goal of simulation-based medical education and training is to help trainees acquire and refine the technical and cognitive skills necessary to perform clinical procedures. When designers incorporate simulation into programs, their efforts should be in line with training needs, rather than technology. Designers of simulation-augmented surgical training programs, however, face particular problems related to identifying a framework that guides the curricular design activity to fulfill the particular requirements of such training programs. These problems include the lack of (1) an objective identification of training needs, (2) a systematic design methodology to match training objectives with simulation resources, (3) structured assessments of performance, and (4) a research-centered view to evaluate and validate systematically the educational effectiveness of the program. In this report, we present a process called "Aim - FineTune - FollowThrough" to enable the connection of the identified problems to solutions, using frameworks from psychology, motor learning, education and experimental design.

  14. Printed Circuit Board Design (PCB) with HDL Designer

    NASA Technical Reports Server (NTRS)

    Winkert, Thomas K.; LaFourcade, Teresa

    2004-01-01

    Contents include the following: PCB design with HDL designer, design process and schematic capture - symbols and diagrams: 1. Motivation: time savings, money savings, simplicity. 2. Approach: use single tool PCB for FPGA design, more FPGA designs than PCB designers. 3. Use HDL designer for schematic capture.

  15. Real-time FPGA design for the L0-trigger of the RICH detector of the NA62 experiment at CERN SPS

    NASA Astrophysics Data System (ADS)

    Barbanera, M.; Gonnella, F.

    2017-01-01

    The NA62 experiment aims at measuring rare kaon decays in order to precisely test the Standard Model. The RICH (Ring Imaging CHerenkov) detector of the experiment is instrumental in charged-particle identification and in the measurement of their crossing time, with a resolution better than 100 ps. Here we describe the design of the Level-0 trigger system for the RICH, which provides a precise time reference by counting the input hit multiplicity within programmable fine-time windows. Since the design does not use spatial information and withstands the maximum input rate of TDC-based NA62 systems, it can also be deployed in other subdetectors.
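
    The trigger primitive described here, counting hit multiplicity inside programmable fine-time windows, can be sketched in software as a windowed coincidence count over hit timestamps; the window width, threshold, and hit times below are illustrative values, not the experiment's settings.

        from bisect import bisect_left, bisect_right

        def multiplicity_triggers(hit_times_ns, window_ns=6.25, threshold=4):
            """Return reference times at which at least `threshold` hits fall inside a
            window of `window_ns` centred on a hit (a software sketch of the L0 primitive)."""
            hits = sorted(hit_times_ns)
            triggers = []
            for t in hits:
                lo = bisect_left(hits, t - window_ns / 2)
                hi = bisect_right(hits, t + window_ns / 2)
                if hi - lo >= threshold:
                    triggers.append(t)
            return triggers

        # Example: a burst of hits around 100 ns plus scattered background hits.
        hits = [12.0, 40.5, 99.0, 99.8, 100.1, 100.9, 101.5, 250.3]
        print(multiplicity_triggers(hits, window_ns=6.25, threshold=4))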

  16. Support for development of a custom VLSI and FPGA logic chips based on a VHDL top-down design approach. Final report

    SciTech Connect

    Not Available

    1994-06-01

    The objective of this contract was to perform the beginning stages of development for two Application-Specific Integrated Circuits: CMOS-1 and CMOS-2D. This work includes specification writing, behavioral modeling, and beginning design. In addition, the design work is required to be done in the VHSIC Hardware Description Language (VHDL). InnovASIC, Inc. completed all the tasks required by this contract. The specifications were written, the VHDL for CMOS-1 was completed, a behavioral model of CMOS-2D was written, and a system simulation was performed.

  17. Economical Implementation of a Filter Engine in an FPGA

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.

    2009-01-01

    A logic design has been conceived for a field-programmable gate array (FPGA) that would implement a complex system of multiple digital state-space filters. The main innovative aspect of this design lies in providing for reuse of parts of the FPGA hardware to perform different parts of the filter computations at different times, in such a manner as to enable the timely performance of all required computations in the face of limitations on available FPGA hardware resources. The implementation of the digital state-space filter involves matrix vector multiplications, which, in the absence of the present innovation, would ordinarily necessitate some multiplexing of vector elements and/or routing of data flows along multiple paths. The design concept calls for implementing vector registers as shift registers to simplify operand access to multipliers and accumulators, obviating both multiplexing and routing of data along multiple paths. Each vector register would be reused for different parts of a calculation. Outputs would always be drawn from the same register, and inputs would always be loaded into the same register. A simple state machine would control each filter. The output of a given filter would be passed to the next filter, accompanied by a "valid" signal, which would start the state machine of the next filter. Multiple filter modules would share a multiplication/accumulation arithmetic unit. The filter computations would be timed by use of a clock having a frequency high enough, relative to the input and output data rate, to provide enough cycles for matrix and vector arithmetic operations. This design concept could prove beneficial in numerous applications in which digital filters are used and/or vectors are multiplied by coefficient matrices. Examples of such applications include general signal processing, filtering of signals in control systems, processing of geophysical measurements, and medical imaging. For these and other applications, it could be
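
    A single update of a digital state-space filter is the pair of matrix-vector products x[k+1] = A x[k] + B u[k] and y[k] = C x[k] + D u[k]; the sketch below runs that update through one explicit multiply-accumulate helper to mirror the shared-arithmetic idea. The matrices are placeholders, not a filter from the design.

        import numpy as np

        def mac_matvec(M, v):
            """Matrix-vector product written as explicit multiply-accumulate steps,
            mirroring a single shared MAC unit reused for every element."""
            out = np.zeros(M.shape[0])
            for i in range(M.shape[0]):
                acc = 0.0
                for j in range(M.shape[1]):
                    acc += M[i, j] * v[j]      # one MAC operation per coefficient
                out[i] = acc
            return out

        def state_space_step(A, B, C, D, x, u):
            """One update of x' = A x + B u and y = C x + D u using the shared MAC helper."""
            y = mac_matvec(C, x) + mac_matvec(D, u)
            x_next = mac_matvec(A, x) + mac_matvec(B, u)
            return x_next, y

        # Placeholder 2-state, single-input, single-output filter.
        A = np.array([[0.9, 0.1], [0.0, 0.8]])
        B = np.array([[1.0], [0.5]])
        C = np.array([[1.0, 0.0]])
        D = np.array([[0.0]])
        x, u = np.zeros(2), np.array([1.0])
        for _ in range(3):
            x, y = state_space_step(A, B, C, D, x, u)
            print(y)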

  18. A Usability and Accessibility Design and Evaluation Framework for ICT Services

    NASA Astrophysics Data System (ADS)

    Subasi, Özge; Leitner, Michael; Tscheligi, Manfred

    This paper introduces a step by step framework for practitioners for combining accessibility and usability engineering processes. Following the discussions towards the needs of more user centeredness in the design of accessible solutions, there is a need for such a practical framework. In general, accessibility has been considered as a topic dealing with "hard facts". But lately terms like semantic and procedural accessibility have been introduced. In the following pages we propose a first sketch of a framework, which shows how to merge both usability and accessibility evaluation methods in the same process in order to guarantee a unified solution for both hard and soft facts of accessibility. We argue that by enhancing the user centered design process as the ISO DIS 9241-210 (revised DIN ISO 13407) describes it, accessibility and usability issues may be covered in one process.

  19. A conceptual framework to design a dimensional model based on the HL7 Clinical Document Architecture.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2014-01-01

    This paper proposes a conceptual framework to design a dimensional model based on the HL7 Clinical Document Architecture (CDA) standard. The adoption of this framework can represent a possible solution to facilitate the integration of heterogeneous information systems in a clinical data warehouse. This can simplify the Extract, Transform and Load (ETL) procedures that are considered the most time-consuming and expensive part of the data warehouse development process. The paper describes the main activities to be carried out to design the dimensional model outlining the main advantages in the application of the proposed framework. The feasibility of our approach is also demonstrated providing a case study to define clinical indicators for quality assessment.
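
    A toy illustration of the mapping idea: pull a coded observation out of a simplified CDA-like XML fragment and emit one fact row plus patient and code dimension rows. The XML below is a stripped-down stand-in, not the full HL7 CDA schema, and the star-schema layout is an assumption rather than the paper's model.

        import xml.etree.ElementTree as ET

        # Stripped-down stand-in for a CDA observation entry (not the full HL7 CDA schema).
        cda_fragment = """
        <document>
          <patient id="P001" gender="F" birthYear="1975"/>
          <observation code="8480-6" system="LOINC" value="128" unit="mm[Hg]" date="2014-03-01"/>
        </document>
        """

        def to_star_schema(xml_text):
            """Map the fragment to one fact row plus two dimension rows."""
            root = ET.fromstring(xml_text)
            patient = root.find("patient").attrib
            obs = root.find("observation").attrib
            dim_patient = {"patient_key": patient["id"], "gender": patient["gender"],
                           "birth_year": int(patient["birthYear"])}
            dim_code = {"code_key": obs["code"], "coding_system": obs["system"]}
            fact = {"patient_key": patient["id"], "code_key": obs["code"],
                    "value": float(obs["value"]), "unit": obs["unit"], "date": obs["date"]}
            return dim_patient, dim_code, fact

        for row in to_star_schema(cda_fragment):
            print(row)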

  20. FPGA-based architecture for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Rosário, João.; Nascimento, José M. P.; Véstias, Mário

    2014-10-01

    Hyperspectral instruments have been incorporated in satellite missions, providing data of high spectral resolution of the Earth. This data can be used in remote sensing applications such as target detection, hazard prevention, and monitoring oil spills, among others. In most of these applications, one of the requirements of paramount importance is the ability to give real-time or near real-time response. Recently, onboard processing systems have emerged in order to cope with the huge amount of data to transfer from the satellite to the ground station, thus avoiding delays between hyperspectral image acquisition and its interpretation. For this purpose, compact reconfigurable hardware modules, such as field programmable gate arrays (FPGAs), are widely used. This paper proposes a parallel FPGA-based architecture for endmember signature extraction. The method, based on the Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA based on the Artix-7 FPGA programmable logic and tested using real hyperspectral data sets collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the Cuprite mining district in Nevada. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening new perspectives for onboard hyperspectral image processing.
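
    The endmember-extraction step can be conveyed with a simplified VCA-style sketch: pixels are repeatedly projected onto a direction orthogonal to the subspace spanned by the endmembers found so far, and the pixel with the largest projection is kept. The sketch omits VCA's SNR-dependent preprocessing, and the synthetic data are invented for illustration.

        import numpy as np

        def vca_endmembers(Y, p, seed=0):
            """Simplified VCA-style extraction on a bands x pixels matrix Y:
            pick p pixels by repeated orthogonal random projections."""
            rng = np.random.default_rng(seed)
            bands, pixels = Y.shape
            E = np.zeros((bands, p))
            indices = []
            for k in range(p):
                w = rng.standard_normal(bands)
                if k > 0:
                    # Remove the component of w lying in the span of the found endmembers.
                    Q, _ = np.linalg.qr(E[:, :k])
                    w = w - Q @ (Q.T @ w)
                f = w / np.linalg.norm(w)
                idx = int(np.argmax(np.abs(f @ Y)))
                indices.append(idx)
                E[:, k] = Y[:, idx]
            return E, indices

        # Synthetic cube: 3 random endmembers mixed with random simplex abundances.
        rng = np.random.default_rng(1)
        true_E = rng.random((50, 3))
        abund = rng.dirichlet(np.ones(3), size=1000).T
        Y = true_E @ abund
        print(vca_endmembers(Y, 3)[1])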

  1. Neural harmonic detection approaches for FPGA area efficient implementation

    NASA Astrophysics Data System (ADS)

    Dzondé, S. R. N.; Kom, C.-H.; Berviller, H.; Blondé, J.-P.; Flieller, D.; Kom, M.; Braun, F.

    2011-12-01

    This paper deals with new neural-network-based harmonic detection approaches that minimize the hardware resources needed for FPGA implementation. A simple type of neural network called Adaline is used to build an intelligent Active Power Filter control unit for harmonic current elimination and reactive power compensation. For this purpose, two different approaches, called the Improved Three-Monophase (ITM) and Two-Phase Flow (TPF) methods, are proposed. The ITM method corresponds to a simplified structure of the three-monophase method, whereas the TPF method derives from the Synchronous Reference Frame method. For both proposed methods, only 50% of the Adalines required by the original methods are used. The corresponding designs were implemented on an FPGA Stratix II platform through the Altera DSP Builder® development tool. After analyzing the two methods with respect to performance and size criteria, a comparative study with the popular p-q method and the direct method is reported. From there, one can notice that the p-q method is still the most powerful for three-phase compensation, but the TPF method is the fastest and the most compact in terms of size. An experimental result is shown to validate the feasibility of FPGA implementation of ANN-based harmonic extraction algorithms.
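
    The Adaline harmonic-detection idea can be sketched as a Widrow-Hoff (LMS) estimator: the measured current is modelled as a weighted sum of sine/cosine terms at selected harmonic orders, and the weights are adapted sample by sample. The harmonic orders, learning rate, and synthetic current below are illustrative assumptions, not the ITM or TPF structures of the paper.

        import numpy as np

        def adaline_harmonics(current, t, f0=50.0, orders=(1, 5, 7), lr=0.05):
            """Adaline (Widrow-Hoff LMS) estimate of harmonic amplitudes: the measured
            current is modelled as a weighted sum of sin/cos terms at the listed orders,
            and the weights are adapted sample by sample."""
            w = np.zeros(2 * len(orders))
            for k in range(len(t)):
                x = np.concatenate([[np.sin(2 * np.pi * n * f0 * t[k]),
                                     np.cos(2 * np.pi * n * f0 * t[k])] for n in orders])
                err = current[k] - w @ x
                w += lr * err * x                      # LMS weight update
            amps = {n: np.hypot(w[2 * i], w[2 * i + 1]) for i, n in enumerate(orders)}
            return amps

        # Synthetic load current: 10 A fundamental plus a 2 A fifth harmonic.
        fs, f0 = 10_000, 50.0
        t = np.arange(0, 0.2, 1 / fs)
        i_load = 10 * np.sin(2 * np.pi * f0 * t) + 2 * np.sin(2 * np.pi * 5 * f0 * t)
        print(adaline_harmonics(i_load, t, f0))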

  2. Designing Energy Supply Chains with the P-graph Framework under Cost Constraints and Sustainability Considerations

    EPA Science Inventory

    A computer-aided methodology for designing sustainable supply chains is presented using the P-graph framework to develop supply chain structures which are analyzed using cost, the cost of producing electricity, and two sustainability metrics: ecological footprint and emergy. They...

  3. Designing Multi-Channel Web Frameworks for Cultural Tourism Applications: The MUSE Case Study.

    ERIC Educational Resources Information Center

    Garzotto, Franca; Salmon, Tullio; Pigozzi, Massimiliano

    A framework for the design of multi-channel (MC) applications in the cultural tourism domain is presented. Several heterogeneous interface devices are supported including location-sensitive mobile units, on-site stationary devices, and personalized CDs that extend the on-site experience beyond the visit time thanks to personal memories gathered…

  4. Universal Instructional Design: A New Framework for Accommodating Students in Social Work Courses

    ERIC Educational Resources Information Center

    Lightfoot, Elizabeth; Gibson, Priscilla

    2005-01-01

    This article provides an analysis of the current method of accommodating students with disabilities in social work education and presents a new framework for providing universal access to all students in social work education: Universal Instructional Design (UID). UID goes beyond adapting already developed social work curricula to fit the needs of…

  5. Designing and Implementing an Integrated Technological Pedagogical Science Knowledge Framework for Science Teachers Professional Development

    ERIC Educational Resources Information Center

    Jimoyiannis, Athanassios

    2010-01-01

    This paper reports on the design and the implementation of the Technological Pedagogical Science Knowledge (TPASK), a new model for science teachers professional development built on an integrated framework determined by the Technological Pedagogical Content Knowledge (TPACK) model and the authentic learning approach. The TPASK curriculum…

  6. A Framework for Designing a Research-Based "Maths Counsellor" Teacher Programme

    ERIC Educational Resources Information Center

    Jankvist, Uffe Thomas; Niss, Mogens

    2015-01-01

    This article addresses one way in which decades of mathematics education research results can inform practice, by offering a framework for designing and implementing an in-service teacher education programme for upper secondary mathematics teachers in Denmark. The programme aims to educate a "task force" of so-called "maths…

  7. The Role of a Reusable Assessment Framework in Designing Computer-Based Learning Environments.

    ERIC Educational Resources Information Center

    Park, Young; Bauer, Malcolm

    This paper introduces the concept of a reusable assessment framework (RAF). An RAF contains a library of linked assessment design objects that express: (1) specific set of proficiencies (i.e. the knowledge, skills, and abilities of students for a given content or skill area); (2) the types of evidence that can be used to estimate those…

  8. A Design Based Research Framework for Implementing a Transnational Mobile and Blended Learning Solution

    ERIC Educational Resources Information Center

    Palalas, Agnieszka; Berezin, Nicole; Gunawardena, Charlotte; Kramer, Gretchen

    2015-01-01

    The article proposes a modified Design-Based Research (DBR) framework which accommodates the various socio-cultural factors that emerged in the longitudinal PA-HELP research study at Central University College (CUC) in Ghana, Africa. A transnational team of stakeholders from Ghana, Canada, and the USA collaborated on the development,…

  9. A Buyer Behaviour Framework for the Development and Design of Software Agents in E-Commerce.

    ERIC Educational Resources Information Center

    Sproule, Susan; Archer, Norm

    2000-01-01

    Software agents are computer programs that run in the background and perform tasks autonomously as delegated by the user. This paper blends models from marketing research and findings from the field of decision support systems to build a framework for the design of software agents to support in e-commerce buying applications. (Contains 35…

  10. Designing a Virtual Olympic Games Framework by Using Simulation in Web 2.0 Technologies

    ERIC Educational Resources Information Center

    Stoilescu, Dorian

    2013-01-01

    Instructional simulation had major difficulties in the past for offering limited possibilities in practice and learning. This article proposes a link between instructional simulation and Web 2.0 technologies. More exactly, I present the design of the Virtual Olympic Games Framework (VOGF), as a significant demonstration of how interactivity in…

  11. An Instructional Design Framework to Improve Student Learning in a First-Year Engineering Class

    ERIC Educational Resources Information Center

    Yelamarthi, Kumar; Drake, Eron; Prewett, Matthew

    2016-01-01

    Increasingly, numerous universities have identified benefits of flipped learning environments and have been encouraging instructors to adapt such methodologies in their respective classrooms, at a time when departments are facing significant budget constraints. This article proposes an instructional design framework utilized to strategically…

  12. Developing a Framework for Social Technologies in Learning via Design-Based Research

    ERIC Educational Resources Information Center

    Parmaxi, Antigoni; Zaphiris, Panayiotis

    2015-01-01

    This paper reports on the use of design-based research (DBR) for the development of a framework that grounds the use of social technologies in learning. The paper focuses on three studies which step on the learning theory of constructionism. Constructionism assumes that knowledge is better gained when students find this knowledge for themselves…

  13. A KBE-enabled design framework for cost/weight optimization study of aircraft composite structures

    NASA Astrophysics Data System (ADS)

    Wang, H.; La Rocca, G.; van Tooren, M. J. L.

    2014-10-01

    Traditionally, minimum weight is the objective when optimizing airframe structures. This optimization, however, does not consider the manufacturing cost, which actually determines the profit of the airframe manufacturer. To this purpose, a design framework has been developed that is able to perform cost/weight multi-objective optimization of an aircraft component, including large topology variations of the structural configuration. The key element of the proposed framework is a dedicated knowledge based engineering (KBE) application, called the multi-model generator, which enables modelling very different product configurations and variants and extracting all data required to feed the weight and cost estimation modules, in a fully automated fashion. The weight estimation method developed in this research work uses Finite Element Analysis to calculate the internal stresses of the structural elements and an analytical composite plate sizing method to determine their minimum required thicknesses. The manufacturing cost estimation module was developed on the basis of a cost model available in the literature. The capability of the framework was successfully demonstrated by designing and optimizing the composite structure of a business jet rudder. The case study indicates that the design framework is able to find the Pareto optimal set for minimum structural weight and manufacturing cost in a very quick way. Based on the Pareto set, the rudder manufacturer is in a position to conduct internal trade-off studies between minimum weight and minimum cost solutions, as well as to offer the OEM a full set of optimized options to choose from, rather than one feasible design.
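
    The Pareto set mentioned above is simply the set of non-dominated (weight, cost) designs; the sketch below extracts it from a list of candidate design points. The candidate values are hypothetical, not results from the rudder study.

        def pareto_front(designs):
            """Return the non-dominated (weight, cost) designs: a design is kept unless some
            other design is no worse in both objectives and strictly better in at least one."""
            front = []
            for i, (w_i, c_i) in enumerate(designs):
                dominated = any(
                    (w_j <= w_i and c_j <= c_i) and (w_j < w_i or c_j < c_i)
                    for j, (w_j, c_j) in enumerate(designs) if j != i
                )
                if not dominated:
                    front.append((w_i, c_i))
            return sorted(front)

        # Hypothetical rudder design points: (structural weight [kg], manufacturing cost [unit]).
        candidates = [(120, 95), (110, 110), (130, 80), (125, 98), (140, 75), (115, 120)]
        print(pareto_front(candidates))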

  14. An Ontology-Based Framework for Bridging Learning Design and Learning Content

    ERIC Educational Resources Information Center

    Knight, Colin; Gasevic, Dragan; Richards, Griff

    2006-01-01

    The paper describes an ontology-based framework for bridging learning design and learning object content. In present solutions, researchers have proposed conceptual models and developed tools for both of those subjects, but without detailed discussions of how they can be used together. In this paper we advocate the use of ontologies to explicitly…

  15. An Initial Framework of Contexts for Designing Usable Intelligent Tutoring Systems.

    ERIC Educational Resources Information Center

    Patel, Ashok; Russell, David; Kinshuk; Oppermann, Reinhard; Rashev, Rossen

    1998-01-01

    Discussion of context focuses on the various contexts surrounding the design and use of intelligent tutoring systems and proposes an initial framework of contexts by classifying them into three major groupings: interactional; environmental, including classifications of knowledge and social environment; and objectival contexts. (Author/LRW)

  16. Computer Mediated Communication in the Universal Design for Learning Framework for Preparation of Special Education Teachers

    ERIC Educational Resources Information Center

    Basham, James D.; Lowrey, K. Alisa; deNoyelles, Aimee

    2010-01-01

    This study investigated the Universal Design for Learning (UDL) framework as a basis for a bi-university computer mediated communication (CMC) collaborative project. Participants in the research included 78 students from two special education programs enrolled in teacher education courses. The focus of the investigation was on exploring the…

  17. Design, Implementation and Validation of a Europe-Wide Pedagogical Framework for E-Learning

    ERIC Educational Resources Information Center

    Granic, Andrina; Mifsud, Charles; Cukusic, Maja

    2009-01-01

    Within the context of a Europe-wide project UNITE, a number of European partners set out to design, implement and validate a pedagogical framework (PF) for e- and m-Learning in secondary schools. The process of formulating and testing the PF was an evolutionary one that reflected the experiences and skills of the various European partners and…

  18. A Framework for Analyzing Interdisciplinary Tasks: Implications for Student Learning and Curricular Design

    ERIC Educational Resources Information Center

    Gouvea, Julia Svoboda; Sawtelle, Vashti; Geller, Benjamin D.; Turpen, Chandra

    2013-01-01

    The national conversation around undergraduate science instruction is calling for increased interdisciplinarity. As these calls increase, there is a need to consider the learning objectives of interdisciplinary science courses and how to design curricula to support those objectives. We present a framework that can help support interdisciplinary…

  19. The American Stop Smoking Intervention Study: Conceptual Framework and Evaluation Design.

    ERIC Educational Resources Information Center

    Stillman, Frances; Hartman, Anne; Graubard, Barry; Gilpin, Elizabeth; Chavis, David; Garcia, John; Wun, Lap-Ming; Lynn, William; Manley, Marc

    1999-01-01

    Describes the conceptual design, research framework, evaluation components, and analytic strategies that are guiding the evaluation of a demonstration-research effort, the American Stop Smoking Intervention Study (ASSIST). The ASSIST evaluation is a unique analysis of the relationships among social context, public-health activity, tobacco use, and…

  20. Using the DSAP Framework to Guide Instructional Design and Technology Integration in BYOD Classrooms

    ERIC Educational Resources Information Center

    Wasko, Christopher W.

    2016-01-01

    The purpose of this study was to determine the suitability of the DSAP Framework to guide instructional design and technology integration for teachers piloting a BYOD (Bring Your Own Device) initiative and to measure the impact the initiative had on the amount and type of technology used in pilot classrooms. Quantitative and qualitative data were…

  1. Using the Universal Design for Learning Framework to Support Culturally Diverse Learners

    ERIC Educational Resources Information Center

    Chita-Tegmark, Meia; Gravel, Jenna W.; Serpa, Maria de Lourdes B.; Domings, Yvonne; Rose, David H.

    2012-01-01

    This article describes the mechanism through which cultural variability is a source of learning differences. The authors argue that the Universal Design for Learning can be extended to capture the way learning is influenced by cultural variability, and show how the UDL framework might be used to create a curriculum that is responsive to this…

  2. Beyond a Definition: Toward a Framework for Designing and Specifying Mentoring Models

    ERIC Educational Resources Information Center

    Dawson, Phillip

    2014-01-01

    More than three decades of mentoring research has yet to converge on a unifying definition of mentoring; this is unsurprising given the diversity of relationships classified as mentoring. This article advances beyond a definition toward a common framework for specifying mentoring models. Sixteen design elements were identified from the literature…

  3. Prospective Secondary Teachers Repositioning by Designing, Implementing and Testing Mathematics Learning Objects: A Conceptual Framework

    ERIC Educational Resources Information Center

    Mgombelo, Joyce R.; Buteau, Chantal

    2009-01-01

    This article describes a conceptual framework developed to illuminate how prospective teachers' learning experiences are shaped by didactic-sensitive activities in departments of mathematics. We draw from the experiences of prospective teachers in the Department of Mathematics at our institution in designing, implementing (i.e. computer…

  4. A vaccine study design selection framework for the postlicensure rapid immunization safety monitoring program.

    PubMed

    Baker, Meghan A; Lieu, Tracy A; Li, Lingling; Hua, Wei; Qiang, Yandong; Kawai, Alison Tse; Fireman, Bruce H; Martin, David B; Nguyen, Michael D

    2015-04-15

    The Postlicensure Rapid Immunization Safety Monitoring Program, the vaccination safety monitoring component of the US Food and Drug Administration's Mini-Sentinel project, is currently the largest cohort in the US general population for vaccine safety surveillance. We developed a study design selection framework to provide a roadmap and description of methods that may be utilized to evaluate potential associations between vaccines and health outcomes of interest in the Postlicensure Rapid Immunization Safety Monitoring Program and other systems using administrative data. The strengths and weaknesses of designs for vaccine safety monitoring, including the cohort design, the case-centered design, the risk interval design, the case-control design, the self-controlled risk interval design, the self-controlled case series method, and the case-crossover design, are described and summarized in tabular form. A structured decision table is provided to aid in planning of future vaccine safety monitoring activities, and the data components comprising the structured decision table are delineated. The study design selection framework provides a starting point for planning vaccine safety evaluations using claims-based data sources.

  5. Developing a framework for qualitative engineering: Research in design and analysis of complex structural systems

    NASA Technical Reports Server (NTRS)

    Franck, Bruno M.

    1990-01-01

    The research is focused on automating the evaluation of complex structural systems, whether for the design of a new system or the analysis of an existing one, by developing new structural analysis techniques based on qualitative reasoning. The problem is to identify and better understand: (1) the requirements for the automation of design, and (2) the qualitative reasoning associated with the conceptual development of a complex system. The long-term objective is to develop an integrated design-risk assessment environment for the evaluation of complex structural systems. The scope of this short presentation is to describe the design and cognition components of the research. Design has received special attention in cognitive science because it is now identified as a problem solving activity that is different from other information processing tasks (1). Before an attempt can be made to automate design, a thorough understanding of the underlying design theory and methodology is needed, since the design process is, in many cases, multi-disciplinary, complex in size and motivation, and uses various reasoning processes involving different kinds of knowledge in ways which vary from one context to another. The objective is to unify all the various types of knowledge under one framework of cognition. This presentation focuses on the cognitive science framework that we are using to represent the knowledge aspects associated with the human mind's abstraction abilities and how we apply it to the engineering knowledge and engineering reasoning in design.

  6. Alternative Model-Based and Design-Based Frameworks for Inference from Samples to Populations: From Polarization to Integration

    ERIC Educational Resources Information Center

    Sterba, Sonya K.

    2009-01-01

    A model-based framework, due originally to R. A. Fisher, and a design-based framework, due originally to J. Neyman, offer alternative mechanisms for inference from samples to populations. We show how these frameworks can utilize different types of samples (nonrandom or random vs. only random) and allow different kinds of inference (descriptive vs.…

  7. Crisis crowdsourcing framework: designing strategic configurations of crowdsourcing for the emergency management domain

    USGS Publications Warehouse

    Liu, Sophia B.

    2014-01-01

    Crowdsourcing is not a new practice but it is a concept that has gained significant attention during recent disasters. Drawing from previous work in the crisis informatics, disaster sociology, and computer-supported cooperative work (CSCW) literature, the paper first explains recent conceptualizations of crowdsourcing and how crowdsourcing is a way of leveraging disaster convergence. The CSCW concept of “articulation work” is introduced as an interpretive frame for extracting the salient dimensions of “crisis crowdsourcing.” Then, a series of vignettes are presented to illustrate the evolution of crisis crowdsourcing that spontaneously emerged after the 2010 Haiti earthquake and evolved to more established forms of public engagement during crises. The best practices extracted from the vignettes clarified the efforts to formalize crisis crowdsourcing through the development of innovative interfaces designed to support the articulation work needed to facilitate spontaneous volunteer efforts. Extracting these best practices led to the development of a conceptual framework that unpacks the key dimensions of crisis crowdsourcing. The Crisis Crowdsourcing Framework is a systematic, problem-driven approach to determining the why, who, what, when, where, and how aspects of a crowdsourcing system. The framework also draws attention to the social, technological, organizational, and policy (STOP) interfaces that need to be designed to manage the articulation work involved with reducing the complexity of coordinating across these key dimensions. An example of how to apply the framework to design a crowdsourcing system is offered, with a discussion of the implications of applying this framework as well as its limitations. Innovation is occurring at the social, technological, organizational, and policy interfaces, enabling crowdsourcing to be operationalized and integrated into official products and services.

  8. Rad-Hard/HI-REL FPGA

    NASA Technical Reports Server (NTRS)

    Wang, Jih-Jong; Cronquist, Brian E.; McGowan, John E.; Katz, Richard B.

    1997-01-01

    The goals for a radiation-hardened (RAD-HARD) and high-reliability (HI-REL) field programmable gate array (FPGA) are described. The first qualified manufacturer list (QML) radiation-hardened FPGAs, the RH1280 and RH1020, were developed. The total radiation dose and single event effects observed on the antifuse FPGA RH1280 are reported. Tradeoffs and the limitations of the single event upset hardening are discussed.

  9. Zeolite-like metal–organic frameworks (ZMOFs): Design, synthesis, and properties

    SciTech Connect

    Eddaoudi, Mohamed; Sava, Dorina F.; Eubank, Jarrod F.; Adil, Karim; Guillerm, Vincent

    2015-10-24

    This study highlights various design and synthesis approaches toward the construction of ZMOFs, which are metal–organic frameworks (MOFs) with topologies and, in some cases, features akin to traditional inorganic zeolites. The interest in this unique subset of MOFs is correlated with their exceptional characteristics arising from the periodic pore systems and distinctive cage-like cavities, in conjunction with modular intra- and/or extra-framework components, which ultimately allow for tailoring of the pore size, pore shape, and properties towards specific applications.

  10. A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2011-01-01

    A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.

  11. A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2012-01-01

    A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.

  12. A Multi-Alphabet Arithmetic Coding Hardware Implementation for Small FPGA Devices

    NASA Astrophysics Data System (ADS)

    Biasizzo, Anton; Novak, Franc; Korošec, Peter

    2013-01-01

    Arithmetic coding is a lossless compression algorithm with variable-length source coding. It is more flexible and efficient than the well-known Huffman coding. In this paper we present a non-adaptive FPGA implementation of multi-alphabet arithmetic coding with a separated statistical model of the data source. The alphabet of the data source is a 256-symbol ASCII character set and does not include a special end-of-file symbol. No context switching is used in the proposed design, which gives maximal throughput without pipelining. We have synthesized the design for Xilinx FPGA devices and used their built-in hardware resources.
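
    The interval-narrowing principle behind arithmetic coding can be sketched in a few lines; the floating-point version below is only suitable for short messages (a hardware implementation such as the one described would use integer arithmetic with renormalization), and the static model construction here simply stands in for a separated statistical model of the source.

        from collections import Counter

        def build_model(data):
            # Static (non-adaptive) source model: cumulative probability range per symbol.
            counts = Counter(data)
            total = len(data)
            ranges, low = {}, 0.0
            for sym, cnt in sorted(counts.items()):
                p = cnt / total
                ranges[sym] = (low, low + p)
                low += p
            return ranges

        def encode(data, ranges):
            # Interval narrowing with floating-point arithmetic (short inputs only).
            low, high = 0.0, 1.0
            for sym in data:
                span = high - low
                s_low, s_high = ranges[sym]
                high = low + span * s_high
                low = low + span * s_low
            return (low + high) / 2   # any number inside the final interval encodes the message

        def decode(code, ranges, length):
            out = []
            for _ in range(length):
                for sym, (s_low, s_high) in ranges.items():
                    if s_low <= code < s_high:
                        out.append(sym)
                        code = (code - s_low) / (s_high - s_low)
                        break
            return out

        msg = list(b"ARITHMETIC")
        model = build_model(msg)
        code = encode(msg, model)
        print(bytes(decode(code, model, len(msg))))   # -> b'ARITHMETIC'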

  13. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Tsai; Wang, Chun-Hao; Huang, Ing-Jer; Wong, Weng-Fai

    2011-12-01

    Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, OpenGL ES application programming interface (API), device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.

  14. A unifying framework for systems modeling, control systems design, and system operation

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.

    2005-01-01

    Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition- whether functional or physical or discipline-based-that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.

  15. Tethered Forth system for FPGA applications

    NASA Astrophysics Data System (ADS)

    Goździkowski, Paweł; Zabołotny, Wojciech M.

    2013-10-01

    This paper presents a tethered Forth system dedicated to testing and debugging FPGA-based electronic systems. Use of the Forth language makes it possible to interactively develop and run complex testing or debugging routines. The solution is based on a small, 16-bit soft core CPU used to implement the Forth Virtual Machine. Thanks to the use of the tethered Forth model it is possible to minimize the usage of internal RAM memory in the FPGA. The function of the intelligent terminal, which is an essential part of the tethered Forth system, may be fulfilled by a standard PC or by a smartphone. The system is implemented in Python (the software for the intelligent terminal) and in VHDL (the IP core for the FPGA), so it can be easily ported to different hardware platforms. The connection between the terminal and the FPGA may be established and disconnected many times without disturbing the state of the FPGA-based system. The presented system has been verified in hardware and may be used as a tool for debugging, testing and even implementing control algorithms for FPGA-based systems.
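
    The flavour of a Forth Virtual Machine running on a small soft core can be conveyed with a toy stack-machine interpreter; the word set and the example session below are hypothetical and are not the instruction set or dictionary of the presented system.

        def run_forth(tokens, stack=None, words=None):
            """Toy Forth-style inner interpreter: numbers are pushed on the data stack,
            known words pop their operands and push results (hypothetical word set)."""
            stack = [] if stack is None else stack
            words = {
                "+": lambda s: s.append(s.pop() + s.pop()),
                "*": lambda s: s.append(s.pop() * s.pop()),
                "dup": lambda s: s.append(s[-1]),
                "swap": lambda s: s.__setitem__(slice(-2, None), [s[-1], s[-2]]),
                ".": lambda s: print(s.pop()),
            } if words is None else words
            for tok in tokens:
                if tok in words:
                    words[tok](stack)
                else:
                    stack.append(int(tok, 0))   # int(, 0) also accepts hex literals like 0x10
            return stack

        # Example session a host terminal might send to the target, one word at a time:
        run_forth("2 3 + dup * .".split())      # prints 25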

  16. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    The Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from long execution times and high resource requirements. Field Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration with substantial computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an associated FPGA architecture framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to improve computational accuracy by refining the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource usage, speed and robustness.
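
    A minimal software reference for the voting step is sketched below (Python/NumPy); it vectorizes the vote over all angles for each edge pixel, loosely mirroring the angle-level parallelism described above. It is an illustrative baseline only, not the authors' pipelined FPGA architecture.

      import numpy as np

      def hough_lines(edges, n_theta=180):
          # Hough voting: all angles are processed in one vectorized step per edge pixel.
          h, w = edges.shape
          thetas = np.deg2rad(np.arange(n_theta))          # 1-degree steps
          diag = int(np.ceil(np.hypot(h, w)))
          acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
          ys, xs = np.nonzero(edges)                       # candidate edge pixels
          cos_t, sin_t = np.cos(thetas), np.sin(thetas)
          for x, y in zip(xs, ys):
              rho = np.round(x * cos_t + y * sin_t).astype(int) + diag
              acc[rho, np.arange(n_theta)] += 1            # one vote per angle
          return acc, thetas

      # Peaks in the accumulator give the (rho, theta) parameters of detected lines.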

  17. How do you design randomised trials for smaller populations? A framework.

    PubMed

    Parmar, Mahesh K B; Sydes, Matthew R; Morris, Tim P

    2016-11-25

    How should we approach trial design when we can get some, but not all, of the way to the numbers required for a randomised phase III trial? We present an ordered framework for designing randomised trials to address the problem when the ideal sample size is considered larger than the number of participants that can be recruited in a reasonable time frame. Staying with the frequentist approach that is well accepted and understood in large trials, we propose a framework that includes small alterations to the design parameters. These aim to increase the numbers achievable and also potentially reduce the sample size target. The first step should always be to attempt to extend collaborations, consider broadening eligibility criteria and increase the accrual time or follow-up time. The second set of ordered considerations is the choice of research arm, outcome measures, power and target effect. If the revised design is still not feasible, in the third step we propose moving from two- to one-sided significance tests, changing the type I error rate, using covariate information at the design stage, re-randomising patients and borrowing external information. We discuss the benefits of some of these possible changes and warn against others. We illustrate, with a worked example based on the Euramos-1 trial, the application of this framework in designing a trial that is feasible, while still providing a good evidence base to evaluate a research treatment. This framework would allow appropriate evaluation of treatments when large-scale phase III trials are not possible, but where the need for high-quality randomised data is as pressing as it is for common diseases.
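
    To make the effect of the third-step levers concrete, the sketch below uses the standard normal-approximation sample-size formula for a two-arm trial with a continuous outcome (an illustrative example of ours, not a formula from the paper) and shows how moving to a one-sided test, relaxing the type I error rate and lowering the power shrink the per-arm target.

      from scipy.stats import norm

      def n_per_arm(delta, sigma, alpha=0.05, power=0.9, one_sided=False):
          # Normal-approximation sample size for comparing two means.
          z_a = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          return 2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2

      base = n_per_arm(delta=0.5, sigma=1.0)                                   # ~84 per arm
      relaxed = n_per_arm(delta=0.5, sigma=1.0, alpha=0.1, power=0.85, one_sided=True)
      print(round(base), round(relaxed))   # each design-parameter change shrinks the target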

  18. Radiometric Calibration of Mars HiRISE High Resolution Imagery Based on Fpga

    NASA Astrophysics Data System (ADS)

    Hou, Yifan; Geng, Xun; Xing, Shuai; Tang, Yonghe; Xu, Qing

    2016-06-01

    Due to the large data volume of HiRISE imagery, traditional radiometric calibration methods cannot meet fast-processing requirements. To solve this problem, a radiometric calibration system for HiRISE imagery based on a field programmable gate array (FPGA) is designed. The montage gap between two channels caused by grey-level inconsistency is removed through histogram matching. The calibration system is composed of an FPGA and a DSP, which makes full use of the parallel processing ability of the FPGA and the fast computation and flexible control characteristics of the DSP. Experimental results show that the designed system consumes few hardware resources and improves the real-time processing ability of radiometric calibration of HiRISE imagery.
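
    Histogram matching itself is a simple cumulative-distribution lookup; the Python/NumPy sketch below is an illustrative software model of that step (not the FPGA/DSP implementation described above), matching one channel's grey levels to a reference channel to remove such a montage gap.

      import numpy as np

      def histogram_match(src, ref):
          # Map src grey levels so that their CDF matches the reference channel's CDF.
          s_vals, s_idx, s_counts = np.unique(src.ravel(), return_inverse=True,
                                              return_counts=True)
          r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
          s_cdf = np.cumsum(s_counts) / src.size
          r_cdf = np.cumsum(r_counts) / ref.size
          mapped = np.interp(s_cdf, r_cdf, r_vals)     # CDF-to-CDF lookup table
          return mapped[s_idx].reshape(src.shape)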

  19. Improved Approach for Utilization of FPGA Technology into DAQ, DSP, and Computing Applications

    SciTech Connect

    Isenhower, Larry Donald

    2009-01-28

    Innovation Partners proposed and successfully demonstrated in this SBIR Phase I grant a software/hardware co-design approach to reduce both the difficulty and the time required to implement Field Programmable Gate Array (FPGA) solutions for data acquisition and specialized computational applications. FPGAs can require excessive programming time and specialized knowledge, both of which will be greatly reduced by the company's solution. Not only are FPGAs ideal for DAQ and embedded solutions, they can also be the best choice for specialized signal processing, replacing Digital Signal Processors (DSPs). By allowing FPGA programming to be done in C with the equivalent of a simple compilation, algorithm changes and improvements can be implemented quickly, decreasing life-cycle costs and allowing substitution of new FPGA designs while staying above the technological details.

  20. A user-interactive, response surface approximation-based framework for multidisciplinary design

    NASA Astrophysics Data System (ADS)

    Stelmack, Marc Andrew

    Multidisciplinary Design Optimization (MDO) focuses on reducing the time and cost required to design complex engineering systems. One goal of MDO is to develop systematic approaches to design which are effective and reliable in achieving desired performance improvements. Also, the analysis of engineering systems is potentially expensive and time-consuming. Therefore, enhancing the efficiency of current design methods, in terms of the number of designs that must be evaluated, is desirable. A design framework, Concurrent Subspace Design (CSD), is proposed to address some issues that are prevalent in practical design settings. Previously considered methods of system approximation and optimization were extended to accommodate both discrete and continuous design variables. Additionally, the transition from one application to another was made to be as straightforward as possible in developing the associated software. Engineering design generally requires the expertise of numerous individuals, whose efforts must be focused on and coordinated in accordance with a consistent set of design goals. In CSD, system approximations in the form of artificial neural networks provide information pertaining to system performance characteristics. This information provides the basis for design decisions. The approximations enable different designers to operate concurrently and assess the impact of their decisions on the system design goals. The proposed framework was implemented to minimize the weight of an aircraft brake assembly. An existing industrial analysis tool was used to provide design information in that application. CSD was implemented in a user-interactive fashion that permitted human judgement to influence the design process and required minimal modifications to the analysis and design software. The implications of problem formulation and the role of human design experts in automated industrial design processes were explored in the context of that application. In the most

  1. A general framework of marker design with optimal allocation to assess clinical utility.

    PubMed

    Tang, Liansheng; Zhou, Xiao-Hua

    2013-02-20

    This paper proposes a general framework of marker validation designs, which includes most existing validation designs. The sample size calculation formulas for the proposed general design are derived on the basis of the optimal allocation that minimizes the expected number of treatment failures. The optimal allocation is especially important in the targeted design, which is often motivated by preliminary evidence that marker-positive patients respond to one treatment better than the other. Our sample size calculation also takes into account the classification error of a marker. Numerical studies are conducted to investigate the expected reduction in treatment failures and the relative efficiency between the targeted design and the traditional design based on the optimal ratios. We illustrate the calculation of the optimal allocation and sample sizes through a hypothetical stage II colon cancer trial.

  2. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  3. A Robust and Reliability-Based Optimization Framework for Conceptual Aircraft Wing Design

    NASA Astrophysics Data System (ADS)

    Paiva, Ricardo Miguel

    A robustness- and reliability-based multidisciplinary analysis and optimization framework for aircraft design is presented. Robust design optimization and reliability-based design optimization are merged into a unified formulation which streamlines the setup of optimization problems and aims at preventing foreseeable implementation issues in uncertainty-based design. Surrogate models are evaluated to circumvent the intensive computations resulting from using direct evaluation in nondeterministic optimization. Three types of models are implemented in the framework: quadratic interpolation, regression Kriging and artificial neural networks. Regression Kriging presents the best compromise between performance and accuracy in deterministic wing design problems. The performance of the simultaneous implementation of robustness and reliability is evaluated using simple analytic problems and more complex wing design problems, revealing that performance benefits can still be achieved while satisfying probabilistic constraints rather than the simpler (and less computationally intensive) robust constraints. The latter are proven to be unable to follow a reliability constraint as uncertainty in the input variables increases. The computational effort of the reliability analysis is further reduced through the implementation of a coordinate change in the respective optimization sub-problem. The computational tool developed is a stand-alone application with a user-friendly graphical user interface. The multidisciplinary analysis and design optimization tool includes modules for aerodynamic, structural, aeroelastic and cost analysis, which can be used either individually or coupled.

  4. Towards a European Framework to Monitor Infectious Diseases among Migrant Populations: Design and Applicability

    PubMed Central

    Riccardo, Flavia; Dente, Maria Grazia; Kärki, Tommi; Fabiani, Massimo; Napoli, Christian; Chiarenza, Antonio; Giorgi Rossi, Paolo; Velasco Munoz, Cesar; Noori, Teymur; Declich, Silvia

    2015-01-01

    There are limitations in our capacity to interpret point estimates and trends of infectious diseases occurring among diverse migrant populations living in the European Union/European Economic Area (EU/EEA). The aim of this study was to design a data collection framework that could capture information on factors associated with increased risk of infectious diseases in migrant populations in the EU/EEA. The authors defined factors associated with increased risk according to a multi-dimensional framework and performed a systematic literature review in order to identify whether those factors adequately reflected the reported risk factors for infectious disease in these populations. Following this, the feasibility of applying this framework to relevant available EU/EEA data sources was assessed. The proposed multidimensional framework is well suited to capture the complexity and concurrence of these risk factors and is, in principle, applicable in the EU/EEA. The authors conclude that adopting a multi-dimensional framework to monitor infectious diseases could favor the disaggregated collection and analysis of migrant health data. PMID:26393623

  5. Fast semivariogram computation using FPGA architectures

    NASA Astrophysics Data System (ADS)

    Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang

    2015-02-01

    The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. The semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations separated by a lag distance h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs tend to operate at relatively modest clock rates of a few hundred megahertz, but they can perform tens of thousands of calculations per clock cycle while operating in the low range of power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. An anisotropic semivariogram implementation is anticipated as an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is implemented in VHDL and benchmarked on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex5 FPGA. Medical image data from MRI scans are utilized for the experiments
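
    For reference, a direct software version of the computation is sketched below (Python/NumPy, restricted to horizontal lags for brevity); it follows the definition of γ(h) as half the mean squared difference of pixel pairs at lag h, and represents the O(n²)-style baseline that the FPGA architectures accelerate.

      import numpy as np

      def semivariogram(img, max_lag):
          # gamma[h-1] = 0.5 * mean squared difference of pixel pairs at horizontal lag h.
          gamma = np.zeros(max_lag)
          for h in range(1, max_lag + 1):
              diffs = img[:, h:].astype(float) - img[:, :-h].astype(float)
              gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
          return gamma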

  6. Study, design and integration of an FPGA-based system for the time-of-flight calculation applied to PET equipment

    NASA Astrophysics Data System (ADS)

    Aguilar Talens, D. Albert

    , the initial time measurement results are presented, achieving time resolutions below 100 ps for multiple channels. Once characterized, the system is tested with a breast PET prototype, whose detector technology is based on Position Sensitive PhotoMultiplier Tubes (PSPMTs), performing TOF measurements for different scenarios. After this point, tests based on two Silicon Photomultiplier (SiPM) modules were carried out. SiPMs are immune to magnetic fields, among other advantages. This is an important feature, since there is significant interest in combining PET and Magnetic Resonance (MR) imaging. Each of the two detector modules used is composed of a single crystal pixel. The electronic conditioning circuits are designed taking into account the parameters most influential on time resolution. After these results, an array of 144 SiPMs is tested, optimizing several parameters which directly impact the system performance. Having demonstrated the system capabilities, an optimization process is devised. On the one hand, TDC measurements are enhanced up to 40 ps of precision. On the other hand, a coincidence algorithm is developed, which is responsible for identifying detector pairs that have registered an event within a certain time window. Finally, the Thesis conclusions and the future work are presented, followed by the references. A list of publications and attended congresses is also provided.

  7. SAD5 Stereo Correlation Line-Striping in an FPGA

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopoulos, Arin C.

    2011-01-01

    High precision SAD5 stereo computations can be performed in an FPGA (field-programmable gate array) at much higher speeds than possible in a conventional CPU (central processing unit), but this uses large amounts of FPGA resources that scale with image size. Of the two key resources in an FPGA, Slices and BRAM (block RAM), Slices scale linearly in the new algorithm with image size, and BRAM scales quadratically with image size. An approach was developed to trade latency for BRAM by sub-windowing the image vertically into overlapping strips and stitching the outputs together to create a single continuous disparity output. In stereo, the general rule of thumb is that the disparity search range must be 1/10 the image size. In the new algorithm, BRAM usage scales linearly with disparity search range and scales again linearly with line width. So a doubling of image size, say from 640 to 1,280, would in the previous design mean an effective 4× increase in BRAM usage: 2× for line width, 2× again for disparity search range. The minimum strip size is twice the search range, and will produce an output strip width equal to the disparity search range. So, assuming a disparity search range of 1/10 image width, 10 sequential runs of the minimum strip size would produce a full output image. This approach allowed the innovators to fit 1280 × 960 wide SAD5 stereo disparity in less than 80 BRAMs and 52k Slices on a Virtex 5 LX330T, 25% and 24% of resources, respectively. Using a 100-MHz clock, this build would perform stereo at 39 Hz. Of particular interest to JPL is that there is a flight-qualified version of the Virtex 5: this could produce stereo results even for very large image sizes at 3 orders of magnitude faster than could be computed on the PowerPC 750 flight computer. The work covered in the report allows the stereo algorithm to run on much larger images than before, and using much less BRAM. This opens up choices for a smaller flight FPGA (which saves power and space), or for other algorithms
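
    The strip arithmetic quoted above can be captured in a few lines; the sketch below (Python, illustrative only) derives the disparity search range, minimum strip width and number of sequential strip passes from an image width using the 1/10 rule of thumb.

      def strip_plan(image_width, disparity_frac=0.1):
          # Back-of-the-envelope strip planning from the rules quoted in the abstract.
          search = int(image_width * disparity_frac)   # disparity search range
          strip = 2 * search                           # minimum strip width read in
          runs = -(-image_width // search)             # ceil: sequential strip passes
          return search, strip, runs

      print(strip_plan(640))    # (64, 128, 10)
      print(strip_plan(1280))   # (128, 256, 10) -> BRAM per strip stays bounded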

  8. A Strategic Approach to Curriculum Design for Information Literacy in Teacher Education--Implementing an Information Literacy Conceptual Framework

    ERIC Educational Resources Information Center

    Klebansky, Anna; Fraser, Sharon P.

    2013-01-01

    This paper details a conceptual framework that situates curriculum design for information literacy and lifelong learning, through a cohesive developmental information literacy based model for learning, at the core of teacher education courses at UTAS. The implementation of the framework facilitates curriculum design that systematically,…

  9. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

    This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.

  10. Guest molecules as a design element for metal–organic frameworks

    SciTech Connect

    Allendorf, Mark D.; Medishetty, Raghavender; Fischer, Roland A.

    2016-11-07

    The well-known synthetic versatility of MOFs is rooted in the ability to predict the metal ion coordination geometry and the vast possibilities of using organic chemistry to modify the linker groups. However, the use of "non-innocent" guest molecules as a component of framework design has been largely ignored. Nevertheless, recent reports show that the presence of guest molecules can have dramatic effects, even when these are seemingly innocuous species such as water or polar solvents. Advantages of using guests to impart new properties to MOFs include the relative ease of introducing new functionalities, the ability to modify the properties of the material at will by removing the guest or inserting different ones, and avoidance of the difficulties associated with synthesizing new frameworks, which can be challenging even when the basic topology remains constant. In this article we describe the "Guest@MOF" concept and provide examples illustrating its potential as a new MOF design element.

  11. A conceptual curriculum framework designed to ensure quality student health visitor training in practice.

    PubMed

    Hollinshead, Jayne; Stirling, Linda

    2014-07-01

    This paper describes the challenges faced by a trust in England following the introduction of the Health Visitor Implementation Plan. Two practice education facilitators designed a conceptual curriculum framework to ensure quality student health visitor education in practice. This curriculum complemented the excellent academic course already delivered by the University. A justification is provided for the design of the curriculum framework, including a rationale for the introduction of specific training sessions. Student and practice teacher feedback demonstrates the success of the introduction of this programme in ensuring the development of student health visitors fit for practice. The conclusion places emphasis on the importance of continuous evaluation of the training programme to meet the needs of the students and the service.

  12. OPENCORE NMR: open-source core modules for implementing an integrated FPGA-based NMR spectrometer.

    PubMed

    Takeda, Kazuyuki

    2008-06-01

    A tool kit for implementing an integrated FPGA-based NMR spectrometer [K. Takeda, A highly integrated FPGA-based nuclear magnetic resonance spectrometer, Rev. Sci. Instrum. 78 (2007) 033103], referred to as the OPENCORE NMR spectrometer, is open to the public. The system is composed of an FPGA chip and several peripheral boards for USB communication, direct digital synthesis (DDS), RF transmission, signal acquisition, etc. Inside the FPGA chip, a number of digital modules have been implemented, including three pulse programmers, the digital part of the DDS, a digital quadrature demodulator, dual digital low-pass filters, and a PC interface. These FPGA core modules are written in VHDL, and their source codes are available on our website. This work aims at providing sufficient information with which one can, given some facility in circuit board manufacturing, reproduce the OPENCORE NMR spectrometer presented here. Users are also encouraged to modify the design of the spectrometer according to their own specific needs. A home-built NMR spectrometer can serve complementary roles to a sophisticated commercial spectrometer, should one come across new ideas that require heavy modification to the hardware inside the spectrometer. This work can lower the barrier to building a handmade NMR spectrometer in the laboratory, and promote novel and exciting NMR experiments.
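
    The digital quadrature demodulator and low-pass stage mentioned above follow a standard receiver structure; the Python/NumPy sketch below is a behavioural software model of that structure (not the spectrometer's VHDL), mixing the digitized signal down with a cosine/sine pair and decimating with a simple boxcar average.

      import numpy as np

      def quadrature_demodulate(signal, f_if, f_s, n_avg=16):
          # Mix with cos/sin at the intermediate frequency, then low-pass/decimate.
          t = np.arange(len(signal)) / f_s
          i = signal * np.cos(2 * np.pi * f_if * t)
          q = -signal * np.sin(2 * np.pi * f_if * t)
          trim = len(signal) - len(signal) % n_avg     # crude boxcar decimation
          i = i[:trim].reshape(-1, n_avg).mean(axis=1)
          q = q[:trim].reshape(-1, n_avg).mean(axis=1)
          return i + 1j * q                            # complex baseband FID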

  13. OPENCORE NMR: Open-source core modules for implementing an integrated FPGA-based NMR spectrometer

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuyuki

    2008-06-01

    A tool kit for implementing an integrated FPGA-based NMR spectrometer [K. Takeda, A highly integrated FPGA-based nuclear magnetic resonance spectrometer, Rev. Sci. Instrum. 78 (2007) 033103], referred to as the OPENCORE NMR spectrometer, is open to the public. The system is composed of an FPGA chip and several peripheral boards for USB communication, direct digital synthesis (DDS), RF transmission, signal acquisition, etc. Inside the FPGA chip, a number of digital modules have been implemented, including three pulse programmers, the digital part of the DDS, a digital quadrature demodulator, dual digital low-pass filters, and a PC interface. These FPGA core modules are written in VHDL, and their source codes are available on our website. This work aims at providing sufficient information with which one can, given some facility in circuit board manufacturing, reproduce the OPENCORE NMR spectrometer presented here. Users are also encouraged to modify the design of the spectrometer according to their own specific needs. A home-built NMR spectrometer can serve complementary roles to a sophisticated commercial spectrometer, should one come across new ideas that require heavy modification to the hardware inside the spectrometer. This work can lower the barrier to building a handmade NMR spectrometer in the laboratory, and promote novel and exciting NMR experiments.

  14. A knowledge-based design framework for airplane conceptual and preliminary design

    NASA Astrophysics Data System (ADS)

    Anemaat, Wilhelmus A. J.

    The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e. the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer-aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: due to less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller aircraft, business jets, airliners and UAVs to fighters. Data for the varied sizing methods will be compared with AAA results to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane

  15. Analysing task design and students' responses to context-based problems through different analytical frameworks

    NASA Astrophysics Data System (ADS)

    Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka

    2015-05-01

    Background: Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose: The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample: Upper secondary students (n=236) at the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both more conventional as well as context-based chemistry problems. Design and methods: A typical chemistry examination test has been analysed, first the test items in themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using the HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results: The empirical results show that both frameworks are suitable to identify students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items were also assessing lower order thinking. The combination of frameworks with the chemistry syllabus has been

  16. Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework

    ERIC Educational Resources Information Center

    Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R.

    2014-01-01

    In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…

  17. On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.

    PubMed

    Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea

    2016-09-01

    A smart home is a home environment enriched with sensing, actuation, communication and computation capabilities, which permits it to be adapted to the inhabitants' preferences and requirements. Establishing a proper actuation strategy in the home environment can require complex computational tasks on the sensed data. This is the case for activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, offering basic abstraction entities which allow smart home applications to be easily designed and implemented. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details about how to implement activity recognition on the CASE architecture are supplied, focusing on low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment.

  18. Alternate metal framework designs for the metal ceramic prosthesis to enhance the esthetics

    PubMed Central

    Vernekar, Naina Vilas; Jagadish, Prithviraj Kallahalla; Diwakar, Srinivasan; Nadgir, Ramesh

    2011-01-01

    PURPOSE The objective of the present study was to evaluate the effect of five different metal framework designs on the fracture resistance of metal-ceramic restorations. MATERIALS AND METHODS For the purpose of this study, a central incisor tooth was prepared, and a metal analogue of it and a master die were fabricated. The counter die with 0.5 mm clearance was used for fabricating the wax patterns for the metal copings. Metal copings with five different framework designs were prepared as Groups 1 to 5: Group 1 with a metal collar, and Groups 2, 3, 4 and 5 with 0 mm, 0.5 mm, 1 mm and 1.5 mm cervical metal reduction, respectively. A total of fifty metal-ceramic crown samples were fabricated. The fracture resistance was evaluated with a Universal Testing Machine (Instron model No 1011, UK). The basic data were subjected to statistical analysis by ANOVA and Student's t-test. RESULTS Results revealed that the fracture resistance ranged from 651.2 to 993.6 N/m². Group 1 showed the maximum and Group 5 the least value. CONCLUSION The maximum load required to fracture the test specimens, even in the groups without a metal collar, was found to exceed occlusal forces. Therefore, metal frameworks ending 0.5 mm and 1 mm short of the finish line are recommended for anterior metal-ceramic restorations, as they provide adequate fracture resistance. PMID:22053240

  19. An FPGA-based rapid prototyping platform for wavelet coprocessors

    NASA Astrophysics Data System (ADS)

    Vera, Alonzo; Meyer-Baese, Uwe; Pattichis, Marios

    2007-04-01

    MatLab/Simulink-based design flows are being used by DSP designers to improve the time-to-market of FPGA implementations. Commonly, digital signal processing cores are integrated in an embedded system as coprocessors. Existing CAD tools do not fully address the integration of a DSP coprocessor into an embedded system design. This integration might prove to be time consuming and error prone. It also requires that the DSP designer have an excellent knowledge of embedded systems and computer architecture details. We present a prototyping platform and design flow that allows rapid integration of embedded systems with a wavelet coprocessor. The platform comprises software and hardware modules that allow a DSP designer painless integration of a coprocessor with a PowerPC-based embedded system. The platform has a wide range of applications, from industrial to educational environments.

  20. An FPGA-based ultrasound imaging system using capacitive micromachined ultrasonic transducers.

    PubMed

    Wong, Lawrence L P; Chen, Albert I; Logan, Andrew S; Yeow, John T W

    2012-07-01

    We report the design and experimental results of a field-programmable gate array (FPGA)-based real-time ultrasound imaging system that uses a 16-element phased-array capacitive micromachined ultrasonic transducer fabricated using a fusion bonding process. The imaging system consists of the transducer, discrete analog components situated on a custom-made circuit board, the FPGA, and a monitor. The FPGA program consists of five functional blocks: a main counter, transmit and receive beamformer, receive signal pre-processing, envelope detection, and display. No dedicated digital signal processor or personal computer is required for the imaging system. An experiment is carried out to obtain the sector B-scan of a 4-wire target. The ultrasound imaging system demonstrates the possibility of an integrated system-in-a-package solution.
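
    The receive path of such a system is essentially delay-and-sum beamforming followed by envelope detection; the Python sketch below is a simplified software model of those two blocks (integer-sample delays only, hypothetical inputs), not the FPGA implementation itself.

      import numpy as np
      from scipy.signal import hilbert

      def delay_and_sum(channels, delays_samples):
          # Align each element's echo by its steering delay and sum into one scan line.
          out = np.zeros(channels.shape[1])
          for ch, d in zip(channels, delays_samples):
              out += np.roll(ch, -int(d))      # wrap-around ignored for brevity
          return out

      def envelope(line):
          # Envelope detection via the analytic signal (Hilbert transform).
          return np.abs(hilbert(line))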

  1. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2013-05-01

    A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.

  2. The FPGA based L1 track finding Tracklet approach

    NASA Astrophysics Data System (ADS)

    Kyriacou, Savvas; CMS Collaboration

    2017-01-01

    The High Luminosity upgraded LHC is expected to deliver proton-proton collisions every 25 ns with an estimated 140-200 pile-up interactions per bunch crossing. Ultrafast track finding is vital for handling trigger rates in such conditions. An FPGA-based road search algorithm, the Tracklet approach, one of a few currently under consideration for the CMS L1 trigger system, is developed. Based on low/high transverse momentum track discrimination and designed for the HL upgraded outer tracker, the algorithm achieves microsecond-scale track reconstruction in the expected high track multiplicity environment. The Tracklet method overview, implementation, hardware demonstrator and performance results are presented and discussed.

  3. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of

  4. Covalent organic frameworks: a materials platform for structural and functional designs

    NASA Astrophysics Data System (ADS)

    Huang, Ning; Wang, Ping; Jiang, Donglin

    2016-10-01

    Covalent organic frameworks (COFs) are a class of crystalline porous polymer that allows the atomically precise integration of organic units into extended structures with periodic skeletons and ordered nanopores. One important feature of COFs is that they are designable; that is, the geometry and dimensions of the building blocks can be controlled to direct the topological evolution of structural periodicity. The diversity of building blocks and covalent linkage topology schemes make COFs an emerging materials platform for structural control and functional design. Indeed, COF architectures offer confined molecular spaces for the interplay of photons, excitons, electrons, holes, ions and guest molecules, thereby exhibiting unique properties and functions. In this Review, we summarize the major progress in the field of COFs and recent achievements in developing new design principles and synthetic strategies. We highlight cutting-edge functional designs and identify fundamental issues that need to be addressed in conjunction with future research directions from chemistry, physics and materials perspectives.

  5. PRISM framework: a paradigm shift for designing, strengthening and evaluating routine health information systems

    PubMed Central

    Aqil, Anwer; Lippeveld, Theo; Hozumi, Dairiku

    2009-01-01

    The utility and effectiveness of routine health information systems (RHIS) in improving health system performance in developing countries has been questioned. This paper argues that the health system needs internal mechanisms to develop performance targets, track progress, and create and manage knowledge for continuous improvement. Based on documented RHIS weaknesses, we have developed the Performance of Routine Information System Management (PRISM) framework, an innovative approach to design, strengthen and evaluate RHIS. The PRISM framework offers a paradigm shift by putting emphasis on RHIS performance and incorporating the organizational, technical and behavioural determinants of performance. By describing causal pathways of these determinants, the PRISM framework encourages and guides the development of interventions for strengthening or reforming RHIS. Furthermore, it conceptualizes and proposes a methodology for measuring the impact of RHIS on health system performance. Ultimately, the PRISM framework, in spite of its challenges and competing paradigms, proposes a new agenda for building and sustaining information systems, for the promotion of an information culture, and for encouraging accountability in health systems. PMID:19304786

  6. Real-time windowing in imaging radar using FPGA technique

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique

    2005-02-01

    Imaging radar uses high-frequency electromagnetic waves reflected from different objects to estimate their parameters. Pulse compression is a standard signal processing technique used to minimize the peak transmission power, maximize the SNR, and obtain better resolution. Usually pulse compression can be achieved using a matched filter. The level of the side lobes in imaging radar can be reduced using special weighting-function processing. There are several well-known weighting functions: Hamming, Hanning, Blackman, Chebyshev, Blackman-Harris, Kaiser-Bessel, etc., widely used in signal processing applications. Field Programmable Gate Arrays (FPGAs) offer great benefits such as instantaneous implementation, dynamic reconfiguration, design flexibility, and field programmability. This reconfigurability makes FPGAs a better solution than custom-made integrated circuits. This work aims at demonstrating a reasonably flexible implementation of linear-FM signal generation and pulse compression using Matlab, Simulink, and System Generator. Employing the FPGA and the aforementioned software, we propose a pulse compression design on FPGA using classical and novel windowing techniques to reduce the side-lobe level. This permits increasing the ability to detect small or closely spaced targets in imaging radar. The parallelism that the FPGA offers in real-time processing makes it possible to realize the proposed algorithms. The paper also presents experimental results of the proposed windowing procedure in a marine radar with the following parameters: the signal is linear FM (chirp); the frequency deviation ΔF is 9.375 MHz; the pulse width T is 3.2 μs; the number of taps in the matched filter is 800; and the sampling frequency is 253.125 × 10⁶ Hz. The reduction of side-lobe levels was realized in real time, permitting better resolution of small targets.
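
    The sketch below (Python/NumPy) reproduces the basic processing chain with the parameters quoted above: a baseband linear-FM pulse, a Hamming-weighted matched filter (Hamming chosen arbitrarily from the listed windows), and correlation against a toy echo. It is an illustrative model only, not the System Generator design.

      import numpy as np

      fs = 253.125e6          # sampling frequency quoted in the experiment
      T = 3.2e-6              # pulse width
      df = 9.375e6            # frequency deviation of the linear FM (chirp) pulse

      t = np.arange(int(T * fs)) / fs                       # 810 samples, ~800 taps
      chirp = np.exp(1j * np.pi * (df / T) * t ** 2)        # baseband linear FM pulse
      window = np.hamming(len(chirp))                       # side-lobe weighting

      echo = np.concatenate([np.zeros(300), chirp, np.zeros(300)])  # toy received echo
      mf = np.conj(chirp * window)[::-1]                    # windowed matched filter
      compressed = np.abs(np.convolve(echo, mf))
      print(compressed.argmax())   # peak marks the target bin; the window lowers side lobes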

  7. Report of the Odyssey FPGA Independent Assessment Team

    NASA Technical Reports Server (NTRS)

    Mayer, Donald C.; Katz, Richard B.; Osborn, Jon V.; Soden, Jerry M.; Barto, R.; Day, John H. (Technical Monitor)

    2001-01-01

    An independent assessment team (IAT) was formed and met on April 2, 2001, at Lockheed Martin in Denver, Colorado, to aid in understanding a technical issue for the Mars Odyssey spacecraft scheduled for launch on April 7, 2001. An RP1280A field-programmable gate array (FPGA) from a lot of parts common to the SIRTF, Odyssey, and Genesis missions had failed on a SIRTF printed circuit board. A second FPGA from an earlier Odyssey circuit board was also known to have failed and was also included in the analysis by the IAT. Observations indicated an abnormally high failure rate for flight RP1280A devices (the first flight lot produced using this flow) at Lockheed Martin, and the causes of these failures were not determined. Standard failure analysis techniques were applied to these parts; however, additional diagnostic techniques unique to devices of this class were not used, and the parts were prematurely submitted to a destructive physical analysis, making a determination of the root cause of failure difficult. Any of several potential failure scenarios may have caused these failures, including electrostatic discharge, electrical overstress, manufacturing defects, board design errors, board manufacturing errors, FPGA design errors, or programmer errors. Several of these mechanisms would have relatively benign consequences for disposition of the parts currently installed on boards in the Odyssey spacecraft if established as the root cause of failure. However, other potential failure mechanisms could have more dire consequences. As there is no simple way to determine the likely failure mechanisms with reasonable confidence before Odyssey launch, it is not possible for the IAT to recommend a disposition for the other parts on boards in the Odyssey spacecraft based on sound engineering principles.

  8. A framework for evaluating and designing citizen science programs for natural resources monitoring.

    PubMed

    Chase, Sarah K; Levine, Arielle

    2016-06-01

    We present a framework of resource characteristics critical to the design and assessment of citizen science programs that monitor natural resources. To develop the framework we reviewed 52 citizen science programs that monitored a wide range of resources and provided insights into what resource characteristics are most conducive to developing citizen science programs and how resource characteristics may constrain the use or growth of these programs. We focused on 4 types of resource characteristics: biophysical and geographical, management and monitoring, public awareness and knowledge, and social and cultural characteristics. We applied the framework to 2 programs, the Tucson (U.S.A.) Bird Count and the Maui (U.S.A.) Great Whale Count. We found that resource characteristics such as accessibility, diverse institutional involvement in resource management, and social or cultural importance of the resource affected program endurance and success. However, the relative influence of each characteristic was in turn affected by the goals of the citizen science programs. Although the goals of public engagement and education sometimes complemented the goal of collecting reliable data, in many cases trade-offs must be made between these 2 goals. Program goals and priorities ultimately dictate the design of citizen science programs, but for a program to endure and successfully meet its goals, program managers must consider the diverse ways that the nature of the resource being monitored influences public participation in monitoring.

  9. Design of additive quantum codes via the code-word-stabilized framework

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Dumer, Ilya; Pryadko, Leonid P.

    2011-12-01

    We consider design of the quantum stabilizer codes via a two-step, low-complexity approach based on the framework of codeword-stabilized (CWS) codes. In this framework, each quantum CWS code can be specified by a graph and a binary code. For codes that can be obtained from a given graph, we give several upper bounds on the distance of a generic (additive or nonadditive) CWS code, and the lower Gilbert-Varshamov bound for the existence of additive CWS codes. We also consider additive cyclic CWS codes and show that these codes correspond to a previously unexplored class of single-generator cyclic stabilizer codes. We present several families of simple stabilizer codes with relatively good parameters.

  10. Molecular docking sites designed for the generation of highly crystalline covalent organic frameworks

    NASA Astrophysics Data System (ADS)

    Ascherl, Laura; Sick, Torben; Margraf, Johannes T.; Lapidus, Saul H.; Calik, Mona; Hettstedt, Christina; Karaghiosoff, Konstantin; Döblinger, Markus; Clark, Timothy; Chapman, Karena W.; Auras, Florian; Bein, Thomas

    2016-04-01

    Covalent organic frameworks (COFs) formed by connecting multidentate organic building blocks through covalent bonds provide a platform for designing multifunctional porous materials with atomic precision. As they are promising materials for applications in optoelectronics, they would benefit from a maximum degree of long-range order within the framework, which has remained a major challenge. We have developed a synthetic concept to allow consecutive COF sheets to lock in position during crystal growth, and thus minimize the occurrence of stacking faults and dislocations. Hereby, the three-dimensional conformation of propeller-shaped molecular building units was used to generate well-defined periodic docking sites, which guided the attachment of successive building blocks that, in turn, promoted long-range order during COF formation. This approach enables us to achieve a very high crystallinity for a series of COFs that comprise tri- and tetradentate central building blocks. We expect this strategy to be transferable to a broad range of customized COFs.

  11. Design and architecture of the Mars relay network planning and analysis framework

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Lee, C. H.

    2002-01-01

    In this paper we describe the design and architecture of the Mars Network planning and analysis framework that supports the generation and validation of efficient planning and scheduling strategies. The goals are to minimize the transmission time, minimize the delay time, and/or maximize the network throughput. The proposed framework would require (1) a client-server architecture to support interactive, batch, Web, and distributed analysis and planning applications for the relay network analysis scheme; (2) a high-fidelity modeling and simulation environment that expresses link capabilities between spacecraft and between spacecraft and Earth stations as time-varying resources, and captures spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints; and (3) an optimization methodology that casts the resource and constraint models into a standard linear and nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.
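
    As a toy example of item (3), the sketch below (Python with SciPy, entirely hypothetical numbers) casts a two-pass relay scheduling question as a small linear program: choose transmit minutes on each orbiter pass to maximize data returned, subject to per-pass visibility limits and the lander's total data volume.

      from scipy.optimize import linprog

      rate = [2.0, 1.5]        # Mb per minute on pass 1 and pass 2 (hypothetical)
      c = [-r for r in rate]   # linprog minimizes, so negate to maximize throughput
      A_ub = [[rate[0], rate[1]]]          # total data relayed
      b_ub = [500.0]                       # <= onboard data volume (Mb)
      bounds = [(0, 10), (0, 25)]          # minutes of visibility per pass
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      print(res.x, -res.fun)   # minutes to transmit on each pass, Mb returned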

  12. Metal-organic Frameworks as A Tunable Platform for Designing Functional Molecular Materials

    PubMed Central

    Wang, Cheng; Liu, Demin

    2013-01-01

    Metal-organic frameworks (MOFs), also known as coordination polymers, represent an interesting class of crystalline molecular materials that are synthesized by combining metal-connecting points and bridging ligands. The modular nature of and mild conditions for MOF synthesis have permitted the rational structural design of numerous MOFs and the incorporation of various functionalities via constituent building blocks. The resulting designer MOFs have shown promise for applications in a number of areas, including gas storage/separation, nonlinear optics/ferroelectricity, catalysis, energy conversion/storage, chemical sensing, biomedical imaging, and drug delivery. The structure-property relationships of MOFs can also be readily established by taking advantage of the knowledge of their detailed atomic structures, which enables fine-tuning of their functionalities for desired applications. Through the combination of molecular synthesis and crystal engineering, MOFs thus present an unprecedented opportunity for the rational and precise design of functional materials. PMID:23944646

  13. Fast analysis of glibenclamide and its impurities: quality by design framework in capillary electrophoresis method development.

    PubMed

    Furlanetto, Sandra; Orlandini, Serena; Pasquini, Benedetta; Caprini, Claudia; Mura, Paola; Pinzauti, Sergio

    2015-10-01

    A fast capillary zone electrophoresis method for the simultaneous analysis of glibenclamide and its impurities (I(A) and I(B)) in pharmaceutical dosage forms was fully developed within a quality by design framework. Critical quality attributes were represented by I(A) peak efficiency, critical resolution between glibenclamide and I(B), and analysis time. Experimental design was efficiently used for rapid and systematic method optimization. A 3⁵//16 symmetric screening matrix was chosen for investigation of the five selected critical process parameters throughout the knowledge space, and the results obtained were the basis for the planning of the subsequent response surface study. A Box-Behnken design for three factors allowed the contour plots to be drawn and the design space to be identified by introduction of the concept of probability. The design space corresponded to the multidimensional region where all the critical quality attributes reached the desired values with a degree of probability π ≥ 90%. Under the selected working conditions, the full separation of the analytes was obtained in less than 2 min. A full factorial design simultaneously allowed the design space to be validated and method robustness to be tested. A control strategy was finally implemented by means of a system suitability test. The method was fully validated and was applied to real samples of glibenclamide tablets.
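
    For readers unfamiliar with the response-surface step, the sketch below (Python) generates a generic coded-level Box-Behnken design for three factors, i.e. 12 factorial-edge runs plus replicated centre points; it illustrates the construction only and is not the paper's actual run order or factor ranges.

      from itertools import combinations, product

      def box_behnken(k, center_pts=3):
          # Every pair of factors takes the four (+/-1, +/-1) combinations while
          # the remaining factors sit at 0; centre points are appended at the end.
          runs = []
          for i, j in combinations(range(k), 2):
              for a, b in product((-1, 1), repeat=2):
                  pt = [0] * k
                  pt[i], pt[j] = a, b
                  runs.append(pt)
          runs += [[0] * k for _ in range(center_pts)]
          return runs

      design = box_behnken(3)      # 12 factorial-edge runs + 3 centre runs = 15
      print(len(design), design[:4])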

  14. Development of a multitechnology FPGA: a reconfigurable architecture for photonic information processing

    NASA Astrophysics Data System (ADS)

    Mal, Prosenjit; Toshniwal, Kavita; Hawk, Chris; Bhadri, Prashant R.; Beyette, Fred R., Jr.

    2004-06-01

    Over the years, Field Programmable Gate Arrays (FPGAs) have made a profound impact on the electronics industry with rapidly improving semiconductor-manufacturing technology ranging from sub-micron to deep sub-micron processes and equally innovative CAD tools. Though FPGAs have revolutionized programmable/reconfigurable digital logic technology, one limitation of current FPGAs is that the user is limited to strictly electronic designs. Thus, they are not suitable for applications that are not purely electronic, such as optical communications, photonic information processing systems and other multi-technology applications (e.g., analog devices, MEMS devices and microwave components). Over recent years, the growing trend has been towards the incorporation of non-traditional device technologies into traditional CMOS VLSI systems. The integration of these technologies requires a new kind of FPGA that can merge conventional FPGA technology with photonic and other multi-technology devices. The proposed new class of field programmable device will extend the flexibility, rapid prototyping and reusability benefits associated with conventional electronics into the photonic and multi-technology domains and give rise to the development of a wider class of programmable and embedded integrated systems. This new technology will create a tremendous opportunity for applying conventional programmable/reconfigurable hardware concepts in other disciplines such as photonic information processing. To substantiate this novel architectural concept, we have fabricated proof-of-concept CMOS VLSI Multi-technology FPGA (MT-FPGA) chips that include both digital field programmable logic blocks and threshold programmable photoreceivers which are suitable for sensing optical signals. Results from these chips strongly support the feasibility of this new optoelectronic device concept.

  15. Guiding the Design of Lessons by Using the MAPLET Framework: Matching Aims, Processes, Learner Expertise and Technologies

    ERIC Educational Resources Information Center

    Ifenthaler, Dirk; Gosper, Maree

    2014-01-01

    This paper introduces the MAPLET framework that was developed to map and link teaching aims, learning processes, learner expertise and technologies. An experimental study with 65 participants is reported to test the effectiveness of the framework as a guide to the design of lessons embedded within larger units of study. The findings indicate the…

  16. Preliminary Reading Literacy Assessment Framework: Foundation and Rationale for Assessment and System Design. Research Report. ETS RR-13-30

    ERIC Educational Resources Information Center

    Sabatini, John; O'Reilly, Tenaha; Deane, Paul

    2013-01-01

    This report describes the foundation and rationale for a framework designed to measure reading literacy. The aim of the effort is to build an assessment system that reflects current theoretical conceptions of reading and is developmentally sensitive across a prekindergarten to 12th grade student range. The assessment framework is intended to…

  17. FPGA implementation of vision algorithms for small autonomous robots

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  18. FHAST: FPGA-Based Acceleration of Bowtie in Hardware.

    PubMed

    Fernandez, Edward B; Villarreal, Jason; Lonardi, Stefano; Najjar, Walid A

    2015-01-01

    While the sequencing capability of modern instruments continues to increase exponentially, the computational problem of mapping short sequenced reads to a reference genome still constitutes a bottleneck in the analysis pipeline. A variety of mapping tools (e.g., Bowtie, BWA) is available for general-purpose computer architectures. These tools can take many hours or even days to deliver mapping results, depending on the number of input reads, the size of the reference genome and the number of allowed mismatches or insertions/deletions, making the mapping problem an ideal candidate for hardware acceleration. In this paper, we present FHAST (FPGA hardware accelerated sequence-matching tool), a drop-in replacement for Bowtie that uses a hardware design based on field programmable gate arrays (FPGA). Our architecture masks memory latency by executing multiple concurrent hardware threads accessing memory simultaneously. FHAST is composed of multiple parallel engines to exploit the parallelism available on an FPGA. We have implemented and tested FHAST on the Convey HC-1 and later ported it to the Convey HC-2ex, taking advantage of the large memory bandwidth available on these systems and the shared memory image between hardware and software. A preliminary version of FHAST running on the Convey HC-1 achieved up to a 70x speedup compared to Bowtie (single-threaded). An improved version of FHAST running on the Convey HC-2ex FPGAs achieved up to a 12x speedup compared to Bowtie running eight threads on an eight-core conventional architecture, while maintaining almost identical mapping accuracy. FHAST is a drop-in replacement for Bowtie, so it can be incorporated into any analysis pipeline that uses Bowtie (e.g., TopHat).

  19. Climate services for society: origins, institutional arrangements, and design elements for an evaluation framework

    PubMed Central

    Vaughan, Catherine; Dessai, Suraje

    2014-01-01

    Climate services involve the generation, provision, and contextualization of information and knowledge derived from climate research for decision making at all levels of society. These services are mainly targeted at informing adaptation to climate variability and change, widely recognized as an important challenge for sustainable development. This paper reviews the development of climate services, beginning with a historical overview, a short summary of improvements in climate information, and a description of the recent surge of interest in climate service development including, for example, the Global Framework for Climate Services, implemented by the World Meteorological Organization in October 2012. It also reviews institutional arrangements of selected emerging climate services across local, national, regional, and international scales. By synthesizing existing literature, the paper proposes four design elements of a climate services evaluation framework. These design elements include: problem identification and the decision-making context; the characteristics, tailoring, and dissemination of the climate information; the governance and structure of the service, including the process by which it is developed; and the socioeconomic value of the service. The design elements are intended to serve as a guide to organize future work regarding the evaluation of when and whether climate services are more or less successful. The paper concludes by identifying future research questions regarding the institutional arrangements that support climate services and nascent efforts to evaluate them. PMID:25798197

  20. Integrated circuit debug through FPGA emulation: application to a PIC-18 macrocell

    NASA Astrophysics Data System (ADS)

    Garcia-Valderas, Mario; de la Torre-Arnanz, Eduardo; Casado-Ortiz, Fernando; Entrena-Arrontes, Luis; Riesgo-Alcaide, Teresa

    2005-06-01

    FPGA emulation has become a common way to check whether a digital circuit has been correctly designed. Although in recent years FPGA vendors have developed tools to embed logic analysers along with circuits in FPGAs, such as ChipScope ILA from Xilinx, FPGA emulation still lacks more effective and versatile debug methods and tools. In order to check microprocessor system designs, several approaches have been used, including various combinations of logic simulators, instruction simulators, hardware emulators and in-circuit emulators. Nowadays, System-on-Chip design requires the implementation of microprocessor cores in FPGAs for prototyping. These cores do not usually include built-in debug features. In this paper, methods and tools for the development and operation of FPGA debug features are presented. Debug features are implemented in FPGAs through the insertion of JTAG-accessible debug modules into the target design. The debug modules that have already been designed offer features that range from simple event detection and signal monitoring to the most powerful and resource-consuming ones, such as tracing, complex event and sequence detection, and microprocessor in-circuit emulation. The most important properties of the presented debug features are their high configurability, which allows them to be adjusted to the available logic resources, remote control of the debug logic, and expandability by means of user-customized debug blocks. Tools have been developed to automate the required tasks: debug logic selection and configuration, debug logic insertion and debug logic operation. The proposed methods and tools have been applied to a microprocessor system based on a PIC-18 macrocell and implemented in a Xilinx Spartan-3 FPGA.

  1. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  2. Design, implementation and validation of a novel open framework for agile development of mobile health applications

    PubMed Central

    2015-01-01

    The delivery of healthcare services has experienced tremendous changes during the last years. Mobile health or mHealth is a key engine of advance in the forefront of this revolution. Although there exists a growing development of mobile health applications, there is a lack of tools specifically devised for their implementation. This work presents mHealthDroid, an open source Android implementation of a mHealth Framework designed to facilitate the rapid and easy development of mHealth and biomedical apps. The framework is particularly planned to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors and portable biomedical systems. These devices are increasingly used for the monitoring and delivery of personal health care and wellbeing. The framework implements several functionalities to support resource and communication abstraction, biomedical data acquisition, health knowledge extraction, persistent data storage, adaptive visualization, system management and value-added services such as intelligent alerts, recommendations and guidelines. An exemplary application is also presented along this work to demonstrate the potential of mHealthDroid. This app is used to investigate on the analysis of human behavior, which is considered to be one of the most prominent areas in mHealth. An accurate activity recognition model is developed and successfully validated in both offline and online conditions. PMID:26329639

  3. Design, implementation and validation of a novel open framework for agile development of mobile health applications.

    PubMed

    Banos, Oresti; Villalonga, Claudia; Garcia, Rafael; Saez, Alejandro; Damas, Miguel; Holgado-Terriza, Juan A; Lee, Sungyong; Pomares, Hector; Rojas, Ignacio

    2015-01-01

    The delivery of healthcare services has experienced tremendous changes during the last years. Mobile health or mHealth is a key engine of advance in the forefront of this revolution. Although there exists a growing development of mobile health applications, there is a lack of tools specifically devised for their implementation. This work presents mHealthDroid, an open source Android implementation of a mHealth Framework designed to facilitate the rapid and easy development of mHealth and biomedical apps. The framework is particularly planned to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors and portable biomedical systems. These devices are increasingly used for the monitoring and delivery of personal health care and wellbeing. The framework implements several functionalities to support resource and communication abstraction, biomedical data acquisition, health knowledge extraction, persistent data storage, adaptive visualization, system management and value-added services such as intelligent alerts, recommendations and guidelines. An exemplary application is also presented along this work to demonstrate the potential of mHealthDroid. This app is used to investigate on the analysis of human behavior, which is considered to be one of the most prominent areas in mHealth. An accurate activity recognition model is developed and successfully validated in both offline and online conditions.

  4. A taxonomy of apatite frameworks for the crystal chemical design of fuel cell electrolytes

    SciTech Connect

    Pramana, Stevin S.; Klooster, Wim T.; White, Timothy J.

    2008-08-15

    Apatite framework taxonomy succinctly rationalises the crystallographic modifications of this structural family as a function of chemical composition. Taking the neutral apatite [La8Sr2][(GeO4)6]O2 as a prototype electrolyte, this classification scheme correctly predicted that 'excess' oxygen in La9SrGe6O26.5 is tenanted in the framework as [La9Sr][(GeO4)5.5(GeO5)0.5]O2, rather than the presumptive tunnel location of [La9Sr][(GeO4)6]O2.5. The implication of this approach is that in addition to the three known apatite genera - A10(BO3)6X2, A10(BO4)6X2, A10(BO5)6X2 - hybrid electrolytes of the types A10(BO3/BO4/BO5)6X2 can be designed, with potentially superior low-temperature ion conduction, mediated by the introduction of oxygen to the framework reservoir. Graphical abstract: Apatite framework taxonomy succinctly rationalises the crystallographic modifications of this structural family as a function of chemical composition. Neutron diffraction identified that the excess oxygen in La9SrGe6O26.5 is tenanted in the framework as [La9Sr][(GeO4)5.5(GeO5)0.5]O2. The implication of this approach is that in addition to the three known apatite genera - A10(BO3)6X2, A10(BO4)6X2, A10(BO5)6X2 - hybrid electrolytes of the types A10(BO3/BO4/BO5)6X2 can be designed.

  5. Porting of an FPGA Based High Data Rate DVB-S2 Modulator

    DTIC Science & Technology

    2011-06-13

    maximum portability and scalability. The VHDL and software were partitioned such that all time-critical signal processing was handled by the FPGA. It ... The VHDL Design: The HDR DVB-S2 VHDL design was partitioned into small, manageable waveform-specific modules and FPGA-specific modules. The intention ... S2 modulator. The portion of the design located within the Core Transmitter box represents the waveform-specific modules. The HDR DVB-S2 VHDL

  6. Testing Microshutter Arrays Using Commercial FPGA Hardware

    NASA Technical Reports Server (NTRS)

    Rapchun, David

    2008-01-01

    NASA is developing micro-shutter arrays for the Near Infrared Spectrometer (NIRSpec) instrument on the James Webb Space Telescope (JWST). These micro-shutter arrays allow NIRSpec to do Multi-Object Spectroscopy, a key part of the mission. Each array consists of 62414 individual 100 x 200 micron shutters. These shutters are magnetically opened and held electrostatically. Individual shutters are then programmatically closed using a simple row/column addressing technique. A common approach to providing these data/clock patterns is to use a Field Programmable Gate Array (FPGA). Such devices require complex VHSIC Hardware Description Language (VHDL) programming and custom electronic hardware. Due to JWST's rapid schedule on the development of the micro-shutters, rapid changes were required to the FPGA code to accommodate new approaches being discovered to optimize the array performance. Such rapid changes simply could not be made using conventional VHDL programming. Subsequently, National Instruments introduced an FPGA product that could be programmed through a LabVIEW interface. Because LabVIEW programming is considerably easier than VHDL programming, this method was adopted and brought success. The software/hardware allowed rapid changes to the FPGA code and timely collection of new micro-shutter array performance data. As a result, numerous labor hours and project funds were conserved.
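
    The row/column addressing technique mentioned above can be modeled in a few lines of software; the sketch below (array dimensions and the open-shutter mask are hypothetical, and this is not the LabVIEW FPGA code) simply walks the mask and emits the addresses of shutters to close:

    ```python
    # Sketch of simple row/column addressing over a shutter mask (illustrative only,
    # not the flight code; array geometry is hypothetical).
    import numpy as np

    rows, cols = 200, 312                      # hypothetical array geometry
    keep_open = np.zeros((rows, cols), bool)   # True where a shutter should stay open
    keep_open[10, 20] = keep_open[42, 100] = True

    def close_sequence(mask):
        """Yield (row, column) addresses for every shutter that must be closed."""
        for r in range(mask.shape[0]):         # address one row at a time
            for c in np.flatnonzero(~mask[r]):
                yield r, int(c)

    print("shutters to close:", sum(1 for _ in close_sequence(keep_open)))
    ```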

  7. FPGA Sequencer for Radar Altimeter Applications

    NASA Technical Reports Server (NTRS)

    Berkun, Andrew C.; Pollard, Brian D.; Chen, Curtis W.

    2011-01-01

    A sequencer for a radar altimeter provides accurate attitude information for a reliable soft landing of the Mars Science Laboratory (MSL). This is a field-programmable-gate-array (FPGA)-only implementation. A table loaded externally into the FPGA controls timing, processing, and decision structures. The radar is memoryless and does not use previous acquisitions to assist in the current acquisition. All cycles complete in exactly 50 milliseconds, regardless of range or whether a target was found. A RAM (random access memory) within the FPGA holds instructions for up to 15 sets. For each set, timing is run, echoes are processed, and a comparison is made. If a target is seen, more detailed processing is run on that set. If no target is seen, the next set is tried. When all sets have been run, the FPGA terminates and waits for the next 50-millisecond event. This setup simplifies testing and improves reliability. A single Virtex chip does the work of an entire assembly. Output products require minor processing to become range and velocity. This technology is the heart of the Terminal Descent Sensor, which is an integral part of the Entry, Descent, and Landing system for MSL. In addition, it is a strong candidate for manned landings on Mars or the Moon.
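
    A behavioral software sketch of the table-driven cycle described above is given below (the set parameters, thresholds and the echo model are hypothetical; the real logic runs entirely inside the FPGA within a fixed 50 ms budget):

    ```python
    # Behavioral sketch of a table-driven acquisition cycle (illustrative, not the MSL design).
    from dataclasses import dataclass

    @dataclass
    class AcqSet:
        name: str
        gate_start_us: float      # hypothetical range-gate parameters
        gate_len_us: float
        threshold: float

    TABLE = [AcqSet("near", 5.0, 10.0, 0.8),
             AcqSet("mid", 20.0, 40.0, 0.5),
             AcqSet("far", 80.0, 160.0, 0.3)]   # up to 15 sets in the real table

    def run_cycle(echo_power):
        """Try each set in order; return the first set whose echo exceeds its threshold."""
        for s in TABLE:
            if echo_power(s) > s.threshold:
                return s                  # detailed processing would run on this set
        return None                       # no target found this cycle

    hit = run_cycle(lambda s: 0.6 if s.name == "mid" else 0.1)
    print("target found in set:", hit.name if hit else "none")
    ```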

  8. Experiences on 64 and 150 FPGA Systems

    SciTech Connect

    Storaasli, Olaf O; Strenski, Dave

    2008-01-01

    Four FPGA systems were evaluated: the Cray XD1 system with 6 FPGAs at ORNL and Cray, the Cray XD1 system with 150 FPGAs at NRL*, and the 64 FPGAs on Edinburgh's Maxwell. Their hardware and software architectures, programming tools and performance on scientific applications are discussed. FPGA speedup (over a 2.2 GHz Opteron) of 10X was typical for matrix equation solution, molecular dynamics and weather/climate codes, and up to 100X for human genome DNA sequencing. Large genome comparisons requiring 12.5 years for an Opteron took less than 24 hours on NRL's Cray XD1 with 150 Virtex FPGAs for a 7,350X speedup. The comparison pipeline matches each query and database character in parallel, resulting in a table of scores. Genome Sequencing Results: FPGA timing results (for up to 150 FPGAs) were obtained and compared with up to 150 Opterons for sequences of varying size and complexity (e.g., the 4 GB openfpga.org human DNA benchmark and 155M human vs. 166M mouse DNA). Single-FPGA results for a Bacillus anthracis DNA comparison are also reported.
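
    The "table of scores" computation described above is characteristic of Smith-Waterman-style sequence comparison; assuming that style of scoring (the abstract does not give the exact pipeline details), a minimal software sketch of the recurrence is:

    ```python
    # Minimal Smith-Waterman local-alignment score matrix (illustrative; an FPGA version
    # streams database characters through a pipeline instead of filling the matrix serially).
    def sw_score(query, db, match=2, mismatch=-1, gap=-2):
        rows, cols = len(query) + 1, len(db) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i - 1][j - 1] + (match if query[i - 1] == db[j - 1] else mismatch)
                H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    print(sw_score("ACACACTA", "AGCACACA"))   # small demo sequences, hypothetical scoring values
    ```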

  9. Systematic review of enriched enrolment, randomised withdrawal trial designs in chronic pain: a new framework for design and reporting.

    PubMed

    Moore, R Andrew; Wiffen, Philip J; Eccleston, Christopher; Derry, Sheena; Baron, Ralf; Bell, Rae F; Furlan, Andrea D; Gilron, Ian; Haroutounian, Simon; Katz, Nathaniel P; Lipman, Arthur G; Morley, Stephen; Peloso, Paul M; Quessy, Steve N; Seers, Kate; Strassels, Scott A; Straube, Sebastian

    2015-08-01

    Enriched enrolment, randomised withdrawal (EERW) pain trials select, before randomisation, patients who respond by demonstrating a predetermined degree of pain relief and acceptance of adverse events. There is uncertainty over the value of this design. We report a systematic review of EERW trials in chronic noncancer pain together with a critical appraisal of methods and potential biases in the methods used and recommendations for the design and reporting of future EERW trials. Electronic and other searches found 25 EERW trials published between 1995 and June 2014, involving 5669 patients in a randomised withdrawal phase comparing drug with placebo; 13 (median, 107 patients) had a randomised withdrawal phase of 6 weeks or less, and 12 (median, 334) lasted 12 to 26 weeks. Risks of bias included short duration, inadequate outcome definition, incomplete outcome data reporting, small size, and inadequate dose tapering on randomisation to placebo. Active treatment was usually better than placebo (22/25 trials). This review reduces the uncertainty around the value of EERW trials in pain. If properly designed, conducted, and reported, they are feasible and useful for making decisions about pain therapies. Shorter, small studies can be explanatory; longer, larger studies can inform practice. Current evidence is inadequate for valid comparisons in outcome between EERW and classical trials, although no gross differences were found. This systematic review provides a framework for assessing potential biases and the value of the EERW trials, and for the design of future studies by making recommendations for the conduct and reporting of EERW trials.

  10. FPGA Based High Speed Data Acquisition System for Electrical Impedance Tomography.

    PubMed

    Khan, S; Borsic, A; Manwaring, Preston; Hartov, Alexander; Halter, Ryan

    2013-03-01

    Electrical Impedance Tomography (EIT) systems are used to image tissue bio-impedance. EIT provides a number of features making it attractive for use as a medical imaging device including the ability to image fast physiological processes (>60 Hz), to meet a range of clinical imaging needs through varying electrode geometries and configurations, to impart only non-ionizing radiation to a patient, and to map the significant electrical property contrasts present between numerous benign and pathological tissues. To leverage these potential advantages for medical imaging, we developed a modular 32 channel data acquisition (DAQ) system using National Instruments' PXI chassis, along with FPGA, ADC, Signal Generator and Timing and Synchronization modules. To achieve high frame rates, signal demodulation and spectral characteristics of higher order harmonics were computed using dedicated FFT-hardware built into the FPGA module. By offloading the computing onto FPGA, we were able to achieve a reduction in throughput required between the FPGA and PC by a factor of 32:1. A custom designed analog front end (AFE) was used to interface electrodes with our system. Our system is wideband, and capable of acquiring data for input signal frequencies ranging from 100 Hz to 12 MHz. The modular design of both the hardware and software will allow this system to be flexibly configured for the particular clinical application.
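
    The demodulation offloaded to the FPGA's FFT hardware amounts to reading the complex spectrum at the drive frequency and its harmonics; a software sketch of that step on a synthetic frame (sample rate, drive frequency and frame length are hypothetical):

    ```python
    # Sketch: extract amplitude/phase of the fundamental and harmonics from a sampled frame,
    # the same computation the abstract describes offloading to FFT hardware in the FPGA.
    import numpy as np

    fs, f0, n = 1_024_000, 10_000, 4096          # hypothetical sample rate, drive freq, frame size
    t = np.arange(n) / fs
    frame = (0.5 * np.sin(2 * np.pi * f0 * t + 0.3)
             + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
             + 0.01 * np.random.randn(n))

    spectrum = np.fft.rfft(frame) / (n / 2)       # scale so bins read out peak amplitude
    freqs = np.fft.rfftfreq(n, 1 / fs)
    for harmonic in (1, 2, 3):
        k = int(round(harmonic * f0 * n / fs))    # bin index of the harmonic
        print(f"{freqs[k] / 1e3:6.1f} kHz  amp={abs(spectrum[k]):.3f}  "
              f"phase={np.angle(spectrum[k]):+.2f} rad")
    ```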

  11. FPGA Based High Speed Data Acquisition System for Electrical Impedance Tomography

    PubMed Central

    Khan, S; Borsic, A; Manwaring, Preston; Hartov, Alexander; Halter, Ryan

    2014-01-01

    Electrical Impedance Tomography (EIT) systems are used to image tissue bio-impedance. EIT provides a number of features making it attractive for use as a medical imaging device including the ability to image fast physiological processes (>60 Hz), to meet a range of clinical imaging needs through varying electrode geometries and configurations, to impart only non-ionizing radiation to a patient, and to map the significant electrical property contrasts present between numerous benign and pathological tissues. To leverage these potential advantages for medical imaging, we developed a modular 32 channel data acquisition (DAQ) system using National Instruments’ PXI chassis, along with FPGA, ADC, Signal Generator and Timing and Synchronization modules. To achieve high frame rates, signal demodulation and spectral characteristics of higher order harmonics were computed using dedicated FFT-hardware built into the FPGA module. By offloading the computing onto FPGA, we were able to achieve a reduction in throughput required between the FPGA and PC by a factor of 32:1. A custom designed analog front end (AFE) was used to interface electrodes with our system. Our system is wideband, and capable of acquiring data for input signal frequencies ranging from 100 Hz to 12 MHz. The modular design of both the hardware and software will allow this system to be flexibly configured for the particular clinical application. PMID:24729790

  12. Disseminating maternal health information to rural women: a user centered design framework.

    PubMed

    Parmar, Vikram

    2010-11-13

    The delivery of primary health information to rural women is a considerable challenge for government and private sectors in rural India. This paper illustrates how, by applying the proposed user centered framework, the dissemination of maternal health information to rural women can be improved. First, the paper presents a baseline study to obtain the existing knowledge level of women and the design requirements for a Primary Health Information System (PHIS). Second, the paper presents a brief description of the PHIS, which was deployed in a village in rural India for sixteen months. Third, the paper describes a longitudinal study conducted after the PHIS intervention to measure the impact of PHIS on the knowledge level and health behaviour of rural women in comparison to the baseline study. The results indicate that by following the proposed user centered approach to designing the PHIS, a significant improvement in the knowledge level of rural women and positive changes in health practices are achieved.

  13. Expanding lean thinking to the product and process design and development within the framework of sustainability

    NASA Astrophysics Data System (ADS)

    Sorli, M.; Sopelana, A.; Salgado, M.; Pelaez, G.; Ares, E.

    2012-04-01

    Companies require tools to change towards a new way of developing and producing innovative products to be manufactured considering the economic, social and environmental impact along the product life cycle. By translating Lean principles into Product Development (PD) from the design stage onwards and along the entire product life cycle, the approach aims to address both sustainability and environmental issues. The drivers of a sustainable culture within lean PD have been identified, and a baseline for future research on the development of appropriate tools and techniques has been provided. This research provides industry with a framework which balances environmental and sustainability factors with lean principles, to be considered and incorporated from the beginning of product design and development and covering the entire product life cycle.

  14. SBROME: a scalable optimization and module matching framework for automated biosystems design.

    PubMed

    Huynh, Linh; Tsoukalas, Athanasios; Köppe, Matthias; Tagkopoulos, Ilias

    2013-05-17

    The development of a scalable framework for biodesign automation is a formidable challenge given the expected increase in part availability and the ever-growing complexity of synthetic circuits. To allow for (a) the use of previously constructed and characterized circuits or modules and (b) the implementation of designs that can scale up to hundreds of nodes, we here propose a divide-and-conquer Synthetic Biology Reusable Optimization Methodology (SBROME). An abstract user-defined circuit is first transformed and matched against a module database that incorporates circuits that have previously been experimentally characterized. Then the resulting circuit is decomposed to subcircuits that are populated with the set of parts that best approximate the desired function. Finally, all subcircuits are subsequently characterized and deposited back to the module database for future reuse. We successfully applied SBROME toward two alternative designs of a modular 3-input multiplexer that utilize pre-existing logic gates and characterized biological parts.

  15. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  16. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    SciTech Connect

    Wurtz, R.; Kaplan, A.

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully realized statistical classifiers rely on a comprehensive set of tools for designing, building, and implementing them. PSD advances rely on improvements to the implemented algorithm, which can draw on conventional statistical classifier or machine learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.
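
    The recommended ROC/GRR reporting can be computed directly from classifier scores; a sketch with synthetic PSD score distributions (all numbers made up) reads off the neutron acceptance at a chosen gamma rejection rate:

    ```python
    # Sketch: ROC-style threshold sweep of PSD scores and the operating point at a fixed
    # gamma rejection rate (score distributions are synthetic, not detector data).
    import numpy as np

    rng = np.random.default_rng(0)
    gamma_scores = rng.normal(0.20, 0.05, 10_000)     # synthetic PSD scores for gammas
    neutron_scores = rng.normal(0.35, 0.05, 10_000)   # synthetic PSD scores for neutrons

    thresholds = np.linspace(0, 0.6, 601)
    gamma_rej = [(gamma_scores < t).mean() for t in thresholds]       # fraction of gammas rejected
    neutron_acc = [(neutron_scores >= t).mean() for t in thresholds]  # fraction of neutrons kept

    target_grr = 0.999                                # hypothetical application-relevant GRR
    i = next(k for k, g in enumerate(gamma_rej) if g >= target_grr)
    print(f"threshold={thresholds[i]:.3f}  gamma rejection={gamma_rej[i]:.4f}  "
          f"neutron acceptance={neutron_acc[i]:.4f}")
    ```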

  17. Rational design of crystalline supermicroporous covalent organic frameworks with triangular topologies

    NASA Astrophysics Data System (ADS)

    Dalapati, Sasanka; Addicoat, Matthew; Jin, Shangbin; Sakurai, Tsuneaki; Gao, Jia; Xu, Hong; Irle, Stephan; Seki, Shu; Jiang, Donglin

    2015-07-01

    Covalent organic frameworks (COFs) are an emerging class of highly ordered porous polymers with many potential applications. They are currently designed and synthesized through hexagonal and tetragonal topologies, limiting the access to and exploration of new structures and properties. Here, we report that a triangular topology can be developed for the rational design and synthesis of a new class of COFs. The triangular topology features small pore sizes down to 12 Å, which is among the smallest pores for COFs reported to date, and high π-column densities of up to 0.25 nm^-2, which exceeds those of supramolecular columnar π-arrays and other COF materials. These crystalline COFs facilitate π-cloud delocalization and are highly conductive, with a hole mobility that is among the highest reported for COFs and polygraphitic ensembles.

  18. Design of a framework for modeling, integration and simulation of physiological models.

    PubMed

    Erson, E; Cavusoglu, M

    2010-01-01

    Modeling and simulation of physiological processes deal with the challenges of multiscale models in which coupling is very high within and among scales. Information technology approaches, together with related analytical and computational tools, will help to deal with these challenges. The Physiological Model Simulation, Integration and Modeling Framework, Phy-SIM, provides a modeling environment which will help to cultivate various approaches to deal with the inherent problems of multiscale modeling of physiological systems. In this paper, we present the modular design of Phy-SIM. The proposed layered design of Phy-SIM separates structure from function in physiological processes, advocating modular thinking in developing and integrating physiological models. Moreover, the ontology-based architecture will improve the modeling process through mechanisms to attach anatomical and physiological ontological information to the models. The ultimate aim of the proposed approaches is to enhance the physiological model development and integration processes by providing the tools and mechanisms in Phy-SIM.

  19. A Framework of Working Across Disciplines in Early Design and R&D of Large Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria Rivas; Papalambros, Panos Y.; Baker, Wayne E.

    2015-01-01

    This paper examines four primary methods of working across disciplines during R&D and early design of large-scale complex engineered systems such as aerospace systems. A conceptualized framework, called the Combining System Elements framework, is presented to delineate several aspects of cross-discipline and system integration practice. The framework is derived from a theoretical and empirical analysis of current work practices in actual operational settings and is informed by theories from organization science and engineering. The explanatory framework may be used by teams to clarify assumptions and associated work practices, which may reduce ambiguity in understanding diverse approaches to early systems research, development and design. The framework also highlights that very different engineering results may be obtained depending on work practices, even when the goals for the engineered system are the same.

  20. Research and development of infrared object detection system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhui; He, Jianwei; Wang, Pengpeng; Li, Fan

    2009-07-01

    Infrared object detection is an important technique in digital image processing. It is widely used in automatic navigation, intelligent video surveillance systems, traffic detection, medical image processing, etc. Infrared object detection systems require large storage and high-speed processing technology. The current development trend is toward systems that can be realized in hardware in real time with fewer operations and higher performance. As a major type of large-scale programmable integrated circuit, the field programmable gate array (FPGA) can meet all the requirements of high-speed image processing, with the characteristics of simple algorithm realization, easy programming, good portability and inheritability. Better results can therefore be obtained by applying FPGAs to infrared object detection. According to these requirements, the infrared object detection system is designed on an FPGA. By analyzing some of the main algorithms of object detection, two new object detection algorithms called the integral compare algorithm (ICA) and the gradual approach centroid algorithm (GACA) are presented. Implementing the system on an FPGA provides high-speed processing in hardware, which brings the advantages of both performance and flexibility. ICA is a new type of denoising algorithm with the advantage of lower computational complexity and less execution time. More importantly, this algorithm can be implemented in an FPGA expediently. Based on the image preprocessing of ICA, GACA brings high positioning precision with the advantage of insensitivity to the initial value and fewer convergence iterations. The experiments indicate that the infrared object detection system can detect infrared objects at high speed in real time, with high antijamming ability and high precision. The Verilog-HDL design and its architecture are introduced in this paper. Considering the engineering application, the paper also gives the particular design idea and the flow of this method.

  1. Valuation-Based Framework for Considering Distributed Generation Photovoltaic Tariff Design: Preprint

    SciTech Connect

    Zinaman, O. R.; Darghouth, N. R.

    2015-02-01

    While an export tariff is only one element of a larger regulatory framework for distributed generation, we choose to focus on tariff design because of the significant impact this program design component has on the various flows of value among power sector stakeholders. In that context, this paper is organized as a series of steps that can be taken during the design of a DGPV export tariff. To that end, this paper outlines a holistic, high-level approach to the complex undertaking of DGPV tariff design, the crux of which is an iterative cost-benefit analysis (CBA) process. We propose a multi-step progression that aims to promote transparent, focused, and informed dialogue on CBA study methodologies and assumptions. When studies are completed, the long-run marginal avoided cost of the DGPV program should be compared against the costs imposed on utilities and non-participating customers, recognizing that these can be defined differently depending on program objectives. The results of this comparison can then be weighed against other program objectives to formulate tariff options. Potential changes to tariff structures can be iteratively fed back into established analytical tools to inform further discussions.

  2. A supermolecular building approach for the design and construction of metal-organic frameworks.

    PubMed

    Guillerm, Vincent; Kim, Dongwook; Eubank, Jarrod F; Luebke, Ryan; Liu, Xinfang; Adil, Karim; Lah, Myoung Soo; Eddaoudi, Mohamed

    2014-08-21

    In this review, we describe two recently implemented conceptual approaches facilitating the design and deliberate construction of metal–organic frameworks (MOFs), namely supermolecular building block (SBB) and supermolecular building layer (SBL) approaches. Our main objective is to offer an appropriate means to assist/aid chemists and material designers alike to rationally construct desired functional MOF materials, made-to-order MOFs. We introduce the concept of net-coded building units (net-cBUs), where precise embedded geometrical information codes uniquely and matchlessly a selected net, as a compelling route for the rational design of MOFs. This concept is based on employing pre-selected 0-periodic metal–organic polyhedra or 2-periodic metal–organic layers, SBBs or SBLs respectively, as a pathway to access the requisite net-cBUs. In this review, inspired by our success with the original rht-MOF, we extrapolated our strategy to other known MOFs via their deconstruction into more elaborate building units (namely polyhedra or layers) to (i) elucidate the unique relationship between edge-transitive polyhedra or layers and minimal edge-transitive 3-periodic nets, and (ii) illustrate the potential of the SBB and SBL approaches as a rational pathway for the design and construction of 3-periodic MOFs. Using this design strategy, we have also identified several new hypothetical MOFs which are synthetically targetable.

  3. A framework for collecting inclusive design data for the UK population.

    PubMed

    Langdon, Pat; Johnson, Daniel; Huppert, Felicia; Clarkson, P John

    2015-01-01

    Successful inclusive product design requires knowledge about the capabilities, needs and aspirations of potential users and should cater for the different scenarios in which people will use products, systems and services. This should include the individual at home; in the workplace; for businesses; and for products in these contexts. It needs to reflect the development of theory, tools and techniques as research moves on. It must also draw in wider psychological, social, and economic considerations in order to gain a more accurate understanding of users' interactions with products and technology. However, recent research suggests that although a number of national disability surveys have been carried out, no such knowledge currently exists as information to support the design of products, systems and services for heterogeneous users. This paper outlines the strategy behind specific inclusive design research that is aimed at creating the foundations for measuring inclusion in product designs. A key outcome of this future research will be specifying and operationalising capability, psychological, social and economic context measures for inclusive design. This paper proposes a framework for capturing such information, describes an early pilot study, and makes recommendations for better practice.

  4. Guest molecules as a design element for metal–organic frameworks

    DOE PAGES

    Allendorf, Mark D.; Medishetty, Raghavender; Fischer, Roland A.

    2016-11-07

    The well-known synthetic versatility of MOFs is rooted in the ability to predict the metal ion coordination geometry and the vast possibilities to use organic chemistry to modify the linker groups. However, the use of “non-innocent” guest molecules as a component of framework design has been largely ignored. Nevertheless, recent reports show that the presence of guest molecules can have dramatic effects, even when these are seemingly innocuous species such as water or polar solvents. Advantages of using guests to impart new properties to MOFs include the relative ease of introducing new functionalities, the ability to modify the properties of the material at will by removing the guest or inserting different ones, and avoidance of the difficulties associated with synthesizing new frameworks, which can be challenging even when the basic topology remains constant. In this article we describe the “Guest@MOF” concept and provide examples illustrating its potential as a new MOF design element.

  5. Framework for Integrating Safety, Operations, Security, and Safeguards in the Design and Operation of Nuclear Facilities

    SciTech Connect

    Darby, John L.; Horak, Karl Emanuel; LaChance, Jeffrey L.; Tolk, Keith Michael; Whitehead, Donnie Wayne

    2007-10-01

    The US is currently on the brink of a nuclear renaissance that will result in near-term construction of new nuclear power plants. In addition, the Department of Energy’s (DOE) ambitious new Global Nuclear Energy Partnership (GNEP) program includes facilities for reprocessing spent nuclear fuel and reactors for transmuting safeguards material. The use of nuclear power and material has inherent safety, security, and safeguards (SSS) concerns that can impact the operation of the facilities. Recent concern over terrorist attacks and nuclear proliferation led to an increased emphasis on security and safeguard issues as well as the more traditional safety emphasis. To meet both domestic and international requirements, nuclear facilities include specific SSS measures that are identified and evaluated through the use of detailed analysis techniques. In the past, these individual assessments have not been integrated, which led to inefficient and costly design and operational requirements. This report provides a framework for a new paradigm where safety, operations, security, and safeguards (SOSS) are integrated into the design and operation of a new facility to decrease cost and increase effectiveness. Although the focus of this framework is on new nuclear facilities, most of the concepts could be applied to any new, high-risk facility.

  6. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
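
    A linear-kernel SVM decision reduces to one dot product per class, which is what makes it attractive for hardware acceleration; a software sketch of the per-pixel classification step is given below (the weights, biases, band count and margin rule are hypothetical placeholders, not the EO-1 model):

    ```python
    # Sketch: per-pixel linear-SVM classification of multispectral data (one-vs-rest).
    # Weights, biases, band count and margin rule are hypothetical, not the EO-1 model.
    import numpy as np

    classes = ["snow", "water", "ice", "land", "cloud"]
    n_bands = 6
    rng = np.random.default_rng(1)
    W = rng.normal(size=(len(classes), n_bands))     # one weight vector per class
    b = rng.normal(size=len(classes))

    pixels = rng.normal(size=(512 * 512, n_bands))   # image flattened to (pixels, bands)
    scores = pixels @ W.T + b                        # linear kernel: just a matrix multiply
    labels = np.where(scores.max(axis=1) > 0,        # below margin -> unclassified
                      scores.argmax(axis=1), -1)
    print("class counts:", {c: int((labels == i).sum()) for i, c in enumerate(classes)},
          "unclassified:", int((labels == -1).sum()))
    ```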

  7. Design of a digital beam attenuation system for computed tomography: Part I. System design and simulation framework

    SciTech Connect

    Szczykutowicz, Timothy P.; Mistretta, Charles A.

    2013-02-15

    Purpose: The purpose of this work is to introduce a new device that allows for patient-specific imaging-dose modulation in conventional and cone-beam CT. The device is called a digital beam attenuator (DBA). The DBA modulates an x-ray beam by varying the attenuation of a set of attenuating wedge filters across the fan angle. The ability to modulate the imaging dose across the fan beam represents another stride in the direction of personalized medicine. With the DBA, imaging dose can be tailored for a given patient anatomy, or even tailored to provide signal-to-noise ratio enhancement within a region of interest. This modulation enables decreases in dose, scatter, detector dynamic-range requirements, and noise nonuniformities. In addition to introducing the DBA, the simulation framework used to study the DBA under different configurations is presented. Finally, a detailed study of the choice of the material used to build the DBA is presented. Methods: To change the attenuator thickness, the authors propose to use an overlapping wedge design. In this design, for each wedge pair, one wedge is held stationary and another wedge is moved over the stationary wedge. The composite thickness of the two wedges changes as a function of the amount of overlap between the wedges. To validate the DBA concept and study design changes, a simulation environment was constructed. The environment allows for changes to the system geometry, different source spectra, and DBA wedge design modifications, and supports both voxelized and analytic phantom models. Elements from atomic number 1 to 92 were evaluated for use as the DBA filter material. The dynamic range and tube loading for each element were calculated for various DBA designs. Tube loading was calculated by comparing the attenuation of the DBA at its minimum attenuation position to a filtered non-DBA acquisition. Results: The design and parametrization of DBA-implemented FFMCT have been introduced. A simulation
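
    The overlapping-wedge mechanism lends itself to a simple attenuation model; the sketch below (single-energy Beer-Lambert, with all thicknesses, slopes and attenuation coefficients hypothetical rather than taken from the paper) shows how composite thickness and transmission vary with overlap:

    ```python
    # Sketch: composite attenuator thickness vs. wedge overlap and the resulting transmission
    # (Beer-Lambert, single energy; all numbers are hypothetical, not the published DBA geometry).
    import numpy as np

    base_thickness_mm = 0.5        # composite thickness with no overlap
    slope_mm_per_mm = 0.2          # added composite thickness per mm of overlap
    mu_per_mm = 0.5                # hypothetical linear attenuation coefficient (1/mm)

    def transmission(overlap_mm):
        thickness = base_thickness_mm + slope_mm_per_mm * overlap_mm
        return thickness, np.exp(-mu_per_mm * thickness)

    for overlap in (0.0, 2.0, 5.0, 10.0):
        t, tr = transmission(overlap)
        print(f"overlap={overlap:4.1f} mm  thickness={t:4.1f} mm  transmission={tr:.3f}")
    ```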

  8. An FPGA-based open platform for ultrasound biomicroscopy.

    PubMed

    Qiu, Weibao; Yu, Yanyan; Tsang, Fu; Sun, Lei

    2012-07-01

    Ultrasound biomicroscopy (UBM) has been extensively applied to preclinical studies in small animal models. Each animal study is unique and requires different utilization of the UBM system to accommodate different transducer characteristics, data acquisition strategies, signal processing, and image reconstruction methods. There is a demand for a flexible and open UBM platform that allows users to customize the system for various studies and have full access to experimental data. This paper presents the development of an open UBM platform (center frequency 20 to 80 MHz) for various preclinical studies. The platform design was based on a field-programmable gate array (FPGA) embedded in a printed circuit board to achieve B-mode imaging and directional pulsed-wave Doppler. Instead of hardware circuitry, most functions of the platform, such as filtering, envelope detection, and scan conversion, were achieved by FPGA programs; thus, the system architecture could be easily modified for specific applications. In addition, a novel digital quadrature demodulation algorithm was implemented for fast and accurate Doppler profiling. Finally, test results showed that the platform could offer a minimum detectable signal of 25 μV, allowing a 51 dB dynamic range at 47 dB gain, and real-time imaging at more than 500 frames/s. Phantom and in vivo imaging experiments were conducted and the results demonstrated good system performance.
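
    Two of the functions moved into the FPGA, envelope detection and log compression, have compact software equivalents; a sketch on a synthetic RF line (sampling rate, center frequency and dynamic range are hypothetical):

    ```python
    # Sketch: envelope detection and log compression of one synthetic RF line, the software
    # equivalent of two of the FPGA processing stages described above.
    import numpy as np
    from scipy.signal import hilbert

    fs, f0 = 200e6, 40e6                          # hypothetical sampling rate and center frequency
    t = np.arange(2048) / fs
    rf = np.exp(-((t - 5e-6) / 0.5e-6) ** 2) * np.sin(2 * np.pi * f0 * t)   # one synthetic echo

    envelope = np.abs(hilbert(rf))                # analytic-signal envelope
    dyn_range_db = 50.0
    bmode = 20 * np.log10(envelope / envelope.max() + 1e-12)
    bmode = np.clip(bmode, -dyn_range_db, 0) + dyn_range_db   # map to 0..dyn_range_db for display
    print("peak sample:", int(bmode.argmax()), "display value:", round(float(bmode.max()), 1))
    ```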

  9. FPGA Implementation of Metastability-Based True Random Number Generator

    NASA Astrophysics Data System (ADS)

    Hata, Hisashi; Ichikawa, Shuichi

    True random number generators (TRNGs) are important as a basis for computer security. Though some TRNGs are composed of analog circuits, the use of digital circuits is desired for applying TRNGs to logic LSIs. Some digital TRNGs utilize jitter in free-running ring oscillators as a source of entropy, which consumes considerable power. Another type of TRNG exploits the metastability of a latch to generate entropy. Although this kind of TRNG has mostly been implemented with full-custom LSI technology, this study presents an implementation based on common FPGA technology. Our TRNG is comprised of logic gates only, and can be integrated in any kind of logic LSI. The RS latch in our TRNG is implemented as a hard macro to guarantee the quality of randomness by minimizing the signal skew and load imbalance of internal nodes. To improve the quality and throughput, the outputs of 64-256 latches are XORed. The derived design was verified on a Xilinx Virtex-4 FPGA (XC4VFX20), and passed the NIST statistical test suite without post-processing. Our TRNG with 256 latches occupies 580 slices, while achieving 12.5 Mbps throughput.
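
    XORing many latch outputs is a standard entropy-combining step; its effect on bias can be illustrated numerically, as in the sketch below (latch bias values are synthetic and the model ignores correlations between latches):

    ```python
    # Sketch: XOR-combining many slightly biased bit sources reduces the bias of the output,
    # the role played by the 64-256 XORed latches above (bias values are synthetic).
    import numpy as np

    rng = np.random.default_rng(7)
    n_latches, n_samples = 64, 200_000
    p_one = 0.45 + 0.10 * rng.random(n_latches)     # each latch slightly biased around 0.5

    bits = rng.random((n_samples, n_latches)) < p_one
    combined = np.bitwise_xor.reduce(bits.astype(np.uint8), axis=1)

    print("worst single-latch bias:", round(float(np.max(np.abs(bits.mean(0) - 0.5))), 4))
    print("combined output bias:   ", round(float(abs(combined.mean() - 0.5)), 6))
    ```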

  10. Research on defogging technology of video image based on FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Shuo; Piao, Yan

    2015-03-01

    Because of atmospheric particle scattering, video images captured by outdoor surveillance systems have low contrast and brightness, which directly limits the application value of the system. Traditional defogging techniques have mostly been studied in software as defogging algorithms for single frames; moreover, these algorithms involve heavy computation and high time complexity. Video-image defogging based on a Digital Signal Processor (DSP) suffers from complex peripheral circuitry, cannot be realized in real time, and is hard to debug and upgrade. In this paper, using an improved dark channel prior algorithm, we propose a video-image defogging technique based on a Field Programmable Gate Array (FPGA). Compared to traditional defogging methods, high-resolution video images can be processed in real time. Furthermore, the function modules of the system have been designed in a hardware description language. The results show that the FPGA-based defogging system can process video with a minimum resolution of 640×480 in real time. After defogging, the brightness and contrast of the video images are improved effectively. Therefore, the defogging technique proposed in this paper has a wide variety of applications, including aviation, forest fire prevention, national security and other important surveillance.
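
    The improved algorithm above builds on the standard dark channel prior; a compact software sketch of the baseline method is given below (window size and constants are typical textbook choices, not the paper's, and the FPGA version restructures this for streaming pixels):

    ```python
    # Sketch of the baseline dark channel prior dehazing steps (not the paper's improved,
    # FPGA-restructured version); window size and constants are generic choices.
    import numpy as np
    from scipy.ndimage import minimum_filter

    def dehaze(img, patch=15, omega=0.95, t0=0.1):
        """img: float RGB array in [0, 1]; returns a dehazed estimate."""
        dark = minimum_filter(img.min(axis=2), size=patch)          # dark channel
        flat = dark.ravel()
        idx = flat.argsort()[-max(1, flat.size // 1000):]           # brightest 0.1% of dark channel
        A = img.reshape(-1, 3)[idx].max(axis=0)                     # atmospheric light estimate
        trans = 1 - omega * minimum_filter((img / A).min(axis=2), size=patch)
        trans = np.clip(trans, t0, 1)[..., None]                    # lower-bound the transmission
        return np.clip((img - A) / trans + A, 0, 1)

    hazy = np.clip(np.random.rand(120, 160, 3) * 0.3 + 0.6, 0, 1)   # synthetic hazy-looking frame
    print(dehaze(hazy).shape)
    ```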

  11. Parts & pools: a framework for modular design of synthetic gene circuits.

    PubMed

    Marchisio, Mario Andrea

    2014-01-01

    Published in 2008, Parts & Pools represents one of the first attempts to conceptualize the modular design of bacterial synthetic gene circuits with Standard Biological Parts (DNA segments) and Pools of molecules referred to as common signal carriers (e.g., RNA polymerases and ribosomes). The original framework for modeling bacterial components and designing prokaryotic circuits evolved over the last years and led, first, to the development of an algorithm for the automatic design of Boolean gene circuits. This is a remarkable achievement since gene digital circuits have a broad range of applications, from biosensors for health and environmental care to computational devices. More recently, Parts & Pools was extended to give a proper formal description of eukaryotic biological circuit components. This was made possible by employing a rule-based modeling approach, a technique that permits a faithful calculation of all the species and reactions involved in complex systems such as eukaryotic cells and compartments. In this way, Parts & Pools is currently suitable for the visual and modular design of synthetic gene circuits in yeast and mammalian cells too.

  12. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis, targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help to project the full capacity and performance of an FPGA-based coprocessor.
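
    The precision trade-off can be illustrated with a hedged sketch of a single sum-product check-node update in which every intermediate value is rounded to a fixed-point grid; the bit width is an arbitrary illustrative choice, not the coprocessor's actual format:

```python
import numpy as np

def quantize(x, frac_bits=6):
    # Round to a limited-precision fixed-point grid (illustrative bit width).
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def check_node_update(llrs, frac_bits=6):
    # Sum-product check-node rule: 2*atanh(prod(tanh(L/2))), with every
    # intermediate quantized to mimic a reduced-precision datapath.
    t = quantize(np.tanh(np.asarray(llrs, dtype=float) / 2.0), frac_bits)
    prod = np.clip(quantize(np.prod(t), frac_bits), -0.999999, 0.999999)
    return 2.0 * np.arctanh(prod)

print(check_node_update([1.2, -0.8, 2.5]))   # slightly coarser than the exact value
```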

  13. Fine-grained parallelism accelerating for RNA secondary structure prediction with pseudoknots based on FPGA.

    PubMed

    Xia, Fei; Jin, Guoqing

    2014-06-01

    PKNOTS is a well-known benchmark program that has been widely used to predict RNA secondary structure including pseudoknots. It adopts the standard four-dimensional (4D) dynamic programming (DP) method and is the basis of many variants and improved algorithms. Unfortunately, its O(N^6) computing requirements and complicated data dependencies greatly limit the usefulness of the PKNOTS package as gene databases explode in size. In this paper, we present a fine-grained parallel PKNOTS package and prototype system for accelerating the RNA folding application on an FPGA chip. We adopted a series of storage optimization strategies to resolve the "Memory Wall" problem and aggressively exploited parallel computing strategies to improve computational efficiency. We also propose several methods that collectively reduce the storage requirements for FPGA on-chip memory. To the best of our knowledge, our design is the first FPGA implementation that accelerates the 4D DP problem of RNA folding including pseudoknots. The experimental results show an average speedup of more than 50x over the PKNOTS-1.08 software running on a PC platform with an Intel Core2 Q9400 quad-core CPU for the input RNA sequences, while the power consumption of our FPGA accelerator is only about 50% of that of the general-purpose microprocessor.

  14. FPGA-based Trigger System for the Fermilab SeaQuest Experiment

    SciTech Connect

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-09-10

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ− produced in 120 GeV/c proton–nucleon interactions in a high-rate environment. The trigger system consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 hodoscope channels are digitized by each FPGA with 1-ns resolution using time-to-digital converter (TDC) firmware. The delay of the TDC output can be adjusted channel by channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Finally, information on the candidate tracks is sent to the second-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.
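
    The matching of hodoscope hit patterns against trigger matrices can be sketched with bitmasks, one word per plane; the plane names and matrix rows below are hypothetical placeholders, since the real matrices come from SeaQuest acceptance studies and are not given in the abstract:

```python
# Each row lists, per hodoscope plane, the paddles a candidate muon track may hit.
TRIGGER_MATRIX = [
    {"H1": 0b0001, "H2": 0b0011, "H3": 0b0010, "H4": 0b0010},
    {"H1": 0b1000, "H2": 0b1100, "H3": 0b0100, "H4": 0b0100},
]

def row_fires(hits, row):
    # A row fires when every plane has at least one hit inside the allowed paddles.
    return all(hits.get(plane, 0) & mask for plane, mask in row.items())

def trigger(hits):
    return any(row_fires(hits, row) for row in TRIGGER_MATRIX)

print(trigger({"H1": 0b0001, "H2": 0b0010, "H3": 0b0010, "H4": 0b0010}))  # True
```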

  15. FPGA-based Trigger System for the Fermilab SeaQuest Experiment

    DOE PAGES

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; ...

    2015-09-10

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ− produced in 120 GeV/c proton–nucleon interactions in a high-rate environment. The trigger system consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 hodoscope channels are digitized by each FPGA with 1-ns resolution using time-to-digital converter (TDC) firmware. The delay of the TDC output can be adjusted channel by channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Finally, information on the candidate tracks is sent to the second-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.

  16. The P0 feedback control system blurs the line between IOC and FPGA.

    SciTech Connect

    DiMonte, N.; APS Engineering Support Division

    2008-01-01

    The P0 Feedback system is a new design at the Advanced Photon Source (APS) primarily intended to stabilize a single bunch in order to operate at a higher accumulated charge. The algorithm for this project required a high-speed DSP solution for a single channel that would make adjustments on a turn-by-turn basis. A field programmable gate array (FPGA) solution was selected that not only met the requirements of the project but far exceeded them. By using a single FPGA, we were able to adjust up to 324 bunches on two separate channels with a total computational rate of approximately 6 x 10^9 multiply-accumulate operations per second. The IOC is a Coldfire CPU tightly coupled to the FPGA, providing dedicated control and monitoring of the system through EPICS [1] process variables. One of the benefits of this configuration is having a four-channel scope in the FPGA that can be monitored on a continuous basis.

  17. A novel FPGA-based bunch purity monitor system at the APS storage ring.

    SciTech Connect

    Norum, W. E.; APS Engineering Support Division

    2008-01-01

    Bunch purity is an important source quality factor for the magnetic resonance experiments at the Advanced Photon Source. Conventional bunch-purity monitors utilizing time-to-amplitude converters are subject to dead time. We present a novel design based on a single field-programmable gate array (FPGA) that continuously processes pulses at the full speed of the detector and front-end electronics. The FPGA provides 7778 single-channel analyzers (six per rf bucket). The starting time and width of each single-channel analyzer window can be set to a resolution of 178 ps. A detector pulse arriving inside the window of a single-channel analyzer is recorded in an associated 32-bit counter. The analyzer makes no contribution to the system dead time. Two channels for each rf bucket count pulses originating from the electrons in the bucket. The other four channels on the early and late side of the bucket provide estimates of the background. A single-chip microcontroller attached to the FPGA acts as an EPICS IOC to make the information in the FPGA available to the EPICS clients.
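
    The single-channel-analyzer bookkeeping amounts to a window test and a counter increment per pulse, which is why no dead time is introduced. In the sketch below the window times are arbitrary illustrative picosecond values, not the instrument's settings:

```python
windows = [
    {"start": 0,    "width": 356, "count": 0},   # in-bucket channel
    {"start": -712, "width": 356, "count": 0},   # early background channel
    {"start": 712,  "width": 356, "count": 0},   # late background channel
]

def record_pulse(arrival_ps):
    # Increment the counter of every analyzer whose window contains the pulse;
    # there is no conversion or reset step, hence no dead-time contribution.
    for w in windows:
        if w["start"] <= arrival_ps < w["start"] + w["width"]:
            w["count"] += 1

for t in (-700, 10, 100, 720, 900):
    record_pulse(t)
print([w["count"] for w in windows])   # [2, 1, 2]
```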

  18. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation

    PubMed Central

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

    Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the FPGA system's real-time behavior using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844

  19. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation.

    PubMed

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

    Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the FPGA system's real-time behavior using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications.

  20. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs have been utilised to achieve high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high-resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited to the parallelisation that is needed to achieve high processing speed on an FPGA. Facing this issue, most FPGA CCL implementations are restricted to low- or medium-resolution images (≤ 2048 × 2048 pixels) with lower complexity, and the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the needs of many video applications. Since these restrictions are incompatible with the requirement to label high-resolution images with highly complex structures and the need to generate a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
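
    The classical two-pass scheme that the method builds on can be sketched in a few lines (4-connectivity with a union-find equivalence table); the low-memory and stop-and-go pipeline mechanisms that make it FPGA-friendly are not modelled here:

```python
import numpy as np

def label_two_pass(binary):
    # Two-pass connected component labeling, 4-connectivity.
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                             # union-find equivalence table

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    next_label = 1
    for y in range(h):                       # first pass: provisional labels
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            neighbours = [l for l in (up, left) if l]
            if not neighbours:
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:
                m = min(neighbours)
                labels[y, x] = m
                for l in neighbours:         # record label equivalences
                    parent[find(l)] = find(m)
    for y in range(h):                       # second pass: resolve equivalences
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

print(label_two_pass(np.array([[1, 1, 0], [0, 1, 0], [1, 0, 1]])))
```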

  1. Multiscale Simulation as a Framework for the Enhanced Design of Nanodiamond-Polyethylenimine-based Gene Delivery

    PubMed Central

    Kim, Hansung; Man, Han Bin; Saha, Biswajit; Kopacz, Adrian M.; Lee, One-Sun; Schatz, George C.; Ho, Dean; Liu, Wing Kam

    2012-01-01

    Nanodiamonds (NDs) are emerging carbon platforms with promise as gene/drug delivery vectors for cancer therapy. Specifically, NDs functionalized with the polymer polyethylenimine (PEI) can transfect small interfering RNAs (siRNA) in vitro with high efficiency and low cytotoxicity. Here we present a modeling framework to accurately guide the design of ND-PEI gene platforms and elucidate binding mechanisms between ND, PEI, and siRNA. This is among the first ND simulations to comprehensively account for ND size, charge distribution, surface functionalization, and graphitization. The simulation results are compared with our experimental results both for PEI loading onto NDs and for siRNA (C-myc) loading onto ND-PEI for various mixing ratios. Remarkably, the model is able to predict loading trends and saturation limits for PEI and siRNA, while confirming the essential role of ND surface functionalization in mediating ND-PEI interactions. These results demonstrate that this robust framework can be a powerful tool in ND platform development, with the capacity to realistically treat other nanoparticle systems. PMID:23304428

  2. BJT detector with FPGA-based read-out for alpha particle monitoring

    NASA Astrophysics Data System (ADS)

    Tyzhnevyi, V.; Dalla Betta, G.-F.; Rovati, L.; Verzellesi, G.; Zorzi, N.

    2011-01-01

    In this work we introduce a new prototype of readout electronics (ALPHADET), which was designed for an α-particle detection system based on a bipolar junction transistor (BJT) detector. The system uses an FPGA, which provides many advantages at the stage of prototyping and testing the detector. The main design and electrical features of the board are discussed in this paper, along with selected results from the characterization of ALPHADET coupled to BJT detectors.

  3. A backbone design principle for covalent organic frameworks: the impact of weakly interacting units on CO2 adsorption.

    PubMed

    Zhai, Lipeng; Huang, Ning; Xu, Hong; Chen, Qiuhong; Jiang, Donglin

    2017-03-31

    Covalent organic frameworks are designed to have backbones with different yet discrete contents of triarylamine units that interact weakly with CO2. Adsorption experiments indicate that the triarylamine units dominate the CO2 adsorption process and that the CO2 uptake increases monotonically with the triarylamine content. These profound collective effects reveal a principle for designing backbones targeting CO2 capture and separation.

  4. Decoding the "CoDe": A Framework for Conceptualizing and Designing Help Options in Computer-Based Second Language Listening

    ERIC Educational Resources Information Center

    Cardenas-Claros, Monica Stella; Gruba, Paul A.

    2013-01-01

    This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…

  5. Computational Design of Metal-Organic Frameworks with High Methane Deliverable Capacity

    NASA Astrophysics Data System (ADS)

    Bao, Yi; Martin, Richard; Simon, Cory; Haranczyk, Maciej; Smit, Berend; Deem, Michael; Deem Team; Haranczyk Team; Smit Team

    Metal-organic frameworks (MOFs) are a rapidly emerging class of nanoporous materials with largely tunable chemistry and diverse applications in gas storage, gas purification, catalysis, etc. Intensive efforts have been made over the past decades to develop new MOFs with desirable properties, both experimentally and computationally. To guide experimental synthesis, which has limited throughput, we develop a computational methodology to explore MOFs with high methane deliverable capacity. This de novo design procedure applies known chemical reactions, considers the synthesizability and geometric requirements of organic linkers, and efficiently evolves a population of MOFs with desirable properties. We identify about 500 MOFs with higher deliverable capacity than MOF-5 in 10 networks. We also investigate the relationship between deliverable capacity and the internal surface area of MOFs. This methodology can be extended to MOFs with multiple types of linkers and multiple SBUs. DE-FG02-12ER16362.

  6. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    PubMed

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to developing high-quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface prototype from a UML requirements analysis model so that we can confirm the validity of input/output data for each page and of page transitions by directly operating the prototype. We propose a mapping rule in which design information independent of any particular web application framework implementation is defined based on the requirements analysis model, so as to improve traceability from the validated requirements analysis model to the final product. This paper discusses the result of applying our method to the development of a Group Work Support System that is currently running in our department.

  7. Technical Guidance from the International Safety Framework for Nuclear Power Source Applications in Outer Space for Design and Development Phases

    NASA Astrophysics Data System (ADS)

    Summerer, Leopold

    2014-08-01

    In 2009, the International Safety Framework for Nuclear Power Source Applications in Outer Space [1] was adopted, following a multi-year process that involved all major space-faring nations within the framework of the International Atomic Energy Agency and the UN Committee on the Peaceful Uses of Outer Space. The safety framework reflects an international consensus on best practices. After the older 1992 Principles Relevant to the Use of Nuclear Power Sources in Outer Space, it is the second document at UN level dedicated entirely to space nuclear power sources. This paper analyses aspects of the safety framework relevant to the design and development phases of space nuclear power sources. While early publications have started analysing the legal aspects of the safety framework, its technical guidance has not yet been the subject of scholarly articles. The present paper therefore focuses on the technical guidance provided in the safety framework, in an attempt to help engineers and practitioners benefit from it.

  8. FPGA-accelerated adaptive optics wavefront control part II

    NASA Astrophysics Data System (ADS)

    Mauch, S.; Barth, A.; Reger, J.; Reinlein, C.; Appelfelder, M.; Beckert, E.

    2015-03-01

    We present progressive work based on our recently developed rapid control prototyping (RCP) system, designed for the implementation of high-performance adaptive optical control algorithms using a continuous deformable mirror (DM). The RCP system, presented in 2014, is built around a Xilinx Kintex-7 Field Programmable Gate Array (FPGA) placed on a self-developed PCIe card and installed in a high-performance computer that runs a hard real-time Linux operating system. For this purpose, algorithms for the efficient evaluation of data from a Shack-Hartmann wavefront sensor (SHWFS) on an FPGA have been developed. The corresponding analog input and output cards are designed to exploit the maximum possible performance while not being constrained to a specific DM or control algorithm, thanks to the RCP approach. In this second part of our contribution, we focus on recent results achieved with this novel experimental setup. By presenting results that are far superior to the former ones, we further justify the deployment of the RCP system and the time and resources it required. We conducted various experiments to reveal the effective performance, i.e. the maximum controller complexity that may be handled in real time without performance losses. A detailed analysis of the hidden latencies is carried out, showing that these latencies have been drastically reduced. In addition, a series of concepts relating to the evaluation of the wavefront as well as to designing and synthesizing a wavefront are thoroughly investigated with the goal of overcoming some of the prevalent limitations. Furthermore, principal results regarding the closed-loop performance of the low-speed dynamics of the heater integrated in the DM concept are illustrated in detail, to be combined with the piezo-electric high-speed actuators in the next step.

  9. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust in the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  10. Passive Tomography for Spent Fuel Verification: Analysis Framework and Instrument Design Study

    SciTech Connect

    White, Timothy A.; Svard, Staffan J.; Smith, Leon E.; Mozin, Vladimir V.; Jansson, Peter; Davour, Anna; Grape, Sophie; Trellue, H.; Deshmukh, Nikhil S.; Wittman, Richard S.; Honkamaa, Tapani; Vaccaro, Stefano; Ely, James

    2015-05-18

    The potential for gamma emission tomography (GET) to detect partial defects within a spent nuclear fuel assembly is being assessed through a collaboration of Support Programs to the International Atomic Energy Agency (IAEA). In the first phase of this study, two safeguards verification objectives have been identified. The first is the independent determination of the number of active pins that are present in the assembly, in the absence of a priori information. The second objective is to provide quantitative measures of pin-by-pin properties, e.g. activity of key isotopes or pin attributes such as cooling time and relative burnup, for the detection of anomalies and/or verification of operator-declared data. The efficacy of GET to meet these two verification objectives will be evaluated across a range of fuel types, burnups, and cooling times, and with a target interrogation time of less than 60 minutes. The evaluation of GET viability for safeguards applications is founded on a modelling and analysis framework applied to existing and emerging GET instrument designs. Monte Carlo models of different fuel types are used to produce simulated tomographer responses to large populations of “virtual” fuel assemblies. Instrument response data are processed by a variety of tomographic-reconstruction and image-processing methods, and scoring metrics specific to each of the verification objectives are defined and used to evaluate the performance of the methods. This paper will provide a description of the analysis framework and evaluation metrics, example performance-prediction results, and describe the design of a “universal” GET instrument intended to support the full range of verification scenarios envisioned by the IAEA.

  11. Development of Network Interface Cards for TRIDAQ systems with the NaNet framework

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Di Lorenzo, S.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Valente, P.; Vicini, P.

    2017-03-01

    NaNet is a framework for the development of FPGA-based PCI Express (PCIe) Network Interface Cards (NICs) with real-time data transport architecture that can be effectively employed in TRIDAQ systems. Key features of the architecture are the flexibility in the configuration of the number and kind of the I/O channels, the hardware offloading of the network protocol stack, the stream processing capability, and the zero-copy CPU and GPU Remote Direct Memory Access (RDMA). Three NIC designs have been developed with the NaNet framework: NaNet-1 and NaNet-10 for the CERN NA62 low level trigger and NaNet3 for the KM3NeT-IT underwater neutrino telescope DAQ system. We will focus our description on the NaNet-10 design, as it is the most complete of the three in terms of capabilities and integrated IPs of the framework.

  12. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design

  13. Formulation of a parametric systems design framework for disaster response planning

    NASA Astrophysics Data System (ADS)

    Mma, Stephanie Weiya

    The occurrence of devastating natural disasters in the past several years has prompted communities, responding organizations, and governments to seek ways to improve disaster preparedness capabilities locally, regionally, nationally, and internationally. A holistic approach to design used in the aerospace and industrial engineering fields enables efficient allocation of resources through applied parametric changes within a particular design to improve performance metrics to selected standards. In this research, this methodology is applied to disaster preparedness, using a community's time to restoration after a disaster as the response metric. A review of the responses to Hurricane Katrina and the 2010 Haiti earthquake, among other prominent disasters, provides observations leading to some current capability benchmarking. A need for holistic assessment and planning exists for communities, but the current response planning infrastructure lacks a standardized framework and standardized assessment metrics. Within the humanitarian logistics community, several different metrics exist, enabling quantification and measurement of a particular area's vulnerability. These metrics, combined with design and planning methodologies from related fields, such as engineering product design, military response planning, and business process redesign, provide insight and a framework from which to begin developing a methodology to enable holistic disaster response planning. The developed methodology was applied to the communities of Shelby County, TN and pre-Hurricane-Katrina Orleans Parish, LA. Available literature and reliable media sources provide information about the different values of system parameters within the decomposition of the community aspects and also about relationships among the parameters. The community was modeled as a system dynamics model and was tested in the implementation of two, five, and ten year improvement plans for Preparedness, Response, and Development

  14. Wearable FPGA based wireless sensor platform.

    PubMed

    Ahola, Tom; Korpinen, Pekka; Rakkola, Juha; Rämö, Teemu; Salminen, Jukka; Savolainen, Jari

    2007-01-01

    A new wearable sensor platform has been developed. It is based on a Field Programmable Gate Array (FPGA) device. Because of this, the hardware is very flexible and gives the platform unique opportunities for research into a wide range of architectures, applications and signal processing algorithms. The platform has been named NWSP, for Nokia Wrist-Attached Sensor Platform. This document describes the hardware, the firmware and applications of the platform.

  15. TOT measurement implemented in FPGA TDC

    NASA Astrophysics Data System (ADS)

    Fan, Huan-Huan; Cao, Ping; Liu, Shu-Bin; An, Qi

    2015-11-01

    Time measurement plays a crucial role in particle identification in high energy physics experiments. With increasingly demanding physics goals and the development of electronics, modern time measurement systems need to meet the requirements of excellent resolution as well as high integration. Based on Field Programmable Gate Arrays (FPGAs), FPGA time-to-digital converters (TDCs) have become one of the most mature and prominent time measurement methods in recent years. To correct the time-walk effect caused by leading-edge timing, a time-over-threshold (TOT) measurement should be added to the FPGA TDC. TOT can be obtained by measuring the interval between the signal leading and trailing edges. Unfortunately, a traditional TDC can recognize only one kind of signal edge, the leading or the trailing. Generally, to measure the interval, two TDC channels need to be used at the same time, one for the leading edge and the other for the trailing edge. However, this method unavoidably increases the amount of FPGA resources used and reduces the TDC's integration density. This paper presents a method of TOT measurement implemented in a Xilinx Virtex-5 FPGA in which TOT is measured using only one TDC input channel, so that both the consumed resources and the time resolution can be guaranteed. Testing shows that this TDC achieves a resolution better than 15 ps for leading-edge measurement and 37 ps for TOT measurement. Furthermore, the TDC measurement dead time is about two clock cycles, which makes it suitable for applications with higher physics event rates. Supported by National Natural Science Foundation of China (11079003, 10979003)
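
    Conceptually, the TOT is just the difference between trailing-edge and leading-edge timestamps, each assembled from a coarse counter plus a fine interpolation bin. A hedged sketch follows; the clock period and bin width are illustrative, not the paper's exact values:

```python
def edge_time(coarse, fine, clock_ns=8.0, bin_ns=0.125):
    # Timestamp = coarse-counter value * clock period + fine bin * bin width.
    return coarse * clock_ns + fine * bin_ns

def time_over_threshold(lead, trail):
    # TOT from one channel that records (coarse, fine) pairs for both edges.
    return edge_time(*trail) - edge_time(*lead)

print(time_over_threshold((100, 12), (103, 40)), "ns")   # 27.5 ns
```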

  16. Broad-Bandwidth FPGA-Based Digital Polyphase Spectrometer

    NASA Technical Reports Server (NTRS)

    Jamot, Robert F.; Monroe, Ryan M.

    2012-01-01

    With present concern for ecological sustainability ever increasing, it is desirable to model the composition of Earth's upper atmosphere accurately with regard to certain helpful and harmful chemicals, such as greenhouse gases and ozone. The microwave limb sounder (MLS) is an instrument designed to map the global day-to-day concentrations of key atmospheric constituents continuously. One important component of MLS is the spectrometer, which processes the raw data provided by the receivers into frequency-domain information that can not only be transmitted more efficiently, but also be processed directly once received. The present-generation spectrometer is fully analog; the goal is to include a fully digital spectrometer in the next-generation sensor. In a digital spectrometer, incoming analog data must be converted into a digital format, processed through a Fourier transform, and finally accumulated to reduce the impact of input noise. While the final design will be placed on an application-specific integrated circuit (ASIC), building these chips is prohibitively expensive; to that end, this design was constructed on a field-programmable gate array (FPGA). A family of state-of-the-art digital Fourier transform spectrometers has been developed, with a combination of high bandwidth and fine resolution. Analog signals consisting of radiation emitted by constituents in planetary atmospheres or galactic sources are downconverted and subsequently digitized by a pair of interleaved analog-to-digital converters (ADCs). This 6-Gsps (gigasample per second) digital representation of the analog signal is then processed through an FPGA-based streaming fast Fourier transform (FFT). Digital spectrometers have many advantages over previously used analog spectrometers, especially in terms of accuracy and resolution, both of which are particularly important for the type of scientific questions to be addressed with next-generation radiometers.
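
    The digitize-transform-accumulate chain described above can be sketched in software as a polyphase/FFT spectrometer; the channel count, tap count, window choice and accumulation length below are illustrative assumptions, not the instrument's parameters:

```python
import numpy as np

def pfb_spectrometer(samples, n_chan=1024, n_taps=4, n_accum=128):
    # Window n_taps consecutive blocks, sum them, FFT, and accumulate the power.
    win = np.hamming(n_chan * n_taps) * np.sinc(
        np.linspace(-n_taps / 2, n_taps / 2, n_chan * n_taps))
    accum = np.zeros(n_chan)
    for i in range(n_accum):
        block = samples[i * n_chan:(i + n_taps) * n_chan]
        if len(block) < n_chan * n_taps:
            break
        summed = (block * win).reshape(n_taps, n_chan).sum(axis=0)
        accum += np.abs(np.fft.fft(summed)) ** 2
    return accum

iq = np.random.randn(200 * 1024)          # stand-in for digitized receiver samples
print(pfb_spectrometer(iq).shape)         # (1024,) accumulated power spectrum
```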

  17. FPGA Trigger System to Run Klystrons

    SciTech Connect

    Gray, Darius; /Texas A-M /SLAC

    2010-08-25

    The Klystron Department is in need of a new trigger system to update the laboratory capabilities. The objective of the research is to develop the trigger system using Field Programmable Gate Array (FPGA) technology with a user interface that allows one to communicate with the FPGA via a Universal Serial Bus (USB). This trigger system will be used for the testing of klystrons. The key materials used consist of the Xilinx Integrated Software Environment (ISE) Foundation, a Programmable Read Only Memory (PROM) XCF04S, a Xilinx Spartan 3E 35S500E FPGA, a Xilinx Platform Cable USB II, a Printed Circuit Board (PCB), a 100 MHz oscillator, and an oscilloscope. Key considerations include eight triggers, two of which have variable phase-shifting capabilities. Once the project was completed, the output signals could be manipulated via a Graphical User Interface by varying the delay and width of the signal. This was as planned; however, the ability to vary the phase was not completed. Future work could consist of adding the ability to vary the phase. This project will give the operators in the Klystron Department more flexibility to run various tests.

  18. Implementing a Digital Phasemeter in an FPGA

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2008-01-01

    Firmware for implementing a digital phasemeter within a field-programmable gate array (FPGA) has been devised. In the original application of this firmware, the phase that one seeks to measure is the difference between the phases of two nominally-equal-frequency heterodyne signals generated by two interferometers. In that application, zero-crossing detectors convert the heterodyne signals to trains of rectangular pulses, the two pulse trains are fed to a fringe counter (the major part of the phasemeter) controlled by a clock signal having a frequency greater than the heterodyne frequency, and the fringe counter computes a time-averaged estimate of the difference between the phases of the two pulse trains. The firmware also causes the FPGA to compute the frequencies of the input signals, causes the FPGA to implement an Ethernet (or equivalent) transmitter for readout of phase and frequency values, and provides data for use in diagnosis of communication failures. The readout rate can be set, by programming, to a value between 250 Hz and 1 kHz. Network addresses can be programmed by the user.
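
    The fringe-counting principle can be mimicked in software: sample the two pulse trains at the clock rate, measure the clock-tick offsets between corresponding rising edges, and average them. The frequencies and phase offset in the sketch are invented test values:

```python
import numpy as np

def phase_difference(sig_a, sig_b, f_het, f_clk):
    # Average phase difference (radians) between two 0/1 pulse trains sampled
    # at f_clk, estimated from tick offsets between corresponding rising edges.
    edges_a = np.flatnonzero(np.diff(sig_a) > 0)
    edges_b = np.flatnonzero(np.diff(sig_b) > 0)
    n = min(len(edges_a), len(edges_b))
    ticks = edges_b[:n] - edges_a[:n]
    return 2 * np.pi * f_het * np.mean(ticks) / f_clk

f_clk, f_het, phi = 80e6, 1e5, 0.7                    # illustrative numbers
t = np.arange(0, 2e-3, 1 / f_clk)
a = (np.sin(2 * np.pi * f_het * t) > 0).astype(int)
b = (np.sin(2 * np.pi * f_het * t - phi) > 0).astype(int)
print(phase_difference(a, b, f_het, f_clk))           # close to 0.7 rad
```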

  19. FPGA remote update for nuclear environments

    SciTech Connect

    Fernandes, Ana; Pereira, Rita C.; Sousa, Jorge; Carvalho, Paulo F.; Correia, Miguel; Rodrigues, Antonio P.; Carvalho, Bernardo B.; Goncalves, Bruno; Correia, Carlos M.B.A.

    2015-07-01

    The Instituto de Plasmas e Fusao Nuclear (IPFN) has developed dedicated reconfigurable modules based on field programmable gate array (FPGA) devices for several nuclear fusion machines worldwide. Moreover, new Advanced Telecommunication Computing Architecture (ATCA) based modules developed by IPFN are already included in the ITER catalogue. One of the requirements for reconfigurable modules operating in future nuclear environments, including ITER, is remote update capability. Accordingly, this work presents an alternative method for remote FPGA programming to be implemented in new ATCA-based reconfigurable modules. FPGAs are volatile devices and their programming code is usually stored in dedicated flash memories for proper configuration during module power-on. The presented method is capable of storing new FPGA code in Serial Peripheral Interface (SPI) flash memories using the PCI Express (PCIe) network established on the ATCA backplane, linking data acquisition endpoints and the data switch blades. The method is based on the Xilinx Quick Boot application note, adapted to the PCIe protocol and ATCA-based modules. (authors)

  20. A Systems Engineering Framework for Design, Construction and Operation of the Next Generation Nuclear Plant

    SciTech Connect

    Edward J. Gorski; Charles V. Park; Finis H. Southworth

    2004-06-01

    Not since the International Space Station has a project of such wide participation been proposed for the United States. Ten countries, the European Union, universities, Department of Energy (DOE) laboratories, and industry will participate in the research and development, design, construction and/or operation of the fourth generation of nuclear power plants with a demonstration reactor to be built at a DOE site and operational by the middle of the next decade. This reactor will be like no other. The Next Generation Nuclear Plant (NGNP) will be passively safe, economical, highly efficient, modular, proliferation resistant, and sustainable. In addition to electrical generation, the NGNP will demonstrate efficient and cost effective generation of hydrogen to support the President’s Hydrogen Initiative. To effectively manage this multi-organizational and technologically complex project, systems engineering techniques and processes will be used extensively to ensure delivery of the final product. The technological and organizational challenges are complex. Research and development activities are required, material standards require development, hydrogen production, storage and infrastructure requirements are not well developed, and the Nuclear Regulatory Commission may further define risk-informed/performance-based approach to licensing. Detailed design and development will be challenged by the vast cultural and institutional differences across the participants. Systems engineering processes must bring the technological and organizational complexity together to ensure successful product delivery. This paper will define the framework for application of systems engineering to this $1.5B - $1.9B project.

  1. A Digitalized Silicon Microgyroscope Based on Embedded FPGA

    PubMed Central

    Xia, Dunzhu; Yu, Cheng; Wang, Yuliang

    2012-01-01

    This paper presents a novel digital miniaturization method for a prototype silicon microgyroscope (SMG) with a symmetrical and decoupled structure. The overall system consists of a high-precision analog front-end interface, a high-speed 18-bit analog-to-digital converter, a high-performance Field Programmable Gate Array (FPGA) chip at its core, and peripherals such as high-speed serial ports for transmitting data. In drive mode, the closed-loop drive circuit is implemented by an automatic gain control (AGC) loop and a software phase-locked loop (SPLL) based on the Coordinate Rotation Digital Computer (CORDIC) algorithm. The sense demodulation module, based on varying-step least mean square demodulation (LMSD), is also addressed in detail. All algorithms were simulated with the Simulink and DSP Builder tools, and the results are in good agreement with the theoretical design. The experimental results fully demonstrate the stability and flexibility of the system.
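
    The CORDIC kernel at the heart of the software phase-locked loop can be sketched in vectoring mode, which recovers a phase angle using only shift-and-add micro-rotations; the iteration count is an illustrative choice and the usual gain compensation is omitted since only the angle is needed:

```python
import math

def cordic_atan2(y, x, iterations=16):
    # Vectoring-mode CORDIC: drive y toward zero with micro-rotations of
    # atan(2^-i); the accumulated angle approximates atan2(y, x).
    offset = 0.0
    if x < 0:                               # pre-rotate into the right half-plane
        offset = math.pi if y >= 0 else -math.pi
        x, y = -x, -y
    angle = 0.0
    for i in range(iterations):
        step = math.atan(2.0 ** -i)
        if y > 0:                           # rotate clockwise
            x, y = x + y * 2.0 ** -i, y - x * 2.0 ** -i
            angle += step
        else:                               # rotate counter-clockwise
            x, y = x - y * 2.0 ** -i, y + x * 2.0 ** -i
            angle -= step
    return angle + offset

print(cordic_atan2(math.sin(2.5), math.cos(2.5)))   # close to 2.5 rad
```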

  2. FPGA implementation of Generalized Hebbian Algorithm for texture classification.

    PubMed

    Lin, Shiow-Jyu; Hwang, Wen-Jyi; Lee, Wei-Hao

    2012-01-01

    This paper presents a novel hardware architecture for principal component analysis. The architecture is based on the Generalized Hebbian Algorithm (GHA) because of its simplicity and effectiveness. The architecture is separated into three portions: the weight vector updating unit, the principal computation unit and the memory unit. In the weight vector updating unit, the computation of the different synaptic weight vectors shares the same circuit, reducing the area cost. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture was physically implemented on a Field Programmable Gate Array (FPGA) and embedded in a System-On-Programmable-Chip (SOPC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient design, attaining both high-speed performance and low area cost.
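
    The weight update that the shared circuit implements is Sanger's rule; a hedged NumPy sketch is given below (learning rate, component count and data are arbitrary, and the hardware's circuit-sharing schedule is not modelled):

```python
import numpy as np

def gha_train(X, n_components=3, lr=1e-3, epochs=50):
    # Generalized Hebbian Algorithm (Sanger's rule): the rows of W converge
    # toward the leading principal components of zero-mean data X (rows = samples).
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            # dW = lr * (y x^T - lower_triangular(y y^T) W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

X = np.random.default_rng(1).normal(size=(500, 8))
X -= X.mean(axis=0)
W = gha_train(X)
print(np.round(W @ W.T, 2))   # rows become roughly orthonormal after training
```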

  3. Exploring Manycore Multinode Systems for Irregular Applications with FPGA Prototyping

    SciTech Connect

    Ceriani, Marco; Palermo, Gianluca; Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2013-04-29

    We present a prototype of a multi-core architecture implemented on an FPGA, designed to enable efficient execution of irregular applications on distributed shared memory machines while maintaining high performance on regular workloads. The architecture is composed of off-the-shelf soft cores, local interconnect and a memory interface, integrated with custom components that optimize it for irregular applications. It relies on three key elements: a global address space, multithreading, and fine-grained synchronization. Global addresses are scrambled to reduce the formation of network hot-spots, while the latency of the transactions is hidden by integrating a hardware scheduler within the custom load/store buffers to take advantage of the availability of multiple execution threads, increasing efficiency transparently to the application. We evaluated a dual-node system on irregular kernels, showing scalability in the number of cores and threads.

  4. FPGA-based RF spectrum merging and adaptive hopset selection

    NASA Astrophysics Data System (ADS)

    McLean, R. K.; Flatley, B. N.; Silvius, M. D.; Hopkinson, K. M.

    The radio frequency (RF) spectrum is a limited resource. Spectrum allotment disputes stem from this scarcity, as many radio devices are confined to a fixed frequency or frequency sequence. One alternative is to incorporate cognition within a reconfigurable radio platform, enabling the radio to adapt to dynamic RF spectrum environments. In this way, the radio is able to actively sense the RF spectrum, decide, and act accordingly, thereby sharing the spectrum and operating in a more flexible manner. In this paper, we present a novel solution for merging many distributed RF spectrum maps into one map and for subsequently creating an adaptive hopset. We also provide an example of our system in operation, the result of which is a pseudorandom adaptive hopset. The paper then presents a novel hardware design for the frequency merger and adaptive hopset selector, both of which are written in VHDL and implemented as a custom IP core on an FPGA-based embedded system using the Xilinx Embedded Development Kit (EDK) software tool. The design of the custom IP core is optimized for area, and it can process a high-volume digital input via a low-latency circuit architecture. The complete embedded system includes the Xilinx PowerPC microprocessor, UART serial connection, and compact flash memory card IP cores, together with our custom map merging/hopset selection IP core, all of which are targeted to the Virtex IV FPGA. This system is then incorporated into a cognitive radio prototype on a Rice University Wireless Open Access Research Platform (WARP) reconfigurable radio.
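
    The merging and hopset-selection rules are not spelled out in the abstract, so the sketch below is only one plausible reading: per-node occupancy maps are OR-merged and a pseudorandom hopset is drawn from the channels left free. Channel counts and maps are invented:

```python
import random

def merge_maps(maps):
    # Merge per-node occupancy maps: a channel is busy if any node saw it busy.
    merged = [0] * len(maps[0])
    for m in maps:
        merged = [a | b for a, b in zip(merged, m)]
    return merged

def adaptive_hopset(merged, hopset_len=8, seed=42):
    # Pseudorandom hopset drawn from the channels that are free in the merged map.
    free = [ch for ch, busy in enumerate(merged) if not busy]
    rng = random.Random(seed)
    return [rng.choice(free) for _ in range(hopset_len)]

node_maps = [
    [0, 1, 0, 0, 1, 0, 0, 0],   # node A's sensed occupancy per channel
    [0, 0, 0, 1, 1, 0, 0, 0],   # node B
]
merged = merge_maps(node_maps)
print(merged, adaptive_hopset(merged))
```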

  5. FPGA implementation of Santos-Victor optical flow algorithm for real-time image processing: an useful attempt

    NASA Astrophysics Data System (ADS)

    Cobos Arribas, Pedro; Monasterio Huelin Macia, Felix

    2003-04-01

    An FPGA-based hardware implementation of the Santos-Victor optical flow algorithm, useful in robot guidance applications, is described in this paper. The system contains an ALTERA FPGA (20K100), an interface with a digital camera, three VRAM memories to hold the input data, and output memories (a VRAM and an EDO) to hold the results. The system had been used previously to develop and test other vision algorithms, such as image compression and optical flow calculation with differential and correlation methods. The designed system allows connecting the digital camera, or the FPGA output (the results of the algorithms), to a PC through its FireWire or USB port. The problems encountered on this occasion have motivated the adoption of a different hardware structure for certain vision algorithms with special requirements that need very computation-intensive processing.

  6. A novel FPGA-programmable switch matrix interconnection element in quantum-dot cellular automata

    NASA Astrophysics Data System (ADS)

    Hashemi, Sara; Rahimi Azghadi, Mostafa; Zakerolhosseini, Ali; Navi, Keivan

    2015-04-01

    The quantum-dot cellular automata (QCA) is a novel nanotechnology promising extra-low-power, extremely dense and very high-speed structures for the construction of logic circuits at the nanoscale. In this paper, previous work on routing elements for QCA-based FPGAs is first reviewed, and then an efficient, symmetric and reliable QCA programmable switch matrix (PSM) interconnection element is introduced. This element has a simple structure and offers complete routing capability. It is implemented using a bottom-up design approach that starts from a dense and high-speed 2:1 multiplexer and utilises it to build the target PSM interconnection element. In this study, simulations of the proposed circuits are carried out using QCADesigner, a layout and simulation tool for QCA circuits. The results demonstrate the high efficiency of the proposed designs for QCA-based FPGA routing.

  7. Research on acceleration method of reactor physics based on FPGA platforms

    SciTech Connect

    Li, C.; Yu, G.; Wang, K.

    2013-07-01

    The physics design of new-concept reactors, which feature complex structures, various materials and diverse neutron energy spectra, has greatly raised the requirements on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of its natural parallel characteristics, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in the numerical calculations of reactor physics through practical examples. The neutron diffusion module designed on the CPU-FPGA architecture achieves an 11.2x speedup, which demonstrates that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)

  8. Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine

    2008-01-01

    Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for SVM classifiers will lead to significant improvement in on-board classification capability and accuracy.

  9. Experimental 3D Asynchronous Field Programmable Gate Array (FPGA)

    DTIC Science & Technology

    2015-03-01

    In terms of the major building blocks, the asynchronous FPGA (AFPGA) architecture looks like a traditional FPGA. 3D chip stacking has been proposed as a way to improve microprocessor performance by reducing the distance between devices from O(N^(1/2)) to O(N^(1/3)), where N is the number of devices in the system.

  10. Engineering Overview of a Multidisciplinary HSCT Design Framework Using Medium-Fidelity Analysis Codes

    NASA Technical Reports Server (NTRS)

    Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.

    1999-01-01

    An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center of gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. Currently, two mass cases are considered: cruise and gross take-off weight (GTOW

  11. Framework for identifying recommended rules and DFM scoring model to improve manufacturability of sub-20nm layout design

    NASA Astrophysics Data System (ADS)

    Pathak, Piyush; Madhavan, Sriram; Malik, Shobhit; Wang, Lynn T.; Capodieci, Luigi

    2012-03-01

    This paper addresses a framework for building critical recommended rules and a methodology for devising scoring models using simulation or silicon data. Recommended rules need to be applied to critical layout configurations (edge- or polygon-based geometric relations), which can cause yield issues depending on layout context and process variability. Determining the critical recommended rules is the first step of this framework. Based on process specifications and design rule calculations, recommended rules are characterized by evaluating the manufacturability response to improvements in a layout-dependent parameter. This study is applied to critical 20nm recommended rules. To enable the scoring of layouts, the paper also discusses a CAD framework that supports use-models for improving the DFM-compliance of a physical design.

  12. Design and implementation of confocal imaging systems with a generalized theoretical framework

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-02-01

    Confocal imaging is primarily based on the use of apertures in the detection path to give the acquired three-dimensional images satisfactory contrast and resolution. For many years it has been an important mode of imaging in microscopy. In biotechnology and related industries, this technique offers powerful capabilities for biomedical inspection and material detection with high spatial resolution, and it can be combined with fluorescence microscopy to obtain more useful information. The objective of this paper is first to present a generalized theoretical framework for confocal imaging systems, and then to design and implement such systems efficiently with satisfactory imaging resolution. In our approach, a theoretical review of confocal imaging is given to investigate this technique from theory to practice. Computer simulations are also performed to analyze the imaging performance under varying optomechanical conditions; for instance, the effects of stray light on microscopic systems are examined using the simulations. A modified optomechanical structure for the imaging process is proposed to reduce these undesired effects. From the simulation results, it appears that the modified structure substantially improves the system signal-to-noise ratio. Furthermore, the imaging resolution is improved through an investigation of the fabrication and assembly tolerances of the optical components. In the experiments, it is found that the imaging resolution of the proposed system is less sensitive than that of common microscopes to the position deviations arising from the installation of optical components, such as the pinhole and the objective lens.

  13. Design and implementation of a replay framework based on a partial order planner

    SciTech Connect

    Ihrig, L.H.; Kambhampati, S.

    1996-12-31

    In this paper we describe the design and implementation of the derivation replay framework DERSNLP+EBL (Derivational SNLP+EBL), which is built within a partial-order planner. DERSNLP+EBL replays previous plan derivations by first repeating its earlier decisions in the context of the new problem situation, then extending the replayed path to obtain a complete solution for the new problem. When the replayed path cannot be extended into a new solution, explanation-based learning (EBL) techniques are employed to identify the features of the new problem which prevent this extension. These features are then added as censors on the retrieval of the stored case. To keep retrieval costs low, DERSNLP+EBL normally stores plan derivations for individual goals, and replays one or more of these derivations in solving multi-goal problems. Cases covering multiple goals are stored only when subplans for individual goals cannot be successfully merged. The aim in constructing the case library is to predict these goal interactions and to store a multi-goal case for each set of negatively interacting goals. We provide empirical results demonstrating the effectiveness of DERSNLP+EBL in improving planning performance on randomly-generated problems drawn from a complex domain.
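    The replay-then-learn loop described above can be illustrated with a small Python toy in which "derivations" are plain action lists and censors are predicates on problem features; all of the structures below are invented stand-ins, since DERSNLP+EBL actually replays partial-order plan derivations.

      # Toy illustration of derivation replay with EBL-style censors; the
      # "derivations" are plain action lists, not partial-order plan derivations.
      def retrieve(case_library, goals, features):
          # Keep stored single-goal cases whose censors do not reject the problem.
          return [c for c in case_library
                  if c["goal"] in goals
                  and not any(censor(features) for censor in c["censors"])]

      def solve(case_library, problem, extend, learn_censor):
          cases = retrieve(case_library, problem["goals"], problem["features"])
          # Replay earlier decisions, then try to extend them to a full plan.
          replayed = [step for c in cases for step in c["derivation"]]
          plan = extend(replayed)
          if plan is None:                      # replay failed: learn a censor
              for c in cases:
                  c["censors"].append(learn_censor(problem))
          return plan

      # Tiny usage example with one stored single-goal case.
      library = [{"goal": "at(B)", "derivation": ["move(A,B)"], "censors": []}]
      problem = {"goals": ["at(B)"], "features": {"blocked": False}}
      extend = lambda plan: plan if not problem["features"]["blocked"] else None
      print(solve(library, problem, extend, lambda p: (lambda f: f["blocked"])))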

  14. Immobilization of Metal-Organic Framework Nanocrystals for Advanced Design of Supported Nanocatalysts.

    PubMed

    Li, Ping; Zeng, Hua Chun

    2016-11-02

    In recent years, metal-organic frameworks (MOFs) have been employed as heterogeneous catalysts or precursors for synthesis of catalytic materials. However, conventional MOFs and their derivatives usually exhibit limited mass transfer and modest catalytic activities owing to lengthy diffusion paths and fewer exposed active sites. In contrast, it has been generally conceived that nanoscale MOFs are beneficial to materials utilization and mass transport, but their instability poses a serious issue to practical application. To tackle the above challenges, we herein develop a novel and facile approach to the design and synthesis of nanocomposites through in situ growth and directed immobilization of nanoscale MOFs onto layered double hydroxides (LDH). The resulting supported nano-MOFs inherit the advantages of pristine MOF nanocrystals while gaining enhanced stability and workability under reactive environments. A series of uniform nanometer-sized MOFs, including monometallic (ZIF-8, ZIF-67, and Cu-BTC) and bimetallic (CoZn-ZIF), can be readily synthesized onto hierarchically structured flowerlike MgAl-LDH supports with high dispersion and precision. Additionally, the resultant MgAl-LDH/MOFs can serve as a generic platform to prepare integrated nanocatalysts via controlled thermolysis. Knoevenagel condensation and reduction of 4-nitrophenol (4-NP) are used as model reactions for demonstrating the technological merits of these nanocatalysts. Therefore, this work elucidates that the synthetic immobilization of nanoscale MOFs onto conventional catalyst supports is a viable route to develop integrated nanocatalysts with high controllability over structural architecture and chemical composition.

  15. Designed synthesis of double-stage two-dimensional covalent organic frameworks

    PubMed Central

    Chen, Xiong; Addicoat, Matthew; Jin, Enquan; Xu, Hong; Hayashi, Taku; Xu, Fei; Huang, Ning; Irle, Stephan; Jiang, Donglin

    2015-01-01

    Covalent organic frameworks (COFs) are an emerging class of crystalline porous polymers in which organic building blocks are covalently and topologically linked to form extended crystalline polygon structures, constituting a new platform for designing π-electronic porous materials. However, COFs are currently synthesised by a few chemical reactions, limiting the access to and exploration of new structures and properties. The development of new reaction systems that avoid such limitations to expand structural diversity is highly desired. Here we report that COFs can be synthesised via a double-stage connection that polymerises various different building blocks into crystalline polygon architectures, leading to the development of a new type of COFs with enhanced structural complexity and diversity. We show that the double-stage approach not only controls the sequence of building blocks but also allows fine engineering of pore size and shape. This strategy is widely applicable to different polymerisation systems to yield hexagonal, tetragonal and rhombus COFs with predesigned pores and π-arrays. PMID:26456081

  16. Designed synthesis of double-stage two-dimensional covalent organic frameworks

    NASA Astrophysics Data System (ADS)

    Chen, Xiong; Addicoat, Matthew; Jin, Enquan; Xu, Hong; Hayashi, Taku; Xu, Fei; Huang, Ning; Irle, Stephan; Jiang, Donglin

    2015-10-01

    Covalent organic frameworks (COFs) are an emerging class of crystalline porous polymers in which organic building blocks are covalently and topologically linked to form extended crystalline polygon structures, constituting a new platform for designing π-electronic porous materials. However, COFs are currently synthesised by a few chemical reactions, limiting the access to and exploration of new structures and properties. The development of new reaction systems that avoid such limitations to expand structural diversity is highly desired. Here we report that COFs can be synthesised via a double-stage connection that polymerises various different building blocks into crystalline polygon architectures, leading to the development of a new type of COFs with enhanced structural complexity and diversity. We show that the double-stage approach not only controls the sequence of building blocks but also allows fine engineering of pore size and shape. This strategy is widely applicable to different polymerisation systems to yield hexagonal, tetragonal and rhombus COFs with predesigned pores and π-arrays.

  17. Active pharmaceutical ingredient (API) production involving continuous processes--a process system engineering (PSE)-assisted design framework.

    PubMed

    Cervera-Padrell, Albert E; Skovby, Tommy; Kiil, Søren; Gani, Rafiqul; Gernaey, Krist V

    2012-10-01

    A systematic framework is proposed for the design of continuous pharmaceutical manufacturing processes. Specifically, the design framework focuses on organic chemistry based, active pharmaceutical ingredient (API) synthetic processes, but could potentially be extended to biocatalytic and fermentation-based products. The method exploits the synergic combination of continuous flow technologies (e.g., microfluidic techniques) and process systems engineering (PSE) methods and tools for faster process design and increased process understanding throughout the whole drug product and process development cycle. The design framework structures the many different and challenging design problems (e.g., solvent selection, reactor design, and design of separation and purification operations), driving the user from the initial drug discovery steps--where process knowledge is very limited--toward the detailed design and analysis. Examples from the literature of PSE methods and tools applied to pharmaceutical process design and novel pharmaceutical production technologies are provided throughout the text, assisting in the accumulation and interpretation of process knowledge. Different criteria are suggested for the selection of batch and continuous processes so that the whole design results in low capital and operational costs as well as low environmental footprint. The design framework has been applied to the retrofit of an existing batch-wise process used by H. Lundbeck A/S to produce an API: zuclopenthixol. Some of its batch operations were successfully converted into continuous mode, obtaining higher yields that allowed a significant simplification of the whole process. The material and environmental footprint of the process--evaluated through the process mass intensity index, that is, kg of material used per kg of product--was reduced to half of its initial value, with potential for further reduction. The case-study includes reaction steps typically used by the pharmaceutical

  18. Three Dialogs: A Framework for the Analysis and Assessment of Twenty-First-Century Literacy Practices, and Its Use in the Context of Game Design within "Gamestar Mechanic"

    ERIC Educational Resources Information Center

    Games, Ivan Alex

    2008-01-01

    This article discusses a framework for the analysis and assessment of twenty-first-century language and literacy practices in game and design-based contexts. It presents the framework in the context of game design within "Gamestar Mechanic", an innovative game-based learning environment where children learn the Discourse of game design. It…

  19. Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz

    2015-05-01

    This article describes the external digital interface designed for a thermographic camera built at the Military University of Technology. The aim of the article is to illustrate the challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface transferring infrared or video digital data and describes the solution we elaborated, based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated image transmission link is built using an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol. The transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024@60Hz 24-bit video data over a single signal pair, and was tested by transmitting the thermal camera picture to a remote monitor. Building a dedicated video link reduces power consumption compared to solutions with ASIC-based encoders and decoders realizing video links such as DVI or packet-based DisplayPort, while reducing the wiring needed to establish the link to one pair. The article describes the functions of the modules integrated in the FPGA design: synchronization to the video source, video stream packetization, interfacing the transceiver module, and dynamic clock generation for video standard conversion.
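    The packet protocol itself is proprietary and only described at a high level, so the Python sketch below merely illustrates the general idea of wrapping one video line in a packet with synchronization, counter, and checksum fields; the field layout and sync word are invented for illustration and do not reflect the real VHDL implementation.

      # Illustrative only: wrap one line of 24-bit pixels in a packet with a small
      # header and a CRC; every field below is invented, not the real packet format.
      import struct
      import zlib

      SYNC_WORD = 0xA5A5A5A5            # hypothetical synchronization word

      def packetize_line(frame_no, line_no, pixels_rgb888):
          payload = bytes(pixels_rgb888)                      # 3 bytes per pixel
          header = struct.pack(">IHHH", SYNC_WORD, frame_no & 0xFFFF,
                               line_no & 0xFFFF, len(payload))
          crc = struct.pack(">I", zlib.crc32(header + payload) & 0xFFFFFFFF)
          return header + payload + crc

      # One 1280-pixel line of the 1280x1024@60Hz 24-bit stream.
      packet = packetize_line(frame_no=0, line_no=0, pixels_rgb888=[0] * (1280 * 3))
      print(len(packet), "bytes per line packet")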

  20. A Binaural Neuromorphic Auditory Sensor for FPGA: A Spike Signal Processing Approach.

    PubMed

    Jimenez-Fernandez, Angel; Cerezuela-Escudero, Elena; Miro-Amarante, Lourdes; Dominguez-Moralse, Manuel Jesus; de Asis Gomez-Rodriguez, Francisco; Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel

    2017-04-01

    This paper presents a new architecture, design flow, and field-programmable gate array (FPGA) implementation analysis of a neuromorphic binaural auditory sensor, designed completely in the spike domain. Unlike digital cochleae that decompose audio signals using classical digital signal processing techniques, the model presented in this paper processes information directly encoded as spikes using pulse frequency modulation and provides a set of frequency-decomposed audio information using an address-event representation interface. In this case, a systematic approach to design led to a generic process for building, tuning, and implementing audio frequency decomposers with different features, facilitating synthesis with custom features. This allows researchers to implement their own parameterized neuromorphic auditory systems in a low-cost FPGA in order to study the audio processing and learning activity that takes place in the brain. In this paper, we present a 64-channel binaural neuromorphic auditory system implemented in a Virtex-5 FPGA using a commercial development board. The system was excited with a diverse set of audio signals in order to analyze its response and characterize its features. The neuromorphic auditory system response times and frequencies are reported. The experimental results of the proposed system implementation with 64-channel stereo are: a frequency range between 9.6 Hz and 14.6 kHz (adjustable), a maximum output event rate of 2.19 Mevents/s, a power consumption of 29.7 mW, a utilization of 11,141 slices, and a system clock frequency of 27 MHz.

  1. A Design Framework for Enhancing Engagement in Student-Centered Learning: Own It, Learn It, and Share It

    ERIC Educational Resources Information Center

    Lee, Eunbae; Hannafin, Michael J.

    2016-01-01

    Student-centered learning (SCL) identifies students as the owners of their learning. While SCL is increasingly discussed in K-12 and higher education, researchers and practitioners lack a current and comprehensive framework to design, develop, and implement SCL. We examine the implications of theory and research-based evidence to inform those who…

  2. Exploring a Framework for Professional Development in Curriculum Innovation: Empowering Teachers for Designing Context-Based Chemistry Education

    ERIC Educational Resources Information Center

    Stolk, Machiel J.; De Jong, Onno; Bulte, Astrid M. W.; Pilot, Albert

    2011-01-01

    Involving teachers in early stages of context-based curriculum innovations requires a professional development programme that actively engages teachers in the design of new context-based units. This study considers the implementation of a teacher professional development framework aiming to investigate processes of professional development. The…

  3. Designing Energy Supply Chains with the P-Graph Framework under Cost Constraints and Sustainability Considerations

    EPA Science Inventory

    A computer-aided methodology for designing sustainable supply chains is presented using the P-graph framework to develop supply chain structures which are analyzed using cost, the cost of producing electricity, and two sustainability metrics: ecological footprint and emergy. They...

  4. Product Design Network Self-contextualization: Enterprise Knowledge-Based Approach and Agent-Based Technological Framework

    NASA Astrophysics Data System (ADS)

    Levashova, Tatiana; Sandkuhl, Kurt; Shilov, Nikolay; Smirnov, Alexander; Tarasov, Vladimir

    The paper introduces self-contextualization in a service infrastructure for product design networks as a novel application field for multi-agent technology. The main contributions of this paper are (1) identification of requirements from product design networks to the supporting service infrastructure, (2) the use of enterprise knowledge modelling techniques for the representation of computable context models, and (3) a technological framework, based on agent technology, for self-contextualization using enterprise knowledge models.

  5. Defining, Designing for, and Measuring "Social Constructivist Digital Literacy" Development in Learners: A Proposed Framework

    ERIC Educational Resources Information Center

    Reynolds, Rebecca

    2016-01-01

    This paper offers a newly conceptualized modular framework for digital literacy that defines this concept as a task-driven "social constructivist digital literacy," comprising 6 practice domains grounded in Constructionism and social constructivism: Create, Manage, Publish, Socialize, Research, Surf. The framework articulates possible…

  6. Performance evaluation on FPGA-implemented UWB-IR receiver for in-body to out-of-body communication systems.

    PubMed

    Shimizu, Yuto; Anzai, Daisuke; Jianqing Wang

    2014-01-01

    In order to design an optimized transceiver structure for ultra wideband (UWB) transmission in in-body to out-of-body communications, the transceiver structure must be easily adjustable so that good communication performance can be achieved in an experimental environment. For this purpose, we first implement our developed UWB impulse radio (IR) receiver structure for in-body to out-of-body communication on a field programmable gate array (FPGA) board, and evaluate the fundamental communication performance of the FPGA-implemented UWB-IR receiver in a biological-equivalent liquid phantom experiment. The FPGA configuration results indicate that our FPGA realization of the UWB-IR receiver achieves good communication performance with few FPGA slices. Moreover, the evaluation results of the liquid phantom experiment show that the FPGA-implemented UWB-IR receiver can achieve a bit error rate (BER) of 10⁻³ up to a communication distance of 70 mm while ensuring a high data rate of 2 Mbps.

  7. FPGA based control system for space instrumentation

    NASA Astrophysics Data System (ADS)

    Di Giorgio, Anna M.; Cerulli Irelli, Pasquale; Nuzzolo, Francesco; Orfei, Renato; Spinoglio, Luigi; Liu, Giovanni S.; Saraceno, Paolo

    2008-07-01

    The prototype for a general-purpose FPGA-based control system for space instrumentation is presented, with particular attention to the instrument control application software. The system hardware is based on the LEON3FT processor, which gives the flexibility to configure the chip with only the necessary hardware functionalities, from simple logic up to small dedicated processors. The instrument control software is developed in ANSI C and, for time-critical (<10 μs) commanding sequences, implements an internal instruction sequencer triggered via an interrupt service routine tied to a high-priority hardware interrupt.

  8. Design and construction of porous metal-organic frameworks based on flexible BPH pillars

    SciTech Connect

    Hao, Xiang-Rong; Yang, Guang-sheng; Shao, Kui-Zhan; Su, Zhong-Min; Yuan, Gang; Wang, Xin-Long

    2013-02-15

    Three metal-organic frameworks (MOFs), [Co₂(BPDC)₂(4-BPH)·3DMF]ₙ (1), [Cd₂(BPDC)₂(4-BPH)₂·2DMF]ₙ (2) and [Ni₂(BDC)₂(3-BPH)₂(H₂O)·4DMF]ₙ (3) (H₂BPDC = biphenyl-4,4′-dicarboxylic acid, H₂BDC = terephthalic acid, BPH = bis(pyridinylethylidene)hydrazine and DMF = N,N′-dimethylformamide), have been solvothermally synthesized based on the insertion of heterogeneous BPH pillars. Framework 1 has a 'single-pillared' MOF-5-like motif with inner cage diameters of up to 18.6 Å. Framework 2 has a 'double-pillared' MOF-5-like motif with cage diameters of 19.2 Å, while 3 has a 'double-pillared' 8-connected framework with channel diameters of 11.0 Å. Powder X-ray diffraction (PXRD) shows that 3 is a dynamic porous framework. Graphical abstract: by the insertion of flexible BPH pillars based on the 'pillaring' strategy, three metal-organic frameworks are obtained, showing that porous frameworks can be constructed in a much greater variety. Highlights: frameworks 1 and 2 have a MOF-5-like motif; the cube-like cages in 1 and 2 are quite large, comparable to IRMOF-10; framework 1 is 'single-pillared' while 2 is 'double-pillared'; PXRD and gas adsorption analysis show that 3 is a dynamic porous framework.

  9. A frame-based domain-specific language for rapid prototyping of FPGA-based software-defined radios

    NASA Astrophysics Data System (ADS)

    Ouedraogo, Ganda Stephane; Gautier, Matthieu; Sentieys, Olivier

    2014-12-01

    Field-programmable gate array (FPGA) technology is expected to play a key role in the development of software-defined radio (SDR) platforms. Yet while this technology has evolved, the low-level design methods used to prototype FPGA-based applications have not changed for decades. In the context of SDR, it is important to implement new waveforms rapidly in order to fulfill such a stringent flexibility requirement. To date, different proposals have defined, through software-based approaches, efficient methods to prototype SDR waveforms in a processor-based running environment. This paper describes a novel design flow for FPGA-based SDR applications. The flow relies upon high-level synthesis (HLS) principles and leverages the nascent HLS tools. Its entry point is a domain-specific language (DSL) which handles the complexity of programming an FPGA and integrates SDR features so as to enable automatic waveform control generation from a data frame model. Two waveforms (IEEE 802.15.4 and IEEE 802.11a) have been designed and explored via this new methodology, and the results are highlighted in this paper.

  10. A Secure Content Delivery System Based on a Partially Reconfigurable FPGA

    NASA Astrophysics Data System (ADS)

    Hori, Yohei; Yokoyama, Hiroyuki; Sakane, Hirofumi; Toda, Kenji

    We developed a content delivery system using a partially reconfigurable FPGA to securely distribute digital content on the Internet. With partial reconfigurability of a Xilinx Virtex-II Pro FPGA, the system provides an innovative single-chip solution for protecting digital content. In the system, a partial circuit must be downloaded from a server to the client terminal to play content. Content will be played only when the downloaded circuit is correctly combined (=interlocked) with the circuit built in the terminal. Since each circuit has a unique I/O configuration, the downloaded circuit interlocks with the corresponding built-in circuit designed for a particular terminal. Thus, the interface of the circuit itself provides a novel authentication mechanism. This paper describes the detailed architecture of the system and clarifies its feasibility and effectiveness. In addition, we discuss a fail-safe mechanism and future work necessary for the practical application of the system.

  11. A Real-Time de novo DNA Sequencing Assembly Platform Based on an FPGA Implementation.

    PubMed

    Hu, Yuanqi; Georgiou, Pantelis

    2016-01-01

    This paper presents an FPGA-based DNA comparison platform which can be run concurrently with the sensing phase of DNA sequencing and shortens the overall time needed for de novo DNA assembly. A hybrid overlap searching algorithm is applied which is scalable and can deal with incremental detection of new bases. To handle the incomplete data set, which gradually increases during sequencing time, all-against-all comparisons are broken down into successive window-against-window comparison phases and executed using a novel dynamic suffix comparison algorithm combined with a partitioned dynamic programming method. The complete system has been designed to facilitate parallel processing in hardware, which allows real-time comparison and full scalability as well as a decrease in the number of computations required. A base pair comparison rate of 51.2 G/s is achieved when implemented on an FPGA, with successful DNA comparison using data sets from real genomes.
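    The core of the approach is repeatedly checking suffix-prefix overlaps between newly detected reads and the reads stored so far. A simplified, exact-match Python sketch of that window-against-window comparison is shown below; the hardware version uses a dynamic suffix comparison with partitioned dynamic programming, which this sketch does not reproduce.

      # Simplified overlap check: the longest suffix of one read that exactly
      # matches a prefix of another. The FPGA compares incrementally over windows
      # and tolerates mismatches via dynamic programming; this is exact-match only.
      def longest_overlap(read_a, read_b, min_len=3):
          max_len = min(len(read_a), len(read_b))
          for k in range(max_len, min_len - 1, -1):
              if read_a[-k:] == read_b[:k]:
                  return k
          return 0

      def window_against_window(stored_reads, new_reads, min_len=3):
          # Compare only the newly detected reads against everything stored so far.
          return [(a, b, longest_overlap(a, b, min_len))
                  for a in stored_reads for b in new_reads
                  if longest_overlap(a, b, min_len) > 0]

      stored = ["ACGTACGT"]
      new = ["TACGTTTA"]     # suffix "TACGT" of the stored read matches its prefix
      print(window_against_window(stored, new))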

  12. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of Infrared Focal Plane Array (IRFPA) detectors, high-quality real-time image processing becomes more important in infrared imaging systems. Facing the demand for good visual effect and high performance, we find the FPGA to be an ideal hardware choice for realizing image processing algorithms, as it fully exploits high speed, high reliability, and the ability to process large amounts of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It runs at higher speed than serial processing devices such as CPUs and DSPs. Experiments show that this hardware implementation of the dynamic linear extension algorithm effectively enhances the visual quality of infrared images.
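    The paper does not spell out how the extension range is found, so the numpy sketch below uses percentile clipping as one plausible way to pick the range automatically before applying the linear stretch; the percentile thresholds are assumptions, not values from the paper.

      # Sketch of a dynamic linear extension (contrast stretch) for raw IR data;
      # the percentile-based range selection is an assumption, since the paper only
      # states that the proper extension range is found automatically.
      import numpy as np

      def dynamic_linear_extension(frame, low_pct=1.0, high_pct=99.0, out_max=255):
          lo, hi = np.percentile(frame, [low_pct, high_pct])   # automatic range
          hi = max(hi, lo + 1.0)                               # avoid divide-by-zero
          stretched = (frame.astype(np.float32) - lo) * (out_max / (hi - lo))
          return np.clip(stretched, 0, out_max).astype(np.uint8)

      raw = np.random.randint(6000, 7000, size=(256, 320), dtype=np.uint16)  # synthetic frame
      enhanced = dynamic_linear_extension(raw)
      print(enhanced.min(), enhanced.max())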

  13. A generic FPGA-based detector readout and real-time image processing board

    NASA Astrophysics Data System (ADS)

    Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant

    2016-07-01

    For space-based astronomical observations, it is important to have a mechanism to capture the digital output from the standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time image processor system (on-chip) which can be used for various image processing tasks. The FPGA board is applied as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense) - instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon flights and possible space flights.

  14. An FPGA-based Doppler Processor for a Spaceborne Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Durden, S. L.; Fischman, M. A.; Johnson, R. A.; Chu, A. J.; Jourdan, M. N.; Tanelli, S.

    2007-01-01

    Measurement of precipitation Doppler velocity by spaceborne radar is complicated by the large velocity of the satellite platform. Even if successive pulses are well correlated, the velocity measurement may be biased if the precipitation target does not uniformly fill the radar footprint. It has been previously shown that the bias in such situations can be reduced if full spectral processing is used. The authors present a processor based on field-programmable gate array (FPGA) technology that can be used for spectral processing of data acquired by future spaceborne precipitation radars. The requirements for and design of the Doppler processor are addressed. Simulation and laboratory test results show that the processor can meet real-time constraints while easily fitting in a single FPGA.
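    A minimal numpy illustration of full spectral Doppler processing at a single range gate is given below: the pulse sequence is Fourier transformed and the mean velocity is taken from the first spectral moment. The radar parameters are arbitrary, and the sketch ignores the non-uniform beam-filling effects that motivate the flight processor.

      # Spectral Doppler estimate from a pulse sequence at one range gate; the
      # radar parameters are arbitrary and non-uniform beam filling is ignored.
      import numpy as np

      wavelength = 0.022        # m (Ku-band, illustrative)
      prf = 4000.0              # pulse repetition frequency, Hz
      n_pulses = 64

      # Synthetic I/Q samples for precipitation moving at 5 m/s, plus noise.
      v_true = 5.0
      f_d = 2.0 * v_true / wavelength
      t = np.arange(n_pulses) / prf
      iq = np.exp(2j * np.pi * f_d * t) + 0.1 * (np.random.randn(n_pulses)
                                                 + 1j * np.random.randn(n_pulses))

      power = np.abs(np.fft.fftshift(np.fft.fft(iq))) ** 2
      freqs = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1.0 / prf))
      f_mean = np.sum(freqs * power) / np.sum(power)     # first spectral moment
      print("estimated Doppler velocity (m/s):", f_mean * wavelength / 2.0)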

  15. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    NASA Astrophysics Data System (ADS)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGAs logic elements to maximize parallel processing. Other non timecritical tasks are achieved by executing a high level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain by Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to a Nios-II processor using Altera's Avalon Memory Mapped protocol.

  16. Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems

    NASA Technical Reports Server (NTRS)

    Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.

    2012-01-01

    Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives in order to free the main processor from work and improve the overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of the mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. To that end, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGAs used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that affects the overall performance in a negative way. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.
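    The operation being accelerated is simple to state in software: for every pixel, compute the Euclidean distance between its spectral vector and a reference spectrum. The numpy sketch below shows that reference computation with invented data shapes; how the paper's two hardware architectures organize the same computation is not reproduced here.

      # Per-pixel multi-spectral Euclidean distance to a reference spectrum; the
      # cube dimensions are invented placeholders.
      import numpy as np

      def ms_euclidean_distance(cube, reference):
          # cube: (rows, cols, bands) image; reference: (bands,) spectrum.
          diff = cube.astype(np.float32) - reference.astype(np.float32)
          return np.sqrt(np.sum(diff * diff, axis=-1))

      cube = np.random.randint(0, 4096, size=(64, 64, 224), dtype=np.uint16)
      reference = cube[32, 32].copy()
      dist = ms_euclidean_distance(cube, reference)
      print(dist.shape, dist[32, 32])      # distance of a pixel to itself is 0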

  17. Design Principles for Covalent Organic Frameworks as Efficient Electrocatalysts in Clean Energy Conversion and Green Oxidizer Production.

    PubMed

    Lin, Chun-Yu; Zhang, Lipeng; Zhao, Zhenghang; Xia, Zhenhai

    2017-02-23

    Covalent organic frameworks (COFs), an emerging class of framework materials linked by covalent bonds, hold potential for various applications such as efficient electrocatalysts, photovoltaics, and sensors. To rationally design COF-based electrocatalysts for oxygen reduction and evolution reactions in fuel cells and metal-air batteries, activity descriptors, derived from orbital energy and bonding structures, are identified with first-principles calculations for the COFs, which correlate COF structures with their catalytic activities. The calculations also predict that alkaline-earth metal-porphyrin COFs could catalyze the direct production of H2O2, a green oxidizer and an energy carrier. These predictions are supported by experimental data, and the design principles derived from the descriptors provide an approach for rational design of new electrocatalysts for both clean energy conversion and green oxidizer production.

  18. A User-Centered Framework for Deriving A Conceptual Design From User Experiences: Leveraging Personas and Patterns to Create Usable Designs

    NASA Astrophysics Data System (ADS)

    Javahery, Homa; Deichman, Alexander; Seffah, Ahmed; Taleb, Mohamed

    Patterns are a design tool to capture best practices, tackling problems that occur in different contexts. A user interface (UI) design pattern spans several levels of design abstraction ranging from high-level navigation to low-level idioms detailing a screen layout. One challenge is to combine a set of patterns to create a conceptual design that reflects user experiences. In this chapter, we detail a user-centered design (UCD) framework that exploits the novel idea of using personas and patterns together. Personas are used initially to collect and model user experiences. UI patterns are selected based on persona specifications; these patterns are then used as building blocks for constructing conceptual designs. Through the use of a case study, we illustrate how personas and patterns can act as complementary techniques in narrowing the gap between two major steps in UCD: capturing users and their experiences, and building an early design based on that information. As a result of lessons learned from the study and by refining our framework, we define a more systematic process called UX-P (User Experiences to Pattern), with a supporting tool. The process introduces intermediate analytical steps and supports designers in creating usable designs.

  19. From Human Factors to Human Actors to Human Crafters: A Meta-Design Inspired Participatory Framework for Designing in Use

    ERIC Educational Resources Information Center

    Maceli, Monica Grace

    2012-01-01

    Meta-design theory emphasizes that system designers can never anticipate all future uses of their system at design time, when systems are being developed. Rather, end users shape their environments in response to emerging needs at use time. Meta-design theory suggests that systems should therefore be designed to adapt to future conditions in the…

  20. A software engineering perspective on environmental modeling framework design: The object modeling system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The environmental modeling community has historically been concerned with the proliferation of models and the effort associated with collective model development tasks (e.g., code generation, data provisioning and transformation, etc.). Environmental modeling frameworks (EMFs) have been developed to...

  1. Narrative Means to Preventative Ends: A Narrative Engagement Framework for Designing Prevention Interventions

    PubMed Central

    Miller-Day, Michelle; Hecht, Michael L.

    2013-01-01

    This paper describes a Narrative Engagement Framework (NEF) for guiding communication-based prevention efforts. This framework suggests that personal narratives have distinctive capabilities in prevention. The paper discusses the concept of narrative, links narrative to prevention, and discusses the central role of youth in developing narrative interventions. As illustration, the authors describe how the NEF is applied in the keepin’ it REAL adolescent drug prevention curriculum, pose theoretical directions, and offer suggestions for future work in prevention communication. PMID:23980613

  2. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  3. Exploring a Framework for Professional Development in Curriculum Innovation: Empowering Teachers for Designing Context-Based Chemistry Education

    NASA Astrophysics Data System (ADS)

    Stolk, Machiel J.; de Jong, Onno; Bulte, Astrid M. W.; Pilot, Albert

    2011-05-01

    Involving teachers in early stages of context-based curriculum innovations requires a professional development programme that actively engages teachers in the design of new context-based units. This study considers the implementation of a teacher professional development framework aiming to investigate processes of professional development. The framework is based on Galperin's theory of the internalisation of actions and it is operationalised into a professional development programme to empower chemistry teachers for designing new context-based units. The programme consists of the teaching of an educative context-based unit, followed by the designing of an outline of a new context-based unit. Six experienced chemistry teachers participated in the instructional meetings and practical teaching in their respective classrooms. Data were obtained from meetings, classroom discussions, and observations. The findings indicated that teachers became only partially empowered for designing a new context-based chemistry unit. Moreover, the process of professional development leading to teachers' empowerment was not carried out as intended. It is concluded that the elaboration of the framework needs improvement. The implications for a new programme are discussed.

  4. Design and development of a multi-architecture, fully implicit, charge and energy conserving particle-in-cell framework

    NASA Astrophysics Data System (ADS)

    Payne, Joshua; Knoll, Dana; McPherson, Allen; Taitano, William; Chacon, Luis; Chen, Guangye; Pakin, Scott

    2013-10-01

    As computer architectures become increasingly heterogeneous, the need for algorithms and applications that can utilize these new architectures grows more pressing. CoCoPIC is a fully implicit, charge and energy conserving particle-in-cell framework developed as part of the Computational Co-Design for Multi-Scale Applications in the Natural Sciences (CoCoMANS) project at Los Alamos National Laboratory. CoCoMANS is a multi-disciplinary computational co-design effort with the goal of developing new algorithms for emerging architectures using multi-scale applications. This poster will present the co-design process evolved within CoCoMANS and details regarding the design and development of a multi-architecture framework for a plasma application. This framework utilizes multiple abstraction layers in order to maximize code reuse between architectures, while providing low-level abstractions to incorporate architecture-specific optimizations such as vectorization or hardware fused multiply-add. CoCoPIC's target problems include 1D3V slow shocks and 2D3V magnetic island coalescence. Results of the multi-core development and optimization process will be presented.

  5. A hybrid framework for design and analysis of fault-tolerant architectures for nanoscale molecular crossbar memories.

    SciTech Connect

    Graham, P. S.; Gokhale, M.; Bhaduri, D.; Shukla, S. K.; Coker, D.; Taylor, V.

    2005-01-01

    It is anticipated that self assembled ultra-dense nanomemories will be more susceptible to manufacturing defects and transient faults than conventional CMOS-based memories, thus the need exists for fault-tolerant memory architectures. The development of such architectures will require intense analysis in terms of achievable performance measures - power dissipation, area, delay and reliability. In this paper, we propose and develop a hybrid automation framework, called HMAN, that aids the design and analysis of fault-tolerant architectures for nanomemories. Our framework can analyze memory architectures at two different levels of the design abstraction, namely the system and circuit levels. To the best of our knowledge, this is the first such attempt at analyzing memory systems at different levels of abstraction and then correlating the different performance measures to provide the system designers guidelines for designing a robust nanomemory. We also illustrate the application of our framework to self-assembled crossbar architectures by analyzing a hierarchical fault-tolerant crossbar-based memory architecture that we have developed, and comparing this with existing crossbar architectures.

  6. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine

    PubMed Central

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-01-01

    In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184
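    As a minimal illustration of the Gaussian process regression step used for motion synthesis, the numpy sketch below maps a single independent variable to a predicted output via a standard RBF-kernel GP posterior mean; the kernel, the variables, and the training data are all invented for illustration and are not taken from the paper.

      # Minimal Gaussian process regression with an RBF kernel: predict an output
      # (e.g., a joint-angle value) from one independent variable (e.g., load on
      # the bar). Kernel, hyperparameters, and data are invented for illustration.
      import numpy as np

      def rbf_kernel(a, b, length=10.0, variance=1.0):
          d = a[:, None] - b[None, :]
          return variance * np.exp(-0.5 * (d / length) ** 2)

      x_train = np.array([20.0, 40.0, 60.0, 80.0])     # loads (kg), invented
      y_train = np.array([95.0, 100.0, 108.0, 118.0])  # peak knee flexion (deg), invented
      noise = 1e-2

      K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
      x_test = np.array([50.0, 70.0])
      K_s = rbf_kernel(x_test, x_train)

      alpha = np.linalg.solve(K, y_train)              # GP posterior mean: K_s K^-1 y
      y_pred = K_s @ alpha
      print(dict(zip(x_test.tolist(), y_pred.round(1).tolist())))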

  7. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine.

    PubMed

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-02-06

    In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.

  8. A Theoretical Framework for Serious Game Design: Exploring Pedagogy, Play and Fidelity and Their Implications for the Design Process

    ERIC Educational Resources Information Center

    Rooney, Pauline

    2012-01-01

    It is widely acknowledged that digital games can provide an engaging, motivating and "fun" experience for students. However an entertaining game does not necessarily constitute a meaningful, valuable learning experience. For this reason, experts espouse the importance of underpinning serious games with a sound theoretical framework which…

  9. A Spartan 6 FPGA-based data acquisition system for dedicated imagers in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Fysikopoulos, E.; Loudos, G.; Georgiou, M.; David, S.; Matsopoulos, G.

    2012-12-01

    We present the development of a four-channel low-cost hardware system for data acquisition, with application in dedicated nuclear medicine imagers. A 12 bit octal channel high-speed analogue to digital converter, with up to 65 Msps sampling rate, was used for the digitization of analogue signals. The digitized data are fed into a field programmable gate array (FPGA), which contains an interface to a bank of double data rate 2 (DDR2)-type memory. The FPGA processes the digitized data and stores the results into the DDR2. An Ethernet link was used for data transmission to a personal computer. The embedded system was designed using Xilinx's embedded development kit (EDK) and was based on Xilinx's Microblaze soft-core processor. The system has been evaluated using two different discrete optical detector arrays (a position-sensitive photomultiplier tube and a silicon photomultiplier) with two different pixelated scintillator arrays (BGO, LSO:Ce). The energy resolution for both detectors was approximately 25%. A clear identification of all crystal elements was achieved in all cases. The data rate of the system with this implementation can reach 60 Mbit s⁻¹. The results have shown that this FPGA data acquisition system is a compact and flexible solution for single-photon-detection applications. This paper was originally submitted for inclusion in the special feature on Imaging Systems and Techniques 2011.

  10. Novel intelligent real-time position tracking system using FPGA and fuzzy logic.

    PubMed

    Soares dos Santos, Marco P; Ferreira, J A F

    2014-03-01

    The main aim of this paper is to test if FPGAs are able to achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented in these architectures. A Multi-state Fuzzy Logic controller (FLC) was implemented both in a Xilinx(®) Virtex-II FPGA (XC2v1000) and in a soft real-time platform NI CompactRIO(®)-9002. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded into the FPGA platform. Performance gains up to 16 times in the steady-state error, up to 27 times in the overshoot and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC controller.

  11. FPGA-based voltage and current dual drive system for high frame rate electrical impedance tomography.

    PubMed

    Khan, Shadab; Manwaring, Preston; Borsic, Andrea; Halter, Ryan

    2015-04-01

    Electrical impedance tomography (EIT) is used to image the electrical property distribution of a tissue under test. An EIT system comprises complex hardware and software modules, which are typically designed for a specific application. Upgrading these modules is a time-consuming process, and requires rigorous testing to ensure proper functioning of new modules with the existing ones. To this end, we developed a modular and reconfigurable data acquisition (DAQ) system using National Instruments' (NI) hardware and software modules, which offer inherent compatibility over generations of hardware and software revisions. The system can be configured to use up to 32 channels. This EIT system can be used to interchangeably apply a current or voltage signal, and measure the tissue response in a semi-parallel fashion. A novel signal averaging algorithm and a 512-point fast Fourier transform (FFT) computation block were implemented on the FPGA. FFT output bins were classified as signal or noise. Signal bins constitute a tissue's response to a pure or mixed tone signal. Signal bins' data can be used for traditional applications, as well as synchronous frequency-difference imaging. Noise bins were used to compute noise power on the FPGA. Noise power represents a metric of signal quality, and can be used to ensure proper tissue-electrode contact. Allocation of these computationally expensive tasks to the FPGA reduced the required bandwidth between the PC and the FPGA for high-frame-rate EIT. In the 16-channel configuration, with a signal-averaging factor of 8, the DAQ frame rate at 100 kHz exceeded 110 frames s⁻¹, and the signal-to-noise ratio exceeded 90 dB across the spectrum. Reciprocity error was found to be for frequencies up to 1 MHz. Static imaging experiments were performed on a high-conductivity inclusion placed in a saline filled tank; the inclusion was clearly localized in the reconstructions obtained for both absolute current and voltage mode data.
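    A software analogue of the bin-classification idea (signal bins at the drive frequencies, all remaining bins treated as noise for a noise-power estimate) can be written in a few lines of numpy, as sketched below; the sampling rate, tone frequencies, and masking width are illustrative assumptions rather than the instrument's actual settings.

      # Classify FFT bins as signal (at the drive tones) or noise and estimate the
      # noise power; sampling rate, tone frequencies, and masking width are
      # illustrative, not the instrument's actual settings.
      import numpy as np

      fs = 1.0e6                      # sampling rate, Hz (illustrative)
      n = 512
      drive_freqs = [50e3, 150e3]     # mixed-tone excitation (illustrative)

      t = np.arange(n) / fs
      x = sum(np.sin(2 * np.pi * f * t) for f in drive_freqs) + 0.01 * np.random.randn(n)

      power = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
      signal_bins = [int(round(f * n / fs)) for f in drive_freqs]

      noise_mask = np.ones(len(power), dtype=bool)
      for b in signal_bins:
          noise_mask[max(b - 2, 0):b + 3] = False     # exclude each tone and its skirt

      noise_power = power[noise_mask].mean()          # signal-quality metric
      signal_power = power[signal_bins].sum()
      print("SNR estimate (dB):", 10 * np.log10(signal_power / noise_power))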

  12. Development of a Framework for Model-Based Analysis, Uncertainty Quantification, and Robust Control Design of Nonlinear Smart Composite Systems

    DTIC Science & Technology

    2015-06-04

    Final report by Ralph Smith, North Carolina State University at Raleigh, dated 04/06/2015 (Distribution A: approved for public release; grant number FA9550-11, truncated in the source record). Only a fragment of the abstract is preserved: the framework targets a range of nonlinear and hysteretic smart composite systems, and a major component of the program focused on the development of this framework in the context…

  13. Applying the Universal Design for Learning Framework for Individuals With Intellectual Disability: The Future Must Be Now.

    PubMed

    Smith, Sean J; Lowrey, K Alisa

    2017-02-01

    The current research in Universal Design for Learning (UDL) for students with intellectual disability (ID) is briefly summarized and considered in light of the national goals presented by the American Association on Intellectual and Developmental Disabilities (AAIDD) in this article. Additionally, an action plan is provided for researchers and practitioners to extend knowledge on the implementation of the UDL framework inclusive of individuals with ID.

  14. Design and application of a framework for examining the beliefs and practices of physics teaching assistants

    NASA Astrophysics Data System (ADS)

    Spike, Benjamin T.; Finkelstein, Noah D.

    2016-06-01

    [This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] We present a newly validated and refined framework, TA-PIVOT (TA Practices In and Views Of Teaching), for examining how physics TAs talk about and how they engage in physics teaching. This work builds upon and extends prior efforts to characterize instructors' beliefs and practices by examining both domains in parallel. We present the comprehensive framework (developed from a study of 31 total TAs) and demonstrate its utility in analyzing both interviews and classroom video observations for a sample of eight TAs. We also discuss how this framework may be used to examine variation in beliefs and practices, track the development of beliefs over time, and inform TA preparation.

  15. Design and implementation of knowledge-based framework for ground objects recognition in remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Shaobin; Ding, Mingyue; Cai, Chao; Fu, Xiaowei; Sun, Yue; Chen, Duo

    2009-10-01

    The advance of image processing makes knowledge-based automatic image interpretation much more realistic than ever. In the domain of remote sensing image processing, the introduction of knowledge enhances the confidence of recognition of typical ground objects. There are mainly two approaches to employing knowledge: the first is to scatter knowledge through the concrete program, so that the relevant knowledge of ground objects is fixed by programming; the second is to store knowledge systematically in a knowledge base that offers unified guidance for each object recognition procedure. In this paper, a knowledge-based framework for ground object recognition in remote sensing images is proposed. This framework takes the second approach, using knowledge within a hierarchical architecture. The recognition of a typical airport demonstrated the feasibility of the proposed framework.

  16. Design of a pseudo-log image transform IP in an HLS-based memory management framework

    NASA Astrophysics Data System (ADS)

    Butt, Shahzad Ahmad; Mancini, Stéphane; Rousseau, Frédéric; Lavagno, Luciano

    2013-02-01

    The pseudo-log image transform is essentially a logarithmic transformation that simulates the distribution of the eye's photoreceptors and finds application in many important areas of real time image and video processing such as motion detection and estimation in robots and foveated space variant cameras. It belongs to a family of non-linear image processing kernels in which references made to memory are non-linear functions of loop indices. Non-linear kernels need some form of memory management in order to achieve the required throughput, to minimize on-chip memory and to maximize possible data re-use. In this paper we present the design of a pseudo-log image processing hardware accelerator IP, integrated with different interpolation filtering techniques, using a memory management framework. The framework can automatically generate a memory hierarchy around the IP and a data transfer controller that facilitates data exchange with main memory. The memory hierarchy reduces on-chip memory requirements, optimizes throughput and increases data-reuse. The design of the IP is fully performed at the algorithmic level in C/C++. The algorithmic description is profiled within the framework to create a customized memory hierarchy, also described at the synthesizable algorithmic level. Finally, high level synthesis is used to perform hardware design space exploration and performance estimation. Experiments show that the generated memory hierarchy is able to feed the IP with a very high bandwidth even in presence of long external memory latencies.
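    To make the memory-access pattern concrete, the numpy sketch below performs a simplified space-variant remap in the spirit of the pseudo-log transform: output samples are taken at radii that grow exponentially from the image centre, so the input addresses are non-linear functions of the output loop indices. The exact mapping and the interpolation filters of the IP are not reproduced; nearest-neighbour sampling is used only to keep the sketch short.

      # Simplified space-variant remap: output rings sample the input at
      # exponentially growing radii, so the input addresses are non-linear
      # functions of the output loop indices. Nearest-neighbour sampling
      # replaces the IP's interpolation filters.
      import numpy as np

      def pseudo_log_remap(image, n_rings=64, n_sectors=128, r_min=2.0):
          h, w = image.shape
          cy, cx = h / 2.0, w / 2.0
          r_max = min(cy, cx) - 1.0
          out = np.zeros((n_rings, n_sectors), dtype=image.dtype)
          for i in range(n_rings):                          # log-spaced radii
              r = r_min * (r_max / r_min) ** (i / (n_rings - 1))
              for j in range(n_sectors):
                  theta = 2.0 * np.pi * j / n_sectors
                  y = int(round(cy + r * np.sin(theta)))
                  x = int(round(cx + r * np.cos(theta)))
                  out[i, j] = image[y, x]
          return out

      frame = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
      print(pseudo_log_remap(frame).shape)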

  17. Toward the Rational Design of Novel Noncentrosymmetric Materials: Factors Influencing the Framework Structures.

    PubMed

    Ok, Kang Min

    2016-12-20

    Solid-state materials with extended structures have revealed many interesting structure-related characteristics. Among many, materials crystallizing in noncentrosymmetric (NCS) space groups have attracted massive attention attributable to a variety of superb functional properties such as ferroelectricity, pyroelectricity, piezoelectricity, and nonlinear optical (NLO) properties. In fact, the characteristics are pivotal to many industrial applications such as laser systems, optical communications, photolithography, energy harvesting, detectors, and memories. Thus, for the past several decades, a great deal of synthetic effort has been vigorously made to realize these technologically important properties by improving the occurrence of macroscopic NCS space groups. A bright approach to increase the incidence of NCS structures was combining local asymmetric units during the initial synthesis process. Although a significant improvement has been achieved in obtaining new NCS materials using this strategy, the majority of solid-state materials still crystallize in centrosymmetric (CS) structures as the locally unsymmetrical units are easily lined up in an antiparallel manner. Therefore, discovering an effective method to control the framework structure and the macroscopic symmetry is an imminent ongoing challenge. In order to more effectively control the overall symmetry of solid-state compounds, it is critical to understand how the backbone and the subsequent centricity are affected during the crystallization. In this Account, several factors influencing the framework structure and centricity of solid-state materials are described in order to more systematically discover novel NCS materials. Recent studies on crystalline solid-state materials suggest three factors affecting the local coordination environment as well as the overall symmetry of the framework structure: (1) size variations of the various template cations, (2) a variable backbone arrangement occurring from

  18. Modular particle filtering FPGA hardware architecture for brain machine interfaces.

    PubMed

    Mountney, John; Obeid, Iyad; Silage, Dennis

    2011-01-01

    As the computational complexities of neural decoding algorithms for brain machine interfaces (BMI) increase, their implementation through sequential processors becomes prohibitive for real-time applications. This work presents the field programmable gate array (FPGA) as an alternative to sequential processors for BMIs. The reprogrammable hardware architecture of the FPGA provides a near optimal platform for performing parallel computations in real-time. The scalability and reconfigurability of the FPGA accommodates diverse sets of neural ensembles and a variety of decoding algorithms. Throughput is significantly increased by decomposing computations into independent parallel hardware modules on the FPGA. This increase in throughput is demonstrated through a parallel hardware implementation of the auxiliary particle filtering signal processing algorithm.
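    The auxiliary particle filter mentioned above is easy to sketch in numpy for a toy one-dimensional tracking model, as shown below; each particle's weighting, propagation, and resampling step is independent, which is what the FPGA modules parallelize. The random-walk model and its parameters are invented stand-ins for the neural decoding models used in BMIs.

      # Compact auxiliary particle filter for a toy 1-D random-walk state with
      # Gaussian observations; the model is a stand-in for BMI decoding models.
      import numpy as np

      rng = np.random.default_rng(0)
      n_particles, n_steps = 500, 50
      q, r = 0.1, 0.5                  # process and observation noise std

      x_true = np.cumsum(q * rng.standard_normal(n_steps))
      y_obs = x_true + r * rng.standard_normal(n_steps)

      particles = np.zeros(n_particles)
      estimate = 0.0
      for y in y_obs:
          # First stage: weight by the likelihood of each particle's prediction
          # (for a random walk, the predicted mean is the particle itself).
          w1 = np.exp(-0.5 * ((y - particles) / r) ** 2)
          idx = rng.choice(n_particles, size=n_particles, p=w1 / w1.sum())

          # Second stage: propagate the chosen particles and correct the weights.
          proposed = particles[idx] + q * rng.standard_normal(n_particles)
          w2 = np.exp(-0.5 * ((y - proposed) / r) ** 2) \
               / np.exp(-0.5 * ((y - particles[idx]) / r) ** 2)
          w2 /= w2.sum()
          estimate = np.sum(w2 * proposed)
          particles = proposed[rng.choice(n_particles, size=n_particles, p=w2)]

      print("final estimate vs. truth:", round(estimate, 3), round(x_true[-1], 3))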

  19. Stego on FPGA: an IWT approach.

    PubMed

    Ramalingam, Balakrishnan; Amirtharajan, Rengarajan; Rayappan, John Bosco Balaguru

    2014-01-01

    A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar IWT was used to separate the subbands, namely LL, LH, HL, and HH, from 8 × 8 pixel blocks, and the encrypted secret data is hidden in the LH, HL, and HH blocks using Moore and Hilbert space filling curve (SFC) scan patterns. Either the Moore or the Hilbert SFC was chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produces the lowest mean square error (MSE) and the highest peak signal-to-noise ratio (PSNR). The scan pattern chosen for each block is recorded, and this record constitutes the secret key. Our system took 1.6 µs to embed the data in the coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA).
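    For reference, the one-level integer Haar (lifting) decomposition of an 8 × 8 block into LL, LH, HL, and HH subbands can be written in numpy as below; the encryption step and the Moore/Hilbert space-filling-curve embedding are not reproduced in this sketch.

      # One-level integer Haar (lifting) decomposition of an 8x8 block into the
      # LL, LH, HL, and HH subbands; encryption and space-filling-curve embedding
      # are not shown.
      import numpy as np

      def haar_iwt_axis(x, axis):
          x = np.asarray(x, dtype=np.int64)
          even = x[::2, :] if axis == 0 else x[:, ::2]
          odd = x[1::2, :] if axis == 0 else x[:, 1::2]
          detail = odd - even
          approx = even + detail // 2        # integer lifting (S-transform)
          return approx, detail

      def haar_iwt_2d(block):
          lo, hi = haar_iwt_axis(block, axis=1)     # horizontal pass
          ll, lh = haar_iwt_axis(lo, axis=0)        # vertical pass on the low band
          hl, hh = haar_iwt_axis(hi, axis=0)        # vertical pass on the high band
          return ll, lh, hl, hh

      block = np.arange(64, dtype=np.int64).reshape(8, 8)
      ll, lh, hl, hh = haar_iwt_2d(block)
      print(ll.shape, lh.shape, hl.shape, hh.shape)  # each subband is 4x4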

  20. Formal Learning Sequences and Progression in the Studio: A Framework for Digital Design Education

    ERIC Educational Resources Information Center

    Wärnestål, Pontus

    2016-01-01

    This paper examines how to leverage the design studio learning environment throughout long-term Digital Design education in order to support students to progress from tactical, well-defined, device-centric routine design, to confidently design sustainable solutions for strategic, complex, problems for a wide range of devices and platforms in the…

  1. The Importance of Theoretical Frameworks and Mathematical Constructs in Designing Digital Tools

    ERIC Educational Resources Information Center

    Trinter, Christine

    2016-01-01

    The increase in availability of educational technologies over the past few decades has not only led to new practice in teaching mathematics but also to new perspectives in research, methodologies, and theoretical frameworks within mathematics education. Hence, the amalgamation of theoretical and pragmatic considerations in digital tool design…

  2. National Ecosystem Services Classification System (NESCS): Framework Design and Policy Application

    EPA Science Inventory

    Understanding the ways in which ecosystems provide flows of “services” to humans is critical for decision making in many contexts; however, relationships between natural and human systems are complex. A well-defined framework for classifying ecosystem services is essential for sy...

  3. Static Numbers to Dynamic Statistics: Designing a Policy-Friendly Social Policy Indicator Framework

    ERIC Educational Resources Information Center

    Ahn, Sang-Hoon; Choi, Young Jun; Kim, Young-Mi

    2012-01-01

    In line with the economic crisis and rapid socio-demographic changes, the interest in "social" and "well-being" indicators has been revived. Social indicator movements of the 1960s resulted in the establishment of social indicator statistical frameworks; that legacy has remained intact in many national governments and…

  4. Design and Application of a Framework for Examining the Beliefs and Practices of Physics Teaching Assistants

    ERIC Educational Resources Information Center

    Spike, Benjamin T.; Finkelstein, Noah D.

    2016-01-01

    We present a newly validated and refined framework, TA-PIVOT (TA Practices In and Views Of Teaching), for examining how physics TAs talk about and how they engage in physics teaching. This work builds upon and extends prior efforts to characterize instructors' beliefs and practices by examining both domains in parallel. We present the…

  5. Towards a Framework for Attention Cueing in Instructional Animations: Guidelines for Research and Design

    ERIC Educational Resources Information Center

    de Koning, Bjorn B.; Tabbers, Huib K.; Rikers, Remy M. J. P.; Paas, Fred

    2009-01-01

    This paper examines the transferability of successful cueing approaches from text and static visualization research to animations. Theories of visual attention and learning as well as empirical evidence for the instructional effectiveness of attention cueing are reviewed and, based on Mayer's theory of multimedia learning, a framework was…

  6. Leading by Design: A Collaborative and Creative Leadership Framework for Dance Integration in P-12 Schools

    ERIC Educational Resources Information Center

    Leonard, Alison E.; Hellenbrand, Leah; McShane-Hellenbrand, Karen

    2014-01-01

    This article presents the Mentorship, Integrated Curriculum, Collaboration, and Scholarship (MICCS) framework as an applicable model for transformative, creative, and curriculum-based K-12 dance education and arts integration. Developed and practiced by the authors--an artist/educator, a classroom teacher, and an arts education scholar and former…

  7. A Framework for the Design and Implementation of Service-Learning Courses

    ERIC Educational Resources Information Center

    Whitley, Meredith A.; Walsh, David S.

    2014-01-01

    Within the fields of kinesiology and physical education teacher education, there is a growing number of courses and curricula that utilize service-learning as a pedagogical strategy. However, these courses and curricula are often constructed, implemented, and evaluated without a strong framework based on literature in the field, which has led to…

  8. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  9. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  10. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  11. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  12. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  13. Cognitive Complexity and Task Sequencing: Studies in a Componential Framework for Second Language Task Design

    ERIC Educational Resources Information Center

    Robinson, Peter

    2005-01-01

    This paper describes a framework for researching the Cognition Hypothesis which claims that pedagogic tasks be sequenced for learners on the basis of increases in their cognitive complexity. It distinguishes dimensions of complexity which increase the conceptual and linguistic demands tasks make on communication, so creating the conditions for L2…

  14. Conceptual Design and Deployment of a Metadata Framework for Educational Resources on the Internet.

    ERIC Educational Resources Information Center

    Sutton, Stuart A.

    1999-01-01

    Describes the conceptual foundations for the Gateway to Educational Materials (GEM) metadata framework including adoption of the Dublin Core Element Set as its base referent, and extension of that set to meet the needs of the domain. Discusses selection and structuring of the units of description. Examines metadata generation, the association of…

  15. Pasteur's Quadrant: A Framework for Designing Reward Strategies To Enhance Public Service in Public Universities.

    ERIC Educational Resources Information Center

    Schneider, Anne L.

    1998-01-01

    Pasteur's quadrant is offered as a framework for escaping the dichotomy of "applied" vs. "basic" research. By focusing on research inspired by the community or society's needs, and which also draws on or creates appropriate theory, public universities can alter faculty's research agenda so that the total contribution of university research to the…

  16. Meta-Design as a Pedagogical Framework for Encouraging Student Agency and Democratizing the Classroom

    ERIC Educational Resources Information Center

    Hethrington, Christopher

    2015-01-01

    As diverse social and economic pressures are applied to post-secondary education, innovative approaches to pedagogical methodology are required. Given that the new norm in both industry and academia is that of constant change, a flexible and responsive approach is required along with a framework that empowers students with the skills to become…

  17. A Systematic Framework of Virtual Laboratories Using Mobile Agent and Design Pattern Technologies

    ERIC Educational Resources Information Center

    Li, Yi-Hsung; Dow, Chyi-Ren; Lin, Cheng-Min; Chen, Sheng-Chang; Hsu, Fu-Wei

    2009-01-01

    Innovations in network and information technology have transformed traditional classroom lectures into new approaches that have given universities the opportunity to create a virtual laboratory. However, there is no systematic framework in existing approaches for the development of virtual laboratories. Further, developing a virtual laboratory…

  18. Using the 4MAT Framework to Design a Problem-Based Learning Biostatistics Course

    ERIC Educational Resources Information Center

    Nowacki, Amy S.

    2011-01-01

    The study presents and applies the 4MAT theoretical framework to educational planning to transform a biostatistics course into a problem-based learning experience. Using a four-question approach, described are specific activities/materials utilized at both the class and course levels. Two web-based instruments collected data regarding student…

  19. FPGA for Power Control of MSL Avionics

    NASA Technical Reports Server (NTRS)

    Wang, Duo; Burke, Gary R.

    2011-01-01

    A PLGT FPGA (Field Programmable Gate Array) is included in the LCC (Load Control Card), GID (Guidance Interface & Drivers), TMC (Telemetry Multiplexer Card), and PFC (Pyro Firing Card) boards of the Mars Science Laboratory (MSL) spacecraft. (PLGT stands for PFC, LCC, GID, and TMC.) It provides the interface between the backside bus and the power drivers on these boards. The LCC drives power switches to switch power loads, as well as relays. The GID drives the thrusters and latch valves and also provides the star-tracker and Sun-sensor interfaces. The PFC drives pyros, and the TMC receives digital and analog telemetry. The FPGA is implemented both in Xilinx (Spartan-3 400) and in Actel (RTSX72SU, ASX72S) parts. The Xilinx Spartan-3 part is used for the breadboard, the Actel ASX part is used for the EM (Engineering Model), and the pin-compatible, radiation-hardened RTSX part is used for the final EM and flight. The MSL spacecraft uses an FC (Flight Computer) to control power loads, relays, thrusters, latch valves, the Sun-sensor, and the star-tracker, and to read telemetry such as temperature. Commands are sent over a 1553 bus to the MREU (Multi-Mission System Architecture Platform Remote Engineering Unit). The MREU resends the commands over a remote serial command bus (c-bus) to the LCC, GID, TMC, and PFC. The MREU also sends out telemetry addresses via a remote serial telemetry address bus to the LCC, GID, TMC, and PFC, and the status is returned over the remote serial telemetry data bus.
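    The command and telemetry flow described in this record can be summarized with a purely illustrative software model (an assumption for exposition, not flight software or the PLGT FPGA logic): the flight computer issues commands over the 1553 bus to the MREU, which fans them out over the remote serial command bus to the four boards and collects status back over the remote serial telemetry buses.

```python
# Purely illustrative model of the command/telemetry flow described above
# (an assumption for exposition, not flight software or the PLGT FPGA logic).
class Board:
    """Stand-in for one of the LCC, GID, TMC, or PFC boards."""
    def __init__(self, name):
        self.name = name
        self.telemetry = {}                    # address -> value

    def execute(self, command):
        # e.g. the LCC switching a power load, the GID driving a thruster.
        return f"{self.name} executed {command}"

    def read(self, address):
        return self.telemetry.get(address, 0)

class MREU:
    """Fans commands out to the boards and collects their telemetry."""
    def __init__(self, boards):
        self.boards = {b.name: b for b in boards}

    def command(self, board, cmd):             # remote serial command bus (c-bus)
        return self.boards[board].execute(cmd)

    def read_telemetry(self, board, address):  # remote serial telemetry buses
        return self.boards[board].read(address)

# Flight-computer side: a command travels FC -> 1553 bus -> MREU -> board.
mreu = MREU([Board("LCC"), Board("GID"), Board("TMC"), Board("PFC")])
print(mreu.command("LCC", "switch_load_7_on"))
print(mreu.read_telemetry("TMC", address=0x12))
```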

  20. FPGA-accelerated adaptive optics wavefront control

    NASA Astrophysics Data System (ADS)

    Mauch, S.; Reger, J.; Reinlein, C.; Appelfelder, M.; Goy, M.; Beckert, E.; Tünnermann, A.

    2014-03-01

    The speed of real-time adaptive optical systems is primarily restricted by the data-processing hardware and by computational aspects. Furthermore, mirror layouts with increasing numbers of actuators reduce the bandwidth (speed) of the system and, thus, the number of applicable control algorithms. This burden turns out to be a key impediment for deformable mirrors with a continuous mirror surface and highly coupled actuator influence functions. In this regard, specialized hardware is necessary for high-performance real-time control applications. Our approach to overcoming this challenge is an adaptive optics system based on a Shack-Hartmann wavefront sensor (SHWFS) with a CameraLink interface. The data processing is based on a high-performance Intel Core i7 quad-core hard real-time Linux system. Employing a Xilinx Kintex-7 FPGA, a custom-developed PCIe card is presented that accelerates the analysis of the Shack-Hartmann wavefront sensor. A recently developed real-time-capable spot detection algorithm evaluates the wavefront. The main features of the presented system are the reduction of latency and the acceleration of computation. For example, matrix multiplications, which in general are of complexity O(n³), are accelerated by using the DSP48 slices of the field-programmable gate array (FPGA) as well as a novel hardware implementation of the SHWFS algorithm. Further benefits come from the Streaming SIMD Extensions (SSE), which exploit the parallelization capability of the processor to further reduce the latency and increase the bandwidth of the closed loop. With this approach, up to 64 actuators of a deformable mirror can be handled and controlled without noticeable restriction from computational burdens.
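    A minimal NumPy sketch of the two computations this record says are accelerated, assuming nothing about the authors' actual FPGA or SSE code: centre-of-gravity spot detection over the Shack-Hartmann subapertures to obtain wavefront slopes, followed by the matrix-vector reconstruction that maps slopes to deformable-mirror commands (the kind of matrix arithmetic the DSP48 slices handle). The grid size, loop gain, and random reconstruction matrix are placeholders.

```python
# Minimal sketch of the accelerated computations described above (an
# assumption for illustration, not the authors' FPGA or SSE code): centroid
# the Shack-Hartmann spots to get slopes, then apply a matrix-vector
# reconstruction to update the deformable-mirror commands.
import numpy as np

def centroid_slopes(frame, grid=(8, 8)):
    """Centre-of-gravity spot position per subaperture (x and y offsets)."""
    H, W = frame.shape
    sh, sw = H // grid[0], W // grid[1]
    slopes = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            sub = frame[i*sh:(i+1)*sh, j*sw:(j+1)*sw].astype(float)
            total = sub.sum() + 1e-12
            ys, xs = np.mgrid[0:sh, 0:sw]
            slopes.append(((xs * sub).sum() / total - (sw - 1) / 2,
                           (ys * sub).sum() / total - (sh - 1) / 2))
    return np.asarray(slopes).ravel()          # length 2 * number of subapertures

# Closed-loop integrator: command -= gain * (R @ slopes).  On the FPGA this
# matrix arithmetic is what the DSP48 slices accelerate.
n_act = 64                                      # 64 actuators, as in the abstract
R = np.random.default_rng(2).standard_normal((n_act, 2 * 64)) * 0.01   # placeholder
command = np.zeros(n_act)

frame = np.random.default_rng(3).random((128, 128))    # stand-in SHWFS image
command -= 0.3 * (R @ centroid_slopes(frame))
print(command[:4])
```

    In the system described above, the spot detection runs in the FPGA's SHWFS pipeline while the processor-side arithmetic is vectorized with SSE; the sketch only shows the data flow between the two stages, not that hardware/software split.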