Science.gov

Sample records for fpga design framework

  1. A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations

    PubMed Central

    Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2008-01-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. The Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. The performance of the various FPGA design approaches is compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory-economical solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
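
    For readers unfamiliar with the kind of arithmetic such a framework must map to hardware, the sketch below integrates a textbook first-order kinetic AMPA channel model with forward Euler in C. The rate constants, time step, and transmitter pulse are illustrative assumptions, not the component library, word lengths, or exact models used in the paper.

        #include <stdio.h>

        /* Textbook first-order kinetic AMPA model (illustrative constants, not the paper's):
         *   ds/dt = alpha * T * (1 - s) - beta * s,   I = g_max * s * (V - E_rev)
         * Real-time FPGA versions replace the floating-point operations below with
         * fixed-point multipliers, adders, and factored exponential/division units.  */
        int main(void) {
            const double alpha = 1.1;     /* 1/(mM*ms), binding rate (illustrative)   */
            const double beta  = 0.19;    /* 1/ms, unbinding rate (illustrative)      */
            const double g_max = 0.35e-9; /* S, peak conductance                      */
            const double e_rev = 0.0;     /* mV, AMPA reversal potential              */
            const double dt    = 0.01;    /* ms, integration step                     */

            double s = 0.0, v = -65.0;    /* gating variable, membrane potential (mV) */
            for (int step = 0; step < 1000; ++step) {
                double t_conc = (step < 100) ? 1.0 : 0.0;   /* mM, 1 ms transmitter pulse */
                double dsdt = alpha * t_conc * (1.0 - s) - beta * s;
                s += dt * dsdt;                             /* forward Euler update       */
                double i_syn = g_max * s * (v - e_rev);     /* synaptic current (A)       */
                if (step % 100 == 0)
                    printf("t=%5.2f ms  s=%.4f  I=%.3e A\n", step * dt, s, i_syn);
            }
            return 0;
        }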

  2. FPGA design and implementation of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Yang, Zhihui; Zhou, Gang

    2015-12-01

    In this paper, we choose four different variances of 1, 3, 6 and 12 to conduct FPGA designs with three kinds of Gaussian filtering algorithm: implementing the Gaussian filter with a Gaussian filter template, approximating the Gaussian filter with mean filtering, and approximating the Gaussian filter with IIR filtering. Through waveform simulation and synthesis, we obtain the processing results on the experimental image and the FPGA resource consumption of the three methods. We take the result of the Gaussian filter computed in MATLAB as the reference to obtain the error of each implementation. By comparing the FPGA resources and the error of the FPGA implementation methods, we determine the best FPGA design for realizing a Gaussian filter. The results support the following conclusions. When the variance is small, the FPGA resources are sufficient for the template-based implementation of the Gaussian filter, which is the best choice. But when the variance is so large that the FPGA resources are exhausted, the Gaussian filter can instead be approximated with mean filtering or IIR filtering.
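
    As a software reference for the mean-filter approximation compared above, the hedged C sketch below approximates a 1-D Gaussian by cascading three box (mean) filters, a standard central-limit-theorem trick; the radius and the number of passes are illustrative choices and are not taken from the paper.

        #include <stdio.h>

        #define N 32

        /* One pass of a box (mean) filter of radius r with clamped borders. */
        static void box_filter(const float *in, float *out, int n, int r) {
            for (int i = 0; i < n; ++i) {
                float acc = 0.0f;
                int cnt = 0;
                for (int k = -r; k <= r; ++k) {
                    int j = i + k;
                    if (j < 0) j = 0;
                    if (j >= n) j = n - 1;
                    acc += in[j];
                    ++cnt;
                }
                out[i] = acc / (float)cnt;
            }
        }

        int main(void) {
            float a[N] = {0}, b[N];
            a[N / 2] = 1.0f;          /* impulse input: the output shows the effective kernel */
            /* Three cascaded box filters approximate a Gaussian (central limit theorem). */
            box_filter(a, b, N, 2);
            box_filter(b, a, N, 2);
            box_filter(a, b, N, 2);
            for (int i = 0; i < N; ++i) printf("%0.4f ", b[i]);
            printf("\n");
            return 0;
        }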

  3. FPGA design for constrained energy minimization

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Chang, Chein-I.; Cao, Mang

    2004-02-01

    The Constrained Energy Minimization (CEM) has been widely used for hyperspectral detection and classification. The feasibility of implementing the CEM as a real-time processing algorithm in systolic arrays has also been demonstrated. The main challenge of realizing the CEM in hardware architecture lies in the computation of the inverse of the data correlation matrix performed in the CEM, which requires a complete set of data samples. In order to cope with this problem, the data correlation matrix must be calculated in a causal manner, using only the data samples up to the sample being processed. This paper presents a Field Programmable Gate Array (FPGA) design of such a causal CEM. The main feature of the proposed FPGA design is the use of the Coordinate Rotation Digital Computer (CORDIC) algorithm, which can convert a Givens rotation of a vector into a set of shift-add operations. As a result, the CORDIC algorithm can be easily implemented in hardware architecture, and therefore in an FPGA. Since the computation of the inverse of the data correlation matrix involves a series of Givens rotations, the use of the CORDIC algorithm allows the causal CEM to perform real-time processing in the FPGA. In this paper, an FPGA implementation of the causal CEM is studied and its detailed architecture is described.
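
    The CORDIC idea the paper relies on can be illustrated in a few lines: the vectoring-mode iteration below drives the y component of a vector to zero using only adds, subtracts, and power-of-two scalings (shifts in fixed-point hardware), which is how a Givens rotation is applied. This floating-point sketch is for illustration only and is not the paper's fixed-point architecture.

        #include <stdio.h>
        #include <math.h>

        /* CORDIC vectoring mode: rotate (x, y) onto the x-axis using shift-add style
         * micro-rotations.  In fixed-point hardware the divisions by 2^i are shifts.  */
        int main(void) {
            double x = 3.0, y = 4.0, angle = 0.0;
            const int iters = 16;
            for (int i = 0; i < iters; ++i) {
                double dx = x / (double)(1 << i);     /* x >> i in hardware */
                double dy = y / (double)(1 << i);     /* y >> i in hardware */
                if (y > 0) { x += dy; y -= dx; angle += atan(1.0 / (1 << i)); }
                else       { x -= dy; y += dx; angle -= atan(1.0 / (1 << i)); }
            }
            /* After n iterations the magnitude is scaled by the CORDIC gain K ~ 1.6468. */
            const double K = 1.646760258121;
            printf("magnitude = %f (expected 5), angle = %f rad (expected %f)\n",
                   x / K, angle, atan2(4.0, 3.0));
            return 0;
        }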

  4. FPGA Design Practices for I&C in Nuclear Power Plants

    SciTech Connect

    Bobrek, Miljko; Wood, Richard Thomas; Bouldin, Donald; Waterman, Michael E

    2009-01-01

    Safe FPGA design practices can be classified into three major groups covering board-level and FPGA logic-level design practices, FPGA design entry methods, and FPGA design methodology. This paper presents the most common hardware and software design practices that are acceptable in safety-critical FPGA systems. It also proposes an FPGA-specific design life cycle including design entry, FPGA synthesis, place and route, and validation and verification.

  5. OpenACC to FPGA: A Framework for Directive-based High-Performance Reconfigurable Computing

    SciTech Connect

    Lee, Seyong; Vetter, Jeffrey S

    2016-01-01

    This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into an OpenCL code, which is further compiled into an FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from underlying architectures, which facilitates portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.
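
    A hedged illustration of the kind of input such a framework consumes: a plain C loop annotated with standard OpenACC directives, shown here for a simple vector operation. The paper's benchmarks and its FPGA-specific pragma extensions are not reproduced; this is generic OpenACC only.

        #include <stdio.h>

        #define N 1024

        int main(void) {
            float x[N], y[N];
            for (int i = 0; i < N; ++i) { x[i] = (float)i; y[i] = 1.0f; }

            /* Standard OpenACC: the compiler (OpenARC -> OpenCL -> FPGA bitstream in the
             * paper's flow) offloads this loop to the accelerator.                      */
            #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
            for (int i = 0; i < N; ++i)
                y[i] = 2.0f * x[i] + y[i];

            printf("y[10] = %f\n", y[10]);   /* expect 21.0 */
            return 0;
        }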

  6. A scalable multi-FPGA framework for real-time digital signal processing

    NASA Astrophysics Data System (ADS)

    Irick, K. M.; DeBole, M.; Park, S.; Al Maashri, A.; Kestur, S.; Yu, C.-L.; Vijaykrishnan, N.

    2009-08-01

    FPGAs have emerged as the preferred platform for implementing real-time signal processing applications. In the sub-45nm technologies, FPGAs offer significant cost and design-time advantages over application-specific custom chips and consume significantly less power than general-purpose processors while maintaining, or improving performance. Moreover, FPGAs are more advantageous than GPUs in their support for control-intensive applications, custom bit-precision operations, and diverse system interface protocols. Nonetheless, a significant inhibitor to the widespread adoption of FPGAs has been the expertise required to effectively realize functional designs that maximize application performance. While there have been several academic and commercial efforts to improve the usability of FPGAs, they have primarily focused on easing the tasks of an expert FPGA designer rather than increasing the usability offered to an application developer. In this work, the design of a scalable algorithmic-level design framework for FPGAs, AlgoFLEX, is described. AlgoFLEX offers rapid algorithmic level composition and exploration while maintaining the performance realizable from a fully custom, albeit difficult and laborious, design effort. The framework masks aspects of accelerator implementation, mapping, and communication while exposing appropriate algorithm tuning facilities to developers and system integrators. The effectiveness of the AlgoFLEX framework is demonstrated by rapidly mapping a class of image and signal processing applications to a multi-FPGA platform.

  7. Pipelined CPU Design with FPGA in Teaching Computer Architecture

    ERIC Educational Resources Information Center

    Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon

    2012-01-01

    This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…

  8. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. The VHDL language and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
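
    The gray-scale weighted-averaging fusion compared above is simple enough to show directly; the C sketch below fuses two co-registered 8-bit frames pixel by pixel. The 0.5/0.5 weights and the tiny frame size are illustrative assumptions, not values from the paper.

        #include <stdint.h>
        #include <stdio.h>

        #define W 4
        #define H 2

        /* Pixel-wise weighted-average fusion of two co-registered grayscale frames.
         * In an FPGA pipeline this is one multiply-accumulate per pixel per channel. */
        static void fuse_weighted(const uint8_t *vis, const uint8_t *ir,
                                  uint8_t *out, int n, float w_vis) {
            for (int i = 0; i < n; ++i) {
                float v = w_vis * vis[i] + (1.0f - w_vis) * ir[i];
                out[i] = (uint8_t)(v + 0.5f);   /* round to nearest */
            }
        }

        int main(void) {
            uint8_t vis[W * H] = {10, 20, 30, 40, 50, 60, 70, 80};
            uint8_t ir [W * H] = {200, 180, 160, 140, 120, 100, 80, 60};
            uint8_t out[W * H];
            fuse_weighted(vis, ir, out, W * H, 0.5f);
            for (int i = 0; i < W * H; ++i) printf("%d ", out[i]);
            printf("\n");
            return 0;
        }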

  9. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets. PMID:25965680
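
    The core beamforming operation being synthesized is delay-and-sum; the hedged C sketch below beamforms a single image point from per-channel RF data. The geometry, channel count, array layout, and round-trip time model are simplified placeholders for illustration and are not the OpenCL kernels from the paper.

        #include <math.h>
        #include <stdio.h>

        #define N_CH   8       /* channels (the paper uses 128)  */
        #define N_SAMP 2048    /* RF samples per channel         */
        #define FS     40.0e6  /* Hz, sampling rate              */
        #define C      1540.0  /* m/s, speed of sound in tissue  */

        /* Delay-and-sum for one image point: sum each channel's RF sample at a
         * (crudely modeled) round-trip time of flight, nearest-sample delays only.  */
        static float das_point(float rf[N_CH][N_SAMP], const float elem_x[N_CH],
                               float px, float pz) {
            float acc = 0.0f;
            for (int ch = 0; ch < N_CH; ++ch) {
                float dx = px - elem_x[ch];
                float dist = sqrtf(dx * dx + pz * pz);    /* element-to-point distance (m) */
                float t = 2.0f * dist / C;                /* simplified round-trip time     */
                int idx = (int)(t * FS + 0.5f);           /* nearest-sample delay           */
                if (idx >= 0 && idx < N_SAMP)
                    acc += rf[ch][idx];
            }
            return acc;
        }

        int main(void) {
            static float rf[N_CH][N_SAMP];                 /* zero-filled placeholder RF data */
            float elem_x[N_CH];
            for (int ch = 0; ch < N_CH; ++ch) elem_x[ch] = 0.0003f * ch;  /* 0.3 mm pitch */
            printf("pixel value = %f\n", das_point(rf, elem_x, 0.001f, 0.020f));
            return 0;
        }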

  10. REALIZATION OF A CUSTOM DESIGNED FPGA BASED EMBEDDED CONTROLLER.

    SciTech Connect

    SEVERINO,F.; HARVEY, M.; HAYES, T.; HOFF, L.; ODDO, P.; SMITH, K.S.

    2007-10-15

    As part of the Low Level RF (LLRF) upgrade project at Brookhaven National Laboratory's Collider-Accelerator Department (BNL C-AD), we have recently developed and tested a prototype high performance embedded controller. This controller is a custom designed PMC module employing a Xilinx V4FX60 FPGA with a PowerPC405 embedded processor, and a wide variety of on board peripherals (DDR2 SDRAM, FLASH, Ethernet, PCI, multi-gigabit serial transceivers, etc.). The controller is capable of running either an embedded version of LINUX or VxWorks, the standard operating system for RHIC front end computers (FECs). We have successfully demonstrated functionality of this controller as a standard RHIC FEC and tested all on board peripherals. We now have the ability to develop complex, custom digital controllers within the framework of the standard RHIC control system infrastructure. This paper will describe various aspects of this development effort, including the basic hardware, functional capabilities, the development environment, kernel and system integration, and plans for further development.

  11. Evaluation of power costs in applying TMR to FPGA designs.

    SciTech Connect

    Rollins, Nathaniel; Wirthlin, M. J.; Graham, P. S.

    2004-01-01

    Triple modular redundancy (TMR) is a technique commonly used to mitigate against design failures caused by single event upsets (SEUs). The SEU immunity that TMR provides comes at the cost of increased design area and decreased speed. Additionally, the cost of increased power due to TMR must be considered. This paper evaluates the power costs of TMR and validates the evaluations with actual measurements. Sensitivity to design placement is another important part of this study. Power consumption costs due to TMR are also evaluated in different FPGA architectures. This study shows that power consumption rises in the range of 3x to 7x when TMR is applied to a design.
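
    For context, TMR triplicates the logic and votes on the outputs; a bitwise majority voter, which is the structure whose area and power overhead the paper measures, looks like this in C. This is a conceptual sketch only; the paper applies the technique at the netlist level.

        #include <stdint.h>
        #include <stdio.h>

        /* Bitwise majority voter: each output bit is 1 iff at least two of the three
         * redundant copies agree, so a single upset in one copy is masked.           */
        static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c) {
            return (a & b) | (b & c) | (a & c);
        }

        int main(void) {
            uint32_t golden = 0xA5A5A5A5u;
            uint32_t upset  = golden ^ 0x00000400u;   /* one copy suffers a bit flip */
            printf("voted = 0x%08X (expected 0x%08X)\n",
                   tmr_vote(golden, upset, golden), golden);
            return 0;
        }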

  12. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have more advantages than traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we design a high-speed CMOS imaging system based on an FPGA. The core control chip of this system is the XC6SL75T, and a Camera Link interface and the AM41V4 CMOS image sensor are used to transmit and acquire image data. The AM41V4 is a 4-megapixel, high-speed (500 frames per second) CMOS image sensor with a global shutter and a 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The Camera Link interface adopts the DS90CR287, which can convert 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light of objects is photographed by the CMOS detectors. The CMOS sensor converts the light into electronic signals and then sends them to the FPGA. The FPGA processes the data it receives and transmits them through the Camera Link interface, configured in full mode, to a host computer equipped with acquisition cards. The PC then stores, visualizes and processes the images. The structure and principle of the system are both explained in this paper, and the hardware and software design of the system is introduced. The FPGA provides the drive clock for the CMOS sensor. The data from the CMOS sensor are converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents a row-transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.

  13. Discrete wavelet transform FPGA design using MatLab/Simulink

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Vera, A.; Meyer-Baese, A.; Pattichis, M.; Perry, R.

    2006-04-01

    Design of current DSP applications using state-of-the-art multi-million-gate devices requires a broad foundation of engineering skills, ranging from knowledge of hardware-efficient DSP algorithms to CAD design tools. The requirement of short time-to-market, however, requires replacing the traditional HDL-based designs with a MatLab/Simulink-based design flow. This not only allows the over 1 million MatLab users to design FPGAs but also allows the hardware design engineer to be bypassed, leading to a significant reduction in development time. Critical issues with this design flow, however, are: (1) quality-of-results, (2) sophistication of the Simulink block library, (3) compile time, (4) cost and availability of development boards, and (5) cost, functionality, and ease-of-use of the FPGA vendor provided design tools.

  14. Design of extensible meteorological data acquisition system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhang, Wen; Liu, Yin-hua; Zhang, Hui-jun; Li, Xiao-hui

    2015-02-01

    In order to compensate for the tropospheric refraction error generated in the process of satellite navigation and positioning, temperature, humidity and air pressure have to be used in the relevant models to calculate the value of this error. The FPGA XC6SLX16 is used as the core processor, while the integrated silicon pressure sensor MPX4115A and the digital temperature-humidity sensor SHT75 are used as the basic meteorological parameter detection devices. The core processor controls the real-time sampling of the ADC AD7608 and acquires the serial output data of the SHT75. The data are stored in the BRAM of the XC6SLX16 and used to generate standard meteorological parameters in NMEA format. The whole design is based on the Altium hardware platform and the ISE software platform. The system is described in VHDL and schematic diagrams to realize correct detection of temperature, humidity and air pressure. The 8-channel synchronous sampling capability of the AD7608 and the programmable external resources of the FPGA lay the foundation for adding further analog or digital meteorological element signals. The designed meteorological data acquisition system features low cost, high performance and easy expansion.

  15. Design of polarization imaging system based on CIS and FPGA

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Liu, Li-gang; Yang, Kun-tao; Chang, Da-ding

    2008-02-01

    Polarization is an important characteristic of light, and polarization image detection is a new imaging technology that combines polarimetry with image processing. In contrast to traditional intensity-based imaging, polarization imaging can acquire important information that traditional imaging cannot, and it will be widely used in both civilian and military fields. Because polarization imaging can solve problems that traditional imaging cannot, it has been widely researched around the world. This paper first introduces the physical theory of polarization imaging, and then describes image acquisition and polarization image processing based on a CIS (CMOS image sensor) and an FPGA. The polarization imaging system consists of a hardware part and a software part. The hardware part includes the CMOS image sensor drive module, a VGA display module, an SRAM access module and the FPGA-based real-time image data acquisition system. The circuit diagram and PCB were designed. The Stokes vector and polarization angle computation methods are analyzed in the software part. The floating-point multiplications in the Stokes vector computation are optimized into shift and addition operations only. The experimental results show that the system can collect and display image data from the CMOS image sensor in real time.
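
    The Stokes-vector arithmetic referenced above reduces to a few adds, subtracts, and one arctangent; the C sketch below computes the linear Stokes parameters and angle of polarization from intensities measured at 0°, 45°, 90° and 135°. This is the generic textbook formulation with made-up intensities, not the paper's exact fixed-point pipeline.

        #include <math.h>
        #include <stdio.h>

        /* Linear Stokes parameters from four analyzer orientations:
         *   S0 = I0 + I90,  S1 = I0 - I90,  S2 = I45 - I135
         * Degree and angle of linear polarization follow from S0..S2.
         * On an FPGA the multiplies reduce to shifts/adds as noted in the abstract.  */
        int main(void) {
            double i0 = 120.0, i45 = 100.0, i90 = 40.0, i135 = 60.0;  /* example intensities */
            double s0 = i0 + i90;
            double s1 = i0 - i90;
            double s2 = i45 - i135;
            double dolp = sqrt(s1 * s1 + s2 * s2) / s0;   /* degree of linear polarization */
            double aop  = 0.5 * atan2(s2, s1);            /* angle of polarization (rad)   */
            printf("S0=%.1f S1=%.1f S2=%.1f  DoLP=%.3f  AoP=%.3f rad\n", s0, s1, s2, dolp, aop);
            return 0;
        }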

  16. Hardware design to accelerate PNG encoder for binary mask compression on FPGA

    NASA Astrophysics Data System (ADS)

    Kachouri, Rostom; Akil, Mohamed

    2015-02-01

    PNG (Portable Network Graphics) is a lossless compression method for real-world pictures. Since its specification, it continues to attract the interest of the image processing community. Indeed, PNG is an extensible file format for portable and well-compressed storage of raster images. In addition, it supports black-and-white (binary mask), grayscale, indexed-color, and truecolor images. Within the framework of the Demat+ project, which intends to provide a complete solution for the storage and retrieval of scanned documents, we address in this paper a hardware design to accelerate the PNG encoder for binary mask compression on an FPGA. For this, an optimized architecture is proposed as part of a hybrid software/hardware co-operating system. For its evaluation, the newly designed PNG IP has been implemented on the ALTERA Arria II GX EP2AGX125EF35 FPGA. The experimental results show a good match between the achieved compression ratio, the computational cost and the hardware resources used.

  17. A Design of Low Frequency Time-Code Receiver Based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Li, Guo-Dong; Xu, Lin-Sheng

    2006-06-01

    The hardware of a low-frequency time-code receiver, which was designed with an FPGA (field programmable gate array) and a DSP (digital signal processor), is introduced. The method of realizing time synchronization for the receiver system is described. The software developed for the DSP and FPGA is expounded, and the results of tests and simulations are presented. The design is characterized by high accuracy, good reliability, fair extensibility, etc.

  18. A co-design method for parallel image processing accelerator based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Weng, Kaijian; Cheng, Zhao; Yan, Luxin; Guan, Jing

    2011-11-01

    In this paper, we present a co-design method for a parallel image processing accelerator based on DSP and FPGA. The DSP is used as the application and operation subsystem to execute complex operations, in which algorithms are resolved into commands. The FPGA is used as a co-processing subsystem for regular data-parallel processing; operation commands and image data are transmitted to the FPGA for processing acceleration. A series of experiments has been carried out, and up to half or three quarters of the processing time is saved, which shows that the proposed accelerator consumes less time and achieves better performance than traditional systems.

  19. Single Event Analysis and Fault Injection Techniques Targeting Complex Designs Implemented in Xilinx-Virtex Family Field Programmable Gate Array (FPGA) Devices

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth; Kim, Kim

    2014-01-01

    An informative session regarding SRAM FPGA basics. It presents a framework for fault injection techniques applied to Xilinx Field Programmable Gate Arrays (FPGAs), introduces an overlooked time component that illustrates why fault injection is impractical for most real designs as a stand-alone characterization tool, and demonstrates procedures that benefit from fault injection error analysis.

  20. FPGA-Based Efficient Hardware/Software Co-Design for Industrial Systems with Consideration of Output Selection

    NASA Astrophysics Data System (ADS)

    Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.

    2016-05-01

    This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) that is used to validate a systematic sensor selection framework. The systematic sensor selection framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates a significant speed-up in the required execution time when compared to its software-based counterpart model.

  1. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration plays a crucial role in several important application domains. As the algorithms become more complex and their computational requirements increase, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on a coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascading and special FFT/IFFT processing modules are adopted to guarantee processing accuracy. Finally, comparative experiments are carried out. The system can process a blurred image of 128×128 pixels within 32 milliseconds and is up to three or four times faster than traditional multi-DSP systems.
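
    For reference, one Richardson-Lucy iteration with a known PSF (the blind variant alternates an analogous update for the PSF itself) is shown below as a 1-D C sketch. The signal length, PSF, and iteration count are illustrative, and the FFT-based convolutions used in the CoFD system are replaced by direct convolution for brevity.

        #include <stdio.h>

        #define N  16
        #define KW 3    /* PSF width (odd) */

        /* Direct 1-D convolution with clamped borders. */
        static void conv1d(const double *x, const double *k, double *y, int n, int kw) {
            int r = kw / 2;
            for (int i = 0; i < n; ++i) {
                double acc = 0.0;
                for (int j = 0; j < kw; ++j) {
                    int idx = i + j - r;
                    if (idx < 0) idx = 0;
                    if (idx >= n) idx = n - 1;
                    acc += x[idx] * k[j];
                }
                y[i] = acc;
            }
        }

        int main(void) {
            double psf[KW]   = {0.25, 0.5, 0.25};
            double psf_r[KW] = {0.25, 0.5, 0.25};   /* mirrored PSF (symmetric here) */
            double obs[N], est[N], tmp[N], ratio[N], corr[N];
            for (int i = 0; i < N; ++i) { obs[i] = (i == 8) ? 1.0 : 0.01; est[i] = 0.5; }

            for (int it = 0; it < 20; ++it) {       /* R-L: est *= conv(obs / conv(est, psf), psf_r) */
                conv1d(est, psf, tmp, N, KW);
                for (int i = 0; i < N; ++i) ratio[i] = obs[i] / (tmp[i] + 1e-12);
                conv1d(ratio, psf_r, corr, N, KW);
                for (int i = 0; i < N; ++i) est[i] *= corr[i];
            }
            for (int i = 0; i < N; ++i) printf("%.3f ", est[i]);
            printf("\n");
            return 0;
        }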

  2. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables the inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by the video processing unit with a Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. The current experiment shows that it can achieve high-quality video conversion with minimum board size.

  3. An FPGA hardware/software co-design towards evolvable spiking neural networks for robotics application.

    PubMed

    Johnston, S P; Prasad, G; Maguire, L; McGinnity, T M

    2010-12-01

    This paper presents an approach that permits the effective hardware realization of a novel Evolvable Spiking Neural Network (ESNN) paradigm on Field Programmable Gate Arrays (FPGAs). The ESNN possesses a hybrid learning algorithm that consists of a Spike Timing Dependent Plasticity (STDP) mechanism fused with a Genetic Algorithm (GA). The design and implementation direction utilizes the latest advancements in FPGA technology to provide a partitioned hardware/software co-design solution. The approach achieves the maximum FPGA flexibility obtainable for the ESNN paradigm. The algorithm was applied as an embedded intelligent system robotic controller to solve an autonomous navigation and obstacle avoidance problem. PMID:21117269
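
    The STDP component of the hybrid learning rule is typically a pair-based exponential window; the C sketch below shows that generic textbook form. The time constants and amplitudes are illustrative assumptions, and the ESNN's exact rule and its GA-evolved parameters are not reproduced here.

        #include <math.h>
        #include <stdio.h>

        /* Pair-based STDP: potentiate when the presynaptic spike precedes the
         * postsynaptic spike, depress otherwise, with exponential time windows.  */
        static double stdp_dw(double t_pre, double t_post) {
            const double a_plus = 0.01, a_minus = 0.012;    /* learning amplitudes  */
            const double tau_plus = 20.0, tau_minus = 20.0; /* ms, decay constants  */
            double dt = t_post - t_pre;                     /* ms                   */
            if (dt >= 0.0) return  a_plus  * exp(-dt / tau_plus);
            else           return -a_minus * exp( dt / tau_minus);
        }

        int main(void) {
            printf("pre->post (+5 ms): dw = %+f\n", stdp_dw(0.0, 5.0));
            printf("post->pre (-5 ms): dw = %+f\n", stdp_dw(0.0, -5.0));
            return 0;
        }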

  4. The Reliability of FPGA circuit designs in the presence of radiation induced configuration upsets

    SciTech Connect

    Wirthlin, M. J.; Johnson, E.; Rollins, N.; Caffrey, M. P.; Graham, P. S.

    2003-01-01

    FPGAs are an appealing solution for space-based remote sensing applications. However, in a low-earth orbit, FPGAs are susceptible to Single-Event Upsets (SEUs). In an effort to understand the effects of SEUs, an SEU simulator based on the SLAAC-1V computing board has been developed. This simulator artificially upsets the configuration memory of an FPGA and measures its impact on FPGA designs. The accuracy of this simulation environment has been verified using ground-based radiation testing. This simulation tool is being used to characterize the reliability of SEU mitigation techniques for FPGAs.

  5. Design and implementation of an FPGA-based timing pulse programmer for pulsed-electron paramagnetic resonance applications

    PubMed Central

    Sun, Li; Savory, Joshua J.; Warncke, Kurt

    2014-01-01

    The design, construction and implementation of a field-programmable gate array (FPGA)-based pulse programmer for pulsed-electron paramagnetic resonance (EPR) experiments is described. The FPGA pulse programmer offers advantages in design flexibility and cost over previous pulse programmers, which are based on commercial digital delay generators, logic pattern generators, and application-specific integrated circuit (ASIC) designs. The FPGA pulse programmer features a novel transition-based algorithm and command protocol that is optimized for the timing structure required for most pulsed magnetic resonance experiments. The algorithm was implemented by using a Spartan-6 FPGA (Xilinx), which provides an easily accessible and cost-effective solution for FPGA interfacing. An auxiliary board was designed for the FPGA-instrument interface, which buffers the FPGA outputs for increased power consumption and capacitive load requirements. Device specifications include: nanosecond pulse formation (transition edge rise/fall times, ≤3 ns), low jitter (≤150 ps), a large number of channels (16 implemented; 48 available), and long pulse duration (no limit). The hardware and software for the device were designed for facile reconfiguration to match user experimental requirements and constraints. Operation of the device is demonstrated and benchmarked by applications to 1-D electron spin echo envelope modulation (ESEEM) and 2-D hyperfine sublevel correlation (HYSCORE) experiments. The FPGA approach is transferable to applications in nuclear magnetic resonance (NMR; magnetic resonance imaging, MRI), and to pulse perturbation and detection bandwidths in spectroscopies up through the optical range. PMID:25076864

  6. Design and implementation of an FPGA-based timing pulse programmer for pulsed-electron paramagnetic resonance applications.

    PubMed

    Sun, Li; Savory, Joshua J; Warncke, Kurt

    2013-08-01

    The design, construction and implementation of a field-programmable gate array (FPGA)-based pulse programmer for pulsed-electron paramagnetic resonance (EPR) experiments is described. The FPGA pulse programmer offers advantages in design flexibility and cost over previous pulse programmers, which are based on commercial digital delay generators, logic pattern generators, and application-specific integrated circuit (ASIC) designs. The FPGA pulse programmer features a novel transition-based algorithm and command protocol that is optimized for the timing structure required for most pulsed magnetic resonance experiments. The algorithm was implemented by using a Spartan-6 FPGA (Xilinx), which provides an easily accessible and cost-effective solution for FPGA interfacing. An auxiliary board was designed for the FPGA-instrument interface, which buffers the FPGA outputs for increased power consumption and capacitive load requirements. Device specifications include: nanosecond pulse formation (transition edge rise/fall times, ≤3 ns), low jitter (≤150 ps), a large number of channels (16 implemented; 48 available), and long pulse duration (no limit). The hardware and software for the device were designed for facile reconfiguration to match user experimental requirements and constraints. Operation of the device is demonstrated and benchmarked by applications to 1-D electron spin echo envelope modulation (ESEEM) and 2-D hyperfine sublevel correlation (HYSCORE) experiments. The FPGA approach is transferable to applications in nuclear magnetic resonance (NMR; magnetic resonance imaging, MRI), and to pulse perturbation and detection bandwidths in spectroscopies up through the optical range. PMID:25076864

  7. DESIGN AND ANALYSIS OF AN FPGA-BASED ACTIVE FEEDBACK DAMPING SYSTEM

    SciTech Connect

    Xie, Zaipeng; Schulte, Mike; Deibele, Craig Edmond

    2010-01-01

    The Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory is a high-intensity proton-based accelerator that produces neutron beams for neutron-scattering research. As the most powerful pulsed neutron source in the world, the SNS accelerator has experienced an unprecedented beam instability that has a wide bandwidth (0 to 300 MHz) and a fast growth time (10 to 100 s). In this paper, we propose and analyze several FPGA-based designs for an active feedback damping system. This signal processing system is the first FPGA-based design for active feedback damping of wideband instabilities in high-intensity accelerators. It can effectively mitigate instabilities in high-intensity proton beams, reduce radiation, and boost the accelerator's luminosity performance. Unlike existing systems, which are designed using analog components, our FPGA-based active feedback damping system offers programmability while maintaining high performance. To meet the system throughput and latency requirements, our proposed designs are guided by detailed analysis of resource and performance tradeoffs. These designs are mapped onto a reconfigurable platform that includes Xilinx Virtex-II Pro FPGAs and high-speed analog-to-digital and digital-to-analog converters. Our results show that our FPGA-based active feedback damping system can provide increased flexibility and improved signal processing performance that are not feasible with existing analog systems.

  8. A Test Methodology for Determining Space-Readiness of Xilinx SRAM-Based FPGA Designs

    SciTech Connect

    Quinn, Heather M; Graham, Paul S; Morgan, Keith S; Caffrey, Michael P

    2008-01-01

    Using reconfigurable, static random-access memory (SRAM) based field-programmable gate arrays (FPGAs) for space-based computation has been an exciting area of research for the past decade. Since both the circuit and the circuit's state are stored in memory that is susceptible to radiation effects, both could be altered by the harsh space radiation environment. Both the circuit and the circuit's state can be protected by triple-modular redundancy (TMR), but applying TMR to FPGA user designs is often an error-prone process. Faulty application of TMR could cause the FPGA user circuit to output incorrect data. This paper describes a three-tiered methodology for testing FPGA user designs for space-readiness. We describe the standard approach to testing FPGA user designs using a particle accelerator, as well as two methods using fault injection and a modeling tool. While accelerator testing is the current 'gold standard' for pre-launch testing, we believe the use of fault injection and modeling tools allows for easy, cheap and uniform access for discovering errors early in the design process.

  9. Evaluation of a segmentation algorithm designed for an FPGA implementation

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Schönermark, Maria; Huber, Felix

    2013-10-01

    The present work has to be seen in the context of real-time on-board image evaluation of optical satellite data. With on-board image evaluation, more useful data can be acquired, the time to obtain requested information can be decreased and new real-time applications become possible. Because of its relatively high processing power in comparison to its low power consumption, Field Programmable Gate Array (FPGA) technology has been chosen as an adequate hardware platform for image processing tasks. One fundamental part of image evaluation is image segmentation. It is a basic tool to extract spatial image information, which is very important for many applications such as object detection. Therefore a special segmentation algorithm using the advantages of FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation evaluation is a difficult task. The most common way of evaluating the performance of a segmentation method is still subjective evaluation, in which human experts determine the quality of a segmentation. This approach is not in compliance with our needs. The evaluation process has to provide a reasonable quality assessment, should be objective, easy to interpret and simple to execute. To meet these requirements a so-called Segmentation Accuracy Equality norm (SA EQ) was created, which compares the difference of two segmentation results. It can be shown that this norm is suitable as a first quality measure. Due to its objectivity and simplicity the algorithm has been tested on a specially chosen synthetic test model. In this work the most important results of the quality assessment will be presented.

  10. Optimization of high speed pipelining in FPGA-based FIR filter design using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Botella, Guillermo; Romero, David E. T.; Kumm, Martin

    2012-06-01

    This paper compares FPGA-based fully pipelined multiplierless FIR filter design options. A comparison of Distributed Arithmetic (DA), Common Sub-Expression (CSE) sharing and n-dimensional Reduced Adder Graph (RAG-n) multiplierless filter design methods in terms of size, speed, and A*T product is provided. Since DA designs are table-based and CSE/RAG-n designs are adder-based, FPGA synthesis design data are used for a realistic comparison. Superior results of a genetic-algorithm-based optimization of pipeline registers and non-output fundamental coefficients are shown. FIR filters (posted as open source by Kastner et al.) with lengths from 6 to 151 coefficients are used.
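
    The multiplierless idea behind DA, CSE and RAG-n can be seen in one line: a constant coefficient is decomposed into a few shifts and adds. The C sketch below multiplies a sample by 93 = 64 + 32 - 4 + 1 with no multiplier; this toy decomposition is chosen for illustration and is not one produced by the paper's genetic optimizer.

        #include <stdint.h>
        #include <stdio.h>

        /* Multiplierless constant multiplication: y = 93*x built from shifts and adds.
         * 93 = 64 + 32 - 4 + 1, so y = (x<<6) + (x<<5) - (x<<2) + x.                   */
        static int32_t mul93(int32_t x) {
            return (x << 6) + (x << 5) - (x << 2) + x;
        }

        int main(void) {
            for (int32_t x = 0; x <= 5; ++x)       /* non-negative demo values */
                printf("93 * %d = %4d (shift-add: %4d)\n", x, 93 * x, mul93(x));
            return 0;
        }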

  11. Design and simulation of an FPGA-based printed wiring assembly

    SciTech Connect

    Eilers, D.L.

    1993-12-31

    Past generations of electronic products have been constructed using relatively few (often just one) field programmable gate arrays (FPGAs) or Application Specific Integrated Circuits (ASICs) surrounded by a collection of medium- to large-scale integration parts. Today, the new generations of electronic products are becoming increasingly complex. The specification, design, and simulation of this new generation of FPGA- and ASIC-based products place additional demands on computer-aided engineering (CAE) systems. FPGA and ASIC devices offer both high pin count and high internal logic density. Both of these features serve to increase the density and functionality of the products in which they are used; however, these features also detract from the ability to debug the final hardware with conventional techniques. Fine-pitch parts with high pin counts present a great challenge to probing. The simulations done on individual designs address many of these concerns; however, when FPGAs and/or ASICs make up a significant portion of the electronics assembly or when the interfaces between them are complicated, product-level simulation becomes very important. This paper will describe the electronic product realization process that has evolved in Department 2335 at Sandia National Laboratories. Department 2335 is a hardware development group which works to support various system development departments. The customers for these electronics products are a group of system design and integration engineers who architect and implement the final system. The following phases of the design process are described in terms of an FPGA-based product design; however, they are generally applicable to all types of electronic designs. This paper contains the bulk of the details of the design process which was utilized to develop the latest generation of electronic products.

  12. Integration design of FPGA software for a miniaturizing CCD remote sensing camera

    NASA Astrophysics Data System (ADS)

    Yin, Na; Li, Qiang; Rong, Peng; Lei, Ning; Wan, Min

    2014-09-01

    The video signal processor (VSP) is an important part of CCD remote sensing cameras, and is also the key to lightweight, miniaturized camera design. FPGAs need to be applied to improve the level of integration and simplify the video signal processor circuit. This paper introduces in detail an integrated design of FPGA software for the video signal processor in a certain space remote sensing camera. The design integrates CCD timing control, integration time control, CCD data formatting, and CCD image processing and correction on a single FPGA chip, which solves the miniaturization problem of the video signal processor in remote sensing cameras. This camera has already been launched successfully and has obtained high-quality remote sensing images, contributing to the miniaturization of remote sensing cameras.

  13. Effectiveness of Internal vs. External SEU Scrubbing Mitigation Strategies in a Xilinx FPGA: Design, Test, and Analysis

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Poivey C.; Petrick, D.; Espinosa, D.; Lesea, Austin; LaBel, K. A.; Friendlich, M; Kim, H; Phan, A.

    2008-01-01

    We compare two scrubbing mitigation schemes for Xilinx FPGA devices. The design of the scrubbers is briefly discussed along with an examination of mitigation limitations. Proton and Heavy Ion data are then presented and analyzed.

  14. General Structure Design for Fast Image Processing Algorithms Based upon FPGA DSP Slice

    NASA Astrophysics Data System (ADS)

    Wasfy, Wael; Zheng, Hong

    Our target in this paper is to increase the speed and accuracy of fast image processing algorithms when computing image intensity for low-level 3x3 algorithms that use different kernels but share the same parallel calculation method. The FPGA is one of the fastest embedded platforms for implementing fast image processing algorithms; by using the DSP slice modules inside the FPGA we aim to exploit the DSP slice's advantages of speed, accuracy, a higher number of bits in calculations, and flexibility in the equations that can be computed. Using a higher number of bits during algorithm calculations leads to higher accuracy than performing the same calculations with fewer bits, and keeping FPGA resource usage as low as the algorithm's needs allow is also an important goal. In the recommended design we therefore use as few DSP slices as possible and benefit from the higher calculation accuracy of the DSP slice, which offers 48-bit accuracy in addition and 18 x 18-bit accuracy in multiplication. To prove the design, the Gaussian filter and the Sobel-x edge detector image processing algorithms were chosen for implementation. We also compare against another design, described later in this paper, to demonstrate the improvements in calculation accuracy and speed; that design uses at most 12-bit accuracy in its addition and multiplication calculations.
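
    As a point of reference for the 3x3 kernels discussed above, the C sketch below applies the Sobel-x operator to a small grayscale image with integer arithmetic only, which is the multiply-accumulate pattern a DSP-slice implementation would absorb. The image size, ramp content, and border handling are illustrative.

        #include <stdio.h>
        #include <stdlib.h>

        #define W 6
        #define H 5

        /* 3x3 Sobel-x kernel: horizontal gradient, integer MAC per pixel. */
        static const int kx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };

        int main(void) {
            int img[H][W], out[H][W] = {{0}};
            for (int y = 0; y < H; ++y)            /* synthetic horizontal ramp image */
                for (int x = 0; x < W; ++x)
                    img[y][x] = 10 * x;

            for (int y = 1; y < H - 1; ++y) {      /* skip the 1-pixel border */
                for (int x = 1; x < W - 1; ++x) {
                    int acc = 0;
                    for (int j = -1; j <= 1; ++j)
                        for (int i = -1; i <= 1; ++i)
                            acc += kx[j + 1][i + 1] * img[y + j][x + i];
                    out[y][x] = abs(acc);
                }
            }
            for (int y = 0; y < H; ++y) {
                for (int x = 0; x < W; ++x) printf("%4d", out[y][x]);
                printf("\n");
            }
            return 0;
        }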

  15. [Design of an FPGA-based image guided surgery hardware platform].

    PubMed

    Zou, Fa-Dong; Qin, Bin-Jie

    2008-07-01

    An FPGA-based image guided surgery hardware platform has been designed and implemented in this paper. The hardware platform can provide hardware acceleration for image guided surgery. It includes a video decoder interface, a DDR memory controller, an I2C bus controller, an interrupt controller and so on. It is able to perform real-time capture of endoscopic video images during surgery and reserves a hardware interface for the image guided surgery algorithm module. PMID:18973036

  16. FPGA-based design of FFT processor and optimization of window-adding

    NASA Astrophysics Data System (ADS)

    Kai, Pan; Song, Jie; Zhong, Qing

    2015-12-01

    A method of implementing the FFT based on an FPGA IP core is introduced in this paper. In addition, to address the spectrum leakage caused by truncation of non-integer-period sampling, an improved method of applying a window to the input signal to restrain the leakage is proposed. The design was simulated in the Matlab environment. The results show that the proposed method has good performance with some improvement.
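
    Window-adding before the FFT is a one-line operation per sample; the C sketch below applies a Hann window to a frame, the standard way to suppress the leakage caused by non-integer-period truncation. The specific window the authors optimize is not stated here, so Hann is used purely as an example.

        #include <math.h>
        #include <stdio.h>

        #define N 64

        int main(void) {
            const double PI = 3.14159265358979323846;
            double x[N], xw[N];
            for (int n = 0; n < N; ++n)                        /* tone that does not fit an   */
                x[n] = sin(2.0 * PI * 3.3 * n / N);            /* integer number of periods   */

            /* Hann window: w[n] = 0.5 * (1 - cos(2*pi*n/(N-1))).  Applying it before the
             * FFT tapers the frame ends and suppresses spectral leakage.                 */
            for (int n = 0; n < N; ++n) {
                double w = 0.5 * (1.0 - cos(2.0 * PI * n / (N - 1)));
                xw[n] = w * x[n];
            }
            printf("x[0]=%.3f  xw[0]=%.3f   x[N/2]=%.3f  xw[N/2]=%.3f\n",
                   x[0], xw[0], x[N / 2], xw[N / 2]);
            return 0;
        }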

  17. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task typically consisting of several design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from an instruction-set description but also from an architecture description including pipelining behavior, which allows design and development tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind typically used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we have used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operations was the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement that a TVP offers compared with traditional RISC or PDSP designs.

  18. Design of miniature hybrid target recognition system with combination of FPGA+DSP

    NASA Astrophysics Data System (ADS)

    Luo, Shishang; Li, Xiujian; Jia, Hui; Hu, Wenhua; Nie, Yongming; Chang, Shengli

    2010-10-01

    With the advantages of flexibility, high bandwidth, high spatial resolution and high-speed parallel operation, opto-electronic hybrid target recognition systems can be applied in many civil and military areas, such as video surveillance, intelligent navigation and robot vision. A miniature opto-electronic hybrid target recognition system based on FPGA+DSP is designed, which employs only a single Fourier lens. With precise timing control by the FPGA and image pre-processing by the DSP, the system performs both the Fourier transform and the inverse Fourier transform in an all-optical process, which improves recognition speed and reduces the system volume remarkably. We analyze the system performance, and a method to achieve scale-invariant pattern recognition is proposed on the basis of extensive experiments.

  19. Fault Tolerance Implementation within SRAM Based FPGA Designs based upon Single Event Upset Occurrence Rates

    NASA Technical Reports Server (NTRS)

    Berg, Melanie

    2006-01-01

    Emerging technology is enabling the design community to consistently expand the amount of functionality that can be implemented within Integrated Circuits (ICs). As the number of gates placed within an FPGA increases, the complexity of the design can grow exponentially. Consequently, creating reliable circuits has become an incredibly difficult task. In order to ease the complexity of design completion, the commercial design community has developed a very rigid (but effective) design methodology based on synchronous circuit techniques. In order to create faster, smaller and lower-power circuits, transistor geometries and core voltages have decreased. In environments that contain ionizing energy, such a combination will increase the probability of Single Event Upsets (SEUs) and will consequently affect the state space of a circuit. In order to combat the effects of radiation, the aerospace community has developed several "Hardened by Design" (fault tolerant) design schemes. This paper will address design mitigation schemes targeted for SRAM-based FPGA CMOS devices. Because some mitigation schemes may be overzealous (too much power, area, complexity, etc.), the designer should be aware that system requirements can reduce the amount of mitigation necessary for acceptable operation. Therefore, various degrees of Fault Tolerance will be demonstrated along with an analysis of their effectiveness.

  20. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems; therefore, researching a real-time image acquisition and display scheme based on a CCD device is of great significance. This paper introduces an FPGA-based image data acquisition and display system for an area array CCD. Several key technical challenges and problems of the system are analyzed and corresponding solutions are put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation has been used in the system. The FPGA generates the drive signals for the area array CCD; an analog front end (AFE) then processes the CCD image signal, including amplification, filtering, noise elimination, correlated double sampling (CDS), etc. An AD9945 from ADI Corporation is used to convert the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was completed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its performance indicators meet the actual project requirements.

  1. Asynchronous FPGA risks

    NASA Technical Reports Server (NTRS)

    Erickson, K.

    2000-01-01

    Worst-case timing analysis of a synchronous design implemented with a field-programmable gate array (FPGA) is easy to perform using available FPGA design tools. However, it may be difficult or impossible to verify that worst-case timing requirements are met for a complex asynchronous logic design.

  2. FPGA-based Upgrade to RITS-6 Control System, Designed with EMP Considerations

    SciTech Connect

    Harold D. Anderson, John T. Williams

    2009-07-01

    -of-nanoseconds delay to propagate across the FPGA. This paper discusses the design, installation, and testing of the proposed system upgrade, including failure statistics and modifications to the original design.

  3. Fpga based L-band pulse doppler radar design and implementation

    NASA Astrophysics Data System (ADS)

    Savci, Kubilay

    As its name implies, RADAR (Radio Detection and Ranging) is an electromagnetic sensor used for detecting and locating targets from their return signals. Radar systems propagate electromagnetic energy from the antenna, which is in part intercepted by an object. Objects reradiate a portion of the energy, which is captured by the radar receiver. The received signal is then processed for information extraction. Radar systems are widely used for surveillance, air security, navigation, weather hazard detection, as well as remote sensing applications. In this work, an FPGA-based L-band pulse Doppler radar prototype, which is used for target detection, localization and velocity calculation, has been built and a general-purpose pulse Doppler radar processor has been developed. This radar is a ground-based stationary monopulse radar, which transmits a short pulse with a certain pulse repetition frequency (PRF). Return signals from the target are processed and information about their location and velocity is extracted. Discrete components are used for the transmitter and receiver chain. The hardware solution is based on the Xilinx Virtex-6 ML605 FPGA board, responsible for the control of the radar system and the digital signal processing of the received signal, which involves Constant False Alarm Rate (CFAR) detection and pulse Doppler processing. The algorithm is implemented in MATLAB/SIMULINK using the Xilinx System Generator for DSP tool. The field programmable gate array (FPGA) implementation of the radar system provides the flexibility of changing parameters such as the PRF and pulse length; therefore it can be used with different radar configurations as well. A VHDL design has been developed for a 1 Gbit Ethernet connection to transfer the digitized return signal and detection results to a PC. A-Scope software has been developed in the C# programming language to display time-domain radar signals and detection results on the PC. Data are processed both in the FPGA chip and on the PC. FPGA uses fixed
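
    The Constant False Alarm Rate detection mentioned above slides a window over the range profile and compares each cell to a noise estimate taken from its neighbours; the C sketch below implements a basic cell-averaging CFAR. The window sizes, threshold factor, and test data are illustrative and are not taken from this design.

        #include <stdio.h>

        #define N       64
        #define TRAIN   8      /* training cells per side  */
        #define GUARD   2      /* guard cells per side     */
        #define PFA_K   4.0    /* threshold multiplier     */

        /* Cell-averaging CFAR: average the training cells around the cell under test
         * (excluding guard cells) and declare a detection if CUT > k * average.      */
        int main(void) {
            double p[N];
            for (int i = 0; i < N; ++i) p[i] = 1.0;    /* flat noise floor        */
            p[30] = 20.0;                              /* injected target return  */

            for (int cut = TRAIN + GUARD; cut < N - TRAIN - GUARD; ++cut) {
                double sum = 0.0;
                int cnt = 0;
                for (int k = -TRAIN - GUARD; k <= TRAIN + GUARD; ++k) {
                    if (k >= -GUARD && k <= GUARD) continue;   /* skip guard cells and CUT */
                    sum += p[cut + k];
                    ++cnt;
                }
                double threshold = PFA_K * (sum / cnt);
                if (p[cut] > threshold)
                    printf("detection at range cell %d (power %.1f > threshold %.1f)\n",
                           cut, p[cut], threshold);
            }
            return 0;
        }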

  4. FPGA-based data processing module design of on-board radiometric calibration in visible/near infrared bands

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Li, Chenyang; Yue, Tao; Liu, Na; Jiang, Linjun; Sun, Yue; Li, Mingyan

    2015-12-01

    FPGA technology has long been applied to on-board radiometric calibration data processing; however, the integration of the FPGA programs is not good enough. For example, some sensors compress remote sensing images and transfer them to the ground station to calculate the calibration coefficients, which affects the timeliness of on-board radiometric calibration. This paper designs an integrated flow chart of on-board radiometric calibration. The FPGA-based radiometric calibration data processing modules are built using System Generator. This paper focuses on analyzing the calculation accuracy of the FPGA-based two-point method and verifies the feasibility of this method. Calibration data were acquired by a hardware platform built using an integrating sphere, a CMOS camera (Canon 60D), ASD spectrometers and a light filter (center wavelength: 690 nm, bandwidth: 45 nm). The platform can simulate single-band on-board radiometric calibration data acquisition in the visible/near infrared band. The calibration coefficients were calculated using the acquired data and the FPGA modules. Experimental results show that the camera linearity is above 99%, meeting the experimental requirement. Compared with MATLAB, the calculation accuracy of the FPGA two-point method is as follows: the error of the gain value is 0.0053%; the error of the offset value is 0.00038719%. These results meet the experimental accuracy requirement.
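
    The two-point method whose FPGA accuracy is evaluated above reduces to a gain/offset solve from a dark and a bright reference; a minimal C sketch of that arithmetic is given below. The radiance values and digital numbers are made-up example inputs, not measurements from the paper's platform.

        #include <stdio.h>

        /* Two-point radiometric calibration: given two reference radiances and the
         * sensor's digital numbers at those levels, solve L = gain * DN + offset.   */
        int main(void) {
            double L_low = 5.0,    L_high  = 80.0;    /* reference radiances (arbitrary units) */
            double dn_low = 120.0, dn_high = 3350.0;  /* measured digital numbers              */

            double gain   = (L_high - L_low) / (dn_high - dn_low);
            double offset = L_low - gain * dn_low;

            double dn = 2000.0;                       /* an arbitrary scene pixel              */
            printf("gain=%.6f  offset=%.4f  ->  L(DN=2000)=%.3f\n",
                   gain, offset, gain * dn + offset);
            return 0;
        }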

  5. Statechart-based design controllers for FPGA partial reconfiguration

    NASA Astrophysics Data System (ADS)

    Łabiak, Grzegorz; Wegrzyn, Marek; Rosado Muñoz, Alfredo

    2015-09-01

    Statechart diagrams and the UML technique can be a vital part of early conceptual modeling. At present there is not much support in hardware design methodologies for the reconfiguration features of reprogrammable devices. The authors try to bridge the gap between the imprecise UML model and the formal HDL description. The key concept in the authors' proposal is to describe the behavior of the digital controller by statechart diagrams and to map some parts of the behavior into reprogrammable logic by means of groups of states which form sequential automata. The whole process is illustrated by an example with experimental results.

  6. Application in DSP/FPGA design of Matlab/Simulink

    NASA Astrophysics Data System (ADS)

    Liu, Yong-mei; Guan, Yong; Zhang, Jie; Wu, Min-hua; Wu, Lin-wei

    2012-12-01

    As an off-line simulation tool, the modular modelling method of Matlab/Simulink has the features of high efficiency and visualization. In order to realize fast design and simulation of prototype systems, a new method of SignalWAVe/Simulink mixed modelling is presented, and a Reed-Solomon codec (encoder-decoder) model is built. The Reed-Solomon codec model is simulated in Simulink. Further, the C language program and the model's .out executable file are created by the SignalWAVe RTW Options module, which completes the hardware co-simulation. The simulation results conform to the theoretical analysis, thus proving the validity and feasibility of this method.

  7. Reliability concerns with logical constants in Xilinx FPGA designs

    SciTech Connect

    Quinn, Heather M; Graham, Paul; Morgan, Keith; Ostler, Patrick; Allen, Greg; Swift, Gary; Tseng, Chen W

    2009-01-01

    In Xilinx Field Programmable Gate Arrays logical constants, which ground unused inputs and provide constants for designs, are implemented in SEU-susceptible logic. In the past, these logical constants have been shown to cause the user circuit to output bad data and were not resettable through off-line reconfiguration. In the more recent devices, logical constants are less problematic, though mitigation should still be considered for high-reliability applications. In conclusion, we have presented a number of reliability concerns with logical constants in the Xilinx Virtex family. There are two main categories of logical constants: implicit and explicit logical constants. In all of the Virtex devices, the implicit logical constants are implemented using half latches, which in the most recent devices are several orders of magnitude smaller than configuration bit cells. Explicit logical constants are implemented exclusively using constant LUTs in the Virtex-I and Virtex-II, and use a combination of constant LUTs and architectural posts to the ground plane in the Virtex-4. We have also presented mitigation methods and options for these devices. While SEUs in implicit and some types of explicit logical constants can cause data corruption, the chance of failure from these components is now much smaller than it was in the Virtex-I device. Therefore, for many cases, mitigation might not be necessary, except under extremely high reliability situations.

  8. FPGA Coprocessor Design for an Onboard Multi-Angle Spectro-Polarimetric Imager

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.

    2010-01-01

    A multi-angle spectro-polarimetric imager (MSPI) is an advanced camera system currently under development at JPL for possible future consideration on a satellite-based Aerosol-Cloud-Environment (ACE) interaction study. The light in the optical system is subjected to a complex modulation designed to make the overall system robust against many instrumental artifacts that have plagued such measurements in the past. This scheme involves two photoelastic modulators that are beating in a carefully selected pattern against each other. In order to properly sample this modulation pattern, each of the proposed nine cameras in the system needs to read out its imager array about 1,000 times per second. The onboard processing required to compress this data involves least-squares fits (LSFs) of Bessel functions to data from every pixel in real time, thus requiring an onboard computing system with advanced data processing capabilities in excess of those commonly available for space flight. As a potential solution to meet the MSPI onboard processing requirements, an LSF algorithm was developed on the Xilinx Virtex-4 FX60 field programmable gate array (FPGA). In addition to configurable hardware capability, this FPGA includes PowerPC405 microprocessors, which together enable a combined hardware/software processing system. A laboratory demonstration was carried out based on a hardware/software co-designed processing architecture that includes hardware-based data collection and least-squares fitting (computationally intensive), and software-based transcendental function computation (algorithmically complex) on the FPGA. Initial results showed that these calculations can be handled using a combination of the Virtex-4 PowerPC core and the hardware fabric.
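
    The onboard processing involves least-squares fits of Bessel functions to each pixel's time samples; the modulation model itself is not given in the abstract, so the sketch below only shows a generic linear least-squares fit against a small Bessel-function basis using NumPy/SciPy, with the basis orders, argument scaling and synthetic data chosen purely for illustration.

      import numpy as np
      from scipy.special import jv  # Bessel function of the first kind

      def bessel_lsf(t, samples, orders=(0, 1, 2), scale=2.0 * np.pi):
          """Least-squares fit of samples(t) to a linear combination of Bessel functions.

          Builds the design matrix A[:, k] = J_{orders[k]}(scale * t) and solves
          min ||A c - samples||_2 for the coefficient vector c.
          """
          A = np.column_stack([jv(n, scale * t) for n in orders])
          coeffs, residuals, rank, _ = np.linalg.lstsq(A, samples, rcond=None)
          return coeffs

      # Illustrative pixel time series: a known Bessel mixture plus noise,
      # sampled about 1,000 times over one second as in the abstract
      t = np.linspace(0.0, 1.0, 1000)
      truth = 3.0 * jv(0, 2 * np.pi * t) - 1.5 * jv(2, 2 * np.pi * t)
      rng = np.random.default_rng(1)
      samples = truth + 0.05 * rng.standard_normal(t.size)
      print(bessel_lsf(t, samples))  # should be close to [3.0, 0.0, -1.5]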

  9. Design of a system based on DSP and FPGA for video recording and replaying

    NASA Astrophysics Data System (ADS)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper presents a video recording and replaying system with an architecture based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system achieves encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals which are displayed on a monitor during the navigation of aircraft and ships. In this architecture, the DSP is the main processor, used for the large amount of complicated calculation involved in digital signal processing. The FPGA is a coprocessor for preprocessing the video signals and implementing logic control in the system. In the hardware design of the system, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer in the system. This transfer mode avoids the bottleneck of data transfer and simplifies the circuit between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface of an Integrated Drive Electronics (IDE) hard disk, which has a high data access speed and does not rely on a computer. The main functions of the logic on the FPGA are described and screenshots of the behavioral simulation are provided in this paper. In the design of the program on the DSP, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM without intervention by the CPU, exploiting the CPU's high computing performance and saving its time. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways and means of achieving high-performance code are briefly presented. The data processing capability of the system is satisfactory and the smoothness of the replayed video is acceptable. By right of its design flexibility and reliable operation, the system based on DSP and FPGA

  10. Logic design and implementation of FPGA for a high frame rate ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Anjun; Wang, Jing; Lu, Jian-Yu

    2002-05-01

    Recently, a method has been developed for high frame rate medical imaging [Jian-yu Lu, ``2D and 3D high frame rate imaging with limited diffraction beams,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44(4), 839-856 (1997)]. To realize this method, a complicated system [multiple-channel simultaneous data acquisition, large memory in each channel for storing up to 16 seconds of data at 40 MHz and 12-bit resolution, time-gain control (TGC), Doppler imaging, harmonic imaging, as well as coded transmissions] is designed. Due to the complexity of the system, a field programmable gate array (FPGA) (Xilinx Spartan II) is used. In this presentation, the design and implementation of the FPGA for the system will be reported. This includes the synchronous dynamic random access memory (SDRAM) controller and other system controllers, time sharing for auto-refresh of SDRAMs to reduce peak power, transmission and imaging modality selections, ECG data acquisition and synchronization, a 160 MHz delay locked loop (DLL) for accurate timing, and data transfer via either a parallel port or a PCI bus for post image processing. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

  11. FPGA and DSP based an intelligent visual sensor design for laser welding seam recognition

    NASA Astrophysics Data System (ADS)

    Jiang, Fang; Jiang, Chunying; Ge, Hanlin

    2010-10-01

    A kind of intelligent visual sensor system, mainly composed of a laser lighting source and an intelligent camera based on an FPGA and a DSP, is designed using the triangle structured-light principle for laser welding seam recognition. To meet the high-precision demands, on one hand, the components of the sensor such as the lighting source, the optical lens and the optical sensor of the camera are selected and the errors caused by these parts are analyzed. Furthermore, the triangle structured-light principle models are built and a simulation is given to optimize the light path parameters. On the other hand, the methods of image processing for seam recognition using the intelligent camera based on the FPGA and DSP are explained. Experimental results prove that the sensor design is reasonable and the image recognition methods are rational and effective. A very compact structure, high precision and high real-time performance are integrated in this sensor, so it can be applied well in seam tracking and inspection systems in the laser welding and arc welding fields.
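
    The sensor exploits the triangle formed by the laser projector, the camera and the weld seam; the paper's exact light-path model is not reproduced here, so the sketch below only illustrates basic law-of-sines triangulation, with the baseline and angles chosen as assumed example values.

      import math

      def triangulate_range(baseline_m, laser_angle_rad, camera_angle_rad):
          """Distance from the camera to the laser spot via the law of sines.

          baseline_m       : separation between the laser emitter and the camera
          laser_angle_rad  : angle at the laser between the baseline and its beam
          camera_angle_rad : angle at the camera between the baseline and the line of
                             sight, derived in practice from the spot's pixel position
                             and the lens focal length
          """
          apex = math.pi - laser_angle_rad - camera_angle_rad  # angle at the target
          if apex <= 0:
              raise ValueError("rays do not intersect in front of the sensor")
          return baseline_m * math.sin(laser_angle_rad) / math.sin(apex)

      # Illustrative geometry: 80 mm baseline, 60 deg laser angle, 70 deg viewing angle
      print(triangulate_range(0.08, math.radians(60), math.radians(70)))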

  12. FPGA Verification Accelerator (FVAX)

    NASA Technical Reports Server (NTRS)

    Oh, Jane; Burke, Gary

    2008-01-01

    Is verification acceleration possible? Increasing the visibility of the internal nodes of the FPGA results in much faster debug time, and forcing internal signals directly allows a problem condition to be set up very quickly. Is this all? No; this is part of a comprehensive effort to improve the JPL FPGA design and V&V process.

  13. Design and Verification of an FPGA-based Bit Error Rate Tester

    NASA Astrophysics Data System (ADS)

    Xiang, Annie; Gong, Datao; Hou, Suen; Liu, Chonghan; Liang, Futian; Liu, Tiankuan; Su, Da-Shung; Teng, Ping-Kun; Ye, Jingbo

    Bit error rate (BER) is the principal measure of performance of a data transmission link. With the integration of high-speed transceivers inside a field programmable gate array (FPGA), BER testing can now be handled by transceiver-enabled FPGA hardware. This provides a cheaper alternative to dedicated table-top equipment and offers the flexibility of test customization and data analysis. This paper presents a BER tester implementation based on the Altera Stratix II GX and IV GT development boards. The architecture of the tester is described. Lab test results and field test data analysis are discussed. The Stratix II GX tester operates at up to 5 Gbps and the Stratix IV GT tester operates at up to 10 Gbps, both in 4 duplex channels. The tester deploys a pseudo random bit sequence (PRBS) generator and detector, a transceiver controller, and an error logger. It also includes a computer interface for data acquisition and user configuration. The tester's functionality was validated and its performance characterized in a point-to-point serial optical link setup. BER vs. optical receiver sensitivity was measured to emulate stressed link conditions. The Stratix II GX tester was also used in a proton test on a custom-designed serializer chip to record and analyse radiation-induced errors.
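
    The tester's PRBS generator and error logger are not specified beyond the abstract, so the sketch below assumes the common PRBS-7 polynomial x^7 + x^6 + 1 purely as an example and emulates the transmit/compare/count flow in software.

      def prbs7(seed=0x7F, nbits=1000):
          """Generate a PRBS-7 bit stream from a 7-bit LFSR (x^7 + x^6 + 1)."""
          state = seed & 0x7F
          out = []
          for _ in range(nbits):
              newbit = ((state >> 6) ^ (state >> 5)) & 1  # taps at stages 7 and 6
              out.append(state & 1)
              state = ((state << 1) | newbit) & 0x7F
          return out

      def count_bit_errors(tx_bits, rx_bits):
          """Compare transmitted and received streams, as a BER tester's error logger would."""
          return sum(t != r for t, r in zip(tx_bits, rx_bits))

      # Illustrative link: flip a couple of bits to emulate channel errors
      tx = prbs7()
      rx = list(tx)
      rx[10] ^= 1
      rx[500] ^= 1
      print(count_bit_errors(tx, rx), "errors in", len(tx), "bits")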

  14. Explicit Design of FPGA-Based Coprocessors for Short-Range Force Computations in Molecular Dynamics Simulations *†

    PubMed Central

    Gu, Yongfeng; VanCourt, Tom; Herbordt, Martin C.

    2008-01-01

    FPGA-based acceleration of molecular dynamics (MD) simulations has been the subject of several recent studies. The short-range force computation, which dominates the execution time, is the primary focus. Here we present a high level of FPGA-specific design, including cell lists, systematically determined interpolation and precision, handling of exclusions, and support for MD simulations of up to 256K particles. The target system consists of a standard PC with a 2004-era COTS FPGA board. There are several innovations: new microarchitectures for several major components, including the cell list processor and the off-chip memory controller; and a novel arithmetic mode. Extensive experimentation was required to optimize precision, interpolation order, interpolation mode, table sizes, and simulation quality. We obtain a substantial speed-up over a highly tuned production MD code. PMID:19412319

  15. Dynamic high-speed acquisition system design of transmission error with USB based on LabVIEW and FPGA

    NASA Astrophysics Data System (ADS)

    Zheng, Yong; Chen, Yan

    2013-10-01

    Realizing a dynamic acquisition system for real-time detection of transmission chain error is very important for improving the machining accuracy of machine tools. In this paper, a USB controller and an FPGA are used for the hardware platform design, combined with LabVIEW to design the user application; NI-VISA is used to develop the USB drivers, and ultimately the dynamic acquisition system design for transmission error is achieved.

  16. Design Activity Framework for Visualization Design.

    PubMed

    McKenna, Sean; Mazur, Dominika; Agutter, James; Meyer, Miriah

    2014-12-01

    An important aspect in visualization design is the connection between what a designer does and the decisions the designer makes. Existing design process models, however, do not explicitly link back to models for visualization design decisions. We bridge this gap by introducing the design activity framework, a process model that explicitly connects to the nested model, a well-known visualization design decision model. The framework includes four overlapping activities that characterize the design process, with each activity explicating outcomes related to the nested model. Additionally, we describe and characterize a list of exemplar methods and how they overlap among these activities. The design activity framework is the result of reflective discussions from a collaboration on a visualization redesign project, the details of which we describe to ground the framework in a real-world design process. Lastly, from this redesign project we provide several research outcomes in the domain of cybersecurity, including an extended data abstraction and rich opportunities for future visualization research. PMID:26356933

  17. Design and realization of data acquisition system of FTS based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Haiying; Li, Yue

    2014-11-01

    Earth observation is an important field of infrared remote sensing. Hyper-spectral remote sensing plays an important role in weather forecasting, environmental protection, agricultural production and geological survey. The Fourier-transform spectrometer (FTS), based on the theory of the Michelson interferometer, has successfully been used to view the Earth as a satellite-based instrument. The technology of the FTS is an important research direction. This paper states the application of the FTS, gives an analysis of interference signal sampling and acquisition, and presents a solution in which an FPGA is used to perform parallel capture of the signal. In conclusion, this design can accomplish multi-channel, high-speed interferometer signal acquisition and transmission, which is a basis for further spectrum inversion and application.
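
    The spectrum inversion mentioned at the end is outside the acquisition design itself, but for context the sketch below shows the basic idea on a synthetic, idealized interferogram: the spectrum is recovered as the Fourier transform of the sampled interferogram. All units and values are illustrative assumptions.

      import numpy as np

      # Synthetic interferogram: two monochromatic lines give two cosines in
      # optical path difference (OPD); values are illustrative only.
      n_points = 4096
      opd = np.linspace(-0.5, 0.5, n_points)          # optical path difference, cm
      wavenumbers = (800.0, 1200.0)                   # line positions, cm^-1
      interferogram = sum(np.cos(2 * np.pi * k * opd) for k in wavenumbers)

      # Spectrum inversion: magnitude of the Fourier transform of the interferogram
      spectrum = np.abs(np.fft.rfft(interferogram))
      freq_axis = np.fft.rfftfreq(n_points, d=opd[1] - opd[0])  # cycles per cm = cm^-1

      # The two strongest bins should sit near 800 and 1200 cm^-1
      peaks = freq_axis[np.argsort(spectrum)[-2:]]
      print(sorted(peaks.round(1)))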

  18. Design of belief propagation based on FPGA for the multistereo CAFADIS camera.

    PubMed

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM usage of the device increases considerably, we can maintain real-time constraints by using the extremely high-performance signal processing capability afforded by parallelism and by accessing several memories simultaneously. Quantitative results with 16-bit precision have shown that performance is very close to that of the original MATLAB implementation of the algorithm. PMID:22163404

  19. Design and implementation of low power clock gated 64-bit ALU on ultra scale FPGA

    NASA Astrophysics Data System (ADS)

    Gupta, Ashutosh; Murgai, Shruti; Gulati, Anmol; Kumar, Pradeep

    2016-03-01

    A 64-bit energy-efficient Arithmetic and Logic Unit (ALU) using a negative-latch-based clock gating technique is designed in this paper. The 64-bit ALU is designed using a multiplexer-based full adder cell. We have designed a 64-bit ALU with a gated clock, using a negative-latch-based circuit to generate the gated clock. This gated clock is used to control the multiplexer-based 64-bit ALU. The circuit has been synthesized on a Kintex FPGA through Xilinx ISE Design Suite 14.7 using 28 nm technology in Verilog HDL. The circuit has been simulated on ModelSim 10.3c. The design is verified using SystemVerilog on QuestaSim in a UVM environment. We have achieved 74.07%, 92.93% and 95.53% reduction in total clock power, 89.73%, 91.35% and 92.85% reduction in I/O power, 67.14%, 62.84% and 74.34% reduction in dynamic power and 25.47%, 29.05% and 46.13% reduction in total supply power at 20 MHz, 200 MHz and 2 GHz frequency respectively. The power has been calculated using the XPower Analyzer tool of Xilinx ISE Design Suite 14.3.

  20. Design of an MR image processing module on an FPGA chip

    NASA Astrophysics Data System (ADS)

    Li, Limin; Wyrwicz, Alice M.

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition, usually required for execution of a 2D FFT, is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
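
    The address generation unit described above avoids the explicit matrix transposition between the two 1-D FFT passes; the actual hardware addressing scheme is not given in the abstract, so the sketch below only illustrates the underlying idea in software: the row pass and the column pass read the same buffer with different index strides, so the data are never physically rearranged.

      import numpy as np

      def fft2_no_transpose(image):
          """2-D FFT as two 1-D passes over one buffer, without a physical transpose.

          The column pass simply addresses the array with a column stride instead of
          rearranging the data, which is the software analogue of generating column
          addresses rather than transposing the matrix.
          """
          data = image.astype(np.complex128).copy()
          rows, cols = data.shape
          for r in range(rows):                 # pass 1: FFT of each row
              data[r, :] = np.fft.fft(data[r, :])
          for c in range(cols):                 # pass 2: FFT of each column, read by stride
              data[:, c] = np.fft.fft(data[:, c])
          return data

      # Illustrative check against the library 2-D FFT on random "k-space" data
      rng = np.random.default_rng(2)
      kspace = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
      print(np.allclose(fft2_no_transpose(kspace), np.fft.fft2(kspace)))  # True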

  1. Design of an MR image processing module on an FPGA chip.

    PubMed

    Li, Limin; Wyrwicz, Alice M

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition, usually required for execution of a 2D FFT, is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. PMID:25909646

  2. Design of an MR image processing module on an FPGA chip

    PubMed Central

    Li, Limin; Wyrwicz, Alice M.

    2015-01-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition, usually required for execution of a 2D FFT, is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. PMID:25909646

  3. Design and realization of the real-time spectrograph controller for LAMOST based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Jianing; Wu, Liyan; Zeng, Yizhong; Dai, Songxin; Hu, Zhongwen; Zhu, Yongtian; Wang, Lei; Wu, Zhen; Chen, Yi

    2008-08-01

    A large Schmidt reflecting telescope, the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), is being built in China; it has an effective aperture of 4 meters and can observe the spectra of as many as 4000 objects simultaneously. To handle such a large number of observed objects, the dispersion part is composed of a set of 16 multipurpose fiber-fed double-beam Schmidt spectrographs, each of which has about ten movable components accommodated and manipulated in real time by a controller. An industrial Ethernet network connects these 16 spectrograph controllers. The light from stars is fed to the entrance slits of the spectrographs with optical fibers. In this paper, we mainly introduce the design and realization of our real-time controller for the spectrograph. Our design uses the System-on-a-Programmable-Chip (SOPC) technique based on a Field Programmable Gate Array (FPGA) and realizes the control of the spectrographs through a NIOS II soft-core embedded processor. We encapsulate the stepper motor controller as an intellectual property (IP) core and reuse it, greatly simplifying the design process and shortening the development time. Under the embedded operating system μC/OS-II, a multi-task control program has been written to realize the real-time control of the movable parts of the spectrographs. At present, a number of such controllers have been applied in the spectrographs of LAMOST.

  4. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph; Mortensen, Dale

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. An FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.

  5. GBT link testing and performance measurement on PCIe40 and AMC40 custom design FPGA boards

    NASA Astrophysics Data System (ADS)

    Mitra, Jubin; Khan, Shuaib A.; Barros Marin, Manoel; Cachemiche, Jean-Pierre; David, Erno; Hachon, Frédéric; Rethore, Frédéric; Kiss, Tivadar; Baron, Sophie; Kluge, Alex; Nayak, Tapan K.

    2016-03-01

    The high-energy physics experiments at CERN's Large Hadron Collider (LHC) are preparing for Run 3, which is foreseen to start in the year 2021. Data from the high-radiation environment of the detector front-end electronics are transported to the data processing units, located in low-radiation zones, through GBT (Gigabit Transceiver) links. The present work discusses the GBT link performance study carried out on custom FPGA boards, the clock calibration logic and its implementation in the new Arria 10 FPGA.

  6. Design and Implementation of High Frequency Ultrasound Pulsed-Wave Doppler Using FPGA

    PubMed Central

    Hu, Chang-hong; Zhou, Qifa; Shung, K. Kirk

    2009-01-01

    The development of a field-programmable gate array (FPGA)-based pulsed-wave Doppler processing approach in the pure digital domain is reported in this paper. After the ultrasound signals are digitized, directional Doppler frequency shifts are obtained with a digital down-converter followed by a low-pass filter. A Doppler spectrum is then calculated using the complex fast Fourier transform core inside the FPGA. In this approach, a pulsed-wave Doppler implementation core with reconfigurable and real-time processing capability is achieved. PMID:18986909
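
    The processing chain described above (digital down-conversion, low-pass filtering, then a complex FFT to form the Doppler spectrum) can be sketched in software as follows; the carrier, Doppler shift, filter and record length are illustrative assumptions and are not taken from the paper.

      import numpy as np
      from scipy.signal import firwin, lfilter

      # Illustrative parameters (not taken from the paper)
      fs = 100e6          # ADC sampling rate, Hz
      f0 = 30e6           # ultrasound transmit (carrier) frequency, Hz
      f_doppler = 2e3     # simulated Doppler shift, Hz
      n = 1 << 20         # number of samples (~10 ms of data)

      t = np.arange(n) / fs
      rf = np.cos(2 * np.pi * (f0 + f_doppler) * t)   # digitized echo shifted by the Doppler frequency

      # Digital down-converter: mix with a complex local oscillator at the carrier ...
      baseband = rf * np.exp(-2j * np.pi * f0 * t)
      # ... then low-pass filter so only the Doppler-shifted baseband component remains
      lp = firwin(numtaps=101, cutoff=50e3, fs=fs)
      baseband = lfilter(lp, 1.0, baseband)

      # Complex FFT gives a directional Doppler spectrum (the sign of the shift is preserved)
      spectrum = np.fft.fftshift(np.fft.fft(baseband))
      freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))
      print(f"peak near {freqs[np.argmax(np.abs(spectrum))]:.0f} Hz")  # ~ +2000 Hz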

  7. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    NASA Astrophysics Data System (ADS)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high-level design of high-performance signal processing algorithm implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real-time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high-performance structural hardware description languages with higher-level hardware languages in order to help satisfy the dual requirement of high-level design and high-performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control-intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications on the design of future generations of hardware description languages.

  8. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  9. Hardware and Software Design of FPGA-based PCIe Gen3 interface for APEnet+ network interconnect system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2015-12-01

    In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.

  10. ELPSA as a Lesson Design Framework

    ERIC Educational Resources Information Center

    Lowrie, Tom; Patahuddin, Sitti Maesuri

    2015-01-01

    This paper offers a framework for a mathematics lesson design that is consistent with the way we learn about, and discover, most things in life. In addition, the framework provides a structure for identifying how mathematical concepts and understanding are acquired and developed. This framework is called ELPSA and represents five learning…

  11. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.

  12. Architectural design for a low cost FPGA-based traffic signal detection system in vehicles

    NASA Astrophysics Data System (ADS)

    López, Ignacio; Salvador, Rubén; Alarcón, Jaime; Moreno, Félix

    2007-05-01

    In this paper we propose an architecture for an embedded traffic signal detection system. Development of Advanced Driver Assistance Systems (ADAS) is one of the major trends of research in the automotive field nowadays. Examples of past and ongoing projects in the field are CHAMELEON ("Pre-Crash Application all around the vehicle", IST 1999-10108), PREVENT (Preventive and Active Safety Applications, FP6-507075, http://www.prevent-ip.org/) and AVRT in the US (Advanced Vision-Radar Threat Detection (AVRT): A Pre-Crash Detection and Active Safety System). A major interest can be observed in systems for real-time analysis of complex driving scenarios, evaluating risk and anticipating collisions. The system will use a low-cost CCD camera on the dashboard facing the road. The images will be processed by an Altera Cyclone family FPGA. The board performs median and Sobel filtering of the incoming frames at PAL rate, and analyzes them for several categories of signals. The result is conveyed to the driver. The scarce resources provided by the hardware require an architecture developed for optimal use. The system will use a combination of neural networks and an adapted blackboard architecture. Several neural networks will be used in sequence for image analysis, by reconfiguring a single, generic hardware neural network in the FPGA. This generic network is optimized for speed, in order to allow several executions within one frame period. The sequence will follow the execution cycle of the blackboard architecture. The global blackboard architecture being developed and the hardware architecture for the generic, reconfigurable FPGA perceptron are explained in this paper. The project is still at an early stage; however, some hardware implementation results are already available and are offered in the paper.
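
    The median and Sobel pre-processing mentioned above can be reproduced in software with SciPy as shown below; the frame is a random stand-in for the dashboard camera input and the filter sizes are illustrative, not values from the paper.

      import numpy as np
      from scipy import ndimage

      def preprocess_frame(frame):
          """Median filter to suppress noise, then Sobel gradient magnitude for edges."""
          denoised = ndimage.median_filter(frame, size=3)
          gx = ndimage.sobel(denoised, axis=1)
          gy = ndimage.sobel(denoised, axis=0)
          return np.hypot(gx, gy)   # edge-strength image fed to the later detection stages

      # Illustrative stand-in for a PAL-sized frame from the dashboard camera
      rng = np.random.default_rng(3)
      frame = rng.integers(0, 256, size=(576, 720)).astype(float)
      edges = preprocess_frame(frame)
      print(edges.shape, round(float(edges.max()), 1))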

  13. FPGA design for dual-spectrum Visual Scene Preparation in retinal prosthesis.

    PubMed

    Al Yaman, Musa; Al-Atabany, Walid; Bystrov, Alex; Degenaar, Patrick

    2014-01-01

    A method of Visual Scene Preparation for patients suffering from Retinitis Pigmentosa is implemented in hardware for the first time. The scene is captured with two cameras, one visible-spectrum and one infra-red, in order to distinguish between live and non-live objects. The live objects are subsequently emphasized in the output image, thus helping a patient to see the most significant detail with the healthy part of the retina. The implementation uses the Verilog language and an FPGA platform. A system prototype is analyzed and compared to MATLAB results. PMID:25571039

  14. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the Cocktail Party Problem prevail across many fields, creating a need for Blind Source Separation (BSS). The need for BSS has become prevalent in several fields of work, including array processing, communications, medical and speech signal processing, wireless communication, audio, acoustics and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals in software, and to implement them in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), FastICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm, requiring the least complexity and fewest resources while effectively separating mixed sources, was the EASI algorithm. The EASI ICA was implemented in hardware on an FPGA to analyze its performance in real time.
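
    The EASI FPGA realization itself is not reproduced here; as background, the sketch below shows a minimal floating-point version of the serial EASI-style relative-gradient update on a synthetic two-source mixture. The step size, the cubic nonlinearity (a common choice for sub-Gaussian sources) and the test signals are illustrative assumptions, not details from the paper.

      import numpy as np

      def easi_separate(X, mu=0.002, g=lambda y: y ** 3):
          """Basic serial update W <- W - mu * (y y' - I + g(y) y' - y g(y)') W with y = W x."""
          n_src, n_samples = X.shape
          W = np.eye(n_src)
          I = np.eye(n_src)
          for k in range(n_samples):
              x = X[:, k:k + 1]                 # one sample as a column vector
              y = W @ x
              gy = g(y)
              W = W - mu * (y @ y.T - I + gy @ y.T - y @ gy.T) @ W
          return W

      # Two synthetic sub-Gaussian sources mixed by a random matrix, then separated
      rng = np.random.default_rng(4)
      t = np.arange(20000) / 1000.0
      S = np.vstack([np.sign(np.sin(2 * np.pi * 3 * t)),   # square-like wave
                     np.sin(2 * np.pi * 7 * t)])           # sine wave
      A = rng.standard_normal((2, 2))
      W = easi_separate(A @ S)
      print(np.round(W @ A, 2))   # ideally close to a scaled permutation matrix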

  15. Design of an Oximeter Based on LED-LED Configuration and FPGA Technology

    PubMed Central

    Stojanovic, Radovan; Karadaglic, Dejan

    2013-01-01

    A fully digital photoplethysmographic (PPG) sensor and actuator has been developed. The sensing circuit uses one Light Emitting Diode (LED) for emitting light into human tissue and one LED for detecting the light reflected from the tissue. A Field Programmable Gate Array (FPGA) is used to control the LEDs and determine the PPG signal and Blood Oxygen Saturation (SpO2). Configurations with two LEDs and four LEDs are developed for measuring the PPG signal and SpO2, and an N-LED configuration is proposed for multichannel SpO2 measurements. The approach results in better spectral sensitivity, increased and adjustable resolution, reduced noise, small size, low cost and low power consumption. PMID:23291575
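
    The abstract does not give the SpO2 computation used in the FPGA, so the sketch below shows the generic ratio-of-ratios approach on synthetic PPG-like waveforms, together with a commonly quoted linear approximation that is illustrative and uncalibrated; real devices use an empirically calibrated lookup table.

      import numpy as np

      def ratio_of_ratios(red_ppg, ir_ppg):
          """Compute R = (AC_red/DC_red) / (AC_ir/DC_ir) from two PPG waveforms."""
          ac_red = red_ppg.max() - red_ppg.min()
          dc_red = red_ppg.mean()
          ac_ir = ir_ppg.max() - ir_ppg.min()
          dc_ir = ir_ppg.mean()
          return (ac_red / dc_red) / (ac_ir / dc_ir)

      def spo2_estimate(r):
          """Illustrative, uncalibrated linear approximation of SpO2 from R."""
          return 110.0 - 25.0 * r

      # Synthetic PPG-like waveforms (values illustrative only)
      t = np.linspace(0, 10, 1000)
      red = 1.00 + 0.010 * np.sin(2 * np.pi * 1.2 * t)
      ir = 1.00 + 0.017 * np.sin(2 * np.pi * 1.2 * t)
      r = ratio_of_ratios(red, ir)
      print(round(r, 3), round(spo2_estimate(r), 1))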

  16. Design of a hardware/software FPGA-based driver system for a large area high resolution CCD image sensor

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Xu, Wanpeng; Zhao, Rongsheng; Chen, Xiangning

    2014-09-01

    A hardware/software field programmable gate array (FPGA)-based driver system was proposed and demonstrated for the KAF-39000 large area high resolution charge coupled device (CCD). The requirements of the KAF-39000 driver system were analyzed. The structure of "microprocessor with application specific integrated circuit (ASIC) chips" was implemented to design the driver system. The system test results showed that dual channels of imaging analog data were obtained with a frame rate of 0.87 frame/s. The frequencies of horizontal timing and vertical timing were 22.9 MHz and 28.7 kHz, respectively, which almost reached the theoretical value of 24 MHz and 30 kHz, respectively.

  17. FPGA design and implementation of a fast pixel purity index algorithm for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Valencia, David; Plaza, Antonio; Vega-Rodríguez, Miguel A.; Pérez, Rosa M.

    2005-11-01

    Hyperspectral imagery is a class of image data which is used in many scientific areas, most notably medical imaging and remote sensing. It is characterized by a wealth of spatial and spectral information. Over the last years, many algorithms have been developed with the purpose of finding "spectral endmembers," which are assumed to be pure signatures in remotely sensed hyperspectral data sets. Such pure signatures can then be used to estimate the abundance or concentration of materials in mixed pixels, thus allowing sub-pixel analysis, which is crucial in many remote sensing applications due to current sensor optics and configuration. One of the most popular endmember extraction algorithms has been the pixel purity index (PPI), available from Kodak's Research Systems ENVI software package. This algorithm is very time consuming, a fact that has generally prevented its exploitation within acceptable response times in a wide range of applications, including environmental monitoring, military applications and hazard and threat assessment/tracking (including wildland fire detection, oil spill mapping and chemical and biological standoff detection). Field programmable gate arrays (FPGAs) are hardware components with millions of gates. Their reprogrammability and high computational power make them particularly attractive in remote sensing applications which require a response in near real time. In this paper, we present an FPGA design for the implementation of the PPI algorithm which takes advantage of a recently developed fast PPI (FPPI) algorithm that relies on software-based optimization. The proposed FPGA design represents our first step toward the development of a new reconfigurable system for fast, onboard analysis of remotely sensed hyperspectral imagery.
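
    The FPGA design and the FPPI optimizations are not reproduced here; as background, the sketch below shows the classic PPI scoring step in software: project every pixel spectrum onto random unit vectors ("skewers") and count how often each pixel lands at an extreme. The scene size, number of skewers and synthetic endmembers are illustrative assumptions.

      import numpy as np

      def pixel_purity_index(pixels, n_skewers=500, seed=0):
          """Classic PPI scoring: pixels is (n_pixels, n_bands); returns a purity count per pixel."""
          rng = np.random.default_rng(seed)
          n_pixels, n_bands = pixels.shape
          scores = np.zeros(n_pixels, dtype=int)
          for _ in range(n_skewers):
              skewer = rng.standard_normal(n_bands)
              skewer /= np.linalg.norm(skewer)
              proj = pixels @ skewer
              scores[proj.argmax()] += 1      # extreme pixels are the purest candidates
              scores[proj.argmin()] += 1
          return scores

      # Illustrative synthetic scene: mixtures of 3 random endmembers plus noise
      rng = np.random.default_rng(5)
      endmembers = rng.random((3, 50))                      # 3 signatures, 50 bands
      abundances = rng.dirichlet(np.ones(3), size=2000)     # sum-to-one abundances
      scene = abundances @ endmembers + 0.001 * rng.standard_normal((2000, 50))
      scores = pixel_purity_index(scene)
      print(np.argsort(scores)[-5:])   # indices of the highest-scoring (purest) pixels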

  18. Design of an FPGA-Based Algorithm for Real-Time Solutions of Statistics-Based Positioning.

    PubMed

    Dewitt, Don; Johnson-Williams, Nathan G; Miyaoka, Robert S; Li, Xiaoli; Lockhart, Cate; Lewellen, Tom K; Hauck, Scott

    2010-02-01

    We report on the implementation of an algorithm and hardware platform to allow real-time processing of the statistics-based positioning (SBP) method for continuous miniature crystal element (cMiCE) detectors. The SBP method allows an intrinsic spatial resolution of ~1.6 mm FWHM to be achieved using our cMiCE design. Previous SBP solutions have required a postprocessing procedure due to the computation- and memory-intensive nature of SBP. This new implementation takes advantage of a combination of algebraic simplifications, conversion to fixed-point math, and a hierarchical search technique to greatly accelerate the algorithm. For the presented seven-stage, 127 × 127 bin LUT implementation, these algorithm improvements result in a reduction from >7 × 10^6 floating-point operations per event for an exhaustive search to <5 × 10^3 integer operations per event. Simulations show nearly identical FWHM positioning resolution for this accelerated SBP solution, and positioning differences of <0.1 mm from the exhaustive search solution. A pipelined field programmable gate array (FPGA) implementation of this optimized algorithm is able to process events in excess of 250 K events per second, which is greater than the maximum expected coincidence rate for an individual detector. In contrast with all detectors being processed at a centralized host, as in the current system, a separate FPGA is available at each detector, thus dividing the computational load. These methods allow SBP results to be calculated in real time and to be presented to the image generation components in real time. A hardware implementation has been developed using a commercially available prototype board. PMID:21197135

  19. Design of an FPGA-Based Algorithm for Real-Time Solutions of Statistics-Based Positioning

    PubMed Central

    DeWitt, Don; Johnson-Williams, Nathan G.; Miyaoka, Robert S.; Li, Xiaoli; Lockhart, Cate; Lewellen, Tom K.; Hauck, Scott

    2010-01-01

    We report on the implementation of an algorithm and hardware platform to allow real-time processing of the statistics-based positioning (SBP) method for continuous miniature crystal element (cMiCE) detectors. The SBP method allows an intrinsic spatial resolution of ~1.6 mm FWHM to be achieved using our cMiCE design. Previous SBP solutions have required a postprocessing procedure due to the computation- and memory-intensive nature of SBP. This new implementation takes advantage of a combination of algebraic simplifications, conversion to fixed-point math, and a hierarchical search technique to greatly accelerate the algorithm. For the presented seven-stage, 127 × 127 bin LUT implementation, these algorithm improvements result in a reduction from >7 × 10^6 floating-point operations per event for an exhaustive search to <5 × 10^3 integer operations per event. Simulations show nearly identical FWHM positioning resolution for this accelerated SBP solution, and positioning differences of <0.1 mm from the exhaustive search solution. A pipelined field programmable gate array (FPGA) implementation of this optimized algorithm is able to process events in excess of 250 K events per second, which is greater than the maximum expected coincidence rate for an individual detector. In contrast with all detectors being processed at a centralized host, as in the current system, a separate FPGA is available at each detector, thus dividing the computational load. These methods allow SBP results to be calculated in real time and to be presented to the image generation components in real time. A hardware implementation has been developed using a commercially available prototype board. PMID:21197135
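
    The accelerations reported above combine algebraic simplification, fixed-point arithmetic and a hierarchical search over the lookup table; the sketch below illustrates only the hierarchical (coarse-to-fine) search idea on a generic 2-D cost grid, not the actual SBP likelihood or the 127 × 127 LUT contents, and the grid and cost function are illustrative assumptions.

      import numpy as np

      def hierarchical_search(cost, levels=7):
          """Coarse-to-fine search for the minimum of a 2-D cost grid.

          Each level evaluates only a small neighbourhood around the current best
          estimate, so the number of evaluated cells grows roughly with the number
          of levels instead of with the full grid area.
          """
          n = cost.shape[0]
          best = (n // 2, n // 2)
          step = n // 2
          evaluated = 0
          for _ in range(levels):
              candidates = [(best[0] + dy * step, best[1] + dx * step)
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
              candidates = [(y, x) for y, x in candidates if 0 <= y < n and 0 <= x < n]
              evaluated += len(candidates)
              best = min(candidates, key=lambda p: cost[p])
              step = max(step // 2, 1)
          return best, evaluated

      # Illustrative quadratic cost on a 127 x 127 grid with its minimum at (90, 30)
      yy, xx = np.mgrid[0:127, 0:127]
      cost = (yy - 90.0) ** 2 + (xx - 30.0) ** 2
      print(hierarchical_search(cost))   # finds (90, 30) after far fewer than 127*127 evaluations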

  20. The FPGA Pixel Array Detector

    NASA Astrophysics Data System (ADS)

    Hromalik, Marianne S.; Green, Katherine S.; Philipp, Hugh T.; Tate, Mark W.; Gruner, Sol M.

    2013-02-01

    A proposed design for a reconfigurable x-ray Pixel Array Detector (PAD) is described. It operates by integrating a high-end commercial field programmable gate array (FPGA) into a 3-layer device along with a high-resistivity diode detection layer and a custom, application-specific integrated circuit (ASIC) layer. The ASIC layer contains an energy-discriminating photon-counting front end with photon hits streamed directly to the FPGA via a massively parallel, high-speed data connection. FPGA resources can be allocated to perform user defined tasks on the pixel data streams, including the implementation of a direct time autocorrelation function (ACF) with time resolution down to 100 ns. Using the FPGA at the front end to calculate the ACF reduces the required data transfer rate by several orders of magnitude when compared to a fast framing detector. The FPGA-ASIC high-speed interface, as well as the in-FPGA implementation of a real-time ACF for x-ray photon correlation spectroscopy experiments has been designed and simulated. A 16×16 pixel prototype of the ASIC has been fabricated and is being tested.
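
    The in-FPGA autocorrelation is the key data-reduction step described above; the sketch below shows only a plain (direct, linear-lag) normalized intensity ACF of a binned photon-count trace in software, with the trace and bin count chosen for illustration, not the detector's real-time multi-lag implementation.

      import numpy as np

      def intensity_acf(counts, max_lag):
          """Normalized intensity autocorrelation g2(tau) of a photon-count time trace.

          counts  : photon counts per time bin for one pixel
          max_lag : largest lag (in bins) to evaluate
          Returns g2[tau] = <I(t) I(t+tau)> / <I>^2 for tau = 1 .. max_lag.
          """
          counts = np.asarray(counts, dtype=float)
          mean_sq = counts.mean() ** 2
          g2 = np.empty(max_lag)
          for lag in range(1, max_lag + 1):
              g2[lag - 1] = np.mean(counts[:-lag] * counts[lag:]) / mean_sq
          return g2

      # Illustrative trace: correlated intensity fluctuations (AR(1) process) driving a Poisson counter
      rng = np.random.default_rng(6)
      n_bins = 100_000
      drive = rng.standard_normal(n_bins)
      intensity = np.empty(n_bins)
      acc = 0.0
      for i, d in enumerate(drive):
          acc = 0.99 * acc + d          # exponentially correlated fluctuations
          intensity[i] = acc
      counts = rng.poisson(5.0 + np.clip(intensity, -4.9, None))
      print(np.round(intensity_acf(counts, max_lag=5), 3))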

  1. FPGA Vision Data Architecture

    NASA Technical Reports Server (NTRS)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  2. Initial Multidisciplinary Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Ozoroski, L. P.; Geiselhart, K. A.; Padula, S. L.; Li, W.; Olson, E. D.; Campbell, R. L.; Shields, E. W.; Berton, J. J.; Gray, J. S.; Jones, S. M.; Naiman, C. G.; Seidel, J. A.; Moore, K. T.; Naylor, B. A.; Townsend, S.

    2010-01-01

    Within the Supersonics (SUP) Project of the Fundamental Aeronautics Program (FAP), an initial multidisciplinary design & analysis framework has been developed. A set of low- and intermediate-fidelity discipline design and analysis codes were integrated within a multidisciplinary design and analysis framework and demonstrated on two challenging test cases. The first test case demonstrates an initial capability to design for low boom and performance. The second test case demonstrates rapid assessment of a well-characterized design. The current system has been shown to greatly increase the design and analysis speed and capability, and many future areas for development were identified. This work has established a state-of-the-art capability for immediate use by supersonic concept designers and systems analysts at NASA, while also providing a strong base to build upon for future releases as more multifidelity capabilities are developed and integrated.

  3. Structural Analysis in a Conceptual Design Framework

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Robinson, Jay H.; Eldred, Lloyd B.

    2012-01-01

    Supersonic aircraft designers must shape the outer mold line of the aircraft to improve multiple objectives, such as mission performance, cruise efficiency, and sonic-boom signatures. Conceptual designers have demonstrated an ability to assess these objectives for a large number of candidate designs. Other critical objectives and constraints, such as weight, fuel volume, aeroelastic effects, and structural soundness, are more difficult to address during the conceptual design process. The present research adds both static structural analysis and sizing to an existing conceptual design framework. The ultimate goal is to include structural analysis in the multidisciplinary optimization of a supersonic aircraft. Progress towards that goal is discussed and demonstrated.

  4. Validation of an FPGA fault simulator.

    SciTech Connect

    Wirthlin, M. J.; Johnson, D. E.; Graham, P. S.; Caffrey, M. P.

    2003-01-01

    This work describes the radiation testing of a fault simulation tool used to study the behavior of FPGA circuits in the presence of configuration memory upsets. There is increasing interest in the use of Field Programmable Gate Arrays (FPGAs) in space-based applications such as remote sensing [1]. The use of reconfigurable Field Programmable Gate Arrays (FPGAs) within a spacecraft allows the use of digital circuits that are both application-specific and reprogrammable. Unlike application-specific integrated circuits (ASICs), FPGAs can be configured after the spacecraft has been launched. This flexibility allows the same FPGA resources to be used for multiple instruments, missions, or changing spacecraft objectives. Errors in an FPGA design can be resolved by fixing the incorrect design and reconfiguring the FPGA with an updated configuration bitstream. Further, custom circuit designs can be created to avoid FPGA resources that have failed during the course of the spacecraft mission.

  5. A Design Framework for Awareness Systems

    NASA Astrophysics Data System (ADS)

    Markopoulos, Panos

    This chapter discusses the design of awareness systems, whose main function is a social one, namely, to support social communication, mediated social interactions and eventually relationships of the individuals they connect. We focus especially on connecting friends and family rather than on systems used in the context of collaborative work. Readers interested in this latter kind of applications are referred to the design frameworks by Ginelle and Gutwin (2005) and Gutwin and Greenberg (2002). Below, we outline the relevant design space and the corresponding challenges for the design of awareness systems. The challenges pertain to social aspects of interaction design rather than the technological challenges relating to such systems. As such, they are inspired by Jonathan Grudin’s exposition of design challenges for the domain of groupware applications (Grudin, 1994).

  6. FPGA-based design and implementation of arterial pulse wave generator using piecewise Gaussian-cosine fitting.

    PubMed

    Wang, Lu; Xu, Lisheng; Zhao, Dazhe; Yao, Yang; Song, Dan

    2015-04-01

    Because arterial pulse waves contain vital information related to the condition of the cardiovascular system, considerable attention has been devoted to the study of pulse waves in recent years. Accurate acquisition is essential to investigate arterial pulse waves. However, at the stage of developing equipment for acquiring and analyzing arterial pulse waves, specific pulse signals may be unavailable for debugging and evaluating the system under development. To produce test signals that reflect specific physiological conditions, in this paper, an arterial pulse wave generator has been designed and implemented using a field programmable gate array (FPGA), which can produce the desired pulse waves according to the feature points set by users. To reconstruct a periodic pulse wave from the given feature points, a method known as piecewise Gaussian-cosine fitting is also proposed in this paper. Using a test database that contains four types of typical pulse waves with each type containing 25 pulse wave signals, the maximum residual error of each sampling point of the fitted pulse wave in comparison with the real pulse wave is within 8%. In addition, the function for adding baseline drift and three types of noises is integrated into the developed system because the baseline occasionally wanders, and noise needs to be added for testing the performance of the designed circuits and the analysis algorithms. The proposed arterial pulse wave generator can be considered as a special signal generator with a simple structure, low cost and compact size, which can also provide flexible solutions for many other related research purposes. PMID:25732778
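
    The piecewise Gaussian-cosine fitting itself is not detailed in the abstract, so the sketch below uses a plain sum-of-Gaussians stand-in to show how a periodic pulse-like waveform can be synthesized from user-set feature points; the feature values, sampling rate and period are illustrative assumptions, not the paper's method or data.

      import numpy as np

      def synth_pulse_wave(duration_s, fs, period_s, features):
          """Synthesize a periodic pulse-like waveform from user-set feature points.

          features : list of (time_within_period_s, amplitude, width_s) tuples, one per
                     component (e.g. systolic peak, dicrotic wave); each component is a
                     Gaussian bump repeated every cardiac period.
          """
          t = np.arange(int(duration_s * fs)) / fs
          phase = np.mod(t, period_s)                  # time within the current beat
          wave = np.zeros_like(t)
          for centre, amp, width in features:
              wave += amp * np.exp(-0.5 * ((phase - centre) / width) ** 2)
          return t, wave

      # Illustrative feature points: main systolic peak plus a smaller dicrotic wave
      t, pulse = synth_pulse_wave(duration_s=5.0, fs=500.0, period_s=0.8,
                                  features=[(0.20, 1.00, 0.05), (0.45, 0.35, 0.07)])
      print(len(pulse), round(float(pulse.max()), 2))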

  7. Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian

    2015-12-01

    With countries paying broad attention to sea transportation and trade safety, the requirements on the efficiency and accuracy of moving ship tracking are becoming higher. Therefore, a systematic design for onboard moving ship tracking based on an FPGA is proposed, which uses the Adaptive Inter-Frame Difference (AIFD) method to track a ship at different speeds. Since the Frame Difference (FD) method is simple but the amount of computation is very large, it is suitable for parallel implementation on an FPGA. However, the Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship looks very small (depicted by only dozens of pixels) and moves slowly. With invariant FIs, the accuracy of FD for moving ship tracking is not satisfactory and the calculation is highly redundant. So we use an adaptation of FD based on adaptive extraction of key frames for moving ship tracking. An FPGA development board of the Xilinx Kintex-7 series is used for simulation. The experiments show that, compared with the traditional FD method, the proposed one can achieve higher accuracy of moving ship tracking and can meet the requirement of real-time tracking at high image resolution.
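
    Frame differencing as used above reduces to thresholding the absolute difference between two frames separated by an (adaptively chosen) interval; a minimal software sketch follows, without the paper's adaptive key-frame selection, and with the threshold, frame size and synthetic "ship" purely illustrative.

      import numpy as np

      def frame_difference_mask(frame_a, frame_b, threshold=20.0):
          """Binary motion mask from the absolute difference of two frames."""
          return np.abs(frame_b.astype(float) - frame_a.astype(float)) > threshold

      def centroid(mask):
          """Centroid of the changed pixels, a crude position estimate for the moving ship."""
          ys, xs = np.nonzero(mask)
          return (ys.mean(), xs.mean()) if ys.size else None

      # Illustrative key-frame pair: a small bright "ship" moves a few pixels between frames
      rng = np.random.default_rng(7)
      sea = rng.normal(50.0, 3.0, size=(256, 256))
      frame1, frame2 = sea.copy(), sea.copy()
      frame1[100:104, 60:70] += 80.0       # ship around column 65 in the earlier key frame
      frame2[100:104, 66:76] += 80.0       # ship around column 71 in the later key frame
      mask = frame_difference_mask(frame1, frame2)
      print(centroid(mask))                # centroid of the changed region between the two frames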

  8. FPGA based pulsed NQR spectrometer

    NASA Astrophysics Data System (ADS)

    Hemnani, Preeti; Rajarajan, A. K.; Joshi, Gopal; Motiwala, Paresh D.; Ravindranath, S. V. G.

    2014-04-01

    An NQR spectrometer for the frequency range of 1 MHz to 5 MHz has been designed, constructed and tested using an FPGA module. Consisting of four modules, viz. transmitter, probe, receiver and a computer-controlled (FPGA and software) module containing the frequency synthesizer, pulse programmer, mixer, detection and display, the instrument is capable of exciting nuclei with a power of 200 W and can detect signals of a few microvolts in strength. The 14N signal from NaNO2 has been observed with the expected signal strength.

  9. The Time Domain Crossbar (TDX): A high speed, high density, FPGA design

    SciTech Connect

    Schreiber, A.L.

    1993-12-31

    As system clock rates of electronic designs steadily increase, the need for high-bandwidth communication between designs in the system becomes critical. The Time Domain Crossbar (TDX) provides programmable, high-speed communication bandwidth across the user I/O pins of a VME backplane. The TDX time-multiplexes 20 MHz byte-wide data onto 80 MHz byte-wide data for transmission between boards. A programmable register set allows the user to open and close virtual communication channels by configuring independent data paths between sets of boards. Because each additional TDX board provides another crossbar, the overall system bandwidth increases with the number of TDX boards.

  10. Design exploration and verification platform, based on high-level modeling and FPGA prototyping, for fast and flexible digital communication in physics experiments

    NASA Astrophysics Data System (ADS)

    Magazzù, G.; Borgese, G.; Costantino, N.; Fanucci, L.; Incandela, J.; Saponara, S.

    2013-02-01

    In many research fields such as high energy physics (HEP), astrophysics, nuclear medicine or space engineering with harsh operating conditions, the use of fast and flexible digital communication protocols is becoming more and more important. The possibility of having a smart and tested top-down design flow for the design of a new protocol for control/readout of front-end electronics is very useful. To this aim, and to reduce development time, costs and risks, this paper describes an innovative design/verification flow applied, as an example case study, to a new communication protocol called FF-LYNX. After the description of the main FF-LYNX features, the paper presents: the definition of a parametric SystemC-based Integrated Simulation Environment (ISE) for high-level protocol definition and validation; the setup of figures of merit to drive the design space exploration; the use of the ISE for early analysis of the achievable performance when adopting the new communication protocol and its interfaces for a new (or upgraded) physics experiment; the design of VHDL IP cores for the TX and RX protocol interfaces; their implementation on an FPGA-based emulator for functional verification; and finally the modification of the FPGA-based emulator for testing the ASIC chipset which implements the rad-tolerant protocol interfaces. For every step, significant results are shown to underline the usefulness of this design and verification approach, which can be applied to any new digital protocol development for smart detectors in physics experiments.

  11. FPGA implemented testbed in 8-by-8 and 2-by-2 OFDM-MIMO channel estimation and design of baseband transceiver.

    PubMed

    Ramesh, S; Seshasayanan, R

    2016-01-01

    In this study, a baseband OFDM-MIMO framework with channel estimation and timing synchronization is designed and implemented using FPGA technology. The system is prototyped in accordance with the IEEE 802.11a standard, and the signals are transmitted and received using a bandwidth of 20 MHz. With QPSK modulation, the system can achieve a throughput of 24 Mbps. Furthermore, the LS algorithm is implemented and the estimation of a frequency-selective fading channel is illustrated. For coarse timing estimation, the MNC scheme is examined and implemented. First of all, the whole system is modelled in MATLAB and a floating-point model is established. Then, the fixed-point model is created with the help of Simulink and Xilinx's System Generator for DSP. In this way, the system is integrated and implemented within Xilinx's ISE tools and targeted to a Xilinx Virtex 5 board. In addition, a hardware co-simulation is devised to reduce the processing time when computing the BER of the fixed-point model. The work constitutes a first step for further investigation into designing innovative channel estimation strategies towards applications in fourth generation (4G) mobile communication systems. PMID:27047719
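
    The LS channel estimator mentioned above divides received pilot symbols by the known transmitted pilots on each pilot subcarrier and interpolates across the remaining subcarriers; the sketch below shows that step for one OFDM symbol, with the subcarrier count, pilot spacing and channel taps chosen as illustrative assumptions rather than the 802.11a configuration.

      import numpy as np

      def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_subcarriers):
          """LS estimate H = Y/X at pilot subcarriers, then linear interpolation
          (real and imaginary parts separately) across all subcarriers."""
          h_pilots = rx_pilots / tx_pilots
          k = np.arange(n_subcarriers)
          h_real = np.interp(k, pilot_idx, h_pilots.real)
          h_imag = np.interp(k, pilot_idx, h_pilots.imag)
          return h_real + 1j * h_imag

      # Illustrative setup: 64 subcarriers, a pilot every 8th subcarrier, 3-tap channel
      rng = np.random.default_rng(8)
      n_sc = 64
      pilot_idx = np.arange(0, n_sc, 8)
      channel = np.fft.fft(np.array([1.0, 0.4, 0.2]), n_sc)      # channel frequency response
      tx_pilots = np.ones(pilot_idx.size, dtype=complex)          # known BPSK pilots
      noise = 0.01 * (rng.standard_normal(pilot_idx.size)
                      + 1j * rng.standard_normal(pilot_idx.size))
      rx_pilots = channel[pilot_idx] * tx_pilots + noise
      h_est = ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_sc)
      print(round(float(np.max(np.abs(h_est - channel))), 3))     # interpolation error, largest past the last pilot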

  12. Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator

    SciTech Connect

    Akerstedt, Henrik; Muschter, Steffen; Drake, Gary; Anderson, Kelby; Bohm, Christian; Oreglia, Mark; Tang, Fukun

    2015-10-01

    The Tile Calorimeter at ATLAS [1] is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector. However, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability, the upgraded system will be highly redundant. Here the ASICs will be replaced with Kintex-7 FPGAs from Xilinx. This, in addition to the use of multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Due to the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent. The circuitry and firmware must therefore be designed for sufficiently high reliability using redundancy and radiation-tolerant components. Within a year, a hybrid demonstrator including the new read-out system will be installed in one slice of the ATLAS Tile Calorimeter. This will allow the proposed upgrade to be thoroughly evaluated well before the planned 2023 deployment in all slices, especially with regard to long-term reliability. Different firmware strategies, along with their integration in the demonstrator, are presented in the context of high-reliability protection against hardware malfunction and radiation-induced errors.

  13. Research on the design of surface acquisition system of active lap based on FPGA and FX2LP

    NASA Astrophysics Data System (ADS)

    Zhao, Hongshen; Li, Xiaojin; Fan, Bin; Zeng, Zhige

    2014-08-01

    In order to study the dynamic surface shape changes of an active lap during processing, this paper introduces a dynamic surface shape acquisition system for active laps based on FPGA and USB communication. The system consists of a high-precision micro-displacement sensor array, an acquisition circuit board and a PC host computer; the acquisition circuit board comprises six FPGA-based sub-boards and one hub-board based on FPGA and USB communication. Each sub-board is responsible for data acquisition from a number of independent sensor channels. The hub-board is responsible for providing the active lap deformation control system with emulated encoder location information, sending synchronization information to latch the sensor data in all of the sub-boards at the same instant, addressing the sub-boards to gather the sensor data from each sub-board one by one, and transmitting all the sensor data together with the location information to the host computer via the USB chip FX2LP. Experimental results show that the system is capable of determining the location and speed of the active lap, and that both control of the surface transformation and dynamic surface data acquisition at a given location during processing are achieved.

  14. Designing Educational Software with Students through Collaborative Design Games: The We!Design&Play Framework

    ERIC Educational Resources Information Center

    Triantafyllakos, George; Palaigeorgiou, George; Tsoukalas, Ioannis A.

    2011-01-01

    In this paper, we present a framework for the development of collaborative design games that can be employed in participatory design sessions with students for the design of educational applications. The framework is inspired by idea generation theory and the design games literature, and guides the development of board games which, through the use…

  15. Decomposition of MATLAB script for FPGA implementation of real time simulation algorithms for LLRF system in European XFEL

    NASA Astrophysics Data System (ADS)

    Bujnowski, K.; Pucyk, P.; Pozniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The European XFEL project uses the LLRF system for stabilization of the vector sum of the RF field in 32 superconducting cavities. Dedicated, high-performance photonics, electronics and software were built. To provide high system availability, an appropriate test environment as well as diagnostics was designed. A real-time simulation subsystem was designed, based on dedicated electronics using FPGA technology and robust simulation models implemented in VHDL. The paper presents an architecture of the system framework which allows for easy and flexible conversion of MATLAB language structures directly into an FPGA-implementable grid of parameterized, simple DSP processors. The decomposition of the MATLAB grammar is described, as well as the optimization process and FPGA implementation issues.

  16. A maximum likelihood framework for protein design

    PubMed Central

    Kleinman, Claudia L; Rodrigue, Nicolas; Bonnard, Cécile; Philippe, Hervé; Lartillot, Nicolas

    2006-01-01

    Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces shaping protein sequences, and
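
    To make the inference scheme concrete, the following deliberately tiny Python sketch performs maximum-likelihood fitting of an "inverse potential" by gradient ascent. The contact map, two-letter alphabet and set of "native" sequences are toy assumptions, and the model expectations are computed by exact enumeration rather than by the Markov chain Monte Carlo and thermodynamic integration used in the paper.

      import numpy as np
      from itertools import product

      # Toy inverse potential: sequences of length 4 over a 2-letter alphabet,
      # one parameter per contact pair rewarding identical residues in contact.
      contacts = [(0, 2), (1, 3)]                        # hypothetical contact map
      seqs = np.array(list(product(range(2), repeat=4)))

      def features(s):
          return np.array([float(s[i] == s[j]) for i, j in contacts])

      F = np.array([features(s) for s in seqs])          # 16 x 2 feature matrix
      natives = F[[0, 3, 5, 15]].mean(axis=0)            # empirical means over "native" sequences

      theta = np.zeros(len(contacts))                    # inverse potential parameters
      for _ in range(500):                               # gradient ascent on the log-likelihood
          logits = F @ theta
          p = np.exp(logits - logits.max())
          p /= p.sum()                                   # model distribution p(s) ~ exp(theta . f(s))
          theta += 0.5 * (natives - F.T @ p)             # dlogL/dtheta = <f>_data - <f>_model
      print("fitted inverse potential:", theta)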

  17. An OER Architecture Framework: Needs and Design

    ERIC Educational Resources Information Center

    Khanna, Pankaj; Basak, P. C.

    2013-01-01

    This paper describes an open educational resources (OER) architecture framework that would bring significant improvements in a well-structured and systematic way to the educational practices of distance education institutions of India. The OER architecture framework is articulated with six dimensions: pedagogical, technological, managerial,…

  18. Towards a dissipativity framework for power system stabilizer design

    SciTech Connect

    Jacobson, C.A.; Stankovic, A.M.; Tadmor, G.; Stevens, M.A.

    1996-11-01

    This paper describes a dissipativity-based framework for the study of low-frequency oscillations in power systems and for power system stabilizer design. This framework leads to a robust controller design formulation, amenable to both H∞ and QFT tools. An illustrative numerical example presents a QFT-based design for a widely used benchmark two-area, four-machine power system.

  19. A FPGA Implementation of JPEG Baseline Encoder for Wearable Devices

    PubMed Central

    Li, Yuecheng; Jia, Wenyan; Luan, Bo; Mao, Zhi-hong; Zhang, Hong; Sun, Mingui

    2015-01-01

    In this paper, an efficient field-programmable gate array (FPGA) implementation of the JPEG baseline image compression encoder is presented for wearable devices in health and wellness applications. In order to gain flexibility in developing FPGA-specific software and to balance real-time performance against resource utilization, a high-level synthesis (HLS) tool is utilized in our system design. An optimized dataflow configuration with a padding scheme simplifies the timing control for data transfer. Our experiments with a system-on-chip multi-sensor system have verified our FPGA implementation with respect to real-time performance, computational efficiency, and FPGA resource utilization. PMID:26190911
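
    For orientation, the arithmetic core of a JPEG baseline encoder (an 8x8 forward DCT followed by quantization) can be sketched in a few lines of Python. This is only a reference model of the transform and quantization stages under simple assumptions (standard luminance table, no quality scaling, no zig-zag reordering or entropy coding); it is not the paper's HLS design.

      import numpy as np
      from scipy.fftpack import dct

      # Standard JPEG luminance quantization table (Annex K), no quality scaling.
      Q = np.array([
          [16, 11, 10, 16, 24, 40, 51, 61],
          [12, 12, 14, 19, 26, 58, 60, 55],
          [14, 13, 16, 24, 40, 57, 69, 56],
          [14, 17, 22, 29, 51, 87, 80, 62],
          [18, 22, 37, 56, 68, 109, 103, 77],
          [24, 35, 55, 64, 81, 104, 113, 92],
          [49, 64, 78, 87, 103, 121, 120, 101],
          [72, 92, 95, 98, 112, 100, 103, 99]])

      def encode_block(block):
          """8x8 pixel block (0..255) -> quantized DCT coefficients."""
          shifted = block.astype(float) - 128.0                  # level shift
          coeffs = dct(dct(shifted, axis=0, norm='ortho'), axis=1, norm='ortho')
          return np.round(coeffs / Q).astype(int)

      block = np.full((8, 8), 128, dtype=np.uint8)               # flat gray block
      print(encode_block(block))                                 # all zeros expected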

  20. Radiation Tolerant Antifuse FPGA

    NASA Technical Reports Server (NTRS)

    Wang, Jih-Jong; Cronquist, Brian; McCollum, John; Parker, Wanida; Katz, Rich; Kleyner, Igor; Day, John H. (Technical Monitor)

    2002-01-01

    The total dose performance of the antifuse FPGA for space applications is summarized. Optimization of the radiation tolerance in the fabless model is the main theme. Mechanisms to explain the variation in different products are discussed.

  1. Building a Framework for Engineering Design Experiences in High School

    ERIC Educational Resources Information Center

    Denson, Cameron D.; Lammi, Matthew

    2014-01-01

    In this article, Denson and Lammi put forth a conceptual framework that will help promote the successful infusion of engineering design experiences into high school settings. When considering a conceptual framework of engineering design in high school settings, it is important to consider the complex issue at hand. For the purposes of this…

  2. A Design Framework for Online Teacher Professional Development Communities

    ERIC Educational Resources Information Center

    Liu, Katrina Yan

    2012-01-01

    This paper provides a design framework for building online teacher professional development communities for preservice and inservice teachers. The framework is based on a comprehensive literature review on the latest technology and epistemology of online community and teacher professional development, comprising four major design factors and three…

  3. FPGA developments for the SPARTA project

    NASA Astrophysics Data System (ADS)

    Goodsell, S. J.; Fedrigo, E.; Dipper, N. A.; Donaldson, R.; Geng, D.; Myers, R. M.; Saunter, C. D.; Soenke, C.

    2005-08-01

    The European Southern Observatory (ESO) and Durham University's Centre for Advanced Instrumentation (CfAI) are currently designing a standard next-generation Adaptive Optics (AO) Real-Time Control System. This platform, labelled SPARTA, the 'Standard Platform for Adaptive optics Real-Time Applications', will initially control the AO systems for ESO's 2nd generation VLT instruments, and will scale to implement the initial AO systems for ESO's future 100 m telescope OWL. Durham's main task is to develop the Wavefront Sensor (WFS) front end and statistical machinery for the SPARTA platform using Field Programmable Gate Arrays (FPGAs). SPARTA takes advantage of an FPGA device to offload the highly parallel, computationally intensive tasks from the system processors, increasing the obtainable control loop frequency and reducing the computational latency of the control system. The WFS pixel stream enters a PMC-hosted FPGA card contained within the SPARTA platform via optical fibres carrying the VITA 17.18/10 standard 2.5 Gbps serial Front Panel Data Port (sFPDP) protocol. Each FPGA board can receive a maximum of 10 Gbps of data via on-board optical transceivers. The FPGA device reduces WFS frames to gradient vectors before passing the data to the system processors. The FPGA allows the processors to deal with other tasks such as wavefront reconstruction, telemetry and real-time data recording, allowing more complex adaptive control algorithms to be executed. This paper overviews the SPARTA requirements and current platform architecture and Durham's wavefront processor FPGA design, and concludes with a plan of future work.
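
    The WFS front-end reduction of pixel frames to gradient vectors is, at its simplest, a centre-of-gravity computation per subaperture. The NumPy sketch below illustrates that reduction for an assumed 8x8-pixel subaperture grid; it is a software reference, not the SPARTA firmware algorithm.

      import numpy as np

      def wfs_gradients(frame, sub=8):
          """Reduce a Shack-Hartmann frame to per-subaperture centroid gradients."""
          ny, nx = frame.shape[0] // sub, frame.shape[1] // sub
          yy, xx = np.mgrid[0:sub, 0:sub]
          centre = (sub - 1) / 2.0
          grads = np.zeros((ny, nx, 2))
          for j in range(ny):
              for i in range(nx):
                  spot = frame[j*sub:(j+1)*sub, i*sub:(i+1)*sub].astype(float)
                  total = spot.sum() or 1.0                        # avoid division by zero
                  grads[j, i, 0] = (xx * spot).sum() / total - centre   # x slope (pixels)
                  grads[j, i, 1] = (yy * spot).sum() / total - centre   # y slope (pixels)
          return grads

      frame = np.random.poisson(5.0, size=(64, 64))                # synthetic WFS frame
      print(wfs_gradients(frame).shape)                            # (8, 8, 2)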

  4. FPGA development for high altitude subsonic parachute testing

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.; Gromov, Konstantin G.; Konefat, Edward H.

    2005-01-01

    This paper describes a rapid, top down requirements-driven design of a Field Programmable Gate Array (FPGA) used in an Earth qualification test program for a new Mars subsonic parachute. The FPGA is used to process and control storage of telemetry data from multiple sensors throughout launch, ascent, deployment and descent phases of the subsonic parachute test.

  5. FPGA development for high altitude subsonic parachute testing

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.; Konefat, Edward H.; Gromovt, Konstantin

    2005-01-01

    This paper describes a rapid, top down requirements-driven design of an FPGA used in an Earth qualification test program for a new Mars subsonic parachute. The FPGA is used to process and store data from multiple sensors at multiple rates during launch, ascent, deployment and descent phases of the subsonic parachute test.

  6. Public Key FPGA Software

    Energy Science and Technology Software Center (ESTSC)

    2013-07-25

    The Public Key (PK) FPGA software performs asymmetric authentication using the 163-bit Elliptic Curve Digital Signature Algorithm (ECDSA) on an embedded FPGA platform. A digital signature is created on user-supplied data, and communication with a host system is performed via a Serial Peripheral Interface (SPI) bus. Software includes all components necessary for signing, including custom random number generator for key creation and SHA-256 for data hashing.
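
    A host-side software analogue of the signing flow (SHA-256 hashing plus ECDSA signing and verification) can be written with the Python cryptography package. Note that this sketch substitutes the widely supported P-256 curve for illustration, whereas the described FPGA implementation uses a 163-bit binary curve (exposed by the package as SECT163K1/SECT163R2 where the backend supports it), and the SPI transport is not modeled.

      # Hedged software analogue of sign-and-verify; not the FPGA/SPI implementation.
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec

      private_key = ec.generate_private_key(ec.SECP256R1())          # key creation
      data = b"user-supplied data to be signed"

      signature = private_key.sign(data, ec.ECDSA(hashes.SHA256()))  # hash + sign
      private_key.public_key().verify(signature, data, ec.ECDSA(hashes.SHA256()))
      print("signature verified, length:", len(signature))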

  7. Design and implementation of a multiband digital filter using FPGA to extract the ECG signal in the presence of different interference signals.

    PubMed

    Aboutabikh, Kamal; Aboukerdah, Nader

    2015-07-01

    In this paper, we propose a practical way to synthesize and filter an ECG signal in the presence of four types of interference signals: (1) those arising from power networks with a fundamental frequency of 50 Hz, (2) those arising from respiration, with a frequency range from 0.05 to 0.5 Hz, (3) muscle signals with a frequency of 25 Hz, and (4) white noise present within the ECG signal band. This was done by implementing a multiband digital filter (seven bands) of type FIR Multiband Least Squares on a programmable digital device (Cyclone II EP2C70F896C6 FPGA, Altera) placed on an education and development board (DE2-70, Terasic). The filter was designed in the VHDL language within the Quartus II 9.1 design environment. The proposed method depends on Direct Digital Frequency Synthesizers (DDFS) designed to synthesize the ECG signal and the various interference signals. So that the synthetic ECG specifications would be closer to actual ECG signals after filtering, we designed a single multiband digital filter instead of using three separate digital filters (LPF, HPF, BSF); thus all interference signals were removed with a single digital filter. The multiband digital filter results were studied using a digital oscilloscope to characterize input and output signals in the presence of differing sinusoidal interference signals and white noise. PMID:25912983
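
    The multiband least-squares FIR design itself can be prototyped offline before being committed to VHDL. The sketch below uses scipy.signal.firls with illustrative band edges (stopbands near DC/respiration, 25 Hz and 50 Hz, passbands over the remaining ECG spectrum); the sampling rate, edges and filter order are assumptions rather than the exact seven-band specification used in the paper.

      import numpy as np
      from scipy.signal import firls

      fs = 500.0                      # assumed ECG sampling rate (Hz)
      nyq = fs / 2.0

      # Band edges in Hz: stop near DC (baseline/respiration), pass, stop at 25 Hz
      # (muscle), pass, stop at 50 Hz (mains), pass, stop above the ECG band.
      edges_hz = [0, 0.4, 0.7, 23, 24, 26, 27, 47, 48, 52, 53, 70, 80, nyq]
      desired  = [0, 0,   1,   1,  0,  0,  1,  1,  0,  0,  1,  1,  0,  0]

      taps = firls(801, np.array(edges_hz) / nyq, desired)   # odd number of taps
      print("number of taps:", len(taps), "DC gain:", np.sum(taps).round(4))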

  8. Optoelectronic data acquisition system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Xin; Liu, Chunyang; Song, De; Tong, Zhiguo; Liu, Xiangqing

    2015-11-01

    An optoelectronic data acquisition system is designed based on FPGA. An FPGA chip (EP1C3T144C8, from Altera's Cyclone device family) is used as the centre of logic control, an XTP2046 chip is used as the A/D converter, a host computer that communicates with the data acquisition system through an RS-232 serial communication interface is used as the display device, and a photoresistor is used as the photosensor. Verilog HDL is used to write the FPGA logic control code. ModelSim simulation proves that the timing sequence is correct. Test results from the actual hardware circuit indicate that this system meets the design requirements, with fast response and stable operation.

  9. Interior Design Education within a Human Ecological Framework

    ERIC Educational Resources Information Center

    Kaup, Migette L.; Anderson, Barbara G.; Honey, Peggy

    2007-01-01

    An education based in human ecology can greatly benefit interior designers as they work to understand and improve the human condition. Design programs housed in colleges focusing on human ecology can improve the interior design profession by taking advantage of their home base and emphasizing the human ecological framework in the design curricula.…

  10. A Design Framework for Syllabus Generator

    ERIC Educational Resources Information Center

    Abdous, M'hammed; He, Wu

    2008-01-01

    A well-designed syllabus provides students with a roadmap for an engaging and successful learning experience, whereas a poorly designed syllabus impedes communication between faculty and students, increases student anxiety and potential complaints, and reduces overall teaching effectiveness. In an effort to facilitate, streamline, and improve…

  11. Virtual Reality Hypermedia Design Frameworks for Science Instruction.

    ERIC Educational Resources Information Center

    Maule, R. William; Oh, Byron; Check, Rosa

    This paper reports on a study that conceptualizes a research framework to aid software design and development for virtual reality (VR) computer applications for instruction in the sciences. The framework provides methodologies for the processing, collection, examination, classification, and presentation of multimedia information within hyperlinked…

  12. A Framework for Designing Cluster Randomized Trials with Binary Outcomes

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Martinez, Andres

    2011-01-01

    The purpose of this paper is to provide a framework for approaching a power analysis for a CRT (cluster randomized trial) with a binary outcome. The authors suggest a framework in the context of a simple CRT and then extend it to a blocked design, or a multi-site cluster randomized trial (MSCRT). The framework is based on proportions, an…

  13. ADC and TDC implemented using FPGA

    SciTech Connect

    Wu, Jinyuan; Hansen, Sten; Shi, Zonghan; /Fermilab

    2007-11-01

    Several tests of FPGA devices programmed as analog waveform digitizers are discussed. The ADC uses a ramping-comparing scheme. A multi-channel ADC can be implemented with only a few resistors and capacitors as external components. Periodic logic levels are shaped by a passive RC network to generate exponential ramps. The FPGA differential input buffers are used as comparators to compare the ramps with the input signals. The times at which these ramps cross the input signals are digitized by time-to-digital converters (TDCs) implemented within the FPGA. The TDC portion of the logic alone has potentially a broad range of HEP/nuclear science applications. A 96-channel TDC card using FPGAs as TDCs, being designed for the Fermilab MIPP electronics upgrade project, is discussed. A deserializer circuit based on the multisampling circuit used in the TDC, the 'Digital Phase Follower' (DPF), is also documented.
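
    The ramping-comparing conversion can be summarized numerically: the TDC measures the time at which an exponential RC ramp crosses the input, and the input voltage is recovered by inverting the ramp equation. The Python sketch below uses assumed component values and TDC resolution purely for illustration.

      import numpy as np

      VREF, RC = 3.3, 1.0e-6            # assumed drive level and RC time constant
      LSB = 1.0e-9                      # assumed TDC resolution (1 ns)

      def crossing_time(v_in):
          """Time at which the ramp VREF*(1 - exp(-t/RC)) reaches v_in."""
          return -RC * np.log(1.0 - v_in / VREF)

      def adc_code(v_in):
          """Digitize the crossing time with the TDC, then map time -> voltage."""
          t = np.round(crossing_time(v_in) / LSB) * LSB          # quantized time
          return VREF * (1.0 - np.exp(-t / RC))                  # reconstructed voltage

      for v in (0.5, 1.0, 2.0, 3.0):
          print(f"Vin={v:4.2f} V  ->  reconstructed {adc_code(v):6.4f} V")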

  14. FPGA based Smart Wireless MIMO Control System

    NASA Astrophysics Data System (ADS)

    Usman Ali, Syed M.; Hussain, Sajid; Akber Siddiqui, Ali; Arshad, Jawad Ali; Darakhshan, Anjum

    2013-12-01

    In our present work, we have successfully designed and developed an FPGA-based smart wireless MIMO (Multiple Input & Multiple Output) system capable of controlling multiple industrial process parameters such as temperature, pressure, stress and vibration. To achieve this task we have used a Xilinx Spartan 3E FPGA (Field Programmable Gate Array) instead of conventional microcontrollers. The FPGA kit is connected to a PC via RF transceivers with a working range of about 100 meters. The developed smart system is capable of performing the control task assigned to it successfully. We have also made provision for the proposed system to be accessed for monitoring and control through the web and GSM. Our proposed system can be equally applied to hazardous and rugged industrial environments where a conventional system cannot work effectively.

  15. Compilation Techniques for Core Plus FPGA Systems

    NASA Technical Reports Server (NTRS)

    Conte, Tom

    2001-01-01

    The overall system architecture targeted in this study is a core-plus-fpga design, which is composed of a core VLIW DSP with on-chip memory and a set of special-purpose functional units implemented using FPGAs. A figure is given which shows the overall organization of the core-plus-fpga system. It is important to note that this architecture is relatively simple in concept and can be built from off-the-shelf commercial components, such as one of the Texas Instruments 320C6x family of DSPs for the core processor.

  16. A design framework for exploratory geovisualization in epidemiology

    PubMed Central

    Robinson, Anthony C.

    2009-01-01

    This paper presents a design framework for geographic visualization based on iterative evaluations of a toolkit designed to support cancer epidemiology. The Exploratory Spatio-Temporal Analysis Toolkit (ESTAT), is intended to support visual exploration through multivariate health data. Its purpose is to provide epidemiologists with the ability to generate new hypotheses or further refine those they may already have. Through an iterative user-centered design process, ESTAT has been evaluated by epidemiologists at the National Cancer Institute (NCI). Results of these evaluations are discussed, and a design framework based on evaluation evidence is presented. The framework provides specific recommendations and considerations for the design and development of a geovisualization toolkit for epidemiology. Its basic structure provides a model for future design and evaluation efforts in information visualization. PMID:20390052

  17. Living Design Memory: Framework, Implementation, Lessons Learned.

    ERIC Educational Resources Information Center

    Terveen, Loren G.; And Others

    1995-01-01

    Discusses large-scale software development and describes the development of the Designer Assistant to improve software development effectiveness. Highlights include the knowledge management problem; related work, including artificial intelligence and expert systems, software process modeling research, and other approaches to organizational memory;…

  18. Towards a Framework for Professional Curriculum Design

    ERIC Educational Resources Information Center

    Winch, Christopher

    2015-01-01

    Recent reviews of vocational qualifications in England have noted problems with their restricted nature. However, the underlying issue of how to conceptualise professional agency in curriculum design has not been properly addressed, either by the Richard or the Whitehead reviews. Drawing on comparative work in England and Europe it is argued that…

  19. A Framework for the Design of Service Systems

    NASA Astrophysics Data System (ADS)

    Tan, Yao-Hua; Hofman, Wout; Gordijn, Jaap; Hulstijn, Joris

    We propose a framework for the design and implementation of service systems, especially to design controls for long-term sustainable value co-creation. The framework is based on the software support tool e3-control. To illustrate the framework we use a large-scale case study, the Beer Living Lab, for simplification of customs procedures in international trade. The BeerLL shows how value co-creation can be achieved by reduction of administrative burden in international beer export due to electronic customs. Participants in the BeerLL are Heineken, IBM and Dutch Tax & Customs.

  20. MIDAS: a framework for integrated design and manufacturing process

    NASA Astrophysics Data System (ADS)

    Chung, Moon Jung; Kwon, Patrick; Pentland, Brian

    2000-10-01

    In this paper, we present the development of a framework for managing design and manufacturing processes in a distributed environment. The framework offers the following facilities: (1) representing complicated engineering design processes; (2) coordinating design activities and executing the process in a distributed environment; and (3) supporting collaborative design by sharing data and processes. Process flow graphs, which consist of tasks and the corresponding input and output data, are used to depict the engineering design process in a process modeling browser. The engineering activities in the represented processes can be executed in a distributed environment through the cockpit of the framework. Communication among the engineers involved, to support collaborative design, takes place in the collaborative design browser, with SML as the underlying data structure for representing process information, making the framework extensible and platform-independent. The formal and flexible approach of the proposed framework to integrating engineering design processes can also be effectively applied to coordinate concurrent engineering activities in a distributed environment.

  1. A Virtual Reality System Framework for Industrial Product Design Validation

    NASA Astrophysics Data System (ADS)

    Ladeveze, Nicolas; Sghaier, Adel; Fourquet, Jean Yves

    2009-03-01

    This paper presents a virtual reality simulation architecture intended to improve the design quality of product parts and the way manufacturing and maintenance requirements are taken into account, in order to reduce the cost and time of product design. This architecture merges previous studies into a unique framework dedicated to product pre-design. Using several interfaces, the architecture allows fast validation of pre-designed products over a large scope, from multiphysics computation to maintainability studies.

  2. CROC FPGA Firmware

    Energy Science and Technology Software Center (ESTSC)

    2009-12-01

    The CROC FPGA firmware controls the operation of the CROC hardware, primarily determining the location of neutron events and discriminating against false triggers by examining the outputs of multiple analog comparators. A number of statistical algorithms are encoded within the firmware to achieve reliable operation. Other communication and control functions are also part of the firmware.

  3. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FFP) is described. The FFP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.

  4. Design of single object model of software reuse framework

    NASA Astrophysics Data System (ADS)

    Yan, Liu

    2011-12-01

    In order to fully realize the reuse significance of the software reuse framework, this paper analyzes in detail the single object model mentioned in the article "The overall design of software reuse framework" and classifies it into an add/delete/modify mode, a check mode, and an integrated search/scroll/display mode. The three modes correspond to their own interface design templates, class designs and database design concepts. This modelling approach helps developers organize their ideas and speeds up development; even laymen can complete the development task easily.

  5. A Comprehensive Learning Event Design Using a Communication Framework

    ERIC Educational Resources Information Center

    Bower, Robert L.

    1975-01-01

    A learning event design for accountability uses a communications framework. The example given is a slide presentation on the invasion of Cuba during the Spanish-American War. Design components include introduction, objectives, media, involvement plans, motivation, bibliography, recapitulation, involvement sheets, evaluation, stimulus-response…

  6. A concept ideation framework for medical device design.

    PubMed

    Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar

    2015-06-01

    Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems. PMID:25956618

  7. Learning Experience as Transaction: A Framework for Instructional Design

    ERIC Educational Resources Information Center

    Parrish, Patrick E.; Wilson, Brent G.; Dunlap, Joanna C.

    2011-01-01

    This article presents a framework for understanding learning experience as an object for instructional design--as an object for design as well as research and understanding. Compared to traditional behavioral objectives or discrete cognitive skills, the object of experience is more holistic, requiring simultaneous attention to cognition, behavior,…

  8. Proposal of ROS-compliant FPGA component for low-power robotic systems

    NASA Astrophysics Data System (ADS)

    Li, Rong; Quan, Lei; Cai, YouLin

    2015-12-01

    In recent years, robots have been required to be autonomous and their software has become sophisticated. Robots suffer from insufficient computing performance, since they cannot be equipped with high-performance microprocessors due to battery-powered operation. On the other hand, FPGA devices can accelerate specific functions in a robot system without increasing power consumption by implementing customized circuits. However, it is difficult to introduce FPGA devices into a robot due to the large development cost of an FPGA circuit compared to software. Therefore, in this study, we propose an FPGA component technology, compliant with ROS (Robot Operating System), for easy integration of an FPGA into robots. As a case study, we designed a ROS-compliant FPGA component for image labeling using the Xilinx Zynq platform. The developed ROS-compliant FPGA component performs 1.7 times faster than the ordinary ROS software component.
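
    As a rough idea of how such a component is exposed to the rest of a robot system, a ROS 1 (rospy) node could wrap the FPGA-accelerated function behind standard topics. The topic names and the fpga_label() call in the sketch below are hypothetical placeholders; the paper's actual interface is not described here.

      #!/usr/bin/env python
      # Hypothetical ROS 1 wrapper around an FPGA-accelerated image-labeling call.
      import rospy
      from sensor_msgs.msg import Image

      def fpga_label(raw_bytes):
          # Placeholder for the call into the FPGA component (e.g. via a device driver).
          return raw_bytes

      def on_image(msg, pub):
          labeled = Image()
          labeled.header, labeled.height, labeled.width = msg.header, msg.height, msg.width
          labeled.encoding, labeled.step = msg.encoding, msg.step
          labeled.data = fpga_label(msg.data)
          pub.publish(labeled)

      if __name__ == "__main__":
          rospy.init_node("fpga_labeling")
          pub = rospy.Publisher("labels", Image, queue_size=1)
          rospy.Subscriber("image_raw", Image, on_image, callback_args=pub)
          rospy.spin()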

  9. CAD framework concept for the design of integrated microsystems

    NASA Astrophysics Data System (ADS)

    Poppe, Andras; Rencz, Marta; Szekely, Vladimir; Karam, Jean Michel; Courtois, Bernard; Hofmann, K.; Glesner, M.

    1995-09-01

    Besides foundry facilities, CAD tools are also required to move microsystems from research prototypes to an industrial market. CAD tools for microelectronics have been developed for more than 20 years, both in the field of circuit design tools and in the area of TCAD tools. Usually a microelectronics engineer is involved in only one side of the design: either application design or manufacturing design, but not both. This is one point to be followed in microsystem design if a higher level of design productivity is expected. Another point is that certain standards should also be established for microsystem design: based on selected technologies, a set of standard components should be pre-designed and collected in a standard component library. This component library should be available from within microsystem design frameworks, which might well be established by proper configuration and extension of existing IC design frameworks. A very important point is the development of proper simulation models of microsystem components that are based on, e.g., the FEM results of the pre-design phase and are provided in the form of an analog VHDL script. After detailing the above-mentioned considerations, we discuss the development work on a microsystem design framework. Its goal is to provide a set of powerful tools for microsystem application designers. This future framework will be composed of different industry-standard CAD programs and different design databases, which in certain cases are completed with special interfaces and special-purpose simulation tools.

  10. Project Assessment Framework through Design (PAFTD) - A Project Assessment Framework in Support of Strategic Decision Making

    NASA Technical Reports Server (NTRS)

    Depenbrock, Brett T.; Balint, Tibor S.; Sheehy, Jeffrey A.

    2014-01-01

    Research and development organizations that push the innovation edge of technology frequently encounter challenges when attempting to identify an investment strategy and to accurately forecast the cost and schedule performance of selected projects. Fast moving and complex environments require managers to quickly analyze and diagnose the value of returns on investment versus allocated resources. Our Project Assessment Framework through Design (PAFTD) tool facilitates decision making for NASA senior leadership to enable more strategic and consistent technology development investment analysis, beginning at implementation and continuing through the project life cycle. The framework takes an integrated approach by leveraging design principles of useability, feasibility, and viability and aligns them with methods employed by NASA's Independent Program Assessment Office for project performance assessment. The need exists to periodically revisit the justification and prioritization of technology development investments as changes occur over project life cycles. The framework informs management rapidly and comprehensively about diagnosed internal and external root causes of project performance.

  11. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2005-07-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 concerns the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  12. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2004-11-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 concerns the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  13. A Framework to Design and Optimize Chemical Flooding Processes

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2006-08-31

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 concerns the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  14. Step-by-Step Design of an FPGA-Based Digital Compensator for DC/DC Converters Oriented to an Introductory Course

    ERIC Educational Resources Information Center

    Zumel, P.; Fernandez, C.; Sanz, M.; Lazaro, A.; Barrado, A.

    2011-01-01

    In this paper, a short introductory course to introduce field-programmable gate array (FPGA)-based digital control of dc/dc switching power converters is presented. Digital control based on specific hardware has been at the leading edge of low-medium power dc/dc switching converters in recent years. Besides industry's interest in this topic, from…

  15. Activity Theory as a Framework For Designing Constructivist Learning Environments.

    ERIC Educational Resources Information Center

    Jonassen, David H.; Rohrer-Murphy, Lucia

    1999-01-01

    Defines activity theory as a socio-cultural and socio-historical lens through which the interaction of human activity and consciousness within its relevant environmental context can be analyzed. Describes how activity theory can be used as a framework for analyzing activities and settings for the purpose of designing constructivist learning…

  16. Sustainable Supply Chain Design by the P-Graph Framework

    EPA Science Inventory

    The present work proposes a computer-aided methodology for designing sustainable supply chains in terms of sustainability metrics by resorting to the P-graph framework. The methodology is an outcome of the collaboration between the Office of Research and Development (ORD) of the ...

  17. A Review of Literacy Frameworks for Learning Environments Design

    ERIC Educational Resources Information Center

    Rebmann, Kristen Radsliff

    2013-01-01

    This article charts the development of three literacy research frameworks: multiliteracies, new literacies, and popular literacies. By reviewing the literature surrounding three current conceptions of literacy, an attempt is made to form an integrative grouping that captures the most relevant elements of each for learning environments design.…

  18. TARDIS: An Automation Framework for JPL Mission Design and Navigation

    NASA Technical Reports Server (NTRS)

    Roundhill, Ian M.; Kelly, Richard M.

    2014-01-01

    Mission Design and Navigation at the Jet Propulsion Laboratory has implemented an automation framework tool to assist in orbit determination and maneuver design analysis. This paper describes the lessons learned from previous automation tools and how they have been implemented in this tool. In addition this tool has revealed challenges in software implementation, testing, and user education. This paper describes some of these challenges and invites others to share their experiences.

  19. FPGA control utility in JAVA

    NASA Astrophysics Data System (ADS)

    Drabik, Paweł; Pozniak, Krzysztof T.

    2008-01-01

    Processing of large amounts of data for high energy physics experiments is modeled here in the form of a multichannel, distributed measurement system based on photonic and electrical modules. A method to control such a system is presented in this paper. The method is based on a new approach to address space management called the Component Internal Interface (CII). An updatable and configurable environment provided by the FPGA fulfills the technological and functional demands imposed on complex measurement systems of the considered kind. The purpose, design process and realization of the object-oriented software application, written in a high-level language, are described. A few examples of usage of the suggested application are presented. The application is intended for use in HEP experiments and in the FLASH and XFEL lasers.

  20. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    The multi-scale 2-D Gaussian filter has been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, its computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by glitches on the clock. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single Altera Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period, and is therefore suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
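
    The multiplier-saving separation mentioned above rests on the fact that a 2-D Gaussian kernel factors into two 1-D convolutions. The NumPy sketch below demonstrates the equivalence in software; it is a reference model under assumed kernel parameters, not the pipelined FPGA datapath.

      import numpy as np
      from scipy.signal import convolve2d

      def gaussian_kernel_1d(sigma, radius):
          k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
          return k / k.sum()

      def gaussian_filter_separable(img, sigma, radius=4):
          """Row-wise then column-wise 1-D convolution (two passes)."""
          k = gaussian_kernel_1d(sigma, radius)
          rows = np.apply_along_axis(np.convolve, 1, img.astype(float), k, 'same')
          return np.apply_along_axis(np.convolve, 0, rows, k, 'same')

      img = np.random.rand(64, 64)
      sep = gaussian_filter_separable(img, sigma=1.5)

      # Equivalence check against direct 2-D convolution with the outer-product kernel.
      k = gaussian_kernel_1d(1.5, 4)
      direct = convolve2d(img, np.outer(k, k), mode='same')
      print("max difference:", np.abs(sep - direct).max())   # numerically negligible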

  1. Study of a Fine Grained Threaded Framework Design

    NASA Astrophysics Data System (ADS)

    Jones, C. D.

    2012-12-01

    Traditionally, HEP experiments exploit the multiple cores in a CPU by having each core process one event. However, future PC designs are expected to use CPUs which double the number of processing cores at the same rate as the cost of memory falls by a factor of two. This effectively means the amount of memory per processing core will remain constant. This is a major challenge for LHC processing frameworks since the LHC is expected to deliver more complex events (e.g. greater pileup events) in the coming years while the LHC experiment's frameworks are already memory constrained. Therefore in the not so distant future we may need to be able to efficiently use multiple cores to process one event. In this presentation we will discuss a design for an HEP processing framework which can allow very fine grained parallelization within one event as well as supporting processing multiple events simultaneously while minimizing the memory footprint of the job. The design is built around the libdispatch framework created by Apple Inc. (a port for Linux is available) whose central concept is the use of task queues. This design also accommodates the reality that not all code will be thread safe and therefore allows one to easily mark modules or sub parts of modules as being thread unsafe. In addition, the design efficiently handles the requirement that events in one run must all be processed before starting to process events from a different run. After explaining the design we will provide measurements from simulating different processing scenarios where the processing times used for the simulation are drawn from processing times measured from actual CMS event processing.

  2. Multigrid shallow water equations on an FPGA

    NASA Astrophysics Data System (ADS)

    Jeffress, Stephen; Duben, Peter; Palmer, Tim

    2015-04-01

    A novel computing technology for multigrid shallow water equations is investigated. As power consumption begins to constrain traditional supercomputing advances, weather and climate simulators are exploring alternative technologies that achieve efficiency gains through massively parallel and low-power architectures. In recent years FPGA implementations of reduced-complexity atmospheric models have shown accelerated speeds and reduced power consumption compared to multi-core CPU integrations. We continue this line of research by designing an FPGA dataflow engine for a multigrid version of the 2D shallow water equations. The multigrid algorithm couples grids of variable resolution to improve accuracy. We show that a significant reduction of precision in the floating point representation of the fine grid variables allows greater parallelism and thus improved overall performance while maintaining accurate integrations. Preliminary designs have been constructed by software emulation. Results of the hardware implementation will be presented at the conference.

  3. FPGA Based Reconfigurable ATM Switch Test Bed

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.; Jones, Robert E.

    1998-01-01

    Various issues associated with "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) Network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract sheared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: Advances in FPGA make hardware emulation feasible for performance evaluation, hardware emulation can provide several orders of magnitude speed-up over software simulation; due to the complexity of hardware synthesis process, development in emulation is much more difficult than simulation and requires knowledge in both networks and digital design.

  4. Real-time panoramic infrared imaging system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhang, Hao-Jun; Shen, Yong-Ge

    2010-11-01

    During the past decades, a signal processing architecture based on FPGA, a conventional DSP processor and a host computer has been popular for infrared and other electro-optical systems. With increasing processing requirements, this architecture starts to show its limitations in several respects. This paper elaborates an FPGA-based solution for a panoramic imaging system as our first step in upgrading the processing module to a System-on-Chip (SoC) solution. Firstly, we compare this new architecture with the traditional one to show its superiority, mainly in video processing ability, reduction in development workload and miniaturization of the system architecture. Afterwards, this paper provides an in-depth description of the imaging system, including the system architecture and its functions, and addresses several related issues followed by future development. FPGAs have developed rapidly during the past years, not only in silicon devices but also in design flows and tools. In the end, we briefly present our future system development and introduce the new design tools that compensate for the limitations of the traditional FPGA design methodology. The advanced design flow through Simulink and Xilinx System Generator (Sysgen) is elaborated, which enables engineers to develop sophisticated DSP algorithms and implement them in FPGA more efficiently. It is believed that this new design approach can shorten the system design cycle by allowing rapid prototyping and refining of the design process.

  5. FPGA Implementation of Reed-Solomon Decoder for IEEE 802.16 WiMAX Systems using Simulink-Sysgen Design Environment

    SciTech Connect

    Bobrek, Miljko; Albright, Austin P

    2012-01-01

    This paper presents an FPGA implementation of the Reed-Solomon decoder for use in IEEE 802.16 WiMAX systems. The decoder is based on the RS(255,239) code, and is additionally shortened and punctured according to the WiMAX specifications. A Simulink model based on the Sysgen library of Xilinx blocks was used for simulation and hardware implementation. At the end, simulation results and hardware implementation performance are presented.

  6. SysSon - A Framework for Systematic Sonification Design

    NASA Astrophysics Data System (ADS)

    Vogt, Katharina; Goudarzi, Visda; Holger Rutz, Hanns

    2015-04-01

    SysSon is a research approach to introducing sonification systematically to a scientific community where it is not yet commonly used - e.g., in climate science. In doing so, both technical and socio-cultural barriers have to be addressed. The approach was further developed with climate scientists, who participated in contextual inquiries, usability tests and a collaborative design workshop. These extensive user tests resulted in our final software framework. As a frontend, a graphical user interface allows climate scientists to parametrize standard sonifications with their own data sets. Additionally, an interactive shell allows users competent in sound design to code new sonifications. The framework is a standalone desktop application, available as open source (for details see http://sysson.kug.ac.at/), and works with data in NetCDF format.

  7. Design and applications of a multimodality image data warehouse framework.

    PubMed

    Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  8. Design and Applications of a Multimodality Image Data Warehouse Framework

    PubMed Central

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  9. Energy efficiency analysis and implementation of AES on an FPGA

    NASA Astrophysics Data System (ADS)

    Kenney, David

    The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rjimen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and be useful for a wide range of applications with varying throughput, area, power dissipation and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research done in the area of low power or energy efficient FPGA based AES; in fact, it is rare for estimates on power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG) which is used in the proposed energy efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA based AES implementations that included power estimations. Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher

  10. A Human Factors Framework for Payload Display Design

    NASA Technical Reports Server (NTRS)

    Dunn, Mariea C.; Hutchinson, Sonya L.

    1998-01-01

    During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.

  11. When Playing Meets Learning: Methodological Framework for Designing Educational Games

    NASA Astrophysics Data System (ADS)

    Linek, Stephanie B.; Schwarz, Daniel; Bopp, Matthias; Albert, Dietrich

    Game-based learning builds upon the idea of using the motivational potential of video games in the educational context. Thus, the design of educational games has to address optimizing enjoyment as well as optimizing learning. Within the EC-project ELEKTRA a methodological framework for the conceptual design of educational games was developed. Thereby state-of-the-art psycho-pedagogical approaches were combined with insights of media-psychology as well as with best-practice game design. This science-based interdisciplinary approach was enriched by enclosed empirical research to answer open questions on educational game-design. Additionally, several evaluation-cycles were implemented to achieve further improvements. The psycho-pedagogical core of the methodology can be summarized by the ELEKTRA's 4Ms: Macroadaptivity, Microadaptivity, Metacognition, and Motivation. The conceptual framework is structured in eight phases which have several interconnections and feedback-cycles that enable a close interdisciplinary collaboration between game design, pedagogy, cognitive science and media psychology.

  12. Regular FPGA based on regular fabric

    NASA Astrophysics Data System (ADS)

    Xun, Chen; Jianwen, Zhu; Minxuan, Zhang

    2011-08-01

    In the sub-wavelength regime, design for manufacturability (DFM) becomes increasingly important for field programmable gate arrays (FPGAs). In this paper, an automated tile generation flow targeting micro-regular fabric is reported. Using a publicly accessible, well-documented academic FPGA as a case study, we found that compared to the tile generators previously reported, our generated micro-regular tile incurs less than 10% area overhead, which could be potentially recovered by process window optimization, thanks to its superior printability. In addition, we demonstrate that on 45 nm technology, the generated FPGA tile reduces lithography induced process variation by 33%, and reduce probability of failure by 21.2%. If a further overhead of 10% area can be recovered by enhanced resolution, we can achieve the variation reduction of 93.8% and reduce the probability of failure by 16.2%.

  13. 3D FFTs on a Single FPGA

    PubMed Central

    Humphries, Benjamin; Zhang, Hansen; Sheng, Jiayi; Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    The 3D FFT is critical in many physical simulations and image processing applications. On FPGAs, however, the 3D FFT was thought to be inefficient relative to other methods such as convolution-based implementations of multi-grid. We find the opposite: a simple design, operating at a conservative frequency, takes 4μs for 16³, 21μs for 32³, and 215μs for 64³ single precision data points. The first two of these compare favorably with the 25μs and 29μs obtained running on a current Nvidia GPU. Some broader significance is that this is a critical piece in implementing a large scale FPGA-based MD engine: even a single FPGA is capable of keeping the FFT off of the critical path for a large fraction of possible MD simulations. PMID:26594666
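
    The row-column decomposition that FPGA (and most software) 3D FFTs rely on can be checked in a few lines of NumPy: applying 1-D FFTs along each axis in turn reproduces the full 3-D transform. This is only a functional sketch of the decomposition, not the authors' hardware design.

        import numpy as np

        n = 16                                    # 16^3 grid, as in the paper's smallest case
        x = np.random.rand(n, n, n).astype(np.complex64)

        # 3-D FFT as three passes of 1-D FFTs, one along each axis
        y = np.fft.fft(x, axis=0)
        y = np.fft.fft(y, axis=1)
        y = np.fft.fft(y, axis=2)

        # Matches the library's direct 3-D transform (up to floating-point rounding)
        assert np.allclose(y, np.fft.fftn(x), atol=1e-3)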

  14. New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth; Pellish, Jonathan

    2015-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Subsequently, fail-safe and mitigation strategies will differ per FPGA type. In this session several design approaches for safe systems will be presented. It will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies; so that users can select appropriate mitigation and perform the required trade for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  15. Deterministic Design Optimization of Structures in OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were compared against CometBoards and are reported in this paper.
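
    For readers unfamiliar with the framework, a minimal OpenMDAO problem might look roughly like the sketch below (assuming a recent OpenMDAO 3.x release; the toy paraboloid objective stands in for the structural solvers and gradient-based drivers discussed above).

        import openmdao.api as om

        # Toy stand-in for a structural sizing problem: minimize a paraboloid.
        prob = om.Problem()
        prob.model.add_subsystem('parab',
                                 om.ExecComp('f = (x - 3.0)**2 + x * y + (y + 4.0)**2 - 3.0'),
                                 promotes=['*'])

        prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')   # a gradient-based driver
        prob.model.add_design_var('x', lower=-50.0, upper=50.0)
        prob.model.add_design_var('y', lower=-50.0, upper=50.0)
        prob.model.add_objective('f')

        prob.setup()
        prob.set_val('x', 3.0)
        prob.set_val('y', -4.0)
        prob.run_driver()

        print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))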

  16. FPGA Flash Memory High Speed Data Acquisition

    NASA Technical Reports Server (NTRS)

    Gonzalez, April

    2013-01-01

    The purpose of this research is to design and implement a VHDL ONFI Controller module for a Modular Instrumentation System. The goal of the Modular Instrumentation System will be to have a low power device that will store data and send the data at a low speed to a processor. Such a system will give NASA an advantage over other purchased binary IP by allowing the memory controller module to be re-used and modified. To accomplish the performance criteria of a low power system, an in-house auxiliary board (Flash/ADC board), FPGA development kit, debug board, and modular instrumentation board will be jointly used for the data acquisition. The Flash/ADC board contains four 1 MSPS input channels and an Open NAND Flash memory module together with an analog-to-digital converter. The ADC, data bits, and control line signals from the board are sent to a Microsemi/Actel FPGA development kit for VHDL programming of the flash memory WRITE, READ, READ STATUS, ERASE, and RESET operation waveforms using Libero software. The debug board will be used for verification of the analog input signal and to communicate via a serial interface with the modular instrumentation. The scope of the new controller module was to find and develop an ONFI controller, with the debug board layout designed and completed for manufacture. Successful flash memory operation waveform test routines were completed, simulated, and tested to work on the FPGA board. Through connection of the Flash/ADC board with the FPGA, it was found that the device specifications were not being met, with Vdd reaching only half of its voltage. Further testing showed that it was the manufactured Flash/ADC board that contained a misalignment with the ONFI memory module traces. The errors proved to be too great to fix in the time limit set for the project.

  17. An enhanced BSIM modeling framework for self-heating aware circuit design

    NASA Astrophysics Data System (ADS)

    Schleyer, M.; Leuschner, S.; Baumgartner, P.; Mueller, J.-E.; Klar, H.

    2014-11-01

    This work proposes a modeling framework to enhance the industry-standard BSIM4 MOSFET models with capabilities for coupled electro-thermal simulations. An automated simulation environment extracts thermal information from model data as provided by the semiconductor foundry. The standard BSIM4 model is enhanced with a Verilog-A based wrapper module, adding thermal nodes which can be connected to a thermal-equivalent RC network. The proposed framework allows a fully automated extraction process based on the netlist of the top-level design and the model library. A numerical analysis tool is used to control the extraction flow and to obtain all required parameters. The framework is used to model self-heating effects on a fully integrated class A/AB power amplifier (PA) designed in a standard 65 nm CMOS process. The PA is driven with +30 dBm output power, leading to an average temperature rise of approximately 40 °C over ambient temperature.
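
    The coupled electro-thermal idea behind the wrapper can be illustrated with a toy fixed-point iteration: device power heats a thermal resistance, the temperature rise feeds back into the (temperature-dependent) dissipation, and the loop repeats until it settles. The numbers and the linear temperature model below are assumptions for illustration, not foundry data or the authors' Verilog-A code.

        # Toy self-heating loop: P depends (weakly) on temperature, T depends on P via R_th.
        r_th = 80.0            # junction-to-ambient thermal resistance (K/W), assumed
        t_amb = 25.0           # ambient temperature (deg C)
        p_25 = 0.5             # dissipated power at 25 deg C (W), assumed
        alpha = 1e-3           # fractional power increase per kelvin, assumed

        t = t_amb
        for _ in range(50):                        # simple fixed-point iteration
            p = p_25 * (1.0 + alpha * (t - 25.0))  # temperature-dependent dissipation
            t_new = t_amb + r_th * p               # thermal node: delta-T = P * R_th
            if abs(t_new - t) < 1e-6:
                break
            t = t_new

        print(f"steady-state rise ~ {t - t_amb:.1f} K at P = {p * 1e3:.0f} mW")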

  18. A computational molecular design framework for crosslinked polymer networks.

    PubMed

    Eslick, J C; Ye, Q; Park, J; Topp, E M; Spencer, P; Camarda, K V

    2009-05-21

    Crosslinked polymers are important in a very wide range of applications including dental restorative materials. However, currently used polymeric materials experience limited durability in the clinical oral environment. Researchers in the dental polymer field have generally used a time-consuming experimental trial-and-error approach to the design of new materials. The application of computational molecular design (CMD) to crosslinked polymer networks has the potential to facilitate development of improved polymethacrylate dental materials. CMD uses quantitative structure property relations (QSPRs) and optimization techniques to design molecules possessing desired properties. This paper describes a mathematical framework which provides tools necessary for the application of CMD to crosslinked polymer systems. The novel parts of the system include the data structures used, which allow for simple calculation of structural descriptors, and the formulation of the optimization problem. A heuristic optimization method, Tabu Search, is used to determine candidate monomers. Use of a heuristic optimization algorithm makes the system more independent of the types of QSPRs used, and more efficient when applied to combinatorial problems. A software package has been created which provides polymer researchers access to the design framework. A complete example of the methodology is provided for polymethacrylate dental materials. PMID:23904665
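
    A generic Tabu Search loop of the kind described, optimizing a toy objective over a binary design vector (the real system scores candidate monomers through QSPR models), might be sketched as follows; everything here is illustrative, not the authors' code.

        import random

        def objective(x):
            # Toy stand-in for a QSPR-based property score (higher is better).
            target = [1, 0, 1, 1, 0, 0, 1, 0]
            return sum(1 for a, b in zip(x, target) if a == b)

        def neighbors(x):
            # Single bit-flip moves; each move is identified by the flipped index.
            return [(i, x[:i] + [1 - x[i]] + x[i + 1:]) for i in range(len(x))]

        def tabu_search(n_bits=8, iters=50, tenure=5):
            current = [random.randint(0, 1) for _ in range(n_bits)]
            best, best_val = current, objective(current)
            tabu = {}                                    # move index -> iteration when it expires
            for it in range(iters):
                candidates = [(objective(x), i, x) for i, x in neighbors(current)
                              if tabu.get(i, -1) < it or objective(x) > best_val]  # aspiration
                val, move, current = max(candidates)
                tabu[move] = it + tenure                 # forbid reversing this move for a while
                if val > best_val:
                    best, best_val = current, val
            return best, best_val

        print(tabu_search())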

  19. Synergy: A language and framework for robot design

    NASA Astrophysics Data System (ADS)

    Katragadda, Lalitesh Kumar

    Due to escalation in complexity, capability and application, robot design is increasingly difficult. A design environment can automate many design tasks, relieving the designer's burden. Prior to robot development, designers compose a robot from existing or custom developed components, simulate performance, optimize configuration and parameters, and write software for the robot. Robot designers customize these facets to the robot using a variety of software ranging from spreadsheets to C code to CAD tools. Valuable resources are expended, and very little of this expertise and development is reusable. This research begins with the premise that a language to comprehensively represent robots is lacking and that the aforementioned design tasks can be automated once such a language exists. This research proposes and demonstrates the following thesis: "A language to represent robots, along with a framework to generate simulations, optimize designs and generate control software, increases the effectiveness of design." Synergy is the software developed in this research to reflect this philosophy. Synergy was prototyped and demonstrated in the context of lunar rover design, a challenging real-world problem with multiple requirements and a broad design space. Synergy was used to automatically optimize robot parameters and select parts to generate effective designs, while meeting constraints of the embedded components and sub-systems. The generated designs are superior in performance and consistency when compared to designs by teams of designers using the same knowledge. Using a single representation, multiple designs are generated for four distinct lunar exploration objectives. Synergy uses the same representation to auto-generate landing simulations and simultaneously generate control software for the landing. Synergy consists of four software agents. A database and spreadsheet agent compiles the design and component information, generating component interconnections and

  20. Ecohydrology frameworks for green infrastructure design and ecosystem service provision

    NASA Astrophysics Data System (ADS)

    Pavao-Zuckerman, M.; Knerl, A.; Barron-Gafford, G.

    2014-12-01

    Urbanization is a dominant form of landscape change that affects the structure and function of ecosystems and alters control points in biogeochemical and hydrologic cycles. Green infrastructure (GI) has been proposed as a solution to many urban environmental challenges and may be a way to manage biogeochemical control points. Despite this promise, there has been relatively limited empirical focus to evaluate the efficacy of GI, relationships between design and function, and the ability of GI to provide ecosystem services in cities. This work has been driven by goals of adapting GI approaches to dryland cities and to harvest rain and storm water for providing ecosystem services related to storm water management and urban heat island mitigation, as well as other co-benefits. We will present a modification of ecohydrologic theory for guiding the design and function of green infrastructure for dryland systems that highlights how GI functions in context of Trigger - Transfer - Reserve - Pulse (TTRP) dynamic framework. Here we also apply this TTRP framework to observations of established street-scape green infrastructure in Tucson, AZ, and an experimental installation of green infrastructure basins on the campus of Biosphere 2 (Oracle, AZ) where we have been measuring plant performance and soil biogeochemical functions. We found variable sensitivity of microbial activity, soil respiration, N-mineralization, photosynthesis and respiration that was mediated both by elements of basin design (soil texture and composition, choice of surface mulches) and antecedent precipitation inputs and soil moisture conditions. The adapted TTRP framework and field studies suggest that there are strong connections between design and function that have implications for stormwater management and ecosystem service provision in dryland cities.

  1. Onboard FPGA-based SAR processing for future spaceborne systems

    NASA Technical Reports Server (NTRS)

    Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce; Rogez, Francois; Rosen, Paul; Shah, Biren; Taft, Stephanie

    2004-01-01

    We present a real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne systems. In particular, we will discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.

  2. A Framework for Designing Scaffolds That Improve Motivation and Cognition

    PubMed Central

    Belland, Brian R.; Kim, ChanMin; Hannafin, Michael J.

    2013-01-01

    A problematic, yet common, assumption among educational researchers is that when teachers provide authentic, problem-based experiences, students will automatically be engaged. Evidence indicates that this is often not the case. In this article, we discuss (a) problems with ignoring motivation in the design of learning environments, (b) problem-based learning and scaffolding as one way to help, (c) how scaffolding has strayed from what was originally equal parts motivational and cognitive support, and (d) a conceptual framework for the design of scaffolds that can enhance motivation as well as cognitive outcomes. We propose guidelines for the design of computer-based scaffolds to promote motivation and engagement while students are solving authentic problems. Remaining questions and suggestions for future research are then discussed. PMID:24273351

  3. Microsystem design framework based on tool adaptations and library developments

    NASA Astrophysics Data System (ADS)

    Karam, Jean Michel; Courtois, Bernard; Rencz, Marta; Poppe, Andras; Szekely, Vladimir

    1996-09-01

    Besides foundry facilities, Computer-Aided Design (CAD) tools are also required to move microsystems from research prototypes to an industrial market. This paper describes a Computer-Aided-Design Framework for microsystems, based on selected existing software packages adapted and extended for microsystem technology, assembled with libraries where models are available in the form of standard cells described at different levels (symbolic, system/behavioral, layout). In microelectronics, CAD has already attained a highly sophisticated and professional level, where complete fabrication sequences are simulated and the device and system operation is completely tested before manufacturing. In comparison, the art of microsystem design and modelling is still in its infancy. However, at least for the numerical simulation of the operation of single microsystem components, such as mechanical resonators, thermo-elements, elastic diaphragms, reliable simulation tools are available. For the different engineering disciplines (like electronics, mechanics, optics, etc) a lot of CAD-tools for the design, simulation and verification of specific devices are available, but there is no CAD-environment within which we could perform a (micro-)system simulation due to the different nature of the devices. In general there are two different approaches to overcome this limitation: the first possibility would be to develop a new framework tailored for microsystem-engineering. The second approach, much more realistic, would be to use the existing CAD-tools which contain the most promising features, and to extend these tools so that they can be used for the simulation and verification of microsystems and of the devices involved. These tools are assembled with libraries in a microsystem design environment allowing a continuous design flow. The approach is driven by the wish to make microsystems accessible to a large community of people, including SMEs and non-specialized academic institutions.

  4. VIRTEX-5 Fpga Implementation of Advanced Encryption Standard Algorithm

    NASA Astrophysics Data System (ADS)

    Rais, Muhammad H.; Qasim, Syed M.

    2010-06-01

    In this paper, we present an implementation of Advanced Encryption Standard (AES) cryptographic algorithm using state-of-the-art Virtex-5 Field Programmable Gate Array (FPGA). The design is coded in Very High Speed Integrated Circuit Hardware Description Language (VHDL). Timing simulation is performed to verify the functionality of the designed circuit. Performance evaluation is also done in terms of throughput and area. The design implemented on Virtex-5 (XC5VLX50FFG676-3) FPGA achieves a maximum throughput of 4.34 Gbps utilizing a total of 399 slices.
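
    The reported throughput is consistent with the usual relation for a block cipher core, throughput = block size x f_max / cycles per block; the sketch below only illustrates that arithmetic. The operating frequency and the implied cycle count are assumptions, not values taken from the paper.

        block_bits = 128
        throughput_bps = 4.34e9          # reported throughput

        # Hypothetical operating frequency for illustration; we back-solve an implied
        # cycles-per-block figure from it rather than quoting one from the paper.
        f_max_hz = 340e6                 # assumed
        cycles_per_block = block_bits * f_max_hz / throughput_bps
        print(f"implied cycles per 128-bit block: {cycles_per_block:.1f}")

        # Conversely, a fully pipelined core producing one block per cycle would give:
        print(f"1 block/cycle at {f_max_hz / 1e6:.0f} MHz -> {block_bits * f_max_hz / 1e9:.2f} Gbps")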

  5. FPGA Implementation of Heart Rate Monitoring System.

    PubMed

    Panigrahy, D; Rakshit, M; Sahu, P K

    2016-03-01

    This paper describes a field programmable gate array (FPGA) implementation of a system that calculates the heart rate from the Electrocardiogram (ECG) signal. After heart rate calculation, tachycardia, bradycardia or a normal heart rate can easily be detected. ECG is a diagnostic tool routinely used to assess the electrical activity and muscular function of the heart. Heart rate is calculated by detecting the R peaks in the ECG signal. Providing a portable, continuous heart rate monitoring system for ECG patients requires dedicated hardware. An FPGA provides easy testability and allows faster implementation and verification of a new design. We have proposed a five-stage methodology using basic VHDL blocks such as addition, multiplication and data conversion (real to fixed point and vice versa). Our proposed heart rate calculation (R-peak detection) method has been validated using the 48 first-channel ECG records of the MIT-BIH arrhythmia database. It shows an accuracy of 99.84%, a sensitivity of 99.94% and a positive predictive value of 99.89%. Our proposed method outperforms other well-known methods in the case of pathological ECG signals and was successfully implemented on an FPGA. PMID:26643079
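
    The final heart-rate step, and the accuracy figures quoted, follow directly from R-peak locations: the rate is 60 divided by the RR interval, and sensitivity and positive predictive value come from true/false detections. The small sketch below is illustrative only (example numbers, not the authors' VHDL pipeline).

        import numpy as np

        fs = 360.0                                   # MIT-BIH sampling rate (Hz)
        r_peaks = np.array([100, 370, 640, 905])     # detected R-peak sample indices (example values)

        rr_s = np.diff(r_peaks) / fs                 # RR intervals in seconds
        heart_rate_bpm = 60.0 / rr_s
        print(heart_rate_bpm)                        # ~80 bpm; <60 bradycardia, >100 tachycardia

        def detection_metrics(tp, fp, fn):
            sensitivity = tp / (tp + fn)             # fraction of true beats detected
            ppv = tp / (tp + fp)                     # fraction of detections that are true beats
            return sensitivity, ppv

        print(detection_metrics(tp=109000, fp=120, fn=65))   # example counts, not the paper's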

  6. Genetic apertures: an improved sparse aperture design framework.

    PubMed

    Salvaggio, Philip S; Schott, John R; McKeown, Donald M

    2016-04-20

    The majority of optical sparse aperture imaging research in the remote sensing field has been confined to a small set of aperture layouts. While these layouts possess some desirable properties for imaging, they may not be ideal for all applications. This work introduces an optimization framework for sparse aperture layouts based on genetic algorithms as well as a small set of fitness functions for incoherent sparse aperture image quality. The optimization results demonstrate the merits of existing designs and the opportunity for creating new sparse aperture layouts. PMID:27140086
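
    The optimization machinery is a standard genetic algorithm over candidate aperture layouts; a compact, generic sketch is given below, with a toy fitness that simply rewards well-separated sub-apertures standing in for the paper's incoherent image-quality fitness functions. All parameters are illustrative assumptions.

        import random, math

        N_SUB, POP, GENS = 6, 40, 60      # sub-apertures per layout, population size, generations

        def random_layout():
            # A layout is a list of (x, y) sub-aperture centers inside a unit pupil.
            return [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N_SUB)]

        def fitness(layout):
            # Toy stand-in: reward layouts whose sub-apertures are well separated
            # (a crude proxy for good (u,v)-plane coverage), not the paper's metrics.
            d = [math.dist(a, b) for i, a in enumerate(layout) for b in layout[i + 1:]]
            return min(d)

        def crossover(a, b):
            cut = random.randrange(1, N_SUB)
            return a[:cut] + b[cut:]

        def mutate(layout, rate=0.2):
            return [(random.uniform(-1, 1), random.uniform(-1, 1)) if random.random() < rate else p
                    for p in layout]

        pop = [random_layout() for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:POP // 4]                                   # keep the best quarter
            pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                           for _ in range(POP - len(elite))]

        print(f"best minimum pairwise separation: {fitness(max(pop, key=fitness)):.3f}")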

  7. A Robust Control Design Framework for Substructure Models

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    1994-01-01

    A framework for designing control systems directly from substructure models and uncertainties is proposed. The technique is based on combining a set of substructure robust control problems by an interface stiffness matrix which appears as a constant gain feedback. Variations of uncertainties in the interface stiffness are treated as a parametric uncertainty. It is shown that multivariable robust control can be applied to generate centralized or decentralized controllers that guarantee performance with respect to uncertainties in the interface stiffness, reduced component modes and external disturbances. The technique is particularly suited for large, complex, and weakly coupled flexible structures.

  8. Design of Functional Materials with Hydrogen-Bonded Host Frameworks

    NASA Astrophysics Data System (ADS)

    Soegiarto, Airon Cosanova

    The properties of molecular crystals are governed by the attributes of their molecular constituents and their solid-state arrangements, making control of crystal packing paramount when designing new materials with targeted functions. One effective strategy involves the use of robust host frameworks that encapsulate functional guests in molecular-scale cavities with tailored shapes, sizes, and chemical environments that enable systematic regulation of solid state properties. This approach promises to simplify the synthesis of molecular materials by decoupling the design of structure, provided by the host framework, from function, introduced by the guests. This thesis has reported a series of crystalline, structurally robust hosts based on guanidinium cations (G = C(NH2)3+) and the sulfonate moieties of organodisulfonate anions (DS; S = -O3S-R-SO3-). The host framework is based on layers of 2-D GS sheet, which are interconnected by the organic residues (pillars) of the disulfonates, thereby producing a lamellar architecture with inclusion cavities, occupied by guest molecules, between the sheets. Notably, the GDS inclusion compounds exhibit numerous architectures such as bilayer, simple brick, and zigzag brick -- each endowed with uniquely sized and shaped cavities, suggesting that the aggregation motifs of the included guests can be controlled within the host lattice. Furthermore, the selectivity toward different architectures is governed by the relative size of the pillars and guests, allowing the construction of a "structural phase diagram" which can be used to predict the solid-state architecture of untested host-guest combination. Consequently, a variety of functional molecules have been included in order to exploit these features. Chapter 3 reports the inclusion of polyconjugated molecules within the GDS hosts, generating various guest aggregation motifs -- edge-to-edge to face-to-edge to end-to-end. The effects of the various host and/or guest aggregation

  9. SEU mitigation strategies for SRAM-based FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Pei; Zhang, Jian

    2011-08-01

    The type of Field Programmable Gate Array (FPGA) technology and device family used in a design is a key factor for system reliability. Though antifuse-based FPGAs are widely used in aerospace because of their high reliability, current antifuse-based FPGA devices are expensive and leave no room for mistakes or changes since they are not reprogrammable. A substitute for antifuse-based FPGAs is needed in aerospace design; it should be both reprogrammable and highly robust against Single Event Upset (SEU) effects. SRAM-based FPGAs are widely and systematically used in complex embedded digital systems, in both single-chip industrial and commercial applications. They are reprogrammable and high in density because of their smaller SRAM cells and logic structures. However, SRAM-based FPGAs are especially sensitive to cosmic radiation because the configuration information is stored in SRAM memory. The ideal FPGA for aerospace use would be a high-density SRAM-based device that is also insensitive to cosmic-radiation-induced SEUs. Therefore, in order to enable the use of SRAM-based FPGAs in safety-critical applications, new techniques and strategies are essential to mitigate SEU errors in such devices. To improve the reliability of SRAM-based FPGAs, which are very sensitive to SEU errors, techniques such as reconfiguration and Triple Module Redundancy (TMR) are widely used in aerospace electronic systems to mitigate SEU and Single Event Functional Interrupt (SEFI) errors. Compared to reconfiguration and triplication, scrubbing and partial reconfiguration utilize few or even no internal FPGA resources. Moreover, the detection and repair process can detect and correct SEU errors in the configuration memories of the FPGA without affecting or interrupting the proper operation of the system, whereas full reconfiguration would terminate the operation of the FPGA. This paper presents a payload system realized on a Xilinx Virtex-4 FPGA which mitigates SEU effects in the

  10. FPGA Boot Loader and Scrubber

    NASA Technical Reports Server (NTRS)

    Wade, Randall S.; Jones, Bailey

    2009-01-01

    A computer program loads configuration code into a Xilinx field-programmable gate array (FPGA), reads back and verifies that code, reloads the code if an error is detected, and monitors the performance of the FPGA for errors in the presence of radiation. The program consists mainly of a set of VHDL files (wherein "VHDL" signifies "VHSIC Hardware Description Language" and "VHSIC" signifies "very-high-speed integrated circuit").

  11. Design of crashworthy structures with controlled behavior in HCA framework

    NASA Astrophysics Data System (ADS)

    Bandi, Punit

    The field of crashworthiness design is gaining more interest and attention from automakers around the world due to increasing competition and tighter safety norms. In the last two decades, topology and topometry optimization methods from structural optimization have been widely explored to improve existing designs or conceive new designs with better crashworthiness. Although many gradient-based and heuristic methods for topology- and topometry-based crashworthiness design are available these days, most of them result in stiff structures that are suitable only for a set of vehicle components in which maximizing the energy absorption or minimizing the intrusion is the main concern. However, there are some other components in a vehicle structure that should have characteristics of both stiffness and flexibility. Moreover, the load paths within the structure and potential buckle modes also play an important role in efficient functioning of such components. For example, the front bumper, side frame rails, steering column, and occupant protection devices like the knee bolster should all exhibit controlled deformation and collapse behavior. The primary objective of this research is to develop new methodologies to design crashworthy structures with controlled behavior. The well established Hybrid Cellular Automaton (HCA) method is used as the basic framework for the new methodologies, and compliant mechanism-type (sub)structures are the highlight of this research. The ability of compliant mechanisms to efficiently transfer force and/or motion from points of application of input loads to desired points within the structure is used to design solid and tubular components that exhibit controlled deformation and collapse behavior under crash loads. In addition, a new methodology for controlling the behavior of a structure under multiple crash load scenarios by adaptively changing the contributions from individual load cases is developed. Applied to practical design problems

  12. FPGA-based Hyperspectral Covariance Coprocessor for Size, Weight, and Power Constrained Platforms

    NASA Astrophysics Data System (ADS)

    Kusinsky, David Alan

    Hyperspectral imaging (HSI) is a method of remote sensing that collects many two-dimensional images of the same physical scene. Each image corresponds to a single wavelength band in the electromagnetic spectrum. The number of bands imaged by an HSI sensor can be several hundred, and therefore a large amount of data is produced. This data must be handled by the platform on which the HSI sensor resides, either through onboard processing, or relaying elsewhere. Hence, the platform plays an important role in defining the capabilities of the entire remote sensing system. Size, weight, and power (SWaP) are important factors in the design of any remote sensing platform. These remote sensing platforms, such as Unmanned Air Vehicles and microsatellites, are continually decreasing in size. This creates a need for remote sensing and image processing hardware that consumes less area, weight, and power, while delivering processing performance. The purpose of this research is to design and characterize an FPGA-based hardware coprocessor that parallelizes the calculation of covariance; a time-consuming step common in hyperspectral image processing. The goal is to deploy such a coprocessor on a remote sensing platform. The coprocessor is implemented using a Xilinx ML605 evaluation board. The hardware used includes the Xilinx Virtex-6 FPGA, DDR3 memory, and PCIe interface. An implementation to accelerate the covariance calculation was created, and the OpenCPI open source framework was adopted to enable DDR3 memory and PCIe capabilities and ease coprocessor testing. The coprocessor's performance is evaluated using several metrics: total power (Watts), processing energy (Joules), floating point operations per Watt (FLOPS/W), and floating point operations per Watt-kg (FLOPS/(W·kg)). The coprocessor is compared to a CPU-based processing platform and shown to have an overall SWaP advantage. Coprocessor FLOPS/W and FLOPS/(W·kg) performance is 2X and 2.75X that of the CPU-based platform
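
    The covariance step being accelerated is, in essence, an accumulation of per-pixel outer products over all bands; the NumPy sketch below shows the arithmetic, and the streaming sum/outer-product formulation is the part that parallelizes well in hardware. Dimensions are illustrative, not those of a particular sensor.

        import numpy as np

        bands, pixels = 224, 10000                 # illustrative hyperspectral cube dimensions
        cube = np.random.rand(bands, pixels).astype(np.float32)   # each column is one pixel spectrum

        # Streaming form: accumulate the spectral sum and the sum of outer products,
        # which is what a hardware coprocessor can do one pixel (or a few) at a time.
        s = cube.sum(axis=1)                       # running sum of spectra
        xxt = cube @ cube.T                        # running sum of outer products x x^T
        mean = s / pixels
        cov = xxt / pixels - np.outer(mean, mean)  # covariance matrix, bands x bands

        assert np.allclose(cov, np.cov(cube, bias=True), atol=1e-3)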

  13. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. In this paper a general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  14. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. A general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  15. Microgravity isolation system design: A modern control analysis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Many acceleration-sensitive, microgravity science experiments will require active vibration isolation from the manned orbiters on which they will be mounted. The isolation problem, especially in the case of a tethered payload, is a complex three-dimensional one that is best suited to modern-control design methods. These methods, although more powerful than their classical counterparts, can nonetheless go only so far in meeting the design requirements for practical systems. Once a tentative controller design is available, it must still be evaluated to determine whether or not it is fully acceptable, and to compare it with other possible design candidates. Realistically, such evaluation will be an inherent part of a necessary iterative design process. In this paper, an approach is presented for applying complex mu-analysis methods to a closed-loop vibration isolation system (experiment plus controller). An analysis framework is presented for evaluating nominal stability, nominal performance, robust stability, and robust performance of active microgravity isolation systems, with emphasis on the effective use of mu-analysis methods.

  16. An Integrated Framework Advancing Membrane Protein Modeling and Design

    PubMed Central

    Weitzner, Brian D.; Duran, Amanda M.; Tilley, Drew C.; Elazar, Assaf; Gray, Jeffrey J.

    2015-01-01

    Membrane proteins are critical functional molecules in the human body, constituting more than 30% of open reading frames in the human genome. Unfortunately, a myriad of difficulties in overexpression and reconstitution into membrane mimetics severely limit our ability to determine their structures. Computational tools are therefore instrumental to membrane protein structure prediction, consequently increasing our understanding of membrane protein function and their role in disease. Here, we describe a general framework facilitating membrane protein modeling and design that combines the scientific principles for membrane protein modeling with the flexible software architecture of Rosetta3. This new framework, called RosettaMP, provides a general membrane representation that interfaces with scoring, conformational sampling, and mutation routines that can be easily combined to create new protocols. To demonstrate the capabilities of this implementation, we developed four proof-of-concept applications for (1) prediction of free energy changes upon mutation; (2) high-resolution structural refinement; (3) protein-protein docking; and (4) assembly of symmetric protein complexes, all in the membrane environment. Preliminary data show that these algorithms can produce meaningful scores and structures. The data also suggest needed improvements to both sampling routines and score functions. Importantly, the applications collectively demonstrate the potential of combining the flexible nature of RosettaMP with the power of Rosetta algorithms to facilitate membrane protein modeling and design. PMID:26325167

  17. An Integrated Framework Advancing Membrane Protein Modeling and Design.

    PubMed

    Alford, Rebecca F; Koehler Leman, Julia; Weitzner, Brian D; Duran, Amanda M; Tilley, Drew C; Elazar, Assaf; Gray, Jeffrey J

    2015-09-01

    Membrane proteins are critical functional molecules in the human body, constituting more than 30% of open reading frames in the human genome. Unfortunately, a myriad of difficulties in overexpression and reconstitution into membrane mimetics severely limit our ability to determine their structures. Computational tools are therefore instrumental to membrane protein structure prediction, consequently increasing our understanding of membrane protein function and their role in disease. Here, we describe a general framework facilitating membrane protein modeling and design that combines the scientific principles for membrane protein modeling with the flexible software architecture of Rosetta3. This new framework, called RosettaMP, provides a general membrane representation that interfaces with scoring, conformational sampling, and mutation routines that can be easily combined to create new protocols. To demonstrate the capabilities of this implementation, we developed four proof-of-concept applications for (1) prediction of free energy changes upon mutation; (2) high-resolution structural refinement; (3) protein-protein docking; and (4) assembly of symmetric protein complexes, all in the membrane environment. Preliminary data show that these algorithms can produce meaningful scores and structures. The data also suggest needed improvements to both sampling routines and score functions. Importantly, the applications collectively demonstrate the potential of combining the flexible nature of RosettaMP with the power of Rosetta algorithms to facilitate membrane protein modeling and design. PMID:26325167

  18. Small Microprocessor for ASIC or FPGA Implementation

    NASA Technical Reports Server (NTRS)

    Kleyner, Igor; Katz, Richard; Blair-Smith, Hugh

    2011-01-01

    A small microprocessor, suitable for use in applications in which high reliability is required, was designed to be implemented in either an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). The design is based on a commercial microprocessor architecture, making it possible to use available software development tools and thereby to implement the microprocessor at relatively low cost. The design features enhancements, including trapping during execution of illegal instructions. The internal structure of the design yields relatively high performance, with a significant decrease, relative to other microprocessors that perform the same functions, in the number of microcycles needed to execute macroinstructions. The problem meant to be solved in designing this microprocessor was to provide a modest level of computational capability in a general-purpose processor while adding as little as possible to the power demand, size, and weight of a system into which the microprocessor would be incorporated. As designed, this microprocessor consumes very little power and occupies only a small portion of a typical modern ASIC or FPGA. The microprocessor operates at a rate of about 4 million instructions per second with a clock frequency of 20 MHz.

  19. INSTITUTIONALIZING SAFEGUARDS-BY-DESIGN: HIGH-LEVEL FRAMEWORK

    SciTech Connect

    Trond Bjornard PhD; Joseph Alexander; Robert Bean; Brian Castle; Scott DeMuth, Ph.D.; Phillip Durst; Michael Ehinger; Prof. Michael Golay, Ph.D.; Kevin Hase, Ph.D.; David J. Hebditch, DPhil; John Hockert, Ph.D.; Bruce Meppen; James Morgan; Jerry Phillips, Ph.D., PE

    2009-02-01

    participation in facility design options analysis in the conceptual design phase to enhance intrinsic features, among others. The SBD process is unlikely to be broadly applied in the absence of formal requirements to do so, or compelling evidence of its value. Neither exists today. A formal instrument to require the application of SBD is needed and would vary according to both the national and regulatory environment. Several possible approaches to implementation of the requirements within the DOE framework are explored in this report. Finally, there are numerous barriers to the implementation of SBD, including the lack of a strong safeguards culture, intellectual property concerns, the sensitive nature of safeguards information, and the potentially divergent or conflicting interests of participants in the process. In terms of SBD implementation in the United States, there are no commercial nuclear facilities that are under IAEA safeguards. Efforts to institutionalize SBD must address these issues. Specific work in FY09 could focus on the following: finalizing the proposed SBD process for use by DOE and performing a pilot application on a DOE project in the planning phase; developing regulatory options for mandating SBD; further development of safeguards-related design guidance, principles and requirements; development of a specific SBD process tailored to the NRC environment; and development of an engagement strategy for the IAEA and other international partners.

  20. Analysis and System Design Framework for Infrared Spatial Heterodyne Spectrometers

    SciTech Connect

    Cooke, B.J.; Smith, B.W.; Laubscher, B.E.; Villeneuve, P.V.; Briles, S.D.

    1999-04-05

    The authors present a preliminary analysis and design framework developed for the evaluation and optimization of infrared Imaging Spatial Heterodyne Spectrometer (SHS) electro-optic systems. Commensurate with conventional interferometric spectrometers, SHS modeling requires an integrated analysis environment for rigorous evaluation of system error propagation due to detection process, detection noise, system motion, retrieval algorithm and calibration algorithm. The analysis tools provide for optimization of critical system parameters and components including: (1) optical aperture, f-number, and spectral transmission, (2) SHS interferometer grating and Littrow parameters, and (3) image plane requirements as well as cold shield, optical filtering, and focal-plane dimensions, pixel dimensions and quantum efficiency, (4) SHS spatial and temporal sampling parameters, and (5) retrieval and calibration algorithm issues.

  1. Covalent organic frameworks (COFs): from design to applications.

    PubMed

    Ding, San-Yuan; Wang, Wei

    2013-01-21

    Covalent organic frameworks (COFs) represent an exciting new type of porous organic materials, which are ingeniously constructed with organic building units via strong covalent bonds. The well-defined crystalline porous structures together with tailored functionalities have offered the COF materials superior potential in diverse applications, such as gas storage, adsorption, optoelectricity, and catalysis. Since the seminal work of Yaghi and co-workers in 2005, the rapid development in this research area has attracted intensive interest from researchers with diverse expertise. This critical review describes the state-of-the-art development in the design, synthesis, characterisation, and application of the crystalline porous COF materials. Our own opinions on further development of the COF materials are also presented for discussion (155 references). PMID:23060270

  2. Design and implementation of an algorithm for creating templates for the purpose of iris biometric authentication through the analysis of textures implemented on a FPGA

    NASA Astrophysics Data System (ADS)

    Giacometto, F. J.; Vilardy, J. M.; Torres, C. O.; Mattos, L.

    2011-01-01

    Problems related to security in access control are currently being addressed, and as a consequence applications have been developed that exploit characteristics unique to each individual, such as biometric features. Working with biometric images, such as those of the living iris and of the retinal blood-vessel pattern, has become important worldwide. This paper presents an FPGA implementation of an algorithm for creating templates for biometric authentication based on ocular features; the object of study is the iris texture pattern, which is unique to each individual. Authentication is based on processes such as edge-extraction methods, the segmentation principles of John Daugman and Libor Masek, and normalization, in order to obtain the templates needed to search for matches in a database and then produce the expected authentication results.
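
    Once normalized binary iris templates exist, the database search described typically reduces to a masked, normalized Hamming distance between bit codes. The sketch below shows that standard Daugman-style comparison as an assumed illustration; it is not the authors' exact FPGA datapath, and the bit lengths and noise levels are invented.

        import numpy as np

        def hamming_distance(code_a, code_b, mask_a, mask_b):
            """Normalized Hamming distance over bits valid in both templates."""
            valid = mask_a & mask_b
            disagreeing = (code_a ^ code_b) & valid
            return disagreeing.sum() / max(valid.sum(), 1)

        rng = np.random.default_rng(0)
        enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)          # stored template
        probe_same = (enrolled ^ (rng.random(2048) < 0.05)).astype(np.uint8)   # ~5% bit noise: same eye
        probe_other = rng.integers(0, 2, 2048, dtype=np.uint8)       # different eye
        mask = np.ones(2048, dtype=np.uint8)                         # no eyelid/occlusion bits here

        print(hamming_distance(enrolled, probe_same, mask, mask))    # ~0.05 -> match
        print(hamming_distance(enrolled, probe_other, mask, mask))   # ~0.5  -> non-match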

  3. A Novel Modeling Framework for Heterogeneous Catalyst Design

    NASA Astrophysics Data System (ADS)

    Katare, Santhoji; Bhan, Aditya; Caruthers, James; Delgass, Nicholas; Lauterbach, Jochen; Venkatasubramanian, Venkat

    2002-03-01

    A systems-oriented, integrated knowledge architecture that enables the use of data from High Throughput Experiments (HTE) for catalyst design is being developed. Higher-level critical reasoning is required to extract information efficiently from the increasingly available HTE data and to develop predictive models that can be used for design purposes. Towards this objective, we have developed a framework that aids the catalyst designer in negotiating the data and model complexities. Traditional kinetic and statistical tools have been systematically implemented and novel artificial intelligence tools have been developed and integrated to speed up the process of modeling catalytic reactions. Multiple nonlinear models that describe CO oxidation on supported metals have been screened using qualitative and quantitative features based optimization ideas. Physical constraints of the system have been used to select the optimum model parameters from the multiple solutions to the parameter estimation problem. Preliminary results about the selection of catalyst descriptors that match a target performance and the use of HTE data for refining fundamentals based models will be discussed.

  4. Framework for Implementing Engineering Senior Design Capstone Courses and Design Clinics

    ERIC Educational Resources Information Center

    Franchetti, Matthew; Hefzy, Mohamed Samir; Pourazady, Mehdi; Smallman, Christine

    2012-01-01

    Senior design capstone projects for engineering students are essential components of an undergraduate program that enhances communication, teamwork, and problem solving skills. Capstone projects with industry are well established in management, but not as heavily utilized in engineering. This paper outlines a general framework that can be used by…

  5. Architectural Design and the Learning Environment: A Framework for School Design Research

    ERIC Educational Resources Information Center

    Gislason, Neil

    2010-01-01

    This article develops a theoretical framework for studying how instructional space, teaching and learning are related in practice. It is argued that a school's physical design can contribute to the quality of the learning environment, but several non-architectural factors also determine how well a given facility serves as a setting for teaching…

  6. FPNA: interaction between FPGA and neural computation.

    PubMed

    Girau, B

    2000-06-01

    Neural networks are usually considered naturally parallel computing models. But the number of operators and the complex connection graph of standard neural models cannot be directly handled by digital hardware devices. More particularly, several works show that programmable digital hardware is a real opportunity for flexible hardware implementations of neural networks. And yet many area and topology problems arise when standard neural models are implemented onto programmable circuits such as FPGAs, so that the fast FPGA technology improvements cannot be fully exploited. Therefore neural network hardware implementations need to reconcile simple hardware topologies with complex neural architectures. The theoretical and practical framework developed here allows this combination thanks to some principles of configurable hardware that are applied to neural computation: Field Programmable Neural Arrays (FPNA) lead to powerful neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data exchange scheme. This paper shows how FPGAs have led to the definition of the FPNA computation paradigm. Then it shows how FPNAs contribute to current and future FPGA-based neural implementations by solving the general problems that are raised by the implementation of complex neural networks onto FPGAs. PMID:11011795

  7. A Multi-Gigabit Parallel Demodulator and Its FPGA Implementation

    NASA Astrophysics Data System (ADS)

    Lin, Changxing; Zhang, Jian; Shao, Beibei

    This letter presents the architecture of a multi-gigabit parallel demodulator suitable for demodulating high-order QAM modulated signals and easy to implement on an FPGA platform. The parallel architecture is based on a frequency-domain implementation of the matched filter and timing phase correction. A parallel FIFO-based delete-keep algorithm is proposed for timing synchronization, while a parallel decision-feedback PLL based on a reduced-constellation phase-frequency detector is designed for carrier synchronization. A fully pipelined parallel adaptive blind equalization algorithm is also proposed. Their parallel implementation structures suitable for an FPGA platform are investigated. In a demonstration of a 2 Gbps demodulator for 16QAM modulation, the architecture is implemented and validated on a Xilinx V6 FPGA platform with a performance loss of less than 2 dB.
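
    The frequency-domain matched filtering at the heart of the parallel architecture can be illustrated, at algorithm level only and ignoring the FPGA pipelining, with an overlap-save sketch: each block of samples is FFT'd, multiplied by the filter's frequency response, inverse-FFT'd, and the circularly wrapped portion discarded. Block and filter lengths are assumptions for illustration.

        import numpy as np

        def matched_filter_overlap_save(x, h, nfft=256):
            """Filter x with the matched filter (time-reversed conjugate of h) block by block."""
            g = np.conj(h[::-1])                       # matched filter impulse response
            m = len(g)
            H = np.fft.fft(g, nfft)
            step = nfft - (m - 1)                      # new samples consumed per block
            x_pad = np.concatenate([np.zeros(m - 1, dtype=complex), x])
            out = []
            for start in range(0, len(x), step):
                block = x_pad[start:start + nfft]
                if len(block) < nfft:
                    block = np.pad(block, (0, nfft - len(block)))
                y = np.fft.ifft(np.fft.fft(block) * H)
                out.append(y[m - 1:])                  # discard the circularly wrapped samples
            return np.concatenate(out)[:len(x)]

        # Quick check against direct convolution with the matched filter
        x = np.random.randn(1000) + 1j * np.random.randn(1000)
        h = np.random.randn(16) + 1j * np.random.randn(16)
        ref = np.convolve(x, np.conj(h[::-1]))[:len(x)]
        assert np.allclose(matched_filter_overlap_save(x, h), ref, atol=1e-9)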

  8. A low-power wave union TDC implemented in FPGA

    SciTech Connect

    Wu, Jinyuan; Shi, Yanchen; Zhu, Douglas; /Illinois Math. Sci. Acad.

    2011-10-01

    A low-power time-to-digital convertor (TDC) for an application inside a vacuum has been implemented based on the Wave Union TDC scheme in a low-cost field programmable gate array (FPGA) device. Bench top tests have shown that a time measurement resolution better than 30 ps (standard deviation of time differences between two channels) is achieved. Special firmware design practices are taken to reduce power consumption. The measurements indicate that with 32 channels fitting in the FPGA device, the power consumption on the FPGA core voltage is approximately 9.3 mW/channel and the total power consumption including both core and I/O banks is less than 27 mW/channel.

  9. A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing

    Large-scale matrix inversion plays an important role in many applications. However, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on an FPGA. To exploit the computational potential of the FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a speedup factor of 2.6 and a maximum power-performance of 41 can be achieved compared to a Pentium Dual CPU with double SSE threads.
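
    A software reference for the kind of fine-grained, column-oriented elimination that maps onto a linear array of PEs is Gauss-Jordan inversion. The sketch below (with partial pivoting for numerical safety, which a hardware design may or may not include, and which is not claimed to be the authors' algorithm) shows the per-column row updates that each PE would perform in parallel.

        import numpy as np

        def gauss_jordan_inverse(a):
            """Invert a square matrix by Gauss-Jordan elimination on the augmented matrix [A | I]."""
            n = a.shape[0]
            aug = np.hstack([a.astype(float), np.eye(n)])
            for col in range(n):
                # Partial pivoting: bring the largest remaining entry in this column to the diagonal.
                pivot = col + np.argmax(np.abs(aug[col:, col]))
                aug[[col, pivot]] = aug[[pivot, col]]
                aug[col] /= aug[col, col]                        # normalize the pivot row
                for row in range(n):                             # eliminate the column elsewhere;
                    if row != col:                               # each row update is an independent
                        aug[row] -= aug[row, col] * aug[col]     # multiply-subtract, one per PE
            return aug[:, n:]

        a = np.random.rand(6, 6) + 6 * np.eye(6)                 # well-conditioned test matrix
        assert np.allclose(gauss_jordan_inverse(a) @ a, np.eye(6))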

  10. Reusable rocket engine intelligent control system framework design, phase 2

    NASA Technical Reports Server (NTRS)

    Nemeth, ED; Anderson, Ron; Ols, Joe; Olsasky, Mark

    1991-01-01

    Elements of an advanced functional framework for reusable rocket engine propulsion system control are presented for the Space Shuttle Main Engine (SSME) demonstration case. Functional elements of the baseline functional framework are defined in detail. The SSME failure modes are evaluated and specific failure modes identified for inclusion in the advanced functional framework diagnostic system. Active control of the SSME start transient is investigated, leading to the identification of a promising approach to mitigating start transient excursions. Key elements of the functional framework are simulated and demonstration cases are provided. Finally, the advanced functional framework for control of reusable rocket engines is presented.

  11. FPGA based fast synchronous serial multi-wire links synchronization

    NASA Astrophysics Data System (ADS)

    Pozniak, Krzysztof T.

    2013-10-01

    The paper discusses a synchronization method for a multi-wire serial link of constant latency, based on pseudo-random number generators. The solution was designed for various families of FPGA circuits. The synchronization algorithm and the functional structure of the parameterized transmitter and receiver modules are described. The modules were realized in the VHDL language in behavioral form.
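
    The pseudo-random-generator synchronization idea can be sketched in software: the transmitter streams a known LFSR sequence, the receiver loads the first few received bits as its own LFSR state, and from then on the two generators run in lockstep, so any later mismatch indicates loss of alignment. The 7-bit polynomial below is an illustrative choice, not necessarily the one used in the paper.

        def lfsr_step(state):
            """One step of a 7-bit Fibonacci LFSR (x^7 + x^6 + 1): returns (output_bit, next_state)."""
            out = (state >> 6) & 1                     # MSB is the bit sent on the line
            fb = ((state >> 6) ^ (state >> 5)) & 1     # feedback taps
            return out, ((state << 1) | fb) & 0x7F

        # Transmitter side: stream the PRBS over an (ideal) link.
        tx_state = 0b1010011                           # any non-zero seed
        line = []
        for _ in range(64):
            bit, tx_state = lfsr_step(tx_state)
            line.append(bit)

        # Receiver side: the first 7 line bits are exactly the transmitter's old state, MSB-first.
        rx_state = 0
        for bit in line[:7]:
            rx_state = ((rx_state << 1) | bit) & 0x7F

        for _ in range(7):                             # advance past the bits already received
            _, rx_state = lfsr_step(rx_state)

        # From now on the receiver predicts every line bit; a mismatch means lost sync.
        for received in line[7:]:
            predicted, rx_state = lfsr_step(rx_state)
            assert predicted == received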

  12. Achieving High Performance with FPGA-Based Computing

    PubMed Central

    Herbordt, Martin C.; VanCourt, Tom; Gu, Yongfeng; Sukhwani, Bharat; Conti, Al; Model, Josh; DiSabello, Doug

    2011-01-01

    Numerous application areas, including bioinformatics and computational biology, demand increasing amounts of processing capability. In many cases, the computation cores and data types are suited to field-programmable gate arrays. The challenge is identifying the design techniques that can extract high performance potential from the FPGA fabric. PMID:21603088

  13. STRS SpaceWire FPGA Module

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.

    2011-01-01

    An FPGA module leverages the previous work from Goddard Space Flight Center (GSFC) relating to NASA's Space Telecommunications Radio System (STRS) project. The STRS SpaceWire FPGA Module is written in the Verilog Register Transfer Level (RTL) language, and it encapsulates an unmodified GSFC core (which is written in VHDL). The module has the necessary inputs/outputs (I/Os) and parameters to integrate seamlessly with the SPARC I/O FPGA Interface module (also developed for the STRS operating environment, OE). Software running on the SPARC processor can access the configuration and status registers within the SpaceWire module. This allows software to control and monitor the SpaceWire functions, but it is also used to give software direct access to what is transmitted and received through the link. SpaceWire data characters can be sent/received through the software interface, as well as through the dedicated interface on the GSFC core. Similarly, SpaceWire time codes can be sent/received through the software interface or through a dedicated interface on the core. This innovation is designed for plug-and-play integration in the STRS OE. The SpaceWire module simplifies the interfaces to the GSFC core, and synchronizes all I/O to a single clock. An interrupt output (with optional masking) identifies time-sensitive events within the module. Test modes were added to allow internal loopback of the SpaceWire link and internal loopback of the client-side data interface.

  14. A Hierarchical Biology Concept Framework: A Tool for Course Design

    PubMed Central

    Khodor, Julia; Halme, Dina Gould; Walker, Graham C.

    2004-01-01

    A typical undergraduate biology curriculum covers a very large number of concepts and details. We describe the development of a Biology Concept Framework (BCF) as a possible way to organize this material to enhance teaching and learning. Our BCF is hierarchical, places details in context, nests related concepts, and articulates concepts that are inherently obvious to experts but often difficult for novices to grasp. Our BCF is also cross-referenced, highlighting interconnections between concepts. We have found our BCF to be a versatile tool for design, evaluation, and revision of course goals and materials. There has been a call for creating Biology Concept Inventories, multiple-choice exams that test important biology concepts, analogous to those in physics, astronomy, and chemistry. We argue that the community of researchers and educators must first reach consensus about not only what concepts are important to test, but also how the concepts should be organized and how that organization might influence teaching and learning. We think that our BCF can serve as a catalyst for community-wide discussion on organizing the vast number of concepts in biology, as a model for others to formulate their own BCFs and as a contribution toward the creation of a comprehensive BCF. PMID:15257339

  15. Rethinking modeling framework design: object modeling system 3.0

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Object Modeling System (OMS) is a framework for environmental model development, data provisioning, testing, validation, and deployment. It provides a bridge for transferring technology from the research organization to the program delivery agency. The framework provides a consistent and efficie...

  16. Designing a Workable Framework for Evaluating Distance Language Instruction

    ERIC Educational Resources Information Center

    Madyarov, Irshat

    2009-01-01

    Teaching foreign languages at distance is now becoming widespread; so is the need for evaluating online language courses. This article discusses an example of a framework that was applied to evaluate an online English as a foreign language (EFL) course at a Middle Eastern university. The development of the framework investigated areas of interest…

  17. A fast and accurate FPGA based QRS detection system.

    PubMed

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design detects beats with an accuracy in excess of 96% when tested against a subset of five 30-minute data records from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach. PMID:19163797
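
    The abstract does not reproduce the algorithm; a minimal software sketch of the thresholding idea it describes (the next detection threshold is derived from the median of the last eight detected peak amplitudes) is given below. The scale factor, refractory period, and the assumption of a pre-filtered signal are illustrative choices, not values from the paper.

```python
# Hypothetical sketch of median-based adaptive QRS thresholding (not the paper's
# System Generator design). Each new threshold is a fraction of the median of the
# last eight detected peak amplitudes; the 0.6 scale factor is an assumption.
from statistics import median

def detect_qrs(signal, fs, init_threshold, refractory_s=0.25, scale=0.6):
    """Return sample indices of detected QRS peaks in a pre-filtered signal."""
    peaks, recent_amps = [], []
    threshold = init_threshold
    refractory = int(refractory_s * fs)       # ignore this many samples after a beat
    i = 1
    while i < len(signal) - 1:
        s = signal[i]
        is_local_max = signal[i - 1] < s >= signal[i + 1]
        if is_local_max and s > threshold:
            peaks.append(i)
            recent_amps.append(s)
            recent_amps = recent_amps[-8:]    # keep the last eight peak amplitudes
            threshold = scale * median(recent_amps)
            i += refractory                   # skip the refractory period
        else:
            i += 1
    return peaks
```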

  18. The FPGA realization of a real-time Bayer image restoration algorithm with better performance

    NASA Astrophysics Data System (ADS)

    Ma, Huaping; Liu, Shuang; Zhou, Jiangyong; Tang, Zunlie; Deng, Qilin; Zhang, Hongliu

    2014-11-01

    As FPGA implementations of the Bayer color interpolation algorithm have become widespread, better performance, real-time processing, and lower resource consumption have become the main goals for users. In order to achieve high-speed, high-quality Bayer image restoration with low resource consumption, the color reconstruction in this article is designed and optimized both at the interpolation-algorithm level and at the FPGA-implementation level. The hardware realization is then completed on an FPGA development platform, achieving real-time, high-fidelity image processing with low resource consumption in embedded image acquisition systems.
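
    The interpolation kernel used in the paper is not given; as a generic reference point for what a Bayer color reconstruction computes, a plain bilinear demosaic of an RGGB mosaic can be sketched as follows (NumPy/SciPy; the RGGB layout and the normalized-convolution formulation are assumptions, not the paper's method).

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear_rggb(raw: np.ndarray) -> np.ndarray:
    """Bilinear demosaic of an RGGB Bayer mosaic (float image, HxW) -> HxWx3 RGB."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Bilinear weights: 4-neighbour average for green, 8-neighbour for red/blue.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4

    def interp(mask, kernel):
        # Normalised convolution: average of the available same-colour neighbours.
        num = convolve(raw * mask, kernel, mode="mirror")
        den = convolve(mask, kernel, mode="mirror")
        return num / den

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])
```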

  19. MicroBlaze implementation of GPS/INS integrated system on Virtex-6 FPGA.

    PubMed

    Bhogadi, Lokeswara Rao; Gottapu, Sasi Bhushana Rao; Konala, Vvs Reddy

    2015-01-01

    The emphasis of this paper is on a MicroBlaze implementation of a GPS/INS integrated system on a Virtex-6 field programmable gate array (FPGA). Issues related to position accuracy, FPGA resource usage in terms of slices, DSP48s, and block random access memory, computation time, latency, and power consumption are presented. An improved design of a loosely coupled GPS/INS integrated system is described in this paper. The inertial navigation solution and Kalman filter computations are provided by the MicroBlaze on the Virtex-6 FPGA. The real-time navigation solutions are updated at a rate of 100 Hz. PMID:26543763
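
    The filter itself is not detailed in the abstract; a toy one-dimensional loosely coupled update (the INS propagates position and velocity between fixes, and each GPS position fix corrects the estimate through a linear Kalman filter) illustrates the kind of computation the MicroBlaze performs. All matrices and noise values below are invented for illustration.

```python
import numpy as np

# Toy 1-D loosely coupled GPS/INS filter (illustrative only; the paper's actual
# filter, axes, and noise models are not reproduced here).
dt = 0.01                                   # 100 Hz INS propagation
F = np.array([[1, dt], [0, 1]])             # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])         # acceleration input
H = np.array([[1.0, 0.0]])                  # GPS measures position only
Q = np.diag([1e-4, 1e-3])                   # process noise (assumed)
R = np.array([[4.0]])                       # GPS position variance (assumed)

x = np.zeros((2, 1))
P = np.eye(2)

def ins_predict(x, P, accel):
    x = F @ x + B * accel                   # propagate with measured acceleration
    P = F @ P @ F.T + Q
    return x, P

def gps_update(x, P, z_pos):
    y = np.array([[z_pos]]) - H @ x         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for k in range(200):                        # 2 s of data
    x, P = ins_predict(x, P, accel=0.1)     # constant acceleration, for illustration
    if k % 100 == 99:                       # 1 Hz GPS fix
        x, P = gps_update(x, P, z_pos=0.5 * 0.1 * ((k + 1) * dt) ** 2)
```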

  20. Network-on-chip emulation framework for multimedia SoC development

    NASA Astrophysics Data System (ADS)

    Singla, Garbí; Tobajas, Félix; de Armas, Valentín.

    2013-05-01

    Current trends in consumer electronics have positioned the multiprocessor System-on-Chip (SoC) as a promising solution for high-performance embedded systems, and, in this scenario, the Network-on-Chip (NoC) communication paradigm is considered a way to improve on-chip communication efficiency. In this paper, a NoC-based SoC emulation framework is designed and implemented on a low-cost FPGA device. The objective of this work is the design and implementation of a prototyping platform with a NoC topology, which provides a demonstrator for the implementation of multimedia applications. The emulation platform allows evaluation, comparison, and verification of different aspects of a NoC design for SoCs intended to execute multimedia applications. The proposed emulation platform consists of different types of functional IP blocks (microprocessors, memory blocks, peripherals, additional blocks, etc.) interconnected through a NoC-based interconnection infrastructure. In order to provide a low-cost solution, the platform design is restricted to a single FPGA, resulting in a small-scale SoC due to the limited resources available in the FPGA used. However, the proposed design may be scaled and replicated on larger FPGAs or multi-FPGA systems to increase emulation performance. In this work, a design flow that integrates different commercial EDA tools is presented, and the integration process is discussed in detail due to the problems experienced at this stage. The platform is fully implemented on a Xilinx Spartan-6 LX45T FPGA, and special attention is given to the verification and floorplanning stages. Finally, various multimedia applications with real-time requirements are executed on the NoC-based SoC platform, and the performance results are analyzed according to the type of application and the number of processors required.

  1. Unified Simulation and Analysis Framework for Deep Space Navigation Design

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan; Chuang, Jason; Olsen, Carrie

    2013-01-01

    As the technology that enables advanced deep space autonomous navigation continues to develop and the requirements for such capability continue to grow, there is a clear need for a modular, expandable simulation framework. This tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems as well as to develop system requirements, in order to determine their effect on the sizing of the integrated vehicle. The development of such a framework is built upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and the possible state measurements and observations that feed into the simulation implementation structure. These models also provide a common environment for capturing an increasingly complex operational architecture involving multiple spacecraft, ground stations, and communication networks. In order to address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions among spacecraft. This paper describes the development of this framework and the modeling processes used to capture a deep space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. This developed package shows the capability of the modeling framework, including its modularity, analysis capabilities, and its unification back to the overall system requirements and definition.

  2. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms, with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can chain the precompiled software operators in a high-level processing pipeline and code his or her own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and for migration to an FPGA accelerator platform, and it is distributed for academic purposes.

  3. Development and Application of a Systems Engineering Framework to Support Online Course Design and Delivery

    ERIC Educational Resources Information Center

    Bozkurt, Ipek; Helm, James

    2013-01-01

    This paper develops a systems engineering-based framework to assist in the design of an online engineering course. Specifically, the purpose of the framework is to provide a structured methodology for the design, development and delivery of a fully online course, either brand new or modified from an existing face-to-face course. The main strength…

  4. Professional Development of Instructional Designers: A Proposed Framework Based on a Singapore Study

    ERIC Educational Resources Information Center

    Cheong, Eleen; Wettasinghe, Marissa C.; Murphy, James

    2006-01-01

    This article presents a professional development action plan or framework for instructional designers (IDs) working as external consultants for corporate companies. It also describes justifications why such an action plan is necessary for these professionals. The framework aims to help practising instructional designers to continuously and…

  5. Design Framework for an Adaptive MOOC Enhanced by Blended Learning: Supplementary Training and Personalized Learning for Teacher Professional Development

    ERIC Educational Resources Information Center

    Gynther, Karsten

    2016-01-01

    The research project has developed a design framework for an adaptive MOOC that complements the MOOC format with blended learning. The design framework consists of a design model and a series of learning design principles which can be used to design in-service courses for teacher professional development. The framework has been evaluated by…

  6. FPGA Simulation Engine for Customized Construction of Neural Microcircuits

    PubMed Central

    Blair, Hugh T.; Cong, Jason; Wu, Di

    2014-01-01

    In this paper we describe an FPGA-based platform for high-performance and low-power simulation of neural microcircuits composed of integrate-and-fire (IAF) neurons. Based on high-level synthesis, our platform uses design templates to map hierarchies of neuron models to logic fabrics. This approach bypasses high design complexity and enables easy optimization and design space exploration. We demonstrate the benefits of our platform by simulating a variety of neural microcircuits that perform oscillatory path integration, which evidence suggests may be a critical building block of the navigation system inside a rodent’s brain. Experiments show that our FPGA simulation engine for oscillatory neural microcircuits can achieve up to 39× speedup compared to software benchmarks on a commodity CPU, and 232× energy reduction compared to an embedded ARM core. PMID:25584120
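
    For reference, the per-timestep arithmetic of a leaky integrate-and-fire neuron, which the FPGA templates pipeline in hardware, can be modeled in a few lines of software. The parameter values below are illustrative and are not taken from the paper.

```python
import numpy as np

# Minimal leaky integrate-and-fire (IAF) update loop, as a software reference for
# the kind of per-timestep arithmetic an FPGA pipeline would perform. Parameter
# values are illustrative, not taken from the paper.
def simulate_iaf(i_input, dt=1e-4, tau=0.02, r_m=1.0, v_th=1.0, v_reset=0.0):
    """Return the membrane trace and spike times for an input current array."""
    v = v_reset
    trace, spikes = [], []
    for k, i_k in enumerate(i_input):
        dv = (-v + r_m * i_k) / tau          # leaky integration
        v += dv * dt
        if v >= v_th:                        # threshold crossing -> spike
            spikes.append(k * dt)
            v = v_reset                      # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = simulate_iaf(np.full(5000, 1.5))   # constant drive above threshold
```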

  7. Photoelectric radar servo control system based on ARM+FPGA

    NASA Astrophysics Data System (ADS)

    Wu, Kaixuan; Zhang, Yue; Li, Yeqiu; Dai, Qin; Yao, Jun

    2016-01-01

    In order to meet the requirements for a smaller, faster, and more responsive photoelectric radar servo control system, we propose a servo controller built on an ARM + FPGA architecture. The parallel processing capability of the FPGA is used for encoder feedback processing, PWM carrier modulation, A/B quadrature decoding, and so on, while the ARM embedded system provides a high-speed implementation of the PID algorithm. In experiments, the closed-loop response rate of the system reached 2000 updates/s, and on a high-precision turntable shaft the PID algorithm achieved servo position control with an accuracy of ±1 encoder count. First, an in-depth study of the embedded servo control system hardware was carried out to select the ARM and FPGA chips against the required performance targets: the ARM chip chosen was Samsung's S3C2440 (an ARM9-based device), and the FPGA chosen was Xilinx's XC3S400. The ARM and FPGA communicate over an SPI bus, which saves a large number of pins and eases later system upgrades. The FPGA acquires the speed data from the photoelectric encoder and forwards it to the ARM, which converts it into the corresponding position and velocity data in a timely manner and generates the corresponding PWM waveform to control motor rotation by comparing the measured position and velocity against the preset values. The schematics and PCB of the photoelectric radar servo control system were produced according to the system requirements. Second, the PID algorithm is used to control the servo system: the speed data obtained from the photoelectric encoder are converted into position and speed data via a high-speed digital PID algorithm and coordinate models. Finally, a
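
    The PID loop run on the ARM side can be sketched generically as follows; the gains, sample period, and output limits are placeholders rather than the paper's tuning.

```python
# Generic discrete PID position loop of the kind run on the ARM side; the gains,
# sample period, and saturation limits are placeholders, not the paper's values.
class PID:
    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, out))   # clamp to PWM range

# 2000 updates per second, matching the closed-loop rate quoted above.
pid = PID(kp=8.0, ki=2.0, kd=0.05, dt=1 / 2000, out_limit=1.0)
duty = pid.step(setpoint=1000, measurement=987)   # encoder counts -> PWM duty
```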

  8. FPGA implementation of VXIbus interface hardware.

    PubMed

    Mehta, K; Rajesh, V A; Veeraswamy, S

    1993-01-01

    The HP E1399A development card is a B-size, register-based device that can be used to simplify the development of simple, custom VXIbus instruments. The E1399A provides interface logic that buffers a 16-bit bidirectional data bus and performs other functions required by the VXIbus standard. However, the amount of interface logic required is high enough to substantially reduce the breadboard area available to the user. This paper reports on the application of field programmable gate array (FPGA) technology to the implementation of the VXIbus interface circuitry. Using Xilinx FPGAs, all the logic of the E1399A can fit into at most two low-cost gate array packages, with an attendant savings in board space. This results in a reliable design that provides the interface between the VXIbus and the user's custom circuitry. PMID:8329634

  9. The 3C3R Model: A Conceptual Framework for Designing Problems in PBL

    ERIC Educational Resources Information Center

    Hung, Woei

    2006-01-01

    Well-designed problems are crucial for the success of problem-based learning (PBL). Previous discussions about designing problems for PBL have been rather general and inadequate in guiding educators and practitioners to design effective PBL problems. This paper introduces the 3C3R PBL problem design model as a conceptual framework for…

  10. A design thinking framework for healthcare management and innovation.

    PubMed

    Roberts, Jess P; Fisher, Thomas R; Trowbridge, Matthew J; Bent, Christine

    2016-03-01

    The business community has learned the value of design thinking as a way to innovate in addressing people's needs--and health systems could benefit enormously from doing the same. This paper lays out how design thinking applies to healthcare challenges and how systems might utilize this proven and accessible problem-solving process. We show how design thinking can foster new approaches to complex and persistent healthcare problems through human-centered research, collective and diverse teamwork and rapid prototyping. We introduce the core elements of design thinking for a healthcare audience and show how it can supplement current healthcare management, innovation and practice. PMID:27001093

  11. A general observatory control software framework design for existing small and mid-size telescopes

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Lu, Xiao-Meng; Jiang, Xiao-Jun

    2015-07-01

    A general framework for observatory control software would help to improve the efficiency of observation and operation of telescopes, and would also be advantageous for remote and joint observations. We describe a general framework for observatory control software, which considers principles of flexibility and inheritance to meet the expectations from observers and technical personnel. This framework includes observation scheduling, device control and data storage. The design is based on a finite state machine that controls the whole process.
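
    As a minimal sketch of the finite-state-machine idea mentioned above, an observation cycle can be modeled as states and event-driven transitions; the states and events below are invented for illustration and are not the framework's actual interfaces.

```python
# Minimal finite-state-machine sketch of an observation cycle (idle -> slewing ->
# exposing -> storing -> idle). The states and transitions are illustrative; the
# paper's framework defines its own scheduling, device-control, and storage layers.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    SLEWING = auto()
    EXPOSING = auto()
    STORING = auto()

TRANSITIONS = {
    (State.IDLE, "new_target"): State.SLEWING,
    (State.SLEWING, "on_target"): State.EXPOSING,
    (State.EXPOSING, "readout_done"): State.STORING,
    (State.STORING, "stored"): State.IDLE,
}

def run(events, state=State.IDLE):
    """Drive the FSM with a sequence of events, ignoring events that do not apply."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
        print(f"{ev:13s} -> {state.name}")
    return state

run(["new_target", "on_target", "readout_done", "stored"])
```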

  12. An Exposition of Current Mobile Learning Design Guidelines and Frameworks

    ERIC Educational Resources Information Center

    Teall, Ed; Wang, Minjuan; Callaghan, Vic; Ng, Jason W. P.

    2014-01-01

    As mobile devices with wireless access become more readily available, learning delivered via mobile devices of all types must be designed to ensure successful learning. This paper first examines three questions related to the design of mobile learning: 1) what mobile learning (m-learning) guidelines can be identified in the current literature, 2)…

  13. A Framework for Web 2.0 Learning Design

    ERIC Educational Resources Information Center

    Bower, Matt; Hedberg, John G.; Kuswara, Andreas

    2010-01-01

    This paper describes an approach to conceptualising and performing Web 2.0-enabled learning design. Based on the Technological, Pedagogical and Content Knowledge model of educational practice, the approach conceptualises Web 2.0 learning design by relating Anderson and Krathwohl's Taxonomy of Learning, Teaching and Assessing, and different types…

  14. Adapting the Mathematical Task Framework to Design Online Didactic Objects

    ERIC Educational Resources Information Center

    Bowers, Janet; Bezuk, Nadine; Aguilar, Karen

    2011-01-01

    Designing didactic objects involves imagining how students can conceive of specific mathematical topics and then imagining what types of classroom discussions could support these mental constructions. This study investigated whether it was possible to design Java applets that might serve as didactic objects to support online learning where…

  15. Evidence-Based mHealth Chronic Disease Mobile App Intervention Design: Development of a Framework

    PubMed Central

    Peeples, Malinda M; Anthony Kouyaté, Robin C

    2016-01-01

    Background Mobile technology offers new capabilities that can help to drive important aspects of chronic disease management at both an individual and population level, including the ability to deliver real-time interventions that can be connected to a health care team. A framework that supports both development and evaluation is needed to understand the aspects of mHealth that work for specific diseases, populations, and in the achievement of specific outcomes in real-world settings. This framework should incorporate design structure and process, which are important to translate clinical and behavioral evidence, user interface, experience design and technical capabilities into scalable, replicable, and evidence-based mobile health (mHealth) solutions to drive outcomes. Objective The purpose of this paper is to discuss the identification and development of an app intervention design framework, and its subsequent refinement through development of various types of mHealth apps for chronic disease. Methods The process of developing the framework was conducted between June 2012 and June 2014. Informed by clinical guidelines, standards of care, clinical practice recommendations, evidence-based research, best practices, and translated by subject matter experts, a framework for mobile app design was developed and the refinement of the framework across seven chronic disease states and three different product types is described. Results The result was the development of the Chronic Disease mHealth App Intervention Design Framework. This framework allowed for the integration of clinical and behavioral evidence for intervention and feature design. The application to different diseases and implementation models guided the design of mHealth solutions for varying levels of chronic disease management. Conclusions The framework and its design elements enable replicable product development for mHealth apps and may provide a foundation for the digital health industry to

  16. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    NASA Astrophysics Data System (ADS)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, and Data Base Management Systems (DBMS) in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, the Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. The identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks, a general one and an aircraft-oriented one, were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improving the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.
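
    As an illustration of the AHP step used to weight framework criteria, priorities can be derived from a pairwise-comparison matrix; the geometric-mean approximation is used here, and the criteria and judgments are invented for illustration rather than taken from the paper.

```python
import numpy as np

# Sketch of the AHP step used to weight evaluation criteria: priorities are taken
# from a pairwise-comparison matrix (here via the geometric-mean approximation).
# The criteria and judgements below are invented for illustration.
criteria = ["integration", "usability", "extensibility"]
A = np.array([[1.0, 3.0, 5.0],       # integration vs. the others
              [1/3, 1.0, 2.0],       # usability vs. the others
              [1/5, 1/2, 1.0]])      # extensibility vs. the others

geo_mean = A.prod(axis=1) ** (1 / A.shape[0])
weights = geo_mean / geo_mean.sum()  # normalised priority vector

for name, w in zip(criteria, weights):
    print(f"{name:14s} {w:.3f}")
```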

  17. Adapting the mathematical task framework to design online didactic objects

    NASA Astrophysics Data System (ADS)

    Bowers, Janet; Bezuk, Nadine; Aguilar, Karen

    2011-06-01

    Designing didactic objects involves imagining how students can conceive of specific mathematical topics and then imagining what types of classroom discussions could support these mental constructions. This study investigated whether it was possible to design Java applets that might serve as didactic objects to support online learning where 'discussions' are broadly defined as the conversations students have with themselves as they interact with the dynamic mathematical representations on the screen. Eighty-four pre-service elementary teachers enrolled in hybrid mathematics courses were asked to interact with a series of applets designed to support their understanding of qualitative graphing. The results of the surveys indicate that various design features of the applets did in fact cause perturbations and opportunities for resolutions that enabled the users to 'discuss' their learning by reflecting on their in-class discussions and online activities. The discussion includes four design features for guiding future applet creation.

  18. Optimal aeroacoustic shape design using the surrogate management framework

    NASA Astrophysics Data System (ADS)

    Marsden, Alison; Wang, Meng; Dennis, John E., Jr.; Moin, Parviz

    2003-11-01

    Shape optimization is applied in conjunction with time-dependent Navier-Stokes simulations to minimize airfoil trailing-edge noise. Optimization is performed using the surrogate management framework (SMF) (Booker et al., J. Struct. Opt. 1999), a non-gradient-based pattern search method chosen for its efficiency and rigorous convergence properties. Using SMF, optimization is performed not on the expensive actual function but on an inexpensive surrogate function. The use of a polling step in the SMF guarantees convergence to a local minimum of the cost function on a mesh. Results are presented for cases with several shape parameters, using a model problem with unsteady laminar flow past an acoustically compact airfoil. Constraints on lift and drag are applied using a penalty function within the framework of the filtering method of Audet and Dennis (Rice Univ. TR 00-09), an extension of the SMF method. Significant reduction (as much as 80%) in acoustic power has been demonstrated in all cases with reasonable computational cost.
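
    A bare-bones pattern-search poll loop illustrates the convergence mechanism the SMF builds on; the surrogate search step and the constraint filter described above are omitted, and the quadratic test function stands in for the expensive flow computation.

```python
import numpy as np

# Bare-bones pattern-search poll loop: poll the 2n coordinate directions on the
# current mesh, accept any improvement, otherwise refine the mesh. The surrogate
# "search" step and the constraint filter are deliberately omitted; the test
# function is only a stand-in for the CFD cost function.
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, float)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        improved = False
        for d in range(n):
            for sign in (+1, -1):
                trial = x.copy()
                trial[d] += sign * step
                ft = f(trial)
                if ft < fx:                 # accept the first improving poll point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # refine the mesh
            if step < tol:
                break
    return x, fx

x_opt, f_opt = pattern_search(lambda p: (p[0] - 1) ** 2 + 4 * (p[1] + 0.5) ** 2,
                              x0=[3.0, 2.0])
```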

  19. Wire Position Monitoring with FPGA based Electronics

    SciTech Connect

    Eddy, N.; Lysenko, O.; /Fermilab

    2009-01-01

    This fall the first Tesla-style cryomodule cooldown test is being performed at Fermilab, and the Instrumentation Department is preparing the electronics to handle the data from a set of wire position monitors (WPMs). For simulation purposes a prototype pipe with a WPM has been developed and built (Figure 1). The system is based on the measurement of signals induced in pickups by a 320 MHz signal carried by a wire through the WPM. The 0.5 mm diameter Cu wire is 1.1 m long and is stretched along the pipe with a tensioning load of 9.07 kg. The WPM consists of four 50 Ω striplines spaced 90° apart. An FPGA-based digitizer scans the WPM and transmits the data to a PC via a VME interface, and data acquisition is handled by the PC running LabView. In order to increase the accuracy and convenience of the measurements, two modifications were required: first, the implementation of an averaging and decimation filter algorithm in the integrator operation in the FPGA; second, the development of an alternative tool for WPM measurements on the PC. The paper describes how these modifications were performed and gives test results for the new design. The latest cryomodule generation has a single chain of seven WPMs (placed in critical positions: at each end, at the three posts, and between the posts) to monitor cold mass displacement during cooldown. The system was developed in Italy in collaboration with DESY, and similar developments have taken place at Fermilab in the frame of cryomodule construction for SCRF research.

  20. ROSE: The Design of a General Tool for the Independent Optimization of Object-Oriented Frameworks

    SciTech Connect

    Davis, K.; Philip, B.; Quinlan, D.

    1999-05-18

    ROSE is a programmable preprocessor for the highly aggressive optimization of C++ object-oriented frameworks. A fundamental feature of ROSE is that it preserves the semantics, the implicit meaning, of the object-oriented framework's abstractions throughout the optimization process, permitting the framework's abstractions to be recognized and optimizations to capitalize upon the added value of the framework's true meaning. In contrast, a C++ compiler sees only the semantics of the C++ language and is thus severely limited in what optimizations it can introduce. Using the semantics of the framework's abstractions avoids program analysis that would be incapable of recapturing the framework's full semantics from the C++ language implementation of the application or framework; no level of program analysis within a C++ compiler, for example, could be expected to recognize the use of adaptive mesh refinement and introduce optimizations based upon such information. Since ROSE is programmable, additional specialized program analysis is possible, which then complements the semantics of the framework's abstractions. Enabling an optimization mechanism to use the high-level semantics of the framework's abstractions together with a programmable level of program analysis (e.g., dependence analysis), at the level of the framework's abstractions, allows for the design of high-performance object-oriented frameworks with uniquely tailored, sophisticated optimizations far beyond the limits of contemporary serial FORTRAN 77, C, or C++ compiler technology. In short, faster, more highly aggressive optimizations are possible. The resulting optimizations are literally driven by the framework's definition of its abstractions. Since the abstractions within a framework are of third-party design, the optimizations are similarly of third-party design, specifically independent of the compiler and the applications that use the framework. The interface to ROSE is

  1. Investigating the Reading Practices of EFL Yemeni Students Using the Learning by Design Framework

    ERIC Educational Resources Information Center

    Bhooth, Abdullah Mohammad; Azman, Hazita; Ismail, Kemboja

    2015-01-01

    This article investigates the reading practices of 45 EFL Yemeni students using the "learning by design" framework. The framework organizes the teaching and learning of literacy into four processes: experiencing, conceptualising, analysing, and applying. Quantitative and qualitative methods were used to collect data on a sample of…

  2. A Graphics Design Framework to Visualize Multi-Dimensional Economic Datasets

    ERIC Educational Resources Information Center

    Chandramouli, Magesh; Narayanan, Badri; Bertoline, Gary R.

    2013-01-01

    This study implements a prototype graphics visualization framework to visualize multidimensional data. This graphics design framework serves as a "visual analytical database" for visualization and simulation of economic models. One of the primary goals of any kind of visualization is to extract useful information from colossal volumes of…

  3. A Conceptual Framework for Educational Design at Modular Level to Promote Transfer of Learning

    ERIC Educational Resources Information Center

    Botma, Yvonne; Van Rensburg, G. H.; Coetzee, I. M.; Heyns, T.

    2015-01-01

    Students bridge the theory-practice gap when they apply in practice what they have learned in class. A conceptual framework was developed that can serve as foundation to design for learning transfer at modular level. The framework is based on an adopted and adapted systemic model of transfer of learning, existing learning theories, constructive…

  4. A Framework for the Design and Integration of Collaborative Classroom Games

    ERIC Educational Resources Information Center

    Echeverria, Alejandro; Garcia-Campo, Cristian; Nussbaum, Miguel; Gil, Francisca; Villalta, Marco; Amestica, Matias; Echeverria, Sebastian

    2011-01-01

    The progress registered in the use of video games as educational tools has not yet been successfully transferred to the classroom. In an attempt to close this gap, a framework was developed that assists in the design and classroom integration of educational games. The framework addresses both the educational dimension and the ludic dimension. The…

  5. "Light Green Doesn't Mean Hydrology!": Toward a Visual-Rhetorical Framework for Interface Design.

    ERIC Educational Resources Information Center

    Spinuzzi, Clay

    2001-01-01

    Examines metaphor's limitations as a visual-rhetorical framework for designing, evaluating, and critiquing user interfaces. Outlines an alternate framework for visual rhetoric, that of genre ecologies, and discusses how it avoids some of the limitations of metaphor. Uses an empirical study of computer users to illustrate the genre-ecology…

  6. Evaluating a Professional Development Framework to Empower Chemistry Teachers to Design Context-Based Education

    ERIC Educational Resources Information Center

    Stolk, Machiel Johan; Bulte, Astrid; De Jong, Onno; Pilot, Albert

    2012-01-01

    Even experienced chemistry teachers require professional development when they are encouraged to become actively engaged in the design of new context-based education. This study briefly describes the development of a framework consisting of goals, learning phases, strategies and instructional functions, and how the framework was translated into a…

  7. Adventure Learning and Learner-Engagement: Frameworks for Designers and Educators

    ERIC Educational Resources Information Center

    Henrickson, Jeni; Doering, Aaron

    2013-01-01

    There is a recognized need for theoretical frameworks that can guide designers and educators in the development of engagement-rich learning experiences that incorporate emerging technologies in pedagogically sound ways. This study investigated one such promising framework, adventure learning (AL). Data were gathered via surveys, interviews, direct…

  8. A sampling design framework for monitoring secretive marshbirds

    USGS Publications Warehouse

    Johnson, D.H.; Gibbs, J.P.; Herzog, M.; Lor, S.; Niemuth, N.D.; Ribic, C.A.; Seamans, M.; Shaffer, T.L.; Shriver, W.G.; Stehman, S.V.; Thompson, W.L.

    2009-01-01

    A framework for a sampling plan for monitoring marshbird populations in the contiguous 48 states is proposed here. The sampling universe is the breeding habitat (i.e. wetlands) potentially used by marshbirds. Selection protocols would be implemented within each of large geographical strata, such as Bird Conservation Regions. Site selection will be done using a two-stage cluster sample. Primary sampling units (PSUs) would be land areas, such as legal townships, and would be selected by a procedure such as systematic sampling. Secondary sampling units (SSUs) will be wetlands or portions of wetlands in the PSUs. SSUs will be selected by a randomized spatially balanced procedure. For analysis, the use of a variety of methods as a means of increasing confidence in conclusions that may be reached is encouraged. Additional effort will be required to work out details and implement the plan.

  9. Bioinspired Design of Ultrathin 2D Bimetallic Metal-Organic-Framework Nanosheets Used as Biomimetic Enzymes.

    PubMed

    Wang, Yixian; Zhao, Meiting; Ping, Jianfeng; Chen, Bo; Cao, Xiehong; Huang, Ying; Tan, Chaoliang; Ma, Qinglang; Wu, Shixin; Yu, Yifu; Lu, Qipeng; Chen, Junze; Zhao, Wei; Ying, Yibin; Zhang, Hua

    2016-06-01

    With the bioinspired design of organic ligands and metallic nodes, novel ultrathin 2D bimetallic metal-organic-framework nanosheets are successfully synthesized, which can serve as advanced 2D biomimetic nanomaterials to mimic heme proteins. PMID:27008574

  10. Presence+Experience: A Framework for the Purposeful Design of Presence in Online Courses

    ERIC Educational Resources Information Center

    Dunlap, Joanna C.; Verma, Geeta; Johnson, Heather Lynn

    2016-01-01

    In this article, we share a framework for the purposeful design of presence in online courses. Instead of developing something new, we looked at two models that have helped us with previous instructional design projects, providing us with some assurance that the design decisions we were making were fundamentally sound. As we began to work with the…

  11. From Concept to Software: Developing a Framework for Understanding the Process of Software Design.

    ERIC Educational Resources Information Center

    Mishra, Punyashloke; Zhao, Yong; Tan, Sophia

    1999-01-01

    Discussion of technological innovation and the process of design focuses on the design of computer software. Offers a framework for understanding the design process by examining two computer programs: FliPS, a multimedia program for learning complex problems in chemistry; and Tiger, a Web-based program for managing and publishing electronic…

  12. FPGA-core defibrillator using wavelet-fuzzy ECG arrhythmia classification.

    PubMed

    Nambakhsh, Mohammad; Tavakoli, Vahid; Sahba, Nima

    2008-01-01

    An electrocardiogram (ECG) feature extraction and classification system has been developed and evaluated using Altera's Quartus II 7.1. In the wavelet domain, QRS complexes were detected, and each complex was used to locate the peaks of the individual waves. A fuzzy classifier block then used these features to classify ECG beats. Three types of arrhythmias and abnormalities were detected using this procedure. The completed algorithm was embedded into a Field Programmable Gate Array (FPGA). The prototype was tested with software-generated signals, in test scenarios covering several kinds of ECG signals from the MIT-BIH database. For the purpose of feeding signals into the FPGA, software was designed to read signal files and send them through the computer's LPT port to the FPGA. The results show that the proposed prototype can perform real-time monitoring of the ECG signal for arrhythmia detection. We also implemented the algorithm on a sequential device, an AVR microcontroller with a 16 MHz clock, for the same purpose. The external clock of the FPGA is 50 MHz, and by utilizing the Phase-Locked Loop (PLL) component inside the device it was possible to increase the clock of internal blocks up to 1.2 GHz. The final results compare speed and resource cost on both devices, showing that, at the cost of greater resource usage, the FPGA provides higher computation speed because it allows most parts of the algorithm to be computed in parallel. PMID:19163255
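
    A toy fuzzy-rule classifier over two beat features (RR interval and QRS width) conveys the spirit of the wavelet-fuzzy pipeline; the membership functions, rules, and class labels below are invented for illustration and do not reproduce the paper's classifier.

```python
# Toy fuzzy-rule classifier over two ECG features (RR interval in seconds and QRS
# width in milliseconds). Membership functions, rules, and class names are invented.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_beat(rr_s, qrs_ms):
    rr_short   = tri(rr_s, 0.2, 0.4, 0.6)
    rr_normal  = tri(rr_s, 0.5, 0.8, 1.1)
    rr_long    = tri(rr_s, 1.0, 1.5, 2.5)
    qrs_narrow = tri(qrs_ms, 40, 80, 110)
    qrs_wide   = tri(qrs_ms, 100, 150, 220)

    # Rule strengths: min acts as the fuzzy AND; the largest score wins.
    scores = {
        "normal":      min(rr_normal, qrs_narrow),
        "ventricular": min(rr_short, qrs_wide),
        "bradycardia": min(rr_long, qrs_narrow),
    }
    return max(scores, key=scores.get), scores

label, scores = classify_beat(rr_s=0.35, qrs_ms=160)   # -> "ventricular"
```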

  13. A Framework for Promoting Learning in IS Design and Implementation

    ERIC Educational Resources Information Center

    Small, Adrian; Sice, Petia; Venus, Tony

    2008-01-01

    Purpose: The purpose of this paper is to set out an argument for a way to design, implement and manage IS with an emphasis on first, the learning that can be created through undertaking the approach, and second, the learning that may be created through using the IS that was implemented. The paper proposes joining two areas of research namely,…

  14. Science Curriculum Design: Views from a Psychological Framework.

    ERIC Educational Resources Information Center

    Linn, Marcia C.

    It is now almost universally acknowledged that science education must be rejuvenated to serve the needs of American society. An emerging science of science education based on recent advances in psychological research could make this rejuvenation dramatic. Four aspects of psychological research relevant to science curriculum design are discussed:…

  15. An FPGA architecture for MPEG-2 TS demultiplexer

    NASA Astrophysics Data System (ADS)

    Abramowski, Andrzej

    2012-05-01

    This paper presents a novel architecture of an MPEG-2 TS demultiplexer implemented with an FPGA. The main objective of the design is the ability to separate selected elementary streams in real time while ensuring minimal resource consumption. This is achieved by decomposing the demultiplexer into a number of independent sub-modules, which process the data in parallel. The flexible structure enables adaptation to specific needs and significantly simplifies potential expansion, which may be important due to the wide range of potential applications of the MPEG-2 TS standard. To improve functionality, the demultiplexer is equipped with a configuration and status interface. The transport stream and configuration data are supplied to the FPGA by a microcontroller through the External Peripheral Interface (EPI). The data is transmitted to the microcontroller via Ethernet, using the User Datagram Protocol (UDP).
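
    A software model of the PID-filtering stage that any TS demultiplexer performs is sketched below: scan 188-byte packets, check the 0x47 sync byte, extract the 13-bit PID, and route the payload of selected streams. Adaptation fields, error handling, and PSI parsing are deliberately omitted, and the sketch does not reflect the paper's sub-module decomposition.

```python
# Software model of MPEG-2 TS PID filtering: 188-byte packets, 0x47 sync byte,
# 13-bit PID in header bytes 1-2. Adaptation fields and PSI parsing are omitted.
TS_PACKET = 188
SYNC_BYTE = 0x47

def demux(stream: bytes, wanted_pids: set[int]) -> dict[int, bytearray]:
    """Collect the raw payload bytes of the selected PIDs from a transport stream."""
    out = {pid: bytearray() for pid in wanted_pids}
    for off in range(0, len(stream) - TS_PACKET + 1, TS_PACKET):
        pkt = stream[off:off + TS_PACKET]
        if pkt[0] != SYNC_BYTE:
            continue                         # lost sync: skip this packet
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid in out:
            out[pid].extend(pkt[4:])         # bytes after the 4-byte header
    return out
```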

  16. Fully probabilistic control design in an adaptive critic framework.

    PubMed

    Herzallah, Randa; Kárný, Miroslav

    2011-12-01

    An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem, in particular very hard multivariate integration and approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper. PMID:21752597
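
    In symbols, the FPD objective described above can be written in the following generic form (the notation is ours, not necessarily the paper's): the controller's randomized decision rule is chosen to minimize the Kullback-Leibler divergence of the closed-loop joint density from the desired one.

```latex
\min_{\{c(u_t \mid x_{t-1})\}_{t=1}^{T}}
\mathrm{KL}\!\left(f \,\middle\|\, f^{I}\right)
= \int f(x_{1:T}, u_{1:T})
  \ln\!\frac{f(x_{1:T}, u_{1:T})}{f^{I}(x_{1:T}, u_{1:T})}
  \, \mathrm{d}x_{1:T}\, \mathrm{d}u_{1:T}
```

    Here f is the closed-loop joint density induced by the randomized controller c together with the system model, and f^I is the desired (ideal) closed-loop density; the adaptive critic proposed in the paper approximates the optimal value function that this minimization would otherwise require evaluating explicitly.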

  17. Design and Implementation of Telemedicine based on Java Media Framework

    NASA Astrophysics Data System (ADS)

    Xiong, Fengguang; Jia, Zhiyan

    After analyzing the importance of telemedicine and the problems it currently faces, this paper proposes a telemedicine system based on JMF (Java Media Framework) to design and implement the capture, compression, storage, transmission, reception, and playback of medical audio and video. The telemedicine system addresses existing problems such as unshared medical information, high platform dependence, and software incompatibilities. Experimental data show that the system has low hardware cost, supports straightforward transmission and storage, and is portable and powerful.

  18. The Modern Design of Experiments: A Technical and Marketing Framework

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    A new wind tunnel testing process under development at NASA Langley Research Center, called Modern Design of Experiments (MDOE), differs from conventional wind tunnel testing techniques on a number of levels. Chief among these is that MDOE focuses on the generation of adequate prediction models rather than high-volume data collection. Some cultural issues attached to this and other distinctions between MDOE and conventional wind tunnel testing are addressed in this paper.

  19. Support for development of a custom VLSI and FPGA logic chips based on a VHDL top-down design approach. Final report

    SciTech Connect

    Not Available

    1994-06-01

    The objective of this contract was to perform the beginning stages of development for two Application-Specific Integrated Circuits: CMOS-1 and CMOS-2D. This work includes specification writing, behavioral modeling, and initial design. In addition, the design work is required to be done in the VHSIC Hardware Description Language (VHDL). InnovASIC, Inc. completed all the tasks required by this contract: the specifications were written, the VHDL for CMOS-1 was completed, a behavioral model of CMOS-2D was written, and a system simulation was performed.

  20. Toward a More Flexible Web-Based Framework for Multidisciplinary Design

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Salas, A. O.

    1999-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.

  1. Knowledge-based design-flow management in the OOTIF framework

    NASA Astrophysics Data System (ADS)

    Li, Sikun; Guo, Yang; Zhao, Li Y.

    1996-03-01

    This paper introduces the main techniques adopted by the design-flow management subsystem in the OOTIF framework. OOTIF is an object-oriented, tool-integration CAD framework. In this paper, we present a static model of the design process based on flowcharts; we implement tool control through tool templates that make use of object-oriented concepts; we develop a design flowchart builder through which designers can express design intent when submitting tasks; and we build a knowledge base that makes intelligent tool selection and flow management possible. Some ideas for future development of OOTIF are also presented. With the help of the design-flow management system, designers are able to concentrate exclusively on the issues concerned with the creative and exploratory phases of design.

  2. The azaindole framework in the design of kinase inhibitors.

    PubMed

    Mérour, Jean-Yves; Buron, Frédéric; Plé, Karen; Bonnet, Pascal; Routier, Sylvain

    2014-01-01

    This review article illustrates the growing use of azaindole derivatives as kinase inhibitors and their contribution to drug discovery and innovation. The different protein kinases which have served as targets and the known molecules which have emerged from medicinal chemistry and Fragment-Based Drug Discovery (FBDD) programs are presented. The various synthetic routes used to access these compounds and the chemical pathways leading to their synthesis are also discussed. An analysis of their mode of binding based on X-ray crystallography data gives structural insights for the design of more potent and selective inhibitors. PMID:25460315

  3. Economical Implementation of a Filter Engine in an FPGA

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.

    2009-01-01

    A logic design has been conceived for a field-programmable gate array (FPGA) that would implement a complex system of multiple digital state-space filters. The main innovative aspect of this design lies in providing for reuse of parts of the FPGA hardware to perform different parts of the filter computations at different times, in such a manner as to enable the timely performance of all required computations in the face of limitations on available FPGA hardware resources. The implementation of the digital state-space filter involves matrix vector multiplications, which, in the absence of the present innovation, would ordinarily necessitate some multiplexing of vector elements and/or routing of data flows along multiple paths. The design concept calls for implementing vector registers as shift registers to simplify operand access to multipliers and accumulators, obviating both multiplexing and routing of data along multiple paths. Each vector register would be reused for different parts of a calculation. Outputs would always be drawn from the same register, and inputs would always be loaded into the same register. A simple state machine would control each filter. The output of a given filter would be passed to the next filter, accompanied by a "valid" signal, which would start the state machine of the next filter. Multiple filter modules would share a multiplication/accumulation arithmetic unit. The filter computations would be timed by use of a clock having a frequency high enough, relative to the input and output data rate, to provide enough cycles for matrix and vector arithmetic operations. This design concept could prove beneficial in numerous applications in which digital filters are used and/or vectors are multiplied by coefficient matrices. Examples of such applications include general signal processing, filtering of signals in control systems, processing of geophysical measurements, and medical imaging. For these and other applications, it could be
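
    For reference, one update of a digital state-space filter (the matrix-vector arithmetic that the design above time-multiplexes through a shared multiply-accumulate unit) can be modeled as follows; the matrices are arbitrary placeholders, and the cascade mimics the output-to-input chaining described above.

```python
import numpy as np

# Reference model of one digital state-space filter update: y = C x + D u followed
# by x = A x + B u. Matrices here are arbitrary placeholders, not the design's.
class StateSpaceFilter:
    def __init__(self, A, B, C, D):
        self.A, self.B, self.C, self.D = (np.asarray(m, float) for m in (A, B, C, D))
        self.x = np.zeros(self.A.shape[0])

    def step(self, u):
        y = self.C @ self.x + self.D @ u       # output from the current state
        self.x = self.A @ self.x + self.B @ u  # state update
        return y

# Two cascaded filters: each output feeds the next, as in the chained modules above.
f1 = StateSpaceFilter(A=[[0.9, 0.1], [0.0, 0.8]], B=[[1.0], [0.5]],
                      C=[[1.0, 0.0]], D=[[0.0]])
f2 = StateSpaceFilter(A=[[0.95]], B=[[1.0]], C=[[1.0]], D=[[0.0]])
y = f2.step(f1.step(np.array([1.0])))
```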

  4. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design

    NASA Astrophysics Data System (ADS)

    Srinivasan, Srikant; Broderick, Scott R.; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B.; Saxena, Surendra K.; Lebeau, James M.; Rajan, Krishna

    2015-12-01

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives.

  5. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design

    PubMed Central

    Srinivasan, Srikant; Broderick, Scott R.; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B.; Saxena, Surendra K.; LeBeau, James M.; Rajan, Krishna

    2015-01-01

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives. PMID:26681142

  6. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design.

    PubMed

    Srinivasan, Srikant; Broderick, Scott R; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B; Saxena, Surendra K; LeBeau, James M; Rajan, Krishna

    2015-01-01

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives. PMID:26681142

  7. A design framework for teleoperators with kinesthetic feedback

    NASA Technical Reports Server (NTRS)

    Hannaford, Blake

    1989-01-01

    The application of a hybrid two-port model to teleoperators with force and velocity sensing at the master and slave is presented. The interfaces between human operator and master, and between environment and slave, are ports through which the teleoperator is designed to exchange energy between the operator and the environment. By computing or measuring the input-output properties of this two-port network, the hybrid two-port model of an actual or simulated teleoperator system can be obtained. It is shown that the hybrid model (as opposed to other two-port forms) leads to an intuitive representation of ideal teleoperator performance and applies to several teleoperator architectures. Thus measured values of the h matrix, or values computed from a simulation, can be used to compare performance with the ideal. The frequency-dependent h matrix is computed from a detailed SPICE model of an actual system, and the method is applied to a proposed architecture.
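
    In the notation commonly used for such hybrid two-port models (our rendering; the abstract does not reproduce the equations), the h matrix relates the master-port force and slave-port velocity to the master-port velocity and environment force, and ideal transparency corresponds to a particular fixed form:

```latex
\begin{bmatrix} F_h \\ -v_e \end{bmatrix}
=
\begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}
\begin{bmatrix} v_h \\ F_e \end{bmatrix},
\qquad
H_{\mathrm{ideal}} =
\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}
```

    Here F_h and v_h are the force and velocity at the master (operator) port, and F_e and v_e are those at the slave (environment) port; with the ideal H, the operator feels exactly the environment force and the slave exactly reproduces the operator's velocity. Sign conventions vary between authors, so this should be read as one common convention rather than the paper's definitive formulation.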

  8. Analyzing a College Course That Adheres to the Universal Design for Learning (UDL) Framework

    ERIC Educational Resources Information Center

    Smith, Frances G.

    2012-01-01

    Universal design for learning (UDL) offers an educational framework for a college instructor that can maximize the design and delivery of course instruction by emphasizing multiple representations of materials, varied means for student expression, content and knowledge, and multiple ways to motivate and engage student learning. Through a UDL lens,…

  9. Facilitating Organizational Information Access in Global Network Environments: Towards a New Framework for Intranet Design.

    ERIC Educational Resources Information Center

    Detlor, Brian

    This paper proposes a user-centered framework for intranet design that is based on an understanding of people, their typical problems, information behaviors, and situated contexts. It is argued that by adopting such an approach, intranets can be designed which facilitate organizational information access and use. The first section of the paper…

  10. A Framework for the Flexible Content Packaging of Learning Objects and Learning Designs

    ERIC Educational Resources Information Center

    Lukasiak, Jason; Agostinho, Shirley; Burnett, Ian; Drury, Gerrard; Goodes, Jason; Bennett, Sue; Lockyer, Lori; Harper, Barry

    2004-01-01

    This paper presents a platform-independent method for packaging learning objects and learning designs. The method, entitled a Smart Learning Design Framework, is based on the MPEG-21 standard, and uses IEEE Learning Object Metadata (LOM) to provide bibliographic, technical, and pedagogical descriptors for the retrieval and description of learning…

  11. Designing Online Management Education Courses Using the Community of Inquiry Framework

    ERIC Educational Resources Information Center

    Weyant, Lee E.

    2013-01-01

    Online learning has grown as a program delivery option for many colleges and programs of business. The Community of Inquiry (CoI) framework consisting of three interrelated elements--social presence, cognitive presence, and teaching presence--provides a model to guide business faculty in their online course design. The course design of an online…

  12. The Customer Flow Toolkit: A Framework for Designing High Quality Customer Services.

    ERIC Educational Resources Information Center

    New York Association of Training and Employment Professionals, Albany.

    This document presents a toolkit to assist staff involved in the design and development of New York's one-stop system. Section 1 describes the preplanning issues to be addressed and the intended outcomes that serve as the framework for creation of the customer flow toolkit. Section 2 outlines the following strategies to assist in designing local…

  13. Framework for Organization and Control of Capstone Design/Build Projects

    ERIC Educational Resources Information Center

    Massie, Darrell D.; Massie, Cheryl A.

    2006-01-01

    Senior design capstone projects frequently require team members to self-organize for a project and then execute the design/build portion with limited resources. This is challenging for inexperienced students who struggle with technical as well as program management and team building issues. This paper outlines a general framework that can be used…

  14. Serious Games for Higher Education: A Framework for Reducing Design Complexity

    ERIC Educational Resources Information Center

    Westera, W.; Nadolski, R. J.; Hummel, H. G. K.; Wopereis, I. G. J. H.

    2008-01-01

    Serious games open up many new opportunities for complex skills learning in higher education. The inherent complexity of such games, though, requires large efforts for their development. This paper presents a framework for serious game design, which aims to reduce the design complexity at conceptual, technical and practical levels. The approach…

  15. Evaluating a Professional Development Framework to Empower Chemistry Teachers to Design Context-Based Education

    NASA Astrophysics Data System (ADS)

    Stolk, Machiel Johan; Bulte, Astrid; De Jong, Onno; Pilot, Albert

    2012-07-01

    Even experienced chemistry teachers require professional development when they are encouraged to become actively engaged in the design of new context-based education. This study briefly describes the development of a framework consisting of goals, learning phases, strategies and instructional functions, and how the framework was translated into a professional development programme intended to empower teachers to design context-based chemistry education. The programme consists of teaching a pre-developed context-based unit, followed by teachers designing an outline of a new context-based unit. The study investigates the process of teacher empowerment during the implementation of the programme. Data were obtained from meetings, classroom discussions and observations. The findings indicated that teachers became empowered to design new context-based units provided they had sufficient time and resources. The contribution of the framework to teacher empowerment is discussed.

  16. RIPOSTE: a framework for improving the design and analysis of laboratory-based research.

    PubMed

    Masca, Nicholas Gd; Hensor, Elizabeth Ma; Cornelius, Victoria R; Buffa, Francesca M; Marriott, Helen M; Eales, James M; Messenger, Michael P; Anderson, Amy E; Boot, Chris; Bunce, Catey; Goldin, Robert D; Harris, Jessica; Hinchliffe, Rod F; Junaid, Hiba; Kingston, Shaun; Martin-Ruiz, Carmen; Nelson, Christopher P; Peacock, Janet; Seed, Paul T; Shinkins, Bethany; Staples, Karl J; Toombs, Jamie; Wright, Adam Ka; Teare, M Dawn

    2015-01-01

    Lack of reproducibility is an ongoing problem in some areas of the biomedical sciences. Poor experimental design and a failure to engage with experienced statisticians at key stages in the design and analysis of experiments are two factors that contribute to this problem. The RIPOSTE (Reducing IrreProducibility in labOratory STudiEs) framework has been developed to support early and regular discussions between scientists and statisticians in order to improve the design, conduct and analysis of laboratory studies and, therefore, to reduce irreproducibility. This framework is intended for use during the early stages of a research project, when specific questions or hypotheses are proposed. The essential points within the framework are explained and illustrated using three examples (a medical equipment test, a macrophage study and a gene expression study). Sound study design minimises the possibility of bias being introduced into experiments and leads to higher quality research with more reproducible results. PMID:25951517

  17. Neural harmonic detection approaches for FPGA area efficient implementation

    NASA Astrophysics Data System (ADS)

    Dzondé, S. R. N.; Kom, C.-H.; Berviller, H.; Blondé, J.-P.; Flieller, D.; Kom, M.; Braun, F.

    2011-12-01

    This paper deals with new neural-network-based harmonic detection approaches that minimize the hardware resources needed for FPGA implementation. A simple type of neural network called Adaline is used to build an intelligent Active Power Filter control unit for harmonic current elimination and reactive power compensation. For this purpose, two different approaches called the Improved Three-Monophase (ITM) and Two-Phase Flow (TPF) methods are proposed. The ITM method corresponds to a simplified structure of the three-monophase method, whereas the TPF method derives from the Synchronous Reference Frame method. For both proposed methods, only 50% of the Adalines required by the original methods are used. The corresponding designs were implemented on an Altera Stratix II FPGA platform through the Altera DSP Builder® development tool. After analyzing those two methods with respect to performance and size criteria, a comparative study with the popular p-q method and also the direct method is reported. From there, one can notice that the p-q method is still the most powerful for three-phase compensation, but the TPF method is the fastest and the most compact in terms of size. An experimental result is shown to validate the feasibility of FPGA implementation of ANN-based harmonic extraction algorithms.
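
    A minimal software sketch of Adaline-based harmonic extraction is given below to make the idea concrete; the sampling rate, learning rate and synthetic load current are assumed values, and the sketch does not reproduce the ITM or TPF structures or their FPGA designs.

```python
# Minimal sketch of Adaline-based harmonic extraction (illustrative, not the ITM/TPF designs).
# The Adaline weights track the Fourier coefficients of the fundamental; subtracting the
# reconstructed fundamental from the load current leaves the harmonic content.
import numpy as np

fs, f0 = 10_000, 50            # sampling and fundamental frequencies (assumed values)
t = np.arange(0, 0.2, 1 / fs)
# Synthetic load current: fundamental plus 5th and 7th harmonics.
i_load = 10*np.sin(2*np.pi*f0*t) + 2*np.sin(2*np.pi*5*f0*t) + 1*np.sin(2*np.pi*7*f0*t)

w = np.zeros(2)                # Adaline weights: [sin, cos] amplitudes of the fundamental
eta = 0.01                     # LMS learning rate (tuning is application dependent)
i_harm = np.zeros_like(i_load)

for k, tk in enumerate(t):
    x = np.array([np.sin(2*np.pi*f0*tk), np.cos(2*np.pi*f0*tk)])  # regressors
    i_fund = w @ x             # current estimate of the fundamental component
    e = i_load[k] - i_fund     # estimation error
    w += eta * e * x           # LMS weight update
    i_harm[k] = e              # residual approximates the harmonic (compensation) reference

print("estimated fundamental amplitude:", np.hypot(*w).round(2))
```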

  18. FPGA-based architecture for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Rosário, João.; Nascimento, José M. P.; Véstias, Mário

    2014-10-01

    Hyperspectral instruments have been incorporated in satellite missions, providing data of high spectral resolution of the Earth. These data can be used in remote sensing applications such as target detection, hazard prevention, and monitoring oil spills, among others. In most of these applications, one of the requirements of paramount importance is the ability to give real-time or near real-time response. Recently, onboard processing systems have emerged in order to overcome the huge amount of data to transfer from the satellite to the ground station, thus avoiding delays between hyperspectral image acquisition and its interpretation. For this purpose, compact reconfigurable hardware modules, such as field programmable gate arrays (FPGAs), are widely used. This paper proposes a parallel FPGA-based architecture for endmember signature extraction. The method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA based on the Artix-7 FPGA programmable logic and tested using real hyperspectral data sets collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the Cuprite mining district in Nevada. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening new perspectives for onboard hyperspectral image processing.
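
    The following is a highly simplified, software-only sketch of projection-based endmember extraction in the spirit of VCA; it omits the SNR-dependent subspace projections of the full algorithm and is not the FPGA architecture of the record.

```python
# Simplified sketch of projection-based endmember extraction (VCA-like, software only).
import numpy as np

def extract_endmembers(X, p):
    """X: (bands, pixels) hyperspectral matrix; p: number of endmembers to extract."""
    bands, n = X.shape
    E = np.zeros((bands, p))
    rng = np.random.default_rng(0)
    f = rng.standard_normal(bands)          # initial random direction
    for i in range(p):
        # Project the direction onto the orthogonal complement of the endmembers found so far.
        if i > 0:
            Q, _ = np.linalg.qr(E[:, :i])
            f = f - Q @ (Q.T @ f)
        f /= np.linalg.norm(f)
        proj = np.abs(f @ X)                # |projection| of every pixel onto f
        E[:, i] = X[:, np.argmax(proj)]     # the extreme pixel is taken as an endmember
        f = rng.standard_normal(bands)      # fresh direction for the next iteration
    return E

# Toy data: three pure spectra mixed with random abundances.
pure = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.2], [0.1, 0.0, 1.0]]).T  # (bands=3, p=3)
A = np.random.default_rng(1).dirichlet(np.ones(3), size=500).T          # abundances
print(extract_endmembers(pure @ A, 3))
```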

  19. A multiobjective optimization framework for multicontaminant industrial water network design.

    PubMed

    Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge

    2011-07-01

    The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonistic objectives are considered: F(1), the freshwater flow-rate at the network entrance; F(2), the water flow-rate at the inlet of the regeneration units; and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by the Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented, followed by an innovative strategy based on the global equivalent cost (GEC) in freshwater that turns out to be more efficient for choosing a good network from a practical point of view. PMID:21435775
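
    Since a classical TOPSIS analysis is mentioned as one of the decision-making approaches, a generic toy TOPSIS ranking is sketched below; the candidate networks and criterion weights are hypothetical, and all three criteria are treated as minimized.

```python
# Toy TOPSIS ranking sketch (generic method; alternatives and weights are hypothetical).
import numpy as np

# Rows: candidate networks; columns: F1 freshwater flow, F2 regenerated flow, F3 interconnections.
# All three criteria are to be minimized in this example.
X = np.array([
    [60.0, 40.0, 12],
    [55.0, 55.0, 10],
    [70.0, 30.0,  9],
], dtype=float)
w = np.array([0.5, 0.3, 0.2])                # assumed criterion weights

R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
V = R * w                                    # weighted normalized matrix
ideal, anti = V.min(axis=0), V.max(axis=0)   # best/worst values for minimized criteria
d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)          # higher = closer to the ideal
print("ranking (best first):", np.argsort(-closeness))
```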

  20. Design theoretic analysis of three system modeling frameworks.

    SciTech Connect

    McDonald, Michael James

    2007-05-01

    This paper analyzes three simulation architectures from the context of modeling scalability to address System of System (SoS) and Complex System problems. The paper first provides an overview of the SoS problem domain and reviews past work in analyzing model and general system complexity issues. It then identifies and explores the issues of vertical and horizontal integration as well as coupling and hierarchical decomposition as the system characteristics and metrics against which the tools are evaluated. In addition, it applies Nam Suh's Axiomatic Design theory as a construct for understanding coupling and its relationship to system feasibility. Next it describes the application of MATLAB, Swarm, and Umbra (three modeling and simulation approaches) to modeling swarms of Unmanned Flying Vehicle (UAV) agents in relation to the chosen characteristics and metrics. Finally, it draws general conclusions for analyzing model architectures that go beyond those analyzed. In particular, it identifies decomposition along phenomena of interaction and modular system composition as enabling features for modeling large heterogeneous complex systems.

  1. Designing smart analytical data services for a personal health framework.

    PubMed

    Koumakis, Lefteris; Kondylakis, Haridimos; Chatzimina, Maria; Iatraki, Galatia; Argyropaidas, Panagiotis; Kazantzaki, Eleni; Tsiknakis, Manolis; Kiefer, Stephan; Marias, Kostas

    2016-01-01

    Information in the healthcare domain, and in particular personal health record information, is heterogeneous by nature. Clinical, lifestyle and environmental data and personal preferences are stored and managed within such platforms. As a result, significant information from such diverse data is difficult to deliver, especially to non-IT users such as patients, physicians or managers. Another issue related to management and analysis is the data volume, which keeps increasing and makes efficient data visualization and analysis methods mandatory. The objective of this work is to present the architectural design for seamless integration and intelligent analysis of distributed and heterogeneous clinical information in the PHR context, as a result of a requirements elicitation process in the iManageCancer project. This systemic approach aims to assist health-care professionals in orienting themselves in the disperse information space and to enhance their decision-making capabilities, and to encourage patients to take an active role by managing their health information and interacting with health-care professionals. PMID:27225566

  2. a Novel Framework for Incorporating Sustainability Into Biomass Feedstock Design

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, G.; Negri, C.

    2012-12-01

    There is a strong societal need to evaluate and understand the sustainability of biofuels, especially due to the significant increases in production mandated by many countries, including the United States. Biomass feedstock production is an important contributor to environmental, social and economic impacts from biofuels. We present a systems approach where the agricultural, urban, energy and environmental sectors are considered as components of a single system and environmental liabilities are used as recoverable resources for biomass feedstock production. A geospatial analysis evaluating marginal land and degraded water resources to improve feedstock productivity with concomitant environmental restoration was conducted for the major corn-producing states in the US. The extent and availability of these resources were assessed, and geospatial techniques were used to identify promising opportunities to implement this approach. Utilizing different sources of marginal land (roadway buffers, contaminated land) could result in a 7-fold increase in land availability for feedstock production and provide ecosystem services such as water quality improvement and carbon sequestration. Spatial overlap between degraded water and marginal land resources was found to be as high as 98% and could maintain sustainable feedstock production on marginal lands through the supply of water and nutrients. Multi-objective optimization was used to quantify the tradeoffs between net revenue, improvements in water quality and carbon sequestration at the farm scale using this design. Results indicated that there is an initial opportunity where land that is marginally productive for row crops and of marginal value for conservation purposes could be used to grow bioenergy crops such that water quality and carbon sequestration benefits are obtained.

  3. Network architecture of storage extension next generation SONET/SDH-based and GFP interface design of SONET/SDH with FPGA

    NASA Astrophysics Data System (ADS)

    Qin, Leihua; Zeng, Dong; Liu, Gang; Jiang, Minghua

    2005-11-01

    As storage environments and storage area networks (SANs) grow, enterprises increasingly need to extend data transfers beyond the confines of the enterprise over longer distances, such as metropolitan area networks (MANs) and wide area networks (WANs), for disaster-recovery and business-continuity applications. By using virtual concatenation (VCAT), the link capacity adjustment scheme (LCAS) and the Generic Framing Procedure (GFP), next-generation SONET/SDH can move SCSI commands and block-level data over long distances in an efficient and cost-effective manner. This paper analyses the limitations of traditional SONET/SDH for storage services and the new characteristics of next-generation SONET/SDH. The design approach and steps for a GFP interface based on an SOPC are proposed; furthermore, the architecture of SAN extension based on next-generation SONET/SDH is presented.

  4. Rad-Hard/HI-REL FPGA

    NASA Technical Reports Server (NTRS)

    Wang, Jih-Jong; Cronquist, Brian E.; McGowan, John E.; Katz, Richard B.

    1997-01-01

    The goals for a radiation-hardened (RAD-HARD) and high-reliability (HI-REL) field programmable gate array (FPGA) are described. The first qualified manufacturer list (QML) radiation-hardened FPGAs, the RH1280 and RH1020, were developed. The total radiation dose and single event effects observed on the antifuse FPGA RH1280 are reported. Tradeoffs and the limitations in single event upset hardening are discussed.

  5. Tethered Forth system for FPGA applications

    NASA Astrophysics Data System (ADS)

    Goździkowski, Paweł; Zabołotny, Wojciech M.

    2013-10-01

    This paper presents a tethered Forth system dedicated to testing and debugging of FPGA-based electronic systems. Use of the Forth language makes it possible to interactively develop and run complex testing or debugging routines. The solution is based on a small, 16-bit soft-core CPU used to implement the Forth Virtual Machine. Thanks to the use of the tethered Forth model, it is possible to minimize the usage of internal RAM in the FPGA. The function of the intelligent terminal, which is an essential part of the tethered Forth system, may be fulfilled by a standard PC or a smartphone. The system is implemented in Python (the software for the intelligent terminal) and in VHDL (the IP core for the FPGA), so it can be easily ported to different hardware platforms. The connection between the terminal and the FPGA may be established and disconnected many times without disturbing the state of the FPGA-based system. The presented system has been verified in hardware and may be used as a tool for debugging, testing and even implementing control algorithms for FPGA-based systems.

  6. Printed Circuit Board Design (PCB) with HDL Designer

    NASA Technical Reports Server (NTRS)

    Winkert, Thomas K.; LaFourcade, Teresa

    2004-01-01

    Contents include the following: PCB design with HDL designer, design process and schematic capture - symbols and diagrams: 1. Motivation: time savings, money savings, simplicity. 2. Approach: use single tool PCB for FPGA design, more FPGA designs than PCB designers. 3. Use HDL designer for schematic capture.

  7. A framework for analyzing interdisciplinary tasks: implications for student learning and curricular design.

    PubMed

    Gouvea, Julia Svoboda; Sawtelle, Vashti; Geller, Benjamin D; Turpen, Chandra

    2013-06-01

    The national conversation around undergraduate science instruction is calling for increased interdisciplinarity. As these calls increase, there is a need to consider the learning objectives of interdisciplinary science courses and how to design curricula to support those objectives. We present a framework that can help support interdisciplinary design research. We developed this framework in an introductory physics for life sciences majors (IPLS) course for which we designed a series of interdisciplinary tasks that bridge physics and biology. We illustrate how this framework can be used to describe the variation in the nature and degree of interdisciplinary interaction in tasks, to aid in redesigning tasks to better align with interdisciplinary learning objectives, and finally, to articulate design conjectures that posit how different characteristics of these tasks might support or impede interdisciplinary learning objectives. This framework will be useful for both curriculum designers and education researchers seeking to understand, in more concrete terms, what interdisciplinary learning means and how integrated science curricula can be designed to support interdisciplinary learning objectives. PMID:23737627

  8. Effect of framework design on fracture resistance of zirconium oxide posterior fixed partial dentures

    PubMed Central

    Salimi, Hadi; Mosharraf, Ramin; Savabi, Omid

    2012-01-01

    Introduction: The effect of framework design modifications in all-ceramic systems is not fully understood. The aim of this investigation was to evaluate the effect of different framework designs on the fracture resistance of zirconium oxide posterior fixed partial dentures (FPD). Materials and Methods: Thirty-two posterior zirconia FPD cores were manufactured to replace a second premolar. The specimens were divided into four groups; I: 3 × 3 connector and standard design, II: 3 × 3 connector and modified design, III: 4 × 4 connector dimension and standard design, and IV: 4 × 4 connector dimension and modified design. After storing for one week in artificial saliva and thermocycling (2000 cycles, 5-55°C), the specimens were loaded in a universal testing machine at a constant cross-head speed of 0.5 mm/min until failure occurred. The Weibull, Kruskal-Wallis, and Mann-Whitney tests were used for statistical analysis (α = 0.05). Results: The mean fracture resistance of the groups with a 4 × 4 mm connector was significantly higher than that of the groups with a 3 × 3 mm connector (P < 0.001). Although the fracture resistance of the modified frameworks was increased in the present study (1.1 times), it was not significantly different from that of the anatomic specimens (P = 0.327). Conclusions: The fracture resistance of the zirconia posterior fixed partial dentures was significantly affected by the connector size; it was not affected by the framework modification. PMID:23559956

  9. A Framework for Analyzing Interdisciplinary Tasks: Implications for Student Learning and Curricular Design

    PubMed Central

    Gouvea, Julia Svoboda; Sawtelle, Vashti; Geller, Benjamin D.; Turpen, Chandra

    2013-01-01

    The national conversation around undergraduate science instruction is calling for increased interdisciplinarity. As these calls increase, there is a need to consider the learning objectives of interdisciplinary science courses and how to design curricula to support those objectives. We present a framework that can help support interdisciplinary design research. We developed this framework in an introductory physics for life sciences majors (IPLS) course for which we designed a series of interdisciplinary tasks that bridge physics and biology. We illustrate how this framework can be used to describe the variation in the nature and degree of interdisciplinary interaction in tasks, to aid in redesigning tasks to better align with interdisciplinary learning objectives, and finally, to articulate design conjectures that posit how different characteristics of these tasks might support or impede interdisciplinary learning objectives. This framework will be useful for both curriculum designers and education researchers seeking to understand, in more concrete terms, what interdisciplinary learning means and how integrated science curricula can be designed to support interdisciplinary learning objectives. PMID:23737627

  10. Study, design and integration of an FPGA-based system for the time-of-flight calculation applied to PET equipment

    NASA Astrophysics Data System (ADS)

    Aguilar Talens, D. Albert

    , the initial time measurement results are presented, achieving time resolutions below 100 ps for multiple channels. Once characterized, the system is tested with a breast PET prototype, whose detector technology is based on Position Sensitive PhotoMultiplier Tubes (PSPMTs), performing TOF measurements for different scenarios. After this point, tests based on two Silicon Photomultiplier (SiPM) modules were carried out. SiPMs are immune to magnetic fields, among other advantages. This is an important feature since there is significant interest in combining PET and Magnetic Resonance (MR). Each of the two detector modules used is composed of a single crystal pixel. The electronic conditioning circuits are designed, taking into account the most influential parameters in time resolution. After these results, an array of 144 SiPMs is tested, optimizing several parameters which directly impact the system performance. Having demonstrated the system capabilities, an optimization process is devised. On the one hand, TDC measurements are enhanced to a precision of 40 ps. On the other hand, a coincidence algorithm is developed, which is responsible for identifying detector pairs that have registered an event within a certain time window. Finally, the thesis conclusions and future work are presented, followed by the references. A list of publications and attended congresses is also provided.

  11. A Multi-Alphabet Arithmetic Coding Hardware Implementation for Small FPGA Devices

    NASA Astrophysics Data System (ADS)

    Biasizzo, Anton; Novak, Franc; Korošec, Peter

    2013-01-01

    Arithmetic coding is a lossless compression algorithm with variable-length source coding. It is more flexible and efficient than the well-known Huffman coding. In this paper we present a non-adaptive FPGA implementation of multi-alphabet arithmetic coding with a separate statistical model of the data source. The alphabet of the data source is a 256-symbol ASCII character set and does not include the special end-of-file symbol. No context switching is used in the proposed design, which gives maximal throughput without pipelining. We have synthesized the design for Xilinx FPGA devices and used their built-in hardware resources.
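
    To make the interval-narrowing idea behind arithmetic coding concrete, a toy, exact-arithmetic encoder for a fixed symbol model is sketched below; it deliberately omits the integer renormalization and all hardware considerations of the FPGA design.

```python
# Toy arithmetic coder for a fixed (non-adaptive) symbol model.  This illustrates the
# interval-narrowing idea only; the FPGA design in the record uses integer arithmetic
# with renormalization, which this sketch deliberately omits.
from fractions import Fraction

def encode(message, freqs):
    total = sum(freqs.values())
    # Cumulative sub-interval [lo, hi) for every symbol of the static model.
    cum, lo_map = 0, {}
    for s, f in freqs.items():
        lo_map[s] = (Fraction(cum, total), Fraction(cum + f, total))
        cum += f
    lo, hi = Fraction(0), Fraction(1)
    for s in message:
        s_lo, s_hi = lo_map[s]
        width = hi - lo
        lo, hi = lo + width * s_lo, lo + width * s_hi  # narrow the interval
    return lo, hi          # any number in [lo, hi) identifies the message

freqs = {"a": 5, "b": 2, "c": 1}       # static frequency table (assumed)
lo, hi = encode("abac", freqs)
print(float(lo), float(hi))
```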

  12. Integrated Information Framework for Intelligent Cooperative Design Based on Multi-Agent System and XML

    NASA Astrophysics Data System (ADS)

    Yan, Cao; Lina, Yang; Yanli, Yang; Hua, Chen

    2008-11-01

    To meet the requirements of distributed cooperation in various industries, an architecture for cooperative design based on a multi-agent system over the Internet is proposed by analyzing the cooperative design pattern, and its key technologies, such as product development process management, information management, and conflict resolution, are discussed. An integrated information framework of loosely coupled modules is proposed, which supports a concurrent work mode based on the multi-agent system. Integrating Web service and agent techniques into this framework makes it possible to organize many cooperative members and their activities effectively, even though they are distributed across different places. Finally, a Web-based system development mode is put forward.

  13. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes, such as Design of Experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and the object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.

  14. Fast semivariogram computation using FPGA architectures

    NASA Astrophysics Data System (ADS)

    Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang

    2015-02-01

    The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for an efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates, measured in a few hundred megahertz, but they can perform tens of thousands of calculations per clock cycle while operating at low power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. Anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T Development Kit, which utilizes the Virtex5 FPGA. Medical image data from MRI scans are utilized for the experiments.
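
    A plain software reference for the semivariance definition quoted above is sketched below (horizontal lags only, for brevity); it is the O(n²)-style baseline that the FPGA architecture accelerates, not the FPGA design itself.

```python
# Software reference for the semivariance definition:
# gamma(h) = 0.5 * E[(z(x) - z(x + h))^2], computed here along image rows only for brevity.
import numpy as np

def semivariogram_rows(img, max_lag):
    img = img.astype(float)
    gamma = np.zeros(max_lag + 1)
    for h in range(1, max_lag + 1):
        diffs = img[:, h:] - img[:, :-h]       # all horizontal pixel pairs at lag h
        gamma[h] = 0.5 * np.mean(diffs ** 2)   # semivariance at this lag
    return gamma

window = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(semivariogram_rows(window, max_lag=8).round(1))
```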

  15. A framework for development of an intelligent system for design and manufacturing of stamping dies

    NASA Astrophysics Data System (ADS)

    Hussein, H. M. A.; Kumar, S.

    2014-07-01

    An integration of computer aided design (CAD), computer aided process planning (CAPP) and computer aided manufacturing (CAM) is required for development of an intelligent system to design and manufacture stamping dies in sheet metal industries. In this paper, a framework for development of an intelligent system for design and manufacturing of stamping dies is proposed. In the proposed framework, the intelligent system is structured in form of various expert system modules for different activities of design and manufacturing of dies. All system modules are integrated with each other. The proposed system takes its input in form of a CAD file of sheet metal part, and then system modules automate all tasks related to design and manufacturing of stamping dies. Modules are coded using Visual Basic (VB) and developed on the platform of AutoCAD software.

  16. Generalized framework for interatomic potential design: Application to Fe–He system

    SciTech Connect

    Tschopp, Mark A.; Solanki, K. N.; Baskes, Michael I.; Gao, Fei; Sun, Xin; Horstemeyer, Mark

    2012-06-01

    Interatomic potentials play an important role in the physics of nanoscale structures. However, while interatomic potentials are designed for a specific purpose, they are often used for studying mechanisms outside of the intended purpose. Hence, a generalized framework for interatomic potential design is developed that allows a researcher to tailor an interatomic potential towards specific properties. This methodology produces an interatomic potential design map, which contains multiple interatomic potentials and is capable of exploring different nanoscale phenomena observed in experiments. The methodology is efficient and provides a means of assessing uncertainties in nanostructure properties due to the interatomic potential fitting process. As an initial example, an Fe-He interatomic potential design map is developed using this framework to show its profound effect.

  17. Improved Approach for Utilization of FPGA Technology into DAQ, DSP, and Computing Applications

    SciTech Connect

    Isenhower, Larry Donald

    2009-01-28

    Innovation Partners proposed and successfully demonstrated in this SBIR Phase I grant a software/hardware co-design approach to reduce both the difficulty and time to implement Field Programmable Gate Array (FPGA) solutions for data acquisition and specialized computational applications. FPGAs can require excessive programming time and specialized knowledge, both of which are greatly reduced by the company's solution. Not only are FPGAs ideal for DAQ and embedded solutions, they can also be the best choice for specialized signal processing, replacing Digital Signal Processors (DSPs). By allowing FPGA programming to be done in C with the equivalent of a simple compilation, algorithm changes and improvements can be implemented quickly, decreasing life-cycle costs and allowing new FPGA designs to be substituted while staying above the technological details.

  18. Radiometric Calibration of Mars HiRISE High Resolution Imagery Based on Fpga

    NASA Astrophysics Data System (ADS)

    Hou, Yifan; Geng, Xun; Xing, Shuai; Tang, Yonghe; Xu, Qing

    2016-06-01

    Due to the large data amount of HiRISE imagery, the traditional radiometric calibration method is not able to meet fast processing requirements. To solve this problem, a radiometric calibration system for HiRISE imagery based on a field programmable gate array (FPGA) is designed. The montage gap between two channels caused by gray inconsistency is removed through histogram matching. The calibration system is composed of an FPGA and a DSP, which makes full use of the parallel processing ability of the FPGA and the fast computation and flexible control characteristics of the DSP. Experimental results show that the designed system consumes fewer hardware resources and that the real-time processing ability of radiometric calibration of HiRISE imagery is improved.
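
    The histogram-matching step mentioned above can be summarised by the following software sketch, which maps one channel's gray levels so that its cumulative histogram follows the other channel's; it is a generic reference, not the FPGA/DSP implementation of the record.

```python
# Sketch of histogram matching between two image channels (software reference only).
import numpy as np

def histogram_match(src, ref, levels=256):
    src_hist, _ = np.histogram(src, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(ref, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / src.size
    ref_cdf = np.cumsum(ref_hist) / ref.size
    # For each source level, find the reference level with the closest CDF value.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)
    return lut[src]

rng = np.random.default_rng(0)
ch_a = rng.integers(40, 200, size=(128, 128))       # synthetic channel with a gray offset
ch_b = rng.integers(20, 180, size=(128, 128))
matched = histogram_match(ch_a, ch_b)
print(matched.min(), matched.max())
```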

  19. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746
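
    For readers who want the accumulator idea behind the Hough transform in concrete form, a plain sequential reference is sketched below; the paper's contribution is a parallel, pipelined FPGA version, which the sketch does not attempt to reproduce.

```python
# Sequential reference Hough transform for straight lines (accumulator voting only).
import numpy as np

def hough_lines(edges, n_theta=180):
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)   # rho in [-diag, diag)
    ys, xs = np.nonzero(edges)                             # candidate edge pixels
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1          # vote for every (rho, theta)
    return acc, diag

edges = np.zeros((64, 64), dtype=bool)
np.fill_diagonal(edges, True)                              # a single 45-degree line
acc, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print("strongest line: rho =", rho_idx - diag, ", theta =", theta_idx, "deg")
```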

  20. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746

  1. A framework of cloud supported collaborative design in glass lens moulds based on aspheric measurement

    NASA Astrophysics Data System (ADS)

    Zhu, Yongjian; Wang, Yu; Na, Jingxin; Zhi, Yanan; Fan, Yufeng

    2013-09-01

    Aspheric mould design includes top-down design and reversal design. In this paper, a new framework for reversal design is proposed, combined with cloud-supported collaborative design (CSCD) based on aspheric measurement. The framework is a collaborative platform composed of eight modules: the computerized aspheric precision measurement module (CAPM), computer-aided optical design of the aspheric lens system (CAOD), computer-aided design of the lens mould (CADLM), FEM (finite element method) simulation of lens molding (FEMLM), computer-aided manufacture of lenses and moulds (CAMLM), the measurement data analysis module (MDAM), the optical product lifecycle management module (OPLM) and the cloud computing network module (CCNM). In this framework, remote clients send an improved requirement or a fabrication demand about an optical lens system through CCNM, which transfers this signal to OPLM. In OPLM, one main server is in charge of the task distribution and collaborative work of the other six modules. The first measurement data of the aspheric lens are produced by clients or by the proposed CAPM platform, then sent to CAOD for optimization, and the electronic drawings of the lens moulds are generated in the CADLM module. According to the design drawings, FEMLM provides the lens-molding simulation parameters through FEM software. The simulation data are used for the second design of the moulds in the CADLM module. The moulds can then be fabricated in CAMLM by an ultra-precision machine, and the aspheric lens can also be produced by a lens-molding machine in CAMLM. Finally, the final shape of the aspheric lens can be measured in CAPM and the data analysis conducted in the MDAM module. Through the proposed framework, all the work described above can be performed in a coordinated manner, and the optimum design data of the lens mould can be obtained, saved and shared by the whole work team.

  2. SAD5 Stereo Correlation Line-Striping in an FPGA

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopoulos, Arin C.

    2011-01-01

    High precision SAD5 stereo computations can be performed in an FPGA (field-programmable gate array) at much higher speeds than possible in a conventional CPU (central processing unit), but this uses large amounts of FPGA resources that scale with image size. Of the two key resources in an FPGA, Slices and BRAM (block RAM), Slices scale linearly with image size in the new algorithm, and BRAM scales quadratically with image size. An approach was developed to trade latency for BRAM by sub-windowing the image vertically into overlapping strips and stitching the outputs together to create a single continuous disparity output. In stereo, the general rule of thumb is that the disparity search range must be 1/10 the image size. In the new algorithm, BRAM usage scales linearly with disparity search range and again linearly with line width. So a doubling of image size, say from 640 to 1,280, would in the previous design mean an effective 4× increase in BRAM usage: 2× for line width, and 2× again for disparity search range. The minimum strip size is twice the search range, and it will produce an output strip width equal to the disparity search range. So, assuming a disparity search range of 1/10 image width, 10 sequential runs of the minimum strip size would produce a full output image. This approach allowed the innovators to fit 1,280 × 960 SAD5 stereo disparity in less than 80 BRAMs and 52k Slices on a Virtex 5 LX330T, 25% and 24% of resources, respectively. Using a 100-MHz clock, this build would perform stereo at 39 Hz. Of particular interest to JPL is that there is a flight-qualified version of the Virtex 5: this could produce stereo results, even for very large image sizes, three orders of magnitude faster than could be computed on the PowerPC 750 flight computer. The work covered in the report allows the stereo algorithm to run on much larger images than before, and using much less BRAM. This opens up choices for a smaller flight FPGA (which saves power and space), or for other algorithms
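
    As background on what a SAD stereo correlator computes per pixel, a plain software block-matching sketch is given below; it does not reproduce the SAD5 pipeline or the strip-stitching scheme of the record, and the window size, search range and synthetic images are assumptions.

```python
# Plain software SAD block matching for reference (not the SAD5 FPGA pipeline).
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1, x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))           # disparity with the lowest SAD cost
    return disp

rng = np.random.default_rng(0)
right_img = rng.integers(0, 255, size=(32, 64), dtype=np.uint8)
left_img = np.roll(right_img, 4, axis=1)                 # synthetic 4-pixel horizontal shift
d = sad_disparity(left_img, right_img)
print(np.bincount(d[2:-2, 18:-2].ravel()).argmax())      # expect a disparity of 4
```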

  3. Synthesis of blind source separation algorithms on reconfigurable FPGA platforms

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Szu, Harold H.

    2005-03-01

    Application-Specific Integrated Circuit (ASIC) using standard-height cells. ICA is an algorithm that can solve BSS problems by carrying out all-order statistical, decorrelation-based transforms, in which an assumption is made that neighborhood pixels share the same but unknown mixing matrix A. In this paper, we continue our investigation of the design challenges of firmware approaches to smart algorithms. We think two levels of parallelization can be explored, including pixel-based parallelization and the parallelization of the restoration algorithm performed at each pixel. This paper focuses on the latter, and we use ICA as an example to explain the design and implementation methods. It is well known that the capacity constraints of a single FPGA have limited the implementation of many complex algorithms, including ICA. Using the reconfigurability of FPGAs, we show in this paper how to manipulate the FPGA-based system to provide extra computing power for the parallelized ICA algorithm with limited FPGA resources. The synthesis aiming at the Pilchard reconfigurable FPGA platform is reported. The Pilchard board is embedded with a single Xilinx VIRTEX 1000E FPGA and transfers data directly to the CPU on the 64-bit memory bus at a maximum frequency of 133 MHz. Both the feasibility/performance evaluations and the experimental results validate the effectiveness and practicality of this synthesis, which can be extended to the spatial-variant jitter restoration for micro-UAV deployment.

  4. Rational design of metal-organic frameworks with anticipated porosities and functionalities

    SciTech Connect

    Zhang, MW; Bosch, M; Gentle, T; Zhou, HC

    2014-01-01

    Metal-organic frameworks (MOFs) have emerged as a new category of porous materials that have intriguing structures and diverse applications. Even though the early discovery of new MOFs appears to be serendipitous, much effort has been made to reveal their structure-property relationships for the purpose of rationally designing novel frameworks with expected properties. To date, much progress has been made in rationalizing the design and synthesis of MOFs. This highlight review will outline the recent advances on this topic from both our group and others and provide a systematic overview of different methods for the rational design of MOFs with desired porosities and functionalities. In this review, we categorize the recent efforts at rational MOF design into two different approaches: a structural approach and a functional approach.

  5. Using Learning Design as a Framework for Supporting the Design and Reuse of OER

    ERIC Educational Resources Information Center

    Conole, Grainne; Weller, Martin

    2008-01-01

    The paper will argue that adopting a learning design methodology may provide a vehicle for enabling better design and reuse of Open Educational Resources (OERs). It will describe a learning design methodology, which is being developed and implemented at the Open University in the UK. The aim is to develop a "pick and mix" learning design toolbox…

  6. Beyond a Definition: Toward a Framework for Designing and Specifying Mentoring Models

    ERIC Educational Resources Information Center

    Dawson, Phillip

    2014-01-01

    More than three decades of mentoring research has yet to converge on a unifying definition of mentoring; this is unsurprising given the diversity of relationships classified as mentoring. This article advances beyond a definition toward a common framework for specifying mentoring models. Sixteen design elements were identified from the literature…

  7. Designing Supply Chains for Sustainability by the P-graph Framework

    EPA Science Inventory

    A computer-aided methodology for designing sustainable supply chains is presented using the P-graph framework to develop supply chain structures which are analyzed using cost, the cost of producing electricity, and two sustainability metrics: ecological footprint and emergy. They...

  8. A KBE-enabled design framework for cost/weight optimization study of aircraft composite structures

    NASA Astrophysics Data System (ADS)

    Wang, H.; La Rocca, G.; van Tooren, M. J. L.

    2014-10-01

    Traditionally, minimum weight is the objective when optimizing airframe structures. This optimization, however, does not consider the manufacturing cost, which actually determines the profit of the airframe manufacturer. To this purpose, a design framework has been developed that is able to perform cost/weight multi-objective optimization of an aircraft component, including large topology variations of the structural configuration. The key element of the proposed framework is a dedicated knowledge based engineering (KBE) application, called the multi-model generator, which enables modelling very different product configurations and variants and extracting all data required to feed the weight and cost estimation modules in a fully automated fashion. The weight estimation method developed in this research work uses finite element analysis to calculate the internal stresses of the structural elements and an analytical composite plate sizing method to determine their minimum required thicknesses. The manufacturing cost estimation module was developed on the basis of a cost model available in the literature. The capability of the framework was successfully demonstrated by designing and optimizing the composite structure of a business jet rudder. The case study indicates that the design framework is able to find the Pareto optimal set for minimum structural weight and manufacturing cost in a very quick way. Based on the Pareto set, the rudder manufacturer is in a position to conduct internal trade-off studies between minimum weight and minimum cost solutions, as well as to offer the OEM a full set of optimized options to choose from, rather than one feasible design.
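
    To make the notion of a Pareto-optimal weight/cost set concrete, a generic Pareto-front filter is sketched below; the design points are invented and are not results from the rudder study.

```python
# Generic Pareto-front filter for two minimized objectives (weight, cost).
import numpy as np

def pareto_front(points):
    """points: (n, 2) array of (weight, cost); returns the non-dominated subset."""
    keep = []
    for i, p in enumerate(points):
        dominated = any((q <= p).all() and (q < p).any()
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return points[keep]

designs = np.array([[120, 9.5], [110, 11.0], [130, 8.0], [125, 10.0], [118, 12.0]])
print(pareto_front(designs))   # dominated designs are filtered out
```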

  9. A Problem-Solving Conceptual Framework and Its Implications in Designing Problem-Posing Tasks

    ERIC Educational Resources Information Center

    Singer, Florence Mihaela; Voica, Cristian

    2013-01-01

    The links between the mathematical and cognitive models that interact during problem solving are explored with the purpose of developing a reference framework for designing problem-posing tasks. When the process of solving is a successful one, a solver successively changes his/her cognitive stances related to the problem via transformations that…

  10. Computer Mediated Communication in the Universal Design for Learning Framework for Preparation of Special Education Teachers

    ERIC Educational Resources Information Center

    Basham, James D.; Lowrey, K. Alisa; deNoyelles, Aimee

    2010-01-01

    This study investigated the Universal Design for Learning (UDL) framework as a basis for a bi-university computer mediated communication (CMC) collaborative project. Participants in the research included 78 students from two special education programs enrolled in teacher education courses. The focus of the investigation was on exploring the…

  11. Using the Universal Design for Learning Framework to Support Culturally Diverse Learners

    ERIC Educational Resources Information Center

    Chita-Tegmark, Meia; Gravel, Jenna W.; Serpa, Maria de Lourdes B.; Domings, Yvonne; Rose, David H.

    2012-01-01

    This article describes the mechanism through which cultural variability is a source of learning differences. The authors argue that the Universal Design for Learning can be extended to capture the way learning is influenced by cultural variability, and show how the UDL framework might be used to create a curriculum that is responsive to this…

  12. A Buyer Behaviour Framework for the Development and Design of Software Agents in E-Commerce.

    ERIC Educational Resources Information Center

    Sproule, Susan; Archer, Norm

    2000-01-01

    Software agents are computer programs that run in the background and perform tasks autonomously as delegated by the user. This paper blends models from marketing research and findings from the field of decision support systems to build a framework for the design of software agents to support e-commerce buying applications. (Contains 35…

  13. Designing Energy Supply Chains with the P-graph Framework under Cost Constraints and Sustainability Considerations

    EPA Science Inventory

    A computer-aided methodology for designing sustainable supply chains is presented using the P-graph framework to develop supply chain structures which are analyzed using cost, the cost of producing electricity, and two sustainability metrics: ecological footprint and emergy. They...

  14. Designing Multi-Channel Web Frameworks for Cultural Tourism Applications: The MUSE Case Study.

    ERIC Educational Resources Information Center

    Garzotto, Franca; Salmon, Tullio; Pigozzi, Massimiliano

    A framework for the design of multi-channel (MC) applications in the cultural tourism domain is presented. Several heterogeneous interface devices are supported including location-sensitive mobile units, on-site stationary devices, and personalized CDs that extend the on-site experience beyond the visit time thanks to personal memories gathered…

  15. Using the DSAP Framework to Guide Instructional Design and Technology Integration in BYOD Classrooms

    ERIC Educational Resources Information Center

    Wasko, Christopher W.

    2016-01-01

    The purpose of this study was to determine the suitability of the DSAP Framework to guide instructional design and technology integration for teachers piloting a BYOD (Bring Your Own Device) initiative and to measure the impact the initiative had on the amount and type of technology used in pilot classrooms. Quantitative and qualitative data were…

  16. Prospective Secondary Teachers Repositioning by Designing, Implementing and Testing Mathematics Learning Objects: A Conceptual Framework

    ERIC Educational Resources Information Center

    Mgombelo, Joyce R.; Buteau, Chantal

    2009-01-01

    This article describes a conceptual framework developed to illuminate how prospective teachers' learning experiences are shaped by didactic-sensitive activities in departments of mathematics. We draw from the experiences of prospective teachers in the Department of Mathematics at our institution in designing, implementing (i.e. computer…

  17. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 8 2011-10-01 2011-10-01 false How did the Service design the National Framework? 86.102 Section 86.102 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM BOATING INFRASTRUCTURE GRANT (BIG) PROGRAM...

  18. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false How did the Service design the National Framework? 86.102 Section 86.102 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM BOATING INFRASTRUCTURE GRANT (BIG) PROGRAM...

  19. The Role of a Reusable Assessment Framework in Designing Computer-Based Learning Environments.

    ERIC Educational Resources Information Center

    Park, Young; Bauer, Malcolm

    This paper introduces the concept of a reusable assessment framework (RAF). An RAF contains a library of linked assessment design objects that express: (1) specific set of proficiencies (i.e. the knowledge, skills, and abilities of students for a given content or skill area); (2) the types of evidence that can be used to estimate those…

  20. A Framework for Designing a Research-Based "Maths Counsellor" Teacher Programme

    ERIC Educational Resources Information Center

    Jankvist, Uffe Thomas; Niss, Mogens

    2015-01-01

    This article addresses one way in which decades of mathematics education research results can inform practice, by offering a framework for designing and implementing an in-service teacher education programme for upper secondary mathematics teachers in Denmark. The programme aims to educate a "task force" of so-called "maths…

  1. Designing a Virtual Olympic Games Framework by Using Simulation in Web 2.0 Technologies

    ERIC Educational Resources Information Center

    Stoilescu, Dorian

    2013-01-01

    Instructional simulation had major difficulties in the past for offering limited possibilities in practice and learning. This article proposes a link between instructional simulation and Web 2.0 technologies. More exactly, I present the design of the Virtual Olympic Games Framework (VOGF), as a significant demonstration of how interactivity in…

  2. A Framework for Analyzing Interdisciplinary Tasks: Implications for Student Learning and Curricular Design

    ERIC Educational Resources Information Center

    Gouvea, Julia Svoboda; Sawtelle, Vashti; Geller, Benjamin D.; Turpen, Chandra

    2013-01-01

    The national conversation around undergraduate science instruction is calling for increased interdisciplinarity. As these calls increase, there is a need to consider the learning objectives of interdisciplinary science courses and how to design curricula to support those objectives. We present a framework that can help support interdisciplinary…

  3. Design, Implementation and Validation of a Europe-Wide Pedagogical Framework for E-Learning

    ERIC Educational Resources Information Center

    Granic, Andrina; Mifsud, Charles; Cukusic, Maja

    2009-01-01

    Within the context of a Europe-wide project UNITE, a number of European partners set out to design, implement and validate a pedagogical framework (PF) for e- and m-Learning in secondary schools. The process of formulating and testing the PF was an evolutionary one that reflected the experiences and skills of the various European partners and…

  4. 78 FR 9633 - Policy Statement on the Scenario Design Framework for Stress Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-11

    ... November 23, 2012, at 77 FR 70124 remains February 15, 2013. FOR FURTHER INFORMATION CONTACT: Tim Clark... Federal Register of November 23, 2012, (77 FR 70124) requesting public comment on a policy statement on... CFR Part 252 RIN 7100-AD-86 Policy Statement on the Scenario Design Framework for Stress...

  5. Universal Instructional Design: A New Framework for Accommodating Students in Social Work Courses

    ERIC Educational Resources Information Center

    Lightfoot, Elizabeth; Gibson, Priscilla

    2005-01-01

    This article provides an analysis of the current method of accommodating students with disabilities in social work education and presents a new framework for providing universal access to all students in social work education: Universal Instructional Design (UID). UID goes beyond adapting already developed social work curricula to fit the needs of…

  6. Developing a Framework for Social Technologies in Learning via Design-Based Research

    ERIC Educational Resources Information Center

    Parmaxi, Antigoni; Zaphiris, Panayiotis

    2015-01-01

    This paper reports on the use of design-based research (DBR) for the development of a framework that grounds the use of social technologies in learning. The paper focuses on three studies which build on the learning theory of constructionism. Constructionism assumes that knowledge is better gained when students find this knowledge for themselves…

  7. Design of hydrophilic metal organic framework water adsorbents for heat reallocation.

    PubMed

    Cadiau, Amandine; Lee, Ji Sun; Damasceno Borges, Daiane; Fabry, Paul; Devic, Thomas; Wharmby, Michael T; Martineau, Charlotte; Foucher, Damien; Taulelle, Francis; Jun, Chul-Ho; Hwang, Young Kyu; Stock, Norbert; De Lange, Martijn F; Kapteijn, Freek; Gascon, Jorge; Maurin, Guillaume; Chang, Jong-San; Serre, Christian

    2015-08-26

    A new hydrothermally stable Al polycarboxylate metal-organic framework (MOF) based on a heteroatom bio-derived aromatic spacer is designed through a template-free green synthesis process. It appears that in some test conditions this MOF outperforms the heat reallocation performance of commercial SAPO-34. PMID:26193346

  8. An Ontology-Based Framework for Bridging Learning Design and Learning Content

    ERIC Educational Resources Information Center

    Knight, Colin; Gasevic, Dragan; Richards, Griff

    2006-01-01

    The paper describes an ontology-based framework for bridging learning design and learning object content. In present solutions, researchers have proposed conceptual models and developed tools for both of those subjects, but without detailed discussions of how they can be used together. In this paper we advocate the use of ontologies to explicitly…

  9. OPENCORE NMR: open-source core modules for implementing an integrated FPGA-based NMR spectrometer.

    PubMed

    Takeda, Kazuyuki

    2008-06-01

    A tool kit for implementing an integrated FPGA-based NMR spectrometer [K. Takeda, A highly integrated FPGA-based nuclear magnetic resonance spectrometer, Rev. Sci. Instrum. 78 (2007) 033103], referred to as the OPENCORE NMR spectrometer, is open to the public. The system is composed of an FPGA chip and several peripheral boards for USB communication, direct-digital synthesis (DDS), RF transmission, signal acquisition, etc. Inside the FPGA chip, a number of digital modules have been implemented, including three pulse programmers, the digital part of the DDS, a digital quadrature demodulator, dual digital low-pass filters, and a PC interface. These FPGA core modules are written in VHDL, and their source codes are available on our website. This work aims at providing sufficient information with which one can, given some facility in circuit board manufacturing, reproduce the OPENCORE NMR spectrometer presented here. Users are also encouraged to modify the design of the spectrometer according to their own specific needs. A home-built NMR spectrometer can serve complementary roles to a sophisticated commercial spectrometer, should one come across new ideas that require heavy modification to the hardware inside the spectrometer. This work can lower the barrier to building a handmade NMR spectrometer in the laboratory and promote novel and exciting NMR experiments. PMID:18374613
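
    A minimal Python sketch (not the OPENCORE VHDL; the sample rate, carrier frequency, offset, and filter length are illustrative assumptions) of what the digital quadrature demodulator and low-pass filter stages described above do to a sampled signal:

        import numpy as np

        fs, f_carrier, f_offset = 1.0e6, 100.0e3, 2.0e3       # sample rate, carrier, offset (Hz)
        t = np.arange(8192) / fs
        signal = np.cos(2 * np.pi * (f_carrier + f_offset) * t)  # digitized input near the carrier

        # Mix with a quadrature numerically controlled oscillator at the carrier frequency
        i = signal * np.cos(2 * np.pi * f_carrier * t)
        q = -signal * np.sin(2 * np.pi * f_carrier * t)

        # Simple moving-average low-pass filters suppress the 2*f_carrier image
        kernel = np.ones(64) / 64
        baseband = np.convolve(i, kernel, mode="same") + 1j * np.convolve(q, kernel, mode="same")

        # The filtered baseband phasor rotates at the offset frequency (about 2 kHz here)
        rotation = np.angle(baseband[2001:6000] / baseband[2000:5999])
        print(rotation.mean() * fs / (2 * np.pi))

    On an FPGA, stages like these would typically be realized as fixed-point pipelines rather than floating-point array operations.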

  10. A comprehensive preference-based optimization framework with application to high-lift aerodynamic design

    NASA Astrophysics Data System (ADS)

    Carrese, Robert; Winarto, Hadi; Li, Xiaodong; Sóbester, András; Ebenezer, Samuel

    2012-10-01

    An integral component of transport aircraft design is the high-lift configuration, which can provide significant benefits in aircraft payload-carrying capacity. However, aerodynamic optimization of a high-lift configuration is a computationally challenging undertaking, due to the complex flow-field. The use of a designer-interactive multiobjective optimization framework is proposed, which identifies and exploits preferred regions of the Pareto frontier. Visual data mining tools are introduced to statistically extract information from the design space and confirm the relative influence of both variables and objectives to the preferred interests of the designer. The framework is assisted by the construction of time-adaptive Kriging models, which are cooperatively used with a high-fidelity Reynolds-averaged Navier-Stokes solver. The successful integration of these design tools is facilitated through the specification of a reference point, which can ideally be based on an existing design configuration. The framework is demonstrated to perform efficiently for the present case-study within the imposed computational budget.
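
    A minimal Python sketch (not the cited framework; the objective values and reference point are invented for illustration) of the reference-point idea, i.e. selecting the Pareto-optimal candidates closest to a designer-supplied reference design in normalized objective space:

        import numpy as np

        # Hypothetical Pareto-front candidates: [drag-like objective, weight-like objective]
        pareto = np.array([[0.020, 2.10],
                           [0.024, 1.85],
                           [0.030, 1.70],
                           [0.040, 1.60]])
        reference = np.array([0.025, 1.80])              # e.g. an existing configuration

        scale = pareto.max(axis=0) - pareto.min(axis=0)  # normalize each objective
        distance = np.linalg.norm((pareto - reference) / scale, axis=1)
        preferred = pareto[np.argsort(distance)[:2]]     # the designer's preferred region
        print(preferred)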

  11. FPGA systems development based on universal controller module

    NASA Astrophysics Data System (ADS)

    Graczyk, Rafał; Pożniak, Krzysztof T.; Romaniuk, Ryszard S.

    2008-01-01

    This paper describes the hardware and software concept of the Universal Controller Module (UCM), an FPGA/PowerPC-based embedded system designed to work as part of a VME system. On one hand, UCM provides the VME crate with access to various laboratory and industrial interfaces such as gigabit optical links, 10/100 Mbit Ethernet, Universal Serial Bus (USB), and Controller Area Network (CAN); on the other hand, UCM is a well-prepared platform for further investigation and development in the field of IP cores and for functionality expansion via a PCI Mezzanine Card (PMC).

  12. Developing a framework for qualitative engineering: Research in design and analysis of complex structural systems

    NASA Technical Reports Server (NTRS)

    Franck, Bruno M.

    1990-01-01

    The research is focused on automating the evaluation of complex structural systems, whether for the design of a new system or the analysis of an existing one, by developing new structural analysis techniques based on qualitative reasoning. The problem is to identify and better understand: (1) the requirements for the automation of design, and (2) the qualitative reasoning associated with the conceptual development of a complex system. The long-term objective is to develop an integrated design-risk assessment environment for the evaluation of complex structural systems. The scope of this short presentation is to describe the design and cognition components of the research. Design has received special attention in cognitive science because it is now identified as a problem solving activity that is different from other information processing tasks (1). Before an attempt can be made to automate design, a thorough understanding of the underlying design theory and methodology is needed, since the design process is, in many cases, multi-disciplinary, complex in size and motivation, and uses various reasoning processes involving different kinds of knowledge in ways which vary from one context to another. The objective is to unify all the various types of knowledge under one framework of cognition. This presentation focuses on the cognitive science framework that we are using to represent the knowledge aspects associated with the human mind's abstraction abilities and how we apply it to the engineering knowledge and engineering reasoning in design.

  13. Optimization of experimental design in fMRI: a general framework using a genetic algorithm.

    PubMed

    Wager, Tor D; Nichols, Thomas E

    2003-02-01

    This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously. PMID:12595184
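
    A minimal Python sketch (not the authors' toolbox; the population size, mutation rate, and toy counterbalancing fitness are illustrative assumptions) of how a genetic algorithm searches over candidate event sequences against a fitness measure:

        import random

        N_EVENTS, N_CONDITIONS = 60, 3                 # assumed design length and trial types

        def fitness(seq):
            """Toy surrogate for counterbalancing: reward equal condition counts."""
            counts = [seq.count(c) for c in range(N_CONDITIONS)]
            return min(counts) - max(counts)           # 0 is best (perfectly balanced)

        def crossover(a, b):
            cut = random.randrange(1, N_EVENTS)
            return a[:cut] + b[cut:]

        def mutate(seq, rate=0.05):
            return [random.randrange(N_CONDITIONS) if random.random() < rate else x for x in seq]

        population = [[random.randrange(N_CONDITIONS) for _ in range(N_EVENTS)] for _ in range(40)]
        for generation in range(100):
            population.sort(key=fitness, reverse=True)
            parents = population[:20]                  # keep the fitter half
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(20)]
            population = parents + children

        print("best fitness:", fitness(max(population, key=fitness)))

    In the cited work, the fitness would instead combine contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing.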

  14. Crisis crowdsourcing framework: designing strategic configurations of crowdsourcing for the emergency management domain

    USGS Publications Warehouse

    Liu, Sophia B.

    2014-01-01

    Crowdsourcing is not a new practice but it is a concept that has gained significant attention during recent disasters. Drawing from previous work in the crisis informatics, disaster sociology, and computer-supported cooperative work (CSCW) literature, the paper first explains recent conceptualizations of crowdsourcing and how crowdsourcing is a way of leveraging disaster convergence. The CSCW concept of “articulation work” is introduced as an interpretive frame for extracting the salient dimensions of “crisis crowdsourcing.” Then, a series of vignettes are presented to illustrate the evolution of crisis crowdsourcing that spontaneously emerged after the 2010 Haiti earthquake and evolved to more established forms of public engagement during crises. The best practices extracted from the vignettes clarified the efforts to formalize crisis crowdsourcing through the development of innovative interfaces designed to support the articulation work needed to facilitate spontaneous volunteer efforts. Extracting these best practices led to the development of a conceptual framework that unpacks the key dimensions of crisis crowdsourcing. The Crisis Crowdsourcing Framework is a systematic, problem-driven approach to determining the why, who, what, when, where, and how aspects of a crowdsourcing system. The framework also draws attention to the social, technological, organizational, and policy (STOP) interfaces that need to be designed to manage the articulation work involved with reducing the complexity of coordinating across these key dimensions. An example of how to apply the framework to design a crowdsourcing system is offered, along with a discussion of the implications of applying this framework and its limitations. Innovation is occurring at the social, technological, organizational, and policy interfaces, enabling crowdsourcing to be operationalized and integrated into official products and services.

  15. Pixel response non-uniformity correction for multi-TDICCD camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhai, Guofang

    2013-10-01

    A non-uniformity correction algorithm is proposed and implemented on a Field-Programmable Gate Array (FPGA) hardware platform to solve the pixel response non-uniformity (PRNU) problem of a multi Time Delay and Integration Charge-Coupled Device (TDICCD) camera. The sources of non-uniformity are introduced and a synthetic correction algorithm is presented, in which the two-point correction method is used within a single channel, a gain-averaging correction method across channels, and a scene-adaptive correction method across the multiple TDICCDs. The correction algorithm is then designed and, after analyzing the FPGA's capability for fixed-point processing, optimized and implemented on the FPGA. Testing results indicate that the non-uniformity can be decreased from 8.27% to 0.51% for a three-TDICCD camera's images with the proposed correction algorithm, showing that the algorithm offers high real-time performance, is practical to realize in engineering terms, and satisfies the system requirements.
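
    A minimal Python sketch (not the authors' FPGA implementation; the pixel count, calibration levels, and non-uniformity figures are illustrative assumptions) of the classic two-point correction used within a single channel:

        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels = 512
        true_gain = rng.normal(1.0, 0.05, n_pixels)    # per-pixel responsivity spread (PRNU)
        true_offset = rng.normal(0.0, 3.0, n_pixels)   # per-pixel dark offset

        def sensor(radiance):
            """Simulated detector response with fixed-pattern non-uniformity."""
            return true_gain * radiance + true_offset

        # Calibrate against two uniform scenes of known radiance
        low, high = 100.0, 900.0
        resp_low, resp_high = sensor(low), sensor(high)
        gain = (high - low) / (resp_high - resp_low)   # correction gain per pixel
        offset = low - gain * resp_low                 # correction offset per pixel

        def correct(raw):
            return gain * raw + offset

        scene = sensor(500.0)
        print(scene.std(), correct(scene).std())       # fixed-pattern spread collapses

    On an FPGA, the per-pixel gain and offset tables would typically live in memory while the multiply-add runs in fixed point as each pixel streams through.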

  16. 10 Gbps TCP/IP streams from the FPGA for the CMS DAQ eventbuilder network

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Coarasa, J. A.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Ozga, W.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Spataru, A. C.; Stieger, B.; Sumorok, K.; Veverka, J.; Wakefield, C. C.; Zejdl, P.

    2013-12-01

    For the upgrade of the DAQ of the CMS experiment in 2013/2014, an interface between the custom detector Front End Drivers (FEDs) and the new DAQ eventbuilder network has to be designed. For lossless data collection from more than 600 FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. We present the hardware challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation, together with a set of performance measurements carried out with the current prototype.

  17. A reuse-based framework for the design of analog and mixed-signal ICs

    NASA Astrophysics Data System (ADS)

    Castro-Lopez, Rafael; Fernandez, Francisco V.; Rodriguez Vazquez, Angel

    2005-06-01

    Despite the spectacular breakthroughs of the semiconductor industry, the ability to design integrated circuits (ICs) under stringent time-to-market (TTM) requirements is lagging behind integration capacity, which so far keeps pace with the still-valid Moore's Law. The resulting gap threatens to slow down this phenomenal growth. The design community believes that it is only by means of powerful CAD tools and design methodologies, and possibly a design paradigm shift, that this design gap can be bridged. In this sense, reuse-based design is seen as a promising solution, and concepts such as IP Block, Virtual Component, and Design Reuse have become commonplace thanks to the significant advances in the digital arena. Unfortunately, the very nature of analog and mixed-signal (AMS) design has hindered a similar level of consensus and development. This paper presents a framework for the reuse-based design of AMS circuits. The framework is founded on three key elements: (1) a CAD-supported hierarchical design flow that facilitates the incorporation of AMS reusable blocks, reduces the overall design time, and expedites the management of increasing AMS design complexity; (2) a complete, clear definition of the AMS reusable block, structured into three separate facets or views: the behavioral, structural, and layout facets, the first two used for top-down electrical synthesis and bottom-up verification, the latter during bottom-up physical synthesis; (3) a design-for-reusability set of tools, methods, and guidelines that, relying on intensive parameterization as well as on design knowledge capture and encapsulation, allows fully reusable AMS blocks to be produced. A case study and a functional silicon prototype demonstrate the validity of the paper's proposals.

  18. Alternative Model-Based and Design-Based Frameworks for Inference from Samples to Populations: From Polarization to Integration

    ERIC Educational Resources Information Center

    Sterba, Sonya K.

    2009-01-01

    A model-based framework, due originally to R. A. Fisher, and a design-based framework, due originally to J. Neyman, offer alternative mechanisms for inference from samples to populations. We show how these frameworks can utilize different types of samples (nonrandom or random vs. only random) and allow different kinds of inference (descriptive vs.…

  19. Zeolite-like metal–organic frameworks (ZMOFs): Design, synthesis, and properties

    SciTech Connect

    Eddaoudi, Mohamed; Sava, Dorina F.; Eubank, Jarrod F.; Adil, Karim; Guillerm, Vincent

    2015-10-24

    This study highlights various design and synthesis approaches toward the construction of ZMOFs, which are metal–organic frameworks (MOFs) with topologies and, in some cases, features akin to traditional inorganic zeolites. The interest in this unique subset of MOFs is correlated with their exceptional characteristics arising from the periodic pore systems and distinctive cage-like cavities, in conjunction with modular intra- and/or extra-framework components, which ultimately allow for tailoring of the pore size, pore shape, and properties towards specific applications.

  20. Rationally designed micropores within a metal-organic framework for selective sorption of gas molecules.

    PubMed

    Chen, Banglin; Ma, Shengqian; Zapata, Fatima; Fronczek, Frank R; Lobkovsky, Emil B; Zhou, Hong-Cai

    2007-02-19

    A microporous metal-organic framework, MOF, Cu(FMA)(4,4'-Bpe)0.5 (3a, FMA = fumarate; 4,4'-Bpe = trans-bis(4-pyridyl)ethylene) was rationally designed from a primitive cubic net whose pores are tuned by double framework interpenetration. With pore cavities of about 3.6 Å, which are interconnected by pore windows of 2.0 x 3.2 Å, 3a shows highly selective sorption behavior toward gas molecules. PMID:17291116

  1. Designed Assembly of Heterometallic Cluster Organic Frameworks Based on Anderson-Type Polyoxometalate Clusters.

    PubMed

    Li, Xin-Xiong; Wang, Yang-Xin; Wang, Rui-Hu; Cui, Cai-Yan; Tian, Chong-Bin; Yang, Guo-Yu

    2016-05-23

    A new approach to prepare heterometallic cluster organic frameworks has been developed. The method was employed to link Anderson-type polyoxometalate (POM) clusters and transition-metal clusters by using a designed rigid tris(alkoxo) ligand containing a pyridyl group to form a three-fold interpenetrated anionic diamondoid structure and a 2D anionic layer, respectively. This technique facilitates the integration of the unique inherent properties of Anderson-type POM clusters and cuprous iodide clusters into one cluster organic framework. PMID:27061042

  2. A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2011-01-01

    A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.

  3. A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2012-01-01

    A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.

  4. Role-Based Design: "A Contemporary Framework for Innovation and Creativity in Instructional Design"

    ERIC Educational Resources Information Center

    Hokanson, Brad; Miller, Charles

    2009-01-01

    This is the first in a series of four articles presenting a new outlook on the process of instructional design. Along with offering an improvement to current practice, the goal is to stimulate discussion about the role of designers, and more importantly, about the nature of the process of instructional design. The authors present in this article a…

  5. An interdisciplinary team communication framework and its application to healthcare 'e-teams' systems design

    PubMed Central

    2009-01-01

    Background There are few studies that examine the processes that interdisciplinary teams engage in and how we can design health information systems (HIS) to support those team processes. This was an exploratory study with two purposes: (1) To develop a framework for interdisciplinary team communication based on structures, processes and outcomes that were identified as having occurred during weekly team meetings. (2) To use the framework to guide 'e-teams' HIS design to support interdisciplinary team meeting communication. Methods An ethnographic approach was used to collect data on two interdisciplinary teams. Qualitative content analysis was used to analyze the data according to structures, processes and outcomes. Results We present details for team meta-concepts of structures, processes and outcomes and the concepts and sub concepts within each meta-concept. We also provide an exploratory framework for interdisciplinary team communication and describe how the framework can guide HIS design to support 'e-teams'. Conclusion The structures, processes and outcomes that describe interdisciplinary teams are complex and often occur in a non-linear fashion. Electronic data support, process facilitation and team video conferencing are three HIS tools that can enhance team function. PMID:19754966

  6. An economic decision framework using modeling for improving aquifer remediation design

    SciTech Connect

    James, B.R.; Gwo, J.P.; Toran, L.E.

    1995-11-01

    Reducing cost is a critical challenge facing environmental remediation today. One of the most effective ways of reducing costs is to improve decision-making. This can range from choosing more cost-effective remediation alternatives (for example, determining whether a groundwater contamination plume should be remediated or not) to improving data collection (for example, determining when data collection should stop). Uncertainty in site conditions presents a major challenge for effective decision-making. We present a framework for increasing the effectiveness of remedial design decision-making at groundwater contamination sites where there is uncertainty in many parameters that affect remediation design. The objective is to provide an easy-to-use economic framework for making remediation decisions. The presented framework is used to 1) select the best remedial design from a suite of possible ones, 2) estimate whether additional data collection is cost-effective, and 3) determine the most important parameters to be sampled. The framework is developed by combining elements from Latin-Hypercube simulation of contaminant transport, economic risk-cost-benefit analysis, and Regional Sensitivity Analysis (RSA).
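
    A minimal Python sketch (not the cited framework; the parameter distribution, costs, and failure model are invented for illustration) of how Latin-Hypercube sampling of an uncertain parameter can feed a risk-cost comparison of candidate remedial designs:

        import numpy as np

        rng = np.random.default_rng(1)
        n_samples = 200
        # One-dimensional Latin-Hypercube sample of log-conductivity (illustrative range)
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        conductivity = 10.0 ** (-6.0 + 2.0 * rng.permutation(strata))   # m/s, 1e-6 to 1e-4

        def failure_probability(design_capacity):
            """Toy model: a design fails when conductivity exceeds its capacity."""
            return np.mean(conductivity > design_capacity)

        FAILURE_COST = 5.0e6                              # cost incurred if the design fails
        designs = {"pump-and-treat": (1.0e6, 3.0e-5),     # (capital cost, capacity handled)
                   "barrier wall": (2.5e6, 3.0e-4)}
        for name, (capital, capacity) in designs.items():
            expected = capital + FAILURE_COST * failure_probability(capacity)
            print(f"{name}: expected cost ${expected:,.0f}")

    Repeating such an expected-cost comparison with and without hypothetical new data is one way to judge whether further sampling is worth its cost.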

  7. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Tsai; Wang, Chun-Hao; Huang, Ing-Jer; Wong, Weng-Fai

    2011-12-01

    Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, OpenGL ES application programming interface (API), device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.

  8. Designing computer learning environments for engineering and computer science: The scaffolded knowledge integration framework

    NASA Astrophysics Data System (ADS)

    Linn, Marcia C.

    1995-06-01

    Designing effective curricula for complex topics and incorporating technological tools is an evolving process. One important way to foster effective design is to synthesize successful practices. This paper describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering. One course enhancement, the LISP Knowledge Integration Environment, improved learning and resulted in more gender-equitable outcomes. The second course enhancement, the spatial reasoning environment, addressed spatial reasoning in an introductory engineering course. This enhancement minimized the importance of prior knowledge of spatial reasoning and helped students develop a more comprehensive repertoire of spatial reasoning strategies. Taken together, the instructional research programs reinforce the value of the scaffolded knowledge integration framework and suggest directions for future curriculum reformers.

  9. A unifying framework for systems modeling, control systems design, and system operation

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.

    2005-01-01

    Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.

  10. Report of the Odyssey FPGA Independent Assessment Team

    NASA Technical Reports Server (NTRS)

    Mayer, Donald C.; Katz, Richard B.; Osborn, Jon V.; Soden, Jerry M.; Barto, R.; Day, John H. (Technical Monitor)

    2001-01-01

    An independent assessment team (IAT) was formed and met on April 2, 2001, at Lockheed Martin in Denver, Colorado, to aid in understanding a technical issue for the Mars Odyssey spacecraft scheduled for launch on April 7, 2001. An RP1280A field-programmable gate array (FPGA) from a lot of parts common to the SIRTF, Odyssey, and Genesis missions had failed on a SIRTF printed circuit board. A second FPGA from an earlier Odyssey circuit board was also known to have failed and was also included in the analysis by the IAT. Observations indicated an abnormally high failure rate for flight RP1280A devices (the first flight lot produced using this flow) at Lockheed Martin, and the causes of these failures were not determined. Standard failure analysis techniques were applied to these parts; however, additional diagnostic techniques unique to devices of this class were not used, and the parts were prematurely submitted to a destructive physical analysis, making a determination of the root cause of failure difficult. Any of several potential failure scenarios may have caused these failures, including electrostatic discharge, electrical overstress, manufacturing defects, board design errors, board manufacturing errors, FPGA design errors, or programmer errors. Several of these mechanisms would have relatively benign consequences for disposition of the parts currently installed on boards in the Odyssey spacecraft if established as the root cause of failure. However, other potential failure mechanisms could have more dire consequences. As there is no simple way to determine the likely failure mechanisms with reasonable confidence before the Odyssey launch, it is not possible for the IAT to recommend a disposition for the other parts on boards in the Odyssey spacecraft based on sound engineering principles.

  11. A Framework of Multimedia E-Learning Design for Engineering Training

    NASA Astrophysics Data System (ADS)

    Borissova, Daniela; Mustakerov, Ivan

    The paper presents a new framework approach for the design and development of an interactive multimedia e-learning system for engineering training. The main goal of the paper is to encourage low-cost development of effective, customized e-learning systems for engineering training by using popular and inexpensive software tools. The proposed framework is a generalization of the authors’ experience gained in developing a pneumoautomatics e-training system. It can be used for developing Web-based online or offline e-learning systems for customized training of students or specialists. The proposed framework is illustrated by some screen snapshots and descriptions of operational algorithms. The software realization of the pneumoautomatics example is done by means of HTML and JavaScript and was tested and used for student training.

  12. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  13. RIPOSTE: a framework for improving the design and analysis of laboratory-based research

    PubMed Central

    Masca, Nicholas GD; Hensor, Elizabeth MA; Cornelius, Victoria R; Buffa, Francesca M; Marriott, Helen M; Eales, James M; Messenger, Michael P; Anderson, Amy E; Boot, Chris; Bunce, Catey; Goldin, Robert D; Harris, Jessica; Hinchliffe, Rod F; Junaid, Hiba; Kingston, Shaun; Martin-Ruiz, Carmen; Nelson, Christopher P; Peacock, Janet; Seed, Paul T; Shinkins, Bethany; Staples, Karl J; Toombs, Jamie; Wright, Adam KA; Teare, M Dawn

    2015-01-01

    Lack of reproducibility is an ongoing problem in some areas of the biomedical sciences. Poor experimental design and a failure to engage with experienced statisticians at key stages in the design and analysis of experiments are two factors that contribute to this problem. The RIPOSTE (Reducing IrreProducibility in labOratory STudiEs) framework has been developed to support early and regular discussions between scientists and statisticians in order to improve the design, conduct and analysis of laboratory studies and, therefore, to reduce irreproducibility. This framework is intended for use during the early stages of a research project, when specific questions or hypotheses are proposed. The essential points within the framework are explained and illustrated using three examples (a medical equipment test, a macrophage study and a gene expression study). Sound study design minimises the possibility of bias being introduced into experiments and leads to higher quality research with more reproducible results. DOI: http://dx.doi.org/10.7554/eLife.05519.001 PMID:25951517

  14. Towards a European Framework to Monitor Infectious Diseases among Migrant Populations: Design and Applicability

    PubMed Central

    Riccardo, Flavia; Dente, Maria Grazia; Kärki, Tommi; Fabiani, Massimo; Napoli, Christian; Chiarenza, Antonio; Giorgi Rossi, Paolo; Velasco Munoz, Cesar; Noori, Teymur; Declich, Silvia

    2015-01-01

    There are limitations in our capacity to interpret point estimates and trends of infectious diseases occurring among diverse migrant populations living in the European Union/European Economic Area (EU/EEA). The aim of this study was to design a data collection framework that could capture information on factors associated with increased risk to infectious diseases in migrant populations in the EU/EEA. The authors defined factors associated with increased risk according to a multi-dimensional framework and performed a systematic literature review in order to identify whether those factors well reflected the reported risk factors for infectious disease in these populations. Following this, the feasibility of applying this framework to relevant available EU/EEA data sources was assessed. The proposed multidimensional framework is well suited to capture the complexity and concurrence of these risk factors and in principle applicable in the EU/EEA. The authors conclude that adopting a multi-dimensional framework to monitor infectious diseases could favor the disaggregated collection and analysis of migrant health data. PMID:26393623

  15. A knowledge-based design framework for airplane conceptual and preliminary design

    NASA Astrophysics Data System (ADS)

    Anemaat, Wilhelmus A. J.

    The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment with the same AAA methods (i.e. the knowledge) is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer-aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: due to less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller aircraft, business jets, airliners, and UAVs to fighters. Data for the varied sizing methods will be compared with AAA results to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise in using the tool to design a new airplane.

  16. Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework

    ERIC Educational Resources Information Center

    Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R.

    2014-01-01

    In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…

  17. Gamers as Designers: A Framework for Investigating Design in Gaming Affinity Spaces

    ERIC Educational Resources Information Center

    Duncan, Sean C.

    2010-01-01

    This article addresses recent approaches to uncovering and theorizing the design activities that occur in online gaming affinity spaces. Examples are presented of productive d/Discourse present within online forums around three video game series, video games, or game platforms, and key design practices engaged upon by gamers in these spaces. It is…

  18. A framework for the Subaru Telescope observation control system based on the command design pattern

    NASA Astrophysics Data System (ADS)

    Jeschke, Eric; Bon, Bruce; Inagaki, Takeshi; Streeper, Sam

    2008-08-01

    Subaru Telescope is developing a second-generation Observation Control System that specifically addresses some of the deficiencies of the current Subaru OCS. One area of concern is extensibility: the current system uses a custom language for implementing commands, with a complex macro-processing subsystem written in C. It is laborious to improve the language, and it is awkward for scientists to extend it or to use standard programming techniques. Our Generation 2 OCS provides a lightweight, object-oriented task framework based on the Command design pattern. The framework provides a base task class that abstracts services for processing status and other common infrastructure activities. On top of this, a set of "atomic" tasks for telescope and instrument commands is provided. A set of "container" tasks based on common sequential and concurrent command processing paradigms is also included. Since all tasks share the same interface, it is straightforward to build up compound tasks by plugging simple tasks into container tasks and container tasks into other containers, and so forth. In this way various advanced astronomical workflows can be readily created, with well-controlled behaviors. In addition, since tasks are written in Python, it is easy for astronomers to subclass and extend the standard observatory tasks with their own custom extensions and behaviors, in a high-level, full-featured programming language. In this talk we will provide an overview of the task framework design and present preliminary results on the use of the framework during two separate engineering runs.
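
    A minimal Python sketch (not the Generation 2 OCS code; the task names and commands are hypothetical) of the Command-pattern composition described above, where atomic and container tasks share one interface so containers can nest freely:

        class Task:
            """Base task: the single interface shared by atomic and container tasks."""
            def execute(self):
                raise NotImplementedError

        class SlewTelescope(Task):                 # hypothetical atomic task
            def __init__(self, target):
                self.target = target
            def execute(self):
                print(f"slewing to {self.target}")

        class Expose(Task):                        # hypothetical atomic task
            def __init__(self, seconds):
                self.seconds = seconds
            def execute(self):
                print(f"exposing for {self.seconds} s")

        class Sequence(Task):                      # container task: run sub-tasks in order
            def __init__(self, *tasks):
                self.tasks = tasks
            def execute(self):
                for task in self.tasks:
                    task.execute()

        # Compound workflows are built by plugging tasks into containers
        observation = Sequence(SlewTelescope("M31"), Sequence(Expose(30), Expose(30)))
        observation.execute()

    A concurrent container exposing the same execute() signature could run its children in threads, which is how sequential and parallel workflows stay interchangeable.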

  19. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

    This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.

  20. FHAST: FPGA-Based Acceleration of Bowtie in Hardware.

    PubMed

    Fernandez, Edward B; Villarreal, Jason; Lonardi, Stefano; Najjar, Walid A

    2015-01-01

    While the sequencing capability of modern instruments continues to increase exponentially, the computational problem of mapping short sequenced reads to a reference genome still constitutes a bottleneck in the analysis pipeline. A variety of mapping tools (e.g., Bowtie, BWA) is available for general-purpose computer architectures. These tools can take many hours or even days to deliver mapping results, depending on the number of input reads, the size of the reference genome and the number of allowed mismatches or insertions/deletions, making the mapping problem an ideal candidate for hardware acceleration. In this paper, we present FHAST (FPGA hardware accelerated sequence-matching tool), a drop-in replacement for Bowtie that uses a hardware design based on field programmable gate arrays (FPGA). Our architecture masks memory latency by executing multiple concurrent hardware threads accessing memory simultaneously. FHAST is composed of multiple parallel engines to exploit the parallelism available to us on an FPGA. We have implemented and tested FHAST on the Convey HC-1 and later ported it to the Convey HC-2ex, taking advantage of the large memory bandwidth available to these systems and the shared memory image between hardware and software. A preliminary version of FHAST running on the Convey HC-1 achieved up to 70x speedup compared to Bowtie (single-threaded). An improved version of FHAST running on the Convey HC-2ex FPGAs achieved up to a 12x speed gain compared to Bowtie running eight threads on an eight-core conventional architecture, while maintaining almost identical mapping accuracy. FHAST is a drop-in replacement for Bowtie, so it can be incorporated in any analysis pipeline that uses Bowtie (e.g., TopHat). PMID:26451812

  1. FPGA implementation of vision algorithms for small autonomous robots

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by advances in Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot has to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot has to catch tennis balls launched from an air-powered cannon. Teams competed in each of these competitions, which were designed for a graduate-level robotic vision class, and each team had to develop its own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  2. A Strategic Approach to Curriculum Design for Information Literacy in Teacher Education--Implementing an Information Literacy Conceptual Framework

    ERIC Educational Resources Information Center

    Klebansky, Anna; Fraser, Sharon P.

    2013-01-01

    This paper details a conceptual framework that situates curriculum design for information literacy and lifelong learning, through a cohesive developmental information literacy based model for learning, at the core of teacher education courses at UTAS. The implementation of the framework facilitates curriculum design that systematically,…

  3. A conceptual curriculum framework designed to ensure quality student health visitor training in practice.

    PubMed

    Hollinshead, Jayne; Stirling, Linda

    2014-07-01

    This paper describes the challenges faced by a trust in England following the introduction of the Health Visitor Implementation Plan. Two practice education facilitators designed a conceptual curriculum framework to ensure quality student health visitor education in practice. This curriculum complemented the excellent academic course already delivered by the University. A justification is provided for the design of the curriculum framework, including a rationale for the introduction of specific training sessions. Student and practice teacher feedback demonstrates the success of the introduction of this programme in ensuring the development of student health visitors fit for practice. The conclusion places emphasis on the importance of continuous evaluation of the training programme to meet the needs of the students and the service. PMID:25167726

  4. Strategic approaches to drug design. I. An integrated software framework for molecular modelling.

    PubMed

    Vinter, J G; Davis, A; Saunders, M R

    1987-04-01

    An integrated molecular graphics and computational chemistry framework is described which has been designed primarily to handle small molecules of up to 300 atoms. The system provides a means of integrating software from any source into a single framework. It is split into two functional subsystems. The first subsystem, called COSMIC, runs on low-cost, serial-linked colour graphics terminals and allows the user to prepare and examine structural data and to submit them for extensive computational chemistry. Links also allow access to databases, other modelling systems and user-written modules. Much of the output from COSMIC cannot be examined with low-level graphics. A second subsystem, called ASTRAL, has been developed for the high-resolution Evans & Sutherland PS300 colour graphics terminal and is designed to manipulate complex display structures. The COSMIC minimisers, geometry investigators, molecular orbital displays, electrostatic isopotential generators and various interfaces and utilities are described. PMID:3505586

  5. Analysing task design and students' responses to context-based problems through different analytical frameworks

    NASA Astrophysics Data System (ADS)

    Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka

    2015-05-01

    Background: Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose: The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample: Upper secondary students (n=236) in the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both conventional and context-based chemistry problems. Design and methods: A typical chemistry examination test has been analysed, first the test items themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using the HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results: The empirical results show that both frameworks are suitable for identifying students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items also assessed lower-order thinking. The combination of frameworks with the chemistry syllabus has been

  6. FPGA Sequencer for Radar Altimeter Applications

    NASA Technical Reports Server (NTRS)

    Berkun, Andrew C.; Pollard, Brian D.; Chen, Curtis W.

    2011-01-01

    A sequencer for a radar altimeter provides accurate attitude information for a reliable soft landing of the Mars Science Laboratory (MSL). This is a field-programmable-gate-array (FPGA)-only implementation. A table loaded externally into the FPGA controls timing, processing, and decision structures. The radar is memoryless and does not use previous acquisitions to assist in the current acquisition. All cycles complete in exactly 50 milliseconds, regardless of range or whether a target was found. A RAM (random access memory) within the FPGA holds instructions for up to 15 sets. For each set, timing is run, echoes are processed, and a comparison is made. If a target is seen, more detailed processing is run on that set. If no target is seen, the next set is tried. When all sets have been run, the FPGA terminates and waits for the next 50-millisecond event. This setup simplifies testing and improves reliability. A single Virtex chip does the work of an entire assembly. Output products require minor processing to become range and velocity. This technology is the heart of the Terminal Descent Sensor, which is an integral part of the Entry, Descent and Landing system for MSL. In addition, it is a strong candidate for manned landings on Mars or the Moon.
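
    A minimal Python simulation sketch (not the flight FPGA logic; the parameter sets, echo model, and threshold are illustrative assumptions) of the table-driven loop described above, which tries up to 15 sets within one fixed 50-millisecond cycle:

        import time

        # Hypothetical acquisition table: one entry per set held in the FPGA's RAM
        SETS = [{"prf_us": 200 * (i + 1), "threshold": 10.0} for i in range(15)]

        def acquire(params):
            """Placeholder echo processing; returns a detection score for this set."""
            return 12.0 if params["prf_us"] >= 600 else 0.0   # pretend longer timing sees ground

        def cycle():
            start = time.monotonic()
            for params in SETS:                               # try each set in table order
                if acquire(params) > params["threshold"]:
                    result = f"target found with timing set {params['prf_us']} us"
                    break                                     # detailed processing would follow here
            else:
                result = "no target this cycle"
            time.sleep(max(0.0, 0.050 - (time.monotonic() - start)))   # pad to the fixed 50 ms
            return result

        print(cycle())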

  7. Testing Microshutter Arrays Using Commercial FPGA Hardware

    NASA Technical Reports Server (NTRS)

    Rapchun, David

    2008-01-01

    NASA is developing micro-shutter arrays for the Near Infrared Spectrometer (NIRSpec) instrument on the James Webb Space Telescope (JWST). These micro-shutter arrays allow NIRSpec to do Multi-Object Spectroscopy, a key part of the mission. Each array consists of 62414 individual 100 x 200 micron shutters. These shutters are magnetically opened and held electrostatically. Individual shutters are then programmatically closed using a simple row/column addressing technique. A common approach to providing these data/clock patterns is to use a Field Programmable Gate Array (FPGA). Such devices require complex VHSIC Hardware Description Language (VHDL) programming and custom electronic hardware. Due to JWST's rapid schedule for the development of the micro-shutters, rapid changes to the FPGA code were required to accommodate newly discovered approaches for optimizing array performance. Such rapid changes simply could not be made using conventional VHDL programming. Subsequently, National Instruments introduced an FPGA product that could be programmed through a LabVIEW interface. Because LabVIEW programming is considerably easier than VHDL programming, this method was adopted and brought success. The software/hardware combination allowed the FPGA code to be changed rapidly and yielded timely performance data for new micro-shutter arrays. As a result, the project saved numerous labor hours and considerable money.
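
    A minimal Python sketch (not the flight FPGA/LabVIEW code; the array dimensions and target pattern are illustrative assumptions) of the row/column addressing idea, where the whole array starts open and each row is addressed in turn with a column pattern that closes the unwanted shutters:

        import numpy as np

        ROWS, COLS = 171, 365                          # illustrative array dimensions
        desired_open = np.zeros((ROWS, COLS), dtype=bool)
        desired_open[10, 20] = True                    # e.g. keep one shutter open on a target
        desired_open[45, 300] = True

        shutters = np.ones((ROWS, COLS), dtype=bool)   # magnetically opened: all shutters open
        for row in range(ROWS):                        # address one row at a time
            close_mask = ~desired_open[row]            # column pattern driven for this row
            shutters[row, close_mask] = False          # shutters in selected columns close

        assert (shutters == desired_open).all()        # only the programmed shutters stay open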

  8. Experiences on 64 and 150 FPGA Systems

    SciTech Connect

    Storaasli, Olaf O; Strenski, Dave

    2008-01-01

    Four FPGA systems were evaluated: the Cray XD1 system with 6 FPGAs at ORNL and Cray, the Cray XD1 system with 150 FPGAs at NRL*, and the 64 FPGAs on Edinburgh's Maxwell. Their hardware and software architectures, programming tools and performance on scientific applications are discussed. FPGA speedup (over a 2.2 GHz Opteron) of 10X was typical for matrix equation solution, molecular dynamics and weather/climate codes, and up to 100X for human genome DNA sequencing. Large genome comparisons requiring 12.5 years on an Opteron took less than 24 hours on NRL's Cray XD1 with 150 Virtex FPGAs, for a 7,350X speedup. The DNA-matching design uses a pipeline so that each query and database character are compared in parallel, resulting in a table of scores. Genome sequencing results: FPGA timing results (for up to 150 FPGAs) were obtained and compared with up to 150 Opterons for sequences of varying size and complexity (e.g. the 4GB openfpga.org human DNA benchmark and 155M human vs. 166M mouse DNA). 1 FPGA: Bacillus_anthracis DNA compare: Genomes

  9. On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.

    PubMed

    Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea

    2016-09-01

    A smart home is a home environment enriched with sensing, actuation, communication and computation capabilities, which permit it to be adapted to the inhabitants' preferences and requirements. Establishing a proper strategy of actuation on the home environment can require complex computational tasks on the sensed data. This is the case for activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, which offers basic abstraction entities that make it easy to design and implement smart home applications. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details about how to implement activity recognition on the CASE architecture are supplied, focusing on the low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment. PMID:27468841

  10. Computational framework to model and design surgical meshes for hernia repair.

    PubMed

    Hernández-Gascón, B; Espés, N; Peña, E; Pascual, G; Bellón, J M; Calvo, B

    2014-08-01

    Surgical procedures for hernia surgery are usually performed using prosthetic meshes. In spite of all the improvements in these biomaterials, a perfect match between the prosthesis and the implant site has not been achieved. Thus, new designs of surgical meshes are still being developed. Prior to implantation in humans, the validity of the meshes has to be addressed, and to date experimental studies have been the gold standard for testing and validating new implants. Nevertheless, these procedures involve long periods of time and are expensive. Thus, a computational framework for the simulation of prostheses and surgical procedures may overcome some disadvantages of the experimental methods. The computational framework includes two computational models for designing and validating the behaviour of new meshes, respectively. Firstly, the beam model, which reproduces the exact geometry of the mesh, is used to design the weave and determine the stiffness of the surgical prosthesis. However, this implies a high computational cost, whereas the membrane model, defined within the framework of large-deformation hyperelasticity, is a relatively inexpensive computational tool that also enables a prosthesis to be included in more complex geometries such as human or animal bodies. PMID:23167618

  11. A framework design for the mHealth system for self-management promotion.

    PubMed

    Jia, Guifeng; Yang, Pan; Zhou, Jie; Zhang, Hengyi; Lin, Chengyu; Chen, Jin; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    Mobile health (mHealth) technology has been proposed to alleviate the lack of sufficient medical resources for personal healthcare. However, usage difficulties and compliance issues relating to this technology restrict the effect of mHealth system-supported self-management. In this study, an mHealth framework is introduced to overcome these drawbacks and improve the outcome of self-management. We implemented a set of ease of use principles in the mHealth design and employed the quantitative Fogg Behavior Model to enhance users' execution ability. The framework was realized in a prototype design for the mHealth system, which consists of medical apparatuses, mobile applications and a health management server. The system is able to monitor the physiological status in an unconstrained manner with simplified operations, while supervising the healthcare plan. The results suggest that the present framework design is accessible for ordinary users and effective in improving users' execution ability in self-management. PMID:26405941

  12. Information Model Driven Semantic Framework Architecture and Design for Distributed Data Repositories

    NASA Astrophysics Data System (ADS)

    Fox, P. A.; Semantic eScience Framework Team

    2011-12-01

    In Earth and space science, the steady evolution away from isolated and single-purpose data 'systems' toward systems of systems, data ecosystems, or data frameworks that provide access to highly heterogeneous data repositories is picking up pace. As a result, common informatics approaches are being sought for how newer architectures are developed and/or implemented. In particular, a clear need has emerged for a repeatable method for modeling, implementing and evolving information architectures, one that goes beyond traditional software design. This presentation outlines new component design approaches based on sets of information models and semantic encodings for mediation.

  13. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration, yielding greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power, extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
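
    As a rough illustration of the classification step described above (a software sketch, not the flight code), a linear-kernel SVM reduces each per-pixel decision to a dot product plus a bias, which is what makes it amenable to hardware acceleration; the feature dimension and binary labels below are assumptions.

        import numpy as np

        def linear_svm_classify(pixels, w, b):
            """Classify per-pixel feature vectors with a linear-kernel SVM.
            pixels: (N, D) array of per-pixel features (e.g., band values).
            w, b:   trained weight vector (D,) and bias.
            Returns +1/-1 labels; a multi-class labeller (snow/water/ice/...)
            would combine several such binary decisions."""
            scores = pixels @ w + b          # one multiply-accumulate chain per pixel
            return np.where(scores >= 0, 1, -1)

        rng = np.random.default_rng(0)
        labels = linear_svm_classify(rng.normal(size=(8, 5)), rng.normal(size=5), 0.1)
        print(labels)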

  14. An efficient and flexible web services-based multidisciplinary design optimisation framework for complex engineering systems

    NASA Astrophysics Data System (ADS)

    Li, Liansheng; Liu, Jihong

    2012-08-01

    Multidisciplinary design optimisation (MDO) involves multiple disciplines, multiple coupled relationships and multiple processes, and is carried out by different specialists dispersed geographically on heterogeneous platforms with different analysis and optimisation tools. Difficulties with product design data integration and data sharing among the participants seriously hamper the development and application of MDO in enterprises. Therefore, a multi-hierarchical integrated product design data model (MH-iPDM) supporting MDO in the web environment and a web services-based multidisciplinary design optimisation (Web-MDO) framework are proposed in this article. Based on enabling technologies including web services, ontology, workflow, agent, XML and evidence theory, the proposed framework enables geographically dispersed designers to work collaboratively in the MDO environment. The ontology-based workflow enables the logical reasoning of MDO to be processed dynamically. The evidence theory-based uncertainty reasoning and analysis supports the quantification, aggregation and analysis of conflicting epistemic uncertainty from multiple sources, which improves product quality. Finally, a proof-of-concept prototype system is developed using J2EE, and an example of a supersonic business jet is demonstrated to verify the autonomous execution of MDO strategies and the effectiveness of the proposed approach.

  15. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction, as sketched below. The snapshot of the candidate population is updated iteratively using the evolutionary algorithm technique of
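
    A minimal sketch of the POD step referred to above, under the assumption that candidate configurations are stacked as columns of a snapshot matrix; the dominant design-space modes are then the leading left singular vectors.

        import numpy as np

        def pod_modes(snapshots, n_modes):
            """Extract dominant POD modes from a snapshot matrix.
            snapshots: (n_dof, n_candidates) matrix, one candidate design per column.
            Returns the first n_modes left singular vectors (the POD basis) and
            the singular values (modal energies)."""
            mean = snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
            return U[:, :n_modes], s

        rng = np.random.default_rng(1)
        basis, energy = pod_modes(rng.normal(size=(50, 12)), n_modes=3)
        print(basis.shape, energy[:3])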

  16. A Markovian state-space framework for integrating flexibility into space system design decisions

    NASA Astrophysics Data System (ADS)

    Lafleur, Jarret M.

    The past decades have seen the state of the art in aerospace system design progress from a scope of simple optimization to one including robustness, with the objective of permitting a single system to perform well even in off-nominal future environments. Integrating flexibility, or the capability to easily modify a system after it has been fielded in response to changing environments, into system design represents a further step forward. One challenge in accomplishing this rests in that the decision-maker must consider not only the present system design decision, but also sequential future design and operation decisions. Despite extensive interest in the topic, the state of the art in designing flexibility into aerospace systems, and particularly space systems, tends to be limited to analyses that are qualitative, deterministic, single-objective, and/or limited to consider a single future time period. To address these gaps, this thesis develops a stochastic, multi-objective, and multi-period framework for integrating flexibility into space system design decisions. Central to the framework are five steps. First, system configuration options are identified and costs of switching from one configuration to another are compiled into a cost transition matrix. Second, probabilities that demand on the system will transition from one mission to another are compiled into a mission demand Markov chain. Third, one performance matrix for each design objective is populated to describe how well the identified system configurations perform in each of the identified mission demand environments. The fourth step employs multi-period decision analysis techniques, including Markov decision processes from the field of operations research, to find efficient paths and policies a decision-maker may follow. The final step examines the implications of these paths and policies for the primary goal of informing initial system selection. Overall, this thesis unifies state-centric concepts of
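
    The five steps lend themselves to a small worked example. The sketch below uses hypothetical numbers, a single objective, and two configurations/demands; it combines a switching-cost matrix, a mission-demand Markov chain, and a performance matrix, and runs value iteration (one standard way to solve the resulting Markov decision process) to obtain a switching policy.

        import numpy as np

        # Hypothetical data for 2 system configurations and 2 mission demands.
        switch_cost = np.array([[0.0, 4.0],     # cost of moving from config i to config j
                                [3.0, 0.0]])
        demand_P = np.array([[0.8, 0.2],        # Markov chain over mission demands
                             [0.3, 0.7]])
        performance = np.array([[10.0, 2.0],    # reward of config j under demand d
                                [4.0,  9.0]])

        gamma, n_cfg, n_dem = 0.9, 2, 2
        V = np.zeros((n_cfg, n_dem))            # value of state (current config, current demand)

        for _ in range(200):                    # value iteration
            Q = np.empty((n_cfg, n_dem, n_cfg))
            for i in range(n_cfg):
                for d in range(n_dem):
                    for j in range(n_cfg):      # action: switch to configuration j
                        Q[i, d, j] = (-switch_cost[i, j] + performance[j, d]
                                      + gamma * demand_P[d] @ V[j])
            V = Q.max(axis=2)

        policy = Q.argmax(axis=2)               # best next configuration per state
        print(policy)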

  17. Influence of framework design, contraction mismatch, and thermal history on porcelain checking in fixed partial dentures.

    PubMed

    Anusavice, K J; Gray, A E

    1989-01-01

    The objective of this study was to characterize the relative influence of contraction mismatch, framework design, furnace type, cooling rate, and multiple firings on immediate or delayed checking in fixed partial dentures. Frameworks for 60 anterior bridges (three-unit fixed partial dentures) were cast from a low-expansion Au-Pd alloy (O) and a high-expansion Pd-Ag alloy (J). A high-expansion porcelain (B) was applied to each of three framework designs. Firing was performed at heating rates of 56 degrees C/min and 180 degrees C/min. Specimens were cooled at two rates after each of five glazing cycles. For O-B specimens which exhibited a negative thermal contraction mismatch between 600 degrees C and 25 degrees C, 60% of the bridge specimens failed when they were subjected to slow cooling preceded by either fast or slow heating. When J-B specimens (which exhibited a smaller negative contraction mismatch) were heated and cooled rapidly, no failures occurred through all of the firing cycles. However, cracks were observed in 13.3% of the J-B bridges which were slowly heated and rapidly cooled. Delayed cracks (after the fifth glaze cycle) developed over periods of up to two years only in bridges which were slowly cooled in the furnace chamber. The results of this study suggest that checking in conventional feldspathic porcelains can be promoted by slow cooling rates and an excessive number of firing cycles. PMID:2691298

  18. A framework for evaluating and designing citizen science programs for natural resources monitoring.

    PubMed

    Chase, Sarah K; Levine, Arielle

    2016-06-01

    We present a framework of resource characteristics critical to the design and assessment of citizen science programs that monitor natural resources. To develop the framework we reviewed 52 citizen science programs that monitored a wide range of resources and provided insights into what resource characteristics are most conducive to developing citizen science programs and how resource characteristics may constrain the use or growth of these programs. We focused on 4 types of resource characteristics: biophysical and geographical, management and monitoring, public awareness and knowledge, and social and cultural characteristics. We applied the framework to 2 programs, the Tucson (U.S.A.) Bird Count and the Maui (U.S.A.) Great Whale Count. We found that resource characteristics such as accessibility, diverse institutional involvement in resource management, and social or cultural importance of the resource affected program endurance and success. However, the relative influence of each characteristic was in turn affected by the goals of the citizen science programs. Although the goals of public engagement and education sometimes complemented the goal of collecting reliable data, in many cases trade-offs must be made between these 2 goals. Program goals and priorities ultimately dictate the design of citizen science programs, but for a program to endure and successfully meet its goals, program managers must consider the diverse ways that the nature of the resource being monitored influences public participation in monitoring. PMID:27111860

  19. FPGA Implementation of Metastability-Based True Random Number Generator

    NASA Astrophysics Data System (ADS)

    Hata, Hisashi; Ichikawa, Shuichi

    True random number generators (TRNGs) are important as a basis for computer security. Though some TRNGs are composed of analog circuits, the use of digital circuits is desired for the application of TRNGs to logic LSIs. Some digital TRNGs utilize the jitter of free-running ring oscillators as a source of entropy, but such oscillators consume considerable power. Another type of TRNG exploits the metastability of a latch to generate entropy. Although this kind of TRNG has mostly been implemented with full-custom LSI technology, this study presents an implementation based on common FPGA technology. Our TRNG is composed of logic gates only, and can be integrated in any kind of logic LSI. The RS latch in our TRNG is implemented as a hard macro to guarantee the quality of randomness by minimizing the signal skew and load imbalance of internal nodes. To improve the quality and throughput, the outputs of 64-256 latches are XOR'ed. The derived design was verified on a Xilinx Virtex-4 FPGA (XC4VFX20), and passed the NIST statistical test suite without post-processing. Our TRNG with 256 latches occupies 580 slices, while achieving 12.5 Mbps throughput.
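
    A rough software analogue of the XOR combining described above (the entropy source itself is of course the hardware latch, not this model): biased bits from many latches are XOR'ed, which drives the combined bit toward an unbiased distribution; the bias value used below is an assumption.

        import random

        def metastable_latch_bit(p_one=0.6):
            """Stand-in for one RS latch resolving from metastability; the real
            bias depends on skew and load, here modelled as a fixed probability."""
            return 1 if random.random() < p_one else 0

        def trng_bit(n_latches=256):
            """XOR the outputs of n_latches latches to reduce bias."""
            bit = 0
            for _ in range(n_latches):
                bit ^= metastable_latch_bit()
            return bit

        sample = [trng_bit() for _ in range(10000)]
        print(sum(sample) / len(sample))   # should be close to 0.5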

  20. An FPGA-based open platform for ultrasound biomicroscopy.

    PubMed

    Qiu, Weibao; Yu, Yanyan; Tsang, Fu; Sun, Lei

    2012-07-01

    Ultrasound biomicroscopy (UBM) has been extensively applied to preclinical studies in small animal models. Each animal study is unique and requires different utilization of the UBM system to accommodate different transducer characteristics, data acquisition strategies, signal processing, and image reconstruction methods. There is a demand for a flexible and open UBM platform that allows users to customize the system for various studies and have full access to experimental data. This paper presents the development of an open UBM platform (center frequency 20 to 80 MHz) for various preclinical studies. The platform design was based on a field-programmable gate array (FPGA) embedded in a printed circuit board to achieve B-mode imaging and directional pulsed-wave Doppler. Instead of hardware circuitry, most functions of the platform, such as filtering, envelope detection, and scan conversion, were achieved by FPGA programs; thus, the system architecture can be easily modified for specific applications. In addition, a novel digital quadrature demodulation algorithm was implemented for fast and accurate Doppler profiling. Finally, test results showed that the platform could offer a minimum detectable signal of 25 μV, allowing a 51 dB dynamic range at 47 dB gain, and real-time imaging at more than 500 frames/s. Phantom and in vivo imaging experiments were conducted and the results demonstrated good system performance. PMID:22828839
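
    The digital quadrature demodulation mentioned above can be illustrated offline in a few lines (the sampling rate, centre frequency and filter below are assumptions, not the paper's firmware parameters): the RF echo is mixed with cosine and sine of the carrier and low-pass filtered, giving I/Q data from which the B-mode envelope and the Doppler phase follow.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs, fc = 400e6, 40e6                       # assumed sampling and centre frequencies
        t = np.arange(2048) / fs
        rf = np.cos(2 * np.pi * fc * t + 0.3) * np.exp(-((t - 2.5e-6) / 1e-6) ** 2)

        i_mix = rf * np.cos(2 * np.pi * fc * t)    # in-phase mixing
        q_mix = -rf * np.sin(2 * np.pi * fc * t)   # quadrature mixing
        b, a = butter(4, 10e6 / (fs / 2))          # low-pass to keep the baseband component
        i_bb, q_bb = filtfilt(b, a, i_mix), filtfilt(b, a, q_mix)

        envelope = 2 * np.hypot(i_bb, q_bb)        # B-mode envelope
        phase = np.arctan2(q_bb, i_bb)             # phase used for Doppler estimation
        print(envelope.max(), phase[1024])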

  1. Research on defogging technology of video image based on FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Shuo; Piao, Yan

    2015-03-01

    Because of scattering by atmospheric particles, video images captured by outdoor surveillance systems have low contrast and brightness, which directly affects the application value of such systems. Traditional defogging techniques are mostly implemented in software and operate on single frames, and the algorithms are computationally expensive with high time complexity. Defogging of video images based on a Digital Signal Processor (DSP) suffers from complex peripheral circuitry, cannot achieve real-time processing, and is hard to debug and upgrade. In this paper, using an improved dark channel prior algorithm, we propose a video defogging technique based on a Field Programmable Gate Array (FPGA). Compared with traditional defogging methods, high-resolution video can be processed in real time. Furthermore, the function modules of the system have been designed in a hardware description language. Finally, the results show that the FPGA-based defogging system can process video with a resolution of at least 640×480 in real time. After defogging, the brightness and contrast of the video are improved effectively. Therefore, the defogging technique proposed in this paper has a wide variety of applications, including aviation, forest fire prevention, national security and other important surveillance tasks.
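
    The paper's improved algorithm is not given in the abstract, but the core of the standard dark channel prior method it builds on can be sketched as follows; the patch size, atmospheric-light estimate and haze parameters are assumptions for illustration.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dark_channel(img, patch=15):
            """Per-pixel minimum over the RGB channels and a local patch."""
            return minimum_filter(img.min(axis=2), size=patch)

        def dehaze(img, omega=0.95, t0=0.1):
            """Simplified dark channel prior dehazing with a crude atmospheric-light estimate."""
            dark = dark_channel(img)
            A = img.reshape(-1, 3)[dark.argsort(axis=None)[-10:]].max(axis=0)  # brightest hazy pixels
            t = 1.0 - omega * dark_channel(img / A)        # transmission estimate
            t = np.clip(t, t0, 1.0)[..., None]
            return np.clip((img - A) / t + A, 0.0, 1.0)

        foggy = np.random.default_rng(2).random((120, 160, 3)) * 0.5 + 0.4
        print(dehaze(foggy).shape)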

  2. A holistic framework for design of cost-effective minimum water utilization network.

    PubMed

    Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N

    2008-07-01

    Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for synthesis of MWR network, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process changes options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion. PMID:17449168

  3. Design and architecture of the Mars relay network planning and analysis framework

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Lee, C. H.

    2002-01-01

    In this paper we describe the design and architecture of the Mars Network planning and analysis framework that supports the generation and validation of efficient planning and scheduling strategies. The goals are to minimize transmission time, minimize delay, and/or maximize network throughput. The proposed framework would require (1) a client-server architecture to support interactive, batch, web, and distributed analysis and planning applications for the relay network analysis scheme, (2) a high-fidelity modeling and simulation environment that expresses link capabilities between spacecraft and between spacecraft and Earth stations as time-varying resources, and spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints, and (3) an optimization methodology that casts the resource and constraint models into a standard linear and nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.

  4. Design of additive quantum codes via the code-word-stabilized framework

    SciTech Connect

    Kovalev, Alexey A.; Pryadko, Leonid P.; Dumer, Ilya

    2011-12-15

    We consider design of the quantum stabilizer codes via a two-step, low-complexity approach based on the framework of codeword-stabilized (CWS) codes. In this framework, each quantum CWS code can be specified by a graph and a binary code. For codes that can be obtained from a given graph, we give several upper bounds on the distance of a generic (additive or nonadditive) CWS code, and the lower Gilbert-Varshamov bound for the existence of additive CWS codes. We also consider additive cyclic CWS codes and show that these codes correspond to a previously unexplored class of single-generator cyclic stabilizer codes. We present several families of simple stabilizer codes with relatively good parameters.

  5. Molecular docking sites designed for the generation of highly crystalline covalent organic frameworks

    NASA Astrophysics Data System (ADS)

    Ascherl, Laura; Sick, Torben; Margraf, Johannes T.; Lapidus, Saul H.; Calik, Mona; Hettstedt, Christina; Karaghiosoff, Konstantin; Döblinger, Markus; Clark, Timothy; Chapman, Karena W.; Auras, Florian; Bein, Thomas

    2016-04-01

    Covalent organic frameworks (COFs) formed by connecting multidentate organic building blocks through covalent bonds provide a platform for designing multifunctional porous materials with atomic precision. As they are promising materials for applications in optoelectronics, they would benefit from a maximum degree of long-range order within the framework, which has remained a major challenge. We have developed a synthetic concept to allow consecutive COF sheets to lock in position during crystal growth, and thus minimize the occurrence of stacking faults and dislocations. Hereby, the three-dimensional conformation of propeller-shaped molecular building units was used to generate well-defined periodic docking sites, which guided the attachment of successive building blocks that, in turn, promoted long-range order during COF formation. This approach enables us to achieve a very high crystallinity for a series of COFs that comprise tri- and tetradentate central building blocks. We expect this strategy to be transferable to a broad range of customized COFs.

  6. Embedded algorithms within an FPGA-based system to process nonlinear time series data

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.

    2008-03-01

    This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast-growing FPGA technology offers a more powerful, efficient, and flexible hardware platform, including on-site (field-programmable) reconfiguration of the hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than manually coding in a hardware description language (HDL). The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better
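
    For reference, the Hilbert-transform step described above, extracting the instantaneous amplitude and frequency of a nonstationary signal, looks as follows in floating-point software (the FPGA version works with fixed-point arithmetic; the test signal is illustrative).

        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        signal = np.exp(-0.5 * t) * np.sin(2 * np.pi * (5.0 + 3.0 * t) * t)   # decaying chirp

        analytic = hilbert(signal)                       # analytic signal x + j*H{x}
        inst_amplitude = np.abs(analytic)                # instantaneous envelope
        inst_phase = np.unwrap(np.angle(analytic))
        inst_frequency = np.diff(inst_phase) * fs / (2 * np.pi)   # Hz

        print(inst_amplitude[:3], inst_frequency[:3])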

  7. Metal-organic Frameworks as A Tunable Platform for Designing Functional Molecular Materials

    PubMed Central

    Wang, Cheng; Liu, Demin

    2013-01-01

    Metal-organic frameworks (MOFs), also known as coordination polymers, represent an interesting class of crystalline molecular materials that are synthesized by combining metal-connecting points and bridging ligands. The modular nature of and mild conditions for MOF synthesis have permitted the rational structural design of numerous MOFs and the incorporation of various functionalities via constituent building blocks. The resulting designer MOFs have shown promise for applications in a number of areas, including gas storage/separation, nonlinear optics/ferroelectricity, catalysis, energy conversion/storage, chemical sensing, biomedical imaging, and drug delivery. The structure-property relationships of MOFs can also be readily established by taking advantage of the knowledge of their detailed atomic structures, which enables fine-tuning of their functionalities for desired applications. Through the combination of molecular synthesis and crystal engineering MOFs thus present an unprecedented opportunity for the rational and precise design of functional materials. PMID:23944646

  8. Design of a software framework to support live/virtual training on distributed terrain

    NASA Astrophysics Data System (ADS)

    Schiavone, Guy A.; Tracy, Judd; Woodruff, Eric; Dere, Troy

    2003-09-01

    In this paper we describe research and development on the concept and application of distributed terrain and distributed terrain servers to support live/virtual training operations. This includes the design of a distributed, cluster-capable "Combat Server" for the virtual representation and simulation of live training exercises, and current work to support virtual representation and visualization of live indoor operations involving firefighters, SWAT teams and/or special operations forces. The Combat Server concept under development is an object-oriented, efficient and flexible distributed platform designed for simulation and training. It can operate on any compatible, high-performance computer for which the software is compliant; however, it is explicitly designed for distribution and cooperation of relatively inexpensive clustered computers, together playing the role of a large independent system. The design of the Combat Server aims to be generic and encompass any situation that involves monitoring, tracking, assessment, visualization and, eventually, simulated interactivity to complement real-world training exercises. To accomplish such genericity, the design must incorporate techniques such as layering or abstraction to remove any dependencies on specific hardware, such as weapons, that are to eventually be employed by the system; this also includes entity-tracking hardware interfaces, whether based on GPS or Ultra-Wide Band technologies. The Combat Server is a framework. Its design is a foothold for building a specialized distributed system for modeling a particular style of exercise. The Combat Server can also be a software development framework, providing a platform for building specialized exercises while abstracting the developer from the minutiae of building a real-time distributed system. In this paper we review preliminary experiments regarding basic line-of-sight (LOS) functions of the Combat Server and its functionality and scalability in a cluster computing

  9. Climate services for society: origins, institutional arrangements, and design elements for an evaluation framework

    PubMed Central

    Vaughan, Catherine; Dessai, Suraje

    2014-01-01

    Climate services involve the generation, provision, and contextualization of information and knowledge derived from climate research for decision making at all levels of society. These services are mainly targeted at informing adaptation to climate variability and change, widely recognized as an important challenge for sustainable development. This paper reviews the development of climate services, beginning with a historical overview, a short summary of improvements in climate information, and a description of the recent surge of interest in climate service development including, for example, the Global Framework for Climate Services, implemented by the World Meteorological Organization in October 2012. It also reviews institutional arrangements of selected emerging climate services across local, national, regional, and international scales. By synthesizing existing literature, the paper proposes four design elements of a climate services evaluation framework. These design elements include: problem identification and the decision-making context; the characteristics, tailoring, and dissemination of the climate information; the governance and structure of the service, including the process by which it is developed; and the socioeconomic value of the service. The design elements are intended to serve as a guide to organize future work regarding the evaluation of when and whether climate services are more or less successful. The paper concludes by identifying future research questions regarding the institutional arrangements that support climate services and nascent efforts to evaluate them. PMID:25798197

  10. Comparative effectiveness research for the clinician researcher: a framework for making a methodological design choice.

    PubMed

    Williams, Cylie M; Skinner, Elizabeth H; James, Alicia M; Cook, Jill L; McPhail, Steven M; Haines, Terry P

    2016-01-01

    Comparative effectiveness research compares two active forms of treatment, or usual care in comparison with usual care plus an additional intervention element. These types of study are commonly conducted following a placebo or no-active-treatment trial. Research designs with a placebo or non-active treatment arm can be challenging for the clinician researcher when conducted within the healthcare environment with patients attending for treatment. A framework for conducting comparative effectiveness research is needed, particularly for interventions for which there are no strong regulatory requirements that must be met prior to their introduction into usual care. We argue for a broader use of comparative effectiveness research to achieve translatable real-world clinical research. These types of research design also affect the rapid uptake of evidence-based clinical practice within the healthcare setting. This framework includes questions to guide the clinician researcher to the most appropriate trial design to measure treatment effect. These questions include consideration given to current treatment provision during usual care, known treatment effectiveness, side effects of treatments, economic impact, and the setting in which the research is being undertaken. PMID:27530915

  11. A Multi-axis Control Board Implemented via an FPGA

    NASA Astrophysics Data System (ADS)

    Longo, Domenico; Muscato, Giovanni

    Most robotic applications rely on the use of DC motors with quadrature encoder feedback. Typical applications are legged robots or articulated-chassis multi-wheeled robots. In these applications the system designer must implement multi-axis control systems able to handle a high number of quadrature encoder signals and to generate the same number of PWM signals. Moreover, the adopted CPU must be able to execute the same number of control loop algorithms in a time slot of about ten milliseconds. Very few commercial SoC (System on Chip) devices can handle up to six channels. In this work the implementation of a SoC on an FPGA able to handle up to 20 channels within a time slot of 20 ms and up to 100 channels within a time slot of 100 ms is described. In order to demonstrate the effectiveness of the design, the board was used to control a small six-wheeled outdoor robot.
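
    A behavioural sketch of the per-channel work described above, quadrature decoding followed by a control-loop update producing a PWM duty cycle; in the actual SoC this runs in FPGA fabric for many channels in parallel, and the transition table, controller form and gains below are illustrative assumptions.

        # Quadrature decoding: valid transitions of the (A, B) pair move the count by +/-1.
        QUAD_STEP = {
            (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
            (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
        }

        def decode(samples):
            """Accumulate encoder counts from successive (A, B) samples."""
            position, prev = 0, samples[0]
            for cur in samples[1:]:
                position += QUAD_STEP.get((prev, cur), 0)   # invalid transitions are ignored
                prev = cur
            return position

        def pi_update(setpoint, position, integral, kp=0.4, ki=0.05):
            """One PI control-loop iteration producing a PWM duty cycle in [0, 1]."""
            error = setpoint - position
            integral += error
            return max(0.0, min(1.0, kp * error + ki * integral)), integral

        pos = decode([0b00, 0b01, 0b11, 0b10, 0b00, 0b01])
        duty, _ = pi_update(setpoint=10, position=pos, integral=0.0)
        print(pos, duty)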

  12. FPGA-based trigger system for the Fermilab SeaQuest experiment

    NASA Astrophysics Data System (ADS)

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-12-01

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ- produced in 120 GeV/c proton-nucleon interactions in a high rate environment. The trigger system consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 channels of hodoscope are digitized by each FPGA with a 1-ns resolution using the time-to-digital convertor (TDC) firmware. The delay of the TDC output can be adjusted channel-by-channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Information on the candidate tracks is sent to the 2nd-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.
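
    A much-simplified illustration of the trigger-matrix idea (the plane count and element numbers are hypothetical): the hit pattern on the hodoscope planes is compared against a table of allowed combinations, and any match flags a candidate muon track.

        # Each entry lists one allowed combination of hit elements on four hodoscope planes.
        TRIGGER_MATRIX = {
            (3, 5, 7, 9),
            (4, 6, 8, 10),
            (2, 4, 6, 8),
        }

        def candidate_tracks(hits_per_plane):
            """hits_per_plane: list of 4 sets of struck hodoscope elements.
            Returns the trigger-matrix rows consistent with the observed hits."""
            return [row for row in TRIGGER_MATRIX
                    if all(elem in hits_per_plane[p] for p, elem in enumerate(row))]

        event = [{3, 11}, {5}, {7, 1}, {9, 10}]
        print(candidate_tracks(event))   # -> [(3, 5, 7, 9)]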

  13. Remote monitoring and fault recovery for FPGA-based field controllers of telescope and instruments

    NASA Astrophysics Data System (ADS)

    Zhu, Yuhua; Zhu, Dan; Wang, Jianing

    2012-09-01

    With increasing size and more and more functions, modern telescopes widely use a control architecture consisting of a central control unit plus field controllers. An FPGA-based field controller has the advantage of being field programmable, which makes it very convenient to modify the software and hardware of the control system. It also provides a good platform for implementing new control schemes. Because of the many controlled nodes and the poor working environment at scattered locations, the reliability and stability of the field controllers must be fully considered. This paper mainly describes how we use FPGA-based field controllers and remote Ethernet access to construct a multi-node monitoring system. When a failure appears, the FPGA chip first performs self-recovery in accordance with predefined recovery strategies. If the chip is not restored, remote reconfiguration of the field controller can be carried out through network intervention. This paper also introduces the remote network reconfiguration solution for the controller, the system structure and transport protocol, as well as the implementation methods. The hardware and software design concept based on the FPGA is given. After actual operation on large telescopes, the desired results have been achieved. The improvement increases system reliability and reduces the maintenance workload, showing good prospects for application and popularization.

  14. FPGA-based Trigger System for the Fermilab SeaQuest Experiment

    SciTech Connect

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-09-10

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ- produced in 120 GeV/c proton-nucleon interactions in a high rate environment. The trigger system we used consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 channels of hodoscope are digitized by each FPGA with a 1-ns resolution using the time-to-digital convertor (TDC) firmware. The delay of the TDC output can be adjusted channel-by-channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Finally, information on the candidate tracks is sent to the 2nd-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.

  15. FPGA-based Trigger System for the Fermilab SeaQuest Experiment

    DOE PAGESBeta

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-09-10

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ- produced in 120 GeV/c proton-nucleon interactions in a high rate environment. The trigger system we used consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 channels of hodoscope are digitized by each FPGA with a 1-ns resolution using the time-to-digital convertor (TDC) firmware. The delay of the TDC output can be adjusted channel-by-channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Finally, information on the candidate tracks is sent to the 2nd-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.

  16. A novel FPGA-based bunch purity monitor system at the APS storage ring.

    SciTech Connect

    Norum, W. E.; APS Engineering Support Division

    2008-01-01

    Bunch purity is an important source quality factor for the magnetic resonance experiments at the Advanced Photon Source. Conventional bunch-purity monitors utilizing time-to-amplitude converters are subject to dead time. We present a novel design based on a single field-programmable gate array (FPGA) that continuously processes pulses at the full speed of the detector and front-end electronics. The FPGA provides 7778 single-channel analyzers (six per rf bucket). The starting time and width of each single-channel analyzer window can be set to a resolution of 178 ps. A detector pulse arriving inside the window of a single-channel analyzer is recorded in an associated 32-bit counter. The analyzer makes no contribution to the system dead time. Two channels for each rf bucket count pulses originating from the electrons in the bucket. The other four channels on the early and late side of the bucket provide estimates of the background. A single-chip microcontroller attached to the FPGA acts as an EPICS IOC to make the information in the FPGA available to the EPICS clients.
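
    The single-channel-analyzer behaviour described above amounts to window comparisons feeding counters; a software sketch with arbitrary window settings follows (the real design implements thousands of these windows in parallel in the FPGA fabric).

        # Each single-channel analyzer (SCA) is a (start, width) time window in picoseconds
        # plus a 32-bit counter; the real design allocates six windows per rf bucket.
        class SCA:
            def __init__(self, start_ps, width_ps):
                self.start, self.width, self.count = start_ps, width_ps, 0

            def process(self, pulse_time_ps):
                if self.start <= pulse_time_ps < self.start + self.width:
                    self.count = (self.count + 1) & 0xFFFFFFFF   # wrap like a 32-bit counter

        bucket_scas = [SCA(start_ps=i * 178, width_ps=178) for i in range(6)]   # 178 ps granularity
        for t in (10, 200, 250, 900, 1050):
            for sca in bucket_scas:
                sca.process(t)
        print([sca.count for sca in bucket_scas])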

  17. The P0 feedback control system blurs the line between IOC and FPGA.

    SciTech Connect

    DiMonte, N.; APS Engineering Support Division

    2008-01-01

    The P0 Feedback system is a new design at the Advanced Photon Source (APS) primarily intended to stabilize a single bunch in order to operate at a higher accumulated charge. The algorithm for this project required a high-speed DSP solution for a single channel that would make adjustments on a turn-by-turn basis. A field programmable gate array (FPGA) solution was selected that not only met the requirements of the project but far exceeded them. By using a single FPGA, we were able to adjust up to 324 bunches on two separate channels with a total computational throughput of approximately 6 × 10^9 multiply-accumulate operations per second. The IOC is a Coldfire CPU tightly coupled to the FPGA, providing dedicated control and monitoring of the system through EPICS [1] process variables. One of the benefits of this configuration is having a four-channel scope in the FPGA that can be monitored on a continuous basis.

  18. Developing a Model of Practice: Designing a Framework for the Professional Development of School Leaders and Managers.

    ERIC Educational Resources Information Center

    Reeves, Jenny; Forde, Christine; Casteel, Viv; Lynas, Richard

    1998-01-01

    Describes the origins and evolution of a framework for leadership and management development in Scottish schools. The design of this competence framework is underpinned by a professional-development model supporting experiential learning and critical reflection. Calls for a synthesis of various approaches to management development based on a…

  19. An Application of the Impact Evaluation Process for Designing a Performance Measurement and Evaluation Framework in K-12 Environments

    ERIC Educational Resources Information Center

    Guerra-Lopez, Ingrid; Toker, Sacip

    2012-01-01

    This article illustrates the application of the Impact Evaluation Process for the design of a performance measurement and evaluation framework for an urban high school. One of the key aims of this framework is to enhance decision-making by providing timely feedback about the effectiveness of various performance improvement interventions. The…

  20. Guiding the Design of Lessons by Using the MAPLET Framework: Matching Aims, Processes, Learner Expertise and Technologies

    ERIC Educational Resources Information Center

    Ifenthaler, Dirk; Gosper, Maree

    2014-01-01

    This paper introduces the MAPLET framework that was developed to map and link teaching aims, learning processes, learner expertise and technologies. An experimental study with 65 participants is reported to test the effectiveness of the framework as a guide to the design of lessons embedded within larger units of study. The findings indicate the…

  1. FPGA implementation of robust Capon beamformer

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Zmuda, Henry; Li, Jian; Du, Lin; Sheplak, Mark

    2012-03-01

    The Capon beamforming algorithm is an optimal spatial filtering algorithm used in various signal processing applications where excellent interference rejection performance is required, such as radar and sonar systems and smart antenna systems for wireless communications. Its lack of robustness, however, means that it is vulnerable to array calibration errors and other model errors. To overcome this problem, numerous robust Capon beamforming algorithms have been proposed, which are much more promising for practical applications. In this paper, an FPGA implementation of a robust Capon beamforming algorithm is investigated and presented. This realization takes an array output with 4 channels, computes the complex-valued adaptive weight vectors for beamforming with an 18-bit fixed-point representation, and runs at a 100 MHz clock on a Xilinx Virtex-4 FPGA. This work will be applied in our medical imaging project for breast cancer detection.
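
    For reference, the standard (non-robust) Capon weight vector that such beamformers compute is w = R^{-1} a / (a^H R^{-1} a); the floating-point sketch below adds simple diagonal loading as a stand-in for robustness, whereas the FPGA design uses an 18-bit fixed-point robust variant.

        import numpy as np

        def capon_weights(R, steering):
            """Standard Capon/MVDR weights: w = R^{-1} a / (a^H R^{-1} a)."""
            Ri_a = np.linalg.solve(R, steering)
            return Ri_a / (steering.conj() @ Ri_a)

        rng = np.random.default_rng(3)
        snapshots = rng.normal(size=(4, 200)) + 1j * rng.normal(size=(4, 200))
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # 4-channel sample covariance
        R += 1e-3 * np.eye(4)                                     # diagonal loading for robustness
        a = np.exp(1j * np.pi * np.arange(4) * np.sin(np.deg2rad(20)))   # assumed steering vector
        w = capon_weights(R, a)
        print(np.abs(w.conj() @ a))   # distortionless constraint: should be ~1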

  2. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  3. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs have been utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high-resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for the parallelisation that is needed to achieve high processing speed with an FPGA. Facing this issue, most FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 ∗ 2048 pixels) with lower complexity, and the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the needs of many video applications. Since these restrictions are incompatible with the requirement to label high-resolution images with highly complex structures and the need to generate a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline-processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
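
    For orientation, the classical two-pass CCL algorithm that the FPGA method is based on can be sketched in software as follows (4-connectivity, union-find for label equivalences); the streaming, stop-and-go FPGA pipeline is a heavily reworked version of this idea.

        import numpy as np

        def two_pass_ccl(binary):
            """Label 4-connected foreground components of a binary image."""
            labels = np.zeros(binary.shape, dtype=np.int32)
            parent = [0]                                    # union-find forest, index 0 = background

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            next_label = 1
            for r in range(binary.shape[0]):                # first pass: provisional labels + merges
                for c in range(binary.shape[1]):
                    if not binary[r, c]:
                        continue
                    up = labels[r - 1, c] if r > 0 else 0
                    left = labels[r, c - 1] if c > 0 else 0
                    neighbours = [n for n in (up, left) if n]
                    if not neighbours:
                        parent.append(next_label)
                        labels[r, c] = next_label
                        next_label += 1
                    else:
                        labels[r, c] = min(neighbours)
                        if len(neighbours) == 2:            # record equivalence of up/left labels
                            ra, rb = find(up), find(left)
                            parent[max(ra, rb)] = min(ra, rb)
            for r in range(binary.shape[0]):                # second pass: resolve equivalences
                for c in range(binary.shape[1]):
                    if labels[r, c]:
                        labels[r, c] = find(labels[r, c])
            return labels

        img = np.array([[1, 1, 0, 1],
                        [0, 1, 0, 1],
                        [1, 0, 0, 1]], dtype=bool)
        print(two_pass_ccl(img))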

  4. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation

    PubMed Central

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

    Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the FPGA system's real-time behavior using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844

  5. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation.

    PubMed

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

    Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the FPGA system's real-time behavior using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844

  6. FPGA-accelerated adaptive optics wavefront control part II

    NASA Astrophysics Data System (ADS)

    Mauch, S.; Barth, A.; Reger, J.; Reinlein, C.; Appelfelder, M.; Beckert, E.

    2015-03-01

    We present progressive work that is based on our recently developed rapid control prototyping (RCP) system, designed for the implementation of high-performance adaptive optical control algorithms using a continuous deformable mirror (DM). The RCP system, presented in 2014, is based on a Xilinx Kintex-7 Field Programmable Gate Array (FPGA), placed on a self-developed PCIe card, and installed in a high-performance computer that runs a hard real-time Linux operating system. For this purpose, algorithms for the efficient evaluation of data from a Shack-Hartmann wavefront sensor (SHWFS) on an FPGA have been developed. The corresponding analog input and output cards are designed to exploit the maximum possible performance while not being constrained to a specific DM and control algorithm, owing to the RCP approach. In this second part of our contribution, we focus on recent results that we achieved with this novel experimental setup. By presenting results which are far superior to the former ones, we further justify the deployment of the RCP system and its required time and resources. We conducted various experiments to reveal the effective performance, i.e. the maximum manageable complexity in the controller design that may be achieved in real time without performance losses. A detailed analysis of the hidden latencies is carried out, showing that these latencies have been drastically reduced. In addition, a series of concepts relating to the evaluation of the wavefront as well as designing and synthesizing a wavefront are thoroughly investigated with the goal of overcoming some of the prevalent limitations. Furthermore, principal results regarding the closed-loop performance of the low-speed dynamics of the integrated heater in a DM concept are illustrated in detail, to be combined with the piezo-electric high-speed actuators in the next step.

  7. Design, implementation and validation of a novel open framework for agile development of mobile health applications

    PubMed Central

    2015-01-01

    The delivery of healthcare services has experienced tremendous changes in recent years. Mobile health or mHealth is a key engine of advance at the forefront of this revolution. Although there is a growing development of mobile health applications, there is a lack of tools specifically devised for their implementation. This work presents mHealthDroid, an open source Android implementation of a mHealth Framework designed to facilitate the rapid and easy development of mHealth and biomedical apps. The framework is particularly planned to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors and portable biomedical systems. These devices are increasingly used for the monitoring and delivery of personal health care and wellbeing. The framework implements several functionalities to support resource and communication abstraction, biomedical data acquisition, health knowledge extraction, persistent data storage, adaptive visualization, system management and value-added services such as intelligent alerts, recommendations and guidelines. An exemplary application is also presented in this work to demonstrate the potential of mHealthDroid. This app is used to investigate the analysis of human behavior, which is considered to be one of the most prominent areas in mHealth. An accurate activity recognition model is developed and successfully validated in both offline and online conditions. PMID:26329639

  8. Design, implementation and validation of a novel open framework for agile development of mobile health applications.

    PubMed

    Banos, Oresti; Villalonga, Claudia; Garcia, Rafael; Saez, Alejandro; Damas, Miguel; Holgado-Terriza, Juan A; Lee, Sungyong; Pomares, Hector; Rojas, Ignacio

    2015-01-01

    The delivery of healthcare services has experienced tremendous changes during the last years. Mobile health or mHealth is a key engine of advance in the forefront of this revolution. Although there exists a growing development of mobile health applications, there is a lack of tools specifically devised for their implementation. This work presents mHealthDroid, an open source Android implementation of a mHealth Framework designed to facilitate the rapid and easy development of mHealth and biomedical apps. The framework is particularly planned to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors and portable biomedical systems. These devices are increasingly used for the monitoring and delivery of personal health care and wellbeing. The framework implements several functionalities to support resource and communication abstraction, biomedical data acquisition, health knowledge extraction, persistent data storage, adaptive visualization, system management and value-added services such as intelligent alerts, recommendations and guidelines. An exemplary application is also presented along this work to demonstrate the potential of mHealthDroid. This app is used to investigate on the analysis of human behavior, which is considered to be one of the most prominent areas in mHealth. An accurate activity recognition model is developed and successfully validated in both offline and online conditions. PMID:26329639

  9. TOT measurement implemented in FPGA TDC

    NASA Astrophysics Data System (ADS)

    Fan, Huan-Huan; Cao, Ping; Liu, Shu-Bin; An, Qi

    2015-11-01

Time measurement plays a crucial role in particle identification in high energy physics experiments. With increasingly demanding physics goals and the development of electronics, modern time measurement systems need to provide excellent resolution as well as a high level of integration. Based on Field Programmable Gate Arrays (FPGAs), FPGA time-to-digital converters (TDCs) have become one of the most mature and prominent time measurement methods in recent years. To correct the time-walk effect caused by leading-edge timing, a time-over-threshold (TOT) measurement should be added to the FPGA TDC. TOT can be obtained by measuring the interval between the signal leading and trailing edges. Unfortunately, a traditional TDC can recognize only one kind of signal edge, the leading or the trailing. Generally, to measure the interval, two TDC channels need to be used at the same time, one for the leading edge and the other for the trailing edge. However, this method unavoidably increases the amount of FPGA resources used and reduces the TDC's level of integration. This paper presents a method of TOT measurement implemented in a Xilinx Virtex-5 FPGA. In this method, TOT measurement can be achieved using only one TDC input channel, while both resource consumption and time resolution are kept under control. Testing shows that this TDC can achieve resolution better than 15 ps for leading edge measurement and 37 ps for TOT measurement. Furthermore, the TDC measurement dead time is about two clock cycles, which makes it well suited to applications with high physics event rates. Supported by National Natural Science Foundation of China (11079003, 10979003).
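
    As a plain illustration of the quantity being measured, the snippet below pairs leading and trailing edge timestamps to produce (leading time, TOT) tuples. It is a software sketch only; the timestamp format and function names are assumptions and do not reflect the paper's single-channel FPGA method.

```python
# Sketch (assumed data format): derive leading-edge time and time-over-threshold
# (TOT) from a time-ordered stream of edge timestamps.

def leading_and_tot(edges):
    """edges: list of (timestamp_ps, polarity) with polarity 'rise' or 'fall',
    sorted in time. Returns a list of (leading_time_ps, tot_ps) pairs."""
    results = []
    pending_rise = None
    for t, polarity in edges:
        if polarity == "rise":
            pending_rise = t
        elif polarity == "fall" and pending_rise is not None:
            results.append((pending_rise, t - pending_rise))  # TOT = trailing - leading
            pending_rise = None
    return results

if __name__ == "__main__":
    hits = [(1000, "rise"), (1850, "fall"), (5000, "rise"), (5420, "fall")]
    print(leading_and_tot(hits))   # [(1000, 850), (5000, 420)]
```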

  10. Using FPGA Devices to Accelerate Biomolecular Simulations

    SciTech Connect

    Alam, Sadaf R; Agarwal, Pratul K; Smith, Melissa C; Vetter, Jeffrey S; Caliga, David E

    2007-03-01

A field-programmable gate array implementation of the particle-mesh Ewald method, a molecular dynamics simulation technique, reduces the microprocessor time-to-solution by a factor of three while using only high-level languages. The application speedup on FPGA devices increases with the problem size. The authors use a performance model to analyze the potential of simulating large-scale biological systems faster than many cluster-based supercomputing platforms.

  11. A taxonomy of apatite frameworks for the crystal chemical design of fuel cell electrolytes

    SciTech Connect

    Pramana, Stevin S.; Klooster, Wim T.; White, Timothy J.

    2008-08-15

Apatite framework taxonomy succinctly rationalises the crystallographic modifications of this structural family as a function of chemical composition. Taking the neutral apatite [La8Sr2][(GeO4)6]O2 as a prototype electrolyte, this classification scheme correctly predicted that 'excess' oxygen in La9SrGe6O26.5 is tenanted in the framework as [La9Sr][(GeO4)5.5(GeO5)0.5]O2, rather than the presumptive tunnel location of [La9Sr][(GeO4)6]O2.5. The implication of this approach is that in addition to the three known apatite genera, A10(BO3)6X2, A10(BO4)6X2 and A10(BO5)6X2, hybrid electrolytes of the types A10(BO3/BO4/BO5)6X2 can be designed, with potentially superior low-temperature ion conduction, mediated by the introduction of oxygen to the framework reservoir. - Graphical abstract: Apatite framework taxonomy succinctly rationalises the crystallographic modifications of this structural family as a function of chemical composition. Neutron diffraction identified that the excess oxygen in La9SrGe6O26.5 is tenanted in the framework as [La9Sr][(GeO4)5.5(GeO5)0.5]O2. The implication of this approach is that in addition to the three known apatite genera, A10(BO3)6X2, A10(BO4)6X2 and A10(BO5)6X2, hybrid electrolytes of the types A10(BO3/BO4/BO5)6X2 can be designed.

  12. Tuning the Topology and Functionality of Metal–Organic Frameworks by Ligand Design

    SciTech Connect

    Zhao, Dan; Timmons, Daren J; Yuan, Daqiang; Zhou, Hong-Cai

    2011-02-15

    Metal–organic frameworks (MOFs)—highly crystalline hybrid materials that combine metal ions with rigid organic ligands—have emerged as an important class of porous materials. The organic ligands add flexibility and diversity to the chemical structures and functions of these materials. In this Account, we summarize our laboratory’s experience in tuning the topology and functionality of MOFs by ligand design. These investigations have led to new materials with interesting properties. By using a ligand that can adopt different symmetry conformations through free internal bond rotation, we have obtained two MOFs that are supramolecular stereoisomers of each other at different reaction temperatures. In another case, where the dimerized ligands function as a D₃-Piedfort unit spacer, we achieve chiral (10,3)-a networks. In the design of MOF-based materials for hydrogen and methane storage, we focused on increasing the gas affinity of frameworks by using ligands with different geometries to control the pore size and effectively introduce unsaturated metal centers (UMCs) into the framework. Framework interpenetration in PCN-6 (PCN stands for porous coordination network) can lead to higher hydrogen uptake. Because of the proper alignment of the UMCs, PCN-12 holds the record for uptake of hydrogen at 77 K/760 Torr. In the case of methane storage, PCN-14 with anthracene-derived ligand achieves breakthrough storage capacity, at a level 28% higher than the U.S. Department of Energy target. Selective gas adsorption requires a pore size comparable to that of the target gas molecules; therefore, we use bulky ligands and network interpenetration to reduce the pore size. In addition, with the help of an amphiphilic ligand, we were able to use temperature to continuously change pore size in a 2D layer MOF. Adding charge to an organic ligand can also stabilize frameworks. By ionizing the amine group within mesoMOF-1, the resulting electronic repulsion keeps the network from

  13. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    NASA Astrophysics Data System (ADS)

    Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.

    2006-10-01

The Virtex-II Pro FPGA is applied to the vision sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the tasks of improving SNR and compressing data, is built around high-speed image processing on the FPGA. The lower-level image-processing algorithm is realized by combining the FPGA framework and the embedded CPU, and the introduction of the FPGA and CPU accelerates image processing. The embedded CPU also makes it easy to realize the interface logic design. Some key techniques, such as the read-write process, template matching and convolution, are presented in the text, and several modules are simulated. Finally, a comparison is carried out among implementations of the modules using this design, a PC, and a DSP. Because the core of the high-speed image processing system is an FPGA, whose function can be conveniently updated, the measurement system is, to a degree, intelligent.

  14. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design
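
    The throughput claim above follows from a simple roofline-style product; the lines below merely re-run that arithmetic with the numbers quoted in the abstract.

```python
# Back-of-the-envelope check of the throughput figure quoted in the abstract:
# throughput ≈ arithmetic_intensity × peak_bandwidth × memory_efficiency.
# All values below are taken directly from the abstract.

arithmetic_intensity = 130 / 64.0   # 130 flops per 64 bytes of I/O ≈ 2.03 ops/byte
peak_bandwidth_gbs = 76.8           # GB/s, Convey HC-1 platform
memory_efficiency = 0.50            # ~50% as reported

throughput_gflops = arithmetic_intensity * peak_bandwidth_gbs * memory_efficiency
print(f"{throughput_gflops:.0f} Gflop/s")   # ~78 Gflop/s, matching the abstract
```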

  15. Implementing a Digital Phasemeter in an FPGA

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2008-01-01

    Firmware for implementing a digital phasemeter within a field-programmable gate array (FPGA) has been devised. In the original application of this firmware, the phase that one seeks to measure is the difference between the phases of two nominally-equal-frequency heterodyne signals generated by two interferometers. In that application, zero-crossing detectors convert the heterodyne signals to trains of rectangular pulses, the two pulse trains are fed to a fringe counter (the major part of the phasemeter) controlled by a clock signal having a frequency greater than the heterodyne frequency, and the fringe counter computes a time-averaged estimate of the difference between the phases of the two pulse trains. The firmware also does the following: Causes the FPGA to compute the frequencies of the input signals; Causes the FPGA to implement an Ethernet (or equivalent) transmitter for readout of phase and frequency values; and Provides data for use in diagnosis of communication failures. The readout rate can be set, by programming, to a value between 250 Hz and 1 kHz. Network addresses can be programmed by the user.
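
    A minimal software model of the fringe-counting idea described above is sketched below: two nominally equal-frequency pulse trains are sampled by a faster clock, and the averaged edge offset is converted to a phase difference. All frequencies and the signal generation are invented for illustration; this is not the NASA firmware.

```python
# Conceptual sketch of fringe counting (illustrative parameters, not the FPGA design).
import numpy as np

fclk = 100e6          # counter clock, Hz (must exceed the heterodyne frequency)
fhet = 1e6            # heterodyne frequency, Hz
phase_true = 0.7      # radians, the value we try to recover
n = 200_000           # number of clock samples

t = np.arange(n) / fclk
a = np.sin(2 * np.pi * fhet * t) > 0               # pulse train A (zero-crossing detector output)
b = np.sin(2 * np.pi * fhet * t - phase_true) > 0  # pulse train B, lagging by phase_true

rises_a = np.flatnonzero(np.diff(a.astype(int)) == 1)
rises_b = np.flatnonzero(np.diff(b.astype(int)) == 1)
m = min(len(rises_a), len(rises_b))
delay_ticks = np.mean(rises_b[:m] - rises_a[:m])   # time-averaged edge offset in clock ticks
phase_est = 2 * np.pi * fhet * delay_ticks / fclk  # quantized to the clock period
print(f"estimated phase: {phase_est:.3f} rad (true value {phase_true})")
```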

  16. FPGA Trigger System to Run Klystrons

    SciTech Connect

    Gray, Darius; /Texas A-M /SLAC

    2010-08-25

The Klystron Department is in need of a new trigger system to update the laboratory capabilities. The objective of the research is to develop the trigger system using Field Programmable Gate Array (FPGA) technology with a user interface that will allow one to communicate with the FPGA via a Universal Serial Bus (USB). This trigger system will be used for the testing of klystrons. The key materials used consist of the Xilinx Integrated Software Environment (ISE) Foundation, a Programmable Read Only Memory (PROM) XCF04S, a Xilinx Spartan 3E 35S500E FPGA, a Xilinx Platform Cable USB II, a Printed Circuit Board (PCB), a 100 MHz oscillator, and an oscilloscope. Key considerations include eight triggers, two of which have variable phase shifting capabilities. Once the project was completed, the output signals could be manipulated via a graphical user interface by varying the delay and width of the signal. This was as planned; however, the ability to vary the phase was not completed. Future work could add the ability to vary the phase. This project will give the operators in the Klystron Department more flexibility to run various tests.

  17. Rational design of crystalline supermicroporous covalent organic frameworks with triangular topologies

    PubMed Central

    Dalapati, Sasanka; Addicoat, Matthew; Jin, Shangbin; Sakurai, Tsuneaki; Gao, Jia; Xu, Hong; Irle, Stephan; Seki, Shu; Jiang, Donglin

    2015-01-01

    Covalent organic frameworks (COFs) are an emerging class of highly ordered porous polymers with many potential applications. They are currently designed and synthesized through hexagonal and tetragonal topologies, limiting the access to and exploration of new structures and properties. Here, we report that a triangular topology can be developed for the rational design and synthesis of a new class of COFs. The triangular topology features small pore sizes down to 12 Å, which is among the smallest pores for COFs reported to date, and high π-column densities of up to 0.25 nm−2, which exceeds those of supramolecular columnar π-arrays and other COF materials. These crystalline COFs facilitate π-cloud delocalization and are highly conductive, with a hole mobility that is among the highest reported for COFs and polygraphitic ensembles. PMID:26178865

  18. Rational design of crystalline supermicroporous covalent organic frameworks with triangular topologies

    NASA Astrophysics Data System (ADS)

    Dalapati, Sasanka; Addicoat, Matthew; Jin, Shangbin; Sakurai, Tsuneaki; Gao, Jia; Xu, Hong; Irle, Stephan; Seki, Shu; Jiang, Donglin

    2015-07-01

    Covalent organic frameworks (COFs) are an emerging class of highly ordered porous polymers with many potential applications. They are currently designed and synthesized through hexagonal and tetragonal topologies, limiting the access to and exploration of new structures and properties. Here, we report that a triangular topology can be developed for the rational design and synthesis of a new class of COFs. The triangular topology features small pore sizes down to 12 Å, which is among the smallest pores for COFs reported to date, and high π-column densities of up to 0.25 nm-2, which exceeds those of supramolecular columnar π-arrays and other COF materials. These crystalline COFs facilitate π-cloud delocalization and are highly conductive, with a hole mobility that is among the highest reported for COFs and polygraphitic ensembles.

  19. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    SciTech Connect

    Wurtz, R.; Kaplan, A.

    2015-10-28

Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully realized statistical classifiers rely on a comprehensive set of tools for design, construction, and implementation. Advances in PSD rely on improvements to the implemented algorithm and can draw on conventional statistical classifier or machine learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully designed and operational classifier framework, which can be used to discover opportunities for improving PSD classifier projects. The paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.
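
    The reporting practice recommended above can be illustrated in a few lines: sweep a threshold over classifier scores, trace the ROC, and read the neutron acceptance at the chosen gamma rejection rate. The scores below are synthetic and the 0.999 GRR target is an arbitrary example, not a value from the paper.

```python
# Sketch of ROC/GRR reporting with synthetic PSD scores (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
gamma_scores = rng.normal(0.20, 0.05, 10_000)    # PSD score for gamma pulses
neutron_scores = rng.normal(0.35, 0.05, 10_000)  # PSD score for neutron pulses

thresholds = np.linspace(0.0, 0.6, 601)
gamma_acc = np.array([(gamma_scores > t).mean() for t in thresholds])     # false-positive rate
neutron_acc = np.array([(neutron_scores > t).mean() for t in thresholds]) # true-positive rate

target_grr = 0.999                                   # reject 99.9% of gammas (example value)
idx = int(np.argmax(gamma_acc <= 1 - target_grr))    # first threshold meeting the GRR target
print(f"threshold {thresholds[idx]:.3f}: "
      f"neutron acceptance {neutron_acc[idx]:.3f} at GRR {target_grr}")
```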

  20. Adaptive change in electrically stimulated muscle: a framework for the design of clinical protocols.

    PubMed

    Salmons, Stanley

    2009-12-01

    Adult mammalian skeletal muscles have a remarkable capacity for adapting to increased use. Although this behavior is familiar from the changes brought about by endurance exercise, it is seen to a much greater extent in the response to long-term neuromuscular stimulation. The associated phenomena include a markedly increased resistance to fatigue, and this is the key to several clinical applications. However, a more rational basis is needed for designing regimes of stimulation that are conducive to an optimal outcome. In this review I examine relevant factors, such as the amount, frequency, and duty cycle of stimulation, the influence of force generation, and the animal model. From these considerations a framework emerges for the design of protocols that yield an overall functional profile appropriate to the application. Three contrasting examples illustrate the issues that need to be addressed clinically. PMID:19902542

  1. Expanding lean thinking to the product and process design and development within the framework of sustainability

    NASA Astrophysics Data System (ADS)

    Sorli, M.; Sopelana, A.; Salgado, M.; Pelaez, G.; Ares, E.

    2012-04-01

Companies require tools to change towards a new way of developing and producing innovative products, manufactured with the economic, social and environmental impact along the product life cycle in mind. By translating Lean principles into Product Development (PD) from the design stage onwards and along the entire product life cycle, the work aims to address both sustainability and environmental issues. The drivers of a sustainable culture within lean PD have been identified and a baseline for future research on the development of appropriate tools and techniques has been provided. This research provides industry with a framework that balances environmental and sustainability factors with lean principles, to be considered and incorporated from the beginning of product design and development and covering the entire product lifecycle.

  2. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  3. Rational design of crystalline supermicroporous covalent organic frameworks with triangular topologies.

    PubMed

    Dalapati, Sasanka; Addicoat, Matthew; Jin, Shangbin; Sakurai, Tsuneaki; Gao, Jia; Xu, Hong; Irle, Stephan; Seki, Shu; Jiang, Donglin

    2015-01-01

    Covalent organic frameworks (COFs) are an emerging class of highly ordered porous polymers with many potential applications. They are currently designed and synthesized through hexagonal and tetragonal topologies, limiting the access to and exploration of new structures and properties. Here, we report that a triangular topology can be developed for the rational design and synthesis of a new class of COFs. The triangular topology features small pore sizes down to 12 Å, which is among the smallest pores for COFs reported to date, and high π-column densities of up to 0.25 nm(-2), which exceeds those of supramolecular columnar π-arrays and other COF materials. These crystalline COFs facilitate π-cloud delocalization and are highly conductive, with a hole mobility that is among the highest reported for COFs and polygraphitic ensembles. PMID:26178865

  4. Desymmetrized Vertex Design for the Synthesis of Covalent Organic Frameworks with Periodically Heterogeneous Pore Structures.

    PubMed

    Zhu, Youlong; Wan, Shun; Jin, Yinghua; Zhang, Wei

    2015-11-01

    Two novel porous 2D covalent organic frameworks (COFs) with periodically heterogeneous pore structures were successfully synthesized through desymmetrized vertex design strategy. Condensation of C(2v) symmetric 5-(4-formylphenyl)isophthalaldehyde or 5-((4-formylphenyl)ethylene)isophthalaldehyde with linear hydrazine linker under the solvothermal or microwave heating conditions yields crystalline 2D COFs, HP-COF-1 and HP-COF-2, with high specific surface areas and dual pore structures. PXRD patterns and computer modeling study, together with pore size distribution analysis confirm that each of the resulting COFs exhibits two distinctively different hexagonal pores. The structures were characterized by FT-IR, solid state (13)C NMR, gas adsorption, SEM, TEM, and theoretical simulations. Such rational design and synthetic strategy provide new possibilities for preparing highly ordered porous polymers with heterogeneous pore structures. PMID:26478274

  5. Valuation-Based Framework for Considering Distributed Generation Photovoltaic Tariff Design: Preprint

    SciTech Connect

    Zinaman, O. R.; Darghouth, N. R.

    2015-02-01

While an export tariff is only one element of a larger regulatory framework for distributed generation, we choose to focus on tariff design because of the significant impact this program design component has on the various flows of value among power sector stakeholders. In that context, this paper is organized into a series of steps that can be taken during the design of a DGPV export tariff. To that end, this paper outlines a holistic, high-level approach to the complex undertaking of DGPV tariff design, the crux of which is an iterative cost-benefit analysis process. We propose a multi-step progression that aims to promote transparent, focused, and informed dialogue on CBA study methodologies and assumptions. When studies are completed, the long-run marginal avoided cost of the DGPV program should be compared against the costs imposed on utilities and non-participating customers, recognizing that these can be defined differently depending on program objectives. The results of this comparison can then be weighed against other program objectives to formulate tariff options. Potential changes to tariff structures can be iteratively fed back into established analytical tools to inform further discussions.

  6. Laboratory evaluation of dynamic traffic assignment systems: Requirements, framework, and system design

    SciTech Connect

    Miaou, S.-P.; Pillai, R.S.; Summers, M.S.; Rathi, A.K.; Lieu, H.C.

    1997-01-01

The success of Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS) depends on the availability and dissemination of timely and accurate estimates of current and emerging traffic network conditions. Real-time Dynamic Traffic Assignment (DTA) systems are being developed to provide the required timely information. The DTA systems will provide faithful and coherent real-time, pre-trip, and en-route guidance/information which includes routing, mode, and departure time suggestions for use by travelers, ATIS, and ATMS. To ensure the credibility and deployment potential of such DTA systems, an evaluation system supporting all phases of DTA system development has been designed and presented in this paper. This evaluation system is called the DTA System Laboratory (DSL). A major component of the DSL is a ground-truth simulator, the DTA Evaluation System (DES). The DES is envisioned to be a virtual representation of a transportation system in which ATMS and ATIS technologies are deployed. It simulates the driving and decision-making behavior of travelers in response to ATIS and ATMS guidance, information, and control. This paper presents the major evaluation requirements for DTA systems, a modular modeling framework for the DES, and a distributed DES design. The modeling framework for the DES is modular, meets the requirements, can be assembled using both legacy and independently developed modules, and can be implemented as either a single process or a distributed system. The distributed design is extendible, provides for the optimization of distributed performance, and supports object-oriented design within each distributed component. A status report on the development of the DES and other research applications is also provided.

  7. Broad-Bandwidth FPGA-Based Digital Polyphase Spectrometer

    NASA Technical Reports Server (NTRS)

    Jamot, Robert F.; Monroe, Ryan M.

    2012-01-01

With present concern for ecological sustainability ever increasing, it is desirable to model the composition of Earth's upper atmosphere accurately with regard to certain helpful and harmful chemicals, such as greenhouse gases and ozone. The microwave limb sounder (MLS) is an instrument designed to map the global day-to-day concentrations of key atmospheric constituents continuously. One important component in MLS is the spectrometer, which processes the raw data provided by the receivers into frequency-domain information that can not only be transmitted more efficiently but also be processed directly once received. The present-generation spectrometer is fully analog. The goal is to include a fully digital spectrometer in the next-generation sensor. In a digital spectrometer, incoming analog data must be converted into a digital format, processed through a Fourier transform, and finally accumulated to reduce the impact of input noise. While the final design will be placed on an application-specific integrated circuit (ASIC), the building of these chips is prohibitively expensive. For that reason, this design was constructed on a field-programmable gate array (FPGA). A family of state-of-the-art digital Fourier transform spectrometers has been developed, with a combination of high bandwidth and fine resolution. Analog signals consisting of radiation emitted by constituents in planetary atmospheres or galactic sources are downconverted and subsequently digitized by a pair of interleaved analog-to-digital converters (ADCs). This 6-Gsps (gigasamples per second) digital representation of the analog signal is then processed through an FPGA-based streaming fast Fourier transform (FFT). Digital spectrometers have many advantages over previously used analog spectrometers, especially in terms of accuracy and resolution, both of which are particularly important for the type of scientific questions to be addressed with next-generation radiometers.
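
    The digitize, Fourier-transform, and accumulate chain described above can be modelled in a few lines. The sketch below uses a plain FFT rather than the instrument's polyphase filter bank, and every parameter (sample rate, FFT length, tone frequency, frame count) is an illustrative assumption.

```python
# Toy FFT spectrometer: digitize -> window/FFT -> accumulate power spectra.
import numpy as np

fs = 1e6          # sample rate of the illustrative ADC, Hz
nfft = 1024       # FFT length
nframes = 500     # number of spectra accumulated

rng = np.random.default_rng(0)
t = np.arange(nfft * nframes) / fs
signal = 0.1 * np.sin(2 * np.pi * 150e3 * t) + rng.normal(scale=1.0, size=t.size)

window = np.hanning(nfft)
accum = np.zeros(nfft // 2 + 1)
for frame in signal.reshape(nframes, nfft):
    accum += np.abs(np.fft.rfft(frame * window)) ** 2   # accumulation suppresses noise fluctuations

peak_bin = np.argmax(accum[1:]) + 1                      # skip the DC bin
print(f"strongest tone near {peak_bin * fs / nfft / 1e3:.1f} kHz")   # close to 150 kHz
```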

  8. A Framework of Working Across Disciplines in Early Design and R&D of Large Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria Rivas; Papalambros, Panos Y.; Baker, Wayne E.

    2015-01-01

    This paper examines four primary methods of working across disciplines during R&D and early design of large-scale complex engineered systems such as aerospace systems. A conceptualized framework, called the Combining System Elements framework, is presented to delineate several aspects of cross-discipline and system integration practice. The framework is derived from a theoretical and empirical analysis of current work practices in actual operational settings and is informed by theories from organization science and engineering. The explanatory framework may be used by teams to clarify assumptions and associated work practices, which may reduce ambiguity in understanding diverse approaches to early systems research, development and design. The framework also highlights that very different engineering results may be obtained depending on work practices, even when the goals for the engineered system are the same.

  9. Design of a digital beam attenuation system for computed tomography: Part I. System design and simulation framework

    SciTech Connect

    Szczykutowicz, Timothy P.; Mistretta, Charles A.

    2013-02-15

Purpose: The purpose of this work is to introduce a new device that allows for patient-specific imaging-dose modulation in conventional and cone-beam CT. The device is called a digital beam attenuator (DBA). The DBA modulates an x-ray beam by varying the attenuation of a set of attenuating wedge filters across the fan angle. The ability to modulate the imaging dose across the fan beam represents another stride in the direction of personalized medicine. With the DBA, imaging dose can be tailored for a given patient anatomy, or even tailored to provide signal-to-noise ratio enhancement within a region of interest. This modulation enables decreases in dose, scatter, detector dynamic range requirements, and noise nonuniformities. In addition to introducing the DBA, the simulation framework used to study the DBA under different configurations is presented. Finally, a detailed study on the choice of the material used to build the DBA is presented. Methods: To change the attenuator thickness, the authors propose to use an overlapping wedge design. In this design, for each wedge pair, one wedge is held stationary and another wedge is moved over the stationary wedge. The composite thickness of the two wedges changes as a function of the amount of overlap between the wedges. To validate the DBA concept and study design changes, a simulation environment was constructed. The environment allows for changes to system geometry, different source spectra, and DBA wedge design modifications, and supports both voxelized and analytic phantom models. All of the elements from atomic number 1 to 92 were evaluated for use as the DBA filter material. The amount of dynamic range and tube loading for each element were calculated for various DBA designs. Tube loading was calculated by comparing the attenuation of the DBA at its minimum attenuation position to a filtered non-DBA acquisition. Results: The design and parametrization of DBA-implemented FFMCT have been introduced. A simulation
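
    To make the overlapping-wedge idea concrete, the sketch below computes a composite thickness profile for a stationary plus moving wedge pair and the resulting beam transmission as the moving wedge is shifted. Slopes, lengths, and the attenuation coefficient are invented illustrative numbers, not the design values studied in the paper.

```python
# Illustrative overlapping-wedge model: composite thickness = stationary wedge + shifted moving wedge.
import numpy as np

fan_pos = np.linspace(0.0, 2.0, 201)   # position across one wedge pair, cm (illustrative)
slope = 0.5                            # wedge slope, cm of material per cm of travel (illustrative)

def composite_thickness(x, shift):
    """Stationary wedge thins with x; the moving wedge (offset by `shift`)
    thickens with x, so the summed thickness depends on the overlap."""
    stationary = slope * (2.0 - x)
    moving = slope * np.clip(x - shift, 0.0, None)
    return stationary + moving

mu = 0.6                               # linear attenuation coefficient, 1/cm (illustrative)
for shift in (0.0, 0.5, 1.0):
    thickness = composite_thickness(fan_pos, shift)
    transmission = np.exp(-mu * thickness)
    print(f"shift = {shift:.1f} cm -> mean transmission {transmission.mean():.2f}")
```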

  10. Design of a digital beam attenuation system for computed tomography: Part I. System design and simulation framework

    PubMed Central

    Szczykutowicz, Timothy P.; Mistretta, Charles A.

    2013-01-01

Purpose: The purpose of this work is to introduce a new device that allows for patient-specific imaging-dose modulation in conventional and cone-beam CT. The device is called a digital beam attenuator (DBA). The DBA modulates an x-ray beam by varying the attenuation of a set of attenuating wedge filters across the fan angle. The ability to modulate the imaging dose across the fan beam represents another stride in the direction of personalized medicine. With the DBA, imaging dose can be tailored for a given patient anatomy, or even tailored to provide signal-to-noise ratio enhancement within a region of interest. This modulation enables decreases in dose, scatter, detector dynamic range requirements, and noise nonuniformities. In addition to introducing the DBA, the simulation framework used to study the DBA under different configurations is presented. Finally, a detailed study on the choice of the material used to build the DBA is presented. Methods: To change the attenuator thickness, the authors propose to use an overlapping wedge design. In this design, for each wedge pair, one wedge is held stationary and another wedge is moved over the stationary wedge. The composite thickness of the two wedges changes as a function of the amount of overlap between the wedges. To validate the DBA concept and study design changes, a simulation environment was constructed. The environment allows for changes to system geometry, different source spectra, and DBA wedge design modifications, and supports both voxelized and analytic phantom models. All of the elements from atomic number 1 to 92 were evaluated for use as the DBA filter material. The amount of dynamic range and tube loading for each element were calculated for various DBA designs. Tube loading was calculated by comparing the attenuation of the DBA at its minimum attenuation position to a filtered non-DBA acquisition. Results: The design and parametrization of DBA-implemented FFMCT have been introduced. A simulation

  11. A digitalized silicon microgyroscope based on embedded FPGA.

    PubMed

    Xia, Dunzhu; Yu, Cheng; Wang, Yuliang

    2012-01-01

This paper presents a novel digital miniaturization method for a prototype silicon micro-gyroscope (SMG) with a symmetrical and decoupled structure. The schematic blocks of the overall system consist of a high-precision analog front-end interface, a high-speed 18-bit analog-to-digital converter, a high-performance core Field Programmable Gate Array (FPGA) chip and other peripherals such as high-speed serial ports for transmitting data. In drive mode, the closed-loop drive circuit is implemented by an automatic gain control (AGC) loop and a software phase-locked loop (SPLL) based on the Coordinate Rotation Digital Computer (CORDIC) algorithm. Meanwhile, the sense demodulation module based on varying-step least mean square demodulation (LMSD) is addressed in detail. All of the algorithms are simulated with Simulink and DSP Builder tools, and the results are in good agreement with the theoretical design. The experimental results have fully demonstrated the stability and flexibility of the system. PMID:23201990
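
    The least-mean-square demodulation step mentioned above can be pictured as a single adaptive weight tracking the amplitude of a known-frequency reference. The sketch below uses a fixed step size rather than the paper's varying-step variant, and all frequencies, noise levels, and names are illustrative assumptions.

```python
# Fixed-step LMS demodulation sketch: one adaptive weight converges to the carrier amplitude.
import numpy as np

fs, f0 = 100_000.0, 4_000.0      # sample rate and carrier frequency, Hz (illustrative)
n = 20_000
t = np.arange(n) / fs
amp_true = 0.8
rng = np.random.default_rng(2)
x = amp_true * np.sin(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=n)   # noisy sense signal

ref = np.sin(2 * np.pi * f0 * t)  # in-phase reference, e.g. taken from the drive loop
w = 0.0                           # adaptive weight = demodulated amplitude estimate
mu = 0.01                         # fixed LMS step size
for k in range(n):
    err = x[k] - w * ref[k]       # instantaneous modelling error
    w += 2 * mu * err * ref[k]    # LMS weight update

print(f"demodulated amplitude ~ {w:.3f} (true value {amp_true})")
```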

  12. Method to implement the CCD timing generator based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation techniques of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which is the controller of this generator. Some test results are presented at the end.

  13. A Digitalized Silicon Microgyroscope Based on Embedded FPGA

    PubMed Central

    Xia, Dunzhu; Yu, Cheng; Wang, Yuliang

    2012-01-01

This paper presents a novel digital miniaturization method for a prototype silicon micro-gyroscope (SMG) with a symmetrical and decoupled structure. The schematic blocks of the overall system consist of a high-precision analog front-end interface, a high-speed 18-bit analog-to-digital converter, a high-performance core Field Programmable Gate Array (FPGA) chip and other peripherals such as high-speed serial ports for transmitting data. In drive mode, the closed-loop drive circuit is implemented by an automatic gain control (AGC) loop and a software phase-locked loop (SPLL) based on the Coordinate Rotation Digital Computer (CORDIC) algorithm. Meanwhile, the sense demodulation module based on varying-step least mean square demodulation (LMSD) is addressed in detail. All of the algorithms are simulated with Simulink and DSP Builder tools, and the results are in good agreement with the theoretical design. The experimental results have fully demonstrated the stability and flexibility of the system. PMID:23201990

  14. FPGA Implementation of Generalized Hebbian Algorithm for Texture Classification

    PubMed Central

    Lin, Shiow-Jyu; Hwang, Wen-Jyi; Lee, Wei-Hao

    2012-01-01

    This paper presents a novel hardware architecture for principal component analysis. The architecture is based on the Generalized Hebbian Algorithm (GHA) because of its simplicity and effectiveness. The architecture is separated into three portions: the weight vector updating unit, the principal computation unit and the memory unit. In the weight vector updating unit, the computation of different synaptic weight vectors shares the same circuit for reducing the area costs. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture is physically implemented by Field Programmable Gate Array (FPGA). It is embedded in a System-On-Programmable-Chip (SOPC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient design for attaining both high speed performance and low area costs. PMID:22778640

  15. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870

  16. Exploring Manycore Multinode Systems for Irregular Applications with FPGA Prototyping

    SciTech Connect

    Ceriani, Marco; Palermo, Gianluca; Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2013-04-29

We present a prototype of a multi-core architecture implemented on FPGA, designed to enable efficient execution of irregular applications on distributed shared memory machines while maintaining high performance on regular workloads. The architecture is composed of off-the-shelf soft cores, local interconnect and a memory interface, integrated with custom components that optimize it for irregular applications. It relies on three key elements: a global address space, multithreading, and fine-grained synchronization. Global addresses are scrambled to reduce the formation of network hot-spots, while the latency of the transactions is covered by integrating a hardware scheduler within the custom load/store buffers to take advantage of the availability of multiple execution threads, increasing efficiency in a way that is transparent to the application. We evaluated a dual-node system on irregular kernels, showing scalability in the number of cores and threads.
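
    The address-scrambling element mentioned above can be illustrated with a toy bit-mixing hash: strided accesses that would otherwise all land on one node get spread across nodes. The mixer constants, node count, and interleaving granularity below are arbitrary illustrative choices, not the prototype's actual scheme.

```python
# Toy address scrambler: a cheap xorshift-multiply mixer (in the spirit of common
# 64-bit hash finalizers) spreads strided global addresses over the nodes.

def scramble(addr: int) -> int:
    addr ^= addr >> 33
    addr = (addr * 0xFF51AFD7ED558CCD) & 0xFFFFFFFFFFFFFFFF
    addr ^= addr >> 33
    return addr

NUM_NODES = 4
stride = 64                                            # bytes; a strided access pattern
addresses = [0x1000 + i * stride for i in range(16)]
print([(a // 8) % NUM_NODES for a in addresses])       # naive interleave: every access hits node 0
print([scramble(a) % NUM_NODES for a in addresses])    # scrambled: spread across the nodes
```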

  17. 3D model acquisition, design, planning, and manufacturing of orthopaedic devices: a framework

    NASA Astrophysics Data System (ADS)

    Kidder, Justin R.; Mason, Emily; Nnaji, Bartholomew O.

    1996-12-01

    Design and manufacture of orthopedic devices using rapid prototyping technologies has been until recently a highly iterative process that involves multiple users, including doctors, design engineers and rapid prototyping experts. Existing systems for creation of orthopedic parts through rapid prototyping do not follow the principles of concurrent engineering and design for manufacture. This leads to excessive communication between parties and delays in product realization time. In this paper, we lay out the framework for a unified expert system that will enable a doctor to create quickly and easily fully functional prosthetics and orthopedic implants. Necessary components of the model acquisition process should include volumetric segmentation of objects from a CT or MRI dataset and NURBS surface fitting to the boundary points. Finite element analysis and surface model modification modules are also needed, but should be provided in an intuitive fashion for doctors who are not experienced in computer aided design. Preprocessing for rapid prototype building should be automatic, and should include optimal orientation, support structure generation and build simulation modules. Finally, the model should be passed to the rapid prototyping machine in a presliced format for speed and accuracy.

  18. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  19. FPGA-based RF spectrum merging and adaptive hopset selection

    NASA Astrophysics Data System (ADS)

    McLean, R. K.; Flatley, B. N.; Silvius, M. D.; Hopkinson, K. M.

The radio frequency (RF) spectrum is a limited resource. Spectrum allotment disputes stem from this scarcity, as many radio devices are confined to a fixed frequency or frequency sequence. One alternative is to incorporate cognition within a reconfigurable radio platform, thereby enabling the radio to adapt to dynamic RF spectrum environments. In this way, the radio is able to actively sense the RF spectrum, decide, and act accordingly, thereby sharing the spectrum and operating in a more flexible manner. In this paper, we present a novel solution for merging many distributed RF spectrum maps into one map and for subsequently creating an adaptive hopset. We also provide an example of our system in operation, the result of which is a pseudorandom adaptive hopset. The paper then presents a novel hardware design for the frequency merger and adaptive hopset selector, both of which are written in VHDL and implemented as a custom IP core on an FPGA-based embedded system using the Xilinx Embedded Development Kit (EDK) software tool. The design of the custom IP core is optimized for area, and it can process a high-volume digital input via a low-latency circuit architecture. The complete embedded system includes the Xilinx PowerPC microprocessor, UART serial connection, and compact flash memory card IP cores, and our custom map merging/hopset selection IP core, all of which are targeted to the Virtex IV FPGA. This system is then incorporated into a cognitive radio prototype on a Rice University Wireless Open Access Research Platform (WARP) reconfigurable radio.
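
    A conceptual software model of the map-merging and hopset-selection step described above is sketched below: several sensed occupancy maps are merged conservatively, and the channels still reported free are shuffled with a seeded PRNG to form a pseudorandom hopset. Channel counts, the merge rule, and the seed are illustrative assumptions, not the behavior of the VHDL IP core.

```python
# Conceptual map merge + hopset selection (software sketch, illustrative parameters).
import random

def merge_maps(maps):
    """Each map is a list of 0/1 occupancy flags per channel; a channel is
    treated as busy if any node saw it busy (element-wise OR)."""
    return [int(any(flags)) for flags in zip(*maps)]

def build_hopset(merged, seed, hopset_size):
    free = [ch for ch, busy in enumerate(merged) if not busy]
    rng = random.Random(seed)           # same seed at every radio -> same pseudorandom hopset
    rng.shuffle(free)
    return free[:hopset_size]

node_a = [0, 1, 0, 0, 1, 0, 0, 0]
node_b = [0, 0, 0, 1, 1, 0, 0, 0]
merged = merge_maps([node_a, node_b])
print(build_hopset(merged, seed=42, hopset_size=4))
```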

  20. Analog and digital FPGA implementation of BRIN for optimization problems.

    PubMed

    Ng, H S; Lam, K P

    2003-01-01

The binary relation inference network (BRIN) shows promise in obtaining the global optimal solution for optimization problems, with a solution time that is independent of the problem size. However, the realization of this method depends on the implementation platform. We studied analog and digital FPGA implementation platforms. Analog implementation of BRIN for two different directed graph problems is studied. As transitive closure problems can be transformed into a special case of shortest path problems or a special case of maximum spanning tree problems, two different forms of BRIN are discussed. Their circuits using common analog integrated circuits are investigated. The BRIN solution for critical path problems is expressed and is implemented using the separated building block circuit and the combined building block circuit. As these circuits are different, the response times of these networks will be different. The advancement of field programmable gate arrays (FPGAs) in recent years, allowing millions of gates on a single chip and accompanied by high-level design tools, has allowed the implementation of very complex networks. With manual circuit construction no longer required and efficient design platforms available, the BRIN architecture can be built in a much more efficient way. Bandwidth problems are removed by moving all previously external connections inside the chip. By transforming BRIN to FPGA (Xilinx XC4010XL and XCV800 Virtex), we implement a synchronous network with computations in a finite number of steps. Two case studies are presented, with correct results verified from the simulation implementation. Resource consumption on FPGAs is studied, showing that Virtex devices are more suitable for expansion of the network in future developments. PMID:18244587

  1. Multiscale Simulation as a Framework for the Enhanced Design of Nanodiamond-Polyethylenimine-based Gene Delivery

    PubMed Central

    Kim, Hansung; Man, Han Bin; Saha, Biswajit; Kopacz, Adrian M.; Lee, One-Sun; Schatz, George C.; Ho, Dean; Liu, Wing Kam

    2012-01-01

    Nanodiamonds (NDs) are emerging carbon platforms with promise as gene/drug delivery vectors for cancer therapy. Specifically, NDs functionalized with the polymer polyethylenimine (PEI) can transfect small interfering RNAs (siRNA) in vitro with high efficiency and low cytotoxicity. Here we present a modeling framework to accurately guide the design of ND-PEI gene platforms and elucidate binding mechanisms between ND, PEI, and siRNA. This is among the first ND simulations to comprehensively account for ND size, charge distribution, surface functionalization, and graphitization. The simulation results are compared with our experimental results both for PEI loading onto NDs and for siRNA (C-myc) loading onto ND-PEI for various mixing ratios. Remarkably, the model is able to predict loading trends and saturation limits for PEI and siRNA, while confirming the essential role of ND surface functionalization in mediating ND-PEI interactions. These results demonstrate that this robust framework can be a powerful tool in ND platform development, with the capacity to realistically treat other nanoparticle systems. PMID:23304428

  2. A framework for the design and development of physical employment tests and standards.

    PubMed

    Payne, W; Harvey, J

    2010-07-01

    Because operational tasks in the uniformed services (military, police, fire and emergency services) are physically demanding and incur the risk of injury, employment policy in these services is usually competency based and predicated on objective physical employment standards (PESs) based on physical employment tests (PETs). In this paper, a comprehensive framework for the design of PETs and PESs is presented. Three broad approaches to physical employment testing are described and compared: generic predictive testing; task-related predictive testing; task simulation testing. Techniques for the selection of a set of tests with good coverage of job requirements, including job task analysis, physical demands analysis and correlation analysis, are discussed. Regarding individual PETs, theoretical considerations including measurability, discriminating power, reliability and validity, and practical considerations, including development of protocols, resource requirements, administrative issues and safety, are considered. With regard to the setting of PESs, criterion referencing and norm referencing are discussed. STATEMENT OF RELEVANCE: This paper presents an integrated and coherent framework for the development of PESs and hence provides a much needed theoretically based but practically oriented guide for organisations seeking to establish valid and defensible PESs. PMID:20582767

  3. Settings for health promotion: an analytic framework to guide intervention design and implementation.

    PubMed

    Poland, Blake; Krupa, Gene; McCall, Douglas

    2009-10-01

    Taking a settings approach to health promotion means addressing the contexts within which people live, work, and play and making these the object of inquiry and intervention as well as the needs and capacities of people to be found in different settings. This approach can increase the likelihood of success because it offers opportunities to situate practice in its context. Members of the setting can optimize interventions for specific contextual contingencies, target crucial factors in the organizational context influencing behavior, and render settings themselves more health promoting. A number of attempts have been made to systematize evidence regarding the effectiveness of interventions in different types of settings (e.g., school-based health promotion, community development). Few, if any, attempts have been made to systematically develop a template or framework for analyzing those features of settings that should influence intervention design and delivery. This article lays out the core elements of such a framework in the form of a nested series of questions to guide analysis. Furthermore, it offers advice on additional considerations that should be taken into account when operationalizing a settings approach in the field. PMID:19809004

  4. FPGA ROM Code for Very Large FIFO Control

    Energy Science and Technology Software Center (ESTSC)

    1995-02-22

The code is used to program a Field Programmable Gate Array (FPGA) that controls a 4-megabit FIFO so that a set delay from input to output is maintained. The FPGA is also capable of inserting errors into the data flow in a controlled manner.

  5. XDELAY. FPGA ROM Code for Very Large FIFO Control

    SciTech Connect

    Pratt, T.J.

    1994-01-01

The code is used to program a Field Programmable Gate Array (FPGA) that controls a 4-megabit FIFO so that a set delay from input to output is maintained. The FPGA is also capable of inserting errors into the data flow in a controlled manner.
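
    A software analogue of the fixed input-to-output delay this controller maintains is sketched below: a ring buffer releases each sample a fixed number of writes after it arrived, with a hook for deliberately corrupting data. The depth, data type, and error-injection mechanism are illustrative assumptions, not details of the XDELAY firmware.

```python
# Fixed-delay ring buffer with a simple controlled error-injection hook (illustrative only).
from collections import deque

class DelayLine:
    def __init__(self, delay, fill=0):
        self.buf = deque([fill] * delay, maxlen=delay)   # fixed depth sets the delay

    def push(self, sample, inject_error=False):
        out = self.buf[0]                                # oldest entry leaves as the new one arrives
        self.buf.append(sample ^ 1 if inject_error else sample)   # optionally flip the low bit
        return out

dl = DelayLine(delay=3)
print([dl.push(x) for x in range(8)])   # [0, 0, 0, 0, 1, 2, 3, 4]: each value emerges 3 pushes later
```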

  6. Advanced image processing package for FPGA-based re-programmable miniature electronics

    NASA Astrophysics Data System (ADS)

    Ovod, Vladimir I.; Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.

    2005-05-01

    Nova Sensors produces miniature electronics for a variety of real-time digital video camera systems, including foveal sensors based on Nova's Variable Acuity Superpixel Imager (VASITM) technology. An advanced image-processing package has been designed at Nova Sensors to re-configure the FPGA-based co-processor board for numerous applications including motion detection, optical, background velocimetry and target tracking. Currently, the processing package consists of 14 processing operations that cover a broad range of point- and area-applied algorithms. Flexible FPGA designs of these operations and re-programmability of the processing board allows for easy updates of the VASITM sensors, and for low-cost customization of VASITM sensors taking into account specific customer requirements. This paper describes the image processing algorithms implemented and verified in Xilinx FPGAs and provides the major technical performances with figures illustrating practical applications of the processing package.

  7. Study on FPGA SEU Mitigation for the Readout Electronics of DAMPE BGO Calorimeter in Space

    NASA Astrophysics Data System (ADS)

    Shen, Zhongtao; Feng, Changqing; Gao, Shanshan; Zhang, Deliang; Jiang, Di; Liu, Shubin; An, Qi

    2015-06-01

    The BGO calorimeter, which provides a wide measurement range of the primary cosmic ray spectrum, is a key sub-detector of Dark Matter Particle Explorer (DAMPE). The readout electronics of calorimeter consists of 16 pieces of Actel ProASIC Plus FLASH-based FPGA, of which the design-level flip-flops and embedded block RAMs are single event upset (SEU) sensitive in the harsh space environment. Therefore to comply with radiation hardness assurance (RHA), SEU mitigation methods, including partial triple modular redundancy (TMR), CRC checksum, and multi-domain reset are analyzed and tested by the heavy-ion beam test. Composed of multi-level redundancy, a FPGA design with the characteristics of SEU tolerance and low resource consumption is implemented for the readout electronics.

  8. Research on acceleration method of reactor physics based on FPGA platforms

    SciTech Connect

    Li, C.; Yu, G.; Wang, K.

    2013-07-01

The physical design of new-concept reactors, which have complex structures, various materials and broad neutron energy spectra, has greatly increased the demands on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of its natural parallelism, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. With the designed neutron diffusion module based on a CPU-FPGA architecture achieving an 11.2x speedup factor, applying this kind of heterogeneous platform to reactor physics is shown to be feasible. (authors)

  9. FPGA-Based Digital Current Switching Power Amplifiers Used in Magnetic Bearing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Zhang, Kai; Dong, Jinping

A traditional two-level current switching power amplifier (PA) used in a magnetic bearing system exhibits noticeable current ripple. To improve current ripple performance, three-level amplifiers have been designed, but their current control is generally based on analog and logic circuits, so the required hardware is complex and performance gains through hardware adjustment are difficult. To solve this problem, an FPGA-based digital current switching power amplifier (DCSPA) was designed. Its current ripple was clearly smaller than that of a two-level amplifier, and its control circuit was much simpler than that of a three-level amplifier with an analog control circuit. Because of the field-programmable capability of the FPGA chip used, different control algorithms, including complex nonlinear algorithms, could be easily implemented in the amplifier and their effects compared on the same hardware.

  10. Computational Design of Metal-Organic Frameworks with High Methane Deliverable Capacity

    NASA Astrophysics Data System (ADS)

    Bao, Yi; Martin, Richard; Simon, Cory; Haranczyk, Maciej; Smit, Berend; Deem, Michael; Deem Team; Haranczyk Team; Smit Team

    Metal-organic frameworks (MOFs) are a rapidly emerging class of nanoporous materials with largely tunable chemistry and diverse applications in gas storage, gas purification, catalysis, etc. Intensive efforts are being made to develop new MOFs with desirable properties both experimentally and computationally in the past decades. To guide experimental synthesis with limited throughput, we develop a computational methodology to explore MOFs with high methane deliverable capacity. This de novo design procedure applies known chemical reactions, considers synthesizability and geometric requirements of organic linkers, and evolves a population of MOFs with desirable property efficiently. We identify about 500 MOFs with higher deliverable capacity than MOF-5 in 10 networks. We also investigate the relationship between deliverable capacity and internal surface area of MOFs. This methodology can be extended to MOFs with multiple types of linkers and multiple SBUs. DE-FG02- 12ER16362.

  11. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    PubMed

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to developing high-quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface prototype from the UML requirements analysis model, so that the validity of input/output data for each page and of page transitions can be confirmed by directly operating the prototype. We propose a mapping rule in which design information independent of any particular web application framework implementation is defined based on the requirements analysis model, so as to improve traceability from the validated requirements analysis model to the final product. This paper discusses the result of applying our method to the development of a Group Work Support System that is currently running in our department. PMID:23565356

  12. A framework for the design of a novel haptic-based medical training simulator.

    PubMed

    Tahmasebi, Amir M; Hashtrudi-Zaad, Keyvan; Thompson, David; Abolmaesumi, Purang

    2008-09-01

    This paper presents a framework for the design of a haptic-based medical ultrasound training simulator. The proposed simulator is composed of a PHANToM haptic device and a modular software package that allows for visual feedback and kinesthetic interactions between an operator and multimodality image databases. The system provides real-time ultrasound images in the same fashion as a typical ultrasound machine, enhanced with corresponding augmented computerized tomographic (CT) and/or MRI images. The proposed training system allows trainees to develop radiology techniques and knowledge of the patient's anatomy with minimum practice on live patients, or in places or at times when radiology devices or patients with rare cases may not be available. Low-level details of the software structure that can be migrated to other similar medical simulators are described. A preliminary human factors study, conducted on the prototype of the developed simulator, demonstrates the potential usage of the system for clinical training. PMID:18779081

  13. A novel integrated framework and improved methodology of computer-aided drug design.

    PubMed

    Chen, Calvin Yu-Chian

    2013-01-01

    Computer-aided drug design (CADD) is a critical initiating step of drug development, but a single model capable of covering all design aspects remains to be elucidated. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine learning based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian networks, pharmacophore modeling, and a structure-based docking algorithm. Restrictions for each model were defined for improved individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stabilities of the receptor-ligand conformation. The proposed protocol was applied to identifying HER2 inhibitors from traditional Chinese medicine (TCM) as an example to validate it. Eight potent leads were identified from six TCM sources. A joint validation system comprising comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. Ligand pathway analysis was also performed to predict ligand entry into and exit from the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates. PMID:23651478

  14. A General Design Framework for MIMO Wireless Energy Transfer With Limited Feedback

    NASA Astrophysics Data System (ADS)

    Xu, Jie; Zhang, Rui

    2016-05-01

    Multi-antenna or multiple-input multiple-output (MIMO) technique can significantly improve the efficiency of radio frequency (RF) signal enabled wireless energy transfer (WET). To fully exploit the energy beamforming gain at the energy transmitter (ET), knowledge of the channel state information (CSI) is essential, which, however, is difficult to obtain in practice due to the hardware limitation of the energy receiver (ER). To overcome this difficulty, under a point-to-point MIMO WET setup, this paper proposes a general design framework for a new type of channel learning method based on the ER's energy measurement and feedback. Specifically, the ER measures and encodes the harvested energy levels over different training intervals into bits, and sends them to the ET via a feedback link of limited rate. Based on the energy-level feedback, the ET adjusts transmit beamforming in subsequent training intervals and obtains refined estimates of the MIMO channel by leveraging the technique of the analytic center cutting plane method (ACCPM) in convex optimization. Under this general design framework, we further propose two specific feedback schemes, termed energy quantization and energy comparison, where the feedback bits at each interval are generated at the ER by quantizing the measured energy level at the current interval and by comparing it with those in the previous intervals, respectively. Numerical results are provided to compare the performance of the two feedback schemes. It is shown that energy quantization performs better when the number of feedback bits per interval is large, while energy comparison is more effective with a small number of feedback bits.
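
    As a hedged sketch of the two feedback rules described above, the C fragment below generates the feedback bits at the energy receiver: energy quantization maps the measured harvest onto B bits over an assumed known range, while energy comparison emits a single bit per interval indicating whether the current measurement exceeds the previous one. The scaling and bit widths are assumptions, not values taken from the paper.

        #include <math.h>
        #include <stdint.h>

        /* "Energy quantization": uniform B-bit quantizer over an assumed range
         * [0, e_max] known to both transmitter and receiver (bits kept small). */
        uint32_t energy_quantization(double e, double e_max, int bits)
        {
            if (e < 0.0)   e = 0.0;
            if (e > e_max) e = e_max;
            uint32_t levels = (1u << bits) - 1u;
            return (uint32_t)lround(e / e_max * levels);
        }

        /* "Energy comparison": one feedback bit per training interval, set when
         * the current measured energy exceeds the previous one. */
        int energy_comparison(double e_now, double e_prev)
        {
            return e_now > e_prev ? 1 : 0;
        }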

  15. Passive Tomography for Spent Fuel Verification: Analysis Framework and Instrument Design Study

    SciTech Connect

    White, Timothy A.; Svard, Staffan J.; Smith, Leon E.; Mozin, Vladimir V.; Jansson, Peter; Davour, Anna; Grape, Sophie; Trellue, H.; Deshmukh, Nikhil S.; Wittman, Richard S.; Honkamaa, Tapani; Vaccaro, Stefano; Ely, James

    2015-05-18

    The potential for gamma emission tomography (GET) to detect partial defects within a spent nuclear fuel assembly is being assessed through a collaboration of Support Programs to the International Atomic Energy Agency (IAEA). In the first phase of this study, two safeguards verification objectives have been identified. The first is the independent determination of the number of active pins that are present in the assembly, in the absence of a priori information. The second objective is to provide quantitative measures of pin-by-pin properties, e.g. activity of key isotopes or pin attributes such as cooling time and relative burnup, for the detection of anomalies and/or verification of operator-declared data. The efficacy of GET to meet these two verification objectives will be evaluated across a range of fuel types, burnups, and cooling times, and with a target interrogation time of less than 60 minutes. The evaluation of GET viability for safeguards applications is founded on a modelling and analysis framework applied to existing and emerging GET instrument designs. Monte Carlo models of different fuel types are used to produce simulated tomographer responses to large populations of “virtual” fuel assemblies. Instrument response data are processed by a variety of tomographic-reconstruction and image-processing methods, and scoring metrics specific to each of the verification objectives are defined and used to evaluate the performance of the methods. This paper describes the analysis framework and evaluation metrics, presents example performance-prediction results, and describes the design of a “universal” GET instrument intended to support the full range of verification scenarios envisioned by the IAEA.

  16. Decoding the "CoDe": A Framework for Conceptualizing and Designing Help Options in Computer-Based Second Language Listening

    ERIC Educational Resources Information Center

    Cardenas-Claros, Monica Stella; Gruba, Paul A.

    2013-01-01

    This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…

  17. Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine

    2008-01-01

    Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for SVM (support vector machine) classifiers will lead to significant improvement in on-board classification capability and accuracy.

  18. Single event upset susceptibility testing of the Xilinx Virtex II FPGA

    NASA Technical Reports Server (NTRS)

    Yui, C.; Swift, G.; Carmichael, C.

    2002-01-01

    Heavy ion testing of the Xilinx Virtex II was conducted on the configuration, block RAM and user flip-flop cells to determine their single event upset susceptibility, using LETs of 1.2 to 60 MeV·cm²/mg. A software program specifically designed to count errors in the FPGA was used to reveal L1/e values and single-event functional interrupt failures.

  19. FPGA based control system for space instrumentation

    NASA Astrophysics Data System (ADS)

    Di Giorgio, Anna M.; Cerulli Irelli, Pasquale; Nuzzolo, Francesco; Orfei, Renato; Spinoglio, Luigi; Liu, Giovanni S.; Saraceno, Paolo

    2008-07-01

    The prototype of a general purpose FPGA-based control system for space instrumentation is presented, with particular attention to the instrument control application software. The system HW is based on the LEON3FT processor, which gives the flexibility to configure the chip with only the necessary HW functionalities, from simple logic up to small dedicated processors. The instrument control SW is developed in ANSI C and, for time-critical (<10 μs) commanding sequences, implements an internal instruction sequencer triggered via an interrupt service routine based on a high-priority HW interrupt.
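
    The abstract only states that time-critical command sequences are driven by an internal sequencer triggered from a high-priority hardware interrupt; the ANSI C sketch below is one plausible shape for such a table-driven sequencer, with the register addresses and step table invented purely for illustration.

        #include <stdint.h>

        /* One step of a commanding sequence: write 'value' to a memory-mapped
         * instrument register. Table contents are hypothetical. */
        typedef struct {
            volatile uint32_t *reg;
            uint32_t           value;
        } seq_step_t;

        static const seq_step_t *seq_table;
        static volatile unsigned seq_index, seq_len;

        void sequencer_start(const seq_step_t *table, unsigned len)
        {
            seq_table = table;
            seq_len   = len;
            seq_index = 0;
        }

        /* Invoked from the high-priority interrupt service routine, so each step
         * executes with deterministic latency relative to the HW trigger. */
        void sequencer_isr(void)
        {
            if (seq_index < seq_len) {
                *seq_table[seq_index].reg = seq_table[seq_index].value;
                seq_index++;
            }
        }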

  20. Formulation of a parametric systems design framework for disaster response planning

    NASA Astrophysics Data System (ADS)

    Mma, Stephanie Weiya

    The occurrence of devastating natural disasters in the past several years has prompted communities, responding organizations, and governments to seek ways to improve disaster preparedness capabilities locally, regionally, nationally, and internationally. A holistic approach to design used in the aerospace and industrial engineering fields enables efficient allocation of resources through applied parametric changes within a particular design to improve performance metrics to selected standards. In this research, this methodology is applied to disaster preparedness, using a community's time to restoration after a disaster as the response metric. A review of the responses to Hurricane Katrina and the 2010 Haiti earthquake, among other prominent disasters, provides observations leading to some current capability benchmarking. A need for holistic assessment and planning exists for communities, but the current response planning infrastructure lacks a standardized framework and standardized assessment metrics. Within the humanitarian logistics community, several different metrics exist, enabling quantification and measurement of a particular area's vulnerability. These metrics, combined with design and planning methodologies from related fields, such as engineering product design, military response planning, and business process redesign, provide insight and a framework from which to begin developing a methodology for holistic disaster response planning. The developed methodology was applied to the communities of Shelby County, TN and pre-Hurricane-Katrina Orleans Parish, LA. Available literature and reliable media sources provide information about the different values of system parameters within the decomposition of the community aspects and also about relationships among the parameters. The community was modeled as a system dynamics model and was tested in the implementation of two, five, and ten year improvement plans for Preparedness, Response, and Development

  1. Calculation angle and amplitude spectrum of interferogram with FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Ding, Lei

    2013-08-01

    Historically, computationally intensive data processing for space-borne instruments has relied heavily on ground-based processing systems. Recent advances in FPGAs, such as the Xilinx Virtex-4 and Virtex-5 series devices that include PowerPC processors and DSP blocks, provide a flexible hardware/software co-design architecture able to meet computationally intensive data processing needs, so more processing can be shifted on board. For high-data-rate active and passive instruments, such as interferometers, implementing on-board processing algorithms that perform lossless data reduction can dramatically reduce data rates and thereby relax the downlink bandwidth requirements. The interferograms undergo an inverse Fourier transform on board in order to decrease the transmission rate. In [Revercomb et al.] it is shown that using only the modulus of the complex spectrum leads to large calibration errors, so both the amplitude and the angle of the complex spectrum are needed for radiometric calibration; obtaining them on board, however, is a significant challenge. In this paper we introduce the CORDIC algorithm to solve this problem. CORDIC is an iterative convergence algorithm that performs a rotation using a series of specific incremental rotation angles, selected so that each iteration reduces to shift and add operations, which suits FPGA implementation and can be parallelized on a chip to meet different latency and throughput requirements. Implementation results with a Xilinx FPGA are summarized.
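
    To make the CORDIC idea concrete, the floating-point C sketch below runs the vectoring-mode iteration that recovers both the magnitude and the angle of one complex spectrum sample; in hardware each multiplication by 2^-i becomes an arithmetic shift and the arctangent values come from a small lookup table. The iteration count and the assumption that the real part is non-negative (other quadrants need a pre-rotation) are illustrative choices, not details from the paper.

        #include <math.h>

        #define CORDIC_ITER 16   /* assumed number of iterations */

        /* Vectoring-mode CORDIC: drives y toward zero while accumulating the
         * applied rotation, so on exit x ~ gain*sqrt(x0^2+y0^2) and z ~ atan2(y0,x0).
         * Valid for x0 >= 0; other inputs must be pre-rotated first. */
        void cordic_mag_angle(double x, double y, double *mag, double *angle)
        {
            double z = 0.0, gain = 1.0;
            for (int i = 0; i < CORDIC_ITER; i++) {
                double p = ldexp(1.0, -i);           /* 2^-i: a shift in hardware */
                double d = (y < 0.0) ? 1.0 : -1.0;   /* rotate to reduce |y|      */
                double xn = x - d * y * p;
                double yn = y + d * x * p;
                z    -= d * atan(p);                 /* angle from a small LUT    */
                gain *= sqrt(1.0 + p * p);           /* accumulated CORDIC gain   */
                x = xn;
                y = yn;
            }
            *mag   = x / gain;
            *angle = z;
        }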

  2. FPGA Based High Performance Computing

    SciTech Connect

    Bennett, Dave; Mason, Jeff; Sundararajan, Prasanna; Dellinger, Erik; Putnam, Andrew; Storaasli, Olaf O

    2008-01-01

    Current high performance computing (HPC) applications are found in many consumer, industrial and research fields. From web searches to auto crash simulations to weather predictions, these applications require large amounts of power for the compute farms and supercomputers needed to run them. The demand for more and faster computation continues to increase, along with an even sharper increase in the cost of the power required to operate and cool these installations. The ability of standard processor-based systems to address these needs, in both computation speed and power consumption, has declined over the past few years. This paper presents a new method of computation based upon programmable logic, as represented by Field Programmable Gate Arrays (FPGAs), that addresses these needs in a manner requiring only minimal changes to the current software design environment.

  3. Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz

    2015-05-01

    In this article, the external digital interface designed for a thermographic camera built at the Military University of Technology is described. The aim of the article is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface transferring infrared or video digital data and describes the solution we elaborated, based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated image transmission link is built using an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol. The transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024@60Hz 24-bit video data over one signal pair, and was tested by transmitting the thermal-vision camera picture to a remote monitor. Constructing a dedicated video link reduces power consumption compared to solutions with ASIC-based encoders and decoders implementing video links such as DVI or packet-based DisplayPort, while reducing the wiring needed to establish the link to a single pair. The article describes the functions of the modules integrated in the FPGA design: synchronization to the video source, video stream packetization, interfacing with the transceiver module, and dynamic clock generation for video standard conversion.

  4. A Systems Engineering Framework for Design, Construction and Operation of the Next Generation Nuclear Plant

    SciTech Connect

    Edward J. Gorski; Charles V. Park; Finis H. Southworth

    2004-06-01

    Not since the International Space Station has a project of such wide participation been proposed for the United States. Ten countries, the European Union, universities, Department of Energy (DOE) laboratories, and industry will participate in the research and development, design, construction and/or operation of the fourth generation of nuclear power plants with a demonstration reactor to be built at a DOE site and operational by the middle of the next decade. This reactor will be like no other. The Next Generation Nuclear Plant (NGNP) will be passively safe, economical, highly efficient, modular, proliferation resistant, and sustainable. In addition to electrical generation, the NGNP will demonstrate efficient and cost effective generation of hydrogen to support the President’s Hydrogen Initiative. To effectively manage this multi-organizational and technologically complex project, systems engineering techniques and processes will be used extensively to ensure delivery of the final product. The technological and organizational challenges are complex. Research and development activities are required, material standards require development, hydrogen production, storage and infrastructure requirements are not well developed, and the Nuclear Regulatory Commission may further define risk-informed/performance-based approach to licensing. Detailed design and development will be challenged by the vast cultural and institutional differences across the participants. Systems engineering processes must bring the technological and organizational complexity together to ensure successful product delivery. This paper will define the framework for application of systems engineering to this $1.5B - $1.9B project.

  5. The effect of zirconia framework design on the failure of all-ceramic crown under static loading

    PubMed Central

    Taenguthai, Pakamard

    2015-01-01

    PURPOSE This in vitro study aimed to compare the failure load and failure characteristics of two different zirconia framework designs for premolar crowns subjected to static loading. MATERIALS AND METHODS Two types of zirconia frameworks, a conventional 0.5 mm even-thickness framework design (EV) and a 0.8 mm cutback of full-contour crown anatomy design (CB), were made, with 10 samples each. The veneering porcelain was added under a vacuum-formed polycarbonate shell crown of the full-contour crown so that the experimental crowns had the same total thickness. The crowns were cemented onto cobalt-chromium dies. The dies were tilted 45 degrees from the vertical plane so that loading produced a shear force on the cusp. All crowns were loaded at the lingual incline of the buccal cusp until fracture using a universal testing machine with a cross-head speed of 0.5 mm/min. The load-to-fracture values (N) were recorded and statistically analyzed by independent-sample t-test. RESULTS The mean and standard deviation of the failure load were 1,170.1 ± 90.9 N for the EV design and 1,450.4 ± 175.7 N for the CB design. A significant difference in the compressive failure load was found (P<.05). Regarding failure characteristics, the EV design showed only cohesive failures within the veneering porcelain, while the CB design showed more failures through the zirconia framework (8 of 10 samples). CONCLUSION There was a significant difference in the failure load between the two designs, and the framework design influences the failure characteristics of zirconia crowns. PMID:25932313

  6. CORDIC algorithms for SVM FPGA implementation

    NASA Astrophysics Data System (ADS)

    Gimeno Sarciada, Jesús; Lamel Rivera, Horacio; Jiménez, Matías

    2010-04-01

    Support Vector Machines are currently among the best classification algorithms used in a wide range of applications. The ability to extract a classification function from a limited number of learning examples while keeping the structural risk low has proved to be a clear alternative to other neural networks. However, the calculations involved in computing the kernel, and their repetition for all support vectors in the classification problem, are computationally intensive, requiring time or power to function correctly. This can be a drawback in applications with limited resources or time, so simple algorithms circumventing this problem are needed. In this paper we analyze an FPGA implementation of an SVM which uses a CORDIC algorithm to simplify the calculation of a specific kernel, greatly reducing the time and hardware requirements needed for the classification and allowing for powerful in-field portable applications. The algorithm and its calculation capabilities are shown. The full SVM classifier using this algorithm is implemented in an FPGA and its in-field use assessed for high-speed, low-power classification.

  7. Design of a leaching test framework for coal fly ash accounting for environmental conditions.

    PubMed

    Zandi, Mohammad; Russell, Nigel V

    2007-08-01

    Fly ash from coal combustion contains trace elements which, on disposal or utilisation, may leach out and therefore be a potential environmental hazard. Environmental conditions have a great impact on the mobility of fly ash constituents as well as on the physical and chemical properties of the fly ash. Existing standard leaching methods have been shown to be inadequate because they do not represent possible disposal or utilisation scenarios. These tests are often criticised on the grounds that the results they give are not reliable, as they cannot be extrapolated to the application scenario. In order to simulate the leaching behaviour of fly ash under different environmental conditions and to reduce deviation between measurements in the field and in the laboratory, it is vital to study the sensitivity of the fly ash constituents of interest to the major factors controlling leachability. pH, liquid-to-solid ratio, leaching time, leachant type and redox potential are parameters affecting the stability of elements in the fly ash. The sensitivity of trace elements to pH and liquid-to-solid ratio (the two major overriding factors) has been examined. Elements have been classified on the basis of their leaching behaviour under different conditions. Results from this study have been used to identify leaching mechanisms. The fly ash has also been examined under different standard batch leaching tests in order to evaluate and compare these tests. A Leaching Test Framework has been devised for assessing the stability of trace elements from fly ashes in different environments. This Framework assists in designing more realistic batch leaching tests appropriate to field conditions and can support the development of regulations and protocols for the management and disposal of coal combustion by-products or other solid wastes of environmental concern. PMID:17171257

  8. A FPGA-based automatic bridge over water recognition in high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Beulig, Sebastian; von Schönermark, Maria; Huber, Felix

    2012-11-01

    In this paper a novel algorithm for recognizing bridges over water is presented. The algorithm is designed to run on a small reconfigurable microchip, a so-called Field Programmable Gate Array (FPGA). Hence, the algorithm is computationally lightweight and high processing speeds can be reached. Furthermore, no a priori knowledge about a bridge is necessary; even bridges with an irregular shape, e.g. with balconies, can be detected. As a result, the center point of the bridge is marked. Due to the low power consumption of the FPGA and the autonomous operation of the algorithm, it is suitable for image analysis directly on board satellites. Metadata such as the coordinates of recognized bridges are immediately available. This could be useful, e.g. in case of a natural hazard, when quick information about the infrastructure is needed by disaster management. The algorithm as well as experimental results on real satellite images are presented and discussed.

  9. An FPGA-based Doppler Processor for a Spaceborne Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Durden, S. L.; Fischman, M. A.; Johnson, R. A.; Chu, A. J.; Jourdan, M. N.; Tanelli, S.

    2007-01-01

    Measurement of precipitation Doppler velocity by spaceborne radar is complicated by the large velocity of the satellite platform. Even if successive pulses are well correlated, the velocity measurement may be biased if the precipitation target does not uniformly fill the radar footprint. It has been previously shown that the bias in such situations can be reduced if full spectral processing is used. The authors present a processor based on field-programmable gate array (FPGA) technology that can be used for spectral processing of data acquired by future spaceborne precipitation radars. The requirements for and design of the Doppler processor are addressed. Simulation and laboratory test results show that the processor can meet real-time constraints while easily fitting in a single FPGA.

  10. Comparing an FPGA to a Cell for an Image Processing Application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.

    2010-12-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to exploit the parallel nature of modern image processing algorithms. On the other hand, PlayStation 3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms with high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and on a Cell processor. We demonstrate a 2.5-times speedup of the parallelized algorithm on the FPGA system compared to the Cell processor-based version.
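
    The abstract does not spell out the matching metric, but iris matching is commonly formulated as the masked Hamming distance between binary iris codes, which is exactly the kind of wide XOR-and-count operation that parallelizes well on an FPGA. The C sketch below shows that assumed formulation; the code length is hypothetical.

        #include <stdint.h>

        #define CODE_WORDS 32   /* hypothetical 2048-bit iris code */

        /* Count set bits by clearing the lowest set bit each step. */
        static int popcount64(uint64_t x)
        {
            int n = 0;
            while (x) { x &= x - 1; n++; }
            return n;
        }

        /* Fractional Hamming distance between two iris codes, counting only bit
         * positions marked valid in both occlusion masks. */
        double iris_hamming(const uint64_t *a, const uint64_t *b,
                            const uint64_t *mask_a, const uint64_t *mask_b)
        {
            int diff = 0, valid = 0;
            for (int i = 0; i < CODE_WORDS; i++) {
                uint64_t m = mask_a[i] & mask_b[i];
                diff  += popcount64((a[i] ^ b[i]) & m);
                valid += popcount64(m);
            }
            return valid ? (double)diff / valid : 1.0;
        }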

  11. A Real-Time de novo DNA Sequencing Assembly Platform Based on an FPGA Implementation.

    PubMed

    Hu, Yuanqi; Georgiou, Pantelis

    2016-01-01

    This paper presents an FPGA based DNA comparison platform which can be run concurrently with the sensing phase of DNA sequencing and shortens the overall time needed for de novo DNA assembly. A hybrid overlap searching algorithm is applied which is scalable and can deal with incremental detection of new bases. To handle the incomplete data set which gradually increases during sequencing time, all-against-all comparisons are broken down into successive window-against-window comparison phases and executed using a novel dynamic suffix comparison algorithm combined with a partitioned dynamic programming method. The complete system has been designed to facilitate parallel processing in hardware, which allows real-time comparison and full scalability as well as a decrease in the number of computations required. A base pair comparison rate of 51.2 G/s is achieved when implemented on an FPGA with successful DNA comparison when using data sets from real genomes. PMID:27045828
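
    The basic operation behind the all-against-all comparisons described above is suffix/prefix overlap detection between reads. The naive C sketch below only illustrates the definition; the paper's design replaces it with windowed comparisons, a dynamic suffix comparison algorithm and partitioned dynamic programming so that data arriving incrementally during sequencing can be handled.

        #include <string.h>

        /* Length of the longest suffix of read 'a' that exactly matches a prefix
         * of read 'b' (naive O(n^2) reference version, for illustration only). */
        int longest_overlap(const char *a, const char *b)
        {
            int la = (int)strlen(a);
            int lb = (int)strlen(b);
            int max = la < lb ? la : lb;
            for (int len = max; len > 0; len--) {
                if (memcmp(a + la - len, b, (size_t)len) == 0)
                    return len;
            }
            return 0;
        }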

  12. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of infrared focal plane array (IRFPA) detectors, high-quality real-time image processing becomes more important in infrared imaging systems. Facing the demand for better visual quality and good performance, we find the FPGA to be an ideal hardware choice for realizing image processing algorithms, taking full advantage of its high speed, high reliability, and ability to process large amounts of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It works at a higher speed than serial processing devices such as CPUs and DSPs. Experiments show that this hardware unit for dynamic linear extension effectively enhances the visual quality of infrared images.
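
    The abstract describes a dynamic linear extension that finds its own extension range; one common way to do that (assumed here, not taken from the paper) is to build a histogram of the frame and clip a small fraction of pixels at each tail before applying a linear stretch, as in the C sketch below. The input bit depth and clip fraction are illustrative parameters.

        #include <stdint.h>

        #define IN_LEVELS 16384          /* assumed 14-bit infrared input */

        /* Linear stretch of a frame to 8 bits, with the extension range found
         * automatically by clipping 'clip_frac' of the pixels at each tail. */
        void dynamic_linear_extension(const uint16_t *in, uint8_t *out,
                                      int npix, double clip_frac)
        {
            static uint32_t hist[IN_LEVELS];
            for (int i = 0; i < IN_LEVELS; i++) hist[i] = 0;
            for (int i = 0; i < npix; i++)      hist[in[i]]++;

            uint32_t clip = (uint32_t)(clip_frac * npix);
            uint32_t acc  = 0;
            int lo = 0, hi = IN_LEVELS - 1;
            while (lo < IN_LEVELS - 1 && acc + hist[lo] < clip) { acc += hist[lo]; lo++; }
            acc = 0;
            while (hi > 0 && acc + hist[hi] < clip)             { acc += hist[hi]; hi--; }
            if (hi <= lo) hi = lo + 1;           /* guard against a flat frame */

            for (int i = 0; i < npix; i++) {
                int32_t v = ((int32_t)in[i] - lo) * 255 / (hi - lo);
                out[i] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
            }
        }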

  13. A new cellular nonlinear network emulation on FPGA for EEG signal processing in epilepsy

    NASA Astrophysics Data System (ADS)

    Müller, Jens; Müller, Jan; Tetzlaff, Ronald

    2011-05-01

    For processing of EEG signals, we propose a new architecture for the hardware emulation of discrete-time Cellular Nonlinear Networks (DT-CNNs). Our results show the importance of a high computational accuracy in EEG signal prediction that cannot be achieved with existing analogue VLSI circuits. The refined architecture of the processing elements and its resource schedule, the cellular network structure with local couplings, the FPGA-based embedded system containing the DT-CNN, and the data flow in the entire system are discussed in detail. The proposed DT-CNN design has been implemented and tested on a Xilinx FPGA development platform. The embedded co-processor with a multi-threading kernel is utilised for control and pre-processing tasks and for data exchange with the host via Ethernet. The performance of the implemented DT-CNN has been determined for a popular example and compared to that of a conventional computer.
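
    For readers unfamiliar with DT-CNNs, the C sketch below shows one update of a single cell with an assumed 3x3 neighbourhood: the new state is a bias plus template-weighted sums of neighbouring outputs (template A) and inputs (template B), passed through the usual piecewise-linear saturation. The template size and nonlinearity are textbook assumptions, not the specific templates used for EEG prediction in the paper.

        /* Piecewise-linear output nonlinearity used in standard CNN models. */
        static float cnn_sat(float x)
        {
            return x > 1.0f ? 1.0f : (x < -1.0f ? -1.0f : x);
        }

        /* One discrete-time update of the centre cell of a 3x3 neighbourhood:
         * x(k+1) = z + sum(A .* y(k)) + sum(B .* u); the new output is f(x(k+1)). */
        float dtcnn_cell_update(float A[3][3], float B[3][3], float z,
                                float y[3][3], float u[3][3])
        {
            float acc = z;
            for (int r = 0; r < 3; r++)
                for (int c = 0; c < 3; c++)
                    acc += A[r][c] * y[r][c] + B[r][c] * u[r][c];
            return cnn_sat(acc);
        }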

  14. A frame-based domain-specific language for rapid prototyping of FPGA-based software-defined radios

    NASA Astrophysics Data System (ADS)

    Ouedraogo, Ganda Stephane; Gautier, Matthieu; Sentieys, Olivier

    2014-12-01

    The field-programmable gate array (FPGA) technology is expected to play a key role in the development of software-defined radio (SDR) platforms. As this technology has evolved, the low-level design methods used for prototyping FPGA-based applications have not changed for decades. In the demanding context of SDR, it is important to rapidly implement new waveforms to fulfill such a stringent flexibility paradigm. To date, different proposals have defined, through software-based approaches, efficient methods to prototype SDR waveforms in a processor-based running environment. This paper describes a novel design flow for FPGA-based SDR applications. This flow relies upon high-level synthesis (HLS) principles and leverages the nascent HLS tools. Its entry point is a domain-specific language (DSL) which handles the complexity of programming an FPGA and integrates SDR features so as to enable automatic waveform control generation from a data frame model. Two waveforms (IEEE 802.15.4 and IEEE 802.11a) have been designed and explored with this new methodology, and the results are highlighted in this paper.

  15. A new FPGA with 4/5-input LUT and optimized carry chain

    NASA Astrophysics Data System (ADS)

    Zhidong, Mao; Liguang, Chen; Yuan, Wang; Jinmei, Lai

    2012-07-01

    A new LUT and carry structure embedded in the configurable logic block of an FPGA is proposed. The LUT is designed to support both 4-input and 5-input structures, which can be configured by users according to their needs without increasing interconnect resources. We also develop a new carry chain structure with an optimized critical path. Finally, a newly designed configurable scan-chain is inserted. The circuit is fabricated in a 0.13 μm 1P8M 1.2/2.5/3.3 V logic CMOS process. The test results show correct function of the 4/5-input LUT and scan-chain, and a speedup in carry performance of nearly 3 times over the current architecture in the same technology, at the cost of an increase in total area of about 72.5%. Our results also show that the logic utilization of this work is better than that of Virtex II/Virtex 4/Virtex 5/Virtex 6/Virtex 7 FPGAs when implemented using only 4-LUTs, and better than that of Virtex II/Virtex 4 FPGAs when implemented using only 5-LUTs.

  16. Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems

    NASA Technical Reports Server (NTRS)

    Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.

    2012-01-01

    Due to the large number of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives to offload the main processor and improve overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations faster, leaving the main processor free for other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the multi-spectral Euclidean distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same FPGA model used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that negatively affects overall performance. To avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections, and data burst transfers have been used.
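
    As a point of reference for the operation being optimized, the C sketch below computes the (squared) multi-spectral Euclidean distance between a pixel spectrum and a reference spectrum as an integer multiply-accumulate loop, which is the structure such architectures map onto DSP resources; the band count and data width are assumptions.

        #include <stdint.h>

        #define NBANDS 224   /* assumed number of spectral bands */

        /* Squared Euclidean distance between one pixel spectrum and a reference
         * spectrum; the square root is usually unnecessary for thresholding or
         * ranking and is omitted. */
        uint64_t msed_squared(const uint16_t *pixel, const uint16_t *ref)
        {
            uint64_t acc = 0;
            for (int b = 0; b < NBANDS; b++) {
                int32_t d = (int32_t)pixel[b] - (int32_t)ref[b];
                acc += (uint64_t)((int64_t)d * (int64_t)d);
            }
            return acc;
        }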

  17. An FPGA-based method for a reconfigurable and compact scanner controller

    NASA Astrophysics Data System (ADS)

    Thomas, J.; Megherbi, D.; Sliney, P.; Pyburn, D.; Sengupta, S.; Khoury, J.; Woods, C.; Kirstead, J.

    2005-08-01

    An essential part of a LADAR system is the scanner component. The physical scanner and its electrical controller must often be as compact as possible to meet the stringent physical requirements of the system. It is also advantageous to have a reconfigurable electrical scanner controller, which allows real-time, automated, dynamic modification of the scanning characteristics. Via reconfiguration, it can also allow a single scanner controller to be used with multiple physical scanners having different resonant frequencies and reflection angles. The most efficient way to construct a compact scanner controller with static or dynamic reconfigurability is an FPGA-based system: FPGAs are extremely compact, reconfigurable, and can be programmed with very complex algorithms. We present here the design and testing of such an FPGA-based system and show that it is able to drive scanners at arbitrary frequencies with different waveforms and produce appropriate horizontal and vertical syncs of arbitrary pulse width. Several programmable constants are provided to allow reconfigurability. Additionally, we show that very few essential components are required, so the system could potentially be compacted to approximately the size of a cell phone.

  18. A low-cost, FPGA-based servo controller with lock-in amplifier

    NASA Astrophysics Data System (ADS)

    Yang, G.; Barry, J. F.; Shuman, E. S.; Steinecker, M. H.; DeMille, D.

    2012-10-01

    We describe the design and implementation of a low-cost, FPGA-based servo controller with an integrated waveform synthesizer and lock-in amplifier. This system has been designed with the specific application of laser frequency locking in mind but should be adaptable to a variety of other purposes as well. The system incorporates an onboard waveform synthesizer, a lock-in amplifier, two channels of proportional-integral (PI) servo control, and a ramp generator on a single FPGA chip. The system is based on an inexpensive, off-the-shelf FPGA evaluation board with a wide variety of available accessories, allowing the system to interface with standard laser controllers and detectors while minimizing the use of custom hardware and electronics. Gains, filter constants, and other relevant parameters are adjustable via onboard knobs and switches. These parameters and other information are displayed to the user via an integrated LCD, allowing full operation of the device without an accompanying computer. We demonstrate the performance of the system in a test setup, in which the frequency of a tunable external-cavity diode laser (ECDL) is locked to a resonant optical transmission peak of a Fabry-Perot cavity. In this setup, we achieve a total servo-loop bandwidth of ~ 7 kHz and achieve locking of the ECDL to the cavity with a full-width-at-half-maximum (FWHM) linewidth of ~ 200 kHz.
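
    The abstract covers the overall system rather than the control law; as a hedged sketch of the per-sample arithmetic such a servo channel performs, the C fragment below implements a fixed-point proportional-integral update with an anti-windup clamp. The gain format (Q15) and limits are illustrative choices, not details from the paper.

        #include <stdint.h>

        typedef struct {
            int32_t kp_q15;     /* proportional gain, Q15 fixed point (assumed) */
            int32_t ki_q15;     /* integral gain per sample, Q15 fixed point    */
            int64_t integ;      /* integrator state, scaled by 2^15             */
            int32_t out_min;    /* output limits, e.g. a DAC code range         */
            int32_t out_max;
        } pi_servo_t;

        int32_t pi_servo_update(pi_servo_t *s, int32_t error)
        {
            /* Integrate, then clamp the integrator to the output range
             * (anti-windup); 32768 = 2^15 keeps the Q15 scaling explicit. */
            s->integ += (int64_t)s->ki_q15 * error;
            int64_t imax = (int64_t)s->out_max * 32768;
            int64_t imin = (int64_t)s->out_min * 32768;
            if (s->integ > imax) s->integ = imax;
            if (s->integ < imin) s->integ = imin;

            /* Proportional term plus integrator, rescaled (arithmetic shift
             * in hardware) and clamped to the output range. */
            int64_t out = ((int64_t)s->kp_q15 * error + s->integ) >> 15;
            if (out > s->out_max) out = s->out_max;
            if (out < s->out_min) out = s->out_min;
            return (int32_t)out;
        }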

  19. A systematic framework for computer-aided design of engineering rubber formulations

    NASA Astrophysics Data System (ADS)

    Ghosh, Prasenjeet

    This thesis considers the design of engineering rubber formulations, whose unique properties of elasticity and resilience enable diverse applications. Engineering rubber formulations are a complex mixture of different materials called curatives that includes elastomers, fillers, crosslinking agents, accelerators, activators, retarders, anti-oxidants and processing aids, where the amount of curatives must be adjusted for each application. The characterization of the final properties of the rubber in application is complex and depends on the chemical interplay between the different curatives in formulation via vulcanization chemistry. The details of the processing conditions and the thermal, deformational, and chemical environment encountered in application also have a pronounced effect on the performance of the rubber. Consequently, for much of the history of rubber as an engineering material, its recipe formulations have been developed largely by trial-and-error, rather than by a fundamental understanding. A computer-aided, systematic and automated framework for the design of such materials is proposed in this thesis. The framework requires the solution to two sub-problems: (a) the forward problem, which involves prediction of the desired properties when the formulation is known and (b) the inverse problem that requires identification of the appropriate formulation, given the desired target properties. As part of the forward model, the chemistry of accelerated sulfur vulcanization is reviewed that permits integration of the knowledge of the past five decades in the literature to answer some old questions, reconcile some of the contradicting mechanisms and present a holistic description of the governing chemistry. Based on this mechanistic chemistry, a fundamental kinetic model is derived using population balance equations. The model quantitatively describes, for the first time, the different aspects of vulcanization chemistry. Subsequently, a novel three

  20. The GBT-FPGA core: features and challenges

    NASA Astrophysics Data System (ADS)

    Barros Marin, M.; Baron, S.; Feger, S. S.; Leitao, P.; Lupu, E. S.; Soos, C.; Vichoudis, P.; Wyllie, K.

    2015-03-01

    Initiated in 2009 to emulate the GBTX (Gigabit Transceiver) serial link and test the first GBTX prototypes, the GBT-FPGA project is now a full library targeting FPGAs (Field Programmable Gate Arrays) from Altera and Xilinx, allowing the implementation of one or several GBT links of two different types: "Standard" or "Latency-Optimized". The first major version of this IP core was released in April 2014. This paper presents the various flavours of the GBT-FPGA kit and focuses on the challenge of providing fixed and deterministic latency, for both clock and data recovery, across all FPGA families.

  1. Engineering Overview of a Multidisciplinary HSCT Design Framework Using Medium-Fidelity Analysis Codes

    NASA Technical Reports Server (NTRS)

    Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.

    1999-01-01

    An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center of gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. Currently, two mass cases are considered: cruise and gross take-off weight (GTOW

  2. Designed synthesis of double-stage two-dimensional covalent organic frameworks

    PubMed Central

    Chen, Xiong; Addicoat, Matthew; Jin, Enquan; Xu, Hong; Hayashi, Taku; Xu, Fei; Huang, Ning; Irle, Stephan; Jiang, Donglin

    2015-01-01

    Covalent organic frameworks (COFs) are an emerging class of crystalline porous polymers in which organic building blocks are covalently and topologically linked to form extended crystalline polygon structures, constituting a new platform for designing π-electronic porous materials. However, COFs are currently synthesised by a few chemical reactions, limiting the access to and exploration of new structures and properties. The development of new reaction systems that avoid such limitations to expand structural diversity is highly desired. Here we report that COFs can be synthesised via a double-stage connection that polymerises various different building blocks into crystalline polygon architectures, leading to the development of a new type of COFs with enhanced structural complexity and diversity. We show that the double-stage approach not only controls the sequence of building blocks but also allows fine engineering of pore size and shape. This strategy is widely applicable to different polymerisation systems to yield hexagonal, tetragonal and rhombus COFs with predesigned pores and π-arrays. PMID:26456081

  3. Framework design for remote sensing monitoring and data service system of regional river basins

    NASA Astrophysics Data System (ADS)

    Fu, Jun'e.; Lu, Jingxuan; Pang, Zhiguo

    2015-08-01

    Regional river basins, transboundary rivers in particular, are water resources shared among multiple users. The spatio-temporal distribution and utilization potential of water resources in these basins have a great influence on the economic layout and the social development of all interested parties in the basins. However, because these regions cross borders and involve multiple users, basic data are relatively scarce and inconsistent, which makes basin water resources management difficult. To meet the basic data requirements of regional river management, the overall technical framework for a remote sensing monitoring and data service system for China's regional river basins is designed in this paper, with a remote-sensing-driven distributed basin hydrologic model developed and integrated within the framework. This prototype system is able to extract most of the land surface data required by the model from multi-source and multi-temporal remote sensing images, to run a distributed basin hydrological simulation model, to carry out various scenario analyses, and to provide data services to decision makers.

  4. Designed synthesis of double-stage two-dimensional covalent organic frameworks

    NASA Astrophysics Data System (ADS)

    Chen, Xiong; Addicoat, Matthew; Jin, Enquan; Xu, Hong; Hayashi, Taku; Xu, Fei; Huang, Ning; Irle, Stephan; Jiang, Donglin

    2015-10-01

    Covalent organic frameworks (COFs) are an emerging class of crystalline porous polymers in which organic building blocks are covalently and topologically linked to form extended crystalline polygon structures, constituting a new platform for designing π-electronic porous materials. However, COFs are currently synthesised by a few chemical reactions, limiting the access to and exploration of new structures and properties. The development of new reaction systems that avoid such limitations to expand structural diversity is highly desired. Here we report that COFs can be synthesised via a double-stage connection that polymerises various different building blocks into crystalline polygon architectures, leading to the development of a new type of COFs with enhanced structural complexity and diversity. We show that the double-stage approach not only controls the sequence of building blocks but also allows fine engineering of pore size and shape. This strategy is widely applicable to different polymerisation systems to yield hexagonal, tetragonal and rhombus COFs with predesigned pores and π-arrays.

  5. Designed synthesis of double-stage two-dimensional covalent organic frameworks.

    PubMed

    Chen, Xiong; Addicoat, Matthew; Jin, Enquan; Xu, Hong; Hayashi, Taku; Xu, Fei; Huang, Ning; Irle, Stephan; Jiang, Donglin

    2015-01-01

    Covalent organic frameworks (COFs) are an emerging class of crystalline porous polymers in which organic building blocks are covalently and topologically linked to form extended crystalline polygon structures, constituting a new platform for designing π-electronic porous materials. However, COFs are currently synthesised by a few chemical reactions, limiting the access to and exploration of new structures and properties. The development of new reaction systems that avoid such limitations to expand structural diversity is highly desired. Here we report that COFs can be synthesised via a double-stage connection that polymerises various different building blocks into crystalline polygon architectures, leading to the development of a new type of COFs with enhanced structural complexity and diversity. We show that the double-stage approach not only controls the sequence of building blocks but also allows fine engineering of pore size and shape. This strategy is widely applicable to different polymerisation systems to yield hexagonal, tetragonal and rhombus COFs with predesigned pores and π-arrays. PMID:26456081

  6. Interchangeability among reference insulin analogues and their biosimilars: regulatory framework, study design and clinical implications.

    PubMed

    Dowlat, H A; Kuhlmann, M K; Khatami, H; Ampudia-Blasco, F J

    2016-08-01

    Biosimilars are regulated differently from small-molecule generic, chemically derived medicines. The complexity of biological products means that small changes in manufacturing or formulation may result in changes in efficacy and safety of the final product. In the face of this complexity, the regulatory landscape for biosimilars continues to evolve, and global harmonization regarding requirements is currently lacking. It is essential that clinicians and patients are reassured that biosimilars are equally safe and effective as their reference product, and this is particularly important when interchangeability, defined as 'changing one medicine for another one which is expected to achieve the same clinical effect in a given clinical setting in any one patient', is considered. Although the automatic substitution (i.e. substitution without input from the prescribing healthcare provider) of biosimilars for reference products is currently not permitted by the majority of countries, this may change in the future. In order to demonstrate interchangeability between reference products and a biosimilar, more stringent and specific studies of the safety and efficacy of biosimilars are likely to be needed; however, guidance on the design of and the need for any such studies is currently limited. The present article provides an overview of the current regulatory framework around the demonstration of interchangeability with biosimilars, with a specific focus on biosimilar insulin analogues, and details experiences with other biosimilar products. In addition, designs for studies to evaluate interchangeability with a biosimilar insulin analogue product are proposed and a discussion about the implications of interchangeability in clinical practice is included. PMID:27097592

  7. New Li-doped fullerene-intercalated phthalocyanine covalent organic frameworks designed for hydrogen storage.

    PubMed

    Guo, Jing-Hua; Zhang, Hong; Miyamoto, Yoshiyuki

    2013-06-01

    Applying density functional theory (DFT) calculations, we have designed fullerene (C20, C24, C26, C28, C30, C36, C60 and C70) intercalated phthalocyanine covalent organic frameworks (Cn-Pc-PBBA COFs). First-principles molecular dynamics (MD) simulations showed that the structures of Cn-Pc-PBBA COFs are stable at room temperature and even at higher temperature (500 K). The interlayer distance of the Pc-PBBA COF has been expanded to 7.48-13.25 Å by the intercalated fullerenes, and the pore volume and surface area were enlarged by 2.3-3.1 and 2.0-2.6 times, respectively. Grand canonical Monte Carlo (GCMC) simulations show that our designed Cn-Pc-PBBA COFs exhibit a superior hydrogen storage capability: at 77 K and P = 100 bar, the hydrogen gravimetric and volumetric uptakes reach 9.4-12 wt% and 48.1-52.2 g L(-1), respectively. To meet the requirement for practical application in hydrogen storage, we use the Li-doping method to improve the hydrogen storage performance of Cn-Pc-PBBA COFs. Our results show that the Li atoms can locate stably on the surface of the C30-, C36-, C60- and C70-Pc-PBBA COFs. At T = 298 K and P = 100 bar, for these four Li-doped Cn-Pc-PBBA COFs, the gravimetric and volumetric uptakes of H2 reach 4.2 wt% and 18.2 g L(-1), respectively. PMID:23609981

  8. Designing Energy Supply Chains with the P-Graph Framework under Cost Constraints and Sustainability Considerations

    EPA Science Inventory

    A computer-aided methodology for designing sustainable supply chains is presented using the P-graph framework to develop supply chain structures which are analyzed using cost, the cost of producing electricity, and two sustainability metrics: ecological footprint and emergy. They...

  9. Design-Grounded Assessment: A Framework and a Case Study of Web 2.0 Practices in Higher Education

    ERIC Educational Resources Information Center

    Ching, Yu-Hui; Hsu, Yu-Chang

    2011-01-01

    This paper synthesizes three theoretical perspectives, including sociocultural theory, distributed cognition, and situated cognition, into a framework to guide the design and assessment of Web 2.0 practices in higher education. In addition, this paper presents a case study of Web 2.0 practices. Thirty-seven online graduate students participated in…

  10. Exploring a Framework for Professional Development in Curriculum Innovation: Empowering Teachers for Designing Context-Based Chemistry Education

    ERIC Educational Resources Information Center

    Stolk, Machiel J.; De Jong, Onno; Bulte, Astrid M. W.; Pilot, Albert

    2011-01-01

    Involving teachers in early stages of context-based curriculum innovations requires a professional development programme that actively engages teachers in the design of new context-based units. This study considers the implementation of a teacher professional development framework aiming to investigate processes of professional development. The…

  11. A Design Framework for Enhancing Engagement in Student-Centered Learning: Own It, Learn It, and Share It

    ERIC Educational Resources Information Center

    Lee, Eunbae; Hannafin, Michael J.

    2016-01-01

    Student-centered learning (SCL) identifies students as the owners of their learning. While SCL is increasingly discussed in K-12 and higher education, researchers and practitioners lack a current and comprehensive framework to design, develop, and implement SCL. We examine the implications of theory and research-based evidence to inform those who…

  12. Stego on FPGA: an IWT approach.

    PubMed

    Ramalingam, Balakrishnan; Amirtharajan, Rengarajan; Rayappan, John Bosco Balaguru

    2014-01-01

    A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar IWT was used to separate the subbands, namely LL, LH, HL, and HH, from 8 × 8 pixel blocks, and the encrypted secret data are hidden in the LH, HL, and HH blocks using Moore and Hilbert space filling curve (SFC) scan patterns. Either the Moore or the Hilbert SFC was chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produced the lower mean square error (MSE) and the higher peak signal-to-noise ratio (PSNR). The scan pattern selected for each block by this random walk is recorded and serves as the secret key. Our system took 1.6 µs to embed the data in the coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA). PMID:24723794
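
    To make the subband step concrete, the C sketch below applies one level of the reversible integer Haar transform (S-transform) to an 8x8 block, first along rows and then along columns, yielding the LL and detail quadrants referred to in the abstract. The per-pair rule low = floor((a+b)/2), high = a - b is exactly invertible in integers; the naming of the two off-diagonal quadrants depends on convention, and this is a generic sketch rather than the paper's RTL.

        #include <stdint.h>
        #include <string.h>

        #define BLK 8

        /* One 1-D Haar (S-transform) pass: the first half of v receives the
         * averages, the second half the differences. */
        static void haar1d(int32_t *v, int len)
        {
            int32_t tmp[BLK];
            for (int i = 0; i < len / 2; i++) {
                int32_t a = v[2 * i], b = v[2 * i + 1];
                tmp[i]           = (a + b) >> 1;   /* floor((a+b)/2), arithmetic shift */
                tmp[len / 2 + i] = a - b;          /* detail coefficient               */
            }
            memcpy(v, tmp, (size_t)len * sizeof(int32_t));
        }

        /* One decomposition level over an 8x8 block: rows, then columns.
         * Afterwards the top-left 4x4 quadrant holds LL and the other three
         * quadrants hold the detail subbands (LH/HL/HH). */
        void haar_iwt_8x8(int32_t blk[BLK][BLK])
        {
            int32_t col[BLK];
            for (int r = 0; r < BLK; r++)
                haar1d(blk[r], BLK);
            for (int c = 0; c < BLK; c++) {
                for (int r = 0; r < BLK; r++) col[r] = blk[r][c];
                haar1d(col, BLK);
                for (int r = 0; r < BLK; r++) blk[r][c] = col[r];
            }
        }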

  13. Martian dust devils detector over FPGA

    NASA Astrophysics Data System (ADS)

    de Lucas, E.; Miguel, M. J.; Mozos, D.; Vázquez, L.

    2012-04-01

    Digital applications carried on board space missions must comply with a very restrictive set of requirements. These include energy efficiency, small volume and weight, robustness and high performance. Moreover, these circuits cannot be repaired in case of error, so they must be reliable or provide some way to recover from errors. These features make reconfigurable hardware (FPGAs, Field Programmable Gate Arrays) a very suitable technology to be used in space missions. This paper presents a Martian dust devil detector implemented on an FPGA. The results show that a hardware implementation of the algorithm performs very well compared with the software version. Moreover, as the amount of time needed to perform all the computations on the reconfigurable hardware is small, this hardware can be used most of the time to realize other applications.

  14. Martian dust devils detector over FPGA

    NASA Astrophysics Data System (ADS)

    de Lucas, E.; Miguel, M. J.; Mozos, D.; Vázquez, L.

    2011-12-01

    Digital applications that must operate on board space missions must comply with a very restrictive set of requirements. These include energy efficiency, small volume and weight, robustness and high performance. Moreover, these circuits cannot be repaired in case of error, so they must be reliable or provide some way to recover from errors. These features make reconfigurable hardware (FPGAs, Field Programmable Gate Arrays) a very suitable technology to be used in space missions. This paper presents a Martian dust devil detector implemented on an FPGA. The results show that a hardware implementation of the algorithm presents very good performance figures compared with the software version. Moreover, as the amount of time needed to perform all the computations on the reconfigurable hardware is small, this hardware can be used most of the time to realize other applications.

  15. Defining, Designing for, and Measuring "Social Constructivist Digital Literacy" Development in Learners: A Proposed Framework

    ERIC Educational Resources Information Center

    Reynolds, Rebecca

    2016-01-01

    This paper offers a newly conceptualized modular framework for digital literacy that defines this concept as a task-driven "social constructivist digital literacy," comprising 6 practice domains grounded in Constructionism and social constructivism: Create, Manage, Publish, Socialize, Research, Surf. The framework articulates possible…

  16. Three Dialogs: A Framework for the Analysis and Assessment of Twenty-First-Century Literacy Practices, and Its Use in the Context of Game Design within "Gamestar Mechanic"

    ERIC Educational Resources Information Center

    Games, Ivan Alex

    2008-01-01

    This article discusses a framework for the analysis and assessment of twenty-first-century language and literacy practices in game and design-based contexts. It presents the framework in the context of game design within "Gamestar Mechanic", an innovative game-based learning environment where children learn the Discourse of game design. It…

  17. A Spartan 6 FPGA-based data acquisition system for dedicated imagers in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Fysikopoulos, E.; Loudos, G.; Georgiou, M.; David, S.; Matsopoulos, G.

    2012-12-01

    We present the development of a four-channel low-cost hardware system for data acquisition, with application in dedicated nuclear medicine imagers. A 12-bit, octal-channel, high-speed analogue-to-digital converter, with a sampling rate of up to 65 Msps, was used for the digitization of the analogue signals. The digitized data are fed into a field programmable gate array (FPGA), which contains an interface to a bank of double data rate 2 (DDR2)-type memory. The FPGA processes the digitized data and stores the results in the DDR2. An Ethernet link was used for data transmission to a personal computer. The embedded system was designed using Xilinx's embedded development kit (EDK) and was based on Xilinx's MicroBlaze soft-core processor. The system has been evaluated using two different discrete optical detector arrays (a position-sensitive photomultiplier tube and a silicon photomultiplier) with two different pixelated scintillator arrays (BGO, LSO:Ce). The energy resolution for both detectors was approximately 25%. A clear identification of all crystal elements was achieved in all cases. The data rate of the system with this implementation can reach 60 Mbit s⁻¹. The results have shown that this FPGA data acquisition system is a compact and flexible solution for single-photon-detection applications. This paper was originally submitted for inclusion in the special feature on Imaging Systems and Techniques 2011.

  18. Novel intelligent real-time position tracking system using FPGA and fuzzy logic.

    PubMed

    Soares dos Santos, Marco P; Ferreira, J A F

    2014-03-01

    The main aim of this paper is to test whether FPGAs can achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented on both architectures. A multi-state fuzzy logic controller (FLC) was implemented both in a Xilinx Virtex-II FPGA (XC2V1000) and on the NI CompactRIO-9002 soft real-time platform. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded in the FPGA platform. Performance gains of up to 16 times in the steady-state error, up to 27 times in the overshoot, and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC. PMID:24112645

  19. FPGA for Power Control of MSL Avionics

    NASA Technical Reports Server (NTRS)

    Wang, Duo; Burke, Gary R.

    2011-01-01

    A PLGT FPGA (Field Programmable Gate Array) is included in the LCC (Load Control Card), GID (Guidance Interface & Drivers), TMC (Telemetry Multiplexer Card), and PFC (Pyro Firing Card) boards of the Mars Science Laboratory (MSL) spacecraft. (PLGT stands for PFC, LCC, GID, and TMC.) It provides the interface between the backside bus and the power drivers on these boards. The LCC drives power switches to switch power loads, and also relays. The GID drives the thrusters and latch valves, and provides the star-tracker and Sun-sensor interface. The PFC drives pyros, and the TMC receives digital and analog telemetry. The FPGA is implemented both in Xilinx (Spartan-3 400) and in Actel (RTSX72SU, ASX72S) devices. The Xilinx Spartan-3 part is used for the breadboard, the Actel ASX part is used for the EM (Engineering Model), and the pin-compatible, radiation-hardened RTSX part is used for the final EM and flight. The MSL spacecraft uses an FC (Flight Computer) to control power loads, relays, thrusters, latch valves, the Sun-sensor, and the star-tracker, and to read telemetry such as temperature. Commands are sent over a 1553 bus to the MREU (Multi-Mission System Architecture Platform Remote Engineering Unit). The MREU resends them over a remote serial command bus (c-bus) to the LCC, GID, TMC, and PFC. The MREU also sends out telemetry addresses via a remote serial telemetry address bus to the LCC, GID, TMC, and PFC, and the status is returned over the remote serial telemetry data bus.

  20. FPGA-accelerated adaptive optics wavefront control

    NASA Astrophysics Data System (ADS)

    Mauch, S.; Reger, J.; Reinlein, C.; Appelfelder, M.; Goy, M.; Beckert, E.; Tünnermann, A.

    2014-03-01

    The speed of real-time adaptive optical systems is primarily restricted by the data processing hardware and computational aspects. Furthermore, mirror layouts with increasing numbers of actuators reduce the bandwidth (speed) of the system and, thus, the number of applicable control algorithms. This burden turns out to be a key impediment for deformable mirrors with a continuous mirror surface and highly coupled actuator influence functions. In this regard, specialized hardware is necessary for high-performance real-time control applications. Our approach to overcoming this challenge is an adaptive optics system based on a Shack-Hartmann wavefront sensor (SHWFS) with a CameraLink interface. The data processing is based on a high-performance Intel Core i7 quad-core hard real-time Linux system. A custom-developed PCIe card employing a Xilinx Kintex-7 FPGA is presented, which accelerates the analysis of the Shack-Hartmann wavefront sensor. A recently developed real-time capable spot detection algorithm evaluates the wavefront. The main features of the presented system are the reduction of latency and the acceleration of computation. For example, matrix multiplications, which in general are of complexity O(n³), are accelerated by using the DSP48 slices of the field-programmable gate array (FPGA) as well as a novel hardware implementation of the SHWFS algorithm. A further benefit is the use of Streaming SIMD Extensions (SSE), which exploit the parallelization capability of the processor to further reduce the latency and increase the bandwidth of the closed loop. With this approach, up to 64 actuators of a deformable mirror can be handled and controlled without noticeable restriction from computational burdens.
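
    The spot-detection step mentioned above amounts, in its simplest form, to locating the focal spot in every lenslet subaperture; the following sketch shows a plain centre-of-mass estimate per subaperture. The window size, thresholding and synthetic frame are assumptions, and the authors' real-time algorithm is not reproduced here.

```python
import numpy as np

def shwfs_centroids(frame, n_sub, threshold=0.0):
    """Centre-of-mass spot positions for an n_sub x n_sub Shack-Hartmann grid.

    frame is assumed to be a square camera image that divides evenly into
    n_sub x n_sub subapertures; returns per-subaperture (x, y) centroids in
    pixels relative to each subaperture corner."""
    sub = frame.shape[0] // n_sub
    ys, xs = np.mgrid[0:sub, 0:sub]
    cents = np.zeros((n_sub, n_sub, 2))
    for i in range(n_sub):
        for j in range(n_sub):
            win = frame[i*sub:(i+1)*sub, j*sub:(j+1)*sub].astype(float)
            win = np.clip(win - threshold, 0.0, None)   # crude background rejection
            total = win.sum()
            if total > 0:
                cents[i, j] = ((win * xs).sum() / total, (win * ys).sum() / total)
    return cents

# Example with a synthetic 128x128 frame and an 8x8 lenslet grid.
frame = np.random.poisson(5.0, (128, 128)).astype(float)
print(shwfs_centroids(frame, n_sub=8).shape)   # (8, 8, 2)
```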

  1. FPGA-based voltage and current dual drive system for high frame rate electrical impedance tomography.

    PubMed

    Khan, Shadab; Manwaring, Preston; Borsic, Andrea; Halter, Ryan

    2015-04-01

    Electrical impedance tomography (EIT) is used to image the electrical property distribution of a tissue under test. An EIT system comprises complex hardware and software modules, which are typically designed for a specific application. Upgrading these modules is a time-consuming process, and requires rigorous testing to ensure proper functioning of new modules with the existing ones. To this end, we developed a modular and reconfigurable data acquisition (DAQ) system using National Instruments' (NI) hardware and software modules, which offer inherent compatibility over generations of hardware and software revisions. The system can be configured to use up to 32 channels. This EIT system can be used to apply current or voltage signals interchangeably and measure the tissue response in a semi-parallel fashion. A novel signal averaging algorithm and a 512-point fast Fourier transform (FFT) computation block were implemented on the FPGA. FFT output bins were classified as signal or noise. Signal bins constitute the tissue's response to a pure or mixed tone signal. Signal bins' data can be used for traditional applications, as well as synchronous frequency-difference imaging. Noise bins were used to compute noise power on the FPGA. Noise power represents a metric of signal quality, and can be used to ensure proper tissue-electrode contact. Allocation of these computationally expensive tasks to the FPGA reduced the required bandwidth between the PC and the FPGA for high frame rate EIT. In the 16-channel configuration, with a signal-averaging factor of 8, the DAQ frame rate at 100 kHz exceeded 110 frames s⁻¹, and the signal-to-noise ratio exceeded 90 dB across the spectrum. Reciprocity error was found to be for frequencies up to 1 MHz. Static imaging experiments were performed on a high-conductivity inclusion placed in a saline-filled tank; the inclusion was clearly localized in the reconstructions obtained for both absolute current and voltage mode data. PMID:25376037
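
    A minimal software analogue of the bin-classification step described above might look like the sketch below: bins at the known excitation frequencies are treated as signal and the remainder as noise, from which a per-bin noise-power figure and an SNR are formed. The 512-point length and 100 kHz rate match the abstract, but the chosen tone bins and the SNR definition are assumptions rather than the authors' FPGA implementation.

```python
import numpy as np

FS = 100e3             # sampling rate in Hz, as quoted in the abstract
NFFT = 512             # FFT length, as quoted in the abstract
SIG_BINS = [50, 150]   # assumed bins carrying the excitation tones (mixed-tone drive)

def classify_bins(samples, sig_bins=SIG_BINS, nfft=NFFT):
    """Split the spectrum into signal and noise bins and return
    (signal bin powers, mean per-bin noise power, SNR in dB)."""
    power = np.abs(np.fft.rfft(samples[:nfft])) ** 2
    noise_idx = [k for k in range(1, len(power)) if k not in sig_bins]  # skip DC
    signal_power = power[sig_bins].sum()
    noise_power = power[noise_idx].mean()
    snr_db = 10.0 * np.log10(signal_power / (noise_power + 1e-30))
    return power[sig_bins], noise_power, snr_db

# Example: two tones placed exactly on the chosen bins, plus white noise.
t = np.arange(NFFT) / FS
x = sum(np.sin(2 * np.pi * (k * FS / NFFT) * t) for k in SIG_BINS)
x += 1e-3 * np.random.randn(NFFT)
print(classify_bins(x)[2])   # SNR in dB
```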

  2. Design and construction of porous metal-organic frameworks based on flexible BPH pillars

    SciTech Connect

    Hao, Xiang-Rong; Yang, Guang-sheng; Shao, Kui-Zhan; Su, Zhong-Min; Yuan, Gang; Wang, Xin-Long

    2013-02-15

    Three metal-organic frameworks (MOFs), [Co₂(BPDC)₂(4-BPH)·3DMF]ₙ (1), [Cd₂(BPDC)₂(4-BPH)₂·2DMF]ₙ (2) and [Ni₂(BDC)₂(3-BPH)₂(H₂O)·4DMF]ₙ (3) (H₂BPDC = biphenyl-4,4′-dicarboxylic acid, H₂BDC = terephthalic acid, BPH = bis(pyridinylethylidene)hydrazine and DMF = N,N′-dimethylformamide), have been solvothermally synthesized based on the insertion of heterogeneous BPH pillars. Framework 1 has a 'single-pillared' MOF-5-like motif with inner cage diameters of up to 18.6 Å. Framework 2 has a 'double-pillared' MOF-5-like motif with cage diameters of 19.2 Å, while 3 has a 'double-pillared' 8-connected framework with channel diameters of 11.0 Å. Powder X-ray diffraction (PXRD) shows that 3 is a dynamic porous framework. Graphical abstract: By insertion of flexible BPH pillars based on the 'pillaring' strategy, three metal-organic frameworks are obtained, showing that porous frameworks can be constructed in a much greater variety. Highlights: • Frameworks 1 and 2 have a MOF-5-like motif. • The cube-like cages in 1 and 2 are quite large, comparable to IRMOF-10. • Framework 1 is in 'single-pillared' mode while 2 is in 'double-pillared' mode. • PXRD and gas adsorption analysis show that 3 is a dynamic porous framework.

  3. Supply-demand analysis: a framework for exploring the regulatory design of metabolism.

    PubMed

    Hofmeyr, Jan-Hendrik S; Rohwer, Johann M

    2011-01-01

    The living cell can be thought of as a collection of linked chemical factories, a molecular economy in which the principles of supply and demand obtain. Supply-demand analysis is a framework for exploring and gaining an understanding of metabolic regulation, both theoretically and experimentally, where regulatory performance is measured in terms of flux control and homeostatic maintenance of metabolite concentrations. It is based on a metabolic control analysis of a supply-demand system in steady state in which the degree of flux and concentration control by the supply and demand blocks is related to their local properties, which are quantified as the elasticities of supply and demand. These elasticities can be visualized as the slopes of the log-log rate characteristics of supply and demand. Rate characteristics not only provide insight about system behavior around the steady state but can also be expanded to provide a view of the behavior of the system over a wide range of concentrations of the metabolic intermediate that links the supply and the demand. The theoretical and experimental results of supply-demand analysis paint a picture of the regulatory design of metabolic systems that differs radically from what can be called the classical view of metabolic regulation, which generally explains the role of regulatory mechanisms only in terms of the supply, completely ignoring the demand. Supply-demand analysis has recently been generalized into a computational tool that can be used to study the regulatory behavior of kinetic models of metabolic systems up to genome-scale. PMID:21943913
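
    For orientation, the core relation being referred to can be stated compactly: for a two-block supply-demand system linked by an intermediate x, metabolic control analysis expresses the flux and concentration control coefficients in terms of the supply and demand elasticities. This is the standard textbook form of the result, reproduced here from general knowledge of the formalism rather than taken from the abstract.

```latex
C^{J}_{\mathrm{supply}} = \frac{\varepsilon^{\mathrm{demand}}_{x}}
      {\varepsilon^{\mathrm{demand}}_{x} - \varepsilon^{\mathrm{supply}}_{x}}, \qquad
C^{J}_{\mathrm{demand}} = \frac{-\varepsilon^{\mathrm{supply}}_{x}}
      {\varepsilon^{\mathrm{demand}}_{x} - \varepsilon^{\mathrm{supply}}_{x}}, \qquad
C^{x}_{\mathrm{supply}} = -C^{x}_{\mathrm{demand}}
    = \frac{1}{\varepsilon^{\mathrm{demand}}_{x} - \varepsilon^{\mathrm{supply}}_{x}}
```

    Here the elasticities are the slopes of the log-log rate characteristics of the supply and demand blocks evaluated at the steady state, as described in the abstract.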

  4. Cryogenic loss monitors with FPGA TDC signal processing

    SciTech Connect

    Warner, A.; Wu, J.; /Fermilab

    2011-09-01

    Radiation hard helium gas ionization chambers capable of operating in vacuum at temperatures ranging from 5K to 350K have been designed, fabricated and tested and will be used inside the cryostats at Fermilab's Superconducting Radiofrequency beam test facility. The chamber vessels are made of stainless steel and all materials used including seals are known to be radiation hard and suitable for operation at 5K. The chambers are designed to measure radiation up to 30 kRad/hr with sensitivity of approximately 1.9 pA/(Rad/hr). The signal current is measured with a recycling integrator current-to-frequency converter to achieve a required measurement capability for low current and a wide dynamic range. A novel scheme of using an FPGA-based time-to-digital converter (TDC) to measure time intervals between pulses output from the recycling integrator is employed to ensure a fast beam loss response along with a current measurement resolution better than 10-bit. This paper will describe the results obtained and highlight the processing techniques used.
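
    The principle behind the recycling-integrator readout described above is that each output pulse corresponds to a fixed charge packet, so the average current follows from the pulse-to-pulse interval measured by the TDC. The sketch below illustrates that conversion; the charge-per-pulse value is an assumption, the 1.9 pA/(Rad/hr) sensitivity is quoted from the abstract, and the post-processing shown is not the authors' firmware.

```python
# Minimal sketch: converting TDC-measured intervals between recycling-integrator
# pulses into an ionization current and a dose rate.
Q_PULSE = 1.0e-12        # coulombs released per integrator cycle (assumed value)
SENSITIVITY = 1.9e-12    # amperes per (Rad/hr), from the abstract

def current_from_intervals(intervals_s):
    """Average current given a list of pulse-to-pulse intervals in seconds."""
    mean_dt = sum(intervals_s) / len(intervals_s)
    return Q_PULSE / mean_dt          # I = Q / dt for a recycling integrator

def dose_rate_rad_per_hr(current_a):
    return current_a / SENSITIVITY

# Example: pulses arriving roughly every 20 microseconds.
intervals = [20e-6, 19.5e-6, 20.5e-6]
i = current_from_intervals(intervals)
print(i, dose_rate_rad_per_hr(i))     # ~5e-8 A, ~2.6e4 Rad/hr
```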

  5. An FPGA computing demo core for space charge simulation

    SciTech Connect

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
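
    The resource-saving trick described above, computing (r²)^(-3/2) from a small table addressed by the most significant non-zero bits of r², can be mimicked in software as below. The table size, word widths and interpolation-free lookup are assumptions meant only to show the idea, not a reproduction of the Fermilab core.

```python
import numpy as np

ADDR_BITS = 10   # table addressed by ~10 most significant non-zero bits, per the abstract

# Precompute mantissa^(-3/2) for normalized values in [1, 2).
TABLE = (1.0 + np.arange(2 ** ADDR_BITS) / 2 ** ADDR_BITS) ** -1.5

def inv_sqrt_cube(r2, table=TABLE, addr_bits=ADDR_BITS):
    """Approximate (r2)^(-3/2) for a positive integer r2 with a leading-bits
    table lookup, roughly as a fixed-point FPGA pipeline might do it."""
    shift = r2.bit_length() - 1                                   # position of the leading bit
    idx = ((r2 << addr_bits) >> shift) & ((1 << addr_bits) - 1)   # fractional mantissa bits
    return table[idx] * 2.0 ** (-1.5 * shift)                     # exponent contribution

# Coulomb-style kernel: F is proportional to dr * (|dr|^2)^(-3/2); compare lookup vs exact.
r2 = 12345
print(inv_sqrt_cube(r2), r2 ** -1.5)
```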

  6. Design of a humidity-stable metal-organic framework using a phosphonate monoester ligand.

    PubMed

    Gelfand, Benjamin S; Lin, Jian-Bin; Shimizu, George K H

    2015-02-16

    Phosphonate monoesters are atypical linkers for metal-organic frameworks, but they offer potentially added versatility. In this work, a bulky isopropyl ester is used to direct the topology of a copper(II) network from a dense to an open framework, CALF-30. CALF-30 shows no adsorption of N₂ or CH₄; however, using CO₂ sorption, CALF-30 was found to have a Langmuir surface area of over 300 m²/g and to be stable under conditions of 90% relative humidity at 353 K, owing to kinetic shielding of the framework by the phosphonate ester. PMID:25646642

  7. From Human Factors to Human Actors to Human Crafters: A Meta-Design Inspired Participatory Framework for Designing in Use

    ERIC Educational Resources Information Center

    Maceli, Monica Grace

    2012-01-01

    Meta-design theory emphasizes that system designers can never anticipate all future uses of their system at design time, when systems are being developed. Rather, end users shape their environments in response to emerging needs at use time. Meta-design theory suggests that systems should therefore be designed to adapt to future conditions in the…

  8. A software engineering perspective on environmental modeling framework design: The object modeling system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The environmental modeling community has historically been concerned with the proliferation of models and the effort associated with collective model development tasks (e.g., code generation, data provisioning and transformation, etc.). Environmental modeling frameworks (EMFs) have been developed to...

  9. A novel real-time resource efficient implementation of Sobel operator-based edge detection on FPGA

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Saini, Anil K.; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2014-12-01

    A new resource-efficient FPGA-based hardware architecture for real-time edge detection using the Sobel operator for video surveillance applications is proposed. The Sobel operator was chosen for its ability to counteract the noise sensitivity of the simple gradient operator. An FPGA was chosen for this implementation because of its flexibility, which allows algorithmic changes at later stages of system development, and its capability to provide real-time performance that is hard to achieve with a general-purpose processor or digital signal processor, while avoiding the extensive design work, time, and cost required for an application-specific integrated circuit. The proposed architecture uses a single processing element for both the horizontal and vertical gradient computations of the Sobel operator and utilises approximately 38% fewer FPGA resources than the standard Sobel edge detection architecture, while maintaining real-time frame rates for high-definition video (1920 × 1080 images). The complete system is implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA board.
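
    For orientation, the underlying computation the architecture implements is the standard pair of 3 × 3 Sobel convolutions followed by a gradient-magnitude threshold; the sketch below shows that reference computation in plain Python/NumPy. The |Gx| + |Gy| approximation and the threshold value are common hardware-friendly choices assumed here, not details taken from the paper.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # horizontal-gradient kernel
KY = KX.T                                              # vertical-gradient kernel

def sobel_edges(img, threshold=128):
    """Edge map via Sobel gradients; |Gx| + |Gy| is used instead of the
    Euclidean magnitude, as is typical in resource-constrained hardware."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    img = img.astype(np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            gx = int((win * KX).sum())
            gy = int((win * KY).sum())
            out[y, x] = 255 if abs(gx) + abs(gy) >= threshold else 0
    return out

# Example on a synthetic image with a vertical step edge.
img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = 200
print(sobel_edges(img).sum() > 0)   # True: the edge column is detected
```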

  10. Firmware-only implementation of Time-to-Digital Converter (TDC) in Field-Programmable Gate Array (FPGA)

    SciTech Connect

    Jinyuan Wu; Zonghan Shi; Irena Y Wang

    2003-11-07

    A Time-to-Digital Converter (TDC) implemented in a general-purpose field-programmable gate array (FPGA) for the Fermilab CKM experiment is presented. The TDC uses a delay chain and register array structure to produce the lower bits in addition to the higher bits obtained from a clock counter. Lacking the direct control available in custom chips, the FPGA implementation of the delay chain and register array structure had to address two major problems: (1) the logic elements used for the delay chain and register array must be placed and routed by the FPGA compiler in a predictable manner, to assure uniformity of the TDC binning and short-term stability; and (2) the delay variation due to temperature and power supply voltage must be compensated for to assure long-term stability. The first problem was solved by using the chain structures that FPGA vendors provide for general purposes, such as carry logic or logic expansion. To compensate for delay variations, several digital compensation strategies that can be implemented in the same FPGA device were studied. Some bench-top test results are also presented in this document.
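
    The delay-chain-plus-register structure effectively produces a thermometer code that is combined with a coarse clock counter; the sketch below shows how such a raw readout might be turned into a timestamp. The bin size, chain length and encoding details are assumptions for illustration only, not the CKM firmware.

```python
CLK_PERIOD_NS = 18.8      # assumed system clock period
CHAIN_LEN = 64            # assumed number of delay-chain taps per clock period
BIN_NS = CLK_PERIOD_NS / CHAIN_LEN

def fine_time_from_thermometer(taps):
    """Count how far the input edge propagated down the delay chain before the
    registers were clocked; taps is a list of 0/1 register outputs (bubble-free)."""
    return sum(taps)                      # simple ones-counter

def timestamp_ns(coarse_count, taps):
    """Combine the coarse counter and delay-chain fine bits into one timestamp;
    an edge that propagated further arrived earlier within the clock period."""
    fine = fine_time_from_thermometer(taps)
    return coarse_count * CLK_PERIOD_NS - fine * BIN_NS

# Example: edge arrived 5 taps before the 1000th clock edge.
taps = [1] * 5 + [0] * (CHAIN_LEN - 5)
print(timestamp_ns(1000, taps))
```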

  11. Improved On-Chip Measurement of Delay in an FPGA or ASIC

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Burke, Gary; Sheldon, Douglas

    2007-01-01

    An improved design has been devised for on-chip circuitry for measuring the delay through a chain of combinational logic elements in a field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC). In the improved design, the delay chain does not include input and output buffers and is not configured as an oscillator. Instead, the delay chain is made part of the signal chain of an on-chip pulse generator. The duration of the pulse is measured on-chip and taken to equal the delay.

  12. Digital Real-Time Multiple Channel Multiple Mode Neutron Flux Estimation on FPGA-based Device

    NASA Astrophysics Data System (ADS)

    Thevenin, Mathieu; Barbot, Loïc; Corre, Gwénolé; Woo, Romuald; Destouches, Christophe; Normand, Stéphane

    2016-02-01

    This paper presents a complete custom fully digital instrumentation device designed for real-time neutron flux estimation, especially for nuclear reactor in-core measurement using subminiature Fission Chambers (FCs). The entire fully functional, small-footprint design (about 1714 LUTs) is implemented on an FPGA. It enables real-time acquisition and analysis of the neutron flux on multiple channels, in both counting mode and Campbelling mode. Experimental results obtained from this new device are consistent with simulation results within the expected uncertainty. This device paves the way for new application perspectives in real-time nuclear reactor monitoring.
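
    The two acquisition modes mentioned estimate the fission-chamber event rate from the same digitized waveform in different ways: by counting threshold crossings at low rates, and from the signal variance (Campbell's theorem) when pulses pile up. The sketch below illustrates both estimators; the threshold, pulse shape, calibration constant and synthetic data are assumptions rather than details of the device.

```python
import numpy as np

def counting_rate(waveform, fs, threshold):
    """Counting mode: rising threshold crossings per second (undercounts at
    high rates because of pulse pile-up, which is where Campbelling takes over)."""
    above = waveform > threshold
    crossings = np.count_nonzero(above[1:] & ~above[:-1])
    return crossings * fs / len(waveform)

def campbelling_rate(waveform, k_cal):
    """Campbelling mode: Campbell's theorem gives Var(signal) = rate * k_cal,
    where k_cal depends on the pulse charge and shape (calibrated beforehand)."""
    return np.var(waveform) / k_cal

# Example on a synthetic pulse train (pulse shape, rate and noise are assumptions).
fs, n, rate_true = 10e6, 200_000, 5e4
h = np.exp(-np.arange(50) / 10.0)                       # assumed single-pulse shape
arrivals = np.random.rand(n) < rate_true / fs           # Poisson-like arrivals
waveform = np.convolve(arrivals.astype(float), h)[:n] + 0.01 * np.random.randn(n)
k_cal = np.sum(h ** 2) / fs                             # Campbelling calibration constant
print(counting_rate(waveform, fs, threshold=0.5), campbelling_rate(waveform, k_cal))
```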

  13. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  14. Narrative Means to Preventative Ends: A Narrative Engagement Framework for Designing Prevention Interventions

    PubMed Central

    Miller-Day, Michelle; Hecht, Michael L.

    2013-01-01

    This paper describes a Narrative Engagement Framework (NEF) for guiding communication-based prevention efforts. This framework suggests that personal narratives have distinctive capabilities in prevention. The paper discusses the concept of narrative, links narrative to prevention, and discusses the central role of youth in developing narrative interventions. As illustration, the authors describe how the NEF is applied in the keepin’ it REAL adolescent drug prevention curriculum, pose theoretical directions, and offer suggestions for future work in prevention communication. PMID:23980613

  15. A framework for community assessment: designing and conducting a survey in a Hispanic immigrant and refugee community.

    PubMed

    Urrutia-Rojas, X; Aday, L A

    1991-03-01

    This article introduces a framework for the study of access to medical care that has been used extensively in national and local surveys, and demonstrates its application to an assessment of health and health care needs in a Hispanic immigrant and refugee community. The presentation of the framework, study design, findings, and implications for research and planning points out the utility of this framework for organizing systematic community assessment data-gathering activities; demonstrates how such an assessment could be incorporated into a public health nursing curriculum or readily adopted by public health nurse professionals in their communities; illustrates the potential for effective partnerships between public health practitioners and academics in conducting and disseminating the findings; and provides a broader conceptual, empirical, and policy-oriented context in which to view local community-assessment activities and their relevance for health policy and program development. PMID:2023852

  16. Exploring a Framework for Professional Development in Curriculum Innovation: Empowering Teachers for Designing Context-Based Chemistry Education

    NASA Astrophysics Data System (ADS)

    Stolk, Machiel J.; de Jong, Onno; Bulte, Astrid M. W.; Pilot, Albert

    2011-05-01

    Involving teachers in early stages of context-based curriculum innovations requires a professional development programme that actively engages teachers in the design of new context-based units. This study considers the implementation of a teacher professional development framework aiming to investigate processes of professional development. The framework is based on Galperin's theory of the internalisation of actions and it is operationalised into a professional development programme to empower chemistry teachers for designing new context-based units. The programme consists of the teaching of an educative context-based unit, followed by the designing of an outline of a new context-based unit. Six experienced chemistry teachers participated in the instructional meetings and practical teaching in their respective classrooms. Data were obtained from meetings, classroom discussions, and observations. The findings indicated that teachers became only partially empowered for designing a new context-based chemistry unit. Moreover, the process of professional development leading to teachers' empowerment was not carried out as intended. It is concluded that the elaboration of the framework needs improvement. The implications for a new programme are discussed.

  17. High-Speed, Multi-Channel Serial ADC LVDS Interface for Xilinx Virtex-5 FPGA

    NASA Technical Reports Server (NTRS)

    Taylor, Gregory H.

    2012-01-01

    Analog-to-digital converters (ADCs) are used in scientific and communications instruments on all spacecraft. As data rates get higher, and as the transition is made from parallel ADC designs to high-speed, serial, low-voltage differential signaling (LVDS) designs, the need arises to interface these devices to field programmable gate arrays (FPGAs). As Xilinx has released the radiation-hardened version of the Virtex-5, this part will likely be used in future missions. High-speed serial ADCs send data at very high rates, and a de-serializer instantiated in the fabric of the FPGA cannot keep up with them. The Virtex-5 contains primitives designed specifically for high-speed, source-synchronous de-serialization, but, as supported by Xilinx, they only handle bit-widths of up to 10. Supporting bit-widths of 12 or more requires the use of the primitives in an undocumented configuration, a non-trivial task. A new SystemVerilog design was written that is simpler and uses fewer hardware resources than the reference design described in Xilinx Application Note XAPP866. It has been shown to work in a Xilinx XC5VSX240T connected to a MAXIM MAX1438 12-bit ADC using a 50-MHz sample clock. The design can be replicated in the FPGA for multiple ADCs (four instantiations were used for a total of 28 channels).

  18. Design and Implementation of an Architectural Framework for Web Portals in a Ubiquitous Pervasive Environment

    PubMed Central

    Raza, Muhammad Taqi; Yoo, Seung-Wha; Kim, Ki-Hyung; Joo, Seong-Soon; Jeong, Wun-Cheol

    2009-01-01

    Web portals function as a single point of access to information on the World Wide Web (WWW). The web portal always contacts the portal's gateway for the information flow, which causes network traffic over the Internet. Moreover, it provides real-time/dynamic access to stored information, but not access to real-time information. This inherent functionality of web portals limits their role for resource-constrained digital devices in the Ubiquitous era (U-era). This paper presents a framework for the web portal in the U-era. We have introduced the concept of Local Regions in the proposed framework, so that local queries can be solved locally rather than being routed over the Internet. Moreover, our framework enables one-to-one device communication for real-time information flow. To provide an in-depth analysis, we first provide an analytical model for query processing at the servers of our framework-oriented web portal. Finally, we deployed a testbed, one of the world's largest IP-based wireless sensor network testbeds, and real-time measurements were observed that demonstrate the efficacy and workability of the proposed framework. PMID:22346693

  19. FPGA-based artificial neural network using CORDIC modules

    NASA Astrophysics Data System (ADS)

    Liddicoat, Albert A.; Slivovsky, Lynne A.; McLenegan, Tim; Heyer, Don

    2006-08-01

    Artificial neural networks have been used in applications that require complex procedural algorithms and in systems which lack an analytical mathematical model. By designing a large network of computing nodes based on the artificial neuron model, new solutions can be developed for computational problems in fields such as image processing and speech recognition. Neural networks are inherently parallel since each neuron, or node, acts as an autonomous computational element. Artificial neural networks use a mathematical model for each node that processes information from other nodes in the same region. The information processing entails a weighted-average computation followed by a nonlinear mathematical transformation. Some typical artificial neural network applications use the exponential function or trigonometric functions for the nonlinear transformation. Various simple artificial neural networks have been implemented using a processor to compute the output for each node sequentially. This approach uses sequential processing and does not take advantage of the parallelism of a complex artificial neural network. In this work, a hardware-based approach is investigated for artificial neural network applications. A field-programmable gate array (FPGA) is used to implement an artificial neuron using hardware multipliers, adders and CORDIC functional units. In order to create a large-scale artificial neural network, area-efficient hardware units such as CORDIC units are needed. High-performance, low-cost bit-serial CORDIC implementations are presented. Finally, the FPGA resources and the performance of a hardware-based artificial neuron are presented.
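
    As a concrete illustration of what a CORDIC functional unit contributes here, the sketch below evaluates a neuron's weighted sum and then its nonlinear activation with shift-and-add hyperbolic CORDIC iterations, the kind of operation the paper maps onto FPGA hardware. The iteration count, the choice of tanh as the activation, and the example inputs are assumptions; the authors' bit-serial units are not reproduced.

```python
import math

def cordic_tanh(z, n_iter=16):
    """tanh(z) via hyperbolic CORDIC in rotation mode (valid for |z| < ~1.1).

    Only shifts, adds and a small arctanh table are used, mirroring a hardware
    CORDIC unit; the CORDIC gain cancels in the final y/x ratio."""
    # Hyperbolic CORDIC must repeat iterations i = 4, 13, 40, ... to converge.
    seq, i, repeat = [], 1, 4
    while len(seq) < n_iter:
        seq.append(i)
        if i == repeat:
            seq.append(i)
            repeat = 3 * repeat + 1
        i += 1
    seq = seq[:n_iter]
    x, y = 1.0, 0.0                       # after the loop: x ~ K*cosh(z), y ~ K*sinh(z)
    for k in seq:
        d = 1.0 if z >= 0 else -1.0
        x, y, z = (x + d * y * 2.0 ** -k,
                   y + d * x * 2.0 ** -k,
                   z - d * math.atanh(2.0 ** -k))
    return y / x                          # sinh/cosh: the gain K cancels

def neuron(inputs, weights, bias):
    """Weighted sum followed by a nonlinear transformation (tanh here)."""
    return cordic_tanh(sum(w * v for w, v in zip(weights, inputs)) + bias)

# Example with made-up weights; compare against the library tanh.
s = 0.2 * 0.8 - 0.5 * 0.3 + 0.1 * -1.2 + 0.05
print(neuron([0.2, -0.5, 0.1], [0.8, 0.3, -1.2], 0.05), math.tanh(s))
```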

  20. LoFASM's FPGA-based Digital Acquisition System

    NASA Astrophysics Data System (ADS)

    Dartez, Louis P.; Jenet, F.; Creighton, T. D.; Ford, A. J.; Hicks, B.; Hinojosa, J.; Kassim, N. E.; Price, R. H.; Stovall, K.; Ray, P. S.; Taylor, G. B.

    2014-01-01

    The Low Frequency All Sky Monitor (LoFASM) is a distributed array of dipole antennas that are sensitive to radio frequencies from 10 to 88 MHz. LoFASM consists of antennas and front-end electronics that were originally developed for the Long Wavelength Array (LWA) by the U.S. Naval Research Lab, the University of New Mexico, Virginia Tech, and the Jet Propulsion Laboratory. LoFASM, funded by the U.S. Department of Defense, will initially consist of 4 stations, each consisting of 12 dual-polarization dipole antenna stands. The primary science goals of LoFASM will be the detection and study of low-frequency radio transients, a high-priority science goal as deemed by the National Research Council's decadal survey. The data acquisition system for the LoFASM antenna array will use Field Programmable Gate Array (FPGA) technology to implement a real-time full-Stokes spectrometer and data recorder. This poster presents an overview of the current design and digital architecture of a single station of the LoFASM array as well as the status of the entire project.
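
    For context, a full-Stokes spectrometer of the kind described forms, for every frequency channel, the four Stokes parameters from the complex voltages of the two polarizations. The sketch below shows that per-channel combination after an FFT; the sign convention of U and V varies between instruments, and the channel count, lack of time integration and fake data are assumptions, not the LoFASM firmware.

```python
import numpy as np

def full_stokes(x_samples, y_samples, nchan=1024):
    """Per-channel Stokes I, Q, U, V from time series of the two linear
    polarizations, using one FFT per polarization (no time integration here)."""
    ex = np.fft.rfft(x_samples[:nchan])
    ey = np.fft.rfft(y_samples[:nchan])
    I = np.abs(ex) ** 2 + np.abs(ey) ** 2
    Q = np.abs(ex) ** 2 - np.abs(ey) ** 2
    U = 2.0 * np.real(ex * np.conj(ey))
    V = 2.0 * np.imag(ex * np.conj(ey))   # sign convention is instrument-dependent
    return I, Q, U, V

# Example with fake dual-polarization noise data.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = rng.standard_normal(1024)
I, Q, U, V = full_stokes(x, y)
print(I.shape)   # (513,) channels for a 1024-point real FFT
```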