Science.gov

Sample records for fpga design framework

  1. A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations

    PubMed Central

    Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2008-01-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
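
    For orientation (not taken from the paper itself), a common kinetic description of AMPA/NMDA-type synaptic channels, such as the Destexhe-Mainen-Sejnowski scheme, involves exactly the exponential and division operations the abstract refers to; in LaTeX form, with B(V) ≈ 1 for AMPA and the Jahr-Stevens magnesium block for NMDA:

        I_{\mathrm{syn}} = \bar{g}\, r(t)\, B(V)\, \bigl(V - E_{\mathrm{syn}}\bigr), \qquad
        \frac{dr}{dt} = \alpha\,[T]\,(1 - r) - \beta\, r, \qquad
        B(V) = \frac{1}{1 + \exp(-0.062\,V)\,[\mathrm{Mg}^{2+}]/3.57}

    A fixed-point FPGA pipeline must approximate the exponential and the division, which is what the hardware-efficient factoring approach mentioned above addresses.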

  2. FPGA Design Practices for I&C in Nuclear Power Plants

    SciTech Connect

    Bobrek, Miljko; Wood, Richard Thomas; Bouldin, Donald; Waterman, Michael E

    2009-01-01

    Safe FPGA design practices can be classified into three major groups covering board-level and FPGA logic-level design practices, FPGA design entry methods, and FPGA design methodology. This paper presents the most common hardware and software design practices that are acceptable in safety-critical FPGA systems. It also proposes an FPGA-specific design life cycle including design entry, FPGA synthesis, place and route, and validation and verification.

  3. OpenACC to FPGA: A Framework for Directive-based High-Performance Reconfigurable Computing

    SciTech Connect

    Lee, Seyong; Vetter, Jeffrey S

    2016-01-01

    This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into an OpenCL code, which is further compiled into an FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from underlying architectures, which facilitates portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.
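
    As a reminder of what the input to such a flow looks like, the snippet below is a plain, standard OpenACC C loop (the kernel, array names and clauses are illustrative; the paper's FPGA-specific pragma extensions are not shown, since the abstract does not spell them out):

        /* Standard OpenACC C: the compiler offloads the loop to the accelerator.
           In an OpenARC-style flow this would be translated to OpenCL and then
           compiled into an FPGA configuration by the vendor's offline compiler. */
        void vadd(const float *a, const float *b, float *restrict c, int n)
        {
            #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
            for (int i = 0; i < n; ++i)
                c[i] = a[i] + b[i];
        }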

  4. Pipelined CPU Design with FPGA in Teaching Computer Architecture

    ERIC Educational Resources Information Center

    Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon

    2012-01-01

    This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…

  5. FPGA design and implementation for EIT data acquisition.

    PubMed

    Yue, Xicai; McLeod, Chris

    2008-10-01

    OXBACT-5 was designed to meet the challenges of working in the intensive care hospital environment, focusing particularly on thoracic imaging of patients with respiratory distress and chronic heart failure (CHF). The FPGA-based, wireless-LAN-linked multi-channel EIT data acquisition system (DAS) providing 16 programmable excitation current channels and 64 voltage measurement channels is presented. It contains function modules for a PCI bus interface, direct digital synthesizers, dual-port memory blocks, digital demodulation and all the command and control logic in the FPGA. The whole EIT data acquisition system is fully programmable and reconfigurable from the host PC. The excitation frequency, excitation patterns, the measuring sequence and the gain of each measurement channel can be set from the host PC before each measurement. The demodulation is implemented in the FPGA chip to reduce the data rate between the DAS and the host PC. In addition, measurement process management is achieved in this FPGA chip. Complemented by analogue devices such as ADCs, DACs, analogue buffers and analogue multiplexers, the new FPGA-based EIT DAS system is implemented in a very compact way for bedside use in intensive care units of hospitals. It is intended for applications such as continuous respiration monitoring with data collection at 25 frames per second. Image reconstruction times depend on the choice of 2D or 3D imaging algorithms and the available processing power.
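
    To illustrate the kind of digital demodulation such a DAS performs inside the FPGA, here is a generic quadrature demodulation sketch (assumed names, floating-point arithmetic; a real FPGA implementation would use fixed-point multipliers and accumulators):

        /* Quadrature demodulation of one measurement channel: mix the sampled
           voltage with cosine/sine references at the excitation frequency and
           average over an integer number of excitation cycles. */
        #include <math.h>

        void demodulate(const double *samples, int n, double f_exc, double f_samp,
                        double *in_phase, double *quadrature)
        {
            double i_acc = 0.0, q_acc = 0.0;
            for (int k = 0; k < n; ++k) {
                double phase = 2.0 * M_PI * f_exc * (double)k / f_samp;
                i_acc += samples[k] * cos(phase);
                q_acc += samples[k] * sin(phase);
            }
            *in_phase   = 2.0 * i_acc / n;  /* real part of the measured voltage */
            *quadrature = 2.0 * q_acc / n;  /* imaginary part (phase information) */
        }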

  6. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40 MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets. PMID:25965680
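
    For readers unfamiliar with the underlying computation, the sketch below shows a generic delay-and-sum step for one image pixel (plain C with assumed geometry and variable names; the published OpenCL kernels are staged differently and include apodization and interpolation not shown here):

        /* Delay-and-sum for one synthetic-aperture pixel: for every receive
           channel, compute the round-trip delay to the pixel, pick the matching
           RF sample and accumulate. */
        #include <math.h>

        float das_pixel(const float *const *rf, int n_ch, int n_samp,
                        float fs, float c,          /* sampling rate (Hz), sound speed (m/s) */
                        float px, float pz,         /* pixel position (m)                    */
                        const float *elem_x,        /* receive element x positions (m)       */
                        float tx_x, float tx_z)     /* virtual transmit source position (m)  */
        {
            float d_tx = sqrtf((px - tx_x) * (px - tx_x) + (pz - tx_z) * (pz - tx_z));
            float sum = 0.0f;
            for (int ch = 0; ch < n_ch; ++ch) {
                float d_rx = sqrtf((px - elem_x[ch]) * (px - elem_x[ch]) + pz * pz);
                int idx = (int)((d_tx + d_rx) / c * fs + 0.5f);
                if (idx >= 0 && idx < n_samp)
                    sum += rf[ch][idx];
            }
            return sum;
        }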

  8. REALIZATION OF A CUSTOM DESIGNED FPGA BASED EMBEDDED CONTROLLER.

    SciTech Connect

    SEVERINO,F.; HARVEY, M.; HAYES, T.; HOFF, L.; ODDO, P.; SMITH, K.S.

    2007-10-15

    As part of the Low Level RF (LLRF) upgrade project at Brookhaven National Laboratory's Collider-Accelerator Department (BNL C-AD), we have recently developed and tested a prototype high performance embedded controller. This controller is a custom designed PMC module employing a Xilinx V4FX60 FPGA with a PowerPC405 embedded processor, and a wide variety of on board peripherals (DDR2 SDRAM, FLASH, Ethernet, PCI, multi-gigabit serial transceivers, etc.). The controller is capable of running either an embedded version of LINUX or VxWorks, the standard operating system for RHIC front end computers (FECs). We have successfully demonstrated functionality of this controller as a standard RHIC FEC and tested all on board peripherals. We now have the ability to develop complex, custom digital controllers within the framework of the standard RHIC control system infrastructure. This paper will describe various aspects of this development effort, including the basic hardware, functional capabilities, the development environment, kernel and system integration, and plans for further development.

  9. Evaluation of power costs in applying TMR to FPGA designs.

    SciTech Connect

    Rollins, Nathaniel; Wirthlin, M. J.; Graham, P. S.

    2004-01-01

    Triple modular redundancy (TMR) is a technique commonly used to mitigate against design failures caused by single event upsets (SEUs). The SEU immunity that TMR provides comes at the cost of increased design area and decreased speed. Additionally, the cost of increased power due to TMR must be considered. This paper evaluates the power costs of TMR and validates the evaluations with actual measurements. Sensitivity to design placement is another important part of this study. Power consumption costs due to TMR are also evaluated in different FPGA architectures. This study shows that power consumption rises in the range of 3x to 7x when TMR is applied to a design.
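
    As background for the cost discussion, TMR triplicates the logic and votes on the outputs; a bitwise majority voter over three 32-bit copies, sketched in C (the FPGA implementation would of course be a LUT-level circuit, not software):

        #include <stdint.h>

        /* Majority vote of three redundant copies: each output bit takes the
           value held by at least two of the three inputs, so any single upset
           copy is outvoted. Triplicating logic and adding voters is what drives
           the area and power overhead discussed above. */
        static inline uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
        {
            return (a & b) | (a & c) | (b & c);
        }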

  10. Design of transient light signal simulator based on FPGA

    NASA Astrophysics Data System (ADS)

    Kang, Jing; Chen, Rong-li; Wang, Hong

    2014-11-01

    A design scheme for a transient light signal simulator based on a Field Programmable Gate Array (FPGA) is proposed in this paper. Based on the characteristics of transient light signals and measured feature points of optical intensity signals, a fitted curve was created in MATLAB. The waveform data were then stored in a programmed memory chip, an AT29C1024, using a SUPERPRO programmer. The control logic was realized inside one EP3C16 FPGA chip. Data readout, data stream caching and a constant-current buck regulator for powering high-brightness LEDs were all controlled by the FPGA. A 12-bit multiplying CMOS digital-to-analog converter (DAC), a DAC7545, and an OPA277 amplifier were used to convert the digital signals to voltage signals. A voltage-controlled current source built from an NPN transistor and an operational amplifier controlled the LED array dimming to simulate the transient light signal. An LM3405A, a 1 A constant-current buck regulator for powering LEDs, was used to simulate the strong background signal in space. Experimental results showed that the scheme stably satisfies the design requirements for a transient light signal simulator.

  11. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have more advantages than traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we design a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is an XC6SL75T, and a Camera Link interface and the AM41V4 CMOS image sensor are used to transmit and acquire image data. The AM41V4 is a 4-megapixel, high-speed, 500 frames per second CMOS image sensor with global shutter and a 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The Camera Link interface uses a DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light from objects is captured by the CMOS detectors, which convert the light to electronic signals and send them to the FPGA. The FPGA processes the data it receives and transmits it through the Camera Link interface, configured in full mode, to a host computer equipped with acquisition cards; the PC then stores, visualizes and further processes the images. The structure and principle of the system are explained, and the hardware and software design is introduced. The FPGA provides the drive clock for the CMOS sensor; the CMOS data are converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents the row-transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.

  12. Design of Viterbi Decoder Based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Xiumin; Zhang, Yang; Chen, Haowei

    To reduce the storage cost in the design of convolutional code decoders, the minimum bit width of the path metrics that does not affect decoding performance is calculated. A simple method is proposed to determine which state nodes the decoder can reach at each clock cycle during the start-up process. A simulation platform to verify the proposed scheme was set up in MATLAB, after which a decoder for the (2,1,8) convolutional code with generator polynomials (561,753) was designed. Comparison with other designs shows that the proposed scheme greatly improves the throughput of the decoder while costing fewer resources.
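
    To make the bit-width question concrete, the per-state kernel of a Viterbi decoder is the add-compare-select (ACS) operation sketched below (a generic illustration in C; metric widths, normalization and the (2,1,8) trellis specifics of the paper are not reproduced here):

        #include <stdint.h>

        /* One add-compare-select step: extend the two candidate paths entering a
           state, keep the smaller metric and record which predecessor survived.
           The width chosen for 'metric' is exactly the storage cost the paper
           tries to minimize; metrics are periodically renormalized so the bounded
           width does not overflow. */
        typedef struct {
            uint16_t metric;    /* surviving path metric */
            uint8_t  decision;  /* surviving predecessor */
        } acs_out_t;

        static acs_out_t acs(uint16_t pm0, uint16_t bm0, uint16_t pm1, uint16_t bm1)
        {
            uint16_t cand0 = (uint16_t)(pm0 + bm0);
            uint16_t cand1 = (uint16_t)(pm1 + bm1);
            acs_out_t out;
            out.decision = (cand1 < cand0) ? 1u : 0u;
            out.metric   = out.decision ? cand1 : cand0;
            return out;
        }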

  13. Design for Review - Applying Lessons Learned to Improve the FPGA Review Process

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Li, Kenneth E.

    2014-01-01

    Flight Field Programmable Gate Array (FPGA) designs are required to be independently reviewed. This paper provides recommendations to Flight FPGA designers to properly prepare their designs for review in order to facilitate the review process, and reduce the impact of the review time in the overall project schedule.

  14. Design of extensible meteorological data acquisition system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhang, Wen; Liu, Yin-hua; Zhang, Hui-jun; Li, Xiao-hui

    2015-02-01

    In order to compensate the tropospheric refraction error generated in satellite navigation and positioning, temperature, humidity and air pressure have to be fed into the relevant models to calculate the value of this error. An XC6SLX16 FPGA is used as the core processor, while the integrated silicon pressure sensor MPX4115A and the digital temperature-humidity sensor SHT75 are used as the basic meteorological parameter detection devices. The core processor controls the real-time sampling of the AD7608 ADC and acquires the serial output data of the SHT75. The data are stored in the BRAM of the XC6SLX16 and used to generate standard meteorological parameters in NMEA format. The whole design is based on the Altium hardware platform and the ISE software platform. The system was described in VHDL and schematic diagrams to realize correct detection of temperature, humidity and air pressure. The 8-channel synchronous sampling capability of the AD7608 and the programmable external resources of the FPGA lay the foundation for adding further analog or digital meteorological signals. The designed meteorological data acquisition system features low cost, high performance and easy expansion.

  15. Hardware design to accelerate PNG encoder for binary mask compression on FPGA

    NASA Astrophysics Data System (ADS)

    Kachouri, Rostom; Akil, Mohamed

    2015-02-01

    PNG (Portable Network Graphics) is a lossless compression method for real-world pictures. Since its specification, it has continued to attract the interest of the image processing community. Indeed, PNG is an extensible file format for portable and well-compressed storage of raster images. In addition, it supports black-and-white (binary mask), grayscale, indexed-color, and truecolor images. Within the framework of the Demat+ project, which intends to propose a complete solution for storage and retrieval of scanned documents, we address in this paper a hardware design to accelerate the PNG encoder for binary mask compression on FPGA. For this, an optimized architecture is proposed as part of a hybrid software/hardware co-operating system. For its evaluation, the newly designed PNG IP has been implemented on the Altera Arria II GX EP2AGX125EF35 FPGA. The experimental results show a good balance between the achieved compression ratio, the computational cost and the hardware resources used.
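
    For context, the per-scanline filtering stage is one of the PNG encoder steps that lends itself to hardware acceleration; the Paeth predictor below follows the definition in the PNG specification (whether the Demat+ IP applies this particular filter to binary masks is not stated in the abstract):

        #include <stdlib.h>

        /* PNG Paeth predictor: predict a byte from its left (a), above (b) and
           upper-left (c) neighbours, choosing whichever is closest to a+b-c.
           The filtered byte written to the stream is the raw byte minus this
           prediction, which is what the subsequent DEFLATE stage compresses. */
        static unsigned char paeth_predict(unsigned char a, unsigned char b, unsigned char c)
        {
            int p  = (int)a + (int)b - (int)c;
            int pa = abs(p - (int)a);
            int pb = abs(p - (int)b);
            int pc = abs(p - (int)c);
            if (pa <= pb && pa <= pc) return a;
            if (pb <= pc)             return b;
            return c;
        }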

  16. Single Event Analysis and Fault Injection Techniques Targeting Complex Designs Implemented in Xilinx-Virtex Family Field Programmable Gate Array (FPGA) Devices

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth; Kim, Kim

    2014-01-01

    An informative session on SRAM FPGA basics. It presents a framework for fault injection techniques applied to Xilinx Field Programmable Gate Arrays (FPGAs), introduces an overlooked time component which illustrates that fault injection is impractical for most real designs as a stand-alone characterization tool, and demonstrates procedures that benefit from fault injection error analysis.

  17. Design and FPGA implementation of VLAN in EPON

    NASA Astrophysics Data System (ADS)

    Liu, Minglai; Lin, Rujian; Huang, Jun

    2005-02-01

    As a promising solution for next-generation broadband access networks, EPON can provide full-service access for voice, video and data applications. However, EPON's standard, IEEE 802.3ah, does not specify a particular supporting mechanism to guarantee the QoS and priority requirements of various services, allowing it to be vendor specific. Meanwhile, how to segregate user traffic to guarantee security remains unsolved. This paper introduces the 802.1Q VLAN (Virtual Local Area Network) technique into the EPON system to solve these problems. Firstly, a brief introduction to the EPON system is given. Secondly, the VLAN solution is presented in detail: unlike VLAN mapping according to port or MAC address in Gigabit Ethernet, EPON's VLAN mapping is based on the LLID tag. Finally, the OLT MAC layer design is given and the FPGA implementation is described in detail. Detailed simulation experiments have been conducted to study the performance and validate the effectiveness of the proposed mechanism.

  18. FPGA-Based Efficient Hardware/Software Co-Design for Industrial Systems with Consideration of Output Selection

    NASA Astrophysics Data System (ADS)

    Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.

    2016-05-01

    This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) setup that is used to validate a systematic sensor selection framework. The systematic sensor selection framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates a significant speed-up in execution time compared to its software-based counterpart model.

  19. SEMICONDUCTOR INTEGRATED CIRCUITS: Design for an IO block array in a tile-based FPGA

    NASA Astrophysics Data System (ADS)

    Guangxin, Ding; Lingdou, Chen; Zhongli, Liu

    2009-08-01

    A design for an IO block array in a tile-based FPGA is presented. Corresponding with the characteristics of the FPGA, each IO cell is composed of a signal path, local routing pool and configurable input/output buffers. Shared programmable registers in the signal path can be configured for the function of JTAG, without specific boundary scan registers/latches, saving layout area. The local routing pool increases the flexibility of routing and the routability of the whole FPGA. An auxiliary power supply is adopted to increase the performance of the IO buffers at different configured IO standards. The organization of the IO block array is described in an architecture description file, from which the array layout can be accomplished through use of an automated layout assembly tool. This design strategy facilitates the design of FPGAs with different capacities or architectures in an FPGA family series. The bond-out schemes of the same FPGA chip in different packages are also considered. The layout is based on SMIC 0.13 μm logic 1P8M salicide 1.2/2.5 V CMOS technology. Our performance is comparable with commercial SRAM-based FPGAs which use a similar process.

  20. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including the video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by a video processing unit with a Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. Current experiments show that the system achieves high-quality video conversion with a minimal board size.

  1. Design and implementation of an FPGA-based timing pulse programmer for pulsed-electron paramagnetic resonance applications.

    PubMed

    Sun, Li; Savory, Joshua J; Warncke, Kurt

    2013-08-01

    The design, construction and implementation of a field-programmable gate array (FPGA)-based pulse programmer for pulsed-electron paramagnetic resonance (EPR) experiments is described. The FPGA pulse programmer offers advantages in design flexibility and cost over previous pulse programmers that are based on commercial digital delay generators, logic pattern generators, and application-specific integrated circuit (ASIC) designs. The FPGA pulse programmer features a novel transition-based algorithm and command protocol that is optimized for the timing structure required for most pulsed magnetic resonance experiments. The algorithm was implemented by using a Spartan-6 FPGA (Xilinx), which provides an easily accessible and cost-effective solution for FPGA interfacing. An auxiliary board was designed for the FPGA-instrument interface, which buffers the FPGA outputs for increased power consumption and capacitive load requirements. Device specifications include: nanosecond pulse formation (transition edge rise/fall times, ≤3 ns), low jitter (≤150 ps), a large number of channels (16 implemented; 48 available), and long pulse duration (no limit). The hardware and software for the device were designed for facile reconfiguration to match user experimental requirements and constraints. Operation of the device is demonstrated and benchmarked by applications to 1-D electron spin echo envelope modulation (ESEEM) and 2-D hyperfine sublevel correlation (HYSCORE) experiments. The FPGA approach is transferable to applications in nuclear magnetic resonance (NMR; magnetic resonance imaging, MRI), and to pulse perturbation and detection bandwidths in spectroscopies up through the optical range.

  2. Design and implementation of an FPGA-based timing pulse programmer for pulsed-electron paramagnetic resonance applications

    PubMed Central

    Sun, Li; Savory, Joshua J.; Warncke, Kurt

    2014-01-01

    The design, construction and implementation of a field-programmable gate array (FPGA)-based pulse programmer for pulsed-electron paramagnetic resonance (EPR) experiments is described. The FPGA pulse programmer offers advantages in design flexibility and cost over previous pulse programmers that are based on commercial digital delay generators, logic pattern generators, and application-specific integrated circuit (ASIC) designs. The FPGA pulse programmer features a novel transition-based algorithm and command protocol that is optimized for the timing structure required for most pulsed magnetic resonance experiments. The algorithm was implemented by using a Spartan-6 FPGA (Xilinx), which provides an easily accessible and cost-effective solution for FPGA interfacing. An auxiliary board was designed for the FPGA-instrument interface, which buffers the FPGA outputs for increased power consumption and capacitive load requirements. Device specifications include: nanosecond pulse formation (transition edge rise/fall times, ≤3 ns), low jitter (≤150 ps), a large number of channels (16 implemented; 48 available), and long pulse duration (no limit). The hardware and software for the device were designed for facile reconfiguration to match user experimental requirements and constraints. Operation of the device is demonstrated and benchmarked by applications to 1-D electron spin echo envelope modulation (ESEEM) and 2-D hyperfine sublevel correlation (HYSCORE) experiments. The FPGA approach is transferable to applications in nuclear magnetic resonance (NMR; magnetic resonance imaging, MRI), and to pulse perturbation and detection bandwidths in spectroscopies up through the optical range. PMID:25076864
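
    The abstract does not spell out the transition-based command protocol; purely as an illustration of the general idea, a pulse pattern can be stored as a short list of (time, output-word) transitions rather than one output word per clock cycle, for example (hypothetical structure and values):

        #include <stdint.h>

        /* Hypothetical transition list: each entry gives the time at which the
           16 output channels change and the new channel states that hold until
           the next entry. Two 16 ns pulses on channel 0 need only four entries,
           however long the idle gaps between them are. */
        typedef struct {
            uint64_t t_ns;      /* time of the transition, in nanoseconds */
            uint16_t channels;  /* output word applied from t_ns onward   */
        } transition_t;

        static const transition_t example_sequence[] = {
            {    0, 0x0001 },   /* channel 0 goes high      */
            {   16, 0x0000 },   /* first pulse ends         */
            {  200, 0x0001 },   /* second pulse starts      */
            {  216, 0x0000 },   /* sequence returns to idle */
        };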

  3. DESIGN AND ANALYSIS OF AN FPGA-BASED ACTIVE FEEDBACK DAMPING SYSTEM

    SciTech Connect

    Xie, Zaipeng; Schulte, Mike; Deibele, Craig Edmond

    2010-01-01

    The Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory is a high-intensity proton-based accelerator that produces neutron beams for neutron-scattering research. As the most powerful pulsed neutron source in the world, the SNS accelerator has experienced an unprecedented beam instability that has a wide bandwidth (0 to 300 MHz) and fast growth time (10 to 100 μs). In this paper, we propose and analyze several FPGA-based designs for an active feedback damping system. This signal processing system is the first FPGA-based design for active feedback damping of wideband instabilities in high-intensity accelerators. It can effectively mitigate instabilities in high-intensity proton beams, reduce radiation, and boost the accelerator's luminosity performance. Unlike existing systems, which are designed using analog components, our FPGA-based active feedback damping system offers programmability while maintaining high performance. To meet the system throughput and latency requirements, our proposed designs are guided by detailed analysis of resource and performance tradeoffs. These designs are mapped onto a reconfigurable platform that includes Xilinx Virtex-II Pro FPGAs and high-speed analog-to-digital and digital-to-analog converters. Our results show that our FPGA-based active feedback damping system can provide increased flexibility and improved signal processing performance that are not feasible with existing analog systems.

  4. A Test Methodology for Determining Space-Readiness of Xilinx SRAM-Based FPGA Designs

    SciTech Connect

    Quinn, Heather M; Graham, Paul S; Morgan, Keith S; Caffrey, Michael P

    2008-01-01

    Using reconfigurable, static random-access memory (SRAM) based field-programmable gate arrays (FPGAs) for space-based computation has been an exciting area of research for the past decade. Since both the circuit and the circuit's state are stored in radiation-sensitive memory, both could be altered by the harsh space radiation environment. Both the circuit and the circuit's state can be protected by triple-modular redundancy (TMR), but applying TMR to FPGA user designs is often an error-prone process. Faulty application of TMR could cause the FPGA user circuit to output incorrect data. This paper describes a three-tiered methodology for testing FPGA user designs for space-readiness. We describe the standard approach to testing FPGA user designs using a particle accelerator, as well as two methods using fault injection and a modeling tool. While accelerator testing is the current 'gold standard' for pre-launch testing, we believe the use of fault injection and modeling tools allows for easy, cheap and uniform access for discovering errors early in the design process.

  5. Evaluation of a segmentation algorithm designed for an FPGA implementation

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Schönermark, Maria; Huber, Felix

    2013-10-01

    The present work has to be seen in the context of real-time on-board image evaluation of optical satellite data. With on-board image evaluation, more useful data can be acquired, the time to obtain requested information can be decreased, and new real-time applications become possible. Because of its relatively high processing power compared to its low power consumption, Field Programmable Gate Array (FPGA) technology has been chosen as an adequate hardware platform for image processing tasks. One fundamental part of image evaluation is image segmentation. It is a basic tool to extract spatial image information, which is very important for many applications such as object detection. Therefore a special segmentation algorithm exploiting the advantages of FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation evaluation is a difficult task: the most common way of evaluating the performance of a segmentation method is still subjective evaluation, in which human experts determine the quality of a segmentation, and this approach does not meet our needs. The evaluation process has to provide a reasonable quality assessment, should be objective, easy to interpret and simple to execute. To meet these requirements, a so-called Segmentation Accuracy Equality norm (SA EQ) was created, which compares the difference of two segmentation results. It can be shown that this norm is suitable as a first quality measure. Due to its objectivity and simplicity, the algorithm has been tested on a specially chosen synthetic test model. In this work the most important results of the quality assessment are presented.

  6. The Design of a FPGA-Based Traffic Light Control System: From Theory to Implementation

    NASA Astrophysics Data System (ADS)

    Rodríguez-Osorio, Ramón Martínez; Otero, Miguel Á. Fernández; Ramón, Miguel Calvo; Navarrete, Luis Cuéllar; Ariet, Leandro De Haro

    Most software-defined radio (SDR) prototypes make use of FPGA (Field Programmable Gate Array) devices for functions, such as digital filtering, that perform operations at high sampling rates. The process from specification and design to implementation in an FPGA requires the use of a large number of simulation tools. In the first stages of the design, high-level tools such as MATLAB are required to perform intensive simulations, whose results help to select the best specifications. Once the main design parameters have been established, the overall design is divided into modules following a hierarchical scheme. Each module is defined using a hardware description language (HDL) such as VHDL or Verilog.

  7. Effectiveness of Internal vs. External SEU Scrubbing Mitigation Strategies in a Xilinx FPGA: Design, Test, and Analysis

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Poivey C.; Petrick, D.; Espinosa, D.; Lesea, Austin; LaBel, K. A.; Friendlich, M; Kim, H; Phan, A.

    2008-01-01

    We compare two scrubbing mitigation schemes for Xilinx FPGA devices. The design of the scrubbers is briefly discussed along with an examination of mitigation limitations. Proton and Heavy Ion data are then presented and analyzed.

  8. [Design of an FPGA-based image guided surgery hardware platform].

    PubMed

    Zou, Fa-Dong; Qin, Bin-Jie

    2008-07-01

    An FPGA-based image guided surgery hardware platform has been designed and implemented in this paper. The hardware platform can provide hardware acceleration for image guided surgery. It comprises a video decoder interface, a DDR memory controller, an I2C bus controller, an interrupt controller and so on. It is able to capture real-time endoscopic video images during surgery and reserves a hardware interface for the image guided surgery algorithm module. PMID:18973036

  9. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task, typically consisting of the following design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from the instruction set but also from an architecture description, including pipelining behavior, which provides design and development tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind typically used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing were the key to achieving higher throughput. A further improvement is possible with today's FPGA technology: today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement a TVP offers compared with traditional RISC or PDSP designs.

  10. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (Charge-Coupled Device) has been widely applied in various fields and plays an important role in modern sensing systems, so research on a real-time image acquisition and display scheme based on a CCD device is of great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges of the system are analyzed and solutions put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and an analog front end (AFE) then processes the CCD image signal, including amplification, filtering, noise elimination, correlated double sampling (CDS), etc. An AD9945 from ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was completed, and real-time display of images was realized. Practical testing indicates that the system's image acquisition and control are stable and reliable, and that its performance meets the actual project requirements.

  11. Design and implementation of a programming circuit in radiation-hardened FPGA

    NASA Astrophysics Data System (ADS)

    Lihua, Wu; Xiaowei, Han; Yan, Zhao; Zhongli, Liu; Fang, Yu; Chen, Stanley L.

    2011-08-01

    We present a novel programming circuit used in our radiation-hardened field programmable gate array (FPGA) chip. This circuit provides the ability to write user-defined configuration data into the FPGA and then read it back. The proposed circuit adopts a direct-access programming point scheme instead of the typical long token shift register chain. It not only saves area but also provides more flexible configuration operations. By configuring the proposed partial configuration control register, our smallest configuration section can conveniently be configured as a single data unit and a flexible partial configuration can easily be implemented. The hierarchical simulation scheme, optimization of the critical path and the elaborate layout plan make this circuit work well. The radiation-hardened-by-design programming point is also introduced. This circuit has been implemented in a static random access memory (SRAM)-based FPGA fabricated in a 0.5 μm partial-depletion silicon-on-insulator CMOS process. Functional tests of the fabricated chip indicate that the programming circuit successfully realizes the desired configuration and read-back functions. Moreover, radiation tests indicate that the programming circuit has a total dose tolerance of 1 × 10^5 rad(Si), dose rate survivability of 1.5 × 10^11 rad(Si)/s and neutron fluence immunity of 1 × 10^14 n/cm^2.

  12. Fault Tolerance Implementation within SRAM Based FPGA Designs based upon Single Event Upset Occurrence Rates

    NASA Technical Reports Server (NTRS)

    Berg, Melanie

    2006-01-01

    Emerging technology is enabling the design community to consistently expand the amount of functionality that can be implemented within Integrated Circuits (ICs). As the number of gates placed within an FPGA increases, the complexity of the design can grow exponentially; consequently, creating reliable circuits has become an incredibly difficult task. In order to ease the complexity of design completion, the commercial design community has developed a very rigid (but effective) design methodology based on synchronous circuit techniques. In order to create faster, smaller and lower-power circuits, transistor geometries and core voltages have decreased. In environments that contain ionizing energy, such a combination will increase the probability of Single Event Upsets (SEUs) and will consequently affect the state space of a circuit. In order to combat the effects of radiation, the aerospace community has developed several "Hardened by Design" (fault tolerant) design schemes. This paper addresses design mitigation schemes targeted at SRAM-based FPGA CMOS devices. Because some mitigation schemes may be overzealous (too much power, area, complexity, etc.), the designer should be conscious that system requirements can ease the amount of mitigation necessary for acceptable operation. Therefore, various degrees of fault tolerance are demonstrated along with an analysis of their effectiveness.

  13. FPGA-based Upgrade to RITS-6 Control System, Designed with EMP Considerations

    SciTech Connect

    Harold D. Anderson, John T. Williams

    2009-07-01

    -of-nanoseconds delay to propagate across the FPGA. This paper discusses the design, installation, and testing of the proposed system upgrade, including failure statistics and modifications to the original design.

  14. [Ultrasonic dynamic focus data reordering design based on FPGA].

    PubMed

    Zhao, Chengxiao; Xiang, Siping

    2014-05-01

    The existing analog reordering and folding technology has the following problems: it causes attenuation of the ultrasonic signal, and it makes beam steering difficult to achieve in color Doppler ultrasonic diagnostic instruments. This article proposes a design method to achieve digital reordering of dynamic focusing data. The digital reordering is composed of two parts: bit reordering, implemented with a multiplier, and byte reordering, using switch selection. The results show that the design requirements can be met using fewer resources.

  15. FPGA-based data processing module design of on-board radiometric calibration in visible/near infrared bands

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Li, Chenyang; Yue, Tao; Liu, Na; Jiang, Linjun; Sun, Yue; Li, Mingyan

    2015-12-01

    FPGA technology has long been applied to on-board radiometric calibration data processing; however, the integration of the FPGA programs is not good enough. For example, some sensors compress remote sensing images and transfer them to the ground station to calculate the calibration coefficients, which affects the timeliness of on-board radiometric calibration. This paper designs an integrated flow chart for on-board radiometric calibration, and FPGA-based radiometric calibration data processing modules are built with System Generator. The paper focuses on analyzing the calculation accuracy of the FPGA-based two-point method and verifies its feasibility. Calibration data were acquired with a hardware platform built from an integrating sphere, a CMOS camera (Canon 60D), ASD spectrometers and a light filter (center wavelength: 690 nm, bandwidth: 45 nm). The platform can simulate single-band on-board radiometric calibration data acquisition in the visible/near-infrared band. The calibration coefficients were then calculated from the acquired data using the FPGA modules. Experimental results show that the camera linearity is above 99%, meeting the experimental requirement. Compared with MATLAB, the calculation accuracy of the FPGA-based two-point method is as follows: the error of the gain value is 0.0053% and the error of the offset value is 0.00038719%. These results meet the experimental accuracy requirement.
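
    The two-point method itself reduces to the simple arithmetic below (generic form with illustrative names; the paper implements the same kind of calculation in fixed-point FPGA logic via System Generator):

        /* Two-point radiometric calibration: a dark and a bright reference level
           with known radiance give the gain and offset. A raw count DN is then
           converted to radiance as L = gain * DN + offset. */
        void two_point_calibration(double dn_low,  double radiance_low,
                                   double dn_high, double radiance_high,
                                   double *gain, double *offset)
        {
            *gain   = (radiance_high - radiance_low) / (dn_high - dn_low);
            *offset = radiance_low - (*gain) * dn_low;
        }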

  16. Fpga based L-band pulse doppler radar design and implementation

    NASA Astrophysics Data System (ADS)

    Savci, Kubilay

    As its name implies, RADAR (Radio Detection and Ranging) is an electromagnetic sensor used for detecting and locating targets from their return signals. Radar systems propagate electromagnetic energy from the antenna, part of which is intercepted by an object. Objects reradiate a portion of the energy, which is captured by the radar receiver. The received signal is then processed for information extraction. Radar systems are widely used for surveillance, air security, navigation, weather hazard detection, as well as remote sensing applications. In this work, an FPGA-based L-band pulse Doppler radar prototype used for target detection, localization and velocity calculation has been built, and a general-purpose pulse Doppler radar processor has been developed. This radar is a ground-based stationary monopulse radar, which transmits a short pulse with a certain pulse repetition frequency (PRF). Return signals from the target are processed and information about their location and velocity is extracted. Discrete components are used for the transmitter and receiver chains. The hardware solution is based on a Xilinx Virtex-6 ML605 FPGA board, responsible for the control of the radar system and the digital signal processing of the received signal, which involves Constant False Alarm Rate (CFAR) detection and pulse Doppler processing. The algorithm is implemented in MATLAB/Simulink using the Xilinx System Generator for DSP tool. The field programmable gate array (FPGA) implementation of the radar system provides the flexibility of changing parameters such as the PRF and pulse length; therefore, it can be used with different radar configurations as well. A VHDL design has been developed for a 1 Gbit Ethernet connection to transfer the digitized return signal and detection results to a PC. A-Scope software has been developed in the C# programming language to display time-domain radar signals and detection results on the PC. Data are processed both in the FPGA chip and on the PC. FPGA uses fixed
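
    As an illustration of the CFAR stage mentioned above, a basic cell-averaging CFAR decision for one range cell might look like the following (generic textbook form in C; window sizes, the threshold scale and the actual FPGA datapath are not taken from the abstract):

        #include <stddef.h>

        /* Cell-averaging CFAR: estimate the local noise level from training cells
           on both sides of the cell under test (skipping guard cells next to it),
           then declare a detection if the cell exceeds a scaled noise estimate. */
        int ca_cfar_detect(const float *power, size_t n, size_t cut,
                           size_t n_guard, size_t n_train, float scale)
        {
            float noise = 0.0f;
            size_t count = 0;
            for (size_t k = n_guard + 1; k <= n_guard + n_train; ++k) {
                if (cut >= k)    { noise += power[cut - k]; ++count; }
                if (cut + k < n) { noise += power[cut + k]; ++count; }
            }
            if (count == 0)
                return 0;
            noise /= (float)count;
            return power[cut] > scale * noise;   /* 1 = target declared */
        }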

  17. Statechart-based design controllers for FPGA partial reconfiguration

    NASA Astrophysics Data System (ADS)

    Łabiak, Grzegorz; Wegrzyn, Marek; Rosado Muñoz, Alfredo

    2015-09-01

    Statechart diagrams and the UML technique can be a vital part of early conceptual modeling. At present there is not much support in hardware design methodologies for the reconfiguration features of reprogrammable devices. The authors try to bridge the gap between an imprecise UML model and a formal HDL description. The key concept of the authors' proposal is to describe the behavior of the digital controller by statechart diagrams and to map some parts of the behavior into reprogrammable logic by means of groups of states which form a sequential automaton. The whole process is illustrated by an example with experimental results.

  18. FPGA Coprocessor Design for an Onboard Multi-Angle Spectro-Polarimetric Imager

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.

    2010-01-01

    A multi-angle spectro-polarimetric imager (MSPI) is an advanced camera system currently under development at JPL for possible future consideration on a satellite-based Aerosol-Cloud-Environment (ACE) interaction study. The light in the optical system is subjected to a complex modulation designed to make the overall system robust against many instrumental artifacts that have plagued such measurements in the past. This scheme involves two photoelastic modulators that are beating in a carefully selected pattern against each other. In order to properly sample this modulation pattern, each of the proposed nine cameras in the system needs to read out its imager array about 1,000 times per second. The onboard processing required to compress this data involves least-squares fits (LSFs) of Bessel functions to data from every pixel in real time, thus requiring an onboard computing system with advanced data processing capabilities in excess of those commonly available for space flight. As a potential solution to meet the MSPI onboard processing requirements, an LSF algorithm was developed on the Xilinx Virtex-4FX60 field programmable gate array (FPGA). In addition to configurable hardware capability, this FPGA includes PowerPC405 microprocessors, which together enable a combined hardware/software processing system. A laboratory demonstration was carried out based on a hardware/software co-designed processing architecture that includes hardware-based data collection and least-squares fitting (computationally intensive) and software-based transcendental function computation (algorithmically complex) on the FPGA. Initial results showed that these calculations can be handled using a combination of the Virtex-4 PowerPC core and the hardware fabric.
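
    To indicate the shape of the computation being split between fabric and processor, the sketch below solves a generic linear least-squares fit against precomputed basis-function samples (the actual Bessel-function bases, fixed-point formats and pipeline structure of the MSPI design are not reproduced; M and all names are illustrative):

        #define M 3   /* number of basis functions (illustrative) */

        /* Fit y[k] ~ sum_j coeff[j] * basis[k][j] by forming the normal equations
           A^T A coeff = A^T y and solving the small M x M system in place. The
           accumulation loop is the hardware-friendly part; the basis samples
           would come from lookup tables or software. */
        void least_squares_fit(const double *y, const double basis[][M], int n,
                               double coeff[M])
        {
            double ata[M][M] = {{0.0}};
            double aty[M]    = {0.0};

            for (int k = 0; k < n; ++k)
                for (int i = 0; i < M; ++i) {
                    aty[i] += basis[k][i] * y[k];
                    for (int j = 0; j < M; ++j)
                        ata[i][j] += basis[k][i] * basis[k][j];
                }

            /* Gaussian elimination without pivoting (adequate for a
               well-conditioned basis). */
            for (int i = 0; i < M; ++i)
                for (int r = i + 1; r < M; ++r) {
                    double f = ata[r][i] / ata[i][i];
                    for (int c = i; c < M; ++c)
                        ata[r][c] -= f * ata[i][c];
                    aty[r] -= f * aty[i];
                }
            for (int i = M - 1; i >= 0; --i) {
                double s = aty[i];
                for (int c = i + 1; c < M; ++c)
                    s -= ata[i][c] * coeff[c];
                coeff[i] = s / ata[i][i];
            }
        }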

  19. Design of a system based on DSP and FPGA for video recording and replaying

    NASA Astrophysics Data System (ADS)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper presents a video recording and replaying system with an architecture of a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system achieves encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals that are displayed on a monitor during aircraft and ship navigation. In the architecture, the DSP is the main processor, used for the large amount of complicated calculation in digital signal processing; the FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design of the system, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer in the system. This transfer mode avoids the data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the Integrated Drive Electronics (IDE) hard disk interface, which offers high-speed data access without relying on a computer. The main functions of the logic on the FPGA are described and screenshots of the behavioral simulation are provided in this paper. In the DSP program design, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM without CPU intervention, preserving the CPU's computing performance and saving processing time. JPEG2000 is implemented to obtain high fidelity in video recording and replaying, and ways of achieving high code performance are briefly presented. The data processing capability of the system is satisfactory and the smoothness of the replayed video is acceptable. By right of its design flexibility and reliable operation, the system based on DSP and FPGA

  20. Reliability concerns with logical constants in Xilinx FPGA designs

    SciTech Connect

    Quinn, Heather M; Graham, Paul; Morgan, Keith; Ostler, Patrick; Allen, Greg; Swift, Gary; Tseng, Chen W

    2009-01-01

    In Xilinx Field Programmable Gate Arrays, logical constants, which ground unused inputs and provide constants for designs, are implemented in SEU-susceptible logic. In the past, these logical constants have been shown to cause the user circuit to output bad data and were not resettable through off-line reconfiguration. In the more recent devices, logical constants are less problematic, though mitigation should still be considered for high-reliability applications. In conclusion, we have presented a number of reliability concerns with logical constants in the Xilinx Virtex family. There are two main categories of logical constants: implicit and explicit logical constants. In all of the Virtex devices, the implicit logical constants are implemented using half latches, which in the most recent devices are several orders of magnitude smaller than configuration bit cells. Explicit logical constants are implemented exclusively using constant LUTs in the Virtex-I and Virtex-II, and use a combination of constant LUTs and architectural posts to the ground plane in the Virtex-4. We have also presented mitigation methods and options for these devices. While SEUs in implicit and some types of explicit logical constants can cause data corruption, the chance of failure from these components is now much smaller than it was in the Virtex-I device. Therefore, in many cases, mitigation might not be necessary, except in extremely high-reliability situations.

  1. FPGA Verification Accelerator (FVAX)

    NASA Technical Reports Server (NTRS)

    Oh, Jane; Burke, Gary

    2008-01-01

    Is verification acceleration possible? Increasing the visibility of the internal nodes of the FPGA results in much faster debug time, and forcing internal signals directly allows a problem condition to be set up very quickly. Is this all? No; this is part of a comprehensive effort to improve the JPL FPGA design and V&V process.

  2. Design and quantitative analysis of parametrisable eFPGA-architectures for arithmetic

    NASA Astrophysics Data System (ADS)

    Neumann, B.; von Sydow, T.; Blume, H.; Noll, T. G.

    2006-09-01

    Future SoCs will feature embedded FPGAs (eFPGAs) to enable flexible and efficient implementations of high-throughput digital signal processing applications. Current research projects on and emerging products containing FPGAs are mainly based on "standard FPGA"-architectures that are optimised for a very wide range of applications. The implementation costs of these FPGAs are dominated by a very complex interconnect network. This paper presents a method to improve the efficiency of eFPGAs by tailoring them for a certain application domain using a parametrisable architecture template derived from the results of a systematic evaluation of the requirements of the application domain. Two different architectures are discussed, a reference architecture to illustrate the methodology and possible optimisation measures as well as a specialised arithmetic-oriented eFPGA for applications like correlators, decoders, and filters. For the arithmetic-oriented architecture, a novel logic element (LE) and a special interconnect architecture that was designed with respect to the connectivity characteristics of regular datapaths, are presented. For both architecture templates, physically optimised implementations based on an automatic design approach have been created. As a first cost comparison of these implementations with standard FPGAs, the LE-density (number of logic elements per mm^2) is evaluated. For the arithmetic-oriented architecture, the LE-density could be increased by an order of magnitude compared to standard architectures.

  3. Logic design and implementation of FPGA for a high frame rate ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Anjun; Wang, Jing; Lu, Jian-Yu

    2002-05-01

    Recently, a method has been developed for high frame rate medical imaging [Jian-yu Lu, "2D and 3D high frame rate imaging with limited diffraction beams," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44(4), 839-856 (1997)]. To realize this method, a complicated system [multiple-channel simultaneous data acquisition, large memory in each channel for storing up to 16 seconds of data at 40 MHz and 12-bit resolution, time-variable-gain (TGC) control, Doppler imaging, harmonic imaging, as well as coded transmissions] is designed. Due to the complexity of the system, a field programmable gate array (FPGA) (Xilinx Spartan II) is used. In this presentation, the design and implementation of the FPGA for the system will be reported. This includes the synchronous dynamic random access memory (SDRAM) controller and other system controllers, time sharing for auto-refresh of the SDRAMs to reduce peak power, transmission and imaging modality selections, ECG data acquisition and synchronization, a 160 MHz delay locked loop (DLL) for accurate timing, and data transfer via either a parallel port or a PCI bus for post image processing. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

  4. Dynamic high-speed acquisition system design of transmission error with USB based on LabVIEW and FPGA

    NASA Astrophysics Data System (ADS)

    Zheng, Yong; Chen, Yan

    2013-10-01

    Designing a dynamic acquisition system for real-time detection of transmission chain error is important for improving the machining accuracy of machine tools. In this paper, a USB controller and an FPGA are used for the hardware platform design, LabVIEW is used to build the user application, and NI-VISA is used to develop the USB drivers, ultimately realizing the dynamic acquisition system for transmission error.

  5. Design and realization of data acquisition system of FTS based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Haiying; Li, Yue

    2014-11-01

    Earth observation is an important field of infrared remote sensing. Hyperspectral remote sensing plays an important role in weather forecasting, environmental protection, agricultural production and geological survey. Fourier-transform spectrometers (FTS) based on the theory of the Michelson interferometer have been used successfully to view the Earth from satellite-based instruments, and FTS technology is an important research direction. This paper describes the application of the FTS, presents analysis and research on interference-signal sampling and acquisition, and gives a solution in which an FPGA is used to perform parallel capture of the signal. In conclusion, this design accomplishes the multi-channel, high-speed acquisition and transmission of the interferometer signal, which is the basis for further spectrum inversion and application.

  6. Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera

    PubMed Central

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel, pipelined architecture that implements the algorithm without external memory. Although the BRAM usage of the device increases considerably, real-time constraints can still be met thanks to the high signal processing throughput obtained through parallelism and simultaneous access to several memories. Quantitative results with 16-bit precision show that performance is very close to that of the original MATLAB implementation of the algorithm. PMID:22163404
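
    A minimal Python sketch of the min-sum belief propagation update used for depth/disparity labelling follows; it is reduced to a single scanline (a chain) purely for illustration, and the cost functions, label count and iteration count are assumptions rather than details of the CAFADIS implementation.

        import numpy as np

        def scanline_bp(data_cost, smooth_weight=1.0, n_iters=5):
            """Min-sum belief propagation along one image scanline.

            data_cost: (n_pixels, n_labels) matching cost per disparity label.
            Returns the per-pixel label after message passing; a 1D
            simplification of the 2D grid used for full stereo BP.
            """
            n, L = data_cost.shape
            labels = np.arange(L)
            # Truncated-linear smoothness cost between neighbouring pixels.
            pairwise = smooth_weight * np.minimum(np.abs(labels[:, None] - labels[None, :]), 2)

            fwd = np.zeros((n, L))   # messages passed left -> right
            bwd = np.zeros((n, L))   # messages passed right -> left
            for _ in range(n_iters):
                for i in range(1, n):
                    m = data_cost[i - 1] + fwd[i - 1]
                    fwd[i] = np.min(m[:, None] + pairwise, axis=0)
                    fwd[i] -= fwd[i].min()          # normalise to avoid drift
                for i in range(n - 2, -1, -1):
                    m = data_cost[i + 1] + bwd[i + 1]
                    bwd[i] = np.min(m[:, None] + pairwise, axis=0)
                    bwd[i] -= bwd[i].min()

            belief = data_cost + fwd + bwd
            return np.argmin(belief, axis=1)

        # Example: 8 pixels, 4 disparity labels with random matching costs.
        costs = np.random.rand(8, 4)
        print(scanline_bp(costs))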

  7. Design Activity Framework for Visualization Design.

    PubMed

    McKenna, Sean; Mazur, Dominika; Agutter, James; Meyer, Miriah

    2014-12-01

    An important aspect in visualization design is the connection between what a designer does and the decisions the designer makes. Existing design process models, however, do not explicitly link back to models for visualization design decisions. We bridge this gap by introducing the design activity framework, a process model that explicitly connects to the nested model, a well-known visualization design decision model. The framework includes four overlapping activities that characterize the design process, with each activity explicating outcomes related to the nested model. Additionally, we describe and characterize a list of exemplar methods and how they overlap among these activities. The design activity framework is the result of reflective discussions from a collaboration on a visualization redesign project, the details of which we describe to ground the framework in a real-world design process. Lastly, from this redesign project we provide several research outcomes in the domain of cybersecurity, including an extended data abstraction and rich opportunities for future visualization research. PMID:26356933

  8. Design Activity Framework for Visualization Design.

    PubMed

    McKenna, Sean; Mazur, Dominika; Agutter, James; Meyer, Miriah

    2014-12-01

    An important aspect in visualization design is the connection between what a designer does and the decisions the designer makes. Existing design process models, however, do not explicitly link back to models for visualization design decisions. We bridge this gap by introducing the design activity framework, a process model that explicitly connects to the nested model, a well-known visualization design decision model. The framework includes four overlapping activities that characterize the design process, with each activity explicating outcomes related to the nested model. Additionally, we describe and characterize a list of exemplar methods and how they overlap among these activities. The design activity framework is the result of reflective discussions from a collaboration on a visualization redesign project, the details of which we describe to ground the framework in a real-world design process. Lastly, from this redesign project we provide several research outcomes in the domain of cybersecurity, including an extended data abstraction and rich opportunities for future visualization research.

  9. Design and implementation of low power clock gated 64-bit ALU on ultra scale FPGA

    NASA Astrophysics Data System (ADS)

    Gupta, Ashutosh; Murgai, Shruti; Gulati, Anmol; Kumar, Pradeep

    2016-03-01

    A 64-bit energy-efficient Arithmetic and Logic Unit using a negative-latch-based clock gating technique is designed in this paper. The 64-bit ALU is built from multiplexer-based full adder cells and is controlled by a gated clock generated by a negative-latch-based circuit. The circuit has been synthesized on a Kintex FPGA through Xilinx ISE Design Suite 14.7 using 28 nm technology in Verilog HDL, simulated on ModelSim 10.3c, and verified using SystemVerilog on QuestaSim in a UVM environment. We have achieved 74.07%, 92.93% and 95.53% reduction in total clock power, 89.73%, 91.35% and 92.85% reduction in I/O power, 67.14%, 62.84% and 74.34% reduction in dynamic power, and 25.47%, 29.05% and 46.13% reduction in total supply power at 20 MHz, 200 MHz and 2 GHz respectively. The power has been calculated using the XPower Analyzer tool of Xilinx ISE Design Suite 14.3.

  10. Design of an MR image processing module on an FPGA chip

    PubMed Central

    Li, Limin; Wyrwicz, Alice M.

    2015-01-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. PMID:25909646

  11. Design of an MR image processing module on an FPGA chip.

    PubMed

    Li, Limin; Wyrwicz, Alice M

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.

  12. Design of an MR image processing module on an FPGA chip

    NASA Astrophysics Data System (ADS)

    Li, Limin; Wyrwicz, Alice M.

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
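
    The transpose-free 2D FFT idea in the three records above can be mimicked in software by running 1D FFTs along each axis with different strides, so no intermediate transposed copy of the matrix is needed. A small numpy sketch follows; the 128 x 128 size matches the reported tests, but the random k-space data is only a stand-in for scanner output.

        import numpy as np

        def reconstruct_2d(kspace):
            """Reconstruct an image from k-space with two passes of 1D FFTs.

            Pass 1 runs along rows (contiguous addressing) and pass 2 along
            columns (strided addressing), so no explicit matrix transposition
            is performed between the passes.
            """
            tmp = np.fft.ifft(kspace, axis=1)   # row pass
            img = np.fft.ifft(tmp, axis=0)      # column pass
            return np.abs(np.fft.fftshift(img))

        kspace = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
        image = reconstruct_2d(kspace)
        print(image.shape)   # (128, 128)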

  13. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph; Mortensen, Dale

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside a FPGA. A FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.

  14. Design and realization of the real-time spectrograph controller for LAMOST based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Jianing; Wu, Liyan; Zeng, Yizhong; Dai, Songxin; Hu, Zhongwen; Zhu, Yongtian; Wang, Lei; Wu, Zhen; Chen, Yi

    2008-08-01

    A large Schmidt reflecting telescope, the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), is being built in China; it has an effective aperture of 4 meters and can observe the spectra of as many as 4000 objects simultaneously. To handle such a large number of objects, the dispersion part is composed of a set of 16 multipurpose fiber-fed double-beam Schmidt spectrographs, each of which has about ten movable components that are accommodated and manipulated in real time by a controller. An industrial Ethernet network connects these 16 spectrograph controllers. The light from stars is fed to the entrance slits of the spectrographs with optical fibers. In this paper, we mainly introduce the design and realization of our real-time controller for the spectrograph. Our design uses the System-On-a-Programmable-Chip (SOPC) technique based on a Field Programmable Gate Array (FPGA) and realizes control of the spectrographs through the NIOS II soft-core embedded processor. We encapsulate the stepper motor controller as an intellectual property (IP) core and reuse it, greatly simplifying the design process and shortening the development time. Under the embedded operating system μC/OS-II, a multi-task control program has been written to realize the real-time control of the movable parts of the spectrographs. At present, a number of such controllers have been applied in the spectrographs of LAMOST.

  15. GBT link testing and performance measurement on PCIe40 and AMC40 custom design FPGA boards

    NASA Astrophysics Data System (ADS)

    Mitra, Jubin; Khan, Shuaib A.; Barros Marin, Manoel; Cachemiche, Jean-Pierre; David, Erno; Hachon, Frédéric; Rethore, Frédéric; Kiss, Tivadar; Baron, Sophie; Kluge, Alex; Nayak, Tapan K.

    2016-03-01

    The high-energy physics experiments at CERN's Large Hadron Collider (LHC) are preparing for Run 3, which is foreseen to start in the year 2021. Data from the high-radiation environment of the detector front-end electronics are transported through GBT (Gigabit Transceiver) links to the data processing units located in low-radiation zones. The present work discusses the GBT link performance study carried out on custom FPGA boards, the clock calibration logic, and its implementation in the new Arria 10 FPGA.

  16. Design Architecture and Initial Results from an FPGA Based Digital Receiver for Multistatic Meteor Measurements

    NASA Astrophysics Data System (ADS)

    Palo, Scott; Vaudrin, Cody

    Defined by a minimal RF front-end followed by an analog-to-digital converter (ADC) and controlled by a reconfigurable logic device (FPGA), the digital receiver will replace conventional heterodyning analog receivers currently in use by the COBRA meteor radar. A basic hardware overview touches on the major digital receiver components, theory of operation and data handling strategies. We address concerns within the community regarding the implementation of digital receivers in small-scale scientific radars, and outline the numerous benefits with a focus on reconfigurability. From a remote sensing viewpoint, having complete visibility into a band of the EM spectrum allows an experiment designer to focus on parameter estimation rather than hardware limitations. Finally, we show some basic multistatic receiver configurations enabled through GPS time synchronization. Currently, the digital receiver is configured to facilitate range and radial velocity determination of meteors in the MLT region for use with the COBRA meteor radar. Initial measurements from data acquired at Platteville, Colorado and Tierra Del Fuego in Argentina will be presented. We show an improvement in detection rates compared to conventional analog systems. Scientific justification for a digital receiver is clearly made by the presentation of RTI plots created using data acquired from the receiver. These plots reveal an interesting phenomenon concerning vacillating power structures in a select number of meteor trails.
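
    The core of such a digital receiver, replacing the analog heterodyning stages with numerically controlled mixing, filtering and decimation after the ADC, can be sketched in a few lines of Python. The sample rate, centre frequency, filter length and decimation factor below are illustrative assumptions, not parameters of the COBRA system.

        import numpy as np
        from scipy import signal

        def digital_downconvert(samples, fs, f_center, decim):
            """Mix real ADC samples to baseband, low-pass filter and decimate."""
            n = np.arange(len(samples))
            nco = np.exp(-2j * np.pi * f_center * n / fs)      # complex mixer (NCO)
            mixed = samples * nco
            taps = signal.firwin(129, cutoff=fs / (2 * decim), fs=fs)
            filtered = signal.lfilter(taps, [1.0], mixed)      # anti-alias low-pass
            return filtered[::decim]                           # decimate

        fs = 40e6                             # assumed ADC sample rate
        t = np.arange(4096) / fs
        rf = np.cos(2 * np.pi * 10.2e6 * t)   # a tone 200 kHz above a 10 MHz carrier
        baseband = digital_downconvert(rf, fs, f_center=10e6, decim=20)
        print(baseband.shape)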

  17. Hardware and Software Design of FPGA-based PCIe Gen3 interface for APEnet+ network interconnect system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2015-12-01

    In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.

  18. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  19. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside a FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.

  20. ELPSA as a Lesson Design Framework

    ERIC Educational Resources Information Center

    Lowrie, Tom; Patahuddin, Sitti Maesuri

    2015-01-01

    This paper offers a framework for a mathematics lesson design that is consistent with the way we learn about, and discover, most things in life. In addition, the framework provides a structure for identifying how mathematical concepts and understanding are acquired and developed. This framework is called ELPSA and represents five learning…

  1. FPGA design for dual-spectrum Visual Scene Preparation in retinal prosthesis.

    PubMed

    Al Yaman, Musa; Al-Atabany, Walid; Bystrov, Alex; Degenaar, Patrick

    2014-01-01

    A method of Visual Scene Preparation for patients suffering from Retinitis Pigmentosa is implemented in hardware for the first time. The scene is captured with two cameras, one visible-spectrum and one infrared, in order to distinguish between live and non-live objects. The live objects are subsequently emphasized in the output image, thus helping a patient to see the most significant detail with the healthy part of the retina. The implementation uses the Verilog language and an FPGA platform. A system prototype is analyzed and compared to MATLAB results.
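
    One plausible reading of the dual-spectrum scene preparation, brightening regions that are warm in the infrared frame (assumed to be live objects) while dimming the rest, is sketched below in Python. The thresholding and gain scheme is a guess for illustration only, not the algorithm actually implemented in the cited hardware.

        import numpy as np

        def emphasize_live_objects(visible, infrared, ir_threshold=0.6, gain=1.5):
            """Emphasize pixels that are warm in the infrared channel."""
            live_mask = infrared > ir_threshold
            out = visible * 0.5                                   # dim the background
            out[live_mask] = np.clip(visible[live_mask] * gain, 0.0, 1.0)
            return out

        visible = np.random.rand(240, 320)    # normalised camera frames
        infrared = np.random.rand(240, 320)
        frame = emphasize_live_objects(visible, infrared)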

  2. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the Cocktail Party Problem prevail across many fields, creating a need for Blind Source Separation (BSS). BSS has become important in several areas, including array processing, communications, medical signal processing, speech processing, wireless communication, audio, acoustics and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms, which prove useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation of mixed signals in software, and of their implementation in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm was the one that required the least complexity and fewest resources while effectively separating the mixed sources; this was the EASI algorithm. The EASI ICA was implemented in hardware on a Field Programmable Gate Array (FPGA) to analyze its performance in real time.
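
    For reference, the serial EASI update selected in the study can be written in a few lines; the Python sketch below uses a tanh nonlinearity and a fixed step size, both of which are assumptions rather than the exact choices of the cited work.

        import numpy as np

        def easi_separate(mixtures, mu=0.002, n_passes=5):
            """EASI blind source separation (serial update, sketch).

            mixtures: (n_sources, n_samples) mixed signals.
            Update: W <- W - mu*(y y^T - I + g(y) y^T - y g(y)^T) W with g = tanh.
            """
            m, n = mixtures.shape
            W = np.eye(m)
            I = np.eye(m)
            for _ in range(n_passes):
                for t in range(n):
                    x = mixtures[:, t:t + 1]          # column vector
                    y = W @ x
                    g = np.tanh(y)
                    W -= mu * (y @ y.T - I + g @ y.T - y @ g.T) @ W
            return W @ mixtures                        # estimated sources

        # Two synthetic sources mixed by a random matrix.
        t = np.linspace(0, 1, 2000)
        s = np.vstack([np.sin(2 * np.pi * 7 * t), np.sign(np.sin(2 * np.pi * 3 * t))])
        x = np.random.rand(2, 2) @ s
        recovered = easi_separate(x)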

  3. Design of an Oximeter Based on LED-LED Configuration and FPGA Technology

    PubMed Central

    Stojanovic, Radovan; Karadaglic, Dejan

    2013-01-01

    A fully digital photoplethysmographic (PPG) sensor and actuator has been developed. The sensing circuit uses one Light Emitting Diode (LED) for emitting light into human tissue and one LED for detecting the reflectance light from human tissue. A Field Programmable Gate Array (FPGA) is used to control the LEDs and determine the PPG and Blood Oxygen Saturation (SpO2). The configurations with two LEDs and four LEDs are developed for measuring PPG signal and Blood Oxygen Saturation (SpO2). N-LEDs configuration is proposed for multichannel SpO2 measurements. The approach resulted in better spectral sensitivity, increased and adjustable resolution, reduced noise, small size, low cost and low power consumption. PMID:23291575
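
    Downstream of the LED-LED front end, SpO2 is commonly derived from the two PPG channels with the textbook ratio-of-ratios calculation; the Python sketch below uses the generic SpO2 ≈ 110 - 25R calibration curve, which is a common approximation and not necessarily the device-specific calibration used in the cited sensor.

        import numpy as np

        def spo2_from_ppg(red, infrared):
            """Estimate SpO2 from red and infrared PPG waveforms (ratio of ratios)."""
            def ac_dc(x):
                return np.max(x) - np.min(x), np.mean(x)   # AC swing, DC level

            ac_r, dc_r = ac_dc(red)
            ac_ir, dc_ir = ac_dc(infrared)
            R = (ac_r / dc_r) / (ac_ir / dc_ir)
            return 110.0 - 25.0 * R                        # generic calibration curve

        t = np.linspace(0, 5, 500)
        red = 1.00 + 0.02 * np.sin(2 * np.pi * 1.2 * t)        # synthetic pulses
        infrared = 1.00 + 0.03 * np.sin(2 * np.pi * 1.2 * t)
        print(spo2_from_ppg(red, infrared))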

  4. FPGA design and implementation of a fast pixel purity index algorithm for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Valencia, David; Plaza, Antonio; Vega-Rodríguez, Miguel A.; Pérez, Rosa M.

    2005-11-01

    Hyperspectral imagery is a class of image data which is used in many scientific areas, most notably, medical imaging and remote sensing. It is characterized by a wealth of spatial and spectral information. Over the last years, many algorithms have been developed with the purpose of finding "spectral endmembers," which are assumed to be pure signatures in remotely sensed hyperspectral data sets. Such pure signatures can then be used to estimate the abundance or concentration of materials in mixed pixels, thus allowing sub-pixel analysis which is crucial in many remote sensing applications due to current sensor optics and configuration. One of the most popular endmember extraction algorithms has been the pixel purity index (PPI), available from Kodak's Research Systems ENVI software package. This algorithm is very time consuming, a fact that has generally prevented its exploitation in valid response times in a wide range of applications, including environmental monitoring, military applications or hazard and threat assessment/tracking (including wildland fire detection, oil spill mapping and chemical and biological standoff detection). Field programmable gate arrays (FPGAs) are hardware components with millions of gates. Their reprogrammability and high computational power makes them particularly attractive in remote sensing applications which require a response in near real-time. In this paper, we present an FPGA design for implementation of PPI algorithm which takes advantage of a recently developed fast PPI (FPPI) algorithm that relies on software-based optimization. The proposed FPGA design represents our first step toward the development of a new reconfigurable system for fast, onboard analysis of remotely sensed hyperspectral imagery.
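
    The PPI computation being accelerated is easy to state in software: project every pixel spectrum onto many random unit vectors ("skewers") and count how often each pixel lands at an extreme of a projection. A compact Python sketch follows; the skewer count and data sizes are illustrative, and the FPGA design itself restructures this loop for hardware parallelism.

        import numpy as np

        def pixel_purity_index(pixels, n_skewers=1000, seed=0):
            """Pixel Purity Index scores: high counts mark candidate endmembers.

            pixels: (n_pixels, n_bands) hyperspectral data.
            """
            rng = np.random.default_rng(seed)
            n_pixels, n_bands = pixels.shape
            counts = np.zeros(n_pixels, dtype=np.int64)
            for _ in range(n_skewers):
                skewer = rng.standard_normal(n_bands)
                skewer /= np.linalg.norm(skewer)           # random unit vector
                proj = pixels @ skewer
                counts[np.argmax(proj)] += 1               # extreme pixels score
                counts[np.argmin(proj)] += 1
            return counts

        data = np.random.rand(5000, 50)          # 5000 pixels, 50 spectral bands
        scores = pixel_purity_index(data)
        print(scores.argsort()[-5:])             # indices of the purest pixels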

  5. FPGA Vision Data Architecture

    NASA Technical Reports Server (NTRS)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  6. A visualization framework for design and evaluation

    NASA Astrophysics Data System (ADS)

    Blundell, Benjamin J.; Ng, Gary; Pettifer, Steve

    2006-01-01

    The creation of compelling visualisation paradigms is a craft often dominated by intuition and issues of aesthetics, with relatively few models to support good design. The majority of problem cases are approached by simply applying a previously evaluated visualisation technique. A large body of work exists covering the individual aspects of visualisation design, such as the human cognition aspects, visualisation methods for specific problem areas, psychology studies and so forth, yet most frameworks regarding visualisation are applied after-the-fact as an evaluation measure. We present an extensible framework for visualisation aimed at structuring the design process, increasing decision traceability and delineating the notions of function, aesthetics and usability. The framework can be used to derive a set of requirements for good visualisation design and to evaluate existing visualisations, presenting possible improvements. Our framework achieves this by being both broad and general, built on top of existing works, with hooks for extensions and customizations. This paper shows how existing theories of information visualisation fit into the scheme, presents our experience in the application of this framework on several designs, and offers our evaluation of the framework and the designs studied.

  7. Initial Multidisciplinary Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Ozoroski, L. P.; Geiselhart, K. A.; Padula, S. L.; Li, W.; Olson, E. D.; Campbell, R. L.; Shields, E. W.; Berton, J. J.; Gray, J. S.; Jones, S. M.; Naiman, C. G.; Seidel, J. A.; Moore, K. T.; Naylor, B. A.; Townsend, S.

    2010-01-01

    Within the Supersonics (SUP) Project of the Fundamental Aeronautics Program (FAP), an initial multidisciplinary design & analysis framework has been developed. A set of low- and intermediate-fidelity discipline design and analysis codes were integrated within a multidisciplinary design and analysis framework and demonstrated on two challenging test cases. The first test case demonstrates an initial capability to design for low boom and performance. The second test case demonstrates rapid assessment of a well-characterized design. The current system has been shown to greatly increase the design and analysis speed and capability, and many future areas for development were identified. This work has established a state-of-the-art capability for immediate use by supersonic concept designers and systems analysts at NASA, while also providing a strong base to build upon for future releases as more multifidelity capabilities are developed and integrated.

  8. An FPGA-based design of a modular approach for integral images in a real-time face detection system

    NASA Astrophysics Data System (ADS)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2009-05-01

    The first step in a facial recognition system is to find and extract human faces in a static image or video frame. Most face detection methods are based on statistical models that can be trained and then used to classify faces. These methods are effective but the main drawback is speed because a massive number of sub-windows at different image scales are considered in the detection procedure. A robust face detection technique based on an encoded image known as an "integral image" has been proposed by Viola and Jones. The use of an integral image helps to reduce the number of operations to access a sub-image to a relatively small and fixed number. Additional speedup is achieved by incorporating a cascade of simple classifiers to quickly eliminate non-face sub-windows. Even with the reduced number of accesses to image data to extract features in the Viola-Jones algorithm, the number of memory accesses is still too high to support real-time operations for high resolution images or video frames. The proposed hardware design in this research work employs a modular approach to represent the "integral image" for this memory-intensive application. An efficient memory management strategy is also proposed to aggressively utilize embedded memory modules to reduce interaction with external memory chips. The proposed design is targeted for a low-cost FPGA prototype board for a cost-effective face detection/recognition system.
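
    The integral-image trick that the modular design stores and serves is summarized by the sketch below: after one cumulative-sum pass, the sum of any rectangular sub-window is obtained from just four reads, which is what keeps the per-feature memory traffic small. The Python code is a functional reference only, not a model of the proposed memory architecture.

        import numpy as np

        def integral_image(img):
            """Cumulative-sum ("integral") image used by the Viola-Jones detector."""
            return img.cumsum(axis=0).cumsum(axis=1)

        def box_sum(ii, r0, c0, r1, c1):
            """Sum of img[r0:r1+1, c0:c1+1] using four reads of the integral image."""
            total = ii[r1, c1]
            if r0 > 0:
                total -= ii[r0 - 1, c1]
            if c0 > 0:
                total -= ii[r1, c0 - 1]
            if r0 > 0 and c0 > 0:
                total += ii[r0 - 1, c0 - 1]
            return total

        img = np.arange(36, dtype=np.float64).reshape(6, 6)
        ii = integral_image(img)
        assert box_sum(ii, 1, 1, 3, 4) == img[1:4, 1:5].sum()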

  9. FPGA-based design and implementation of arterial pulse wave generator using piecewise Gaussian-cosine fitting.

    PubMed

    Wang, Lu; Xu, Lisheng; Zhao, Dazhe; Yao, Yang; Song, Dan

    2015-04-01

    Because arterial pulse waves contain vital information related to the condition of the cardiovascular system, considerable attention has been devoted to the study of pulse waves in recent years. Accurate acquisition is essential to investigate arterial pulse waves. However, at the stage of developing equipment for acquiring and analyzing arterial pulse waves, specific pulse signals may be unavailable for debugging and evaluating the system under development. To produce test signals that reflect specific physiological conditions, in this paper, an arterial pulse wave generator has been designed and implemented using a field programmable gate array (FPGA), which can produce the desired pulse waves according to the feature points set by users. To reconstruct a periodic pulse wave from the given feature points, a method known as piecewise Gaussian-cosine fitting is also proposed in this paper. Using a test database that contains four types of typical pulse waves with each type containing 25 pulse wave signals, the maximum residual error of each sampling point of the fitted pulse wave in comparison with the real pulse wave is within 8%. In addition, the function for adding baseline drift and three types of noises is integrated into the developed system because the baseline occasionally wanders, and noise needs to be added for testing the performance of the designed circuits and the analysis algorithms. The proposed arterial pulse wave generator can be considered as a special signal generator with a simple structure, low cost and compact size, which can also provide flexible solutions for many other related research purposes.
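
    The idea of rebuilding a periodic pulse wave from user-set feature points can be sketched in Python as below; each feature point contributes a Gaussian bump on a slow cosine baseline. This is only a generic stand-in for the paper's piecewise Gaussian-cosine fitting, and the segment parameters and feature points are assumptions.

        import numpy as np

        def synthesize_pulse(feature_points, fs=500, period=1.0):
            """One period of a pulse wave from (time, amplitude, width) feature points."""
            t = np.arange(0, period, 1.0 / fs)
            wave = 0.05 * (1 - np.cos(2 * np.pi * t / period))   # slow cosine baseline
            for centre, amp, width in feature_points:
                wave = wave + amp * np.exp(-0.5 * ((t - centre) / width) ** 2)
            return t, wave

        # Illustrative feature points: systolic peak, dicrotic notch region, dicrotic wave.
        features = [(0.15, 1.00, 0.04), (0.32, 0.35, 0.05), (0.45, 0.25, 0.06)]
        t, pulse = synthesize_pulse(features)
        print(pulse.max())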

  10. Validation of an FPGA fault simulator.

    SciTech Connect

    Wirthlin, M. J.; Johnson, D. E.; Graham, P. S.; Caffrey, M. P.

    2003-01-01

    This work describes the radiation testing of a fault simulation tool used to study the behavior of FPGA circuits in the presence of configuration memory upsets. There is increasing interest in the use of Field Programmable Gate Arrays (FPGAs) in space-based applications such as remote sensing [1]. The use of reconfigurable Field Programmable Gate Arrays (FPGAs) within a spacecraft allows the use of digital circuits that are both application-specific and reprogrammable. Unlike application-specific integrated circuits (ASICs), FPGAs can be configured after the spacecraft has been launched. This flexibility allows the same FPGA resources to be used for multiple instruments, missions, or changing spacecraft objectives. Errors in an FPGA design can be resolved by fixing the incorrect design and reconfiguring the FPGA with an updated configuration bitstream. Further, custom circuit designs can be created to avoid FPGA resources that have failed during the course of the spacecraft mission.

  11. Structural Analysis in a Conceptual Design Framework

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Robinson, Jay H.; Eldred, Lloyd B.

    2012-01-01

    Supersonic aircraft designers must shape the outer mold line of the aircraft to improve multiple objectives, such as mission performance, cruise efficiency, and sonic-boom signatures. Conceptual designers have demonstrated an ability to assess these objectives for a large number of candidate designs. Other critical objectives and constraints, such as weight, fuel volume, aeroelastic effects, and structural soundness, are more difficult to address during the conceptual design process. The present research adds both static structural analysis and sizing to an existing conceptual design framework. The ultimate goal is to include structural analysis in the multidisciplinary optimization of a supersonic aircraft. Progress towards that goal is discussed and demonstrated.

  12. Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian

    2015-12-01

    With growing attention to sea transportation and trade safety, the requirements on the efficiency and accuracy of moving ship tracking are becoming higher. Therefore, a systematic design for on-board moving ship tracking based on an FPGA is proposed, which uses the Adaptive Inter-Frame Difference (AIFD) method to track ships moving at different speeds. Because the Frame Difference (FD) method is simple but computationally heavy, it is well suited to parallel implementation on an FPGA. However, the Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship appears very small (depicted by only dozens of pixels) and moves slowly; with invariant FIs, the accuracy of FD for moving ship tracking is unsatisfactory and the computation is highly redundant. We therefore adapt FD based on adaptive extraction of key frames for moving ship tracking. An FPGA development board of the Xilinx Kintex-7 series is used for simulation. The experiments show that, compared with the traditional FD method, the proposed one achieves higher accuracy of moving ship tracking and meets the requirement of real-time tracking at high image resolution.
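
    The basic operations being parallelized, a thresholded frame difference plus a rule that lengthens or shortens the frame interval depending on how much motion the small, slow-moving ship produces, are sketched below in Python. The thresholds and the doubling/halving rule are placeholders, not the adaptation law of the cited design.

        import numpy as np

        def frame_difference(prev_frame, curr_frame, threshold=20):
            """Binary motion mask from the absolute difference of two frames."""
            diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
            return (diff > threshold).astype(np.uint8)

        def adaptive_interval(motion_pixels, interval, lo=50, hi=400, min_fi=1, max_fi=16):
            """Grow the frame interval when motion is scarce, shrink it when abundant."""
            if motion_pixels < lo:
                return min(interval * 2, max_fi)
            if motion_pixels > hi:
                return max(interval // 2, min_fi)
            return interval

        prev = np.zeros((64, 64), dtype=np.uint8)
        curr = prev.copy()
        curr[30:33, 40:45] = 80                    # a small "ship" appears here
        mask = frame_difference(prev, curr)
        fi = adaptive_interval(int(mask.sum()), interval=4)
        print(mask.sum(), fi)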

  13. Design exploration and verification platform, based on high-level modeling and FPGA prototyping, for fast and flexible digital communication in physics experiments

    NASA Astrophysics Data System (ADS)

    Magazzù, G.; Borgese, G.; Costantino, N.; Fanucci, L.; Incandela, J.; Saponara, S.

    2013-02-01

    In many research fields such as high energy physics (HEP), astrophysics, nuclear medicine or space engineering with harsh operating conditions, the use of fast and flexible digital communication protocols is becoming more and more important. The possibility of having a smart and tested top-down design flow for the design of a new protocol for control/readout of front-end electronics is very useful. To this aim, and to reduce development time, costs and risks, this paper describes an innovative design/verification flow applied, as an example case study, to a new communication protocol called FF-LYNX. After the description of the main FF-LYNX features, the paper presents: the definition of a parametric SystemC-based Integrated Simulation Environment (ISE) for high-level protocol definition and validation; the setup of figures of merit to drive the design space exploration; the use of the ISE for early analysis of the achievable performance when adopting the new communication protocol and its interfaces for a new (or upgraded) physics experiment; the design of VHDL IP cores for the TX and RX protocol interfaces; their implementation on an FPGA-based emulator for functional verification; and finally the modification of the FPGA-based emulator for testing the ASIC chipset which implements the rad-tolerant protocol interfaces. For every step, significant results are shown to underline the usefulness of this design and verification approach, which can be applied to any new digital protocol development for smart detectors in physics experiments.

  14. FPGA implemented testbed in 8-by-8 and 2-by-2 OFDM-MIMO channel estimation and design of baseband transceiver.

    PubMed

    Ramesh, S; Seshasayanan, R

    2016-01-01

    In this study, a baseband OFDM-MIMO system with channel estimation and timing synchronization is designed and implemented using FPGA technology. The system is prototyped based on the IEEE 802.11a standard, and the signals are transmitted and received using a bandwidth of 20 MHz. With QPSK modulation, the system can achieve a throughput of 24 Mbps. Furthermore, the least-squares (LS) algorithm is implemented and the estimation of a frequency-selective fading channel is illustrated. For coarse timing estimation, the MNC scheme is examined and implemented. First of all, the whole system is modeled in MATLAB and a floating-point model is established. Then, the fixed-point model is created with the help of Simulink and Xilinx's System Generator for DSP. The system is subsequently synthesized and implemented within Xilinx's ISE tools and targeted to a Xilinx Virtex 5 board. In addition, a hardware co-simulation is devised to reduce the processing time while computing the BER of the fixed-point model. This work is a first step toward further investigation of novel channel estimation strategies for applications in fourth-generation (4G) mobile communication systems. PMID:27047719

  15. FPGA implemented testbed in 8-by-8 and 2-by-2 OFDM-MIMO channel estimation and design of baseband transceiver.

    PubMed

    Ramesh, S; Seshasayanan, R

    2016-01-01

    In this study, a baseband OFDM-MIMO system with channel estimation and timing synchronization is designed and implemented using FPGA technology. The system is prototyped based on the IEEE 802.11a standard, and the signals are transmitted and received using a bandwidth of 20 MHz. With QPSK modulation, the system can achieve a throughput of 24 Mbps. Furthermore, the least-squares (LS) algorithm is implemented and the estimation of a frequency-selective fading channel is illustrated. For coarse timing estimation, the MNC scheme is examined and implemented. First of all, the whole system is modeled in MATLAB and a floating-point model is established. Then, the fixed-point model is created with the help of Simulink and Xilinx's System Generator for DSP. The system is subsequently synthesized and implemented within Xilinx's ISE tools and targeted to a Xilinx Virtex 5 board. In addition, a hardware co-simulation is devised to reduce the processing time while computing the BER of the fixed-point model. This work is a first step toward further investigation of novel channel estimation strategies for applications in fourth-generation (4G) mobile communication systems.
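
    Per pilot subcarrier, the least-squares estimate used in the two records above reduces to dividing the received symbol by the known transmitted symbol. The short Python sketch below illustrates this; the subcarrier count, pilot values and channel taps are illustrative assumptions, not parameters of the cited testbed.

        import numpy as np

        def ls_channel_estimate(tx_pilots, rx_pilots):
            """Least-squares channel estimate on pilot subcarriers: H = Y / X."""
            return rx_pilots / tx_pilots

        # 64-subcarrier OFDM symbol with known QPSK pilots (illustrative sizes).
        n_sc = 64
        tx = (np.sign(np.random.randn(n_sc)) + 1j * np.sign(np.random.randn(n_sc))) / np.sqrt(2)
        h_true = np.fft.fft(np.array([1.0, 0.4, 0.2]), n_sc)   # frequency-selective channel
        noise = 0.05 * (np.random.randn(n_sc) + 1j * np.random.randn(n_sc))
        rx = h_true * tx + noise
        h_ls = ls_channel_estimate(tx, rx)
        print(np.mean(np.abs(h_ls - h_true) ** 2))             # estimation error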

  16. Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator

    SciTech Connect

    Akerstedt, Henrik; Muschter, Steffen; Drake, Gary; Anderson, Kelby; Bohm, Christian; Oreglia, Mark; Tang, Fukun

    2015-10-01

    The Tile Calorimeter at ATLAS [1] is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector. However, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability the upgraded system will be highly redundant. Here the ASICs will be replaced with Kintex-7 FPGAs from Xilinx. This, in addition to the use of multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Due to the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent. The circuitry and firmware must therefore be designed for sufficiently high reliability using redundancy and radiation tolerant components. Within a year, a hybrid demonstrator including the new readout system will be installed in one slice of the ATLAS Tile Calorimeter. This will allow the proposed upgrade to be thoroughly evaluated well before the planned 2023 deployment in all slices, especially with regard to long term reliability. Different firmware strategies alongside with their integration in the demonstrator are presented in the context of high reliability protection against hardware malfunction and radiation induced errors.

  17. Research on the design of surface acquisition system of active lap based on FPGA and FX2LP

    NASA Astrophysics Data System (ADS)

    Zhao, Hongshen; Li, Xiaojin; Fan, Bin; Zeng, Zhige

    2014-08-01

    To study the dynamic surface-shape changes of an active lap during processing, this paper introduces a dynamic surface-shape acquisition system for the active lap based on an FPGA and USB communication. The system consists of a high-precision micro-displacement sensor array, an acquisition circuit board and a PC; the acquisition circuit board comprises six FPGA-based sub-boards and a hub-board based on an FPGA and USB communication. Each sub-board is responsible for data acquisition from a number of independent sensor channels. The hub-board is responsible for emulating an encoder to provide the active-lap deformation control system with position information, for sending synchronization signals that latch the sensor data on all sub-boards at the same instant, for addressing the sub-boards to gather the sensor data from each one in turn, and for transmitting all the sensor data together with the position information to the host computer via the FX2LP USB chip. Experimental results show that the system is capable of determining the position and speed of the active lap, and that surface-shape control and dynamic surface data acquisition at a given position during processing are achieved.

  18. Designing Educational Software with Students through Collaborative Design Games: The We!Design&Play Framework

    ERIC Educational Resources Information Center

    Triantafyllakos, George; Palaigeorgiou, George; Tsoukalas, Ioannis A.

    2011-01-01

    In this paper, we present a framework for the development of collaborative design games that can be employed in participatory design sessions with students for the design of educational applications. The framework is inspired by idea generation theory and the design games literature, and guides the development of board games which, through the use…

  19. An OER Architecture Framework: Needs and Design

    ERIC Educational Resources Information Center

    Khanna, Pankaj; Basak, P. C.

    2013-01-01

    This paper describes an open educational resources (OER) architecture framework that would bring significant improvements in a well-structured and systematic way to the educational practices of distance education institutions of India. The OER architecture framework is articulated with six dimensions: pedagogical, technological, managerial,…

  20. FPGA development for high altitude subsonic parachute testing

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.; Konefat, Edward H.; Gromov, Konstantin

    2005-01-01

    This paper describes a rapid, top down requirements-driven design of an FPGA used in an Earth qualification test program for a new Mars subsonic parachute. The FPGA is used to process and store data from multiple sensors at multiple rates during launch, ascent, deployment and descent phases of the subsonic parachute test.

  1. FPGA development for high altitude subsonic parachute testing

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.; Gromov, Konstantin G.; Konefat, Edward H.

    2005-01-01

    This paper describes a rapid, top down requirements-driven design of a Field Programmable Gate Array (FPGA) used in an Earth qualification test program for a new Mars subsonic parachute. The FPGA is used to process and control storage of telemetry data from multiple sensors throughout launch, ascent, deployment and descent phases of the subsonic parachute test.

  2. Radiation Tolerant Antifuse FPGA

    NASA Technical Reports Server (NTRS)

    Wang, Jih-Jong; Cronquist, Brian; McCollum, John; Parker, Wanida; Katz, Rich; Kleyner, Igor; Day, John H. (Technical Monitor)

    2002-01-01

    The total dose performance of the antifuse FPGA for space applications is summarized. Optimization of the radiation tolerance in the fabless model is the main theme. Mechanisms to explain the variation in different products are discussed.

  3. Public Key FPGA Software

    SciTech Connect

    Hymel, Ross

    2013-07-25

    The Public Key (PK) FPGA software performs asymmetric authentication using the 163-bit Elliptic Curve Digital Signature Algorithm (ECDSA) on an embedded FPGA platform. A digital signature is created on user-supplied data, and communication with a host system is performed via a Serial Peripheral Interface (SPI) bus. Software includes all components necessary for signing, including custom random number generator for key creation and SHA-256 for data hashing.

  4. Building a Framework for Engineering Design Experiences in High School

    ERIC Educational Resources Information Center

    Denson, Cameron D.; Lammi, Matthew

    2014-01-01

    In this article, Denson and Lammi put forth a conceptual framework that will help promote the successful infusion of engineering design experiences into high school settings. When considering a conceptual framework of engineering design in high school settings, it is important to consider the complex issue at hand. For the purposes of this…

  5. A Design Framework for Online Teacher Professional Development Communities

    ERIC Educational Resources Information Center

    Liu, Katrina Yan

    2012-01-01

    This paper provides a design framework for building online teacher professional development communities for preservice and inservice teachers. The framework is based on a comprehensive literature review on the latest technology and epistemology of online community and teacher professional development, comprising four major design factors and three…

  6. Optoelectronic data acquisition system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Xin; Liu, Chunyang; Song, De; Tong, Zhiguo; Liu, Xiangqing

    2015-11-01

    An optoelectronic data acquisition system is designed based on an FPGA. The FPGA chip, an EP1C3T144C8 from Altera's Cyclone family, is used as the centre of logic control; an XPT2046 chip is used as the A/D converter; a host computer that communicates with the data acquisition system through an RS-232 serial interface is used as the display device; and a photoresistor is used as the photosensor. We use Verilog HDL to write the logic control code for the FPGA. Simulation in ModelSim shows that the timing sequence is correct. Test results indicate that this system meets the design requirements and exhibits fast response and stable operation in actual hardware circuit tests.

  7. Design and implementation of a multiband digital filter using FPGA to extract the ECG signal in the presence of different interference signals.

    PubMed

    Aboutabikh, Kamal; Aboukerdah, Nader

    2015-07-01

    In this paper, we propose a practical way to synthesize and filter an ECG signal in the presence of four types of interference signals: (1) those arising from power networks with a fundamental frequency of 50 Hz, (2) those arising from respiration, having a frequency range from 0.05 to 0.5 Hz, (3) muscle signals with a frequency of 25 Hz, and (4) white noise present within the ECG signal band. This was done by implementing a multiband digital filter (seven bands) of type FIR Multiband Least Squares using a digital programmable device (Cyclone II EP2C70F896C6 FPGA, Altera), which was placed on an education and development board (DE2-70, Terasic). This filter was designed using the VHDL language in the Quartus II 9.1 design environment. The proposed method depends on Direct Digital Frequency Synthesizers (DDFS) designed to synthesize the ECG signal and various interference signals. So that the synthetic ECG specifications would be closer to actual ECG signals after filtering, we designed a single multiband digital filter instead of using three separate digital filters (LPF, HPF and BSF); thus all interference signals were removed with a single digital filter. The multiband digital filter results were studied using a digital oscilloscope to characterize input and output signals in the presence of differing sinusoidal interference signals and white noise.
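
    A seven-band least-squares FIR response of the kind described above can be prototyped off-line before being committed to VHDL. The Python sketch below uses scipy.signal.firls; the sampling rate, band edges and tap count are illustrative assumptions, not the values of the cited design.

        import numpy as np
        from scipy.signal import firls

        fs = 500.0        # assumed ECG sampling rate (Hz)
        numtaps = 1001    # long FIR needed for the narrow low-frequency band

        # Seven bands: stop baseline drift (<~0.5 Hz), notch 25 Hz muscle noise,
        # notch 50 Hz mains, and limit the passband to reduce wideband white noise.
        bands   = [0.0, 0.3,  0.7, 24.0,  24.7, 25.3,  26.0, 49.0,
                   49.7, 50.3,  51.0, 100.0,  102.0, 250.0]
        desired = [0, 0,  1, 1,  0, 0,  1, 1,  0, 0,  1, 1,  0, 0]
        taps = firls(numtaps, bands, desired, fs=fs)

        # Apply to a crude synthetic ECG corrupted by the four interference types.
        t = np.arange(0, 10, 1 / fs)
        ecg = np.sin(2 * np.pi * 1.2 * t) ** 21
        noisy = (ecg + 0.3 * np.sin(2 * np.pi * 50 * t)      # mains
                     + 0.2 * np.sin(2 * np.pi * 25 * t)      # muscle
                     + 0.5 * np.sin(2 * np.pi * 0.2 * t)     # respiration / baseline
                     + 0.05 * np.random.randn(t.size))       # white noise
        filtered = np.convolve(noisy, taps, mode="same")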

  8. Interior Design Education within a Human Ecological Framework

    ERIC Educational Resources Information Center

    Kaup, Migette L.; Anderson, Barbara G.; Honey, Peggy

    2007-01-01

    An education based in human ecology can greatly benefit interior designers as they work to understand and improve the human condition. Design programs housed in colleges focusing on human ecology can improve the interior design profession by taking advantage of their home base and emphasizing the human ecological framework in the design curricula.…

  9. An Integrated Framework for CBI Screen Design and Layout.

    ERIC Educational Resources Information Center

    Hannafin, Michael J.; Hooper, Simon

    1989-01-01

    Discusses the importance of screen design in computer-based instruction (CBI), and presents a framework for screen design decisions based on the ROPES (Retrieving, Orienting, Presenting, Encoding, and Sequencing) meta-model for instructional design. Psychological, instructional, and technological foundations of screen design are discussed, and…

  10. ADC and TDC implemented using FPGA

    SciTech Connect

    Wu, Jinyuan; Hansen, Sten; Shi, Zonghan; /Fermilab

    2007-11-01

    Several tests of FPGA devices programmed as analog waveform digitizers are discussed. The ADC uses the ramping-comparing scheme, and a multi-channel ADC can be implemented with only a few resistors and capacitors as external components. Periodic logic levels are shaped by a passive RC network to generate exponential ramps. The FPGA differential input buffers are used as comparators to compare the ramps with the input signals. The times at which these ramps cross the input signals are digitized by time-to-digital converters (TDCs) implemented within the FPGA. The TDC portion of the logic alone has potentially a broad range of HEP/nuclear science applications. A 96-channel TDC card that uses FPGAs as TDCs, being designed for the Fermilab MIPP electronics upgrade project, is discussed. A deserializer circuit based on the multisampling circuit used in the TDC, the 'Digital Phase Follower' (DPF), is also documented.
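
    To make the ramping-comparing scheme concrete, the Python sketch below models the relationship between the input voltage and the crossing time that the on-chip TDC measures; the RC constant, drive level and TDC time base are assumed values for illustration, not those of the cited implementation.

        import numpy as np

        RC = 1.0e-6        # assumed time constant of the external RC network (s)
        V_DRIVE = 3.3      # assumed logic-high level driving the ramp (V)
        F_TDC = 360e6      # assumed TDC time base (counts per second)

        def voltage_to_tdc_counts(v_in):
            """Forward model: time at which the exponential ramp crosses v_in."""
            t = -RC * np.log(1.0 - np.asarray(v_in, dtype=float) / V_DRIVE)
            return np.round(t * F_TDC).astype(int)

        def tdc_counts_to_voltage(counts):
            """Recover the input voltage from the TDC-measured crossing time."""
            t = np.asarray(counts, dtype=float) / F_TDC
            return V_DRIVE * (1.0 - np.exp(-t / RC))

        counts = voltage_to_tdc_counts(1.25)
        print(counts, tdc_counts_to_voltage(counts))   # round-trips near 1.25 V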

  11. FPGA based Smart Wireless MIMO Control System

    NASA Astrophysics Data System (ADS)

    Usman Ali, Syed M.; Hussain, Sajid; Akber Siddiqui, Ali; Arshad, Jawad Ali; Darakhshan, Anjum

    2013-12-01

    In our present work, we have successfully designed and developed an FPGA-based smart wireless MIMO (Multiple Input & Multiple Output) system capable of controlling multiple industrial process parameters such as temperature, pressure, stress and vibration. To achieve this task we have used a Xilinx Spartan 3E FPGA (Field Programmable Gate Array) instead of conventional microcontrollers, with the FPGA kit linked to a PC via RF transceivers that have a working range of about 100 meters. The developed smart system is capable of performing the control tasks assigned to it successfully. We have also made provision for the system to be accessed for monitoring and control through the web and GSM as well. Our proposed system can be applied equally to hazardous and rugged industrial environments where a conventional system cannot work effectively.

  12. A Design Framework for Syllabus Generator

    ERIC Educational Resources Information Center

    Abdous, M'hammed; He, Wu

    2008-01-01

    A well-designed syllabus provides students with a roadmap for an engaging and successful learning experience, whereas a poorly designed syllabus impedes communication between faculty and students, increases student anxiety and potential complaints, and reduces overall teaching effectiveness. In an effort to facilitate, streamline, and improve…

  13. A Framework for Designing Cluster Randomized Trials with Binary Outcomes

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Martinez, Andres

    2011-01-01

    The purpose of this paper is to provide a framework for approaching a power analysis for a CRT (cluster randomized trial) with a binary outcome. The authors suggest a framework in the context of a simple CRT and then extend it to a blocked design, or a multi-site cluster randomized trial (MSCRT). The framework is based on proportions, an…

  14. Virtual Reality Hypermedia Design Frameworks for Science Instruction.

    ERIC Educational Resources Information Center

    Maule, R. William; Oh, Byron; Check, Rosa

    This paper reports on a study that conceptualizes a research framework to aid software design and development for virtual reality (VR) computer applications for instruction in the sciences. The framework provides methodologies for the processing, collection, examination, classification, and presentation of multimedia information within hyperlinked…

  15. A framework for robust flight control design using constrained optimization

    NASA Technical Reports Server (NTRS)

    Palazoglu, A.; Yousefpor, M.; Hess, R. A.

    1992-01-01

    An analytical framework is described for the design of feedback control systems to meet specified performance criteria in the presence of structured and unstructured uncertainty. Attention is focused upon the linear time invariant, single-input, single-output problem for the purposes of exposition. The framework provides for control of the degree of the stabilizing compensator or controller.

  16. A Computational Framework for Cable Layout Design in Complex Products

    NASA Astrophysics Data System (ADS)

    Shang, Wei; Liu, Jian-hua; Ning, Ru-xin; Liu, Jia-shun

    The cable layout design in complex products has been challenging because of various strict constraints. In this paper, we present a computational framework which provides a rich solution for cable layout problems. The framework centers on the digital mockup of the product, and the digital model of the cable bundle in the product is introduced as an essential part. The design process in the framework is carried out in a virtual environment with a wide range of supporting techniques and tools integrated, including path planning techniques, physically-based models, assembly simulation techniques and more. These techniques and tools respectively emphasize different aspects of the problem domain. Besides, the designers play an important role in the framework: they drive the whole design process and make decisions, with their knowledge, on issues that current techniques cannot solve. A prototype system is developed and applied in a practical product development process. The results show that the framework is practical and promising.

  17. A design framework for exploratory geovisualization in epidemiology

    PubMed Central

    Robinson, Anthony C.

    2009-01-01

    This paper presents a design framework for geographic visualization based on iterative evaluations of a toolkit designed to support cancer epidemiology. The Exploratory Spatio-Temporal Analysis Toolkit (ESTAT), is intended to support visual exploration through multivariate health data. Its purpose is to provide epidemiologists with the ability to generate new hypotheses or further refine those they may already have. Through an iterative user-centered design process, ESTAT has been evaluated by epidemiologists at the National Cancer Institute (NCI). Results of these evaluations are discussed, and a design framework based on evaluation evidence is presented. The framework provides specific recommendations and considerations for the design and development of a geovisualization toolkit for epidemiology. Its basic structure provides a model for future design and evaluation efforts in information visualization. PMID:20390052

  18. A Framework for the Design of Service Systems

    NASA Astrophysics Data System (ADS)

    Tan, Yao-Hua; Hofman, Wout; Gordijn, Jaap; Hulstijn, Joris

    We propose a framework for the design and implementation of service systems, especially to design controls for long-term sustainable value co-creation. The framework is based on the software support tool e3-control. To illustrate the framework we use a large-scale case study, the Beer Living Lab, for simplification of customs procedures in international trade. The BeerLL shows how value co-creation can be achieved by reduction of administrative burden in international beer export due to electronic customs. Participants in the BeerLL are Heineken, IBM and Dutch Tax & Customs.

  19. Achieving Equivalence: A Transnational Curriculum Design Framework

    ERIC Educational Resources Information Center

    Clarke, Angela; Johal, Terry; Sharp, Kristen; Quinn, Shayna

    2016-01-01

    Transnational education is now essential to university international development strategies. As a result, tertiary educators are expected to engage with the complexities of diverse cultural contexts, different delivery modes, and mixed student cohorts to design quality learning experiences for all. To support this transition we developed a…

  20. Towards a Framework for Professional Curriculum Design

    ERIC Educational Resources Information Center

    Winch, Christopher

    2015-01-01

    Recent reviews of vocational qualifications in England have noted problems with their restricted nature. However, the underlying issue of how to conceptualise professional agency in curriculum design has not been properly addressed, either by the Richard or the Whitehead reviews. Drawing on comparative work in England and Europe it is argued that…

  1. Living Design Memory: Framework, Implementation, Lessons Learned.

    ERIC Educational Resources Information Center

    Terveen, Loren G.; And Others

    1995-01-01

    Discusses large-scale software development and describes the development of the Designer Assistant to improve software development effectiveness. Highlights include the knowledge management problem; related work, including artificial intelligence and expert systems, software process modeling research, and other approaches to organizational memory;…

  2. CROC FPGA Firmware

    SciTech Connect

    2009-12-01

    The CROC FPGA firmware code controls the operation of the CROC hardware, primarily determining the location of neutron events and discriminating against false triggers by examining the output of multiple analog comparators. A number of statistical algorithms are encoded within the firmware to achieve reliable operation. Other communication and control functions are also part of the firmware.

  3. DNA sequence matching processor using FPGA and JAVA interface.

    PubMed

    Brown, Benjamin O; Yin, Meng-Lai; Cheng, Yi

    2004-01-01

    This study uses an FPGA to perform high-speed DNA sequence matching as an alternative to using general purpose computer CPUs. The FPGA is programmed using the Verilog HDL and interfaced using a graphical user interface programmed in JAVA. Design overviews and details for a small scale design are given as well as plans for larger scale expansion. Encouraging results of the small scale model currently in production are also provided. Results of a successful match and no match are shown.
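
    As an illustration only (not the authors' Verilog design), a minimal software sketch of the sliding-window, base-by-base comparison that such an FPGA matcher evaluates in parallel; the sequences and mismatch budget are hypothetical:

      # Software sketch of sliding-window DNA matching; an FPGA would evaluate
      # all window comparisons concurrently rather than in this Python loop.
      def match_positions(reference: str, query: str, max_mismatches: int = 0):
          """Return start positions where query aligns to reference with at most
          max_mismatches mismatching bases."""
          hits = []
          for start in range(len(reference) - len(query) + 1):
              window = reference[start:start + len(query)]
              mismatches = sum(1 for a, b in zip(window, query) if a != b)
              if mismatches <= max_mismatches:
                  hits.append(start)
          return hits

      if __name__ == "__main__":
          ref = "ACGTTGACCTGAACGT"
          print(match_positions(ref, "GACCT"))   # successful match -> [5]
          print(match_positions(ref, "AAAAA"))   # no match -> []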

  4. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FFP) is described. The FFP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.

  5. A concept ideation framework for medical device design.

    PubMed

    Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar

    2015-06-01

    Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems. PMID:25956618

  6. Learning Experience as Transaction: A Framework for Instructional Design

    ERIC Educational Resources Information Center

    Parrish, Patrick E.; Wilson, Brent G.; Dunlap, Joanna C.

    2011-01-01

    This article presents a framework for understanding learning experience as an object for instructional design--as an object for design as well as research and understanding. Compared to traditional behavioral objectives or discrete cognitive skills, the object of experience is more holistic, requiring simultaneous attention to cognition, behavior,…

  8. A Web Services Composition Design framework based on Agent Organization

    NASA Astrophysics Data System (ADS)

    Li, JiaJia; Li, Bin; Zhang, Xiaowei

    Computing environments are becoming more open, distributed and pervasive. The web services compositions built for these dynamic environments will need to become more adaptable and responsive to unexpected events. This paper defines an approach to web services composition which is based on agent organization. The functions of the three layers, the classification of agents, and the agent model and agent design in this framework are introduced in detail. A reliable and flexible web services composition is realized using this framework.

  9. Hardware Design and Implementation of Fixed-Width Standard and Truncated 4×4, 6×6, 8×8 and 12×12-BIT Multipliers Using Fpga

    NASA Astrophysics Data System (ADS)

    Rais, Muhammad H.

    2010-06-01

    This paper presents a Field Programmable Gate Array (FPGA) implementation of standard and truncated multipliers using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The truncated multiplier is a good candidate for digital signal processing (DSP) applications such as finite impulse response (FIR) filters and the discrete cosine transform (DCT). A remarkable reduction in FPGA resources, delay, and power can be achieved using truncated multipliers instead of standard parallel multipliers when the full precision of the standard multiplier is not required. The truncated multipliers show significant improvement as compared to standard multipliers. Results show that the anomalies in average connection delay and maximum pin delay observed on the Spartan-3AN device are efficiently reduced on the Virtex-4 device.
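
    To make the truncation idea concrete, a small numerical sketch (not the paper's VHDL) comparing an ideal fixed-width product with a truncated multiplier that discards the low-order partial products; the operand values and 8-bit width are illustrative:

      # Sketch of fixed-width vs. truncated multiplication for n-bit unsigned operands.
      def fixed_width_result(a: int, b: int, n: int) -> int:
          return (a * b) >> n                    # keep the upper n bits of the 2n-bit product

      def truncated_mult(a: int, b: int, n: int) -> int:
          # Sum only the partial products a_i * b_j with i + j >= n - 1,
          # i.e. drop the least-significant columns of the partial-product matrix.
          acc = 0
          for i in range(n):
              for j in range(n):
                  if i + j >= n - 1 and (a >> i) & 1 and (b >> j) & 1:
                      acc += 1 << (i + j)
          return acc >> n

      if __name__ == "__main__":
          n, a, b = 8, 181, 78
          print(fixed_width_result(a, b, n), truncated_mult(a, b, n))   # 55 vs 54: small error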

  10. A Portable Laser Photoacoustic Methane Sensor Based on FPGA

    PubMed Central

    Wang, Jianwei; Wang, Huili; Liu, Xianyong

    2016-01-01

    A portable laser photoacoustic sensor for methane (CH4) detection based on a field-programmable gate array (FPGA) is reported. A tunable distributed feedback (DFB) diode laser in the 1654 nm wavelength range is used as an excitation source. The photoacoustic signal processing was implemented in an FPGA device. A small resonant photoacoustic cell is designed. A minimum detection limit (1σ) of 10 ppm for methane is demonstrated. PMID:27657079

  11. Project Assessment Framework through Design (PAFTD) - A Project Assessment Framework in Support of Strategic Decision Making

    NASA Technical Reports Server (NTRS)

    Depenbrock, Brett T.; Balint, Tibor S.; Sheehy, Jeffrey A.

    2014-01-01

    Research and development organizations that push the innovation edge of technology frequently encounter challenges when attempting to identify an investment strategy and to accurately forecast the cost and schedule performance of selected projects. Fast-moving and complex environments require managers to quickly analyze and diagnose the value of returns on investment versus allocated resources. Our Project Assessment Framework through Design (PAFTD) tool facilitates decision making for NASA senior leadership to enable more strategic and consistent technology development investment analysis, beginning at implementation and continuing through the project life cycle. The framework takes an integrated approach by leveraging design principles of usability, feasibility, and viability and aligns them with methods employed by NASA's Independent Program Assessment Office for project performance assessment. The need exists to periodically revisit the justification and prioritization of technology development investments as changes occur over project life cycles. The framework informs management rapidly and comprehensively about diagnosed internal and external root causes of project performance.

  12. From OO to FPGA:

    SciTech Connect

    Kou, Stephen; Palsberg, Jens; Brooks, Jeffrey

    2012-09-01

    Consumer electronics today such as cell phones often have one or more low-power FPGAs to assist with energy-intensive operations in order to reduce overall energy consumption and increase battery life. However, current techniques for programming FPGAs require people to be specially trained to do so. Ideally, software engineers could more readily take advantage of the benefits FPGAs offer by programming them using their existing skills, a common one being object-oriented programming. However, traditional techniques for compiling object-oriented languages are at odds with today's FPGA tools, which support neither pointers nor complex data structures. Open until now is the problem of compiling an object-oriented language to an FPGA in a way that harnesses this potential for huge energy savings. In this paper, we present a new compilation technique that feeds into an existing FPGA tool chain and produces FPGAs with up to almost an order of magnitude in energy savings compared to a low-power microprocessor while still retaining comparable performance and area usage.

  13. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2005-07-01

    The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 involves validating the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  14. A Framework to Design and Optimize Chemical Flooding Processes

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2006-08-31

    The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 involves validating the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  15. Step-by-Step Design of an FPGA-Based Digital Compensator for DC/DC Converters Oriented to an Introductory Course

    ERIC Educational Resources Information Center

    Zumel, P.; Fernandez, C.; Sanz, M.; Lazaro, A.; Barrado, A.

    2011-01-01

    In this paper, a short introductory course to introduce field-programmable gate array (FPGA)-based digital control of dc/dc switching power converters is presented. Digital control based on specific hardware has been at the leading edge of low-medium power dc/dc switching converters in recent years. Besides industry's interest in this topic, from…

  16. Relativistic framework for non-magnetic analysis and design

    NASA Astrophysics Data System (ADS)

    Laborde, Benjamin

    2005-04-01

    This paper describes a framework for relativistic analysis with effects identical to those of magnetism, but without using magnetism, and uses this framework to design a device which would be difficult or impossible under magnetic analysis. With this framework it is possible to analyze electrical systems completely with relativistic electrodynamics, rather than magnetism and electrostatics, with no loss of accuracy, since the two systems are identical. The framework demonstrates the equivalence of magnetism and relativistic electric charge with a mathematical proof using the classical parallel wires experiment. The paper then proceeds to use this result to design an electric propulsion device through relativistic analysis, rather than magnetic analysis. The benefit of this approach is that it liberates us from the magnetic field, and ascribes the forces on a conducting wire to the current in another wire, some distance away, rather than to a magnetic field in the region of the first wire, as in classical analysis. With this new framework we are able to design devices previously unknown in the magnetic domain. The paper describes one such device, the Action Motor, for producing a one-way force, with potential applications in spacecraft propulsion.

  17. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. Aimed at this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on an FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by glitches on the clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a 3-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period, and is thus suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
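
    A NumPy sketch of the separability trick described above (a 2-D Gaussian convolution performed as a row pass followed by a column pass); the image size, sigma values and kernel radius are illustrative, and the FPGA pipeline itself is not modeled:

      import numpy as np

      def gaussian_kernel_1d(sigma: float, radius: int) -> np.ndarray:
          x = np.arange(-radius, radius + 1)
          k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
          return k / k.sum()

      def gaussian_filter_separable(image: np.ndarray, sigma: float, radius: int) -> np.ndarray:
          """2-D Gaussian filtering as two 1-D convolutions (rows, then columns)."""
          k = gaussian_kernel_1d(sigma, radius)
          padded = np.pad(image, radius, mode="edge").astype(float)
          rows = np.zeros_like(padded)
          for i, w in enumerate(k):                       # horizontal 1-D convolution
              rows += w * np.roll(padded, i - radius, axis=1)
          out = np.zeros_like(padded)
          for i, w in enumerate(k):                       # vertical 1-D convolution
              out += w * np.roll(rows, i - radius, axis=0)
          return out[radius:-radius, radius:-radius]

      if __name__ == "__main__":
          img = np.random.rand(64, 64)
          for sigma in (1.0, 2.0, 4.0):                   # three scales, as in the demo above
              print(sigma, gaussian_filter_separable(img, sigma, radius=3 * int(sigma)).shape)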

  18. A Review of Literacy Frameworks for Learning Environments Design

    ERIC Educational Resources Information Center

    Rebmann, Kristen Radsliff

    2013-01-01

    This article charts the development of three literacy research frameworks: multiliteracies, new literacies, and popular literacies. By reviewing the literature surrounding three current conceptions of literacy, an attempt is made to form an integrative grouping that captures the most relevant elements of each for learning environments design.…

  19. A Proposed Conceptual Framework for Curriculum Design in Physical Fitness.

    ERIC Educational Resources Information Center

    Miller, Peter V.; Beauchamp, Larry S.

    A physical fitness curriculum, designed to provide cumulative benefits in a sequential pattern, is based upon a framework of a conceptual structure. The curriculum's ultimate goal is the achievement of greater physiological efficiency through a holistic approach that would strengthen circulatory-respiratory, mechanical, and neuro-muscular…

  20. Sustainable Supply Chain Design by the P-Graph Framework

    EPA Science Inventory

    The present work proposes a computer-aided methodology for designing sustainable supply chains in terms of sustainability metrics by resorting to the P-graph framework. The methodology is an outcome of the collaboration between the Office of Research and Development (ORD) of the ...

  1. TARDIS: An Automation Framework for JPL Mission Design and Navigation

    NASA Technical Reports Server (NTRS)

    Roundhill, Ian M.; Kelly, Richard M.

    2014-01-01

    Mission Design and Navigation at the Jet Propulsion Laboratory has implemented an automation framework tool to assist in orbit determination and maneuver design analysis. This paper describes the lessons learned from previous automation tools and how they have been implemented in this tool. In addition this tool has revealed challenges in software implementation, testing, and user education. This paper describes some of these challenges and invites others to share their experiences.

  2. FPGA Based Reconfigurable ATM Switch Test Bed

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.; Jones, Robert E.

    1998-01-01

    Various issues associated with "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) Network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract shared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: advances in FPGAs make hardware emulation feasible for performance evaluation; hardware emulation can provide several orders of magnitude speed-up over software simulation; due to the complexity of the hardware synthesis process, development in emulation is much more difficult than simulation and requires knowledge of both networks and digital design.

  3. An FPGA-Based Electronic Cochlea

    NASA Astrophysics Data System (ADS)

    Leong, M. P.; Jin, Craig T.; Leong, Philip H. W.

    2003-12-01

    A module generator which can produce an FPGA-based implementation of an electronic cochlea filter with arbitrary precision is presented. Although hardware implementations of electronic cochlea models have traditionally used analog VLSI as the implementation medium due to their small area, high speed, and low power consumption, FPGA-based implementations offer shorter design times, improved dynamic range, higher accuracy, and a simpler computer interface. The tool presented takes filter coefficients as input and produces a synthesizable VHDL description of an application-optimized design as output. Furthermore, the tool can use simulation test vectors in order to determine the appropriate scaling of the fixed point precision parameters for each filter. The resulting model can be used as an accelerator for research in audition or as the front-end for embedded auditory signal processing systems. The application of this module generator to a real-time cochleagram display is also presented.
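
    A minimal sketch, under assumed conventions, of how simulation test vectors can fix a signed fixed-point format for each filter stage: the integer bits are chosen from the largest magnitude observed and the remaining word length becomes fraction bits. The 16-bit word length and the sample values are hypothetical, not taken from the module generator:

      import math

      def choose_fixed_point_format(samples, word_length: int = 16):
          """Pick (integer_bits, fraction_bits) so the largest magnitude seen in the
          simulation test vectors does not overflow a signed fixed-point word."""
          peak = max(abs(s) for s in samples)
          if peak == 0:
              integer_bits = 1
          else:
              integer_bits = max(1, math.floor(math.log2(peak)) + 2)   # magnitude bits + sign
          return integer_bits, word_length - integer_bits

      def quantize(value: float, fraction_bits: int) -> int:
          return round(value * (1 << fraction_bits))

      if __name__ == "__main__":
          test_vectors = [0.31, -2.7, 1.05, 3.4, -0.002]               # illustrative filter outputs
          i_bits, f_bits = choose_fixed_point_format(test_vectors)
          print(i_bits, f_bits, quantize(3.4, f_bits))                 # 3 13 27853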

  4. A framework for the design of ambulance sirens.

    PubMed

    Catchpole, K; McKeown, D

    2007-08-01

    Ambulance sirens are essential for assisting the safe and rapid arrival of an ambulance at the scene of an emergency. In this study, the parameters upon which sirens may be designed were examined and a framework for emergency vehicle siren design was proposed. Validity for the framework was supported through acoustic measurements and the evaluation of ambulance transit times over 240 emergency runs using two different siren systems. Modifying existing siren sounds to add high frequency content would improve vehicle penetration, detectability and sound localization cues, and mounting the siren behind the radiator grill, rather than on the light bar or under the wheel arch, would provide less unwanted noise while maintaining or improving the effective distance in front of the vehicle. Ultimately, these considerations will benefit any new attempt to design auditory warnings for the emergency services. PMID:17558670

  5. Energy efficiency analysis and implementation of AES on an FPGA

    NASA Astrophysics Data System (ADS)

    Kenney, David

    The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rjimen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and be useful for a wide range of applications with varying throughput, area, power dissipation and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research done in the area of low power or energy efficient FPGA based AES; in fact, it is rare for estimates on power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG) which is used in the proposed energy efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA based AES implementations that included power estimations. Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher
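
    As a back-of-envelope illustration of the kind of energy-per-operation figure such an analysis produces (the power, clock and cycle numbers below are placeholders, not results from the thesis):

      # Energy and throughput metrics for a block cipher from assumed power/timing figures.
      def aes_energy_metrics(power_w: float, clock_hz: float, cycles_per_block: int,
                             block_bits: int = 128):
          time_per_block = cycles_per_block / clock_hz    # seconds per encryption
          energy_per_block = power_w * time_per_block     # joules per encryption
          throughput_bps = block_bits / time_per_block    # bits per second
          bits_per_joule = throughput_bps / power_w       # efficiency figure of merit
          return energy_per_block, throughput_bps, bits_per_joule

      if __name__ == "__main__":
          e, tput, eff = aes_energy_metrics(power_w=0.35, clock_hz=100e6, cycles_per_block=11)
          print(f"{e*1e9:.1f} nJ/block, {tput/1e6:.0f} Mbps, {eff/1e9:.2f} Gb/J")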

  6. New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth; Pellish, Jonathan

    2015-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Subsequently, fail-safe and mitigation strategies will differ per FPGA type. In this session several design approaches for safe systems will be presented. It will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies; so that users can select appropriate mitigation and perform the required trade for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  7. FPGA Implementation of Reed-Solomon Decoder for IEEE 802.16 WiMAX Systems using Simulink-Sysgen Design Environment

    SciTech Connect

    Bobrek, Miljko; Albright, Austin P

    2012-01-01

    This paper presents FPGA implementation of the Reed-Solomon decoder for use in IEEE 802.16 WiMAX systems. The decoder is based on RS(255,239) code, and is additionally shortened and punctured according to the WiMAX specifications. Simulink model based on Sysgen library of Xilinx blocks was used for simulation and hardware implementation. At the end, simulation results and hardware implementation performances are presented.

  8. SysSon - A Framework for Systematic Sonification Design

    NASA Astrophysics Data System (ADS)

    Vogt, Katharina; Goudarzi, Visda; Holger Rutz, Hanns

    2015-04-01

    SysSon is a research approach for introducing sonification systematically to a scientific community where it is not yet commonly used - e.g., in climate science. Thereby, both technical and socio-cultural barriers have to be met. The approach was further developed with climate scientists, who participated in contextual inquiries, usability tests and a workshop of collaborative design. These extensive user tests resulted in our final software framework. As a frontend, a graphical user interface allows climate scientists to parametrize standard sonifications with their own data sets. Additionally, an interactive shell allows users competent in sound design to code new sonifications. The framework is a standalone desktop application, available as open source (for details see http://sysson.kug.ac.at/) and works with data in NetCDF format.

  9. FPGA Flash Memory High Speed Data Acquisition

    NASA Technical Reports Server (NTRS)

    Gonzalez, April

    2013-01-01

    The purpose of this research is to design and implement a VHDL ONFI Controller module for a Modular Instrumentation System. The goal of the Modular Instrumentation System will be to have a low power device that will store data and send the data at a low speed to a processor. Such a system gives an advantage over other purchased binary IP because it allows NASA to re-use and modify the memory controller module. To accomplish the performance criteria of a low power system, an in-house auxiliary board (Flash/ADC board), an FPGA development kit, a debug board, and a modular instrumentation board will be jointly used for the data acquisition. The Flash/ADC board contains four 1 MSPS input channel signals and an Open NAND Flash memory module with an analog to digital converter. The ADC, data bits, and control line signals from the board are sent to a Microsemi/Actel FPGA development kit for VHDL programming of the flash memory WRITE, READ, READ STATUS, ERASE, and RESET operation waveforms using Libero software. The debug board is used for verification of the analog input signal and communicates via a serial interface with the modular instrumentation. The scope of the new controller module was to find and develop an ONFI controller, with the debug board layout designed and completed for manufacture. Successful flash memory operation waveform test routines were completed, simulated, and tested to work on the FPGA board. Through connection of the Flash/ADC board with the FPGA, it was found that the device specifications were not being met, with Vdd reaching only half of its expected voltage. Further testing showed that the manufactured Flash/ADC board contained a misalignment in the ONFI memory module traces. The errors proved to be too great to fix in the time limit set for the project.

  10. Design and applications of a multimodality image data warehouse framework.

    PubMed

    Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.

  11. A Human Factors Framework for Payload Display Design

    NASA Technical Reports Server (NTRS)

    Dunn, Mariea C.; Hutchinson, Sonya L.

    1998-01-01

    During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.

  12. When Playing Meets Learning: Methodological Framework for Designing Educational Games

    NASA Astrophysics Data System (ADS)

    Linek, Stephanie B.; Schwarz, Daniel; Bopp, Matthias; Albert, Dietrich

    Game-based learning builds upon the idea of using the motivational potential of video games in the educational context. Thus, the design of educational games has to address optimizing enjoyment as well as optimizing learning. Within the EC-project ELEKTRA a methodological framework for the conceptual design of educational games was developed. Thereby state-of-the-art psycho-pedagogical approaches were combined with insights of media-psychology as well as with best-practice game design. This science-based interdisciplinary approach was enriched by enclosed empirical research to answer open questions on educational game-design. Additionally, several evaluation-cycles were implemented to achieve further improvements. The psycho-pedagogical core of the methodology can be summarized by the ELEKTRA's 4Ms: Macroadaptivity, Microadaptivity, Metacognition, and Motivation. The conceptual framework is structured in eight phases which have several interconnections and feedback-cycles that enable a close interdisciplinary collaboration between game design, pedagogy, cognitive science and media psychology.

  13. Onboard FPGA-based SAR processing for future spaceborne systems

    NASA Technical Reports Server (NTRS)

    Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce; Rogez, Francois; Rosen, Paul; Shah, Biren; Taft, Stephanie

    2004-01-01

    We present a real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne systems. In particular, we will discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.

  14. Deterministic Design Optimization of Structures in OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a code developed at NASA GRC. The reliability and efficiency of the OpenMDAO framework were compared and are reported here.
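
    A minimal sketch of the component/driver pattern described above, using OpenMDAO's public API with a simple analytic objective standing in for the structural solvers (which are not publicly packaged); the API calls follow recent OpenMDAO releases and may differ from the version used for this work:

      import openmdao.api as om

      # Simple paraboloid objective as a stand-in for a structural analysis component.
      prob = om.Problem()
      prob.model.add_subsystem('obj',
                               om.ExecComp('f = (x - 3.0)**2 + x*y + (y + 4.0)**2 - 3.0'),
                               promotes=['x', 'y', 'f'])

      prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')   # gradient-based driver
      prob.model.add_design_var('x', lower=-50.0, upper=50.0)
      prob.model.add_design_var('y', lower=-50.0, upper=50.0)
      prob.model.add_objective('f')

      prob.setup()
      prob.set_val('x', 3.0)
      prob.set_val('y', -4.0)
      prob.run_driver()
      print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))   # about 6.67, -7.33, -27.33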

  15. VIRTEX-5 Fpga Implementation of Advanced Encryption Standard Algorithm

    NASA Astrophysics Data System (ADS)

    Rais, Muhammad H.; Qasim, Syed M.

    2010-06-01

    In this paper, we present an implementation of Advanced Encryption Standard (AES) cryptographic algorithm using state-of-the-art Virtex-5 Field Programmable Gate Array (FPGA). The design is coded in Very High Speed Integrated Circuit Hardware Description Language (VHDL). Timing simulation is performed to verify the functionality of the designed circuit. Performance evaluation is also done in terms of throughput and area. The design implemented on Virtex-5 (XC5VLX50FFG676-3) FPGA achieves a maximum throughput of 4.34 Gbps utilizing a total of 399 slices.

  16. An enhanced BSIM modeling framework for self-heating aware circuit design

    NASA Astrophysics Data System (ADS)

    Schleyer, M.; Leuschner, S.; Baumgartner, P.; Mueller, J.-E.; Klar, H.

    2014-11-01

    This work proposes a modeling framework to enhance the industry-standard BSIM4 MOSFET models with capabilities for coupled electro-thermal simulations. An automated simulation environment extracts thermal information from model data as provided by the semiconductor foundry. The standard BSIM4 model is enhanced with a Verilog-A based wrapper module, adding thermal nodes which can be connected to a thermal-equivalent RC network. The proposed framework allows a fully automated extraction process based on the netlist of the top-level design and the model library. A numerical analysis tool is used to control the extraction flow and to obtain all required parameters. The framework is used to model self-heating effects on a fully integrated class A/AB power amplifier (PA) designed in a standard 65 nm CMOS process. The PA is driven with +30 dBm output power, leading to an average temperature rise of approximately 40 °C over ambient temperature.

  18. FPGA Implementation of Heart Rate Monitoring System.

    PubMed

    Panigrahy, D; Rakshit, M; Sahu, P K

    2016-03-01

    This paper describes a field programmable gate array (FPGA) implementation of a system that calculates the heart rate from the Electrocardiogram (ECG) signal. After heart rate calculation, tachycardia, bradycardia or a normal heart rate can easily be detected. The ECG is a diagnostic tool routinely used to assess the electrical activity and muscular function of the heart. Heart rate is calculated by detecting the R peaks in the ECG signal. Providing a portable and continuous heart rate monitoring system for patients using ECG requires dedicated hardware. An FPGA provides easy testability and allows faster implementation and verification of a new design. We have proposed a five-stage methodology using basic VHDL blocks such as addition, multiplication and data conversion (real to fixed point and vice versa). Our proposed heart rate calculation (R-peak detection) method has been validated using the 48 first-channel ECG records of the MIT-BIH arrhythmia database. It shows an accuracy of 99.84%, a sensitivity of 99.94% and a positive predictive value of 99.89%. Our proposed method outperforms other well-known methods for pathological ECG signals and has been successfully implemented on an FPGA. PMID:26643079
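
    A small sketch of the rate-calculation and classification step described above, assuming R peaks have already been detected; the 60/100 BPM limits are the usual textbook thresholds for bradycardia and tachycardia, not values quoted from the paper:

      def heart_rate_bpm(r_peak_samples, fs_hz: float) -> float:
          """Mean heart rate from R-peak sample indices of an ECG sampled at fs_hz."""
          rr = [(b - a) / fs_hz for a, b in zip(r_peak_samples, r_peak_samples[1:])]
          return 60.0 / (sum(rr) / len(rr))

      def classify(bpm: float) -> str:
          if bpm < 60.0:
              return "bradycardia"
          if bpm > 100.0:
              return "tachycardia"
          return "normal"

      if __name__ == "__main__":
          fs = 360.0                              # MIT-BIH records are sampled at 360 Hz
          peaks = [100, 370, 645, 915, 1190]      # illustrative R-peak positions (samples)
          bpm = heart_rate_bpm(peaks, fs)
          print(round(bpm, 1), classify(bpm))     # ~79.3 normal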

  19. Ecohydrology frameworks for green infrastructure design and ecosystem service provision

    NASA Astrophysics Data System (ADS)

    Pavao-Zuckerman, M.; Knerl, A.; Barron-Gafford, G.

    2014-12-01

    Urbanization is a dominant form of landscape change that affects the structure and function of ecosystems and alters control points in biogeochemical and hydrologic cycles. Green infrastructure (GI) has been proposed as a solution to many urban environmental challenges and may be a way to manage biogeochemical control points. Despite this promise, there has been relatively limited empirical focus to evaluate the efficacy of GI, relationships between design and function, and the ability of GI to provide ecosystem services in cities. This work has been driven by goals of adapting GI approaches to dryland cities and to harvest rain and storm water for providing ecosystem services related to storm water management and urban heat island mitigation, as well as other co-benefits. We will present a modification of ecohydrologic theory for guiding the design and function of green infrastructure for dryland systems that highlights how GI functions in context of Trigger - Transfer - Reserve - Pulse (TTRP) dynamic framework. Here we also apply this TTRP framework to observations of established street-scape green infrastructure in Tucson, AZ, and an experimental installation of green infrastructure basins on the campus of Biosphere 2 (Oracle, AZ) where we have been measuring plant performance and soil biogeochemical functions. We found variable sensitivity of microbial activity, soil respiration, N-mineralization, photosynthesis and respiration that was mediated both by elements of basin design (soil texture and composition, choice of surface mulches) and antecedent precipitation inputs and soil moisture conditions. The adapted TTRP framework and field studies suggest that there are strong connections between design and function that have implications for stormwater management and ecosystem service provision in dryland cities.

  20. FPGA-based Hyperspectral Covariance Coprocessor for Size, Weight, and Power Constrained Platforms

    NASA Astrophysics Data System (ADS)

    Kusinsky, David Alan

    Hyperspectral imaging (HSI) is a method of remote sensing that collects many two-dimensional images of the same physical scene. Each image corresponds to a single wavelength band in the electromagnetic spectrum. The number of bands imaged by an HSI sensor can be several hundred, and therefore a large amount of data is produced. This data must be handled by the platform on which the HSI sensor resides, either through onboard processing, or relaying elsewhere. Hence, the platform plays an important role in defining the capabilities of the entire remote sensing system. Size, weight, and power (SWaP) are important factors in the design of any remote sensing platform. These remote sensing platforms, such as Unmanned Air Vehicles and microsatellites, are continually decreasing in size. This creates a need for remote sensing and image processing hardware that consumes less area, weight, and power, while delivering processing performance. The purpose of this research is to design and characterize an FPGA-based hardware coprocessor that parallelizes the calculation of covariance; a time-consuming step common in hyperspectral image processing. The goal is to deploy such a coprocessor on a remote sensing platform. The coprocessor is implemented using a Xilinx ML605 evaluation board. The hardware used includes the Xilinx Virtex-6 FPGA, DDR3 memory, and PCIe interface. An implementation to accelerate the covariance calculation was created, and the OpenCPI open source framework was adopted to enable DDR3 memory and PCIe capabilities and ease coprocessor testing. The coprocessor's performance is evaluated using several metrics: total power (Watts), processing energy (Joules), floating point operations per Watt (FLOPS/W), and floating point operations per Watt-kg (FLOPS/(W·kg)). The coprocessor is compared to a CPU-based processing platform and shown to have an overall SWaP advantage. Coprocessor FLOPS/W and FLOPS/(W·kg) performance is 2X and 2.75X that of the CPU-based platform
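
    A NumPy sketch of the covariance computation the coprocessor accelerates: each pixel is treated as a vector of band values and the band-by-band covariance matrix is accumulated. The cube dimensions below are illustrative (224 bands is typical of AVIRIS-class sensors):

      import numpy as np

      def hyperspectral_covariance(cube: np.ndarray) -> np.ndarray:
          """Band-by-band covariance of a hyperspectral cube shaped (rows, cols, bands)."""
          rows, cols, bands = cube.shape
          pixels = cube.reshape(rows * cols, bands)       # one spectrum per pixel
          centered = pixels - pixels.mean(axis=0)
          return centered.T @ centered / (pixels.shape[0] - 1)

      if __name__ == "__main__":
          cube = np.random.rand(128, 128, 224)
          print(hyperspectral_covariance(cube).shape)     # (224, 224)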

  1. Seed Design Framework for Mapping SOLiD Reads

    NASA Astrophysics Data System (ADS)

    Noé, Laurent; Gîrdea, Marta; Kucherov, Gregory

    The advent of high-throughput sequencing technologies constituted a major advance in genomic studies, offering new prospects in a wide range of applications. We propose a rigorous and flexible algorithmic solution to mapping SOLiD color-space reads to a reference genome. The solution relies, on the one hand, on an advanced method of seed design that uses a faithful probabilistic model of read matches and, on the other hand, on a novel seeding principle especially adapted to read mapping. Our method can handle both lossy and lossless frameworks and is able to distinguish, at the level of seed design, between SNPs and reading errors. We illustrate our approach by several seed designs and demonstrate their efficiency.
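
    A toy sketch of the spaced-seed idea behind such seed designs: a pattern marks which positions of a window must match, so reads that differ from the reference only at "don't care" positions still produce the same key. The pattern and sequences are illustrative; the paper's color-space encoding and probabilistic seed optimization are not reproduced:

      def seed_key(s: str, start: int, pattern: str) -> str:
          """Characters of s selected by the '#' positions of the seed pattern."""
          return "".join(s[start + i] for i, c in enumerate(pattern) if c == "#")

      def seed_hits(reference: str, read: str, pattern: str):
          """Candidate alignment positions where the read's seed key occurs in the reference."""
          span = len(pattern)
          index = {}
          for pos in range(len(reference) - span + 1):
              index.setdefault(seed_key(reference, pos, pattern), []).append(pos)
          return index.get(seed_key(read, 0, pattern), [])

      if __name__ == "__main__":
          pattern = "##-#--##"      # '#' must match, '-' is a don't-care position
          ref = "ACGTACGTTACGGATC"
          read = "ACCTTAGT"         # differs from ref[0:8] only at the don't-care positions
          print(seed_hits(ref, read, pattern))   # [0]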

  2. A Framework for Designing Scaffolds That Improve Motivation and Cognition

    PubMed Central

    Belland, Brian R.; Kim, ChanMin; Hannafin, Michael J.

    2013-01-01

    A problematic, yet common, assumption among educational researchers is that when teachers provide authentic, problem-based experiences, students will automatically be engaged. Evidence indicates that this is often not the case. In this article, we discuss (a) problems with ignoring motivation in the design of learning environments, (b) problem-based learning and scaffolding as one way to help, (c) how scaffolding has strayed from what was originally equal parts motivational and cognitive support, and (d) a conceptual framework for the design of scaffolds that can enhance motivation as well as cognitive outcomes. We propose guidelines for the design of computer-based scaffolds to promote motivation and engagement while students are solving authentic problems. Remaining questions and suggestions for future research are then discussed. PMID:24273351

  3. FPGA Boot Loader and Scrubber

    NASA Technical Reports Server (NTRS)

    Wade, Randall S.; Jones, Bailey

    2009-01-01

    A computer program loads configuration code into a Xilinx field-programmable gate array (FPGA), reads back and verifies that code, reloads the code if an error is detected, and monitors the performance of the FPGA for errors in the presence of radiation. The program consists mainly of a set of VHDL files (wherein "VHDL" signifies "VHSIC Hardware Description Language" and "VHSIC" signifies "very-high-speed integrated circuit").
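
    A self-contained toy model (in software, not the VHDL itself) of the load / read back / verify / reload cycle described above; the in-memory word list and the injected bit flip stand in for the FPGA configuration memory and a radiation-induced upset:

      import random

      GOLDEN = [0xA5, 0x3C, 0xFF, 0x00, 0x5A, 0xC3]   # stand-in for the configuration image

      def load(config_mem):
          config_mem[:] = GOLDEN[:]                   # initial configuration load

      def inject_upset(config_mem):
          i = random.randrange(len(config_mem))       # a radiation hit flips one random bit
          config_mem[i] ^= 1 << random.randrange(8)

      def scrub(config_mem) -> int:
          """Read back, verify against the golden image, and rewrite corrupted words."""
          repaired = 0
          for addr, golden_word in enumerate(GOLDEN):
              if config_mem[addr] != golden_word:     # verification failed
                  config_mem[addr] = golden_word      # reload that word
                  repaired += 1
          return repaired

      if __name__ == "__main__":
          mem = [0] * len(GOLDEN)
          load(mem)
          inject_upset(mem)
          print("repaired words:", scrub(mem), "memory ok:", mem == GOLDEN)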

  4. Microsystem design framework based on tool adaptations and library developments

    NASA Astrophysics Data System (ADS)

    Karam, Jean Michel; Courtois, Bernard; Rencz, Marta; Poppe, Andras; Szekely, Vladimir

    1996-09-01

    Besides foundry facilities, Computer-Aided Design (CAD) tools are also required to move microsystems from research prototypes to an industrial market. This paper describes a Computer-Aided-Design Framework for microsystems, based on selected existing software packages adapted and extended for microsystem technology, assembled with libraries where models are available in the form of standard cells described at different levels (symbolic, system/behavioral, layout). In microelectronics, CAD has already attained a highly sophisticated and professional level, where complete fabrication sequences are simulated and the device and system operation is completely tested before manufacturing. In comparison, the art of microsystem design and modelling is still in its infancy. However, at least for the numerical simulation of the operation of single microsystem components, such as mechanical resonators, thermo-elements, elastic diaphragms, reliable simulation tools are available. For the different engineering disciplines (like electronics, mechanics, optics, etc) a lot of CAD-tools for the design, simulation and verification of specific devices are available, but there is no CAD-environment within which we could perform a (micro-)system simulation due to the different nature of the devices. In general there are two different approaches to overcome this limitation: the first possibility would be to develop a new framework tailored for microsystem-engineering. The second approach, much more realistic, would be to use the existing CAD-tools which contain the most promising features, and to extend these tools so that they can be used for the simulation and verification of microsystems and of the devices involved. These tools are assembled with libraries in a microsystem design environment allowing a continuous design flow. The approach is driven by the wish to make microsystems accessible to a large community of people, including SMEs and non-specialized academic institutions.

  5. Region-Oriented Placement Algorithm for Coarse-Grained Power-Gating FPGA Architecture

    NASA Astrophysics Data System (ADS)

    Li, Ce; Dong, Yiping; Watanabe, Takahiro

    An FPGA plays an essential role in industrial products due to its fast, stable and flexible features. However, the power consumption of FPGAs used in portable devices is a critical issue. A top-down hierarchical design method is commonly used in both ASIC and FPGA design. But in the case where multiple modules are integrated in an FPGA and some of them might be in sleep mode, the current FPGA architecture cannot be fully effective. In this paper, a coarse-grained power-gating FPGA architecture is proposed in which the whole area of the FPGA is partitioned into several regions and the power supply is controlled for each region, so that modules in sleep mode can be effectively powered off. We also propose a region-oriented FPGA placement algorithm, based on VPR [1], that fits this user's hierarchical design. Simulation results show that the proposed method can reduce the power consumption of an FPGA by 38% on average by setting unused modules or regions to sleep mode.

  6. A Robust Control Design Framework for Substructure Models

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    1994-01-01

    A framework for designing control systems directly from substructure models and uncertainties is proposed. The technique is based on combining a set of substructure robust control problems by an interface stiffness matrix which appears as a constant gain feedback. Variations of uncertainties in the interface stiffness are treated as a parametric uncertainty. It is shown that multivariable robust control can be applied to generate centralized or decentralized controllers that guarantee performance with respect to uncertainties in the interface stiffness, reduced component modes and external disturbances. The technique is particularly suited for large, complex, and weakly coupled flexible structures.

  7. Research and design of web application framework based on AJAX

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-feng; Liu, San-jun

    2013-03-01

    AJAX is an emerging Web presentation-layer technology that allows dynamic, fast, and flexible Web applications to be built. AJAX eliminates the dependence on form submission in the traditional HTTP communication mode, achieving fast and lightweight asynchronous communication. This paper first introduces the working principle of the AJAX technology, and then combines AJAX with Web services technology to design a new AJAX-based Web application framework that achieves asynchronous communication between the browser and back-end services.

  8. Screen Design Guidelines for Motivation in Interactive Multimedia Instruction: A Survey and Framework for Designers.

    ERIC Educational Resources Information Center

    Lee, Sung Heum; Boling, Elizabeth

    1999-01-01

    Identifies guidelines from the literature relating to screen design and design of interactive instructional materials. Describes two types of guidelines--those aimed at enhancing motivation and those aimed at preventing loss of motivation--for typography, graphics, color, and animation and audio. Proposes a framework for considering motivation in…

  9. Design of Functional Materials with Hydrogen-Bonded Host Frameworks

    NASA Astrophysics Data System (ADS)

    Soegiarto, Airon Cosanova

    The properties of molecular crystals are governed by the attributes of their molecular constituents and their solid-state arrangements, making control of crystal packing paramount when designing new materials with targeted functions. One effective strategy involves the use of robust host frameworks that encapsulate functional guests in molecular-scale cavities with tailored shapes, sizes, and chemical environments that enable systematic regulation of solid state properties. This approach promises to simplify the synthesis of molecular materials by decoupling the design of structure, provided by the host framework, from function, introduced by the guests. This thesis has reported a series of crystalline, structurally robust hosts based on guanidinium cations (G = C(NH2)3+) and the sulfonate moieties of organodisulfonate anions (DS; S = -O3S-R-SO3-). The host framework is based on layers of the 2-D GS sheet, which are interconnected by the organic residues (pillars) of the disulfonates, thereby producing a lamellar architecture with inclusion cavities, occupied by guest molecules, between the sheets. Notably, the GDS inclusion compounds exhibit numerous architectures such as bilayer, simple brick, and zigzag brick -- each endowed with uniquely sized and shaped cavities, suggesting that the aggregation motifs of the included guests can be controlled within the host lattice. Furthermore, the selectivity toward different architectures is governed by the relative size of the pillars and guests, allowing the construction of a "structural phase diagram" which can be used to predict the solid-state architecture of untested host-guest combinations. Consequently, a variety of functional molecules have been included in order to exploit these features. Chapter 3 reports the inclusion of polyconjugated molecules within the GDS hosts, generating various guest aggregation motifs -- edge-to-edge to face-to-edge to end-to-end. The effects of the various host and/or guest aggregation

  10. Design of crashworthy structures with controlled behavior in HCA framework

    NASA Astrophysics Data System (ADS)

    Bandi, Punit

    The field of crashworthiness design is gaining more interest and attention from automakers around the world due to increasing competition and tighter safety norms. In the last two decades, topology and topometry optimization methods from structural optimization have been widely explored to improve existing designs or conceive new designs with better crashworthiness. Although many gradient-based and heuristic methods for topology- and topometry-based crashworthiness design are available these days, most of them result in stiff structures that are suitable only for a set of vehicle components in which maximizing the energy absorption or minimizing the intrusion is the main concern. However, there are some other components in a vehicle structure that should have characteristics of both stiffness and flexibility. Moreover, the load paths within the structure and potential buckle modes also play an important role in efficient functioning of such components. For example, the front bumper, side frame rails, steering column, and occupant protection devices like the knee bolster should all exhibit controlled deformation and collapse behavior. The primary objective of this research is to develop new methodologies to design crashworthy structures with controlled behavior. The well established Hybrid Cellular Automaton (HCA) method is used as the basic framework for the new methodologies, and compliant mechanism-type (sub)structures are the highlight of this research. The ability of compliant mechanisms to efficiently transfer force and/or motion from points of application of input loads to desired points within the structure is used to design solid and tubular components that exhibit controlled deformation and collapse behavior under crash loads. In addition, a new methodology for controlling the behavior of a structure under multiple crash load scenarios by adaptively changing the contributions from individual load cases is developed. Applied to practical design problems

  11. Small Microprocessor for ASIC or FPGA Implementation

    NASA Technical Reports Server (NTRS)

    Kleyner, Igor; Katz, Richard; Blair-Smith, Hugh

    2011-01-01

    A small microprocessor, suitable for use in applications in which high reliability is required, was designed to be implemented in either an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). The design is based on commercial microprocessor architecture, making it possible to use available software development tools and thereby to implement the microprocessor at relatively low cost. The design features enhancements, including trapping during execution of illegal instructions. The internal structure of the design yields relatively high performance, with a significant decrease, relative to other microprocessors that perform the same functions, in the number of microcycles needed to execute macroinstructions. The problem meant to be solved in designing this microprocessor was to provide a modest level of computational capability in a general-purpose processor while adding as little as possible to the power demand, size, and weight of a system into which the microprocessor would be incorporated. As designed, this microprocessor consumes very little power and occupies only a small portion of a typical modern ASIC or FPGA. The microprocessor operates at a rate of about 4 million instructions per second with clock frequency of 20 MHz.
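    As a quick, hedged check of the figures quoted above (illustrative arithmetic only, not from the report), the ratio of the 20 MHz clock to the roughly 4 million instructions per second implies an average of about five clock cycles per macroinstruction:

        # Illustrative arithmetic based only on the quoted figures.
        clock_hz = 20e6          # stated clock frequency
        mips = 4e6               # stated instruction rate
        print(clock_hz / mips)   # ~5 clock cycles per macroinstruction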

  12. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. A general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.
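    The paper describes extended H2 synthesis with frequency weighting and disturbance accommodation; as a much-simplified, hedged illustration of the same state-feedback idea, the sketch below solves a plain LQR problem (the state-feedback special case of H2) for an assumed single-axis, double-integrator payload model. The mass, weights, and model are illustrative assumptions, not values from the paper.

        # Minimal LQR sketch: a simplified stand-in for the weighted H2 synthesis
        # described above, on an assumed 1-DOF double-integrator payload model.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        m = 100.0                               # assumed payload mass [kg]
        A = np.array([[0.0, 1.0], [0.0, 0.0]])  # states: position, velocity
        B = np.array([[0.0], [1.0 / m]])        # actuator force input

        Q = np.diag([1e4, 1.0])   # state weighting (penalize position error)
        R = np.array([[1e-2]])    # control weighting (penalize actuator effort)

        P = solve_continuous_are(A, B, Q, R)    # algebraic Riccati equation
        K = np.linalg.solve(R, B.T @ P)         # optimal state-feedback gain, u = -K x
        print("state-feedback gain K =", K)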

  13. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. In this paper a general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  14. Microgravity isolation system design: A modern control analysis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Many acceleration-sensitive, microgravity science experiments will require active vibration isolation from the manned orbiters on which they will be mounted. The isolation problem, especially in the case of a tethered payload, is a complex three-dimensional one that is best suited to modern-control design methods. These methods, although more powerful than their classical counterparts, can nonetheless go only so far in meeting the design requirements for practical systems. Once a tentative controller design is available, it must still be evaluated to determine whether or not it is fully acceptable, and to compare it with other possible design candidates. Realistically, such evaluation will be an inherent part of a necessary iterative design process. In this paper, an approach is presented for applying complex mu-analysis methods to a closed-loop vibration isolation system (experiment plus controller). An analysis framework is presented for evaluating nominal stability, nominal performance, robust stability, and robust performance of active microgravity isolation systems, with emphasis on the effective use of mu-analysis methods.

  15. An Integrated Framework Advancing Membrane Protein Modeling and Design

    PubMed Central

    Weitzner, Brian D.; Duran, Amanda M.; Tilley, Drew C.; Elazar, Assaf; Gray, Jeffrey J.

    2015-01-01

    Membrane proteins are critical functional molecules in the human body, constituting more than 30% of open reading frames in the human genome. Unfortunately, a myriad of difficulties in overexpression and reconstitution into membrane mimetics severely limit our ability to determine their structures. Computational tools are therefore instrumental to membrane protein structure prediction, consequently increasing our understanding of membrane protein function and their role in disease. Here, we describe a general framework facilitating membrane protein modeling and design that combines the scientific principles for membrane protein modeling with the flexible software architecture of Rosetta3. This new framework, called RosettaMP, provides a general membrane representation that interfaces with scoring, conformational sampling, and mutation routines that can be easily combined to create new protocols. To demonstrate the capabilities of this implementation, we developed four proof-of-concept applications for (1) prediction of free energy changes upon mutation; (2) high-resolution structural refinement; (3) protein-protein docking; and (4) assembly of symmetric protein complexes, all in the membrane environment. Preliminary data show that these algorithms can produce meaningful scores and structures. The data also suggest needed improvements to both sampling routines and score functions. Importantly, the applications collectively demonstrate the potential of combining the flexible nature of RosettaMP with the power of Rosetta algorithms to facilitate membrane protein modeling and design. PMID:26325167

  16. An Integrated Framework Advancing Membrane Protein Modeling and Design.

    PubMed

    Alford, Rebecca F; Koehler Leman, Julia; Weitzner, Brian D; Duran, Amanda M; Tilley, Drew C; Elazar, Assaf; Gray, Jeffrey J

    2015-09-01

    Membrane proteins are critical functional molecules in the human body, constituting more than 30% of open reading frames in the human genome. Unfortunately, a myriad of difficulties in overexpression and reconstitution into membrane mimetics severely limit our ability to determine their structures. Computational tools are therefore instrumental to membrane protein structure prediction, consequently increasing our understanding of membrane protein function and their role in disease. Here, we describe a general framework facilitating membrane protein modeling and design that combines the scientific principles for membrane protein modeling with the flexible software architecture of Rosetta3. This new framework, called RosettaMP, provides a general membrane representation that interfaces with scoring, conformational sampling, and mutation routines that can be easily combined to create new protocols. To demonstrate the capabilities of this implementation, we developed four proof-of-concept applications for (1) prediction of free energy changes upon mutation; (2) high-resolution structural refinement; (3) protein-protein docking; and (4) assembly of symmetric protein complexes, all in the membrane environment. Preliminary data show that these algorithms can produce meaningful scores and structures. The data also suggest needed improvements to both sampling routines and score functions. Importantly, the applications collectively demonstrate the potential of combining the flexible nature of RosettaMP with the power of Rosetta algorithms to facilitate membrane protein modeling and design. PMID:26325167

  17. A low-power wave union TDC implemented in FPGA

    SciTech Connect

    Wu, Jinyuan; Shi, Yanchen; Zhu, Douglas; /Illinois Math. Sci. Acad.

    2011-10-01

    A low-power time-to-digital convertor (TDC) for an application inside a vacuum has been implemented based on the Wave Union TDC scheme in a low-cost field programmable gate array (FPGA) device. Bench top tests have shown that a time measurement resolution better than 30 ps (standard deviation of time differences between two channels) is achieved. Special firmware design practices are taken to reduce power consumption. The measurements indicate that with 32 channels fitting in the FPGA device, the power consumption on the FPGA core voltage is approximately 9.3 mW/channel and the total power consumption including both core and I/O banks is less than 27 mW/channel.

  18. Superconducting cavity driving with FPGA controller

    NASA Astrophysics Data System (ADS)

    Czarski, Tomasz; Koprek, Waldemar; Poźniak, Krzysztof T.; Romaniuk, Ryszard S.; Simrock, Stefan; Brandt, Alexander; Chase, Brian; Carcagno, Ruben; Cancelo, Gustavo; Koeth, Timothy W.

    2006-12-01

    A digital control system for the superconducting cavities of a linear accelerator is presented. An FPGA-based controller, supported by a MATLAB system, was applied. An electrical model of the resonator was used for the design of the control system. Calibration of the signal path is considered. Identification of cavity parameters has been carried out for the adaptive control algorithm. Feed-forward and feedback modes were applied in operating the cavities. The required performance has been achieved, i.e. driving on resonance during filling and field stabilization during the flattop time, while keeping the power consumption at a reasonable level. Representative results of the experiments are presented for different levels of the cavity field gradient.
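    As a hedged, software-only illustration of the feed-forward plus feedback operation described above, the sketch below drives a cavity modeled as a first-order low-pass with an assumed half-bandwidth: a feed-forward term fills the cavity toward the setpoint and a proportional feedback term stabilizes the field during the flattop. The sampling period, bandwidth, gain, and setpoint are illustrative assumptions, not values from the paper.

        # Toy cavity-driving loop: feed-forward filling plus proportional feedback.
        import numpy as np

        dt = 1e-6                   # controller update period [s] (assumed)
        w12 = 2 * np.pi * 200.0     # cavity half-bandwidth [rad/s] (assumed)
        setpoint = 25.0             # desired flattop field level (assumed units)
        kp = 50.0                   # proportional feedback gain (assumed)

        v = 0.0
        for n in range(2000):
            t = n * dt
            feedforward = setpoint * w12                           # steady-state drive estimate
            feedback = kp * (setpoint - v) if t > 0.5e-3 else 0.0  # feedback during flattop only
            u = feedforward + feedback
            v += dt * (-w12 * v + u)                               # first-order cavity model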

  19. Design and implementation of an algorithm for creating templates for the purpose of iris biometric authentication through the analysis of textures implemented on a FPGA

    NASA Astrophysics Data System (ADS)

    Giacometto, F. J.; Vilardy, J. M.; Torres, C. O.; Mattos, L.

    2011-01-01

    Problems related to security in access control have led to applications that rely on characteristics unique to individuals, such as biometric features. Among biometric images, the iris is of particular importance because its texture pattern, like the pattern of retinal blood vessels, is unique to each person. This paper presents an FPGA implementation of an algorithm for creating templates for biometric authentication from ocular features, based on the premise that the iris texture pattern is unique to each individual. The authentication is built on edge-extraction methods, the segmentation principles of John Daugman and Libor Masek, and normalization, in order to obtain the templates needed to search for matches in a database and produce the expected authentication results.
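    The pipeline above (edge extraction, Daugman/Masek segmentation, normalization) yields binary iris templates; the matching stage that typically follows compares templates by their fractional Hamming distance over the unmasked bits. The sketch below shows only that matching step, as a hedged illustration; the code size, masks, and decision threshold are assumptions and this is not the paper's FPGA implementation.

        # Fractional Hamming distance between two binary iris codes with occlusion masks.
        import numpy as np

        def hamming_distance(code_a, code_b, mask_a, mask_b):
            """Fraction of disagreeing bits over the jointly unmasked region."""
            valid = mask_a & mask_b
            if valid.sum() == 0:
                return 1.0
            return float(((code_a ^ code_b) & valid).sum()) / float(valid.sum())

        rng = np.random.default_rng(0)
        code_a = rng.integers(0, 2, 2048, dtype=np.uint8)   # assumed 2048-bit template
        code_b = code_a.copy()
        code_b[:100] ^= 1                                    # slightly perturbed copy
        mask = np.ones(2048, dtype=np.uint8)

        hd = hamming_distance(code_a, code_b, mask, mask)
        print("match" if hd < 0.32 else "no match", hd)      # threshold is an assumption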

  20. INSTITUTIONALIZING SAFEGUARDS-BY-DESIGN: HIGH-LEVEL FRAMEWORK

    SciTech Connect

    Trond Bjornard PhD; Joseph Alexander; Robert Bean; Brian Castle; Scott DeMuth, Ph.D.; Phillip Durst; Michael Ehinger; Prof. Michael Golay, Ph.D.; Kevin Hase, Ph.D.; David J. Hebditch, DPhil; John Hockert, Ph.D.; Bruce Meppen; James Morgan; Jerry Phillips, Ph.D., PE

    2009-02-01

    participation in facility design options analysis in the conceptual design phase to enhance intrinsic features, among others. The SBD process is unlikely to be broadly applied in the absence of formal requirements to do so, or compelling evidence of its value. Neither exists today. A formal instrument to require the application of SBD is needed and would vary according to both the national and regulatory environment. Several possible approaches to implementation of the requirements within the DOE framework are explored in this report. Finally, there are numerous barriers to the implementation of SBD, including the lack of a strong safeguards culture, intellectual property concerns, the sensitive nature of safeguards information, and the potentially divergent or conflicting interests of participants in the process. In terms of SBD implementation in the United States, there are no commercial nuclear facilities that are under IAEA safeguards. Efforts to institutionalize SBD must address these issues. Specific work in FY09 could focus on the following: finalizing the proposed SBD process for use by DOE and performing a pilot application on a DOE project in the planning phase; developing regulatory options for mandating SBD; further development of safeguards-related design guidance, principles and requirements; development of a specific SBD process tailored to the NRC environment; and development of an engagement strategy for the IAEA and other international partners.

  1. Analysis and System Design Framework for Infrared Spatial Heterodyne Spectrometers

    SciTech Connect

    Cooke, B.J.; Smith, B.W.; Laubscher, B.E.; Villeneuve, P.V.; Briles, S.D.

    1999-04-05

    The authors present a preliminary analysis and design framework developed for the evaluation and optimization of infrared, Imaging Spatial Heterodyne Spectrometer (SHS) electro-optic systems. Commensurate with conventional interferometric spectrometers, SHS modeling requires an integrated analysis environment for rigorous evaluation of system error propagation due to the detection process, detection noise, system motion, and the retrieval and calibration algorithms. The analysis tools provide for optimization of critical system parameters and components including: (1) optical aperture, f-number, and spectral transmission; (2) SHS interferometer grating and Littrow parameters; (3) image plane requirements as well as cold shield, optical filtering, focal-plane dimensions, pixel dimensions, and quantum efficiency; (4) SHS spatial and temporal sampling parameters; and (5) retrieval and calibration algorithm issues.

  2. A Novel Modeling Framework for Heterogeneous Catalyst Design

    NASA Astrophysics Data System (ADS)

    Katare, Santhoji; Bhan, Aditya; Caruthers, James; Delgass, Nicholas; Lauterbach, Jochen; Venkatasubramanian, Venkat

    2002-03-01

    A systems-oriented, integrated knowledge architecture that enables the use of data from High Throughput Experiments (HTE) for catalyst design is being developed. Higher-level critical reasoning is required to extract information efficiently from the increasingly available HTE data and to develop predictive models that can be used for design purposes. Towards this objective, we have developed a framework that aids the catalyst designer in negotiating the data and model complexities. Traditional kinetic and statistical tools have been systematically implemented and novel artificial intelligence tools have been developed and integrated to speed up the process of modeling catalytic reactions. Multiple nonlinear models that describe CO oxidation on supported metals have been screened using qualitative and quantitative features based optimization ideas. Physical constraints of the system have been used to select the optimum model parameters from the multiple solutions to the parameter estimation problem. Preliminary results about the selection of catalyst descriptors that match a target performance and the use of HTE data for refining fundamentals based models will be discussed.

  3. Designing Metal-Organic Frameworks for Catalytic Applications

    NASA Astrophysics Data System (ADS)

    Ma, Liqing; Lin, Wenbin

    Metal-organic frameworks (MOFs) are constructed by linking organic bridging ligands with metal-connecting points to form infinite network structures. Fine tuning the porosities of and functionalities within MOFs through judicious choices of bridging ligands and metal centers has allowed their use as efficient heterogeneous catalysts. This chapter reviews recent developments in designing porous MOFs for a variety of catalytic reactions. Following a brief introduction to MOFs and a comparison between porous MOFs and zeolites, we categorize catalytically active achiral MOFs based on the types of catalytic sites and organic transformations. The unsaturated metal-connecting points in MOFs can act as catalytic sites, so can the functional groups that are built into the framework of a porous MOF. Noble metal nanoparticles can also be entrapped inside porous MOFs for catalytic reactions. Furthermore, the channels of porous MOFs have been used as reaction hosts for photochemical and polymerization reactions. We also summarize the latest results of heterogeneous asymmetric catalysis using homochiral MOFs. Three distinct strategies have been utilized to develop homochiral MOFs for catalyzing enantioselective reactions, namely the synthesis of homochiral MOFs from achiral building blocks by seeding or by statistically manipulating the crystal growth, directing achiral ligands to form homochiral MOFs in chiral environments, and incorporating chiral linker ligands with functionalized groups. The applications of homochiral MOFs in several heterogeneous asymmetric catalytic reactions are also discussed. The ability to synthesize value-added chiral molecules using homochiral MOF catalysts differentiates them from traditional zeolite catalysis, and we believe that in the future many more homochiral MOFs will be designed for catalyzing numerous asymmetric organic transformations.

  4. Framework for Implementing Engineering Senior Design Capstone Courses and Design Clinics

    ERIC Educational Resources Information Center

    Franchetti, Matthew; Hefzy, Mohamed Samir; Pourazady, Mehdi; Smallman, Christine

    2012-01-01

    Senior design capstone projects for engineering students are essential components of an undergraduate program that enhances communication, teamwork, and problem solving skills. Capstone projects with industry are well established in management, but not as heavily utilized in engineering. This paper outlines a general framework that can be used by…

  5. Architectural Design and the Learning Environment: A Framework for School Design Research

    ERIC Educational Resources Information Center

    Gislason, Neil

    2010-01-01

    This article develops a theoretical framework for studying how instructional space, teaching and learning are related in practice. It is argued that a school's physical design can contribute to the quality of the learning environment, but several non-architectural factors also determine how well a given facility serves as a setting for teaching…

  6. Reusable rocket engine intelligent control system framework design, phase 2

    NASA Technical Reports Server (NTRS)

    Nemeth, ED; Anderson, Ron; Ols, Joe; Olsasky, Mark

    1991-01-01

    Elements of an advanced functional framework for reusable rocket engine propulsion system control are presented for the Space Shuttle Main Engine (SSME) demonstration case. Functional elements of the baseline functional framework are defined in detail. The SSME failure modes are evaluated and specific failure modes identified for inclusion in the advanced functional framework diagnostic system. Active control of the SSME start transient is investigated, leading to the identification of a promising approach to mitigating start transient excursions. Key elements of the functional framework are simulated and demonstration cases are provided. Finally, the advanced functional framework for control of reusable rocket engines is presented.

  7. STRS SpaceWire FPGA Module

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.

    2011-01-01

    An FPGA module leverages the previous work from Goddard Space Flight Center (GSFC) relating to NASA's Space Telecommunications Radio System (STRS) project. The STRS SpaceWire FPGA Module is written in the Verilog Register Transfer Level (RTL) language, and it encapsulates an unmodified GSFC core (which is written in VHDL). The module has the necessary inputs/outputs (I/Os) and parameters to integrate seamlessly with the SPARC I/O FPGA Interface module (also developed for the STRS operating environment, OE). Software running on the SPARC processor can access the configuration and status registers within the SpaceWire module. This allows software to control and monitor the SpaceWire functions, but it is also used to give software direct access to what is transmitted and received through the link. SpaceWire data characters can be sent/received through the software interface, as well as through the dedicated interface on the GSFC core. Similarly, SpaceWire time codes can be sent/received through the software interface or through a dedicated interface on the core. This innovation is designed for plug-and-play integration in the STRS OE. The SpaceWire module simplifies the interfaces to the GSFC core, and synchronizes all I/O to a single clock. An interrupt output (with optional masking) identifies time-sensitive events within the module. Test modes were added to allow internal loopback of the SpaceWire link and internal loopback of the client-side data interface.

  8. Reduced Design Load Basis for Ultimate Blade Loads Estimation in Multidisciplinary Design Optimization Frameworks

    NASA Astrophysics Data System (ADS)

    Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth

    2016-09-01

    The aim is to provide a fast and reliable approach to estimating ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large number of computationally expensive simulations, which cannot be efficiently run at each cost function evaluation of an MDO process. This work describes a method that allows the calculation of blade load envelopes to be integrated inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and for a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and for a deterministic reduced DLB. Ultimate loads extracted from the two DLBs, for each of the two blade designs, are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap and that can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.
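    A hedged sketch of the envelope computation discussed above: the ultimate load envelope is taken as the channel-wise extremes over all simulated time series of a load basis, computed once for a full DLB and once for a reduced subset. Array shapes, channel meanings, and the scaling step are illustrative assumptions, not the paper's procedure.

        # Channel-wise ultimate-load envelopes for a full and a reduced design load basis.
        import numpy as np

        def load_envelope(time_series):
            """time_series: (n_simulations, n_samples, n_channels) -> (2, n_channels) max/min."""
            flat = time_series.reshape(-1, time_series.shape[-1])
            return np.vstack([flat.max(axis=0), flat.min(axis=0)])

        rng = np.random.default_rng(1)
        full_dlb = rng.normal(size=(100, 6000, 4))   # e.g. blade-root moment channels (synthetic)
        reduced_dlb = full_dlb[:10]                  # deterministic subset of load cases

        env_full = load_envelope(full_dlb)
        env_reduced = load_envelope(reduced_dlb)
        scale = env_full[0] / env_reduced[0]         # per-channel correction of the reduced envelope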

  9. MicroBlaze implementation of GPS/INS integrated system on Virtex-6 FPGA.

    PubMed

    Bhogadi, Lokeswara Rao; Gottapu, Sasi Bhushana Rao; Konala, Vvs Reddy

    2015-01-01

    The emphasis of this paper is on MicroBlaze implementation of GPS/INS integrated system on Virtex-6 field programmable gate array (FPGA). Issues related to accuracy of position, resource usage of FPGA in terms of slices, DSP48, block random access memory, computation time, latency and power consumption are presented. An improved design of a loosely coupled GPS/INS integrated system is described in this paper. The inertial navigation solution and Kalman filter computations are provided by the MicroBlaze on Virtex-6 FPGA. The real time processed navigation solutions are updated with a rate of 100 Hz.
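    A hedged, single-axis sketch of the loosely coupled scheme described above: the INS propagates a position/velocity state at 100 Hz, and a Kalman update corrects it whenever a GPS position fix arrives. The noise values and model are illustrative assumptions; the actual MicroBlaze implementation is not shown.

        # One INS prediction step followed by one GPS position update (loosely coupled).
        import numpy as np

        dt = 0.01                                  # 100 Hz navigation update (as stated above)
        F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
        H = np.array([[1.0, 0.0]])                 # GPS measures position only
        Q = np.diag([1e-4, 1e-3])                  # process noise (assumed)
        R = np.array([[4.0]])                      # GPS position variance, m^2 (assumed)

        x = np.array([[0.0], [1.0]])               # initial state estimate
        P = np.eye(2)

        def ins_predict(x, P, accel):
            x = F @ x + np.array([[0.5 * dt**2], [dt]]) * accel
            return x, F @ P @ F.T + Q

        def gps_update(x, P, z):
            y = z - H @ x                           # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            return x + K @ y, (np.eye(2) - K @ H) @ P

        x, P = ins_predict(x, P, accel=0.2)
        x, P = gps_update(x, P, z=np.array([[0.05]]))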

  10. MicroBlaze implementation of GPS/INS integrated system on Virtex-6 FPGA.

    PubMed

    Bhogadi, Lokeswara Rao; Gottapu, Sasi Bhushana Rao; Konala, Vvs Reddy

    2015-01-01

    The emphasis of this paper is on MicroBlaze implementation of GPS/INS integrated system on Virtex-6 field programmable gate array (FPGA). Issues related to accuracy of position, resource usage of FPGA in terms of slices, DSP48, block random access memory, computation time, latency and power consumption are presented. An improved design of a loosely coupled GPS/INS integrated system is described in this paper. The inertial navigation solution and Kalman filter computations are provided by the MicroBlaze on Virtex-6 FPGA. The real time processed navigation solutions are updated with a rate of 100 Hz. PMID:26543763

  11. Rethinking modeling framework design: object modeling system 3.0

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Object Modeling System (OMS) is a framework for environmental model development, data provisioning, testing, validation, and deployment. It provides a bridge for transferring technology from the research organization to the program delivery agency. The framework provides a consistent and efficie...

  12. Application of Frameworks in the Analysis and (Re)design of Interactive Visual Learning Tools

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2009-01-01

    Interactive visual learning tools (IVLTs) are software environments that encode and display information visually and allow learners to interact with the visual information. This article examines the application and utility of frameworks in the analysis and design of IVLTs at the micro level. Frameworks play an important role in any design. They…

  13. Professional Development of Instructional Designers: A Proposed Framework Based on a Singapore Study

    ERIC Educational Resources Information Center

    Cheong, Eleen; Wettasinghe, Marissa C.; Murphy, James

    2006-01-01

    This article presents a professional development action plan or framework for instructional designers (IDs) working as external consultants for corporate companies. It also describes justifications why such an action plan is necessary for these professionals. The framework aims to help practising instructional designers to continuously and…

  14. Development and Application of a Systems Engineering Framework to Support Online Course Design and Delivery

    ERIC Educational Resources Information Center

    Bozkurt, Ipek; Helm, James

    2013-01-01

    This paper develops a systems engineering-based framework to assist in the design of an online engineering course. Specifically, the purpose of the framework is to provide a structured methodology for the design, development and delivery of a fully online course, either brand new or modified from an existing face-to-face course. The main strength…

  15. Unified Simulation and Analysis Framework for Deep Space Navigation Design

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan; Chuang, Jason; Olsen, Carrie

    2013-01-01

    As the technology that enables advanced deep space autonomous navigation continues to develop and the requirements for such capability continues to grow, there is a clear need for a modular expandable simulation framework. This tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems as well as to develop system requirements, in order to determine its effect on the sizing of the integrated vehicle. The development for such a framework is built upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and possible state measurements and observations to feed into the simulation implementation structure. These models also allow a common environment for the capture of an increasingly complex operational architecture, involving multiple spacecraft, ground stations, and communication networks. In order to address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions amongst spacecraft. This paper describes the development of this framework, and the modeling processes used to capture a deep space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. This developed package shows the capability of the modeling framework, including its modularity, analysis capabilities, and its unification back to the overall system requirements and definition.

  16. Photoelectric radar servo control system based on ARM+FPGA

    NASA Astrophysics Data System (ADS)

    Wu, Kaixuan; Zhang, Yue; Li, Yeqiu; Dai, Qin; Yao, Jun

    2016-01-01

    To meet the requirements for a smaller, faster, and more responsive photoelectric radar servo control system, we propose a servo controller built around an ARM + FPGA architecture. The parallel processing capability of the FPGA is used for encoder feedback processing, PWM carrier modulation, A/B quadrature decoding, and related tasks, while the ARM embedded system provides a high-speed implementation of the PID algorithm. In experiments, the closed-loop response rate of the system reached 2000 cycles/s and, with a high-precision turntable shaft, the PID algorithm achieved servo position control with an accuracy of ±1 encoder count. This article first carries out an in-depth study of the embedded servo control system hardware, selecting the ARM and FPGA chips as the main devices according to the performance targets set in advance: the ARM chip is Samsung's S3C2440 of ARM7 architecture and the FPGA is Xilinx's XC3S400. The ARM and FPGA communicate over an SPI bus, which saves a large number of pins and eases later system upgrades. The system obtains speed data from the photoelectric encoder through the FPGA, which transmits the data to the ARM; the ARM converts them into the corresponding position and velocity values in a timely manner and generates the corresponding PWM waveform to control motor rotation by comparing the measured position and velocity data with the preset values. Schematics and a dedicated PCB for the photoelectric radar servo control system were then designed according to the system requirements. Secondly, a PID algorithm is used to control the servo system: the speed data obtained from the photoelectric encoder are converted into position and speed data via a high-speed digital PID algorithm and coordinate models. Finally, a
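    A hedged software sketch of the position PID loop described above: the FPGA supplies encoder counts, and the processor computes a clamped PID output that sets the PWM duty cycle. The 2000 Hz loop rate follows the abstract; the gains and limits are illustrative assumptions.

        # Discrete PID position controller with output clamping for PWM generation.
        class PID:
            def __init__(self, kp, ki, kd, out_limit):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.out_limit = out_limit
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement, dt):
                error = setpoint - measurement
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                self.prev_error = error
                out = self.kp * error + self.ki * self.integral + self.kd * derivative
                return max(-self.out_limit, min(self.out_limit, out))  # clamp to PWM range

        pid = PID(kp=0.8, ki=0.05, kd=0.01, out_limit=1.0)                   # duty cycle in [-1, 1]
        duty = pid.update(setpoint=10000, measurement=9950, dt=1.0 / 2000)   # 2000 Hz loop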

  17. Concepts for designing and fabricating metal implant frameworks for hybrid implant prostheses.

    PubMed

    Drago, Carl; Howell, Kent

    2012-07-01

    Edentulous patients have reported difficulties in managing complete dentures; they have also reported functional concerns and higher expectations regarding complete dentures than the dentists who have treated them. Some of the objectives of definitive fixed implant prosthodontic care include predictable, long-term prostheses, improved function, and maintenance of alveolar bone. One of the keys to long-term clinical success is the design and fabrication of metal frameworks that support implant prostheses. Multiple, diverse methods have been reported regarding framework design in implant prosthodontics. Original designs were developed empirically, without the benefit of laboratory testing. Prosthetic complications reported after occlusal loading included screw loosening, screw fracture, prosthesis fracture, crestal bone loss around implants, and implant loss. Numerous authors promoted accurately fitting frameworks; however, it has been noted that metal frameworks do not fit accurately. Passively fitting metal implant frameworks and implants have not been realized. Biologic consequences of ill-fitting frameworks were not well understood. Basic engineering principles were then incorporated into implant framework designs; however, laboratory testing was lacking. It has been reported that I- and L-beam designs were the best clinical option. With the advent of CAD/CAM protocols, milled titanium frameworks became quite popular in implant prosthodontics. The purpose of this article is to discuss current and past literature regarding implant-retained frameworks for full-arch, hybrid restorations. Benefits, limitations, and complications associated with this type of prosthesis will be reviewed. This discussion will include the relative inaccuracy of casting/implant fit and improved accuracy noted with CAD/CAM framework/implant fit; cantilever extensions relative to the A/P implant spread; and mechanical properties associated with implant frameworks including I- and L

  18. Design Framework for an Adaptive MOOC Enhanced by Blended Learning: Supplementary Training and Personalized Learning for Teacher Professional Development

    ERIC Educational Resources Information Center

    Gynther, Karsten

    2016-01-01

    The research project has developed a design framework for an adaptive MOOC that complements the MOOC format with blended learning. The design framework consists of a design model and a series of learning design principles which can be used to design in-service courses for teacher professional development. The framework has been evaluated by…

  19. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)

    PubMed Central

    Li, Isaac TS; Shum, Warren; Truong, Kevin

    2007-01-01

    Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for improving the computation speed of genomic database searching. PMID:17555593
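    For reference, the per-cell recurrence that the FPGA grid parallelizes is shown below as a plain software sketch: each cell of the SW matrix depends only on its left, upper, and upper-left neighbours, which is what allows an array of identical hardware cells to fill the matrix as data propagate through the circuit. The scoring values are common defaults, not necessarily those used in the paper.

        # Smith-Waterman local alignment with a linear gap penalty (software reference).
        def smith_waterman(seq1, seq2, match=2, mismatch=-1, gap=-1):
            rows, cols = len(seq1) + 1, len(seq2) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i-1][j-1] + (match if seq1[i-1] == seq2[j-1] else mismatch)
                    H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))   # optimal local alignment score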

  20. CCD fiber Bragg grating sensor demodulation system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Ning, T. G.; Pei, L.; Li, J.; Wen, X. D.; Li, Z. X.

    2010-11-01

    A CCD fiber Bragg grating sensor demodulation system based on an FPGA is proposed. The system is divided into three units: a spectral imaging unit, a signal detection unit, and a signal acquisition and processing unit. The spectral imaging unit uses a reflective imaging system, which has little aberration, small size, simple structure, and low cost. In the signal detection unit, the spectral information is acquired by a CCD detector, so the measurement of a spectral line is converted into the measurement of the pixel position of its spot; multiple points can be measured simultaneously, which improves the system's reusability, stability, and reliability. In the signal acquisition and processing unit, the drive circuit and the signal acquisition and processing circuit are implemented in a programmable logic device (FPGA), making full use of its programmability and real-time features; this simplifies the system design and improves the system's real-time monitoring capability and demodulation speed.
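    A hedged sketch of the demodulation arithmetic implied above: once the CCD pixel data are acquired, the Bragg wavelength can be estimated from the intensity-weighted centroid of the reflected spot and an assumed linear pixel-to-wavelength calibration. The centroid method and calibration constants are illustrative assumptions, not necessarily the paper's algorithm.

        # Estimate a Bragg wavelength from a CCD line image via an intensity-weighted centroid.
        import numpy as np

        def bragg_wavelength(pixels, intensities, lambda0=1545.0, nm_per_pixel=0.01):
            """Centroid of the spot in pixel units, mapped to nm with an assumed calibration."""
            centroid = np.sum(pixels * intensities) / np.sum(intensities)
            return lambda0 + centroid * nm_per_pixel

        pix = np.arange(2048, dtype=float)
        spot = np.exp(-0.5 * ((pix - 812.3) / 4.0) ** 2)    # synthetic FBG reflection spot
        print(bragg_wavelength(pix, spot))                   # ~1553.12 nm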

  1. Application of FPGA technology to performance limitations in radiation therapy

    NASA Astrophysics Data System (ADS)

    DeMarco, John J.; Smathers, J. B.; Solberg, Tim D.; Casselman, Steve

    1996-10-01

    The field programmable gate array (FPGA) is a promising technology for increasing computation performance by providing for the design of custom chips through programmable logic blocks. This technology was used to implement and test a hardware random number generator (RNG) against four software algorithms. The custom hardware consists of a Sun SBus-based board (EVC) designed around a Xilinx FPGA. A timing analysis indicates the Sun/EVC hardware generator computes 1 x 10^6 random numbers approximately 50 times faster than the multiplicative congruential algorithm. The hardware and software RNGs were also compared using a Monte Carlo photon transport algorithm. For this comparison the Sun/EVC generator produces a performance increase of approximately 2.0 versus the software generators. This comparison is based upon 1 x 10^5 photon histories.
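    The multiplicative congruential algorithm mentioned above is the classic software baseline; below is a hedged sketch of that baseline, here with the common Park-Miller constants (an assumption), feeding a toy photon-history loop.

        # Multiplicative congruential generator (Park-Miller constants assumed) as a software baseline.
        def mcg(seed, a=16807, m=2**31 - 1):
            x = seed
            while True:
                x = (a * x) % m
                yield x / m                  # uniform deviate in (0, 1)

        rng = mcg(seed=12345)
        interactions = sum(1 for _ in range(10**5) if next(rng) < 0.3)   # toy transport decision per history
        print(interactions)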

  2. Single Event Testing on Complex Devices: Test Like You Fly versus Test-Specific Design Structures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth A.

    2014-01-01

    We present a framework for evaluating complex digital systems targeted for harsh radiation environments such as space. Focus is limited to analyzing the single event upset (SEU) susceptibility of designs implemented inside Field Programmable Gate Array (FPGA) devices. Tradeoffs are provided between application-specific versus test-specific test structures.

  3. Wire Position Monitoring with FPGA based Electronics

    SciTech Connect

    Eddy, N.; Lysenko, O.; /Fermilab

    2009-01-01

    This fall the first Tesla-style cryomodule cooldown test is being performed at Fermilab. The Instrumentation Department is preparing the electronics to handle the data from a set of wire position monitors (WPMs). For simulation purposes a prototype pipe with a WPM has been developed and built. The system is based on the measurement of signals induced in pickups by a 320 MHz signal carried by a wire through the WPM. The 0.5 mm diameter Cu wire is stretched along the pipe with a tensioning load of 9.07 kg and has a length of 1.1 m. The WPM consists of four 50 Ohm striplines spaced 90 degrees apart. An FPGA-based digitizer scans the WPM and transmits the data to a PC via a VME interface, and the data acquisition is based on the PC running LabView. In order to increase the accuracy and convenience of the measurements some modifications were required: the first is the implementation of an average-and-decimation filter algorithm in the integrator operation in the FPGA; the second is the development of an alternative tool for WPM measurements on the PC. The paper describes how these modifications were performed and presents test results of the new design. The latest cryomodule generation has a single chain of seven WPMs (placed in critical positions: at each end, at the three posts, and between the posts) to monitor cold mass displacement during cooldown; that system was developed in Italy in collaboration with DESY, and similar developments have taken place at Fermilab in the frame of cryomodule construction for SCRF research.
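    A hedged software model of the average-and-decimation step added to the FPGA integrator: consecutive integrator samples are block-averaged and only one output per block is kept, reducing noise and data rate. The block length is an illustrative assumption.

        # Block-average N consecutive samples and keep one output per block.
        import numpy as np

        def average_decimate(samples, factor=16):
            n = (len(samples) // factor) * factor          # drop the ragged tail
            return samples[:n].reshape(-1, factor).mean(axis=1)

        raw = np.sin(2 * np.pi * 0.001 * np.arange(4096)) + 0.05 * np.random.randn(4096)
        filtered = average_decimate(raw, factor=16)        # 4096 samples -> 256 outputs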

  4. A design thinking framework for healthcare management and innovation.

    PubMed

    Roberts, Jess P; Fisher, Thomas R; Trowbridge, Matthew J; Bent, Christine

    2016-03-01

    The business community has learned the value of design thinking as a way to innovate in addressing people's needs--and health systems could benefit enormously from doing the same. This paper lays out how design thinking applies to healthcare challenges and how systems might utilize this proven and accessible problem-solving process. We show how design thinking can foster new approaches to complex and persistent healthcare problems through human-centered research, collective and diverse teamwork and rapid prototyping. We introduce the core elements of design thinking for a healthcare audience and show how it can supplement current healthcare management, innovation and practice.

  5. A design thinking framework for healthcare management and innovation.

    PubMed

    Roberts, Jess P; Fisher, Thomas R; Trowbridge, Matthew J; Bent, Christine

    2016-03-01

    The business community has learned the value of design thinking as a way to innovate in addressing people's needs--and health systems could benefit enormously from doing the same. This paper lays out how design thinking applies to healthcare challenges and how systems might utilize this proven and accessible problem-solving process. We show how design thinking can foster new approaches to complex and persistent healthcare problems through human-centered research, collective and diverse teamwork and rapid prototyping. We introduce the core elements of design thinking for a healthcare audience and show how it can supplement current healthcare management, innovation and practice. PMID:27001093

  6. Evidence-Based mHealth Chronic Disease Mobile App Intervention Design: Development of a Framework

    PubMed Central

    Peeples, Malinda M; Anthony Kouyaté, Robin C

    2016-01-01

    Background Mobile technology offers new capabilities that can help to drive important aspects of chronic disease management at both an individual and population level, including the ability to deliver real-time interventions that can be connected to a health care team. A framework that supports both development and evaluation is needed to understand the aspects of mHealth that work for specific diseases, populations, and in the achievement of specific outcomes in real-world settings. This framework should incorporate design structure and process, which are important to translate clinical and behavioral evidence, user interface, experience design and technical capabilities into scalable, replicable, and evidence-based mobile health (mHealth) solutions to drive outcomes. Objective The purpose of this paper is to discuss the identification and development of an app intervention design framework, and its subsequent refinement through development of various types of mHealth apps for chronic disease. Methods The process of developing the framework was conducted between June 2012 and June 2014. Informed by clinical guidelines, standards of care, clinical practice recommendations, evidence-based research, best practices, and translated by subject matter experts, a framework for mobile app design was developed and the refinement of the framework across seven chronic disease states and three different product types is described. Results The result was the development of the Chronic Disease mHealth App Intervention Design Framework. This framework allowed for the integration of clinical and behavioral evidence for intervention and feature design. The application to different diseases and implementation models guided the design of mHealth solutions for varying levels of chronic disease management. Conclusions The framework and its design elements enable replicable product development for mHealth apps and may provide a foundation for the digital health industry to

  7. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    NASA Astrophysics Data System (ADS)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. The identification and hierarchy of the framework requirements, and the corresponding solutions for the reference MDO frameworks (a general one and an aircraft-oriented one), were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improvement of the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.
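    As a hedged illustration of the AHP portion of the approach (the QFD step is not shown), criterion weights can be taken from the principal eigenvector of a pairwise-comparison matrix; the criteria and comparison values below are illustrative assumptions, not those of the study.

        # AHP priority weights from the principal eigenvector of a pairwise-comparison matrix.
        import numpy as np

        def ahp_weights(pairwise):
            vals, vecs = np.linalg.eig(pairwise)
            principal = np.real(vecs[:, np.argmax(np.real(vals))])
            return principal / principal.sum()

        # Example criteria: usability, integration, extensibility (assumed).
        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 0.5, 1.0]])
        print(ahp_weights(A))   # normalized priority weights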

  8. A Scaffolding Design Framework for Software to Support Science Inquiry

    ERIC Educational Resources Information Center

    Quintana, Chris; Reiser, Brian J.; Davis, Elizabeth A.; Krajcik, Joseph; Fretz, Eric; Duncan, Ravit Golan; Kyza, Eleni; Edelson, Daniel; Soloway, Elliot

    2004-01-01

    The notion of scaffolding learners to help them succeed in solving problems otherwise too difficult for them is an important idea that has extended into the design of scaffolded software tools for learners. However, although there is a growing body of work on scaffolded tools, scaffold design, and the impact of scaffolding, the field has not yet…

  9. Adapting the Mathematical Task Framework to Design Online Didactic Objects

    ERIC Educational Resources Information Center

    Bowers, Janet; Bezuk, Nadine; Aguilar, Karen

    2011-01-01

    Designing didactic objects involves imagining how students can conceive of specific mathematical topics and then imagining what types of classroom discussions could support these mental constructions. This study investigated whether it was possible to design Java applets that might serve as didactic objects to support online learning where…

  10. An Exposition of Current Mobile Learning Design Guidelines and Frameworks

    ERIC Educational Resources Information Center

    Teall, Ed; Wang, Minjuan; Callaghan, Vic; Ng, Jason W. P.

    2014-01-01

    As mobile devices with wireless access become more readily available, learning delivered via mobile devices of all types must be designed to ensure successful learning. This paper first examines three questions related to the design of mobile learning: 1) what mobile learning (m-learning) guidelines can be identified in the current literature, 2)…

  11. Design and Performance Frameworks for Constructing Problem-Solving Simulations

    ERIC Educational Resources Information Center

    Stevens, Rons; Palacio-Cayetano, Joycelin

    2003-01-01

    Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks…

  12. A Framework for Web 2.0 Learning Design

    ERIC Educational Resources Information Center

    Bower, Matt; Hedberg, John G.; Kuswara, Andreas

    2010-01-01

    This paper describes an approach to conceptualising and performing Web 2.0-enabled learning design. Based on the Technological, Pedagogical and Content Knowledge model of educational practice, the approach conceptualises Web 2.0 learning design by relating Anderson and Krathwohl's Taxonomy of Learning, Teaching and Assessing, and different types…

  13. Adapting the mathematical task framework to design online didactic objects

    NASA Astrophysics Data System (ADS)

    Bowers, Janet; Bezuk, Nadine; Aguilar, Karen

    2011-06-01

    Designing didactic objects involves imagining how students can conceive of specific mathematical topics and then imagining what types of classroom discussions could support these mental constructions. This study investigated whether it was possible to design Java applets that might serve as didactic objects to support online learning where 'discussions' are broadly defined as the conversations students have with themselves as they interact with the dynamic mathematical representations on the screen. Eighty-four pre-service elementary teachers enrolled in hybrid mathematics courses were asked to interact with a series of applets designed to support their understanding of qualitative graphing. The results of the surveys indicate that various design features of the applets did in fact cause perturbations and opportunities for resolutions that enabled the users to 'discuss' their learning by reflecting on their in-class discussions and online activities. The discussion includes four design features for guiding future applet creation.

  14. ROSE: The Design of a General Tool for the Independent Optimization of Object-Oriented Frameworks

    SciTech Connect

    Davis, K.; Philip, B.; Quinlan, D.

    1999-05-18

    ROSE represents a programmable preprocessor for the highly aggressive optimization of C++ object-oriented frameworks. A fundamental feature of ROSE is that it preserves the semantics, the implicit meaning, of the object-oriented framework's abstractions throughout the optimization process, permitting the framework's abstractions to be recognized and optimizations to capitalize upon the added value of the framework's true meaning. In contrast, a C++ compiler only sees the semantics of the C++ language and thus is severely limited in what optimizations it can introduce. The use of the semantics of the framework's abstractions avoids program analysis that would be incapable of recapturing the framework's full semantics from the C++ language implementation of the application or framework, just as no level of program analysis within the C++ compiler would be expected to recognize the use of adaptive mesh refinement and introduce optimizations based upon such information. Since ROSE is programmable, additional specialized program analysis is possible, which then complements the semantics of the framework's abstractions. Enabling an optimization mechanism to use the high-level semantics of the framework's abstractions together with a programmable level of program analysis (e.g. dependence analysis), at the level of the framework's abstractions, allows for the design of high-performance object-oriented frameworks with uniquely tailored, sophisticated optimizations far beyond the limits of contemporary serial FORTRAN 77, C or C++ language compiler technology. In short, faster, more highly aggressive optimizations are possible. The resulting optimizations are literally driven by the framework's definition of its abstractions. Since the abstractions within a framework are of third-party design, the optimizations are similarly of third-party design, specifically independent of the compiler and the applications that use the framework. The interface to ROSE is

  15. Evaluating a Professional Development Framework to Empower Chemistry Teachers to Design Context-Based Education

    ERIC Educational Resources Information Center

    Stolk, Machiel Johan; Bulte, Astrid; De Jong, Onno; Pilot, Albert

    2012-01-01

    Even experienced chemistry teachers require professional development when they are encouraged to become actively engaged in the design of new context-based education. This study briefly describes the development of a framework consisting of goals, learning phases, strategies and instructional functions, and how the framework was translated into a…

  16. An Integration of "Backwards Planning" Unit Design with the "Two-Step" Lesson Planning Framework

    ERIC Educational Resources Information Center

    Jones, Karrie A.; Vermette, Paul J.; Jones, Jennifer L.

    2009-01-01

    Planning engaging and effective lessons for middle and high school learners is one of the fundamental components of successful secondary teaching (Skowron 2001; Butt, 2006). While Wiggins & McTighe (1998) have set forth a framework for "backwards planning" in unit design, this article provides a framework for employing backwards planning in…

  17. "Light Green Doesn't Mean Hydrology!": Toward a Visual-Rhetorical Framework for Interface Design.

    ERIC Educational Resources Information Center

    Spinuzzi, Clay

    2001-01-01

    Examines metaphor's limitations as a visual-rhetorical framework for designing, evaluating, and critiquing user interfaces. Outlines an alternate framework for visual rhetoric, that of genre ecologies, and discusses how it avoids some of the limitations of metaphor. Uses an empirical study of computer users to illustrate the genre-ecology…

  18. A Framework for the Design and Integration of Collaborative Classroom Games

    ERIC Educational Resources Information Center

    Echeverria, Alejandro; Garcia-Campo, Cristian; Nussbaum, Miguel; Gil, Francisca; Villalta, Marco; Amestica, Matias; Echeverria, Sebastian

    2011-01-01

    The progress registered in the use of video games as educational tools has not yet been successfully transferred to the classroom. In an attempt to close this gap, a framework was developed that assists in the design and classroom integration of educational games. The framework addresses both the educational dimension and the ludic dimension. The…

  19. Investigating the Reading Practices of EFL Yemeni Students Using the Learning by Design Framework

    ERIC Educational Resources Information Center

    Bhooth, Abdullah Mohammad; Azman, Hazita; Ismail, Kemboja

    2015-01-01

    This article investigates the reading practices of 45 EFL Yemeni students using the "learning by design" framework. The framework organizes the teaching and learning of literacy into four processes: experiencing, conceptualising, analysing, and applying. Quantitative and qualitative methods were used to collect data on a sample of…

  20. A Graphics Design Framework to Visualize Multi-Dimensional Economic Datasets

    ERIC Educational Resources Information Center

    Chandramouli, Magesh; Narayanan, Badri; Bertoline, Gary R.

    2013-01-01

    This study implements a prototype graphics visualization framework to visualize multidimensional data. This graphics design framework serves as a "visual analytical database" for visualization and simulation of economic models. One of the primary goals of any kind of visualization is to extract useful information from colossal volumes of…

  1. A Conceptual Framework for Educational Design at Modular Level to Promote Transfer of Learning

    ERIC Educational Resources Information Center

    Botma, Yvonne; Van Rensburg, G. H.; Coetzee, I. M.; Heyns, T.

    2015-01-01

    Students bridge the theory-practice gap when they apply in practice what they have learned in class. A conceptual framework was developed that can serve as foundation to design for learning transfer at modular level. The framework is based on an adopted and adapted systemic model of transfer of learning, existing learning theories, constructive…

  2. Revisiting the Concepts "Approach", "Design" and "Procedure" According to the Richards and Rodgers (2011) Framework

    ERIC Educational Resources Information Center

    Cumming, Brett

    2012-01-01

    The three concepts Approach, Design and Procedure as proposed in Rodgers' Framework are considered particularly effective as a framework in second language teaching with the specific aim of developing communication as well as for better understanding methodology in the use of communicative language use.

  3. Adventure Learning and Learner-Engagement: Frameworks for Designers and Educators

    ERIC Educational Resources Information Center

    Henrickson, Jeni; Doering, Aaron

    2013-01-01

    There is a recognized need for theoretical frameworks that can guide designers and educators in the development of engagement-rich learning experiences that incorporate emerging technologies in pedagogically sound ways. This study investigated one such promising framework, adventure learning (AL). Data were gathered via surveys, interviews, direct…

  4. An FPGA architecture for MPEG-2 TS demultiplexer

    NASA Astrophysics Data System (ADS)

    Abramowski, Andrzej

    2012-05-01

    This paper presents a novel architecture for an MPEG-2 TS demultiplexer implemented in an FPGA. The main objective of the design is the ability to separate selected elementary streams in real time while ensuring minimal resource consumption. This is achieved by decomposing the demultiplexer into a number of independent sub-modules, which process the data in parallel. The flexible structure enables adaptation to specific needs and significantly simplifies potential expansion, which may be important given the wide range of potential applications of the MPEG-2 TS standard. To improve functionality, the demultiplexer is equipped with a configuration and status interface. The transport stream and configuration data are supplied to the FPGA by a microcontroller through the External Peripheral Interface (EPI). The data is transmitted to the microcontroller via Ethernet, using the User Datagram Protocol (UDP).
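
    As a point of reference for the packet-level routing the sub-modules perform, the sketch below parses 188-byte TS packets and keeps only those whose PID belongs to a selected elementary stream. It is a software illustration only; the packet contents, the selected PIDs and the simplified handling of adaptation fields are assumptions for the example, not details of the cited design.

      SYNC_BYTE = 0x47
      PACKET_LEN = 188

      def demux(ts_bytes, selected_pids):
          outputs = {pid: bytearray() for pid in selected_pids}
          for off in range(0, len(ts_bytes) - PACKET_LEN + 1, PACKET_LEN):
              pkt = ts_bytes[off:off + PACKET_LEN]
              if pkt[0] != SYNC_BYTE:
                  continue                              # out of sync; a real demux resynchronizes
              pid = ((pkt[1] & 0x1F) << 8) | pkt[2]     # 13-bit PID from header bytes 1-2
              if pid in outputs:
                  outputs[pid].extend(pkt[4:])          # payload (adaptation fields ignored here)
          return outputs

      # Two synthetic packets: PID 0x100 (selected) and PID 0x200 (dropped).
      pkt_a = bytes([0x47, 0x01, 0x00, 0x10]) + bytes(184)
      pkt_b = bytes([0x47, 0x02, 0x00, 0x10]) + bytes(184)
      streams = demux(pkt_a + pkt_b, selected_pids={0x100})
      print({hex(pid): len(data) for pid, data in streams.items()})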

  5. Passive coherent location FPGA implementation of the cross ambiguity function

    NASA Astrophysics Data System (ADS)

    Kvasnička, Michal; Heřmánek, Antonín; Kuneš, Michal; Pelant, Martin; Plšek, Radek

    2006-02-01

    One of the key problems in passive coherent location (PCL) is the effective and accurate computation of the cross ambiguity function (CAF). This function relates the direct signal to the signals reflected from localized targets. PCL systems exploit high-power commercial transmitters of opportunity (FM, TV, etc.) to take advantage of lower frequencies, multistatic geometries and covert deployment. The transmitter does not have to cooperate with the receiver. The CAF represents the power spectral density distribution of the cross-correlation between the direct and reflected signals. It depends on the mutual time delay and frequency shift of the input signals and is considered the primary information for detection, localization and identification of the tracked targets. For these reasons it is important to develop an optimal (numerically efficient and sufficiently accurate) FPGA-based hardware architecture for CAF computation that will be suitable for future real-time PCL systems. The first result originating from the ongoing cooperation between ERA a.s. and UTIA is the design of a PC accelerator card for CAF computation based on a Xilinx FPGA. The presented contribution gives an overview of the algorithms used, the design of the FPGA accelerator card and the achieved performance. Future possibilities for additional enhancements are discussed.
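
    To make the quantity concrete, the sketch below evaluates |CAF(tau, f)| directly in NumPy: for each candidate delay the delayed surveillance signal is multiplied with the conjugated reference, and an FFT over time resolves the Doppler (frequency-shift) axis. The signals, sample rate and delay are synthetic examples, and the code models the computation only, not the ERA/UTIA accelerator card.

      import numpy as np

      def caf(ref, surv, max_delay):
          """|CAF| over integer sample delays 0..max_delay-1 and all FFT Doppler bins."""
          n = len(ref) - max_delay
          rows = []
          for tau in range(max_delay):
              # Conjugate-multiply the reference with the delayed surveillance signal;
              # the FFT over time then resolves the Doppler (frequency-shift) axis.
              prod = np.conj(ref[:n]) * surv[tau:tau + n]
              rows.append(np.abs(np.fft.fft(prod)))
          return np.array(rows)

      # Synthetic test: the "echo" is the reference delayed by 40 samples and shifted by 500 Hz.
      rng = np.random.default_rng(0)
      fs, n_samp = 1.0e5, 4096
      t = np.arange(n_samp) / fs
      ref = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
      surv = np.roll(ref, 40) * np.exp(2j * np.pi * 500.0 * t)

      surface = caf(ref, surv, max_delay=64)
      tau_hat, dop_bin = np.unravel_index(np.argmax(surface), surface.shape)
      doppler_hz = np.fft.fftfreq(surface.shape[1], d=1.0 / fs)[dop_bin]
      print("estimated delay (samples):", tau_hat, " Doppler (Hz):", round(doppler_hz, 1))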

  6. An FPGA Implementation to Detect Selective Cationic Antibacterial Peptides

    PubMed Central

    Polanco González, Carlos; Nuño Maganda, Marco Aurelio; Arias-Estrada, Miguel; del Rio, Gabriel

    2011-01-01

    Exhaustive prediction of the physicochemical properties of peptide sequences is used in different areas of biological research. One example is the identification of selective cationic antibacterial peptides (SCAPs), which may be used in the treatment of different diseases. Due to the discrete nature of peptide sequences, the calculation of physicochemical properties is considered a high-performance computing problem. A competitive solution for this class of problems is to embed algorithms into dedicated hardware. In the present work we present the adaptation, design and implementation of an algorithm for SCAP prediction on a Field Programmable Gate Array (FPGA) platform. Four physicochemical property codes useful in the identification of peptide sequences with potential selective antibacterial activity were implemented on an FPGA board. The speed-up gained by a single-copy implementation was up to 108 times that of a single Intel processor, compared cycle for cycle. The inherent scalability of our design allows this code to be replicated across multiple FPGA cards, so further improvements in speed are possible. Our results describe the first embedded SCAP prediction solution and constitute the grounds for efficiently performing exhaustive analysis of the sequence-physicochemical property relationship of peptides. PMID:21738652
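
    As a toy illustration of the kind of descriptor such a pipeline evaluates exhaustively, the sketch below scores short peptides by an approximate net charge at neutral pH and keeps the strongly cationic ones. The property, alphabet and threshold are illustrative assumptions and are not the four descriptors used in the paper.

      from itertools import product

      POSITIVE = {"K": 1.0, "R": 1.0, "H": 0.1}   # crude side-chain charges at pH ~7
      NEGATIVE = {"D": -1.0, "E": -1.0}

      def net_charge(seq):
          return sum(POSITIVE.get(aa, 0.0) + NEGATIVE.get(aa, 0.0) for aa in seq)

      # Exhaustive enumeration over a tiny alphabet and length so the example stays fast;
      # the real search space (20^L sequences) is what motivates FPGA acceleration.
      alphabet = "KRDEGL"
      cationic = [("".join(p), net_charge(p)) for p in product(alphabet, repeat=4)
                  if net_charge(p) >= 2.0]
      print(len(cationic), "candidate cationic 4-mers, e.g.", cationic[:3])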

  7. A sampling design framework for monitoring secretive marshbirds

    USGS Publications Warehouse

    Johnson, D.H.; Gibbs, J.P.; Herzog, M.; Lor, S.; Niemuth, N.D.; Ribic, C.A.; Seamans, M.; Shaffer, T.L.; Shriver, W.G.; Stehman, S.V.; Thompson, W.L.

    2009-01-01

    A framework for a sampling plan for monitoring marshbird populations in the contiguous 48 states is proposed here. The sampling universe is the breeding habitat (i.e. wetlands) potentially used by marshbirds. Selection protocols would be implemented within each of several large geographical strata, such as Bird Conservation Regions. Site selection would be done using a two-stage cluster sample. Primary sampling units (PSUs) would be land areas, such as legal townships, and would be selected by a procedure such as systematic sampling. Secondary sampling units (SSUs) would be wetlands or portions of wetlands within the PSUs, selected by a randomized, spatially balanced procedure. For analysis, the use of a variety of methods is encouraged as a means of increasing confidence in the conclusions reached. Additional effort will be required to work out the details and implement the plan.
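
    The sketch below mocks up the two-stage structure just described: a systematic draw of PSUs within one stratum, followed by a draw of wetlands (SSUs) within each selected PSU. Unit names, counts and the simple random SSU draw are placeholders; the proposed plan specifies a spatially balanced SSU selection rather than plain random sampling.

      import random

      random.seed(1)
      psus = [f"township_{i:03d}" for i in range(200)]           # PSUs within one stratum
      wetlands = {p: [f"{p}_wet_{j}" for j in range(random.randint(2, 15))] for p in psus}

      # Stage 1: systematic sample of PSUs with a random start.
      k = 10                                    # sampling interval -> 200/10 = 20 PSUs
      start = random.randrange(k)
      psu_sample = psus[start::k]

      # Stage 2: up to 3 SSUs per selected PSU (placeholder for a spatially balanced draw).
      ssu_sample = {p: random.sample(wetlands[p], min(3, len(wetlands[p]))) for p in psu_sample}
      print(len(psu_sample), "PSUs;", sum(len(v) for v in ssu_sample.values()), "wetland SSUs")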

  8. 6-D, A Process Framework for the Design and Development of Web-based Systems.

    ERIC Educational Resources Information Center

    Christian, Phillip

    2001-01-01

    Explores how the 6-D framework can form the core of a comprehensive systemic strategy and help provide a supporting structure for more robust design and development while allowing organizations to support whatever methods and models best suit their purpose. 6-D stands for the phases of Web design and development: Discovery, Definition, Design,…

  9. Presence+Experience: A Framework for the Purposeful Design of Presence in Online Courses

    ERIC Educational Resources Information Center

    Dunlap, Joanna C.; Verma, Geeta; Johnson, Heather Lynn

    2016-01-01

    In this article, we share a framework for the purposeful design of presence in online courses. Instead of developing something new, we looked at two models that have helped us with previous instructional design projects, providing us with some assurance that the design decisions we were making were fundamentally sound. As we began to work with the…

  10. Design of Mobile Augmented Reality in Health Care Education: A Theory-Driven Framework

    PubMed Central

    Lilienthal, Anneliese; Shluzas, Lauren Aquino; Masiello, Italo; Zary, Nabil

    2015-01-01

    Background Augmented reality (AR) is increasingly used across a range of subject areas in health care education as health care settings partner to bridge the gap between knowledge and practice. As the first contact with patients, general practitioners (GPs) are important in the battle against a global health threat, the spread of antibiotic resistance. AR has potential as a practical tool for GPs to combine learning and practice in the rational use of antibiotics. Objective This paper was driven by learning theory to develop a mobile augmented reality education (MARE) design framework. The primary goal of the framework is to guide the development of AR educational apps. This study focuses on (1) identifying suitable learning theories for guiding the design of AR education apps, (2) integrating learning outcomes and learning theories to support health care education through AR, and (3) applying the design framework in the context of improving GPs’ rational use of antibiotics. Methods The design framework was first constructed with the conceptual framework analysis method. Data were collected from multidisciplinary publications and reference materials and were analyzed with directed content analysis to identify key concepts and their relationships. Then the design framework was applied to a health care educational challenge. Results The proposed MARE framework consists of three hierarchical layers: the foundation, function, and outcome layers. Three learning theories—situated, experiential, and transformative learning—provide foundational support based on differing views of the relationships among learning, practice, and the environment. The function layer depends upon the learners’ personal paradigms and indicates how health care learning could be achieved with MARE. The outcome layer analyzes different learning abilities, from knowledge to the practice level, to clarify learning objectives and expectations and to avoid teaching pitched at the wrong level

  11. A preliminary report of designing removable partial denture frameworks using a specifically developed software package.

    PubMed

    Han, Jing; Wang, Yong; Lü, Peijun

    2010-01-01

    This article reports on a method to digitally survey and build virtual patterns for removable partial denture (RPD) frameworks using a new three-dimensional (3D) computer-aided design/computer-assisted manufacturing (CAD/CAM) software package developed specifically for RPD design. The procedure included obtaining 3D data from partially dentate casts, deciding on the path of insertion, and modeling the shape of the components of the frameworks digitally. The completed model data were stored as stereolithography (STL) files, which are commonly used in transferring CAD/CAM models to rapid prototyping technologies. Finally, metal RPD frameworks were fabricated using a selective laser melting technique.

  12. A Framework for Designing Scaffolds that Improve Motivation and Cognition

    ERIC Educational Resources Information Center

    Belland, Brian R.; Kim, ChanMin; Hannafin, Michael J.

    2013-01-01

    A problematic, yet common, assumption among educational researchers is that when teachers provide authentic, problem-based experiences, students will automatically be engaged. Evidence indicates that this is often not the case. In this article, we discuss (a) problems with ignoring motivation in the design of learning environments, (b)…

  13. A Framework for Promoting Learning in IS Design and Implementation

    ERIC Educational Resources Information Center

    Small, Adrian; Sice, Petia; Venus, Tony

    2008-01-01

    Purpose: The purpose of this paper is to set out an argument for a way to design, implement and manage IS with an emphasis on first, the learning that can be created through undertaking the approach, and second, the learning that may be created through using the IS that was implemented. The paper proposes joining two areas of research namely,…

  14. Economical Implementation of a Filter Engine in an FPGA

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.

    2009-01-01

    A logic design has been conceived for a field-programmable gate array (FPGA) that would implement a complex system of multiple digital state-space filters. The main innovative aspect of this design lies in providing for reuse of parts of the FPGA hardware to perform different parts of the filter computations at different times, in such a manner as to enable the timely performance of all required computations in the face of limitations on available FPGA hardware resources. The implementation of the digital state-space filter involves matrix vector multiplications, which, in the absence of the present innovation, would ordinarily necessitate some multiplexing of vector elements and/or routing of data flows along multiple paths. The design concept calls for implementing vector registers as shift registers to simplify operand access to multipliers and accumulators, obviating both multiplexing and routing of data along multiple paths. Each vector register would be reused for different parts of a calculation. Outputs would always be drawn from the same register, and inputs would always be loaded into the same register. A simple state machine would control each filter. The output of a given filter would be passed to the next filter, accompanied by a "valid" signal, which would start the state machine of the next filter. Multiple filter modules would share a multiplication/accumulation arithmetic unit. The filter computations would be timed by use of a clock having a frequency high enough, relative to the input and output data rate, to provide enough cycles for matrix and vector arithmetic operations. This design concept could prove beneficial in numerous applications in which digital filters are used and/or vectors are multiplied by coefficient matrices. Examples of such applications include general signal processing, filtering of signals in control systems, processing of geophysical measurements, and medical imaging. For these and other applications, it could be
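
    To make the reuse idea concrete, the following software model steps a small state-space filter (x' = A x + B u, y = C x + D u) with every product routed through one multiply-accumulate helper, mirroring how the design time-shares a single MAC unit across the matrix-vector operations. The matrices and input are arbitrary examples, and this models the arithmetic only, not the shift-register hardware.

      import numpy as np

      A = np.array([[0.9, 0.1], [-0.1, 0.8]])
      B = np.array([[0.5], [0.2]])
      C = np.array([[1.0, 0.0]])
      D = np.array([[0.0]])

      def mac_matvec(M, v):
          """Matrix-vector product computed one multiply-accumulate at a time."""
          out = np.zeros(M.shape[0])
          for i in range(M.shape[0]):
              acc = 0.0
              for j in range(M.shape[1]):
                  acc += M[i, j] * v[j]       # the single shared MAC operation
              out[i] = acc
          return out

      def step(x, u):
          y = mac_matvec(C, x)[0] + D[0, 0] * u      # output from the current state
          x_next = mac_matvec(A, x) + mac_matvec(B, [u])
          return x_next, y

      x = np.zeros(2)
      for n in range(5):
          x, y = step(x, 1.0)                        # unit-step input
          print(f"n={n} y={y:.4f}")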

  15. Design and Implementation of Telemedicine based on Java Media Framework

    NASA Astrophysics Data System (ADS)

    Xiong, Fengguang; Jia, Zhiyan

    After analyzing the importance of and the problems with telemedicine, this paper proposes a telemedicine system based on JMF to design and implement the capture, compression, storage, transmission, reception and playback of medical audio and video. The telemedicine system addresses existing problems such as unshared medical information, high platform dependence and software incompatibility. Experimental data show that the system has low hardware cost, supports straightforward transmission and storage, and is portable and powerful.

  16. The Modern Design of Experiments: A Technical and Marketing Framework

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    A new wind tunnel testing process under development at NASA Langley Research Center, called Modern Design of Experiments (MDOE), differs from conventional wind tunnel testing techniques on a number of levels. Chief among these is that MDOE focuses on the generation of adequate prediction models rather than high-volume data collection. Some cultural issues attached to this and other distinctions between MDOE and conventional wind tunnel testing are addressed in this paper.

  17. A framework for designing a healthcare outcome data warehouse.

    PubMed

    Parmanto, Bambang; Scotch, Matthew; Ahmad, Sjarif

    2005-09-06

    Many healthcare processes involve a series of patient visits or a series of outcomes. The modeling of outcomes associated with these types of healthcare processes is different from and not as well understood as the modeling of standard industry environments. For this reason, the typical multidimensional data warehouse designs that are frequently seen in other industries are often not a good match for data obtained from healthcare processes. Dimensional modeling is a data warehouse design technique that uses a data structure similar to the easily understood entity-relationship (ER) model but is sophisticated in that it supports high-performance data access. In the context of rehabilitation services, we implemented a slight variation of the dimensional modeling technique to make a data warehouse more appropriate for healthcare. One of the key aspects of designing a healthcare data warehouse is finding the right grain (scope) for different levels of analysis. We propose three levels of grain that enable the analysis of healthcare outcomes from highly summarized reports on episodes of care to fine-grained studies of progress from one treatment visit to the next. These grains allow the database to support multiple levels of analysis, which is imperative for healthcare decision making.

  18. Toward a More Flexible Web-Based Framework for Multidisciplinary Design

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Salas, A. O.

    1999-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.

  19. The azaindole framework in the design of kinase inhibitors.

    PubMed

    Mérour, Jean-Yves; Buron, Frédéric; Plé, Karen; Bonnet, Pascal; Routier, Sylvain

    2014-01-01

    This review article illustrates the growing use of azaindole derivatives as kinase inhibitors and their contribution to drug discovery and innovation. The different protein kinases which have served as targets and the known molecules which have emerged from medicinal chemistry and Fragment-Based Drug Discovery (FBDD) programs are presented. The various synthetic routes used to access these compounds and the chemical pathways leading to their synthesis are also discussed. An analysis of their mode of binding based on X-ray crystallography data gives structural insights for the design of more potent and selective inhibitors. PMID:25460315

  20. Rad-Hard/HI-REL FPGA

    NASA Technical Reports Server (NTRS)

    Wang, Jih-Jong; Cronquist, Brian E.; McGowan, John E.; Katz, Richard B.

    1997-01-01

    The goals for a radiation hardened (RAD-HARD) and high reliability (HI-REL) field programmable gate array (FPGA) are described. The first qualified manufacturer list (QML) radiation hardened RH1280 and RH1020 were developed. The total radiation dose and single event effects observed on the antifuse FPGA RH1280 are reported on. Tradeoffs and the limitations in the single event upset hardening are discussed.

  1. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design.

    PubMed

    Srinivasan, Srikant; Broderick, Scott R; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B; Saxena, Surendra K; LeBeau, James M; Rajan, Krishna

    2015-12-18

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives.

  2. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design.

    PubMed

    Srinivasan, Srikant; Broderick, Scott R; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B; Saxena, Surendra K; LeBeau, James M; Rajan, Krishna

    2015-01-01

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives. PMID:26681142

  3. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design

    NASA Astrophysics Data System (ADS)

    Srinivasan, Srikant; Broderick, Scott R.; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B.; Saxena, Surendra K.; Lebeau, James M.; Rajan, Krishna

    2015-12-01

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives.

  4. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design

    PubMed Central

    Srinivasan, Srikant; Broderick, Scott R.; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B.; Saxena, Surendra K.; LeBeau, James M.; Rajan, Krishna

    2015-01-01

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives. PMID:26681142

  5. A design framework for teleoperators with kinesthetic feedback

    NASA Technical Reports Server (NTRS)

    Hannaford, Blake

    1989-01-01

    The application of a hybrid two-port model to teleoperators with force and velocity sensing at the master and slave is presented. The interfaces between human operator and master, and between environment and slave, are ports through which the teleoperator is designed to exchange energy between the operator and the environment. By computing or measuring the input-output properties of this two-port network, the hybrid two-port model of an actual or simulated teleoperator system can be obtained. It is shown that the hybrid model (as opposed to other two-port forms) leads to an intuitive representation of ideal teleoperator performance and applies to several teleoperator architectures. Thus measured values of the h matrix or values computed from a simulation can be used to compare performance with the ideal. The frequency-dependent h matrix is computed from a detailed SPICE model of an actual system, and the method is applied to a proposed architecture.
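
    For reference, a common way to write the hybrid two-port description used here (sign conventions vary between authors) relates the force reflected to the operator and the velocity imposed on the environment to the operator's hand velocity and the environment force; perfect transparency then corresponds to an h matrix with zero diagonal and unit off-diagonal terms. A hedged sketch of this relation in LaTeX:

      \begin{equation}
        \begin{bmatrix} F_h \\ -v_e \end{bmatrix}
        =
        \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}
        \begin{bmatrix} v_h \\ F_e \end{bmatrix},
        \qquad
        H_{\mathrm{ideal}} =
        \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.
      \end{equation}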

  6. Using Magnet® as a framework for nurse participation in facility design.

    PubMed

    Stichler, Jaynelle F

    2015-01-01

    Magnet® Model component NE6EO indicates that nurses should be involved in workflow improvements and space design to improve nursing practice. The Magnet Model can be used as a framework for ensuring that the structures and processes are in place to support nurses' participation in the design of new facilities or remodel spaces. PMID:25479170

  7. A Framework for the Flexible Content Packaging of Learning Objects and Learning Designs

    ERIC Educational Resources Information Center

    Lukasiak, Jason; Agostinho, Shirley; Burnett, Ian; Drury, Gerrard; Goodes, Jason; Bennett, Sue; Lockyer, Lori; Harper, Barry

    2004-01-01

    This paper presents a platform-independent method for packaging learning objects and learning designs. The method, entitled a Smart Learning Design Framework, is based on the MPEG-21 standard, and uses IEEE Learning Object Metadata (LOM) to provide bibliographic, technical, and pedagogical descriptors for the retrieval and description of learning…

  8. Using Magnet® as a framework for nurse participation in facility design.

    PubMed

    Stichler, Jaynelle F

    2015-01-01

    Magnet® Model component NE6EO indicates that nurses should be involved in workflow improvements and space design to improve nursing practice. The Magnet Model can be used as a framework for ensuring that the structures and processes are in place to support nurses' participation in the design of new facilities or remodel spaces.

  9. The Customer Flow Toolkit: A Framework for Designing High Quality Customer Services.

    ERIC Educational Resources Information Center

    New York Association of Training and Employment Professionals, Albany.

    This document presents a toolkit to assist staff involved in the design and development of New York's one-stop system. Section 1 describes the preplanning issues to be addressed and the intended outcomes that serve as the framework for creation of the customer flow toolkit. Section 2 outlines the following strategies to assist in designing local…

  10. A User-Centered Methodological Framework for the Design of Hypermedia-based CALL Systems.

    ERIC Educational Resources Information Center

    Shin, Jae-eun; Wastell, David G.

    2001-01-01

    Discusses research aimed at improving the educational quality of hypermedia-based computer assisted language learning systems. Focuses on a methodological framework that draws on recent developments in the field of human-computer interaction regarding interactive system design and a general constructivist approach to the design of computer-based…

  11. Framework for Organization and Control of Capstone Design/Build Projects

    ERIC Educational Resources Information Center

    Massie, Darrell D.; Massie, Cheryl A.

    2006-01-01

    Senior design capstone projects frequently require team members to self-organize for a project and then execute the design/build portion with limited resources. This is challenging for inexperienced students who struggle with technical as well as program management and team building issues. This paper outlines a general framework that can be used…

  12. Designing Online Management Education Courses Using the Community of Inquiry Framework

    ERIC Educational Resources Information Center

    Weyant, Lee E.

    2013-01-01

    Online learning has grown as a program delivery option for many colleges and programs of business. The Community of Inquiry (CoI) framework consisting of three interrelated elements--social presence, cognitive presence, and teaching presences--provides a model to guide business faculty in their online course design. The course design of an online…

  13. Serious Games for Higher Education: A Framework for Reducing Design Complexity

    ERIC Educational Resources Information Center

    Westera, W.; Nadolski, R. J.; Hummel, H. G. K.; Wopereis, I. G. J. H.

    2008-01-01

    Serious games open up many new opportunities for complex skills learning in higher education. The inherent complexity of such games, though, requires large efforts for their development. This paper presents a framework for serious game design, which aims to reduce the design complexity at conceptual, technical and practical levels. The approach…

  14. Tethered Forth system for FPGA applications

    NASA Astrophysics Data System (ADS)

    Goździkowski, Paweł; Zabołotny, Wojciech M.

    2013-10-01

    This paper presents a tethered Forth system dedicated to testing and debugging of FPGA-based electronic systems. Use of the Forth language allows complex testing or debugging routines to be developed and run interactively. The solution is based on a small, 16-bit soft-core CPU used to implement the Forth Virtual Machine. Thanks to the tethered Forth model, usage of the internal RAM in the FPGA can be minimized. The function of the intelligent terminal, which is an essential part of the tethered Forth system, may be fulfilled by a standard PC or by a smartphone. The system is implemented in Python (the software for the intelligent terminal) and in VHDL (the IP core for the FPGA), so it can be easily ported to different hardware platforms. The connection between the terminal and the FPGA may be established and disconnected many times without disturbing the state of the FPGA-based system. The presented system has been verified in hardware and may be used as a tool for debugging, testing and even implementing control algorithms for FPGA-based systems.
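
    The sketch below is a hypothetical illustration of the terminal side of such a tethered setup: a PC script pushes lines of Forth source over a serial link and prints the target's replies. The port name, baud rate, words sent and line-oriented protocol are assumptions made for the example and are not taken from the cited system.

      import serial  # pyserial; the port below must exist for the script to run

      PORT, BAUD = "/dev/ttyUSB0", 115200   # assumed connection parameters

      with serial.Serial(PORT, BAUD, timeout=1.0) as link:
          for word in (b"1 2 + .\n", b"WORDS\n"):
              link.write(word)              # send a line of Forth source to the target
              reply = link.readline()       # the target interprets it and answers
              print(word.strip().decode(), "->", reply.strip().decode(errors="replace"))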

  15. Evaluating a Professional Development Framework to Empower Chemistry Teachers to Design Context-Based Education

    NASA Astrophysics Data System (ADS)

    Stolk, Machiel Johan; Bulte, Astrid; De Jong, Onno; Pilot, Albert

    2012-07-01

    Even experienced chemistry teachers require professional development when they are encouraged to become actively engaged in the design of new context-based education. This study briefly describes the development of a framework consisting of goals, learning phases, strategies and instructional functions, and how the framework was translated into a professional development programme intended to empower teachers to design context-based chemistry education. The programme consists of teaching a pre-developed context-based unit, followed by teachers designing an outline of a new context-based unit. The study investigates the process of teacher empowerment during the implementation of the programme. Data were obtained from meetings, classroom discussions and observations. The findings indicated that teachers became empowered to design new context-based units provided they had sufficient time and resources. The contribution of the framework to teacher empowerment is discussed.

  16. RIPOSTE: a framework for improving the design and analysis of laboratory-based research.

    PubMed

    Masca, Nicholas Gd; Hensor, Elizabeth Ma; Cornelius, Victoria R; Buffa, Francesca M; Marriott, Helen M; Eales, James M; Messenger, Michael P; Anderson, Amy E; Boot, Chris; Bunce, Catey; Goldin, Robert D; Harris, Jessica; Hinchliffe, Rod F; Junaid, Hiba; Kingston, Shaun; Martin-Ruiz, Carmen; Nelson, Christopher P; Peacock, Janet; Seed, Paul T; Shinkins, Bethany; Staples, Karl J; Toombs, Jamie; Wright, Adam Ka; Teare, M Dawn

    2015-01-01

    Lack of reproducibility is an ongoing problem in some areas of the biomedical sciences. Poor experimental design and a failure to engage with experienced statisticians at key stages in the design and analysis of experiments are two factors that contribute to this problem. The RIPOSTE (Reducing IrreProducibility in labOratory STudiEs) framework has been developed to support early and regular discussions between scientists and statisticians in order to improve the design, conduct and analysis of laboratory studies and, therefore, to reduce irreproducibility. This framework is intended for use during the early stages of a research project, when specific questions or hypotheses are proposed. The essential points within the framework are explained and illustrated using three examples (a medical equipment test, a macrophage study and a gene expression study). Sound study design minimises the possibility of bias being introduced into experiments and leads to higher quality research with more reproducible results.

  17. Designing smart analytical data services for a personal health framework.

    PubMed

    Koumakis, Lefteris; Kondylakis, Haridimos; Chatzimina, Maria; Iatraki, Galatia; Argyropaidas, Panagiotis; Kazantzaki, Eleni; Tsiknakis, Manolis; Kiefer, Stephan; Marias, Kostas

    2016-01-01

    Information in the healthcare domain, and in particular personal health record information, is heterogeneous by nature. Clinical, lifestyle, environmental data and personal preferences are stored and managed within such platforms. As a result, significant information from such diverse data is difficult to deliver, especially to non-IT users like patients, physicians or managers. Another issue related to management and analysis is data volume, which keeps increasing and makes efficient data visualization and analysis methods mandatory. The objective of this work is to present the architectural design for seamless integration and intelligent analysis of distributed and heterogeneous clinical information in the PHR context, as a result of a requirements elicitation process in the iManageCancer project. This systemic approach aims to assist health-care professionals to orient themselves in the dispersed information space and enhance their decision-making capabilities, and to encourage patients to take an active role by managing their health information and interacting with health-care professionals.

  18. A multiobjective optimization framework for multicontaminant industrial water network design.

    PubMed

    Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge

    2011-07-01

    The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonist objectives are considered: F(1), the freshwater flow-rate at the network entrance, F(2), the water flow-rate at inlet of regeneration units, and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented and then an innovative strategy based on the global equivalent cost (GEC) in freshwater that turns out to be more efficient for choosing a good network according to a practical point of view.
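
    As a small, self-contained illustration of the lexicographic strategy mentioned above (not of the WAP model itself), the sketch below minimizes a primary objective first and then minimizes a secondary objective while holding the primary at its optimum, on a toy linear program. The objectives, constraints and tolerance are invented for the example.

      import numpy as np
      from scipy.optimize import linprog

      # Toy feasible region: x1 + x2 >= 4, x1 - x2 <= 2, 0 <= x1, x2 <= 5.
      A_ub = np.array([[-1.0, -1.0],    # -(x1 + x2) <= -4
                       [ 1.0, -1.0]])   #   x1 - x2  <=  2
      b_ub = np.array([-4.0, 2.0])
      bounds = [(0, 5), (0, 5)]

      # Step 1: minimize the primary objective F1 = x1 + 2*x2.
      c1 = np.array([1.0, 2.0])
      res1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

      # Step 2: minimize the secondary objective F2 = 3*x1 + x2 while keeping F1 at its
      # optimum (within a small tolerance) -- the lexicographic step.
      tol = 1e-6
      A_ub2 = np.vstack([A_ub, c1])
      b_ub2 = np.append(b_ub, res1.fun + tol)
      c2 = np.array([3.0, 1.0])
      res2 = linprog(c2, A_ub=A_ub2, b_ub=b_ub2, bounds=bounds)

      print("x =", res2.x, " F1 =", float(c1 @ res2.x), " F2 =", float(res2.fun))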

  19. Lung-MAP--framework, overview, and design principles.

    PubMed

    Ferrarotto, Renata; Redman, Mary W; Gandara, David R; Herbst, Roy S; Papadimitrakopoulou, Vassiliki A

    2015-09-01

    Metastatic lung squamous cell carcinoma (SCC) is a common disease with limited therapeutic options and poor patient outcomes. Standard "all comers" clinical trial designs usually benefit only a small population sub-group. Targeted-therapy matched clinical trials have a higher potential to achieve better results, however, given the low frequency of driver genetic alterations, they are associated with a large number of screen-failures, are not cost-effective, and frequently not feasible. Lung-MAP is an umbrella master protocol for recurrent or metastatic lung SCC patients that uses a central genomic profiling screening platform to allocate patients to phase II/III biomarker-matched target therapy clinical trials or to a "non-match" treatment arm; therefore, all eligible patients screened can be treated under the protocol. If evidence of efficacy is seen in the phase II trial portion for a particular treatment/marker combination, that sub-study moves directly to phase III and incorporates the patients treated in phase II. Lung-MAP has an efficient and adaptable structure that allows for sub-studies to open and close based on changes in an evolving cancer research field. It also provides a path for FDA-approval in order to bring promising agents to clinic in a time efficient manner, with the ultimate goal of significantly improving lung SCC patient's quality and length of life.

  20. Designing smart analytical data services for a personal health framework.

    PubMed

    Koumakis, Lefteris; Kondylakis, Haridimos; Chatzimina, Maria; Iatraki, Galatia; Argyropaidas, Panagiotis; Kazantzaki, Eleni; Tsiknakis, Manolis; Kiefer, Stephan; Marias, Kostas

    2016-01-01

    Information in the healthcare domain, and in particular personal health record information, is heterogeneous by nature. Clinical, lifestyle, environmental data and personal preferences are stored and managed within such platforms. As a result, significant information from such diverse data is difficult to deliver, especially to non-IT users like patients, physicians or managers. Another issue related to management and analysis is data volume, which keeps increasing and makes efficient data visualization and analysis methods mandatory. The objective of this work is to present the architectural design for seamless integration and intelligent analysis of distributed and heterogeneous clinical information in the PHR context, as a result of a requirements elicitation process in the iManageCancer project. This systemic approach aims to assist health-care professionals to orient themselves in the dispersed information space and enhance their decision-making capabilities, and to encourage patients to take an active role by managing their health information and interacting with health-care professionals. PMID:27225566

  1. Design theoretic analysis of three system modeling frameworks.

    SciTech Connect

    McDonald, Michael James

    2007-05-01

    This paper analyzes three simulation architectures from the context of modeling scalability to address System of System (SoS) and Complex System problems. The paper first provides an overview of the SoS problem domain and reviews past work in analyzing model and general system complexity issues. It then identifies and explores the issues of vertical and horizontal integration as well as coupling and hierarchical decomposition as the system characteristics and metrics against which the tools are evaluated. In addition, it applies Nam Suh's Axiomatic Design theory as a construct for understanding coupling and its relationship to system feasibility. Next it describes the application of MATLAB, Swarm, and Umbra (three modeling and simulation approaches) to modeling swarms of Unmanned Flying Vehicle (UAV) agents in relation to the chosen characteristics and metrics. Finally, it draws general conclusions for analyzing model architectures that go beyond those analyzed. In particular, it identifies decomposition along phenomena of interaction and modular system composition as enabling features for modeling large heterogeneous complex systems.

  2. a Novel Framework for Incorporating Sustainability Into Biomass Feedstock Design

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, G.; Negri, C.

    2012-12-01

    There is a strong societal need to evaluate and understand the sustainability of biofuels, especially due to the significant increases in production mandated by many countries, including the United States. Biomass feedstock production is an important contributor to environmental, social and economic impacts from biofuels. We present a systems approach where the agricultural, urban, energy and environmental sectors are considered as components of a single system and environmental liabilities are used as recoverable resources for biomass feedstock production. A geospatial analysis evaluating marginal land and degraded water resources to improve feedstock productivity with concomitant environmental restoration was conducted for the major corn producing states in the US. The extent and availability of these resources was assessed and geospatial techniques used to identify promising opportunities to implement this approach. Utilizing different sources of marginal land (roadway buffers, contaminated land) could result in a 7-fold increase in land availability for feedstock production and provide ecosystem services such as water quality improvement and carbon sequestration. Spatial overlap between degraded water and marginal land resources was found to be as high as 98% and could maintain sustainable feedstock production on marginal lands through the supply of water and nutrients. Multi-objective optimization was used to quantify the tradeoffs between net revenue, improvements in water quality and carbon sequestration at the farm scale using this design. Results indicated that there is an initial opportunity where land that is marginally productive for row crops and of marginal value for conservation purposes could be used to grow bioenergy crops such that water quality and carbon sequestration benefits are obtained.

  3. A framework for the design, implementation, and evaluation of interprofessional education.

    PubMed

    Pardue, Karen T

    2015-01-01

    The growing emphasis on teamwork and care coordination within health care delivery is sparking interest in interprofessional education (IPE) among nursing and health profession faculty. Faculty often lack firsthand IPE experience, which hinders pedagogical reform. This article proposes a theoretically grounded framework for the design, implementation, and evaluation of IPE. Supporting literature and practical advice are interwoven. The proposed framework guides faculty in the successful creation and evaluation of collaborative learning experiences. PMID:25330345

  4. Improved Approach for Utilization of FPGA Technology into DAQ, DSP, and Computing Applications

    SciTech Connect

    Isenhower, Larry Donald

    2009-01-28

    In this SBIR Phase I grant, Innovation Partners proposed and successfully demonstrated a software/hardware co-design approach to reduce both the difficulty and the time required to implement Field Programmable Gate Array (FPGA) solutions for data acquisition and specialized computational applications. FPGAs can require excessive programming time and specialized knowledge, both of which are greatly reduced by the company's solution. Not only are FPGAs ideal for DAQ and embedded solutions, they can also be the best option for specialized signal processing, replacing Digital Signal Processors (DSPs). By allowing FPGA programming to be done in C with the equivalent of a simple compilation, algorithm changes and improvements can be implemented, decreasing life-cycle costs and allowing substitution of new FPGA designs while staying above the technological details.

  5. Radiometric Calibration of Mars HiRISE High Resolution Imagery Based on Fpga

    NASA Astrophysics Data System (ADS)

    Hou, Yifan; Geng, Xun; Xing, Shuai; Tang, Yonghe; Xu, Qing

    2016-06-01

    Due to the large data volume of HiRISE imagery, traditional radiometric calibration methods are not able to meet the fast processing requirements. To solve this problem, a radiometric calibration system for HiRISE imagery based on a field programmable gate array (FPGA) is designed. The montage gap between two channels caused by gray-level inconsistency is removed through histogram matching. The calibration system is composed of an FPGA and a DSP, making full use of the parallel processing ability of the FPGA and the fast computation and flexible control characteristics of the DSP. Experimental results show that the designed system consumes fewer hardware resources and improves the real-time processing ability of radiometric calibration of HiRISE imagery.
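
    The sketch below illustrates the histogram-matching step on synthetic data: one channel is remapped through a lookup table so that its cumulative histogram follows the other channel's, which is the standard way to remove a brightness step between adjacent channels. Array sizes and grey-level statistics are invented, and the code models the algorithm rather than the FPGA/DSP implementation.

      import numpy as np

      def match_histogram(src, ref, levels=256):
          src_hist, _ = np.histogram(src, bins=levels, range=(0, levels))
          ref_hist, _ = np.histogram(ref, bins=levels, range=(0, levels))
          src_cdf = np.cumsum(src_hist) / src.size
          ref_cdf = np.cumsum(ref_hist) / ref.size
          # For every source grey level, pick the reference level with the closest CDF value.
          lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1).astype(np.uint8)
          return lut[src]

      rng = np.random.default_rng(0)
      ch1 = rng.normal(120, 20, (512, 512)).clip(0, 255).astype(np.uint8)
      ch2 = rng.normal(140, 25, (512, 512)).clip(0, 255).astype(np.uint8)  # brighter channel
      ch2_matched = match_histogram(ch2, ch1)
      print("means before/after:", ch1.mean(), ch2.mean(), ch2_matched.mean())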

  6. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.
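
    For orientation, the sketch below is a plain NumPy version of the (rho, theta) voting scheme that the paper parallelizes across angles in hardware; edge detection is reduced to a simple threshold instead of the optimized Canny front end, and the test image is synthetic.

      import numpy as np

      def hough_lines(edges, n_theta=180):
          h, w = edges.shape
          diag = int(np.ceil(np.hypot(h, w)))
          thetas = np.deg2rad(np.arange(n_theta))
          acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
          ys, xs = np.nonzero(edges)
          for cos_t, sin_t, j in zip(np.cos(thetas), np.sin(thetas), range(n_theta)):
              rhos = np.round(xs * cos_t + ys * sin_t).astype(int) + diag
              np.add.at(acc, (rhos, j), 1)              # one vote per edge pixel per angle
          return acc, thetas, diag

      # Synthetic test image containing one horizontal line of edge pixels.
      img = np.zeros((200, 200), dtype=np.uint8)
      img[100, 20:180] = 255
      acc, thetas, diag = hough_lines(img > 128)
      rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
      print("rho =", rho_idx - diag, " theta(deg) =", np.rad2deg(thetas[theta_idx]))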

  7. Fast semivariogram computation using FPGA architectures

    NASA Astrophysics Data System (ADS)

    Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang

    2015-02-01

    The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations separated by a lag distance h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz, but they can perform tens of thousands of calculations per clock cycle while operating in the low range of power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. Anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex5 FPGA. Medical image data from MRI scans are utilized for the experiments
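
    The estimator itself can be stated in a few lines; the sketch below computes γ(h) for integer lags over a small window, pairing every pixel with every other pixel, which makes the O(n²) cost of the base algorithm explicit. The window contents and lag binning are illustrative choices.

      import numpy as np

      def semivariogram(window, max_lag):
          """gamma(h) = sum of squared differences over pairs at lag h / (2 * N(h))."""
          h, w = window.shape
          coords = np.array([(r, c) for r in range(h) for c in range(w)])
          values = window.ravel().astype(float)
          sums = np.zeros(max_lag + 1)
          counts = np.zeros(max_lag + 1)
          for i in range(len(values)):                  # every pixel pair -> O(n^2)
              d = np.rint(np.hypot(*(coords[i + 1:] - coords[i]).T)).astype(int)
              sq = (values[i + 1:] - values[i]) ** 2
              mask = d <= max_lag
              np.add.at(sums, d[mask], sq[mask])
              np.add.at(counts, d[mask], 1)
          with np.errstate(invalid="ignore", divide="ignore"):
              return sums / (2.0 * counts)              # gamma(h) per integer lag

      rng = np.random.default_rng(0)
      win = rng.normal(100, 10, (16, 16))
      gamma = semivariogram(win, max_lag=8)
      print(np.round(gamma[1:], 2))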

  8. SAD5 Stereo Correlation Line-Striping in an FPGA

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopoulos, Arin C.

    2011-01-01

    High precision SAD5 stereo computations can be performed in an FPGA (field-programmable gate array) at much higher speeds than possible in a conventional CPU (central processing unit), but this uses large amounts of FPGA resources that scale with image size. Of the two key resources in an FPGA, Slices and BRAM (block RAM), Slices scale linearly in the new algorithm with image size, and BRAM scales quadratically with image size. An approach was developed to trade latency for BRAM by sub-windowing the image vertically into overlapping strips and stitching the outputs together to create a single continuous disparity output. In stereo, the general rule of thumb is that the disparity search range must be 1/10 the image size. In the new algorithm, BRAM usage scales linearly with disparity search range and scales again linearly with line width. So a doubling of image size, say from 640 to 1,280, would in the previous design be an effective 4× increase in BRAM usage: 2× for line width, 2× again for disparity search range. The minimum strip size is twice the search range, and will produce an output strip width equal to the disparity search range. So assuming a disparity search range of 1/10 image width, 10 sequential runs of the minimum strip size would produce a full output image. This approach allowed the innovators to fit 1,280 × 960 SAD5 stereo disparity in less than 80 BRAMs and 52k Slices on a Virtex 5 LX330T (25% and 24% of resources, respectively). Using a 100-MHz clock, this build would perform stereo at 39 Hz. Of particular interest to JPL is that there is a flight-qualified version of the Virtex 5: this could produce stereo results even for very large image sizes at 3 orders of magnitude faster than could be computed on the PowerPC 750 flight computer. The work covered in the report allows the stereo algorithm to run on much larger images than before, and using much less BRAM. This opens up choices for a smaller flight FPGA (which saves power and space), or for other algorithms
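
    The strip-and-stitch idea is summarized by the sketch below: the image is cut into overlapping vertical strips whose width is twice the disparity search range, each strip is run through the correlator, and the per-strip outputs are concatenated into one disparity map. The correlator here is a placeholder returning zeros; the image and search-range sizes follow the 1/10 rule of thumb mentioned above but are otherwise arbitrary.

      import numpy as np

      def fake_correlator(left_strip, right_strip, search_range):
          # Placeholder for the SAD5 disparity computation; valid output is only the part
          # of the strip that has a full search window (strip width minus the search range).
          h, w = left_strip.shape
          return np.zeros((h, w - search_range), dtype=np.uint8)

      def striped_disparity(left, right, search_range):
          h, w = left.shape
          strip_w = 2 * search_range            # minimum strip width
          out_cols = []
          for x0 in range(0, w - strip_w + 1, search_range):   # strips overlap by search_range
              d = fake_correlator(left[:, x0:x0 + strip_w], right[:, x0:x0 + strip_w],
                                  search_range)
              out_cols.append(d)                # each run yields search_range output columns
          return np.hstack(out_cols)

      left = np.zeros((960, 1280), dtype=np.uint8)
      right = np.zeros_like(left)
      disp = striped_disparity(left, right, search_range=128)   # 1/10 of the image width
      print("stitched disparity size:", disp.shape)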

  9. Synthesis of blind source separation algorithms on reconfigurable FPGA platforms

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Szu, Harold H.

    2005-03-01

    -Specific Integrated Circuit (ASIC) using standard-height cells. ICA is an algorithm that can solve BSS problems by carrying out all-order statistical, decorrelation-based transforms under the assumption that neighboring pixels share the same but unknown mixing matrix A. In this paper, we continue our investigation of the design challenges of firmware approaches to smart algorithms. Two levels of parallelization can be explored: pixel-based parallelization and parallelization of the restoration algorithm performed at each pixel. This paper focuses on the latter, using ICA as an example to explain the design and implementation methods. It is well known that the capacity constraints of a single FPGA have limited the implementation of many complex algorithms, including ICA. Using the reconfigurability of FPGAs, we show how to manipulate the FPGA-based system to provide extra computing power for the parallelized ICA algorithm with limited FPGA resources. Synthesis targeting the Pilchard reconfigurable FPGA platform is reported. The Pilchard board is built around a single Xilinx VIRTEX 1000E FPGA and transfers data directly to the CPU over a 64-bit memory bus at a maximum frequency of 133 MHz. Both the feasibility and performance evaluations and the experimental results validate the effectiveness and practicality of this synthesis, which can be extended to spatially variant jitter restoration for micro-UAV deployment.
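
    As a minimal software illustration of the separation problem itself (not of the FPGA architecture), the sketch below mixes two synthetic sources with an unknown matrix and recovers them with scikit-learn's FastICA; the signals, the mixing matrix and the use of scikit-learn are assumptions made for the example.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      t = np.linspace(0, 8, 2000)
      s = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]      # two independent sources
      s += 0.02 * rng.standard_normal(s.shape)
      A = np.array([[1.0, 0.5], [0.4, 1.0]])                # unknown mixing matrix
      x = s @ A.T                                           # observed mixtures

      ica = FastICA(n_components=2, random_state=0)
      s_hat = ica.fit_transform(x)                          # recovered sources (up to scale/order)
      print("estimated mixing matrix:\n", ica.mixing_)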

  10. A Framework for Analyzing Interdisciplinary Tasks: Implications for Student Learning and Curricular Design

    PubMed Central

    Gouvea, Julia Svoboda; Sawtelle, Vashti; Geller, Benjamin D.; Turpen, Chandra

    2013-01-01

    The national conversation around undergraduate science instruction is calling for increased interdisciplinarity. As these calls increase, there is a need to consider the learning objectives of interdisciplinary science courses and how to design curricula to support those objectives. We present a framework that can help support interdisciplinary design research. We developed this framework in an introductory physics for life sciences majors (IPLS) course for which we designed a series of interdisciplinary tasks that bridge physics and biology. We illustrate how this framework can be used to describe the variation in the nature and degree of interdisciplinary interaction in tasks, to aid in redesigning tasks to better align with interdisciplinary learning objectives, and finally, to articulate design conjectures that posit how different characteristics of these tasks might support or impede interdisciplinary learning objectives. This framework will be useful for both curriculum designers and education researchers seeking to understand, in more concrete terms, what interdisciplinary learning means and how integrated science curricula can be designed to support interdisciplinary learning objectives. PMID:23737627

  11. Study, design and integration of an FPGA-based system for the time-of-flight calculation applied to PET equipment

    NASA Astrophysics Data System (ADS)

    Aguilar Talens, D. Albert

    , the initial time measurement results are presented, achieving time resolutions below 100 ps for multiple channels. Once characterized, the system is tested with a breast PET prototype, whose detectors are based on Position-Sensitive PhotoMultiplier Tubes (PSPMTs), performing TOF measurements for different scenarios. After this point, tests based on two Silicon Photomultiplier (SiPM) modules were carried out. SiPMs are immune to magnetic fields, among other advantages; this is an important feature given the significant interest in combining PET and Magnetic Resonance (MR). Each of the two detector modules used is composed of a single crystal pixel. The electronic conditioning circuits are designed taking into account the parameters that most influence time resolution. After these results, an array of 144 SiPMs is tested, optimizing several parameters that directly impact the system performance. Having demonstrated the system capabilities, an optimization process is devised. On the one hand, the TDC measurements are enhanced to a precision of 40 ps. On the other hand, a coincidence algorithm is developed, responsible for identifying detector pairs that have registered an event within a certain time window. Finally, the Thesis conclusions and future work are presented, followed by the references. A list of publications and conferences attended is also provided.
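
    As an illustration of the coincidence step mentioned above, the Python sketch below pairs time-sorted single events from different detectors that fall within a fixed coincidence window. The function name, window value, and event list are hypothetical and not taken from the Thesis.

```python
# Hedged sketch of a coincidence search over time-sorted single events.
def find_coincidences(events, window_ps=500):
    """events: list of (timestamp_ps, detector_id), assumed sorted by timestamp."""
    pairs = []
    for i, (t_i, det_i) in enumerate(events):
        j = i + 1
        while j < len(events) and events[j][0] - t_i <= window_ps:
            t_j, det_j = events[j]
            if det_j != det_i:                 # only cross-detector pairs count
                pairs.append(((t_i, det_i), (t_j, det_j)))
            j += 1
    return pairs

singles = [(0, "A"), (180, "B"), (900, "A"), (5000, "B"), (5120, "A")]
print(find_coincidences(singles))   # -> (0,A)-(180,B) and (5000,B)-(5120,A)
```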

  12. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes, such as Design of Experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and the object structure are presented here in enough detail to implement ROSE in any object-oriented language or modeling tool.
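
    A minimal Python sketch of the central idea follows (illustrative only; the class and method names are assumptions, not the actual ROSE API): an execution process such as a parameter study is written once against a small model interface and can then be reused with any model.

```python
# Hedged sketch: separating the execution process from the model itself.
class Model:
    """Minimal model API: expose named inputs, run, and read named outputs."""
    def set_input(self, name, value): ...
    def run(self): ...
    def get_output(self, name): ...

class NozzleModel(Model):
    def __init__(self): self.inputs = {}
    def set_input(self, name, value): self.inputs[name] = value
    def run(self): self.thrust = 2.0 * self.inputs["pressure"]   # toy physics
    def get_output(self, name): return getattr(self, name)

def parameter_study(model: Model, name, values, output):
    """A reusable execution process: sweep one input, collect one output."""
    results = []
    for v in values:
        model.set_input(name, v)
        model.run()
        results.append((v, model.get_output(output)))
    return results

print(parameter_study(NozzleModel(), "pressure", [1.0, 2.0, 3.0], "thrust"))
```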

  13. A framework for development of an intelligent system for design and manufacturing of stamping dies

    NASA Astrophysics Data System (ADS)

    Hussein, H. M. A.; Kumar, S.

    2014-07-01

    An integration of computer-aided design (CAD), computer-aided process planning (CAPP), and computer-aided manufacturing (CAM) is required to develop an intelligent system for the design and manufacturing of stamping dies in sheet metal industries. In this paper, a framework for the development of such an intelligent system is proposed. In the proposed framework, the intelligent system is structured in the form of various expert system modules for the different activities of die design and manufacturing, and all system modules are integrated with each other. The proposed system takes as input a CAD file of the sheet metal part, and the system modules then automate all tasks related to the design and manufacturing of stamping dies. The modules are coded in Visual Basic (VB) and developed on the platform of AutoCAD software.

  14. OPENCORE NMR: Open-source core modules for implementing an integrated FPGA-based NMR spectrometer

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuyuki

    2008-06-01

    A tool kit for implementing an integrated FPGA-based NMR spectrometer [K. Takeda, A highly integrated FPGA-based nuclear magnetic resonance spectrometer, Rev. Sci. Instrum. 78 (2007) 033103], referred to as the OPENCORE NMR spectrometer, is open to the public. The system is composed of an FPGA chip and several peripheral boards for USB communication, direct digital synthesis (DDS), RF transmission, signal acquisition, etc. A number of digital modules have been implemented inside the FPGA chip, including three pulse programmers, the digital part of the DDS, a digital quadrature demodulator, dual digital low-pass filters, and a PC interface. These FPGA core modules are written in VHDL, and their source codes are available on our website. This work aims at providing sufficient information with which one can, given some facility in circuit board manufacturing, reproduce the OPENCORE NMR spectrometer presented here. Users are also encouraged to modify the design of the spectrometer according to their own specific needs. A home-built NMR spectrometer can serve a complementary role to a sophisticated commercial spectrometer, should one come across new ideas that require heavy modification to the hardware inside the spectrometer. This work can lower the barrier to building a handmade NMR spectrometer in the laboratory and promote novel and exciting NMR experiments.
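
    The receive-side processing named above (quadrature demodulation followed by low-pass filtering) can be sketched in a few lines of NumPy. This is a hedged software illustration with made-up frequencies, not the VHDL implementation distributed with OPENCORE NMR.

```python
# Hedged sketch of digital quadrature demodulation plus low-pass filtering.
import numpy as np

fs = 20e6                                   # ADC sampling rate (illustrative)
f_ref = 5e6                                 # reference / carrier frequency
t = np.arange(0, 1e-4, 1 / fs)
rf = np.cos(2 * np.pi * (f_ref + 2e3) * t)  # signal offset 2 kHz from the carrier

# Quadrature demodulation: multiply by cos / -sin of the reference.
i = rf * np.cos(2 * np.pi * f_ref * t)
q = -rf * np.sin(2 * np.pi * f_ref * t)

# Simple moving-average stand-in for the dual digital low-pass filters.
kernel = np.ones(64) / 64
z = np.convolve(i, kernel, "same") + 1j * np.convolve(q, kernel, "same")

dphi = np.angle(np.mean(z[1:] * np.conj(z[:-1])))                   # phase step per sample
print("recovered offset ~", round(dphi * fs / (2 * np.pi)), "Hz")   # ~2000 Hz
```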

  15. SoMIR framework for designing high-NDBP photonic crystal waveguides.

    PubMed

    Mirjalili, Seyed Mohammad

    2014-06-20

    This work proposes a modularized framework for designing the structure of photonic crystal waveguides (PCWs) and reducing human involvement during the design process. The proposed framework consists of three main modules: a parameters module, a constraints module, and an optimizer module. The first module is responsible for defining the structural parameters of a given PCW. The second module defines various limitations in order to achieve desirable optimum designs. The third module is the optimizer, in which a numerical optimization method is employed to perform the optimization. As case studies, two new structures called the Ellipse PCW (EPCW) and the Hypoellipse PCW (HPCW), with a different hole shape in each row, are proposed and optimized by the framework. The calculation results show that the proposed framework is able to successfully optimize the structures of the new EPCW and HPCW. In addition, the results demonstrate the applicability of the proposed framework to optimizing different PCWs. The results of the comparative study show that the optimized EPCW and HPCW provide significant improvements in normalized delay-bandwidth product (NDBP) of 18% and 9%, respectively, compared to the ring-shape-hole PCW, which has the highest NDBP in the literature. Finally, simulations of pulse propagation confirm the manufacturing feasibility of both optimized structures.
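
    A toy Python sketch of the three-module structure (parameters, constraints, optimizer) is given below; the objective function is a stand-in for a real NDBP evaluation, and all names, bounds, and values are illustrative assumptions.

```python
# Hedged, toy illustration of a parameters / constraints / optimizer split.
import random

def parameters_module():                 # structural parameters and their bounds
    return {"r1": (0.2, 0.4), "r2": (0.2, 0.4)}

def constraints_module(p):               # reject infeasible designs
    return abs(p["r1"] - p["r2"]) <= 0.15

def evaluate_ndbp(p):                    # placeholder for a real PCW solver
    return 0.3 - (p["r1"] - 0.27) ** 2 - (p["r2"] - 0.33) ** 2

def optimizer_module(n_iter=2000, seed=1):
    rng, best = random.Random(seed), (None, float("-inf"))
    bounds = parameters_module()
    for _ in range(n_iter):
        p = {k: rng.uniform(*b) for k, b in bounds.items()}
        if not constraints_module(p):
            continue
        score = evaluate_ndbp(p)
        if score > best[1]:
            best = (p, score)
    return best

print(optimizer_module())
```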

  16. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
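
    The multi-scale Laplacian-of-Gaussian edge detection mentioned above can be sketched in software as follows; this NumPy/SciPy illustration only mirrors the algorithm on a synthetic image, not the FPGA architecture described in the paper.

```python
# Hedged software sketch of multi-scale Laplacian-of-Gaussian edge detection.
import numpy as np
from scipy.ndimage import gaussian_laplace

image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                      # synthetic bright square

edges = []
for sigma in (1.0, 2.0, 4.0):                  # multiple scales
    log = gaussian_laplace(image, sigma=sigma)
    # Edges lie at zero crossings of the LoG response; a cheap approximation
    # is to mark pixels where the response changes sign horizontally.
    zc = np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    edges.append(zc)

print([int(e.sum()) for e in edges])           # edge-pixel counts per scale
```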

  17. An FPGA-based ultrasound imaging system using capacitive micromachined ultrasonic transducers.

    PubMed

    Wong, Lawrence L P; Chen, Albert I; Logan, Andrew S; Yeow, John T W

    2012-07-01

    We report the design and experimental results of a field-programmable gate array (FPGA)-based real-time ultrasound imaging system that uses a 16-element phased-array capacitive micromachined ultrasonic transducer fabricated using a fusion bonding process. The imaging system consists of the transducer, discrete analog components situated on a custom-made circuit board, the FPGA, and a monitor. The FPGA program consists of five functional blocks: a main counter, transmit and receive beamformer, receive signal pre-processing, envelope detection, and display. No dedicated digital signal processor or personal computer is required for the imaging system. An experiment is carried out to obtain the sector B-scan of a 4-wire target. The ultrasound imaging system demonstrates the possibility of an integrated system-in-a-package solution.
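
    Two of the FPGA blocks listed above, receive beamforming and envelope detection, are sketched below in NumPy/SciPy as a software illustration; the element count matches the 16-element array, but the pulse, steering delays, and Hilbert-based envelope are assumptions rather than the paper's design.

```python
# Hedged sketch of delay-and-sum receive beamforming plus envelope detection.
import numpy as np
from scipy.signal import hilbert

fs, f0 = 40e6, 5e6
t = np.arange(0, 4e-6, 1 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 2e-6) ** 2) / (2 * (0.3e-6) ** 2))

n_elem = 16
delays = np.linspace(0, 8, n_elem).astype(int)        # steering delays in samples
channels = [np.roll(pulse, d) for d in delays]        # echoes arriving at each element

# Delay-and-sum beamforming: undo the per-channel delay, then sum coherently.
beamformed = sum(np.roll(ch, -d) for ch, d in zip(channels, delays)) / n_elem

envelope = np.abs(hilbert(beamformed))                # envelope detection
print(round(float(envelope.max()), 3), int(envelope.argmax()))
```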

  18. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2013-05-01

    A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.

  19. FPGA systems development based on universal controller module

    NASA Astrophysics Data System (ADS)

    Graczyk, Rafał; Pożniak, Krzysztof T.; Romaniuk, Ryszard S.

    2008-01-01

    This paper describes the hardware and software concept of the Universal Controller Module (UCM), an FPGA/PowerPC-based embedded system designed to work as part of a VME system. On the one hand, the UCM provides access to the VME crate through various laboratory and industrial interfaces such as gigabit optical links, 10/100 Mbit Ethernet, Universal Serial Bus (USB), and Controller Area Network (CAN); on the other hand, the UCM is a well-prepared platform for further investigation and development in the field of IP cores and for functionality expansion through a PCI Mezzanine Card (PMC).

  20. A Usability and Accessibility Design and Evaluation Framework for ICT Services

    NASA Astrophysics Data System (ADS)

    Subasi, Özge; Leitner, Michael; Tscheligi, Manfred

    This paper introduces a step-by-step framework that helps practitioners combine accessibility and usability engineering processes. Following the discussions on the need for more user-centeredness in the design of accessible solutions, there is a need for such a practical framework. In general, accessibility has been considered a topic dealing with "hard facts", but lately terms like semantic and procedural accessibility have been introduced. In the following pages we propose a first sketch of a framework that shows how to merge usability and accessibility evaluation methods in the same process in order to guarantee a unified solution for both the hard and soft facts of accessibility. We argue that by enhancing the user-centered design process as described in ISO DIS 9241-210 (the revised DIN ISO 13407), accessibility and usability issues can be covered in one process.

  1. 10 Gbps TCP/IP streams from the FPGA for the CMS DAQ eventbuilder network

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Coarasa, J. A.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Ozga, W.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Spataru, A. C.; Stieger, B.; Sumorok, K.; Veverka, J.; Wakefield, C. C.; Zejdl, P.

    2013-12-01

    For the upgrade of the DAQ of the CMS experiment in 2013/2014, an interface between the custom detector Front End Drivers (FEDs) and the new DAQ eventbuilder network has to be designed. For loss-less data collection from more than 600 FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. We present the hardware challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation, together with a set of performance measurements carried out with the current prototype.

  2. A framework for contextual design and evaluation of health information technology.

    PubMed

    Kuziemsky, Craig; Kushniruk, Andre

    2015-01-01

    Poor contextual fit is a significant cause of health information technology (HIT) implementation issues. While the need for better fit between HIT and context has been well described, there is a shortage of approaches for how to achieve it. Although the diversity of the contexts where HIT is used prevents us from designing HIT to fit all contexts, if we had better ways of understanding and modelling contexts we could design and evaluate HIT to better fit contexts of use. This paper addresses the above need by developing a framework consisting of a set of terminology and concepts for modelling contextual structures and behaviours to support HIT design. The framework provides a way of binding contextual considerations, allowing us to better model contexts as part of HIT design and evaluation.

  3. Problem-Based Learning in Management Education: A Framework for Designing Context

    ERIC Educational Resources Information Center

    Sherwood, Arthur Lloyd

    2004-01-01

    Problem-based learning has great potential for management education. Placing students in a problem-centered environment may help bridge the gap between theory and practice. One important but underdeveloped issue for problem-based learning is the context design of the problem-solving situation. This article's purpose is to develop a framework for…

  4. A Design Based Research Framework for Implementing a Transnational Mobile and Blended Learning Solution

    ERIC Educational Resources Information Center

    Palalas, Agnieszka; Berezin, Nicole; Gunawardena, Charlotte; Kramer, Gretchen

    2015-01-01

    The article proposes a modified Design-Based Research (DBR) framework which accommodates the various socio-cultural factors that emerged in the longitudinal PA-HELP research study at Central University College (CUC) in Ghana, Africa. A transnational team of stakeholders from Ghana, Canada, and the USA collaborated on the development,…

  5. Using the DSAP Framework to Guide Instructional Design and Technology Integration in BYOD Classrooms

    ERIC Educational Resources Information Center

    Wasko, Christopher W.

    2016-01-01

    The purpose of this study was to determine the suitability of the DSAP Framework to guide instructional design and technology integration for teachers piloting a BYOD (Bring Your Own Device) initiative and to measure the impact the initiative had on the amount and type of technology used in pilot classrooms. Quantitative and qualitative data were…

  6. Designing a Virtual Olympic Games Framework by Using Simulation in Web 2.0 Technologies

    ERIC Educational Resources Information Center

    Stoilescu, Dorian

    2013-01-01

    Instructional simulation had major difficulties in the past for offering limited possibilities in practice and learning. This article proposes a link between instructional simulation and Web 2.0 technologies. More exactly, I present the design of the Virtual Olympic Games Framework (VOGF), as a significant demonstration of how interactivity in…

  7. Designing Energy Supply Chains with the P-graph Framework under Cost Constraints and Sustainability Considerations

    EPA Science Inventory

    A computer-aided methodology for designing sustainable supply chains is presented using the P-graph framework to develop supply chain structures which are analyzed using cost, the cost of producing electricity, and two sustainability metrics: ecological footprint and emergy. They...

  8. Designing Multi-Channel Web Frameworks for Cultural Tourism Applications: The MUSE Case Study.

    ERIC Educational Resources Information Center

    Garzotto, Franca; Salmon, Tullio; Pigozzi, Massimiliano

    A framework for the design of multi-channel (MC) applications in the cultural tourism domain is presented. Several heterogeneous interface devices are supported including location-sensitive mobile units, on-site stationary devices, and personalized CDs that extend the on-site experience beyond the visit time thanks to personal memories gathered…

  9. Beyond a Definition: Toward a Framework for Designing and Specifying Mentoring Models

    ERIC Educational Resources Information Center

    Dawson, Phillip

    2014-01-01

    More than three decades of mentoring research has yet to converge on a unifying definition of mentoring; this is unsurprising given the diversity of relationships classified as mentoring. This article advances beyond a definition toward a common framework for specifying mentoring models. Sixteen design elements were identified from the literature…

  10. Using the Universal Design for Learning Framework to Support Culturally Diverse Learners

    ERIC Educational Resources Information Center

    Chita-Tegmark, Meia; Gravel, Jenna W.; Serpa, Maria de Lourdes B.; Domings, Yvonne; Rose, David H.

    2012-01-01

    This article describes the mechanism through which cultural variability is a source of learning differences. The authors argue that the Universal Design for Learning can be extended to capture the way learning is influenced by cultural variability, and show how the UDL framework might be used to create a curriculum that is responsive to this…

  11. The Role of a Reusable Assessment Framework in Designing Computer-Based Learning Environments.

    ERIC Educational Resources Information Center

    Park, Young; Bauer, Malcolm

    This paper introduces the concept of a reusable assessment framework (RAF). An RAF contains a library of linked assessment design objects that express: (1) a specific set of proficiencies (i.e., the knowledge, skills, and abilities of students for a given content or skill area); (2) the types of evidence that can be used to estimate those…

  12. Prospective Secondary Teachers Repositioning by Designing, Implementing and Testing Mathematics Learning Objects: A Conceptual Framework

    ERIC Educational Resources Information Center

    Mgombelo, Joyce R.; Buteau, Chantal

    2009-01-01

    This article describes a conceptual framework developed to illuminate how prospective teachers' learning experiences are shaped by didactic-sensitive activities in departments of mathematics. We draw from the experiences of prospective teachers in the Department of Mathematics at our institution in designing, implementing (i.e. computer…

  13. A Buyer Behaviour Framework for the Development and Design of Software Agents in E-Commerce.

    ERIC Educational Resources Information Center

    Sproule, Susan; Archer, Norm

    2000-01-01

    Software agents are computer programs that run in the background and perform tasks autonomously as delegated by the user. This paper blends models from marketing research and findings from the field of decision support systems to build a framework for the design of software agents to support e-commerce buying applications. (Contains 35…

  14. Developing a Framework for Social Technologies in Learning via Design-Based Research

    ERIC Educational Resources Information Center

    Parmaxi, Antigoni; Zaphiris, Panayiotis

    2015-01-01

    This paper reports on the use of design-based research (DBR) for the development of a framework that grounds the use of social technologies in learning. The paper focuses on three studies which step on the learning theory of constructionism. Constructionism assumes that knowledge is better gained when students find this knowledge for themselves…

  15. A Framework for Analyzing Interdisciplinary Tasks: Implications for Student Learning and Curricular Design

    ERIC Educational Resources Information Center

    Gouvea, Julia Svoboda; Sawtelle, Vashti; Geller, Benjamin D.; Turpen, Chandra

    2013-01-01

    The national conversation around undergraduate science instruction is calling for increased interdisciplinarity. As these calls increase, there is a need to consider the learning objectives of interdisciplinary science courses and how to design curricula to support those objectives. We present a framework that can help support interdisciplinary…

  16. A Framework for Designing a Research-Based "Maths Counsellor" Teacher Programme

    ERIC Educational Resources Information Center

    Jankvist, Uffe Thomas; Niss, Mogens

    2015-01-01

    This article addresses one way in which decades of mathematics education research results can inform practice, by offering a framework for designing and implementing an in-service teacher education programme for upper secondary mathematics teachers in Denmark. The programme aims to educate a "task force" of so-called "maths…

  17. Learning by Design: Creating Pedagogical Frameworks for Knowledge Building in the Twenty-First Century

    ERIC Educational Resources Information Center

    Yelland, Nicola; Cope, Bill; Kalantzis, Mary

    2008-01-01

    In this paper we present a new theoretical framework for effective teaching and learning in the twenty-first century. We focus on learning activities that exemplify pedagogy as knowing in action and consider the ways in which this enables a transformation of learning in schools. We provide examples of the ways in which this can be designed and…

  18. A Problem-Solving Conceptual Framework and Its Implications in Designing Problem-Posing Tasks

    ERIC Educational Resources Information Center

    Singer, Florence Mihaela; Voica, Cristian

    2013-01-01

    The links between the mathematical and cognitive models that interact during problem solving are explored with the purpose of developing a reference framework for designing problem-posing tasks. When the process of solving is a successful one, a solver successively changes his/her cognitive stances related to the problem via transformations that…

  19. 78 FR 9633 - Policy Statement on the Scenario Design Framework for Stress Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-11

    ... November 23, 2012, at 77 FR 70124 remains February 15, 2013. FOR FURTHER INFORMATION CONTACT: Tim Clark... Federal Register of November 23, 2012, (77 FR 70124) requesting public comment on a policy statement on... CFR Part 252 RIN 7100-AD-86 Policy Statement on the Scenario Design Framework for Stress...

  20. A KBE-enabled design framework for cost/weight optimization study of aircraft composite structures

    NASA Astrophysics Data System (ADS)

    Wang, H.; La Rocca, G.; van Tooren, M. J. L.

    2014-10-01

    Traditionally, minimum weight is the objective when optimizing airframe structures. This optimization, however, does not consider the manufacturing cost, which actually determines the profit of the airframe manufacturer. To this purpose, a design framework has been developed that is able to perform cost/weight multi-objective optimization of an aircraft component, including large topology variations of the structural configuration. The key element of the proposed framework is a dedicated knowledge-based engineering (KBE) application, called the multi-model generator, which enables modelling very different product configurations and variants and extracting all data required to feed the weight and cost estimation modules, in a fully automated fashion. The weight estimation method developed in this research work uses Finite Element Analysis to calculate the internal stresses of the structural elements and an analytical composite plate sizing method to determine their minimum required thicknesses. The manufacturing cost estimation module was developed on the basis of a cost model available in the literature. The capability of the framework was successfully demonstrated by designing and optimizing the composite structure of a business jet rudder. The case study indicates that the design framework is able to find the Pareto-optimal set for minimum structural weight and manufacturing cost in a very quick way. Based on the Pareto set, the rudder manufacturer is in a position to conduct internal trade-off studies between minimum-weight and minimum-cost solutions, as well as to offer the OEM a full set of optimized options to choose from, rather than a single feasible design.
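
    The Pareto step described above can be illustrated with a short Python sketch that extracts the non-dominated designs from a set of (weight, cost) candidates; the candidate values are invented for illustration and are unrelated to the rudder study.

```python
# Hedged sketch of Pareto-set extraction for two minimized objectives.
def pareto_set(designs):
    """designs: list of (name, weight, cost); both objectives are minimized."""
    front = []
    for name, w, c in designs:
        dominated = any(w2 <= w and c2 <= c and (w2 < w or c2 < c)
                        for _, w2, c2 in designs)
        if not dominated:
            front.append((name, w, c))
    return front

candidates = [("A", 120.0, 50.0), ("B", 100.0, 70.0),
              ("C", 125.0, 52.0), ("D", 130.0, 45.0)]
print(pareto_set(candidates))   # A, B, and D survive; C is dominated by A
```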

  1. Designing Computer Learning Environments for Engineering and Computer Science: The Scaffolded Knowledge Integration Framework.

    ERIC Educational Resources Information Center

    Linn, Marcia C.

    1995-01-01

    Describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering: the LISP Knowledge Integration Environment and the spatial reasoning environment. (101 references) (Author/MKR)

  2. An Ontology-Based Framework for Bridging Learning Design and Learning Content

    ERIC Educational Resources Information Center

    Knight, Colin; Gasevic, Dragan; Richards, Griff

    2006-01-01

    The paper describes an ontology-based framework for bridging learning design and learning object content. In present solutions, researchers have proposed conceptual models and developed tools for both of those subjects, but without detailed discussions of how they can be used together. In this paper we advocate the use of ontologies to explicitly…

  3. Report of the Odyssey FPGA Independent Assessment Team

    NASA Technical Reports Server (NTRS)

    Mayer, Donald C.; Katz, Richard B.; Osborn, Jon V.; Soden, Jerry M.; Barto, R.; Day, John H. (Technical Monitor)

    2001-01-01

    An independent assessment team (IAT) was formed and met on April 2, 2001, at Lockheed Martin in Denver, Colorado, to aid in understanding a technical issue for the Mars Odyssey spacecraft scheduled for launch on April 7, 2001. An RP1280A field-programmable gate array (FPGA) from a lot of parts common to the SIRTF, Odyssey, and Genesis missions had failed on a SIRTF printed circuit board. A second FPGA from an earlier Odyssey circuit board was also known to have failed and was included in the analysis by the IAT. Observations indicated an abnormally high failure rate for flight RP1280A devices (the first flight lot produced using this flow) at Lockheed Martin, and the causes of these failures were not determined. Standard failure analysis techniques were applied to these parts; however, additional diagnostic techniques unique to devices of this class were not used, and the parts were prematurely submitted to a destructive physical analysis, making a determination of the root cause of failure difficult. Any of several potential failure scenarios may have caused these failures, including electrostatic discharge, electrical overstress, manufacturing defects, board design errors, board manufacturing errors, FPGA design errors, or programmer errors. Several of these mechanisms would have relatively benign consequences for the disposition of the parts currently installed on boards in the Odyssey spacecraft if established as the root cause of failure. However, other potential failure mechanisms could have more dire consequences. As there is no simple way to determine the likely failure mechanisms with reasonable confidence before the Odyssey launch, it is not possible for the IAT to recommend a disposition for the other parts on boards in the Odyssey spacecraft based on sound engineering principles.

  4. Real-time windowing in imaging radar using FPGA technique

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique

    2005-02-01

    Imaging radar uses high-frequency electromagnetic waves reflected from different objects to estimate their parameters. Pulse compression is a standard signal processing technique used to minimize the peak transmission power, maximize the SNR, and obtain better resolution. Usually the pulse compression can be achieved using a matched filter. The level of the side-lobes in imaging radar can be reduced using special weighting-function processing. Many well-known weighting functions (Hamming, Hanning, Blackman, Chebyshev, Blackman-Harris, Kaiser-Bessel, etc.) are widely used in signal processing applications. Field Programmable Gate Arrays (FPGAs) offer great benefits such as instantaneous implementation, dynamic reconfiguration, and field programmability. This reconfigurability makes FPGAs a better solution than custom-made integrated circuits. This work aims at demonstrating a reasonably flexible implementation of linear-FM signal generation and pulse compression using Matlab, Simulink, and System Generator. Employing the FPGA and the mentioned software, we propose a pulse compression design on the FPGA that uses classical and novel windowing techniques to reduce the side-lobe level. This increases the ability to detect small or closely spaced targets in imaging radar. The FPGA's capability for parallel real-time processing makes it possible to realize the proposed algorithms. The paper also presents experimental results of the proposed windowing procedure in a marine radar with the following parameters: linear FM (chirp) signal; frequency deviation DF of 9.375 MHz; pulse width T of 3.2 μs; 800 taps in the matched filter; sampling frequency of 253.125 MHz. The reduction of side-lobe levels was realized in real time, permitting better resolution of small targets.
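
    The effect of the weighting functions discussed above can be sketched in a few lines of NumPy: compress a linear-FM chirp with a matched filter and compare the peak side-lobe level with and without a Hamming window on the reference. The parameters loosely follow those quoted in the abstract, but the side-lobe estimate is deliberately crude and the code is an illustration, not the FPGA design.

```python
# Hedged sketch of windowed pulse compression for a linear-FM chirp.
import numpy as np

fs, T, df = 253.125e6, 3.2e-6, 9.375e6        # sample rate, pulse width, FM deviation
t = np.arange(0, T, 1 / fs)                   # ~810 samples (the paper's filter has 800 taps)
chirp = np.exp(1j * np.pi * (df / T) * t**2)  # complex baseband linear-FM (chirp) pulse

matched = np.conj(chirp[::-1])                # plain matched filter (rectangular window)
weighted = matched * np.hamming(matched.size) # Hamming-weighted reference

def peak_sidelobe_db(reference, guard=30):
    out = np.abs(np.convolve(chirp, reference))
    out /= out.max()
    p = out.argmax()
    # crude estimate: largest value outside a guard band around the main lobe
    side = np.concatenate([out[:p - guard], out[p + guard:]]).max()
    return 20 * np.log10(side)

print("rectangular window:", round(peak_sidelobe_db(matched), 1), "dB")
print("Hamming window:    ", round(peak_sidelobe_db(weighted), 1), "dB")  # noticeably lower
```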

  5. Using Learning Design as a Framework for Supporting the Design and Reuse of OER

    ERIC Educational Resources Information Center

    Conole, Grainne; Weller, Martin

    2008-01-01

    The paper will argue that adopting a learning design methodology may provide a vehicle for enabling better design and reuse of Open Educational Resources (OERs). It will describe a learning design methodology, which is being developed and implemented at the Open University in the UK. The aim is to develop a "pick and mix" learning design toolbox…

  6. Developing a framework for qualitative engineering: Research in design and analysis of complex structural systems

    NASA Technical Reports Server (NTRS)

    Franck, Bruno M.

    1990-01-01

    The research is focused on automating the evaluation of complex structural systems, whether for the design of a new system or the analysis of an existing one, by developing new structural analysis techniques based on qualitative reasoning. The problem is to identify and better understand: (1) the requirements for the automation of design, and (2) the qualitative reasoning associated with the conceptual development of a complex system. The long-term objective is to develop an integrated design-risk assessment environment for the evaluation of complex structural systems. The scope of this short presentation is to describe the design and cognition components of the research. Design has received special attention in cognitive science because it is now identified as a problem solving activity that is different from other information processing tasks (1). Before an attempt can be made to automate design, a thorough understanding of the underlying design theory and methodology is needed, since the design process is, in many cases, multi-disciplinary, complex in size and motivation, and uses various reasoning processes involving different kinds of knowledge in ways which vary from one context to another. The objective is to unify all the various types of knowledge under one framework of cognition. This presentation focuses on the cognitive science framework that we are using to represent the knowledge aspects associated with the human mind's abstraction abilities and how we apply it to the engineering knowledge and engineering reasoning in design.

  7. A vaccine study design selection framework for the postlicensure rapid immunization safety monitoring program.

    PubMed

    Baker, Meghan A; Lieu, Tracy A; Li, Lingling; Hua, Wei; Qiang, Yandong; Kawai, Alison Tse; Fireman, Bruce H; Martin, David B; Nguyen, Michael D

    2015-04-15

    The Postlicensure Rapid Immunization Safety Monitoring Program, the vaccination safety monitoring component of the US Food and Drug Administration's Mini-Sentinel project, is currently the largest cohort in the US general population for vaccine safety surveillance. We developed a study design selection framework to provide a roadmap and description of methods that may be utilized to evaluate potential associations between vaccines and health outcomes of interest in the Postlicensure Rapid Immunization Safety Monitoring Program and other systems using administrative data. The strengths and weaknesses of designs for vaccine safety monitoring, including the cohort design, the case-centered design, the risk interval design, the case-control design, the self-controlled risk interval design, the self-controlled case series method, and the case-crossover design, are described and summarized in tabular form. A structured decision table is provided to aid in planning of future vaccine safety monitoring activities, and the data components comprising the structured decision table are delineated. The study design selection framework provides a starting point for planning vaccine safety evaluations using claims-based data sources.

  8. Crisis crowdsourcing framework: designing strategic configurations of crowdsourcing for the emergency management domain

    USGS Publications Warehouse

    Liu, Sophia B.

    2014-01-01

    Crowdsourcing is not a new practice, but it is a concept that has gained significant attention during recent disasters. Drawing from previous work in the crisis informatics, disaster sociology, and computer-supported cooperative work (CSCW) literature, the paper first explains recent conceptualizations of crowdsourcing and how crowdsourcing is a way of leveraging disaster convergence. The CSCW concept of “articulation work” is introduced as an interpretive frame for extracting the salient dimensions of “crisis crowdsourcing.” Then, a series of vignettes is presented to illustrate the evolution of crisis crowdsourcing that spontaneously emerged after the 2010 Haiti earthquake and evolved into more established forms of public engagement during crises. The best practices extracted from the vignettes clarify the efforts to formalize crisis crowdsourcing through the development of innovative interfaces designed to support the articulation work needed to facilitate spontaneous volunteer efforts. Extracting these best practices led to the development of a conceptual framework that unpacks the key dimensions of crisis crowdsourcing. The Crisis Crowdsourcing Framework is a systematic, problem-driven approach to determining the why, who, what, when, where, and how aspects of a crowdsourcing system. The framework also draws attention to the social, technological, organizational, and policy (STOP) interfaces that need to be designed to manage the articulation work involved in reducing the complexity of coordinating across these key dimensions. An example of how to apply the framework to design a crowdsourcing system is offered, with a discussion of the implications of applying this framework as well as its limitations. Innovation is occurring at the social, technological, organizational, and policy interfaces, enabling crowdsourcing to be operationalized and integrated into official products and services.

  9. Alternative Model-Based and Design-Based Frameworks for Inference from Samples to Populations: From Polarization to Integration

    ERIC Educational Resources Information Center

    Sterba, Sonya K.

    2009-01-01

    A model-based framework, due originally to R. A. Fisher, and a design-based framework, due originally to J. Neyman, offer alternative mechanisms for inference from samples to populations. We show how these frameworks can utilize different types of samples (nonrandom or random vs. only random) and allow different kinds of inference (descriptive vs.…

  10. A reuse-based framework for the design of analog and mixed-signal ICs

    NASA Astrophysics Data System (ADS)

    Castro-Lopez, Rafael; Fernandez, Francisco V.; Rodriguez Vazquez, Angel

    2005-06-01

    Despite the spectacular breakthroughs of the semiconductor industry, the ability to design integrated circuits (ICs) under stringent time-to-market (TTM) requirements is lagging behind integration capacity, which so far keeps pace with the still-valid Moore's Law. The resulting gap threatens to slow down this phenomenal growth. The design community believes that it is only by means of powerful CAD tools and design methodologies - and, possibly, a design paradigm shift - that this design gap can be bridged. In this sense, reuse-based design is seen as a promising solution, and concepts such as IP Block, Virtual Component, and Design Reuse have become commonplace thanks to the significant advances in the digital arena. Unfortunately, the very nature of analog and mixed-signal (AMS) design has hindered a similar level of consensus and development. This paper presents a framework for the reuse-based design of AMS circuits. The framework is founded on three key elements: (1) a CAD-supported hierarchical design flow that facilitates the incorporation of AMS reusable blocks, reduces the overall design time, and expedites the management of increasing AMS design complexity; (2) a complete, clear definition of the AMS reusable block, structured into three separate facets or views: the behavioral, structural, and layout facets, the first two used for top-down electrical synthesis and bottom-up verification, the latter used during bottom-up physical synthesis; (3) a design-for-reusability set of tools, methods, and guidelines that, relying on intensive parameterization as well as on design knowledge capture and encapsulation, allows fully reusable AMS blocks to be produced. A case study and a functional silicon prototype demonstrate the validity of the paper's proposals.

  11. Zeolite-like metal–organic frameworks (ZMOFs): Design, synthesis, and properties

    DOE PAGESBeta

    Eddaoudi, Mohamed; Sava, Dorina F.; Eubank, Jarrod F.; Adil, Karim; Guillerm, Vincent

    2015-10-24

    This study highlights various design and synthesis approaches toward the construction of ZMOFs, which are metal–organic frameworks (MOFs) with topologies and, in some cases, features akin to traditional inorganic zeolites. The interest in this unique subset of MOFs is correlated with their exceptional characteristics arising from the periodic pore systems and distinctive cage-like cavities, in conjunction with modular intra- and/or extra-framework components, which ultimately allow for tailoring of the pore size, pore shape, and properties towards specific applications.

  12. A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2011-01-01

    A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.

  13. A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2012-01-01

    A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.

  14. An economic decision framework using modeling for improving aquifer remediation design

    SciTech Connect

    James, B.R.; Gwo, J.P.; Toran, L.E.

    1995-11-01

    Reducing cost is a critical challenge facing environmental remediation today. One of the most effective ways of reducing costs is to improve decision-making. This can range from choosing more cost-effective remediation alternatives (for example, determining whether a groundwater contamination plume should be remediated or not) to improving data collection (for example, determining when data collection should stop). Uncertainty in site conditions presents a major challenge for effective decision-making. We present a framework for increasing the effectiveness of remedial design decision-making at groundwater contamination sites where there is uncertainty in many parameters that affect remediation design. The objective is to provide an easy-to-use economic framework for making remediation decisions. The presented framework is used to 1) select the best remedial design from a suite of possible ones, 2) estimate whether additional data collection is cost-effective, and 3) determine the most important parameters to be sampled. The framework is developed by combining elements from Latin Hypercube simulation of contaminant transport, economic risk-cost-benefit analysis, and Regional Sensitivity Analysis (RSA).
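
    The Latin Hypercube element of the framework can be illustrated with a short Python sketch that draws stratified samples of two uncertain parameters and feeds them to a placeholder transport model; the parameter ranges, sample count, and model are assumptions made for illustration.

```python
# Hedged sketch of Latin Hypercube sampling of two uncertain site parameters.
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    rng = np.random.default_rng(seed)
    dims = len(bounds)
    # one stratified, shuffled sample per interval and dimension
    u = (rng.permuted(np.tile(np.arange(n_samples), (dims, 1)), axis=1)
         + rng.random((dims, n_samples))) / n_samples
    lo, hi = np.array(bounds).T
    return lo[:, None] + u * (hi - lo)[:, None]

samples = latin_hypercube(10, bounds=[(1e-5, 1e-3),    # hydraulic conductivity (m/s)
                                      (0.1, 0.4)])     # porosity
for k, n in samples.T:
    plume_length = 50 * k / n                           # stand-in for the real transport model
    print(f"K={k:.2e}, n={n:.2f}, plume~{plume_length:.4f}")
```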

  15. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Tsai; Wang, Chun-Hao; Huang, Ing-Jer; Wong, Weng-Fai

    2011-12-01

    Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, OpenGL ES application programming interface (API), device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.

  16. A unifying framework for systems modeling, control systems design, and system operation

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.

    2005-01-01

    Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition (whether functional, physical, or discipline-based) that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.

  17. Designing computer learning environments for engineering and computer science: The scaffolded knowledge integration framework

    NASA Astrophysics Data System (ADS)

    Linn, Marcia C.

    1995-06-01

    Designing effective curricula for complex topics and incorporating technological tools is an evolving process. One important way to foster effective design is to synthesize successful practices. This paper describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering. One course enhancement, the LISP Knowledge Integration Environment, improved learning and resulted in more gender-equitable outcomes. The second course enhancement, the spatial reasoning environment, addressed spatial reasoning in an introductory engineering course. This enhancement minimized the importance of prior knowledge of spatial reasoning and helped students develop a more comprehensive repertoire of spatial reasoning strategies. Taken together, the instructional research programs reinforce the value of the scaffolded knowledge integration framework and suggest directions for future curriculum reformers.

  18. Alternative Model-Based and Design-Based Frameworks for Inference From Samples to Populations: From Polarization to Integration

    PubMed Central

    Sterba, Sonya K.

    2010-01-01

    A model-based framework, due originally to R. A. Fisher, and a design-based framework, due originally to J. Neyman, offer alternative mechanisms for inference from samples to populations. We show how these frameworks can utilize different types of samples (nonrandom or random vs. only random) and allow different kinds of inference (descriptive vs. analytic) to different kinds of populations (finite vs. infinite). We describe the extent of each framework's implementation in observational psychology research. After clarifying some important limitations of each framework, we describe how these limitations are overcome by a newer hybrid model/design-based inferential framework. This hybrid framework allows both kinds of inference to both kinds of populations, given a random sample. We illustrate implementation of the hybrid framework using the High School and Beyond data set. PMID:20411042

  19. Role-Based Design: "A Contemporary Framework for Innovation and Creativity in Instructional Design"

    ERIC Educational Resources Information Center

    Hokanson, Brad; Miller, Charles

    2009-01-01

    This is the first in a series of four articles presenting a new outlook on the process of instructional design. Along with offering an improvement to current practice, the goal is to stimulate discussion about the role of designers, and more importantly, about the nature of the process of instructional design. The authors present in this article a…

  20. FPGA implementation of vision algorithms for small autonomous robots

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  1. FHAST: FPGA-Based Acceleration of Bowtie in Hardware.

    PubMed

    Fernandez, Edward B; Villarreal, Jason; Lonardi, Stefano; Najjar, Walid A

    2015-01-01

    While the sequencing capability of modern instruments continues to increase exponentially, the computational problem of mapping short sequenced reads to a reference genome still constitutes a bottleneck in the analysis pipeline. A variety of mapping tools (e.g., Bowtie, BWA) is available for general-purpose computer architectures. These tools can take many hours or even days to deliver mapping results, depending on the number of input reads, the size of the reference genome, and the number of allowed mismatches or insertions/deletions, making the mapping problem an ideal candidate for hardware acceleration. In this paper, we present FHAST (FPGA hardware accelerated sequence-matching tool), a drop-in replacement for Bowtie that uses a hardware design based on field-programmable gate arrays (FPGAs). Our architecture masks memory latency by executing multiple concurrent hardware threads accessing memory simultaneously. FHAST is composed of multiple parallel engines to exploit the parallelism available on an FPGA. We have implemented and tested FHAST on the Convey HC-1 and later ported it to the Convey HC-2ex, taking advantage of the large memory bandwidth available on these systems and the shared memory image between hardware and software. A preliminary version of FHAST running on the Convey HC-1 achieved up to a 70x speedup compared to single-threaded Bowtie. An improved version of FHAST running on the Convey HC-2ex FPGAs achieved up to a 12-fold speed gain compared to Bowtie running eight threads on an eight-core conventional architecture, while maintaining almost identical mapping accuracy. FHAST is a drop-in replacement for Bowtie, so it can be incorporated in any analysis pipeline that uses Bowtie (e.g., TopHat). PMID:26451812

  2. FHAST: FPGA-Based Acceleration of Bowtie in Hardware.

    PubMed

    Fernandez, Edward B; Villarreal, Jason; Lonardi, Stefano; Najjar, Walid A

    2015-01-01

    While the sequencing capability of modern instruments continues to increase exponentially, the computational problem of mapping short sequenced reads to a reference genome still constitutes a bottleneck in the analysis pipeline. A variety of mapping tools (e.g., Bowtie, BWA) is available for general-purpose computer architectures. These tools can take many hours or even days to deliver mapping results, depending on the number of input reads, the size of the reference genome, and the number of allowed mismatches or insertions/deletions, making the mapping problem an ideal candidate for hardware acceleration. In this paper, we present FHAST (FPGA hardware accelerated sequence-matching tool), a drop-in replacement for Bowtie that uses a hardware design based on field-programmable gate arrays (FPGAs). Our architecture masks memory latency by executing multiple concurrent hardware threads accessing memory simultaneously. FHAST is composed of multiple parallel engines to exploit the parallelism available on an FPGA. We have implemented and tested FHAST on the Convey HC-1 and later ported it to the Convey HC-2ex, taking advantage of the large memory bandwidth available on these systems and the shared memory image between hardware and software. A preliminary version of FHAST running on the Convey HC-1 achieved up to a 70x speedup compared to single-threaded Bowtie. An improved version of FHAST running on the Convey HC-2ex FPGAs achieved up to a 12-fold speed gain compared to Bowtie running eight threads on an eight-core conventional architecture, while maintaining almost identical mapping accuracy. FHAST is a drop-in replacement for Bowtie, so it can be incorporated in any analysis pipeline that uses Bowtie (e.g., TopHat).

  3. An Information-Centric Framework for Designing Patient-Centered Medical Decision Aids and Risk Communication

    PubMed Central

    Franklin, Lyndsey; Plaisant, Catherine; Shneiderman, Ben

    2013-01-01

    Risk communication is a major challenge in productive patient-physician communication. Patient decision making responsibilities come with an implicit assumption that patients are sufficiently educated and confident in their abilities to make decisions about their care based on evidence based treatment recommendations. Attempts to improve health literacy in patients by way of graphical decision aids have met with success. Such decision aids typically have been designed for a general population and evaluated based on whether or not users of the decision aid can accurately report the data points in isolation. To classify decision aids, we present an information-centric framework for assessing the content delivered to patients. We provide examples of our framework from a literature survey and suggest ways improvements can be made by considering all dimensions of our framework. PMID:24551350

  4. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  5. RIPOSTE: a framework for improving the design and analysis of laboratory-based research

    PubMed Central

    Masca, Nicholas GD; Hensor, Elizabeth MA; Cornelius, Victoria R; Buffa, Francesca M; Marriott, Helen M; Eales, James M; Messenger, Michael P; Anderson, Amy E; Boot, Chris; Bunce, Catey; Goldin, Robert D; Harris, Jessica; Hinchliffe, Rod F; Junaid, Hiba; Kingston, Shaun; Martin-Ruiz, Carmen; Nelson, Christopher P; Peacock, Janet; Seed, Paul T; Shinkins, Bethany; Staples, Karl J; Toombs, Jamie; Wright, Adam KA; Teare, M Dawn

    2015-01-01

    Lack of reproducibility is an ongoing problem in some areas of the biomedical sciences. Poor experimental design and a failure to engage with experienced statisticians at key stages in the design and analysis of experiments are two factors that contribute to this problem. The RIPOSTE (Reducing IrreProducibility in labOratory STudiEs) framework has been developed to support early and regular discussions between scientists and statisticians in order to improve the design, conduct and analysis of laboratory studies and, therefore, to reduce irreproducibility. This framework is intended for use during the early stages of a research project, when specific questions or hypotheses are proposed. The essential points within the framework are explained and illustrated using three examples (a medical equipment test, a macrophage study and a gene expression study). Sound study design minimises the possibility of bias being introduced into experiments and leads to higher quality research with more reproducible results. DOI: http://dx.doi.org/10.7554/eLife.05519.001 PMID:25951517

  6. Towards a European Framework to Monitor Infectious Diseases among Migrant Populations: Design and Applicability

    PubMed Central

    Riccardo, Flavia; Dente, Maria Grazia; Kärki, Tommi; Fabiani, Massimo; Napoli, Christian; Chiarenza, Antonio; Giorgi Rossi, Paolo; Velasco Munoz, Cesar; Noori, Teymur; Declich, Silvia

    2015-01-01

    There are limitations in our capacity to interpret point estimates and trends of infectious diseases occurring among diverse migrant populations living in the European Union/European Economic Area (EU/EEA). The aim of this study was to design a data collection framework that could capture information on factors associated with increased risk of infectious diseases in migrant populations in the EU/EEA. The authors defined factors associated with increased risk according to a multi-dimensional framework and performed a systematic literature review in order to identify whether those factors well reflected the reported risk factors for infectious disease in these populations. Following this, the feasibility of applying this framework to relevant available EU/EEA data sources was assessed. The proposed multidimensional framework is well suited to capture the complexity and concurrence of these risk factors and is in principle applicable in the EU/EEA. The authors conclude that adopting a multi-dimensional framework to monitor infectious diseases could favor the disaggregated collection and analysis of migrant health data. PMID:26393623

  7. Design and synthesis of an exceptionally stable and highly porous metal-organic framework

    NASA Astrophysics Data System (ADS)

    Li, Hailian; Eddaoudi, Mohamed; O'Keeffe, M.; Yaghi, O. M.

    1999-11-01

    Open metal-organic frameworks are widely regarded as promising materials for applications in catalysis, separation, gas storage and molecular recognition. Compared to conventionally used microporous inorganic materials such as zeolites, these organic structures have the potential for more flexible rational design, through control of the architecture and functionalization of the pores. So far, the inability of these open frameworks to support permanent porosity and to avoid collapsing in the absence of guest molecules, such as solvents, has hindered further progress in the field. Here we report the synthesis of a metal-organic framework which remains crystalline, as evidenced by X-ray single-crystal analyses, and stable when fully desolvated and when heated up to 300°C. This synthesis is achieved by borrowing ideas from metal carboxylate cluster chemistry, where an organic dicarboxylate linker is used in a reaction that gives supertetrahedron clusters when capped with monocarboxylates. The rigid and divergent character of the added linker allows the articulation of the clusters into a three-dimensional framework resulting in a structure with higher apparent surface area and pore volume than most porous crystalline zeolites. This simple and potentially universal design strategy is currently being pursued in the synthesis of new phases and composites, and for gas-storage applications.

  8. A framework for the Subaru Telescope observation control system based on the command design pattern

    NASA Astrophysics Data System (ADS)

    Jeschke, Eric; Bon, Bruce; Inagaki, Takeshi; Streeper, Sam

    2008-08-01

    Subaru Telescope is developing a second-generation Observation Control System that specifically addresses some of the deficiencies of the current Subaru OCS. One area of concern is better extensibility: the current system uses a custom language for implementing commands with a complex macro processing subsystem written in C. It is laborious to improve the language and awkward for scientists to extend and use standard programming techniques. Our Generation 2 OCS provides a lightweight, object-oriented task framework based on the Command design pattern. The framework provides a base task class that abstracts services for processing status and other common infrastructure activities. Upon this are built and provided a set of "atomic" tasks for telescope and instrument commands. A set of "container" tasks based on common sequential and concurrent command processing paradigms is also included. Since all tasks share the same exact interface, it is straightforward to build up compound tasks by plugging simple tasks into container tasks and container tasks into other containers, and so forth. In this way various advanced astronomical workflows can be readily created, with well controlled behaviors. In addition, since tasks are written in Python, it is easy for astronomers to subclass and extend the standard observatory tasks with their own custom extensions and behaviors, in a high-level, full-featured programming language. In this talk we will provide an overview of the task framework design and present preliminary results on the use of the framework during two separate engineering runs.
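
    The record above only describes the task framework; as a rough illustration of the Command-pattern structure it refers to, the following Python sketch (all class and method names are hypothetical, not the actual Subaru Generation 2 OCS code) shows a base task interface, an atomic command task, and a sequential container task built from the same interface.

```python
# Minimal sketch of a Command-pattern task framework (hypothetical names,
# not the actual Subaru Generation 2 OCS code).
from abc import ABC, abstractmethod


class Task(ABC):
    """Base task: every task exposes the same execute() interface."""

    @abstractmethod
    def execute(self):
        ...


class TelescopeCommand(Task):
    """'Atomic' task wrapping a single low-level command string."""

    def __init__(self, command):
        self.command = command

    def execute(self):
        print(f"sending: {self.command}")


class SequentialTask(Task):
    """'Container' task: runs its child tasks one after another."""

    def __init__(self, *children):
        self.children = children

    def execute(self):
        for child in self.children:
            child.execute()


# Compound workflows are built by plugging tasks into containers
# (and containers into containers), all through the same interface.
night_setup = SequentialTask(
    TelescopeCommand("open_dome"),
    SequentialTask(TelescopeCommand("slew ra=10h dec=+20d"),
                   TelescopeCommand("start_guiding")),
)
night_setup.execute()
```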

  9. Experiences on 64 and 150 FPGA Systems

    SciTech Connect

    Storaasli, Olaf O; Strenski, Dave

    2008-01-01

    Four FPGA systems were evaluated: the Cray XD1 systems with 6 FPGAs at ORNL and Cray, the Cray XD1 system with 150 FPGAs at NRL, and the 64 FPGAs on Edinburgh's Maxwell. Their hardware and software architectures, programming tools and performance on scientific applications are discussed. FPGA speedup (over a 2.2 GHz Opteron) of 10X was typical for matrix equation solution, molecular dynamics and weather/climate codes, and up to 100X for human genome DNA sequencing. Large genome comparisons requiring 12.5 years for an Opteron took less than 24 hours on NRL's Cray XD1 with 150 Virtex FPGAs, for a 7,350X speedup. The sequencing application uses a pipeline so that each query and database character are compared in parallel, resulting in a table of scores. Genome sequencing results: FPGA timing results (for up to 150 FPGAs) were obtained and compared with up to 150 Opterons for sequences of varying size and complexity (e.g., the 4 GB openfpga.org human DNA benchmark, 155M human vs. 166M mouse DNA, and a single-FPGA Bacillus anthracis genome comparison).

  10. FPGA Sequencer for Radar Altimeter Applications

    NASA Technical Reports Server (NTRS)

    Berkun, Andrew C.; Pollard, Brian D.; Chen, Curtis W.

    2011-01-01

    A sequencer for a radar altimeter provides accurate attitude information for a reliable soft landing of the Mars Science Laboratory (MSL). This is a field-programmable-gate-array (FPGA)-only implementation. A table loaded externally into the FPGA controls timing, processing, and decision structures. The radar is memoryless and does not use previous acquisitions to assist in the current acquisition. All cycles complete in exactly 50 milliseconds, regardless of range or whether a target was found. A RAM (random access memory) within the FPGA holds instructions for up to 15 sets. For each set, timing is run, echoes are processed, and a comparison is made. If a target is seen, more detailed processing is run on that set. If no target is seen, the next set is tried. When all sets have been run, the FPGA terminates and waits for the next 50-millisecond event. This setup simplifies testing and improves reliability. A single Virtex chip does the work of an entire assembly. Output products require minor processing to become range and velocity. This technology is the heart of the Terminal Descent Sensor, which is an integral part of the Entry, Descent, and Landing system for MSL. In addition, it is a strong candidate for manned landings on Mars or the Moon.
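
    As a rough software illustration of the table-driven acquisition cycle described above (the real design is pure FPGA logic; all names and parameters below are hypothetical), the following Python sketch steps through up to 15 sets, runs a measurement per set, and switches to detailed processing when a target is detected.

```python
# Software illustration of the table-driven acquisition cycle (hypothetical
# data structures; the actual design is implemented entirely in FPGA logic).
from dataclasses import dataclass


@dataclass
class SetEntry:
    prf_hz: float         # assumed per-set timing parameter
    range_gate_m: float   # assumed coarse range gate to search
    threshold: float      # detection threshold on processed echo power


def run_cycle(sets, measure_echo_power, detailed_processing):
    """One 50 ms cycle: try each set in turn until a target is found."""
    for entry in sets[:15]:                    # the RAM holds up to 15 sets
        power = measure_echo_power(entry)      # run timing, process echoes
        if power > entry.threshold:            # comparison: target seen?
            return detailed_processing(entry)  # more detailed processing on this set
    return None                                # no target; wait for the next cycle


# Example use with stand-in measurement and processing functions.
sets = [SetEntry(1000.0, 8000.0, 0.5), SetEntry(2000.0, 2000.0, 0.5)]
result = run_cycle(sets,
                   measure_echo_power=lambda e: 0.7 if e.range_gate_m < 5000 else 0.1,
                   detailed_processing=lambda e: {"range_m": e.range_gate_m})
print(result)
```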

  11. Testing Microshutter Arrays Using Commercial FPGA Hardware

    NASA Technical Reports Server (NTRS)

    Rapchun, David

    2008-01-01

    NASA is developing micro-shutter arrays for the Near Infrared Spectrometer (NIRSpec) instrument on the James Webb Space Telescope (JWST). These micro-shutter arrays allow NIRSpec to perform Multi-Object Spectroscopy, a key part of the mission. Each array consists of 62414 individual 100 x 200 micron shutters. These shutters are magnetically opened and held electrostatically. Individual shutters are then programmatically closed using a simple row/column addressing technique. A common approach to providing these data/clock patterns is to use a Field Programmable Gate Array (FPGA). Such devices require complex VHSIC Hardware Description Language (VHDL) programming and custom electronic hardware. Because of JWST's rapid development schedule for the micro-shutters, rapid changes to the FPGA code were required to accommodate newly discovered approaches for optimizing array performance. Such rapid changes simply could not be made using conventional VHDL programming. Subsequently, National Instruments introduced an FPGA product that could be programmed through a LabVIEW interface. Because LabVIEW programming is considerably easier than VHDL programming, this method was adopted and brought success. The software/hardware combination allowed rapid changes to the FPGA code and timely delivery of new micro-shutter array performance data. As a result, numerous labor hours and considerable cost were saved on the project.
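
    The row/column addressing idea mentioned above can be illustrated with a short software sketch; the array dimensions, data structures and drive-word format below are assumptions for illustration only, not the flight FPGA/LabVIEW implementation.

```python
# Illustrative row/column addressing for closing selected shutters in a 2-D
# array (assumed dimensions and data formats; not the actual test hardware).
import numpy as np

ROWS, COLS = 171, 365              # assumed array dimensions for illustration


def close_shutters(targets):
    """Build the sequence of (row, column-mask) drive words needed to close
    the shutters listed in `targets`, addressing one row at a time."""
    state = np.ones((ROWS, COLS), dtype=bool)   # True = open (all held open)
    drive_words = []
    for row in range(ROWS):
        cols = [c for r, c in targets if r == row]
        if not cols:
            continue
        mask = np.zeros(COLS, dtype=bool)
        mask[cols] = True
        drive_words.append((row, mask))         # select row, pulse selected columns
        state[row, mask] = False                # those shutters are now closed
    return drive_words, state


words, state = close_shutters([(0, 5), (0, 7), (42, 100)])
print(len(words), "row drive words;", int((~state).sum()), "shutters closed")
```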

  12. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

    This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions, (2) engineering synthesis, and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and the associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here to the evaluation of six different design concepts for a wing spar.

  13. A knowledge-based design framework for airplane conceptual and preliminary design

    NASA Astrophysics Data System (ADS)

    Anemaat, Wilhelmus A. J.

    The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e., the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: due to less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller aircraft, business jets, airliners, and UAVs to fighters. Data for the varied sizing methods will be compared with AAA results to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane

  14. A Strategic Approach to Curriculum Design for Information Literacy in Teacher Education--Implementing an Information Literacy Conceptual Framework

    ERIC Educational Resources Information Center

    Klebansky, Anna; Fraser, Sharon P.

    2013-01-01

    This paper details a conceptual framework that situates curriculum design for information literacy and lifelong learning, through a cohesive developmental information literacy based model for learning, at the core of teacher education courses at UTAS. The implementation of the framework facilitates curriculum design that systematically,…

  15. Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework

    ERIC Educational Resources Information Center

    Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R.

    2014-01-01

    In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…

  16. Gamers as Designers: A Framework for Investigating Design in Gaming Affinity Spaces

    ERIC Educational Resources Information Center

    Duncan, Sean C.

    2010-01-01

    This article addresses recent approaches to uncovering and theorizing the design activities that occur in online gaming affinity spaces. Examples are presented of productive d/Discourse present within online forums around three video game series, video games, or game platforms, and key design practices engaged upon by gamers in these spaces. It is…

  17. A conceptual curriculum framework designed to ensure quality student health visitor training in practice.

    PubMed

    Hollinshead, Jayne; Stirling, Linda

    2014-07-01

    This paper describes the challenges faced by a trust in England following the introduction of the Health Visitor Implementation Plan. Two practice education facilitators designed a conceptual curriculum framework to ensure quality student health visitor education in practice. This curriculum complemented the excellent academic course already delivered by the University. A justification is provided for the design of the curriculum framework, including a rationale for the introduction of specific training sessions. Student and practice teacher feedback demonstrate the success of the introduction of this programme to ensure the development of student health visitors fit for practice. The conclusion places emphasis on the importance of continuous evaluation of the training programme to meet the needs of the students and the service.

  18. A conceptual curriculum framework designed to ensure quality student health visitor training in practice.

    PubMed

    Hollinshead, Jayne; Stirling, Linda

    2014-07-01

    This paper describes the challenges faced by a trust in England following the introduction of the Health Visitor Implementation Plan. Two practice education facilitators designed a conceptual curriculum framework to ensure quality student health visitor education in practice. This curriculum complemented the excellent academic course already delivered by the University. A justification is provided for the design of the curriculum framework, including a rationale for the introduction of specific training sessions. Student and practice teacher feedback demonstrate the success of the introduction of this programme to ensure the development of student health visitors fit for practice. The conclusion places emphasis on the importance of continuous evaluation of the training programme to meet the needs of the students and the service. PMID:25167726

  19. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
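
    A linear-kernel SVM decision rule of the kind referenced above reduces, per pixel, to a handful of multiply-accumulates; the following Python sketch (illustrative weights and reject margin, not the EO-1 flight code) shows the one-vs-rest form of that computation.

```python
# Sketch of a one-vs-rest linear-kernel SVM decision rule applied per pixel
# (illustrative weights and thresholds; not the EO-1 flight code).
import numpy as np

CLASSES = ["snow", "water", "ice", "land", "cloud"]


def classify_pixels(pixels, weights, biases, reject_margin=0.0):
    """pixels: (N, B) band vectors; weights: (C, B); biases: (C,).
    Pick the class with the largest decision value, or 'unclassified'
    when no score clears the margin."""
    scores = pixels @ weights.T + biases            # (N, C) decision values
    best = scores.argmax(axis=1)
    return [CLASSES[b] if scores[i, b] > reject_margin else "unclassified"
            for i, b in enumerate(best)]


rng = np.random.default_rng(0)
pixels = rng.normal(size=(4, 8))                    # 4 pixels, 8 spectral bands
weights = rng.normal(size=(len(CLASSES), 8))
biases = np.zeros(len(CLASSES))
print(classify_pixels(pixels, weights, biases))
```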

  20. Analysing task design and students' responses to context-based problems through different analytical frameworks

    NASA Astrophysics Data System (ADS)

    Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka

    2015-05-01

    Background: Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose: The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample: Upper secondary students (n=236) at the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both more conventional as well as context-based chemistry problems. Design and methods: A typical chemistry examination test has been analysed, first the test items in themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results: The empirical results show that both frameworks are suitable to identify students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items were also assessing lower order thinking. The combination of frameworks with the chemistry syllabus has been

  1. FPGA Based High Speed Data Acquisition System for Electrical Impedance Tomography.

    PubMed

    Khan, S; Borsic, A; Manwaring, Preston; Hartov, Alexander; Halter, Ryan

    2013-03-01

    Electrical Impedance Tomography (EIT) systems are used to image tissue bio-impedance. EIT provides a number of features making it attractive for use as a medical imaging device including the ability to image fast physiological processes (>60 Hz), to meet a range of clinical imaging needs through varying electrode geometries and configurations, to impart only non-ionizing radiation to a patient, and to map the significant electrical property contrasts present between numerous benign and pathological tissues. To leverage these potential advantages for medical imaging, we developed a modular 32 channel data acquisition (DAQ) system using National Instruments' PXI chassis, along with FPGA, ADC, Signal Generator and Timing and Synchronization modules. To achieve high frame rates, signal demodulation and spectral characteristics of higher order harmonics were computed using dedicated FFT-hardware built into the FPGA module. By offloading the computing onto FPGA, we were able to achieve a reduction in throughput required between the FPGA and PC by a factor of 32:1. A custom designed analog front end (AFE) was used to interface electrodes with our system. Our system is wideband, and capable of acquiring data for input signal frequencies ranging from 100 Hz to 12 MHz. The modular design of both the hardware and software will allow this system to be flexibly configured for the particular clinical application.
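
    The FFT-based demodulation step described above can be summarized in a few lines: only the complex bins at the injection frequency and a few harmonics are retained, which is where the reduction in PC-bound throughput comes from. The sketch below is a software illustration with an assumed sample rate, injection frequency and FFT length, not the actual firmware.

```python
# Software illustration of FFT-based demodulation: only the complex bins at
# the injection frequency and a few harmonics leave the FPGA (assumed sample
# rate, drive frequency and FFT length; not the actual firmware).
import numpy as np

FS = 50e6        # assumed ADC sample rate
F0 = 100e3       # assumed injection frequency
N = 4096         # FFT block length


def demodulate(block, n_harmonics=3):
    """Return (amplitude, phase) at the fundamental and its harmonics."""
    spectrum = np.fft.rfft(block * np.hanning(len(block)))
    bins = [int(round(k * F0 * len(block) / FS)) for k in range(1, n_harmonics + 1)]
    vals = spectrum[bins]
    return np.abs(vals), np.angle(vals)


t = np.arange(N) / FS
block = 1.0 * np.sin(2 * np.pi * F0 * t) + 0.05 * np.sin(2 * np.pi * 3 * F0 * t)
amp, phase = demodulate(block)
print(amp.round(2), phase.round(2))   # a handful of numbers instead of 4096 samples
```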

  2. FPGA Based High Speed Data Acquisition System for Electrical Impedance Tomography

    NASA Astrophysics Data System (ADS)

    Khan, S.; Borsic, A.; Manwaring, Preston; Hartov, Alexander; Halter, Ryan

    2013-04-01

    Electrical Impedance Tomography (EIT) systems are used to image tissue bio-impedance. EIT provides a number of features making it attractive for use as a medical imaging device including the ability to image fast physiological processes (>60 Hz), to meet a range of clinical imaging needs through varying electrode geometries and configurations, to impart only non-ionizing radiation to a patient, and to map the significant electrical property contrasts present between numerous benign and pathological tissues. To leverage these potential advantages for medical imaging, we developed a modular 32 channel data acquisition (DAQ) system using National Instruments' PXI chassis, along with FPGA, ADC, Signal Generator and Timing and Synchronization modules. To achieve high frame rates, signal demodulation and spectral characteristics of higher order harmonics were computed using dedicated FFT-hardware built into the FPGA module. By offloading the computing onto FPGA, we were able to achieve a reduction in throughput required between the FPGA and PC by a factor of 32:1. A custom designed analog front end (AFE) was used to interface electrodes with our system. Our system is wideband, and capable of acquiring data for input signal frequencies ranging from 100 Hz to 12 MHz. The modular design of both the hardware and software will allow this system to be flexibly configured for the particular clinical application.

  3. FPGA Based High Speed Data Acquisition System for Electrical Impedance Tomography

    PubMed Central

    Khan, S; Borsic, A; Manwaring, Preston; Hartov, Alexander; Halter, Ryan

    2014-01-01

    Electrical Impedance Tomography (EIT) systems are used to image tissue bio-impedance. EIT provides a number of features making it attractive for use as a medical imaging device including the ability to image fast physiological processes (>60 Hz), to meet a range of clinical imaging needs through varying electrode geometries and configurations, to impart only non-ionizing radiation to a patient, and to map the significant electrical property contrasts present between numerous benign and pathological tissues. To leverage these potential advantages for medical imaging, we developed a modular 32 channel data acquisition (DAQ) system using National Instruments’ PXI chassis, along with FPGA, ADC, Signal Generator and Timing and Synchronization modules. To achieve high frame rates, signal demodulation and spectral characteristics of higher order harmonics were computed using dedicated FFT-hardware built into the FPGA module. By offloading the computing onto FPGA, we were able to achieve a reduction in throughput required between the FPGA and PC by a factor of 32:1. A custom designed analog front end (AFE) was used to interface electrodes with our system. Our system is wideband, and capable of acquiring data for input signal frequencies ranging from 100 Hz to 12 MHz. The modular design of both the hardware and software will allow this system to be flexibly configured for the particular clinical application. PMID:24729790

  4. A framework design for the mHealth system for self-management promotion.

    PubMed

    Jia, Guifeng; Yang, Pan; Zhou, Jie; Zhang, Hengyi; Lin, Chengyu; Chen, Jin; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    Mobile health (mHealth) technology has been proposed to alleviate the lack of sufficient medical resources for personal healthcare. However, usage difficulties and compliance issues relating to this technology restrict the effect of mHealth system-supported self-management. In this study, an mHealth framework is introduced to overcome these drawbacks and improve the outcome of self-management. We implemented a set of ease of use principles in the mHealth design and employed the quantitative Fogg Behavior Model to enhance users' execution ability. The framework was realized in a prototype design for the mHealth system, which consists of medical apparatuses, mobile applications and a health management server. The system is able to monitor the physiological status in an unconstrained manner with simplified operations, while supervising the healthcare plan. The results suggest that the present framework design is accessible for ordinary users and effective in improving users' execution ability in self-management. PMID:26405941

  5. On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.

    PubMed

    Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea

    2016-09-01

    A smart home is a home environment enriched with sensing, actuation, communication and computation capabilities, which permit it to be adapted to inhabitants' preferences and requirements. Establishing a proper strategy of actuation on the home environment can require complex computational tasks on the sensed data. This is the case of activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, which offers basic abstraction entities that make it easy to design and implement smart home applications. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details about how to implement activity recognition on the CASE architecture are supplied, focusing on the low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment. PMID:27468841

  6. On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.

    PubMed

    Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea

    2016-09-01

    A smart home is a home environment enriched with sensing, actuation, communication and computation capabilities, which permit it to be adapted to inhabitants' preferences and requirements. Establishing a proper strategy of actuation on the home environment can require complex computational tasks on the sensed data. This is the case of activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, which offers basic abstraction entities that make it easy to design and implement smart home applications. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details about how to implement activity recognition on the CASE architecture are supplied, focusing on the low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment.

  7. Alternate metal framework designs for the metal ceramic prosthesis to enhance the esthetics

    PubMed Central

    Vernekar, Naina Vilas; Jagadish, Prithviraj Kallahalla; Diwakar, Srinivasan; Nadgir, Ramesh

    2011-01-01

    PURPOSE The objective of the present study was to evaluate the effect of five different metal framework designs on the fracture resistance of metal-ceramic restorations. MATERIALS AND METHODS For the purpose of this study, a central incisor tooth was prepared, and its metal analogue and a master die were fabricated. A counter die with 0.5 mm clearance was used for fabricating the wax patterns for the metal copings. Metal copings with five different framework designs were assigned to Groups 1 to 5: Group 1 with a metal collar, and Groups 2, 3, 4 and 5 with 0 mm, 0.5 mm, 1 mm and 1.5 mm of cervical metal reduction, respectively. A total of fifty metal-ceramic crown samples were fabricated. Fracture resistance was evaluated with a Universal Testing Machine (Instron model No 1011, UK). The basic data were subjected to statistical analysis by ANOVA and Student's t-test. RESULTS Results revealed that the fracture resistance ranged from 651.2 to 993.6 N/m2. Group 1 showed the maximum and Group 5 showed the least value. CONCLUSION The maximum load required to fracture the test specimens, even in the groups without the metal collar, was found to exceed the occlusal forces. Therefore, metal frameworks ending 0.5 mm and 1 mm short of the finish line are recommended for anterior metal-ceramic restorations as having adequate fracture resistance. PMID:22053240

  8. An efficient and flexible web services-based multidisciplinary design optimisation framework for complex engineering systems

    NASA Astrophysics Data System (ADS)

    Li, Liansheng; Liu, Jihong

    2012-08-01

    Multidisciplinary design optimisation (MDO) involves multiple disciplines, multiple coupled relationships and multiple processes, and is implemented by different specialists dispersed geographically on heterogeneous platforms with different analysis and optimisation tools. The difficulty of integrating and sharing product design data among the participants seriously hampers the development and application of MDO in enterprises. Therefore, a multi-hierarchical integrated product design data model (MH-iPDM) supporting MDO in the web environment and a web services-based multidisciplinary design optimisation (Web-MDO) framework are proposed in this article. Based on enabling technologies including web services, ontology, workflow, agents, XML and evidence theory, the proposed framework enables designers who are geographically dispersed to work collaboratively in the MDO environment. The ontology-based workflow enables the logical reasoning of MDO to be processed dynamically. The evidence theory-based uncertainty reasoning and analysis supports the quantification, aggregation and analysis of conflicting epistemic uncertainty from multiple sources, which improves the quality of the product. Finally, a proof-of-concept prototype system is developed using J2EE, and an example of a supersonic business jet is demonstrated to verify the autonomous execution of MDO strategies and the effectiveness of the proposed approach.

  9. FPGA Implementation of Metastability-Based True Random Number Generator

    NASA Astrophysics Data System (ADS)

    Hata, Hisashi; Ichikawa, Shuichi

    True random number generators (TRNGs) are important as a basis for computer security. Though some TRNGs are composed of analog circuits, the use of digital circuits is desired for the application of TRNGs to logic LSIs. Some digital TRNGs utilize jitter in free-running ring oscillators as a source of entropy, an approach that consumes considerable power. Another type of TRNG exploits the metastability of a latch to generate entropy. Although this kind of TRNG has mostly been implemented with full-custom LSI technology, this study presents an implementation based on common FPGA technology. Our TRNG is composed of logic gates only, and can be integrated in any kind of logic LSI. The RS latch in our TRNG is implemented as a hard macro to guarantee the quality of randomness by minimizing the signal skew and load imbalance of internal nodes. To improve the quality and throughput, the outputs of 64-256 latches are XORed. The derived design was verified on a Xilinx Virtex-4 FPGA (XC4VFX20), and passed the NIST statistical test suite without post-processing. Our TRNG with 256 latches occupies 580 slices, while achieving 12.5 Mbps throughput.
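
    The XOR post-processing mentioned above works because combining many slightly biased, independent bits drives the overall bias toward zero; the following Python simulation (an illustration only, with an assumed latch bias) shows the effect.

```python
# Simulation of XOR post-processing: combining many slightly biased,
# independent latch outputs drives the overall bias toward zero
# (software illustration only, with an assumed latch bias).
import random


def biased_latch(p_one=0.55):
    """Model one metastability-resolved latch that is slightly biased toward 1."""
    return 1 if random.random() < p_one else 0


def trng_bit(n_latches=256):
    """XOR the outputs of n_latches latches into a single output bit."""
    bit = 0
    for _ in range(n_latches):
        bit ^= biased_latch()
    return bit


bits = [trng_bit() for _ in range(10000)]
print("fraction of ones:", sum(bits) / len(bits))   # close to 0.5 despite the latch bias
```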

  10. Research on defogging technology of video image based on FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Shuo; Piao, Yan

    2015-03-01

    Because of scattering by atmospheric particles, video captured by outdoor surveillance systems has low contrast and brightness, which directly affects the application value of the system. Traditional defogging techniques are mostly implemented in software and operate on single frames; the algorithms involve heavy computation and have high time complexity. Defogging of video images based on a Digital Signal Processor (DSP) suffers from complex peripheral circuitry, cannot be realized in real time, and is hard to debug and upgrade. In this paper, using an improved dark channel prior algorithm, we propose a video-image defogging technique based on a Field Programmable Gate Array (FPGA). Compared to traditional defogging methods, high-resolution video can be processed in real time. Furthermore, the function modules of the system have been designed in a hardware description language. The results show that the FPGA-based defogging system can process video with a minimum resolution of 640×480 in real time. After defogging, the brightness and contrast of the video are improved effectively. Therefore, the defogging technique proposed in this paper has a great variety of applications, including aviation, forest fire prevention, national security and other important surveillance scenarios.
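
    For reference, the basic (software, floating-point) dark channel prior pipeline that the improved FPGA design builds on can be sketched as follows; this is a generic textbook version with assumed parameters, not the hardware implementation described in the record.

```python
# Generic (software, floating-point) dark channel prior dehazing for a single
# frame; parameters are typical textbook values, and this is not the
# improved real-time FPGA implementation described in the record.
import numpy as np
from scipy.ndimage import minimum_filter


def dark_channel(img, patch=15):
    """Per-pixel minimum over colour channels and a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)


def dehaze(img, omega=0.95, t0=0.1, patch=15):
    dark = dark_channel(img, patch)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, then scene radiance recovery.
    t = np.clip(1.0 - omega * dark_channel(img / A, patch), t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)


hazy = np.random.rand(480, 640, 3).astype(np.float32)   # stand-in video frame
print(dehaze(hazy).shape)
```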

  11. Embedded algorithms within an FPGA-based system to process nonlinear time series data

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.

    2008-03-01

    This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast-growing FPGA technology offers a more powerful, efficient, and flexible hardware platform, including on-site (field-programmable) reconfiguration capability of the hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than manually coding in a hardware description language (HDL). The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the timing of the hardware execution cycle of the design, resource consumption, approximation accuracy, and the limited flexibility of input data types imposed by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better
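
    A floating-point reference for the Hilbert-transform step described above is sketched below: the analytic signal yields the instantaneous amplitude and frequency of a nonstationary record. This is an illustration of the computation only; the paper's contribution is embedding a fixed-point version of it in the FPGA.

```python
# Floating-point reference for the Hilbert-transform step: the analytic
# signal gives instantaneous amplitude and frequency of a nonstationary
# record (illustration only; the paper embeds a fixed-point version in FPGA).
import numpy as np


def analytic_signal(x):
    """FFT-based Hilbert transform: zero negative frequencies, double positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)


fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * (20 * t + 15 * t**2))     # chirp sweeping 20 Hz -> 50 Hz
z = analytic_signal(x)
inst_amp = np.abs(z)
inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
print(inst_amp[:3].round(3), inst_freq[100:103].round(1))
```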

  12. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
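
    The POD-based order reduction referred to above amounts to taking an SVD of a snapshot matrix of candidate designs and keeping the leading modes as a reduced basis; the following generic Python sketch (not the M3DOE code) illustrates that step.

```python
# Generic POD step: extract the dominant modes of an ensemble of candidate
# designs with an SVD and use them as a reduced basis (illustration only,
# not the M3DOE implementation).
import numpy as np


def pod_basis(snapshots, energy=0.99):
    """snapshots: (n_dof, n_samples), one candidate design per column.
    Returns the mean and the leading modes capturing `energy` of the variance."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    return mean, U[:, :k]


rng = np.random.default_rng(1)
ensemble = rng.normal(size=(200, 30))            # 30 candidate designs, 200 variables each
mean, modes = pod_basis(ensemble)
coeffs = modes.T @ (ensemble[:, :1] - mean)      # project one candidate onto the reduced basis
print(modes.shape, coeffs.shape)
```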

  13. Covalent organic frameworks: a materials platform for structural and functional designs

    NASA Astrophysics Data System (ADS)

    Huang, Ning; Wang, Ping; Jiang, Donglin

    2016-10-01

    Covalent organic frameworks (COFs) are a class of crystalline porous polymer that allows the atomically precise integration of organic units into extended structures with periodic skeletons and ordered nanopores. One important feature of COFs is that they are designable; that is, the geometry and dimensions of the building blocks can be controlled to direct the topological evolution of structural periodicity. The diversity of building blocks and covalent linkage topology schemes make COFs an emerging materials platform for structural control and functional design. Indeed, COF architectures offer confined molecular spaces for the interplay of photons, excitons, electrons, holes, ions and guest molecules, thereby exhibiting unique properties and functions. In this Review, we summarize the major progress in the field of COFs and recent achievements in developing new design principles and synthetic strategies. We highlight cutting-edge functional designs and identify fundamental issues that need to be addressed in conjunction with future research directions from chemistry, physics and materials perspectives.

  14. A Markovian state-space framework for integrating flexibility into space system design decisions

    NASA Astrophysics Data System (ADS)

    Lafleur, Jarret M.

    The past decades have seen the state of the art in aerospace system design progress from a scope of simple optimization to one including robustness, with the objective of permitting a single system to perform well even in off-nominal future environments. Integrating flexibility, or the capability to easily modify a system after it has been fielded in response to changing environments, into system design represents a further step forward. One challenge in accomplishing this rests in that the decision-maker must consider not only the present system design decision, but also sequential future design and operation decisions. Despite extensive interest in the topic, the state of the art in designing flexibility into aerospace systems, and particularly space systems, tends to be limited to analyses that are qualitative, deterministic, single-objective, and/or limited to consider a single future time period. To address these gaps, this thesis develops a stochastic, multi-objective, and multi-period framework for integrating flexibility into space system design decisions. Central to the framework are five steps. First, system configuration options are identified and costs of switching from one configuration to another are compiled into a cost transition matrix. Second, probabilities that demand on the system will transition from one mission to another are compiled into a mission demand Markov chain. Third, one performance matrix for each design objective is populated to describe how well the identified system configurations perform in each of the identified mission demand environments. The fourth step employs multi-period decision analysis techniques, including Markov decision processes from the field of operations research, to find efficient paths and policies a decision-maker may follow. The final step examines the implications of these paths and policies for the primary goal of informing initial system selection. Overall, this thesis unifies state-centric concepts of
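
    The framework's fourth step can be illustrated with a toy value-iteration example: given a configuration switching-cost matrix, a mission-demand Markov chain and per-configuration performance, the preferred switching policy over (configuration, mission) states follows from a standard Markov decision process solve. The numbers below are invented for illustration; only the structure mirrors the description above.

```python
# Toy value-iteration over (configuration, mission) states, combining a
# switching-cost matrix, a mission-demand Markov chain and per-configuration
# performance (all numbers invented for illustration).
import numpy as np

switch_cost = np.array([[0.0, 4.0],        # cost of switching configuration c -> c2
                        [4.0, 0.0]])
demand_P = np.array([[0.8, 0.2],           # mission-demand Markov chain P(m -> m')
                     [0.3, 0.7]])
performance = np.array([[10.0, 2.0],       # performance[c, m]: reward of config c in mission m
                        [3.0, 9.0]])
gamma = 0.9                                 # discount factor per decision period

n_cfg, n_mis = performance.shape
V = np.zeros((n_cfg, n_mis))                # value of each (configuration, mission) state
for _ in range(200):                        # value iteration
    V_new = np.empty_like(V)
    for c in range(n_cfg):
        for m in range(n_mis):
            V_new[c, m] = max(-switch_cost[c, c2] + performance[c2, m]
                              + gamma * demand_P[m] @ V[c2]
                              for c2 in range(n_cfg))
    V = V_new

# Greedy policy: preferred next configuration for each (configuration, mission) state.
policy = [[int(np.argmax([-switch_cost[c, c2] + performance[c2, m]
                          + gamma * demand_P[m] @ V[c2] for c2 in range(n_cfg)]))
           for m in range(n_mis)] for c in range(n_cfg)]
print(policy)
```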

  15. A framework for designing and analyzing binary decision-making strategies in cellular systems†

    PubMed Central

    Porter, Joshua R.; Andrews, Burton W.; Iglesias, Pablo A.

    2015-01-01

    Cells make many binary (all-or-nothing) decisions based on noisy signals gathered from their environment and processed through noisy decision-making pathways. Reducing the effect of noise to improve the fidelity of decision-making comes at the expense of increased complexity, creating a tradeoff between performance and metabolic cost. We present a framework based on rate distortion theory, a branch of information theory, to quantify this tradeoff and design binary decision-making strategies that balance low cost and accuracy in optimal ways. With this framework, we show that several observed behaviors of binary decision-making systems, including random strategies, hysteresis, and irreversibility, are optimal in an information-theoretic sense for various situations. This framework can also be used to quantify the goals around which a decision-making system is optimized and to evaluate the optimality of cellular decision-making systems by a fundamental information-theoretic criterion. As proof of concept, we use the framework to quantify the goals of the externally triggered apoptosis pathway. PMID:22370552
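
    One concrete way to compute points on the rate-distortion tradeoff described above is the standard Blahut-Arimoto iteration; the sketch below applies it to a binary state and binary decision with a toy distortion matrix (a generic textbook algorithm, not the authors' code).

```python
# Blahut-Arimoto iteration for a binary state and binary decision: finds the
# randomized decision rule trading information rate against expected
# decision cost (generic textbook algorithm, not the authors' code).
import numpy as np

p_x = np.array([0.5, 0.5])                 # prior over the true state
d = np.array([[0.0, 1.0],                  # d[x, y]: cost of deciding y when the state is x
              [1.0, 0.0]])
beta = 4.0                                  # Lagrange multiplier: rate vs. distortion tradeoff

q_y = np.array([0.5, 0.5])                  # marginal over decisions, refined iteratively
for _ in range(200):
    w = q_y[None, :] * np.exp(-beta * d)    # unnormalized optimal decision rule
    p_y_given_x = w / w.sum(axis=1, keepdims=True)
    q_y = p_x @ p_y_given_x                 # updated marginal over decisions

rate = np.sum(p_x[:, None] * p_y_given_x * np.log2(p_y_given_x / q_y[None, :]))
distortion = np.sum(p_x[:, None] * p_y_given_x * d)
print(round(rate, 3), "bits per decision;", round(distortion, 3), "expected cost")
```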

  16. A framework for evaluating and designing citizen science programs for natural resources monitoring.

    PubMed

    Chase, Sarah K; Levine, Arielle

    2016-06-01

    We present a framework of resource characteristics critical to the design and assessment of citizen science programs that monitor natural resources. To develop the framework we reviewed 52 citizen science programs that monitored a wide range of resources and provided insights into what resource characteristics are most conducive to developing citizen science programs and how resource characteristics may constrain the use or growth of these programs. We focused on 4 types of resource characteristics: biophysical and geographical, management and monitoring, public awareness and knowledge, and social and cultural characteristics. We applied the framework to 2 programs, the Tucson (U.S.A.) Bird Count and the Maui (U.S.A.) Great Whale Count. We found that resource characteristics such as accessibility, diverse institutional involvement in resource management, and social or cultural importance of the resource affected program endurance and success. However, the relative influence of each characteristic was in turn affected by goals of the citizen science programs. Although the goals of public engagement and education sometimes complemented the goal of collecting reliable data, in many cases trade-offs must be made between these 2 goals. Program goals and priorities ultimately dictate the design of citizen science programs, but for a program to endure and successfully meet its goals, program managers must consider the diverse ways that the nature of the resource being monitored influences public participation in monitoring.

  17. A design of multiagent-based framework for volume image construction and analysis.

    PubMed

    Faheem, Hossam M

    2005-01-01

    This paper describes the design of a multiagent-based system that can be used to manage the acquisition and analysis of ultrasonograph images. The major concept is to design a management framework consisting of multiple intelligent agents that efficiently directs the ultrasonograph image acquisition and analysis operations, carried out using a high-speed bit-parallel architecture, and that allows the construction of 3D images from 2D ones. Volume image operations need reactivity, autonomy, and intelligence of software. Therefore, agents can play an important role in enhancing the overall operation of medical image analysis. The system suggests a set of image analysis operations including smoothing, noise removal, and enhancement techniques. These operations will be implemented using parallel processing architectures, while the management framework will consist of different agent types such as simple reflex agents, agents that keep track of the world, goal-based agents, and utility-based agents. These agents interact with each other and exchange data among themselves in order to achieve a comprehensive speed in performing the volume image construction operations. Guided by the fact that an agent consists of a program and an architecture, the system deploys parallel processing architectures to implement the image analysis operations. The system is considered a step towards a complete multiagent-based framework for medical image acquisition and analysis.

  18. A framework for evaluating and designing citizen science programs for natural resources monitoring.

    PubMed

    Chase, Sarah K; Levine, Arielle

    2016-06-01

    We present a framework of resource characteristics critical to the design and assessment of citizen science programs that monitor natural resources. To develop the framework we reviewed 52 citizen science programs that monitored a wide range of resources and provided insights into what resource characteristics are most conducive to developing citizen science programs and how resource characteristics may constrain the use or growth of these programs. We focused on 4 types of resource characteristics: biophysical and geographical, management and monitoring, public awareness and knowledge, and social and cultural characteristics. We applied the framework to 2 programs, the Tucson (U.S.A.) Bird Count and the Maui (U.S.A.) Great Whale Count. We found that resource characteristics such as accessibility, diverse institutional involvement in resource management, and social or cultural importance of the resource affected program endurance and success. However, the relative influence of each characteristic was in turn affected by goals of the citizen science programs. Although the goals of public engagement and education sometimes complemented the goal of collecting reliable data, in many cases trade-offs must be made between these 2 goals. Program goals and priorities ultimately dictate the design of citizen science programs, but for a program to endure and successfully meet its goals, program managers must consider the diverse ways that the nature of the resource being monitored influences public participation in monitoring. PMID:27111860

  19. Development of a FPGA&DSP-based experimental GNSS receiver platform

    NASA Astrophysics Data System (ADS)

    Hu, Yongkang; Zhang, Qishan; Yang, Dongkai

    2009-12-01

    To meet the flexible requirements of GNSS receiver design, developing such a universal platform has grown in importance. In this light, this paper introduces work on the realization of an experimental FPGA&DSP-based GNSS receiver platform that follows the Software Defined Radio (SDR) philosophy. Starting from the justification for the hardware devices selected, the paper first describes the overall structure. Then the design of the baseband channel based on the FPGA is presented, including the design of an individual baseband channel and the design of local code generation. The realization of signal acquisition and tracking in the DSP is introduced. Finally, a signal tracking experiment is presented, which verified the usability of the platform.
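
    The acquisition stage mentioned above is essentially a correlation of the received samples against a locally generated replica code over all code phases; the following Python sketch uses FFT-based circular correlation with a random stand-in PRN (not a real GPS Gold code, and not the authors' DSP implementation).

```python
# Correlation-based acquisition with a stand-in PRN code: circular
# correlation via FFT recovers the code-phase delay (generic illustration,
# not a real GPS Gold code and not the authors' DSP implementation).
import numpy as np

rng = np.random.default_rng(3)
prn = rng.choice([-1.0, 1.0], size=1023)           # stand-in spreading code


def acquire(samples, code):
    """Return the code-phase offset with the largest circular correlation."""
    corr = np.fft.ifft(np.fft.fft(samples) * np.conj(np.fft.fft(code)))
    return int(np.argmax(np.abs(corr)))


received = np.roll(prn, 217) + 0.5 * rng.normal(size=prn.size)   # delayed code + noise
print(acquire(received, prn))                      # recovers the 217-chip delay
```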

  20. FPGA-based Trigger System for the Fermilab SeaQuest Experiment

    SciTech Connect

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-09-10

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ− produced in 120 GeV/c proton–nucleon interactions in a high-rate environment. The trigger system we used consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 channels of hodoscope are digitized by each FPGA with a 1-ns resolution using time-to-digital converter (TDC) firmware. The delay of the TDC output can be adjusted channel-by-channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Finally, information on the candidate tracks is sent to the second-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.
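
    The trigger-matrix lookup described above can be pictured as checking the event's hodoscope hit pattern against a set of pre-computed "roads"; the Python sketch below (hypothetical plane names, paddle numbers and road list) illustrates the matching logic that the FPGA performs in parallel.

```python
# Matching a hodoscope hit pattern against pre-computed "roads" (hypothetical
# plane names, paddle numbers and road list; the FPGA checks all roads in parallel).
trigger_matrix = [
    ("H1", 12, "H2", 13, "H3", 15, "H4", 16),
    ("H1", 40, "H2", 41, "H3", 43, "H4", 44),
]


def candidate_tracks(hits):
    """hits: dict mapping hodoscope plane name -> set of fired paddle numbers."""
    found = []
    for road in trigger_matrix:
        planes, paddles = road[0::2], road[1::2]
        if all(p in hits.get(plane, set()) for plane, p in zip(planes, paddles)):
            found.append(road)
    return found


event = {"H1": {12, 40}, "H2": {13}, "H3": {15}, "H4": {16, 90}}
print(candidate_tracks(event))    # the first road matches -> candidate muon track
```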

  1. Remote monitoring and fault recovery for FPGA-based field controllers of telescope and instruments

    NASA Astrophysics Data System (ADS)

    Zhu, Yuhua; Zhu, Dan; Wang, Jianing

    2012-09-01

    With their increasing size and growing number of functions, modern telescopes widely use a control architecture consisting of a central control unit plus field controllers. An FPGA-based field controller has the advantage of being field programmable, which provides great convenience for modifying the software and hardware of the control system, and it offers a good platform for implementing new control schemes. Because there are many controlled nodes operating in poor working environments at scattered locations, the reliability and stability of the field controllers must be fully considered. This paper mainly describes how we use FPGA-based field controllers and remote Ethernet access to construct a monitoring system with multiple nodes. When a failure appears, the FPGA chip first performs self-recovery in accordance with pre-defined recovery strategies. If the chip cannot be restored, remote reconstruction of the field controller can be carried out over the network. This paper also introduces the solutions for network-based remote reconstruction of the controller, the system structure and transport protocol, as well as the implementation methods. The idea of the hardware and software design is given based on the FPGA. After actual operation on large telescopes, the desired results have been achieved. The improvement increases system reliability and reduces the maintenance workload, showing good prospects for application and popularization.

  2. The P0 feedback control system blurs the line between IOC and FPGA.

    SciTech Connect

    DiMonte, N.; APS Engineering Support Division

    2008-01-01

    The P0 feedback system is a new design at the Advanced Photon Source (APS) primarily intended to stabilize a single bunch in order to operate at a higher accumulated charge. The algorithm for this project required a high-speed DSP solution for a single channel that would make adjustments on a turn-by-turn basis. A field-programmable gate array (FPGA) solution was selected that not only met the requirements of the project but far exceeded them. By using a single FPGA, we were able to adjust up to 324 bunches on two separate channels with a total computational rate of approximately 6 × 10^9 multiply-accumulate operations per second. The IOC is a Coldfire CPU tightly coupled to the FPGA, providing dedicated control and monitoring of the system through EPICS [1] process variables. One of the benefits of this configuration is having a four-channel scope in the FPGA that can be monitored on a continuous basis.

  3. Fine-grained parallelism accelerating for RNA secondary structure prediction with pseudoknots based on FPGA.

    PubMed

    Xia, Fei; Jin, Guoqing

    2014-06-01

    PKNOTS is one of the best-known benchmark programs and has been widely used to predict RNA secondary structure including pseudoknots. It adopts the standard four-dimensional (4D) dynamic programming (DP) method and is the basis of many variants and improved algorithms. Unfortunately, the O(N^6) computing requirements and complicated data dependencies greatly limit the usefulness of the PKNOTS package as gene database sizes explode. In this paper, we present a fine-grained parallel PKNOTS package and prototype system for accelerating the RNA folding application based on an FPGA chip. We adopted a series of storage optimization strategies to resolve the "Memory Wall" problem and aggressively exploited parallel computing strategies to improve computational efficiency. We also propose several methods that collectively reduce the storage requirements for FPGA on-chip memory. To the best of our knowledge, our design is the first FPGA implementation to accelerate the 4D DP problem for RNA folding including pseudoknots. The experimental results show an average speedup of more than 50x over the PKNOTS-1.08 software running on a PC platform with an Intel Core2 Q9400 quad-core CPU for input RNA sequences, while the power consumption of our FPGA accelerator is only about 50% of that of the general-purpose microprocessors.

  4. A novel FPGA-based bunch purity monitor system at the APS storage ring.

    SciTech Connect

    Norum, W. E.; APS Engineering Support Division

    2008-01-01

    Bunch purity is an important source quality factor for the magnetic resonance experiments at the Advanced Photon Source. Conventional bunch-purity monitors utilizing time-to-amplitude converters are subject to dead time. We present a novel design based on a single field-programmable gate array (FPGA) that continuously processes pulses at the full speed of the detector and front-end electronics. The FPGA provides 7778 single-channel analyzers (six per rf bucket). The starting time and width of each single-channel analyzer window can be set to a resolution of 178 ps. A detector pulse arriving inside the window of a single-channel analyzer is recorded in an associated 32-bit counter. The analyzer makes no contribution to the system dead time. Two channels for each rf bucket count pulses originating from the electrons in the bucket. The other four channels on the early and late side of the bucket provide estimates of the background. A single-chip microcontroller attached to the FPGA acts as an EPICS IOC to make the information in the FPGA available to the EPICS clients.
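
    The window-counting idea can be illustrated with a small software model. In the sketch below, the window positions, tick values, and pulse arrival times are invented for illustration; only the 178 ps granularity and the six-analyzers-per-bucket arrangement are taken from the abstract.

```python
# Illustrative model (not the FPGA firmware) of the single-channel analyzers:
# each analyzer counts detector pulses whose arrival time falls inside a
# programmable window defined with 178 ps granularity.

RESOLUTION_PS = 178  # window start/width granularity quoted in the abstract

class SingleChannelAnalyzer:
    def __init__(self, start_ticks, width_ticks):
        self.start = start_ticks * RESOLUTION_PS
        self.stop = (start_ticks + width_ticks) * RESOLUTION_PS
        self.count = 0  # 32-bit counter in the real design

    def process(self, pulse_time_ps):
        if self.start <= pulse_time_ps < self.stop:
            self.count += 1

# Six analyzers per rf bucket: two on the bucket, four sampling the background.
analyzers = [SingleChannelAnalyzer(start, width_ticks=4)
             for start in (0, 4, 8, 12, 16, 20)]
for t in (750, 1500, 2900):          # example pulse arrival times in ps
    for sca in analyzers:
        sca.process(t)
print([sca.count for sca in analyzers])
```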

  5. FPGA-based Trigger System for the Fermilab SeaQuest Experiment

    DOE PAGES

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-09-10

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ− produced in 120 GeV/c proton–nucleon interactions in a high-rate environment. The trigger system we used consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 hodoscope channels are digitized by each FPGA with 1-ns resolution using the time-to-digital converter (TDC) firmware. The delay of the TDC output can be adjusted channel-by-channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Finally, information on the candidate tracks is sent to the second-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.

  6. Luminance uniformity compensation for OLED panels based on FPGA

    NASA Astrophysics Data System (ADS)

    Ou, Peng; Yang, Gang; Jiang, Quan; Yu, Jun-Sheng; Wu, Qi-Peng; Shang, Fu-Hai; Yin, Wei; Wang, Jun; Zhong, Jian; Luo, Kai-Jun

    2009-09-01

    Aiming at the problem of luminance uniformity in organic light-emitting diode (OLED) panels, a new brightness calculation method based on bilinear interpolation is proposed. The irradiance time needed for each pixel to reach the same luminance is computed in Matlab. Using a 64×32-pixel, single-color, passive-matrix OLED panel as the test panel for luminance-uniformity adjustment, a new FPGA-based compensation circuit is designed. VHDL is used to program each pixel's irradiance time within one frame period. The irradiance brightness is controlled by changing the irradiance time, and luminance compensation of the panel is thereby realized. The simulation results indicate that the design is reasonable.
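
    A minimal sketch of the bilinear-interpolation step is given below. The corner luminance values, PWM tick count, and scaling rule are assumptions made for illustration; only the 64×32 panel size comes from the abstract.

```python
# Rough sketch of the bilinear-interpolation idea: luminance measured at a few
# reference points is interpolated to every pixel, and the per-pixel irradiance
# time is scaled so dimmer pixels are driven longer within one frame period.

def bilinear(q11, q21, q12, q22, x, y):
    """Interpolate at fractional position (x, y) in [0, 1] x [0, 1]."""
    return (q11 * (1 - x) * (1 - y) + q21 * x * (1 - y)
            + q12 * (1 - x) * y + q22 * x * y)

W, H = 64, 32
# Hypothetical measured luminance at the four panel corners (relative units).
corners = {"tl": 1.00, "tr": 0.92, "bl": 0.88, "br": 0.95}

target = min(corners.values())           # compensate down to the dimmest corner
FRAME_TICKS = 256                        # assumed PWM steps per frame period
irradiance_time = [[0] * W for _ in range(H)]

for row in range(H):
    for col in range(W):
        lum = bilinear(corners["tl"], corners["tr"], corners["bl"], corners["br"],
                       col / (W - 1), row / (H - 1))
        irradiance_time[row][col] = round(FRAME_TICKS * target / lum)
```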

  7. Design and architecture of the Mars relay network planning and analysis framework

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Lee, C. H.

    2002-01-01

    In this paper we describe the design and architecture of the Mars Network planning and analysis framework that supports the generation and validation of efficient planning and scheduling strategies. The goals are to minimize the transmission time, minimize the delay time, and/or maximize the network throughput. The proposed framework requires (1) a client-server architecture to support interactive, batch, web, and distributed analysis and planning applications for the relay network analysis scheme; (2) a high-fidelity modeling and simulation environment that expresses link capabilities between spacecraft and between spacecraft and Earth stations as time-varying resources, and spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints; and (3) an optimization methodology that casts the resource and constraint models into a standard linear or nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.

  8. Design of additive quantum codes via the code-word-stabilized framework

    SciTech Connect

    Kovalev, Alexey A.; Pryadko, Leonid P.; Dumer, Ilya

    2011-12-15

    We consider design of the quantum stabilizer codes via a two-step, low-complexity approach based on the framework of codeword-stabilized (CWS) codes. In this framework, each quantum CWS code can be specified by a graph and a binary code. For codes that can be obtained from a given graph, we give several upper bounds on the distance of a generic (additive or nonadditive) CWS code, and the lower Gilbert-Varshamov bound for the existence of additive CWS codes. We also consider additive cyclic CWS codes and show that these codes correspond to a previously unexplored class of single-generator cyclic stabilizer codes. We present several families of simple stabilizer codes with relatively good parameters.

  9. Molecular docking sites designed for the generation of highly crystalline covalent organic frameworks

    NASA Astrophysics Data System (ADS)

    Ascherl, Laura; Sick, Torben; Margraf, Johannes T.; Lapidus, Saul H.; Calik, Mona; Hettstedt, Christina; Karaghiosoff, Konstantin; Döblinger, Markus; Clark, Timothy; Chapman, Karena W.; Auras, Florian; Bein, Thomas

    2016-04-01

    Covalent organic frameworks (COFs) formed by connecting multidentate organic building blocks through covalent bonds provide a platform for designing multifunctional porous materials with atomic precision. As they are promising materials for applications in optoelectronics, they would benefit from a maximum degree of long-range order within the framework, which has remained a major challenge. We have developed a synthetic concept to allow consecutive COF sheets to lock in position during crystal growth, and thus minimize the occurrence of stacking faults and dislocations. Hereby, the three-dimensional conformation of propeller-shaped molecular building units was used to generate well-defined periodic docking sites, which guided the attachment of successive building blocks that, in turn, promoted long-range order during COF formation. This approach enables us to achieve a very high crystallinity for a series of COFs that comprise tri- and tetradentate central building blocks. We expect this strategy to be transferable to a broad range of customized COFs.

  10. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation.

    PubMed

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

    Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the real-time behavior of the FPGA system using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications.

  11. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 × 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
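
    For reference, a plain software version of the classic two-pass labeling scheme that the FPGA design builds on is sketched below (4-connectivity, union-find). The streaming, low-memory modifications described in the article are not reproduced here.

```python
# Minimal software sketch of two-pass connected component labeling
# (4-connectivity). Provisional labels are assigned in a raster scan,
# equivalences are merged with union-find, and a second pass resolves them.

def two_pass_ccl(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                       # union-find table, index 0 = background

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    next_label = 1
    for y in range(h):                 # first pass: provisional labels + merges
        for x in range(w):
            if not img[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if left == 0 and up == 0:
                parent.append(next_label)
                labels[y][x] = next_label
                next_label += 1
            else:
                labels[y][x] = min(l for l in (left, up) if l)
                if left and up and find(left) != find(up):
                    parent[max(find(left), find(up))] = min(find(left), find(up))

    for y in range(h):                 # second pass: resolve equivalences
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

print(two_pass_ccl([[1, 1, 0, 1],
                    [0, 1, 0, 1],
                    [0, 0, 0, 1]]))   # -> two components, labels 1 and 2
```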

  12. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation.

    PubMed

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

    Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the real-time behavior of the FPGA system using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844

  13. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation

    PubMed Central

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

    Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the real-time behavior of the FPGA system using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844

  14. Metal-organic Frameworks as A Tunable Platform for Designing Functional Molecular Materials

    PubMed Central

    Wang, Cheng; Liu, Demin

    2013-01-01

    Metal-organic frameworks (MOFs), also known as coordination polymers, represent an interesting class of crystalline molecular materials that are synthesized by combining metal-connecting points and bridging ligands. The modular nature of and mild conditions for MOF synthesis have permitted the rational structural design of numerous MOFs and the incorporation of various functionalities via constituent building blocks. The resulting designer MOFs have shown promise for applications in a number of areas, including gas storage/separation, nonlinear optics/ferroelectricity, catalysis, energy conversion/storage, chemical sensing, biomedical imaging, and drug delivery. The structure-property relationships of MOFs can also be readily established by taking advantage of the knowledge of their detailed atomic structures, which enables fine-tuning of their functionalities for desired applications. Through the combination of molecular synthesis and crystal engineering MOFs thus present an unprecedented opportunity for the rational and precise design of functional materials. PMID:23944646

  15. Design framework for a simple robotic ankle evaluation and rehabilitation device.

    PubMed

    Syrseloudis, Christos E; Emiris, Ioannis Z; Maganaris, Constantinos N; Lilas, Theodoros E

    2008-01-01

    This paper juxtaposes simple yet sufficiently general robotic mechanisms for ankle function evaluation, measurement, and physiotherapy. For the choice, design, and operation of the mechanism, a kinematics model of the foot is adopted from biomechanics, based on the hypothesis that foot kinematics are similar to those of a 2R serial robot. We undertake experiments using a 3D scanner and an inertial sensor in order to fully specify the design framework by studying a larger sample of healthy subjects. Our experimental analysis confirms and enhances the 2R foot model and leads us to the choice of the specific mechanism. We compute the required workspace and thus address the issues required for a complete and efficient design. The robot must be capable of performing several multi-axis motions and sustaining a significant range of forces and torques. We compare mechanisms based on serial and parallel robots, and choose a parallel tripod with an extra rotation axis for its simplicity, accuracy, and generality.
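
    To make the 2R serial-chain hypothesis concrete, a planar forward-kinematics sketch is given below; the link lengths and joint angles are invented values, and the real ankle and subtalar axes are skewed in 3D rather than coplanar as in this simplification.

```python
# Planar forward kinematics of a 2R serial chain, the simplest numerical
# illustration of the two-revolute-joint foot model mentioned in the abstract.
import math

def fk_2r(l1, l2, q1, q2):
    """End-point position of a planar 2R chain (angles in radians)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Made-up link lengths (m) and joint angles for illustration.
print(fk_2r(0.10, 0.06, math.radians(20), math.radians(-10)))
```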

  16. Novel fast multiplier implemented using FPGA

    NASA Astrophysics Data System (ADS)

    Jabłoński, Janusz; Wegrzyn, Marek

    2015-09-01

    This paper presents a solution, dedicated to FPGA devices, for synthesizing parallel multiplication systems using an alternative approach, called mutual exclusion, to handle the partial-product results. A 4:2 reducer for parallel multipliers based on Wallace tree structures is proposed, tailored to the 4-input, 1-output look-up table (LUT) function generators used in FPGA devices. The work covers FPGA multiplication of 4-bit and 8-bit numbers, but the approach can be extended to 16 and 32 bits. The proposed solution exploits the probability of conditionally significant partial products and yields faster operation, with fewer logic levels for special cases of multiplication related to specific values of the sums of partial-product bits.

  17. FPGA implementation of robust Capon beamformer

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Zmuda, Henry; Li, Jian; Du, Lin; Sheplak, Mark

    2012-03-01

    The Capon beamforming algorithm is an optimal spatial filtering algorithm used in various signal processing applications where excellent interference rejection performance is required, such as radar and sonar systems and smart antenna systems for wireless communications. Its lack of robustness, however, means that it is vulnerable to array calibration errors and other model errors. To overcome this problem, numerous robust Capon beamforming algorithms have been proposed, which are much more promising for practical applications. In this paper, an FPGA implementation of a robust Capon beamforming algorithm is investigated and presented. The realization takes an array output with 4 channels, computes the complex-valued adaptive weight vectors for beamforming with an 18-bit fixed-point representation, and runs at a 100 MHz clock on a Xilinx V4 FPGA. This work will be applied in our medical imaging project for breast cancer detection.
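
    One common way to robustify the Capon weights is diagonal loading of the sample covariance; the floating-point NumPy sketch below illustrates that idea for a 4-channel array. It is not the specific robust-Capon variant or the fixed-point pipeline of the FPGA design, and the loading factor and test data are assumptions.

```python
# Diagonally loaded Capon beamformer: w = R^-1 a / (a^H R^-1 a), with a small
# identity term added to the sample covariance R for robustness.
import numpy as np

def loaded_capon_weights(snapshots, steering, loading=1e-2):
    """snapshots: (channels, N) complex array; steering: (channels,) vector."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    R += loading * np.trace(R).real / n * np.eye(n)           # diagonal loading
    Ri_a = np.linalg.solve(R, steering)
    return Ri_a / (steering.conj() @ Ri_a)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 256)) + 1j * rng.standard_normal((4, 256))
a = np.exp(-1j * np.pi * np.arange(4) * np.sin(np.deg2rad(10)))  # 10-degree look
w = loaded_capon_weights(x, a)
print(np.abs(w.conj() @ a))   # distortionless constraint: ~1 toward look direction
```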

  18. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design
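
    The quoted throughput follows directly from the stated roofline-style model, as the small check below shows (all values are taken from the abstract).

```python
# Back-of-the-envelope check of the throughput model quoted in the abstract:
# sustained Gflops ~= arithmetic intensity x peak bandwidth x memory efficiency.

ops_per_byte = 130 / 64          # 130 flops per 64 bytes of I/O -> ~2.03
peak_bw_gbs = 76.8               # Convey HC-1 peak memory bandwidth (GB/s)
mem_efficiency = 0.5             # ~50% achieved efficiency

print(ops_per_byte * peak_bw_gbs * mem_efficiency)   # ~78 Gflops
```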

  19. Using FPGA Devices to Accelerate Biomolecular Simulations

    SciTech Connect

    Alam, Sadaf R; Agarwal, Pratul K; Smith, Melissa C; Vetter, Jeffrey S; Caliga, David E

    2007-03-01

    A field-programmable gate array implementation of the particle-mesh Ewald algorithm, a molecular dynamics simulation method, reduces the microprocessor time-to-solution by a factor of three while using only high-level languages. The application speedup on FPGA devices increases with the problem size. The authors use a performance model to analyze the potential of simulating large-scale biological systems faster than many cluster-based supercomputing platforms.

  20. Climate services for society: origins, institutional arrangements, and design elements for an evaluation framework

    PubMed Central

    Vaughan, Catherine; Dessai, Suraje

    2014-01-01

    Climate services involve the generation, provision, and contextualization of information and knowledge derived from climate research for decision making at all levels of society. These services are mainly targeted at informing adaptation to climate variability and change, widely recognized as an important challenge for sustainable development. This paper reviews the development of climate services, beginning with a historical overview, a short summary of improvements in climate information, and a description of the recent surge of interest in climate service development including, for example, the Global Framework for Climate Services, implemented by the World Meteorological Organization in October 2012. It also reviews institutional arrangements of selected emerging climate services across local, national, regional, and international scales. By synthesizing existing literature, the paper proposes four design elements of a climate services evaluation framework. These design elements include: problem identification and the decision-making context; the characteristics, tailoring, and dissemination of the climate information; the governance and structure of the service, including the process by which it is developed; and the socioeconomic value of the service. The design elements are intended to serve as a guide to organize future work regarding the evaluation of when and whether climate services are more or less successful. The paper concludes by identifying future research questions regarding the institutional arrangements that support climate services and nascent efforts to evaluate them. PMID:25798197

  1. Comparative effectiveness research for the clinician researcher: a framework for making a methodological design choice.

    PubMed

    Williams, Cylie M; Skinner, Elizabeth H; James, Alicia M; Cook, Jill L; McPhail, Steven M; Haines, Terry P

    2016-01-01

    Comparative effectiveness research compares two active forms of treatment, or usual care against usual care plus an additional intervention element. These types of study are commonly conducted following a placebo or no-active-treatment trial. Research designs with a placebo or non-active treatment arm can be challenging for the clinician researcher when conducted within the healthcare environment with patients attending for treatment. A framework for conducting comparative effectiveness research is needed, particularly for interventions for which there are no strong regulatory requirements that must be met prior to their introduction into usual care. We argue for a broader use of comparative effectiveness research to achieve translatable real-world clinical research. These types of research design also affect the rapid uptake of evidence-based clinical practice within the healthcare setting. This framework includes questions to guide the clinician researcher toward the most appropriate trial design to measure treatment effect. These questions include consideration of current treatment provision during usual care, known treatment effectiveness, side effects of treatments, economic impact, and the setting in which the research is being undertaken.

  2. Implementing a Digital Phasemeter in an FPGA

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2008-01-01

    Firmware for implementing a digital phasemeter within a field-programmable gate array (FPGA) has been devised. In the original application of this firmware, the phase that one seeks to measure is the difference between the phases of two nominally equal-frequency heterodyne signals generated by two interferometers. In that application, zero-crossing detectors convert the heterodyne signals to trains of rectangular pulses, the two pulse trains are fed to a fringe counter (the major part of the phasemeter) controlled by a clock signal with a frequency greater than the heterodyne frequency, and the fringe counter computes a time-averaged estimate of the difference between the phases of the two pulse trains. The firmware also causes the FPGA to compute the frequencies of the input signals, implements an Ethernet (or equivalent) transmitter for readout of the phase and frequency values, and provides data for use in diagnosing communication failures. The readout rate can be set, by programming, to a value between 250 Hz and 1 kHz. Network addresses can be programmed by the user.
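
    A software analogue of the zero-crossing and time-averaging idea is sketched below; the sample rate, heterodyne frequency, and phase offset are invented test values rather than parameters of the firmware.

```python
# Estimate the phase difference between two nominally equal-frequency signals
# from the offsets of their rising zero-crossing times, then time-average.
import numpy as np

fs, f_het = 1.0e6, 1.0e3                      # assumed sample rate and heterodyne frequency
t = np.arange(0, 0.05, 1 / fs)
a = np.sin(2 * np.pi * f_het * t)
b = np.sin(2 * np.pi * f_het * t + 0.7)       # 0.7 rad phase offset to recover

def rising_edges(x):
    return np.nonzero((x[:-1] < 0) & (x[1:] >= 0))[0]

ta, tb = rising_edges(a) / fs, rising_edges(b) / fs
n = min(len(ta), len(tb))
phase = np.mean((ta[:n] - tb[:n]) * 2 * np.pi * f_het)   # time-averaged estimate
print(phase)                                             # ~0.7 rad
```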

  3. FPGA Trigger System to Run Klystrons

    SciTech Connect

    Gray, Darius; /Texas A-M /SLAC

    2010-08-25

    The Klystron Department is in need of a new trigger system to update the laboratory's capabilities. The objective of the research is to develop the trigger system using field-programmable gate array (FPGA) technology with a user interface that allows one to communicate with the FPGA via a Universal Serial Bus (USB). This trigger system will be used for the testing of klystrons. The key materials used consist of the Xilinx Integrated Software Environment (ISE) Foundation, a programmable read-only memory (PROM) XCF04S, a Xilinx Spartan 3E 35S500E FPGA, a Xilinx Platform Cable USB II, a printed circuit board (PCB), a 100 MHz oscillator, and an oscilloscope. Key considerations include eight triggers, two of which have variable phase-shifting capabilities. Once the project was completed, the output signals could be manipulated via a graphical user interface by varying the delay and width of the signal. This was as planned; however, the ability to vary the phase was not completed. Future work could consist of adding the ability to vary the phase. This project will give the operators in the Klystron Department more flexibility to run various tests.

  4. Guiding the Design of Lessons by Using the MAPLET Framework: Matching Aims, Processes, Learner Expertise and Technologies

    ERIC Educational Resources Information Center

    Ifenthaler, Dirk; Gosper, Maree

    2014-01-01

    This paper introduces the MAPLET framework that was developed to map and link teaching aims, learning processes, learner expertise and technologies. An experimental study with 65 participants is reported to test the effectiveness of the framework as a guide to the design of lessons embedded within larger units of study. The findings indicate the…

  5. An Application of the Impact Evaluation Process for Designing a Performance Measurement and Evaluation Framework in K-12 Environments

    ERIC Educational Resources Information Center

    Guerra-Lopez, Ingrid; Toker, Sacip

    2012-01-01

    This article illustrates the application of the Impact Evaluation Process for the design of a performance measurement and evaluation framework for an urban high school. One of the key aims of this framework is to enhance decision-making by providing timely feedback about the effectiveness of various performance improvement interventions. The…

  6. Control over Catenation in Metal−Organic Frameworks via Rational Design of the Organic Building Block

    SciTech Connect

    Farha, Omar K.; Malliakas, Christos D.; Kanatzidis, Mercouri G.; Hupp, Joseph T.

    2010-02-19

    Metal-organic frameworks (MOFs), a hybrid class of materials comprising inorganic nodes and organic struts, have potential application in many areas due to their high surface areas and uniform pores and channels. One of the key challenges to be overcome in MOF synthesis is the strong propensity for catenation (growth of multiple independent networks within a given crystal), as catenation reduces cavity sizes and diminishes porosity. Here we demonstrate that rational design of organic building blocks, which act as strut-impervious scaffolds, can be exploited to generate highly desired noncatenated materials in a controlled fashion.

  7. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  8. Design, implementation and validation of a novel open framework for agile development of mobile health applications

    PubMed Central

    2015-01-01

    The delivery of healthcare services has experienced tremendous changes during recent years. Mobile health, or mHealth, is a key engine of advance at the forefront of this revolution. Although mobile health applications are being developed at a growing pace, there is a lack of tools specifically devised for their implementation. This work presents mHealthDroid, an open-source Android implementation of an mHealth framework designed to facilitate the rapid and easy development of mHealth and biomedical apps. The framework is particularly designed to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors, and portable biomedical systems. These devices are increasingly used for the monitoring and delivery of personal health care and wellbeing. The framework implements several functionalities to support resource and communication abstraction, biomedical data acquisition, health knowledge extraction, persistent data storage, adaptive visualization, system management, and value-added services such as intelligent alerts, recommendations, and guidelines. An exemplary application is also presented in this work to demonstrate the potential of mHealthDroid. This app is used to investigate the analysis of human behavior, which is considered to be one of the most prominent areas in mHealth. An accurate activity recognition model is developed and successfully validated in both offline and online conditions. PMID:26329639

  9. Design, implementation and validation of a novel open framework for agile development of mobile health applications.

    PubMed

    Banos, Oresti; Villalonga, Claudia; Garcia, Rafael; Saez, Alejandro; Damas, Miguel; Holgado-Terriza, Juan A; Lee, Sungyong; Pomares, Hector; Rojas, Ignacio

    2015-01-01

    The delivery of healthcare services has experienced tremendous changes during recent years. Mobile health, or mHealth, is a key engine of advance at the forefront of this revolution. Although mobile health applications are being developed at a growing pace, there is a lack of tools specifically devised for their implementation. This work presents mHealthDroid, an open-source Android implementation of an mHealth framework designed to facilitate the rapid and easy development of mHealth and biomedical apps. The framework is particularly designed to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors, and portable biomedical systems. These devices are increasingly used for the monitoring and delivery of personal health care and wellbeing. The framework implements several functionalities to support resource and communication abstraction, biomedical data acquisition, health knowledge extraction, persistent data storage, adaptive visualization, system management, and value-added services such as intelligent alerts, recommendations, and guidelines. An exemplary application is also presented in this work to demonstrate the potential of mHealthDroid. This app is used to investigate the analysis of human behavior, which is considered to be one of the most prominent areas in mHealth. An accurate activity recognition model is developed and successfully validated in both offline and online conditions. PMID:26329639

  10. Design, implementation and validation of a novel open framework for agile development of mobile health applications.

    PubMed

    Banos, Oresti; Villalonga, Claudia; Garcia, Rafael; Saez, Alejandro; Damas, Miguel; Holgado-Terriza, Juan A; Lee, Sungyong; Pomares, Hector; Rojas, Ignacio

    2015-01-01

    The delivery of healthcare services has experienced tremendous changes during recent years. Mobile health, or mHealth, is a key engine of advance at the forefront of this revolution. Although mobile health applications are being developed at a growing pace, there is a lack of tools specifically devised for their implementation. This work presents mHealthDroid, an open-source Android implementation of an mHealth framework designed to facilitate the rapid and easy development of mHealth and biomedical apps. The framework is particularly designed to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors, and portable biomedical systems. These devices are increasingly used for the monitoring and delivery of personal health care and wellbeing. The framework implements several functionalities to support resource and communication abstraction, biomedical data acquisition, health knowledge extraction, persistent data storage, adaptive visualization, system management, and value-added services such as intelligent alerts, recommendations, and guidelines. An exemplary application is also presented in this work to demonstrate the potential of mHealthDroid. This app is used to investigate the analysis of human behavior, which is considered to be one of the most prominent areas in mHealth. An accurate activity recognition model is developed and successfully validated in both offline and online conditions.

  11. Broad-Bandwidth FPGA-Based Digital Polyphase Spectrometer

    NASA Technical Reports Server (NTRS)

    Jamot, Robert F.; Monroe, Ryan M.

    2012-01-01

    With present concern for ecological sustainability ever increasing, it is desirable to model the composition of Earth's upper atmosphere accurately with regard to certain helpful and harmful chemicals, such as greenhouse gases and ozone. The Microwave Limb Sounder (MLS) is an instrument designed to map the global day-to-day concentrations of key atmospheric constituents continuously. One important component of MLS is the spectrometer, which processes the raw data provided by the receivers into frequency-domain information that can not only be transmitted more efficiently, but also processed directly once received. The present-generation spectrometer is fully analog; the goal is to include a fully digital spectrometer in the next-generation sensor. In a digital spectrometer, incoming analog data must be converted into a digital format, processed through a Fourier transform, and finally accumulated to reduce the impact of input noise. While the final design will be placed on an application-specific integrated circuit (ASIC), building these chips is prohibitively expensive, so this design was constructed on a field-programmable gate array (FPGA). A family of state-of-the-art digital Fourier transform spectrometers has been developed, with a combination of high bandwidth and fine resolution. Analog signals consisting of radiation emitted by constituents in planetary atmospheres or galactic sources are downconverted and subsequently digitized by a pair of interleaved analog-to-digital converters (ADCs). This 6-Gsps (gigasamples per second) digital representation of the analog signal is then processed through an FPGA-based streaming fast Fourier transform (FFT). Digital spectrometers have many advantages over previously used analog spectrometers, especially in terms of accuracy and resolution, both of which are particularly important for the type of scientific questions to be addressed with next-generation radiometers.
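
    The digitize, FFT, and accumulate chain can be modeled in a few lines. In the sketch below, the block length, number of accumulated blocks, and test signal are stand-ins chosen for illustration rather than the real 6-Gsps design parameters.

```python
# Conceptual model of a digital spectrometer back end: block the digitized
# stream, window + FFT each block, and accumulate power spectra to beat down
# the input noise.
import numpy as np

def accumulate_spectra(samples, fft_len=1024, n_blocks=64):
    acc = np.zeros(fft_len // 2 + 1)
    win = np.hanning(fft_len)
    for k in range(n_blocks):
        block = samples[k * fft_len:(k + 1) * fft_len]
        acc += np.abs(np.fft.rfft(block * win)) ** 2     # power accumulation
    return acc / n_blocks

rng = np.random.default_rng(1)
fs = 1.0e6
t = np.arange(1024 * 64) / fs
x = np.sin(2 * np.pi * 123e3 * t) + 0.5 * rng.standard_normal(t.size)
spectrum = accumulate_spectra(x)
print(np.argmax(spectrum) * fs / 1024)                   # ~123 kHz line recovered
```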

  12. A taxonomy of apatite frameworks for the crystal chemical design of fuel cell electrolytes

    SciTech Connect

    Pramana, Stevin S.; Klooster, Wim T.; White, Timothy J.

    2008-08-15

    Apatite framework taxonomy succinctly rationalises the crystallographic modifications of this structural family as a function of chemical composition. Taking the neutral apatite [La8Sr2][(GeO4)6]O2 as a prototype electrolyte, this classification scheme correctly predicted that 'excess' oxygen in La9SrGe6O26.5 is tenanted in the framework as [La9Sr][(GeO4)5.5(GeO5)0.5]O2, rather than the presumptive tunnel location of [La9Sr][(GeO4)6]O2.5. The implication of this approach is that in addition to the three known apatite genera, A10(BO3)6X2, A10(BO4)6X2 and A10(BO5)6X2, hybrid electrolytes of the type A10(BO3/BO4/BO5)6X2 can be designed, with potentially superior low-temperature ion conduction, mediated by the introduction of oxygen to the framework reservoir. - Graphical abstract: Neutron diffraction identified that the excess oxygen in La9SrGe6O26.5 is tenanted in the framework as [La9Sr][(GeO4)5.5(GeO5)0.5]O2, so that in addition to the three known apatite genera, hybrid electrolytes of the type A10(BO3/BO4/BO5)6X2 can be designed.

  13. Tuning the Topology and Functionality of Metal–Organic Frameworks by Ligand Design

    SciTech Connect

    Zhao, Dan; Timmons, Daren J; Yuan, Daqiang; Zhou, Hong-Cai

    2011-02-15

    Metal–organic frameworks (MOFs)—highly crystalline hybrid materials that combine metal ions with rigid organic ligands—have emerged as an important class of porous materials. The organic ligands add flexibility and diversity to the chemical structures and functions of these materials. In this Account, we summarize our laboratory’s experience in tuning the topology and functionality of MOFs by ligand design. These investigations have led to new materials with interesting properties. By using a ligand that can adopt different symmetry conformations through free internal bond rotation, we have obtained two MOFs that are supramolecular stereoisomers of each other at different reaction temperatures. In another case, where the dimerized ligands function as a D₃-Piedfort unit spacer, we achieve chiral (10,3)-a networks. In the design of MOF-based materials for hydrogen and methane storage, we focused on increasing the gas affinity of frameworks by using ligands with different geometries to control the pore size and effectively introduce unsaturated metal centers (UMCs) into the framework. Framework interpenetration in PCN-6 (PCN stands for porous coordination network) can lead to higher hydrogen uptake. Because of the proper alignment of the UMCs, PCN-12 holds the record for uptake of hydrogen at 77 K/760 Torr. In the case of methane storage, PCN-14 with anthracene-derived ligand achieves breakthrough storage capacity, at a level 28% higher than the U.S. Department of Energy target. Selective gas adsorption requires a pore size comparable to that of the target gas molecules; therefore, we use bulky ligands and network interpenetration to reduce the pore size. In addition, with the help of an amphiphilic ligand, we were able to use temperature to continuously change pore size in a 2D layer MOF. Adding charge to an organic ligand can also stabilize frameworks. By ionizing the amine group within mesoMOF-1, the resulting electronic repulsion keeps the network from

  14. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    SciTech Connect

    Wurtz, R.; Kaplan, A.

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully realized statistical classifiers rely on a comprehensive set of tools for design, building, and implementation. PSD advances rely on improvements to the implemented algorithm and can draw on conventional statistical-classifier and machine-learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.

  15. Expanding lean thinking to the product and process design and development within the framework of sustainability

    NASA Astrophysics Data System (ADS)

    Sorli, M.; Sopelana, A.; Salgado, M.; Pelaez, G.; Ares, E.

    2012-04-01

    Companies require tools to change towards a new way of developing and producing innovative products, manufactured with the economic, social, and environmental impact along the product life cycle in mind. By translating Lean principles into Product Development (PD) from the design stage onward and along the entire product life cycle, this work aims to address both sustainability and environmental issues. The drivers of a sustainable culture within lean PD have been identified, and a baseline for future research on the development of appropriate tools and techniques has been provided. This research provides industry with a framework that balances environmental and sustainability factors with lean principles, to be considered and incorporated from the beginning of product design and development and covering the entire product life cycle.

  16. Rational design of crystalline supermicroporous covalent organic frameworks with triangular topologies

    NASA Astrophysics Data System (ADS)

    Dalapati, Sasanka; Addicoat, Matthew; Jin, Shangbin; Sakurai, Tsuneaki; Gao, Jia; Xu, Hong; Irle, Stephan; Seki, Shu; Jiang, Donglin

    2015-07-01

    Covalent organic frameworks (COFs) are an emerging class of highly ordered porous polymers with many potential applications. They are currently designed and synthesized through hexagonal and tetragonal topologies, limiting the access to and exploration of new structures and properties. Here, we report that a triangular topology can be developed for the rational design and synthesis of a new class of COFs. The triangular topology features small pore sizes down to 12 Å, which is among the smallest pores for COFs reported to date, and high π-column densities of up to 0.25 nm⁻², which exceeds those of supramolecular columnar π-arrays and other COF materials. These crystalline COFs facilitate π-cloud delocalization and are highly conductive, with a hole mobility that is among the highest reported for COFs and polygraphitic ensembles.

  17. Adaptive change in electrically stimulated muscle: a framework for the design of clinical protocols.

    PubMed

    Salmons, Stanley

    2009-12-01

    Adult mammalian skeletal muscles have a remarkable capacity for adapting to increased use. Although this behavior is familiar from the changes brought about by endurance exercise, it is seen to a much greater extent in the response to long-term neuromuscular stimulation. The associated phenomena include a markedly increased resistance to fatigue, and this is the key to several clinical applications. However, a more rational basis is needed for designing regimes of stimulation that are conducive to an optimal outcome. In this review I examine relevant factors, such as the amount, frequency, and duty cycle of stimulation, the influence of force generation, and the animal model. From these considerations a framework emerges for the design of protocols that yield an overall functional profile appropriate to the application. Three contrasting examples illustrate the issues that need to be addressed clinically. PMID:19902542

  18. Rational design of crystalline supermicroporous covalent organic frameworks with triangular topologies

    PubMed Central

    Dalapati, Sasanka; Addicoat, Matthew; Jin, Shangbin; Sakurai, Tsuneaki; Gao, Jia; Xu, Hong; Irle, Stephan; Seki, Shu; Jiang, Donglin

    2015-01-01

    Covalent organic frameworks (COFs) are an emerging class of highly ordered porous polymers with many potential applications. They are currently designed and synthesized through hexagonal and tetragonal topologies, limiting the access to and exploration of new structures and properties. Here, we report that a triangular topology can be developed for the rational design and synthesis of a new class of COFs. The triangular topology features small pore sizes down to 12 Å, which is among the smallest pores for COFs reported to date, and high π-column densities of up to 0.25 nm−2, which exceeds those of supramolecular columnar π-arrays and other COF materials. These crystalline COFs facilitate π-cloud delocalization and are highly conductive, with a hole mobility that is among the highest reported for COFs and polygraphitic ensembles. PMID:26178865

  19. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve constrained single-objective and constrained multi-objective versions of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  20. FPGA Implementation of Generalized Hebbian Algorithm for Texture Classification

    PubMed Central

    Lin, Shiow-Jyu; Hwang, Wen-Jyi; Lee, Wei-Hao

    2012-01-01

    This paper presents a novel hardware architecture for principal component analysis. The architecture is based on the Generalized Hebbian Algorithm (GHA) because of its simplicity and effectiveness. The architecture is separated into three portions: the weight vector updating unit, the principal computation unit and the memory unit. In the weight vector updating unit, the computation of different synaptic weight vectors shares the same circuit for reducing the area costs. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture is physically implemented by Field Programmable Gate Array (FPGA). It is embedded in a System-On-Programmable-Chip (SOPC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient design for attaining both high speed performance and low area costs. PMID:22778640
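
    For readers unfamiliar with GHA, a minimal NumPy sketch of Sanger's update rule is given below; the learning rate, data, and dimensions are arbitrary, and the article's weight-sharing FPGA architecture is not reproduced.

```python
# Generalized Hebbian Algorithm (Sanger's rule) for principal component
# analysis: w_i += lr * y_i * (x - sum_{k<=i} y_k * w_k), written in matrix form.
import numpy as np

def gha_step(W, x, lr=1e-3):
    """One GHA update. W: (m, n) weight matrix, one row per principal component."""
    y = W @ x                                        # outputs of the linear neurons
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
data = rng.standard_normal((5000, 8)) @ rng.standard_normal((8, 8))
W = rng.standard_normal((3, 8)) * 0.01
for _ in range(3):                                   # a few passes over the data
    for x in data:
        W = gha_step(W, x)
print(np.round(W @ W.T, 2))                          # rows approach orthonormal PCs
```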

  1. An FPGA-based platform for accelerated offline spike sorting.

    PubMed

    Gibson, Sarah; Judy, Jack W; Marković, Dejan

    2013-04-30

    There is a push in electrophysiology experiments to record simultaneously from many channels (upwards of 64) over long time periods (many hours). Given the relatively high sampling rates (10-40 kHz) and resolutions (12-24 bits per sample), these experiments accumulate exorbitantly large amounts of data (e.g., 100 GB per experiment), which can be very time-consuming to process. Here, we present an FPGA-based spike-sorting platform that can increase the speed of offline spike sorting by at least 25 times, effectively reducing the time required to sort data from long experiments from several hours to just a few minutes. We attempted to preserve the flexibility of software by implementing several different algorithms in the design, and by providing user control over parameters such as spike detection thresholds. The results of sorting a published benchmark dataset using this hardware tool are shown to be comparable to those using similar software tools.
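
    As an example of the kind of user-tunable step mentioned above (spike-detection thresholds), a simple median-based amplitude-threshold detector is sketched below; the threshold factor, refractory gap, and test trace are illustrative assumptions rather than the platform's actual algorithms.

```python
# Amplitude-threshold spike detection with a robust (median-based) noise
# estimate and a simple refractory period.
import numpy as np

def detect_spikes(signal, k=5.0, refractory=30):
    """Return sample indices whose absolute value exceeds k * sigma_noise."""
    sigma = np.median(np.abs(signal)) / 0.6745      # robust noise estimate
    above = np.nonzero(np.abs(signal) > k * sigma)[0]
    spikes, last = [], -refractory
    for idx in above:                               # enforce a refractory gap
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return spikes

rng = np.random.default_rng(2)
trace = rng.standard_normal(20000)
trace[5000], trace[12000] = 12.0, -10.0             # two injected "spikes"
print(detect_spikes(trace))                          # expected: [5000, 12000]
```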

  2. Exploring Manycore Multinode Systems for Irregular Applications with FPGA Prototyping

    SciTech Connect

    Ceriani, Marco; Palermo, Gianluca; Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2013-04-29

    We present a prototype of a multi-core architecture implemented on an FPGA, designed to enable efficient execution of irregular applications on distributed shared-memory machines while maintaining high performance on regular workloads. The architecture is composed of off-the-shelf soft cores, local interconnect, and a memory interface, integrated with custom components that optimize it for irregular applications. It relies on three key elements: a global address space, multithreading, and fine-grained synchronization. Global addresses are scrambled to reduce the formation of network hot-spots, while the latency of transactions is covered by integrating a hardware scheduler within the custom load/store buffers to take advantage of the availability of multiple execution threads, increasing efficiency transparently to the application. We evaluated a dual-node system on irregular kernels, showing scalability in the number of cores and threads.

  3. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870

  4. Method to implement the CCD timing generator based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on an FPGA and VHDL. This paper presents the principles and implementation details of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor, which is the controller of this generator. Some test results are presented in the end.

  5. A Digitalized Silicon Microgyroscope Based on Embedded FPGA

    PubMed Central

    Xia, Dunzhu; Yu, Cheng; Wang, Yuliang

    2012-01-01

    This paper presents a novel digital miniaturization method for a prototype silicon micro-gyroscope (SMG) with a symmetrical and decoupled structure. The schematic blocks of the overall system consist of a high-precision analog front-end interface, a high-speed 18-bit analog-to-digital converter, a high-performance Field Programmable Gate Array (FPGA) chip and other peripherals such as high-speed serial ports for transmitting data. In drive mode, the closed-loop drive circuit is implemented by an automatic gain control (AGC) loop and a software phase-locked loop (SPLL) based on the Coordinate Rotation Digital Computer (CORDIC) algorithm. Meanwhile, the sense demodulation module based on varying-step least mean square demodulation (LMSD) is addressed in detail. All of the algorithms are simulated with Simulink and DSP Builder tools, and the results are in good agreement with the theoretical design. The experimental results fully demonstrate the stability and flexibility of the system. PMID:23201990
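    The SPLL's phase detection relies on CORDIC, which computes angles using only shifts and adds. A minimal, textbook-style vectoring-mode sketch is shown below (floating-point for clarity, valid for x > 0); it is not the authors' fixed-point implementation.

        import math

        def cordic_phase(x, y, iters=16):
            """Vectoring-mode CORDIC: rotate (x, y) onto the x-axis and
            accumulate the rotation angle, approximating atan2(y, x) for x > 0."""
            angle = 0.0
            for i in range(iters):
                d = 1.0 if y < 0 else -1.0            # rotate toward the x-axis
                x, y = x - d * y * 2**-i, y + d * x * 2**-i
                angle -= d * math.atan(2**-i)
            return angle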

  6. Valuation-Based Framework for Considering Distributed Generation Photovoltaic Tariff Design: Preprint

    SciTech Connect

    Zinaman, O. R.; Darghouth, N. R.

    2015-02-01

    While an export tariff is only one element of a larger regulatory framework for distributed generation, we choose to focus on tariff design because of the significant impact this program design component has on the various flows of value among power sector stakeholders. In that context, this paper is organized into a series of steps that can be taken during the design of a DGPV export tariff. To that end, this paper outlines a holistic, high-level approach to the complex undertaking of DGPV tariff design, the crux of which is an iterative cost-benefit analysis process. We propose a multi-step progression that aims to promote transparent, focused, and informed dialogue on CBA study methodologies and assumptions. When studies are completed, the long-run marginal avoided cost of the DGPV program should be compared against the costs imposed on utilities and non-participating customers, recognizing that these can be defined differently depending on program objectives. The results of this comparison can then be weighed against other program objectives to formulate tariff options. Potential changes to tariff structures can be iteratively fed back into established analytical tools to inform further discussions.

  7. A supermolecular building approach for the design and construction of metal-organic frameworks.

    PubMed

    Guillerm, Vincent; Kim, Dongwook; Eubank, Jarrod F; Luebke, Ryan; Liu, Xinfang; Adil, Karim; Lah, Myoung Soo; Eddaoudi, Mohamed

    2014-08-21

    In this review, we describe two recently implemented conceptual approaches facilitating the design and deliberate construction of metal–organic frameworks (MOFs), namely the supermolecular building block (SBB) and supermolecular building layer (SBL) approaches. Our main objective is to offer an appropriate means to assist chemists and material designers alike in rationally constructing desired functional MOF materials, made-to-order MOFs. We introduce the concept of net-coded building units (net-cBUs), where precise embedded geometrical information uniquely codes for a selected net, as a compelling route for the rational design of MOFs. This concept is based on employing pre-selected 0-periodic metal–organic polyhedra or 2-periodic metal–organic layers, SBBs or SBLs respectively, as a pathway to access the requisite net-cBUs. In this review, inspired by our success with the original rht-MOF, we extrapolated our strategy to other known MOFs via their deconstruction into more elaborate building units (namely polyhedra or layers) to (i) elucidate the unique relationship between edge-transitive polyhedra or layers and minimal edge-transitive 3-periodic nets, and (ii) illustrate the potential of the SBB and SBL approaches as a rational pathway for the design and construction of 3-periodic MOFs. Using this design strategy, we have also identified several new hypothetical MOFs which are synthetically targetable.

  8. A framework for collecting inclusive design data for the UK population.

    PubMed

    Langdon, Pat; Johnson, Daniel; Huppert, Felicia; Clarkson, P John

    2015-01-01

    Successful inclusive product design requires knowledge about the capabilities, needs and aspirations of potential users and should cater for the different scenarios in which people will use products, systems and services. This should include: the individual at home; in the workplace; for businesses; and for products in these contexts. It needs to reflect the development of theory, tools and techniques as research moves on. It must also draw in wider psychological, social, and economic considerations in order to gain a more accurate understanding of users' interactions with products and technology. However, recent research suggests that although a number of national disability surveys have been carried out, no such knowledge currently exists as information to support the design of products, systems and services for heterogeneous users. This paper outlines the strategy behind specific inclusive design research that is aimed at creating the foundations for measuring inclusion in product designs. A key outcome of this future research will be specifying and operationalising capability, and psychological, social and economic context measures for inclusive design. This paper proposes a framework for capturing such information, describes an early pilot study, and makes recommendations for better practice.

  9. Framework for Integrating Safety, Operations, Security, and Safeguards in the Design and Operation of Nuclear Facilities

    SciTech Connect

    Darby, John L.; Horak, Karl Emanuel; LaChance, Jeffrey L.; Tolk, Keith Michael; Whitehead, Donnie Wayne

    2007-10-01

    The US is currently on the brink of a nuclear renaissance that will result in near-term construction of new nuclear power plants. In addition, the Department of Energy’s (DOE) ambitious new Global Nuclear Energy Partnership (GNEP) program includes facilities for reprocessing spent nuclear fuel and reactors for transmuting safeguards material. The use of nuclear power and material has inherent safety, security, and safeguards (SSS) concerns that can impact the operation of the facilities. Recent concern over terrorist attacks and nuclear proliferation led to an increased emphasis on security and safeguard issues as well as the more traditional safety emphasis. To meet both domestic and international requirements, nuclear facilities include specific SSS measures that are identified and evaluated through the use of detailed analysis techniques. In the past, these individual assessments have not been integrated, which led to inefficient and costly design and operational requirements. This report provides a framework for a new paradigm where safety, operations, security, and safeguards (SOSS) are integrated into the design and operation of a new facility to decrease cost and increase effectiveness. Although the focus of this framework is on new nuclear facilities, most of the concepts could be applied to any new, high-risk facility.

  10. Laboratory evaluation of dynamic traffic assignment systems: Requirements, framework, and system design

    SciTech Connect

    Miaou, S.-P.; Pillai, R.S.; Summers, M.S.; Rathi, A.K.; Lieu, H.C.

    1997-01-01

    The success of Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS) depends on the availability and dissemination of timely and accurate estimates of current and emerging traffic network conditions. Real-time Dynamic Traffic Assignment (DTA) systems are being developed to provide the required timely information. The DTA systems will provide faithful and coherent real-time, pre-trip, and en-route guidance/information which includes routing, mode, and departure time suggestions for use by travelers, ATIS, and ATMS. To ensure the credibility and deployment potential of such DTA systems, an evaluation system supporting all phases of DTA system development has been designed and is presented in this paper. This evaluation system is called the DTA System Laboratory (DSL). A major component of the DSL is a ground-truth simulator, the DTA Evaluation System (DES). The DES is envisioned to be a virtual representation of a transportation system in which ATMS and ATIS technologies are deployed. It simulates the driving and decision-making behavior of travelers in response to ATIS and ATMS guidance, information, and control. This paper presents the major evaluation requirements for DTA systems, a modular modeling framework for the DES, and a distributed DES design. The modeling framework for the DES is modular, meets the requirements, can be assembled using both legacy and independently developed modules, and can be implemented as either a single process or a distributed system. The distributed design is extendible, provides for the optimization of distributed performance, and uses an object-oriented design within each distributed component. A status report on the development of the DES and other research applications is also provided.

  11. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  12. A Framework of Working Across Disciplines in Early Design and R&D of Large Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria Rivas; Papalambros, Panos Y.; Baker, Wayne E.

    2015-01-01

    This paper examines four primary methods of working across disciplines during R&D and early design of large-scale complex engineered systems such as aerospace systems. A conceptualized framework, called the Combining System Elements framework, is presented to delineate several aspects of cross-discipline and system integration practice. The framework is derived from a theoretical and empirical analysis of current work practices in actual operational settings and is informed by theories from organization science and engineering. The explanatory framework may be used by teams to clarify assumptions and associated work practices, which may reduce ambiguity in understanding diverse approaches to early systems research, development and design. The framework also highlights that very different engineering results may be obtained depending on work practices, even when the goals for the engineered system are the same.

  13. Design of a digital beam attenuation system for computed tomography: Part I. System design and simulation framework

    SciTech Connect

    Szczykutowicz, Timothy P.; Mistretta, Charles A.

    2013-02-15

    Purpose: The purpose of this work is to introduce a new device that allows for patient-specific imaging-dose modulation in conventional and cone-beam CT. The device is called a digital beam attenuator (DBA). The DBA modulates an x-ray beam by varying the attenuation of a set of attenuating wedge filters across the fan angle. The ability to modulate the imaging dose across the fan beam represents another stride in the direction of personalized medicine. With the DBA, imaging dose can be tailored for a given patient anatomy, or even tailored to provide signal-to-noise ratio enhancement within a region of interest. This modulation enables decreases in dose, scatter, detector dynamic range requirements, and noise nonuniformities. In addition to introducing the DBA, the simulation framework used to study the DBA under different configurations is presented. Finally, a detailed study on the choice of the material used to build the DBA is presented. Methods: To change the attenuator thickness, the authors propose to use an overlapping wedge design. In this design, for each wedge pair, one wedge is held stationary and another wedge is moved over the stationary wedge. The composite thickness of the two wedges changes as a function of the amount of overlap between the wedges. To validate the DBA concept and study design changes, a simulation environment was constructed. The environment allows for changes to system geometry, different source spectra, and DBA wedge design modifications, and supports both voxelized and analytic phantom models. All elements from atomic number 1 to 92 were evaluated for use as the DBA filter material. The dynamic range and tube loading for each element were calculated for various DBA designs. Tube loading was calculated by comparing the attenuation of the DBA at its minimum attenuation position to a filtered non-DBA acquisition. Results: The design and parametrization of DBA-implemented FFMCT have been introduced. A simulation
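    A hedged sketch of the overlapping-wedge idea follows: the thickness seen by a ray grows with the wedge overlap, and the transmitted fraction follows the Beer-Lambert law. The linear thickness profile and attenuation coefficient below are assumptions for illustration only, not values from the paper.

        import math

        def composite_thickness(overlap_mm, slope=0.2, t_base=1.0):
            """Composite attenuator thickness (mm) for one wedge pair: a fixed
            base plus a contribution that grows linearly with the overlap."""
            return t_base + slope * overlap_mm

        def transmission(thickness_mm, mu_per_mm=0.05):
            """Beer-Lambert transmitted fraction for a given attenuation coefficient."""
            return math.exp(-mu_per_mm * thickness_mm)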

  14. Research on acceleration method of reactor physics based on FPGA platforms

    SciTech Connect

    Li, C.; Yu, G.; Wang, K.

    2013-07-01

    The physical design of new-concept reactors, which have complex structures, various materials and broad neutron energy spectra, has greatly raised the requirements on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of their natural parallel characteristics, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. The neutron diffusion module designed on the CPU-FPGA architecture achieves an 11.2× speed-up factor, proving it feasible to apply this kind of heterogeneous platform to reactor physics. (authors)

  15. FPGA-Based Digital Current Switching Power Amplifiers Used in Magnetic Bearing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Zhang, Kai; Dong, Jinping

    A traditional two-level current switching power amplifier (PA) used in a magnetic bearing system exhibits significant current ripple. To improve current-ripple performance, three-level amplifiers have been designed, but their current control is generally based on analog and logic circuits, so the required hardware is complex and performance improvements through hardware adjustment are difficult. To solve this problem, an FPGA-based digital current switching power amplifier (DCSPA) was designed. Its current ripple is obviously smaller than that of a two-level amplifier, and its control circuit is much simpler than that of a three-level amplifier with an analog control circuit. Because of the field-programmable capability of the FPGA chip used, different control algorithms, including complex nonlinear algorithms, can be easily implemented in the amplifier and their effects compared on the same hardware.

  16. Advanced image processing package for FPGA-based re-programmable miniature electronics

    NASA Astrophysics Data System (ADS)

    Ovod, Vladimir I.; Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.

    2005-05-01

    Nova Sensors produces miniature electronics for a variety of real-time digital video camera systems, including foveal sensors based on Nova's Variable Acuity Superpixel Imager (VASI(TM)) technology. An advanced image-processing package has been designed at Nova Sensors to re-configure the FPGA-based co-processor board for numerous applications including motion detection, optical background velocimetry and target tracking. Currently, the processing package consists of 14 processing operations that cover a broad range of point- and area-applied algorithms. Flexible FPGA designs of these operations and re-programmability of the processing board allow for easy updates of the VASI(TM) sensors, and for low-cost customization of VASI(TM) sensors taking into account specific customer requirements. This paper describes the image processing algorithms implemented and verified in Xilinx FPGAs and provides the major technical performance data, with figures illustrating practical applications of the processing package.

  17. FPGA ROM Code for Very Large FIFO Control

    1995-02-22

    The code is used to program a Field Programmable Gate Array (FPGA) that controls a 4-megabit FIFO so that a set delay from input to output is maintained. The FPGA is also capable of inserting errors into the data flow in a controlled manner.
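    A software model of the fixed-delay behaviour described here can be sketched as a circular buffer in which every word emerges a fixed number of pushes after it enters; the Python class below is an illustrative analogue, not the FPGA code itself.

        from collections import deque

        class DelayFifo:
            """Model of a fixed-delay FIFO: each word pushed in comes out
            exactly 'delay' pushes later (zeros until the pipe fills)."""
            def __init__(self, delay):
                self.buf = deque([0] * delay, maxlen=delay)   # assumes delay >= 1
            def push(self, word):
                out = self.buf[0]          # oldest word, about to be displaced
                self.buf.append(word)
                return out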

  18. An application of the Impact Evaluation Process for designing a performance measurement and evaluation framework in K-12 environments.

    PubMed

    Guerra-López, Ingrid; Toker, Sacip

    2012-05-01

    This article illustrates the application of the Impact Evaluation Process for the design of a performance measurement and evaluation framework for an urban high school. One of the key aims of this framework is to enhance decision-making by providing timely feedback about the effectiveness of various performance improvement interventions. The framework design process is guided by the Impact Evaluation Process, and included the participation of key stakeholders including administrative and teaching staff who all contributed to the performance measurement and evaluation framework design process. Key performance indicators at the strategic, tactical, and operational levels were derived from the school vision, and linked to specific interventions to facilitate the continuous evaluation and improvement process.

  19. A conceptual framework for the design of environmental post-market monitoring of genetically modified plants.

    PubMed

    Sanvido, Olivier; Widmer, Franco; Winzeler, Michael; Bigler, Franz

    2005-01-01

    Genetically modified plants (GMPs) may soon be cultivated commercially in several member countries of the European Union (EU). According to EU Directive 2001/18/EC, post-market monitoring (PMM) for commercial GMP cultivation must be implemented, in order to detect and prevent adverse effects on human health and the environment. However, no general PMM strategies for GMP cultivation have been established so far. We present a conceptual framework for the design of environmental PMM for GMP cultivation based on current EU legislation and common risk analysis procedures. We have established a comprehensive structure of the GMP approval process, consisting of pre-market risk assessment (PMRA) as well as PMM. Both programs can be distinguished conceptually due to principles inherent to risk analysis procedures. The design of PMM programs should take into account the knowledge gained during approval for commercialization of a specific GMP and the decisions made in the environmental risk assessments (ERAs). PMM is composed of case-specific monitoring (CSM) and general surveillance. CSM focuses on anticipated effects of a specific GMP. Selection of case-specific indicators for detection of ecological exposure and effects, as well as definition of effect sizes, are important for CSM. General surveillance is designed to detect unanticipated effects on general safeguard subjects, such as natural resources, which must not be adversely affected by human activities like GMP cultivation. We have identified clear conceptual differences between CSM and general surveillance, and propose to adopt separate frameworks when developing either of the two programs. Common to both programs is the need to put a value on possible ecological effects of GMP cultivation. The structure of PMM presented here will be of assistance to industry, researchers, and regulators, when assessing GMPs during commercialization.

  20. Single event upset susceptibility testing of the Xilinx Virtex II FPGA

    NASA Technical Reports Server (NTRS)

    Yui, C.; Swift, G.; Carmichael, C.

    2002-01-01

    Heavy ion testing of the Xilinx Virtex II was conducted on the configuration, block RAM and user flip-flop cells to determine their single event upset susceptibility using LETs of 1.2 to 60 MeV·cm²/mg. A software program specifically designed to count errors in the FPGA is used to reveal L1/e values and single-event functional interrupt failures.

  1. Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine

    2008-01-01

    Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for SVM1 classifiers will lead to significant improvement in on-board classification capability and accuracy.
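    The appeal of SVM classifiers for FPGA co-processors is that, once trained, the linear decision function is just a dot product and a comparison, which maps directly onto multiply-accumulate resources. A minimal Python sketch of that decision step is given below; the weights and bias are assumed to come from offline training and are not taken from the paper.

        import numpy as np

        def svm_classify(x, w, b):
            """Linear SVM decision for one feature vector x, given trained
            weights w and bias b: returns +1 or -1."""
            return 1 if np.dot(w, x) + b >= 0 else -1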

  2. Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz

    2015-05-01

    This article describes the external digital interface specially designed for a thermographic camera built at the Military University of Technology. The aim of the article is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface to transfer infrared or video digital data and describes the solution we elaborated, based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated image-transmission link is built using an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol. The transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024@60Hz 24-bit video data using one signal pair, and was tested by transmitting the thermal-vision camera picture to a remote monitor. The dedicated video link reduces power consumption compared to solutions with ASIC-based encoders and decoders realizing video links such as DVI or packet-based DisplayPort, while simultaneously reducing the wiring needed to establish the link to one pair. The article describes the modules integrated in the FPGA design, which realize several functions: synchronization to the video source, video stream packetization, interfacing the transceiver module, and dynamic clock generation for video standard conversion.

  3. A framework for the design and development of physical employment tests and standards.

    PubMed

    Payne, W; Harvey, J

    2010-07-01

    Because operational tasks in the uniformed services (military, police, fire and emergency services) are physically demanding and incur the risk of injury, employment policy in these services is usually competency based and predicated on objective physical employment standards (PESs) based on physical employment tests (PETs). In this paper, a comprehensive framework for the design of PETs and PESs is presented. Three broad approaches to physical employment testing are described and compared: generic predictive testing; task-related predictive testing; task simulation testing. Techniques for the selection of a set of tests with good coverage of job requirements, including job task analysis, physical demands analysis and correlation analysis, are discussed. Regarding individual PETs, theoretical considerations including measurability, discriminating power, reliability and validity, and practical considerations, including development of protocols, resource requirements, administrative issues and safety, are considered. With regard to the setting of PESs, criterion referencing and norm referencing are discussed. STATEMENT OF RELEVANCE: This paper presents an integrated and coherent framework for the development of PESs and hence provides a much needed theoretically based but practically oriented guide for organisations seeking to establish valid and defensible PESs. PMID:20582767

  4. Multiscale Simulation as a Framework for the Enhanced Design of Nanodiamond-Polyethylenimine-based Gene Delivery

    PubMed Central

    Kim, Hansung; Man, Han Bin; Saha, Biswajit; Kopacz, Adrian M.; Lee, One-Sun; Schatz, George C.; Ho, Dean; Liu, Wing Kam

    2012-01-01

    Nanodiamonds (NDs) are emerging carbon platforms with promise as gene/drug delivery vectors for cancer therapy. Specifically, NDs functionalized with the polymer polyethylenimine (PEI) can transfect small interfering RNAs (siRNA) in vitro with high efficiency and low cytotoxicity. Here we present a modeling framework to accurately guide the design of ND-PEI gene platforms and elucidate binding mechanisms between ND, PEI, and siRNA. This is among the first ND simulations to comprehensively account for ND size, charge distribution, surface functionalization, and graphitization. The simulation results are compared with our experimental results both for PEI loading onto NDs and for siRNA (C-myc) loading onto ND-PEI for various mixing ratios. Remarkably, the model is able to predict loading trends and saturation limits for PEI and siRNA, while confirming the essential role of ND surface functionalization in mediating ND-PEI interactions. These results demonstrate that this robust framework can be a powerful tool in ND platform development, with the capacity to realistically treat other nanoparticle systems. PMID:23304428

  5. Computational Design of Metal-Organic Frameworks with High Methane Deliverable Capacity

    NASA Astrophysics Data System (ADS)

    Bao, Yi; Martin, Richard; Simon, Cory; Haranczyk, Maciej; Smit, Berend; Deem, Michael; Deem Team; Haranczyk Team; Smit Team

    Metal-organic frameworks (MOFs) are a rapidly emerging class of nanoporous materials with largely tunable chemistry and diverse applications in gas storage, gas purification, catalysis, etc. Intensive efforts have been made over the past decades to develop new MOFs with desirable properties, both experimentally and computationally. To guide experimental synthesis with limited throughput, we develop a computational methodology to explore MOFs with high methane deliverable capacity. This de novo design procedure applies known chemical reactions, considers synthesizability and geometric requirements of organic linkers, and efficiently evolves a population of MOFs with desirable properties. We identify about 500 MOFs with higher deliverable capacity than MOF-5 in 10 networks. We also investigate the relationship between deliverable capacity and internal surface area of MOFs. This methodology can be extended to MOFs with multiple types of linkers and multiple SBUs. DE-FG02-12ER16362.

  6. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    PubMed

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to develop high quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface prototype from the UML requirements analysis model so that we can confirm the validity of input/output data for each page and of page transitions on the system by directly operating the prototype. We propose a mapping rule in which design information independent of each web application framework implementation is defined based on the requirements analysis model, so as to improve traceability from the valid requirements analysis model to the final product. This paper discusses the result of applying our method to the development of a Group Work Support System that is currently running in our department.

  7. Passive Tomography for Spent Fuel Verification: Analysis Framework and Instrument Design Study

    SciTech Connect

    White, Timothy A.; Svard, Staffan J.; Smith, Leon E.; Mozin, Vladimir V.; Jansson, Peter; Davour, Anna; Grape, Sophie; Trellue, H.; Deshmukh, Nikhil S.; Wittman, Richard S.; Honkamaa, Tapani; Vaccaro, Stefano; Ely, James

    2015-05-18

    The potential for gamma emission tomography (GET) to detect partial defects within a spent nuclear fuel assembly is being assessed through a collaboration of Support Programs to the International Atomic Energy Agency (IAEA). In the first phase of this study, two safeguards verification objectives have been identified. The first is the independent determination of the number of active pins that are present in the assembly, in the absence of a priori information. The second objective is to provide quantitative measures of pin-by-pin properties, e.g. activity of key isotopes or pin attributes such as cooling time and relative burnup, for the detection of anomalies and/or verification of operator-declared data. The efficacy of GET to meet these two verification objectives will be evaluated across a range of fuel types, burnups, and cooling times, and with a target interrogation time of less than 60 minutes. The evaluation of GET viability for safeguards applications is founded on a modelling and analysis framework applied to existing and emerging GET instrument designs. Monte Carlo models of different fuel types are used to produce simulated tomographer responses to large populations of “virtual” fuel assemblies. Instrument response data are processed by a variety of tomographic-reconstruction and image-processing methods, and scoring metrics specific to each of the verification objectives are defined and used to evaluate the performance of the methods. This paper will provide a description of the analysis framework and evaluation metrics, example performance-prediction results, and describe the design of a “universal” GET instrument intended to support the full range of verification scenarios envisioned by the IAEA.

  8. Performance evaluation on FPGA-implemented UWB-IR receiver for in-body to out-of-body communication systems.

    PubMed

    Shimizu, Yuto; Anzai, Daisuke; Jianqing Wang

    2014-01-01

    In order to design an optimized transceiver structure for ultra-wideband (UWB) transmission in in-body to out-of-body communications, the transceiver structure must be easily adjustable so that good communication performance can be realized in an experimental environment. For this purpose, we first implement our developed UWB impulse radio (IR) receiver structure for in-body to out-of-body communication on a field programmable gate array (FPGA) board, and evaluate the fundamental communication performance of the FPGA-implemented UWB-IR receiver in a biological-equivalent liquid phantom experiment. The FPGA configuration results indicate that our FPGA realization of the UWB-IR receiver accomplishes good communication performance with few FPGA slices. Moreover, the evaluation results of the liquid phantom experiment show that the FPGA-implemented UWB-IR receiver can achieve a bit error rate (BER) of 10^-3 up to a communication distance of 70 mm while ensuring a high data rate of 2 Mbps.

  9. Formulation of a parametric systems design framework for disaster response planning

    NASA Astrophysics Data System (ADS)

    Mma, Stephanie Weiya

    The occurrence of devastating natural disasters in the past several years has prompted communities, responding organizations, and governments to seek ways to improve disaster preparedness capabilities locally, regionally, nationally, and internationally. A holistic approach to design used in the aerospace and industrial engineering fields enables efficient allocation of resources through applied parametric changes within a particular design to improve performance metrics to selected standards. In this research, this methodology is applied to disaster preparedness, using a community's time to restoration after a disaster as the response metric. A review of the responses to Hurricane Katrina and the 2010 Haiti earthquake, among other prominent disasters, provides observations leading to some current capability benchmarking. A need for holistic assessment and planning exists for communities, but the current response planning infrastructure lacks a standardized framework and standardized assessment metrics. Within the humanitarian logistics community, several different metrics exist, enabling quantification and measurement of a particular area's vulnerability. These metrics, combined with design and planning methodologies from related fields, such as engineering product design, military response planning, and business process redesign, provide insight and a framework from which to begin developing a methodology to enable holistic disaster response planning. The developed methodology was applied to the communities of Shelby County, TN and pre-Hurricane-Katrina Orleans Parish, LA. Available literature and reliable media sources provide information about the different values of system parameters within the decomposition of the community aspects and also about relationships among the parameters. The community was modeled as a system dynamics model and was tested in the implementation of two, five, and ten year improvement plans for Preparedness, Response, and Development

  10. FPGA-based Elman neural network control system for linear ultrasonic motor.

    PubMed

    Lin, Faa-Jeng; Hung, Ying-Chih

    2009-01-01

    A field-programmable gate array (FPGA)-based Elman neural network (ENN) control system is proposed to control the mover position of a linear ultrasonic motor (LUSM) in this study. First, the structure and operating principle of the LUSM are introduced. Because the dynamic characteristics and motor parameters of the LUSM are nonlinear and time-varying, an ENN control system is designed to achieve precision position control. The network structure and online learning algorithm using delta adaptation law of the ENN are described in detail. Then, a piecewise continuous function is adopted to replace the sigmoid function in the hidden layer of the ENN to facilitate hardware implementation. In addition, an FPGA chip is adopted to implement the developed control algorithm for possible low-cost and high-performance industrial applications. The effectiveness of the proposed control scheme is verified by some experimental results.
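    A hedged sketch of the two ideas in the abstract, the Elman context layer and a hardware-friendly piecewise-linear activation, is given below in Python; layer sizes, weights and breakpoints are illustrative, not the authors' values.

        import numpy as np

        def pw_sigmoid(x):
            """Piecewise-linear stand-in for the sigmoid, easier to realise in
            fixed-point hardware (breakpoints are illustrative)."""
            return np.clip(0.25 * x + 0.5, 0.0, 1.0)

        class Elman:
            def __init__(self, n_in, n_hid, n_out):
                rng = np.random.default_rng(0)
                self.Wx = 0.1 * rng.standard_normal((n_hid, n_in))
                self.Wc = 0.1 * rng.standard_normal((n_hid, n_hid))  # context weights
                self.Wo = 0.1 * rng.standard_normal((n_out, n_hid))
                self.context = np.zeros(n_hid)

            def step(self, x):
                h = pw_sigmoid(self.Wx @ x + self.Wc @ self.context)
                self.context = h              # copy hidden layer into context units
                return self.Wo @ h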

  11. A Secure Content Delivery System Based on a Partially Reconfigurable FPGA

    NASA Astrophysics Data System (ADS)

    Hori, Yohei; Yokoyama, Hiroyuki; Sakane, Hirofumi; Toda, Kenji

    We developed a content delivery system using a partially reconfigurable FPGA to securely distribute digital content on the Internet. With partial reconfigurability of a Xilinx Virtex-II Pro FPGA, the system provides an innovative single-chip solution for protecting digital content. In the system, a partial circuit must be downloaded from a server to the client terminal to play content. Content will be played only when the downloaded circuit is correctly combined (=interlocked) with the circuit built in the terminal. Since each circuit has a unique I/O configuration, the downloaded circuit interlocks with the corresponding built-in circuit designed for a particular terminal. Thus, the interface of the circuit itself provides a novel authentication mechanism. This paper describes the detailed architecture of the system and clarifies the feasibility and effectiveness of the system. In addition, we discuss a fail-safe mechanism and future work necessary for the practical application of the system.

  12. A Real-Time de novo DNA Sequencing Assembly Platform Based on an FPGA Implementation.

    PubMed

    Hu, Yuanqi; Georgiou, Pantelis

    2016-01-01

    This paper presents an FPGA based DNA comparison platform which can be run concurrently with the sensing phase of DNA sequencing and shortens the overall time needed for de novo DNA assembly. A hybrid overlap searching algorithm is applied which is scalable and can deal with incremental detection of new bases. To handle the incomplete data set which gradually increases during sequencing time, all-against-all comparisons are broken down into successive window-against-window comparison phases and executed using a novel dynamic suffix comparison algorithm combined with a partitioned dynamic programming method. The complete system has been designed to facilitate parallel processing in hardware, which allows real-time comparison and full scalability as well as a decrease in the number of computations required. A base pair comparison rate of 51.2 G/s is achieved when implemented on an FPGA with successful DNA comparison when using data sets from real genomes.
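    The core operation behind overlap-based assembly can be hedged in a few lines of Python: find the longest suffix of one read that matches a prefix of another. The real design replaces this brute-force scan with the dynamic suffix comparison and partitioned dynamic programming described above; the sketch only shows what is being computed.

        def best_overlap(a, b, min_len=8):
            """Length of the longest suffix of read 'a' matching a prefix of
            read 'b', or 0 if no overlap of at least min_len exists."""
            for k in range(min(len(a), len(b)), min_len - 1, -1):
                if a[-k:] == b[:k]:
                    return k
            return 0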

  13. An FPGA-based Doppler Processor for a Spaceborne Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Durden, S. L.; Fischman, M. A.; Johnson, R. A.; Chu, A. J.; Jourdan, M. N.; Tanelli, S.

    2007-01-01

    Measurement of precipitation Doppler velocity by spaceborne radar is complicated by the large velocity of the satellite platform. Even if successive pulses are well correlated, the velocity measurement may be biased if the precipitation target does not uniformly fill the radar footprint. It has been previously shown that the bias in such situations can be reduced if full spectral processing is used. The authors present a processor based on field-programmable gate array (FPGA) technology that can be used for spectral processing of data acquired by future spaceborne precipitation radars. The requirements for and design of the Doppler processor are addressed. Simulation and laboratory test results show that the processor can meet real-time constraints while easily fitting in a single FPGA.
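    Full spectral processing of this kind amounts to windowing the pulse sequence, taking an FFT, and forming the power-weighted mean Doppler frequency. A generic Python sketch is shown below; it is a textbook first-moment estimator, not the flight processor's algorithm, and the window choice is an assumption.

        import numpy as np

        def mean_doppler_velocity(iq, prf, wavelength):
            """Power-weighted mean Doppler frequency of a complex (I/Q) pulse
            sequence, mapped to radial velocity via v = f_d * wavelength / 2."""
            spec = np.fft.fftshift(np.fft.fft(iq * np.hanning(len(iq))))
            power = np.abs(spec) ** 2
            freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), d=1.0 / prf))
            f_mean = np.sum(freqs * power) / np.sum(power)
            return f_mean * wavelength / 2.0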

  15. A new cellular nonlinear network emulation on FPGA for EEG signal processing in epilepsy

    NASA Astrophysics Data System (ADS)

    Müller, Jens; Müller, Jan; Tetzlaff, Ronald

    2011-05-01

    For processing of EEG signals, we propose a new architecture for the hardware emulation of discrete-time Cellular Nonlinear Networks (DT-CNN). Our results show the importance of a high computational accuracy in EEG signal prediction that cannot be achieved with existing analogue VLSI circuits. The refined architecture of the processing elements and its resource schedule, the cellular network structure with local couplings, the FPGA-based embedded system containing the DT-CNN, and the data flow in the entire system will be discussed in detail. The proposed DT-CNN design has been implemented and tested on a Xilinx FPGA development platform. The embedded co-processor with a multi-threading kernel is utilised for control and pre-processing tasks and data exchange to the host via Ethernet. The performance of the implemented DT-CNN has been determined for a popular example and compared to that of a conventional computer.
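    One DT-CNN iteration combines a feedback template applied to the cell outputs, a control template applied to the inputs, and a bias. The Python sketch below uses small 2-D templates and SciPy's neighbourhood correlation as a stand-in for the FPGA processing elements; the template values and boundary handling are assumptions, not the authors' design.

        import numpy as np
        from scipy.ndimage import correlate

        def dtcnn_step(x, u, A, B, z):
            """One discrete-time CNN iteration: x(k+1) = A*y(k) + B*u + z,
            where '*' is the local template operation and y = clip(x, -1, 1)."""
            y = np.clip(x, -1.0, 1.0)          # standard piecewise-linear output
            return correlate(y, A, mode="nearest") + correlate(u, B, mode="nearest") + z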

  16. A frame-based domain-specific language for rapid prototyping of FPGA-based software-defined radios

    NASA Astrophysics Data System (ADS)

    Ouedraogo, Ganda Stephane; Gautier, Matthieu; Sentieys, Olivier

    2014-12-01

    Field-programmable gate array (FPGA) technology is expected to play a key role in the development of software-defined radio (SDR) platforms. Yet as this technology has evolved, low-level design methods for prototyping FPGA-based applications have changed little over the decades. In the context of SDR in particular, it is important to rapidly implement new waveforms to fulfill such a stringent flexibility paradigm. At the current time, different proposals have defined, through software-based approaches, some efficient methods to prototype SDR waveforms in a processor-based running environment. This paper describes a novel design flow for FPGA-based SDR applications. This flow relies upon high-level synthesis (HLS) principles and leverages the nascent HLS tools. Its entry point is a domain-specific language (DSL) which handles the complexity of programming an FPGA and integrates some SDR features so as to enable automatic waveform control generation from a data frame model. Two waveforms (IEEE 802.15.4 and IEEE 802.11a) have been designed and explored via this new methodology, and the results are highlighted in this paper.

  17. Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems

    NASA Technical Reports Server (NTRS)

    Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.

    2012-01-01

    Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives for freeing the main processor from work and improving overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations faster, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGA used by SpaceCube. Previous results have shown that communication between the embedded processor and the circuit creates a bottleneck that negatively affects overall performance. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.
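    The operation being accelerated is simple enough to state in a few lines; the NumPy sketch below computes the Multi-Spectral Euclidean Distance of every pixel in a multi-band cube to a reference spectrum, and is only a functional reference for what the two hardware architectures implement.

        import numpy as np

        def msed(cube, reference):
            """Euclidean distance of each pixel spectrum in a (rows, cols, bands)
            cube to a reference spectrum of length 'bands'."""
            diff = cube - reference            # broadcasts over the pixel axes
            return np.sqrt(np.sum(diff * diff, axis=-1))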

  18. An FPGA-based method for a reconfigurable and compact scanner controller

    NASA Astrophysics Data System (ADS)

    Thomas, J.; Megherbi, D.; Sliney, P.; Pyburn, D.; Sengupta, S.; Khoury, J.; Woods, C.; Kirstead, J.

    2005-08-01

    An essential part of a LADAR system is the scanner component. The physical scanner and its electrical controller must often be as compact as possible to meet the stringent physical requirements of the system. It is also advantageous to have a reconfigurable electrical scanner controller. This can allow real-time, automated, dynamic modifications to the scanning characteristics. Via reconfiguration, this can also allow a single scanner controller to be used on multiple physical scanners with different resonant frequencies and reflection angles. The most efficient method to construct a compact scanner controller with static or dynamic re-configurability is by using an FPGA-based system. FPGAs are extremely compact, reconfigurable, and can be programmed with very complex algorithms. We show here the design and testing of such an FPGA-based system. We show that this FPGA-based system is able to drive scanners at arbitrary frequencies with different waveforms and produce appropriate horizontal and vertical syncs of arbitrary pulse width. Several programmable constants are provided to allow re-configurability. Additionally, we show that very few essential components are required, so the system could potentially be compacted to approximately the size of a cell phone.

  19. A Framework for Design and Evaluation of Internet-Based Distance Learning Courses Phase One--Framework Justification, Design and Evaluation

    ERIC Educational Resources Information Center

    Baker, Russell K.

    2003-01-01

    Defined in its most basic form, distance learning occurs when the student and the instructor are logistically separated. The purpose of this paper is to propose a framework for the development and evaluation of online distance learning courses, based on integrating an adaptation of Tyler's principles within the levels of cognitive learning in…

  20. A Systems Engineering Framework for Design, Construction and Operation of the Next Generation Nuclear Plant

    SciTech Connect

    Edward J. Gorski; Charles V. Park; Finis H. Southworth

    2004-06-01

    Not since the International Space Station has a project of such wide participation been proposed for the United States. Ten countries, the European Union, universities, Department of Energy (DOE) laboratories, and industry will participate in the research and development, design, construction and/or operation of the fourth generation of nuclear power plants with a demonstration reactor to be built at a DOE site and operational by the middle of the next decade. This reactor will be like no other. The Next Generation Nuclear Plant (NGNP) will be passively safe, economical, highly efficient, modular, proliferation resistant, and sustainable. In addition to electrical generation, the NGNP will demonstrate efficient and cost effective generation of hydrogen to support the President’s Hydrogen Initiative. To effectively manage this multi-organizational and technologically complex project, systems engineering techniques and processes will be used extensively to ensure delivery of the final product. The technological and organizational challenges are complex. Research and development activities are required, material standards require development, hydrogen production, storage and infrastructure requirements are not well developed, and the Nuclear Regulatory Commission may further define risk-informed/performance-based approach to licensing. Detailed design and development will be challenged by the vast cultural and institutional differences across the participants. Systems engineering processes must bring the technological and organizational complexity together to ensure successful product delivery. This paper will define the framework for application of systems engineering to this $1.5B - $1.9B project.

  1. The effect of zirconia framework design on the failure of all-ceramic crown under static loading

    PubMed Central

    Taenguthai, Pakamard

    2015-01-01

    PURPOSE This in vitro study aimed to compare the failure load and failure characteristics of two different zirconia framework designs for premolar crowns subjected to static loading. MATERIALS AND METHODS Two types of zirconia frameworks, a conventional 0.5 mm even-thickness framework design (EV) and a 0.8 mm cutback of the full-contour crown anatomy design (CB), were made for 10 samples each. The veneering porcelain was added under a vacuum-formed polycarbonate shell of the full-contour crown to obtain the same total thickness for the experimental crowns. The crowns were cemented onto cobalt-chromium dies. The dies were tilted 45 degrees from the vertical plane so that loading produced a shear force on the cusp. All crowns were loaded at the lingual incline of the buccal cusp until fracture using a universal testing machine with a cross-head speed of 0.5 mm/min. The load-to-fracture values (N) were recorded and statistically analyzed by independent-sample t-test. RESULTS The mean and standard deviation of the failure load were 1,170.1 ± 90.9 N for the EV design and 1,450.4 ± 175.7 N for the CB design. A significant difference in the compressive failure load was found (P<.05). Regarding failure characteristics, the EV design showed only cohesive failures within the veneering porcelain, while the CB design showed more failures through the zirconia framework (8 of 10 samples). CONCLUSION There was a significant difference in the failure load between the two designs, and the framework design influences the failure characteristics of the zirconia crown. PMID:25932313

  2. Design of a leaching test framework for coal fly ash accounting for environmental conditions.

    PubMed

    Zandi, Mohammad; Russell, Nigel V

    2007-08-01

    Fly ash from coal combustion contains trace elements which, on disposal or utilisation, may leach out, and therefore be a potential environmental hazard. Environmental conditions have a great impact on the mobility of fly ash constituents as well as the physical and chemical properties of the fly ash. Existing standard leaching methods have been shown to be inadequate by not representing possible disposal or utilisation scenarios. These tests are often criticised on the grounds that the estimated results are not reliable, as they cannot be extrapolated to the application scenario. In order to simulate the leaching behaviour of fly ash under different environmental conditions and to reduce deviation between measurements in the field and the laboratory, it is vital to study the sensitivity of the fly ash constituents of interest to the major factors controlling leachability. pH, liquid-to-solid ratio, leaching time, leachant type and redox potential are parameters affecting the stability of elements in the fly ash. Sensitivity of trace elements to pH and liquid-to-solid ratio (as two major overriding factors) has been examined. Elements have been classified on the basis of their leaching behaviour under different conditions. Results from this study have been used to identify leaching mechanisms. The fly ash has also been examined under different standard batch leaching tests in order to evaluate and compare these tests. A Leaching Test Framework has been devised for assessing the stability of trace elements from fly ashes in different environments. This Framework assists in designing more realistic batch leaching tests appropriate to field conditions and can support the development of regulations and protocols for the management and disposal of coal combustion by-products or other solid wastes of environmental concern.

  3. A systematic framework for computer-aided design of engineering rubber formulations

    NASA Astrophysics Data System (ADS)

    Ghosh, Prasenjeet

    This thesis considers the design of engineering rubber formulations, whose unique properties of elasticity and resilience enable diverse applications. Engineering rubber formulations are a complex mixture of different materials called curatives that includes elastomers, fillers, crosslinking agents, accelerators, activators, retarders, anti-oxidants and processing aids, where the amount of curatives must be adjusted for each application. The characterization of the final properties of the rubber in application is complex and depends on the chemical interplay between the different curatives in formulation via vulcanization chemistry. The details of the processing conditions and the thermal, deformational, and chemical environment encountered in application also have a pronounced effect on the performance of the rubber. Consequently, for much of the history of rubber as an engineering material, its recipe formulations have been developed largely by trial-and-error, rather than by a fundamental understanding. A computer-aided, systematic and automated framework for the design of such materials is proposed in this thesis. The framework requires the solution to two sub-problems: (a) the forward problem, which involves prediction of the desired properties when the formulation is known and (b) the inverse problem that requires identification of the appropriate formulation, given the desired target properties. As part of the forward model, the chemistry of accelerated sulfur vulcanization is reviewed that permits integration of the knowledge of the past five decades in the literature to answer some old questions, reconcile some of the contradicting mechanisms and present a holistic description of the governing chemistry. Based on this mechanistic chemistry, a fundamental kinetic model is derived using population balance equations. The model quantitatively describes, for the first time, the different aspects of vulcanization chemistry. Subsequently, a novel three

  4. Novel intelligent real-time position tracking system using FPGA and fuzzy logic.

    PubMed

    Soares dos Santos, Marco P; Ferreira, J A F

    2014-03-01

    The main aim of this paper is to test if FPGAs are able to achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented in these architectures. A Multi-state Fuzzy Logic controller (FLC) was implemented both in a Xilinx(®) Virtex-II FPGA (XC2v1000) and in a soft real-time platform NI CompactRIO(®)-9002. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded into the FPGA platform. Performance gains up to 16 times in the steady-state error, up to 27 times in the overshoot and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC controller.
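    For readers unfamiliar with fuzzy position control, the sketch below shows the basic ingredients (triangular membership functions, a small rule table and weighted-average defuzzification) in Python; it is a toy single-input controller, not the Multi-state FLC of the paper, and all breakpoints are invented for illustration.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b, zero outside [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_control(error):
            """Toy controller: three rules map the fuzzified position error to a
            crisp drive command via weighted-average defuzzification."""
            rules = [                                  # (membership, command)
                (tri(error, -2.0, -1.0, 0.0), -1.0),   # negative error -> push back
                (tri(error, -1.0,  0.0, 1.0),  0.0),   # near-zero error -> hold
                (tri(error,  0.0,  1.0, 2.0),  1.0),   # positive error -> push forward
            ]
            w = sum(m for m, _ in rules)
            return sum(m * out for m, out in rules) / w if w else 0.0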

  5. Feasibility of a portable morphological scene change detection security system for field programmable gate arrays (FPGA)

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Smith, Jeremy S.; Wu, Q. Henry

    2008-04-01

    In this paper, we investigate the possibility of executing a Morphological Scene Change Detection (MSCD) system on a Field Programmable Gate Array (FPGA), which would allow its set-up in virtually any location, with the purpose of detecting intruders, raising an alarm to call security personnel, and issuing a signal to initiate a lockdown of the local area. This paper describes how the system was scaled down from the full-building, multi-computer system to an FPGA without losing any functionality, using Altera's DSP Builder development tool. Also included is an analysis of the different situations the system would encounter in the field and their respective alarm-triggering levels; these include indoors, outdoors, close-up, distance, high brightness, low light, bad weather, etc. The triggering mechanism is a pixel counter and threshold system, and its adaptive design is described. All the results shown in this paper are also verified by MATLAB m-files running on a full desktop PC, to show that the results obtained from the FPGA-based system are accurate.
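    The processing chain behind such an MSCD system can be sketched in a few lines: difference the current frame against a reference, binarize, clean the mask with a morphological opening, and trigger when enough changed pixels remain. The thresholds and structuring element below are placeholders, not the paper's calibrated alarm levels.

        import numpy as np
        from scipy.ndimage import binary_opening

        def scene_changed(frame, reference, diff_thresh=30, pixel_thresh=500):
            """Morphological scene change detection on two grayscale frames."""
            mask = np.abs(frame.astype(int) - reference.astype(int)) > diff_thresh
            mask = binary_opening(mask, structure=np.ones((3, 3)))  # remove speckle
            return mask.sum() > pixel_thresh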

  6. FPGA-based voltage and current dual drive system for high frame rate electrical impedance tomography.

    PubMed

    Khan, Shadab; Manwaring, Preston; Borsic, Andrea; Halter, Ryan

    2015-04-01

    Electrical impedance tomography (EIT) is used to image the electrical property distribution of a tissue under test. An EIT system comprises complex hardware and software modules, which are typically designed for a specific application. Upgrading these modules is a time-consuming process, and requires rigorous testing to ensure proper functioning of new modules with the existing ones. To this end, we developed a modular and reconfigurable data acquisition (DAQ) system using National Instruments' (NI) hardware and software modules, which offer inherent compatibility over generations of hardware and software revisions. The system can be configured to use up to 32 channels. This EIT system can be used to interchangeably apply a current or voltage signal and measure the tissue response in a semi-parallel fashion. A novel signal-averaging algorithm and a 512-point fast Fourier transform (FFT) computation block were implemented on the FPGA. FFT output bins were classified as signal or noise. Signal bins constitute a tissue's response to a pure or mixed tone signal. Signal bins' data can be used for traditional applications, as well as synchronous frequency-difference imaging. Noise bins were used to compute noise power on the FPGA. Noise power represents a metric of signal quality, and can be used to ensure proper tissue-electrode contact. Allocation of these computationally expensive tasks to the FPGA reduced the required bandwidth between the PC and the FPGA for high frame rate EIT. In the 16-channel configuration, with a signal-averaging factor of 8, the DAQ frame rate at 100 kHz exceeded 110 frames s⁻¹, and the signal-to-noise ratio exceeded 90 dB across the spectrum. Reciprocity error was found to be for frequencies up to 1 MHz. Static imaging experiments were performed on a high-conductivity inclusion placed in a saline-filled tank; the inclusion was clearly localized in the reconstructions obtained for both absolute current and voltage mode data. PMID:25376037
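
    The signal/noise bin classification and the on-FPGA noise-power metric described above can be mimicked in software. The sketch below is a NumPy approximation under assumed parameters (a 1.024 MHz sampling rate, a single on-bin stimulus tone, averaging factor 8); it is an illustration of the idea, not the authors' FPGA implementation.

      import numpy as np

      FS = 1.024e6        # assumed sampling rate (Hz), chosen so 100 kHz falls on an FFT bin
      NFFT = 512          # 512-point FFT, as in the paper
      AVG = 8             # signal-averaging factor of 8

      def classify_bins(samples, stim_freqs):
          """Average AVG frames, FFT, and split bins into signal and noise sets."""
          frames = samples[: AVG * NFFT].reshape(AVG, NFFT)
          avg_frame = frames.mean(axis=0)                 # time-domain signal averaging
          spectrum = np.fft.rfft(avg_frame)
          freqs = np.fft.rfftfreq(NFFT, d=1.0 / FS)
          # Bins nearest to the applied (pure or mixed) tones are "signal" bins.
          signal_idx = np.array([np.argmin(np.abs(freqs - f)) for f in stim_freqs])
          noise_mask = np.ones(spectrum.size, dtype=bool)
          noise_mask[signal_idx] = False
          signal_power = np.sum(np.abs(spectrum[signal_idx]) ** 2)
          noise_power = np.sum(np.abs(spectrum[noise_mask]) ** 2)   # contact-quality metric
          snr_db = 10 * np.log10(signal_power / noise_power)
          return spectrum[signal_idx], noise_power, snr_db

      # Usage: a 100 kHz tone plus a small amount of white noise.
      t = np.arange(AVG * NFFT) / FS
      x = np.sin(2 * np.pi * 100e3 * t) + 1e-3 * np.random.default_rng(1).standard_normal(t.size)
      sig_bins, npow, snr = classify_bins(x, [100e3])
      print(round(snr, 1), "dB")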

  8. Engineering Overview of a Multidisciplinary HSCT Design Framework Using Medium-Fidelity Analysis Codes

    NASA Technical Reports Server (NTRS)

    Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.

    1999-01-01

    An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center of gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. Currently, two mass cases are considered: cruise and gross take-off weight (GTOW

  9. Designed synthesis of double-stage two-dimensional covalent organic frameworks

    NASA Astrophysics Data System (ADS)

    Chen, Xiong; Addicoat, Matthew; Jin, Enquan; Xu, Hong; Hayashi, Taku; Xu, Fei; Huang, Ning; Irle, Stephan; Jiang, Donglin

    2015-10-01

    Covalent organic frameworks (COFs) are an emerging class of crystalline porous polymers in which organic building blocks are covalently and topologically linked to form extended crystalline polygon structures, constituting a new platform for designing π-electronic porous materials. However, COFs are currently synthesised by a few chemical reactions, limiting the access to and exploration of new structures and properties. The development of new reaction systems that avoid such limitations to expand structural diversity is highly desired. Here we report that COFs can be synthesised via a double-stage connection that polymerises various different building blocks into crystalline polygon architectures, leading to the development of a new type of COFs with enhanced structural complexity and diversity. We show that the double-stage approach not only controls the sequence of building blocks but also allows fine engineering of pore size and shape. This strategy is widely applicable to different polymerisation systems to yield hexagonal, tetragonal and rhombus COFs with predesigned pores and π-arrays.

  10. Designed synthesis of double-stage two-dimensional covalent organic frameworks

    PubMed Central

    Chen, Xiong; Addicoat, Matthew; Jin, Enquan; Xu, Hong; Hayashi, Taku; Xu, Fei; Huang, Ning; Irle, Stephan; Jiang, Donglin

    2015-01-01

    Covalent organic frameworks (COFs) are an emerging class of crystalline porous polymers in which organic building blocks are covalently and topologically linked to form extended crystalline polygon structures, constituting a new platform for designing π-electronic porous materials. However, COFs are currently synthesised by a few chemical reactions, limiting the access to and exploration of new structures and properties. The development of new reaction systems that avoid such limitations to expand structural diversity is highly desired. Here we report that COFs can be synthesised via a double-stage connection that polymerises various different building blocks into crystalline polygon architectures, leading to the development of a new type of COFs with enhanced structural complexity and diversity. We show that the double-stage approach not only controls the sequence of building blocks but also allows fine engineering of pore size and shape. This strategy is widely applicable to different polymerisation systems to yield hexagonal, tetragonal and rhombus COFs with predesigned pores and π-arrays. PMID:26456081

  11. Framework design for remote sensing monitoring and data service system of regional river basins

    NASA Astrophysics Data System (ADS)

    Fu, Jun'e.; Lu, Jingxuan; Pang, Zhiguo

    2015-08-01

    Regional river basins, transboundary rivers in particular, are shared water resources among multiple users. The tempo-spatial distribution and utilization potential of water resources in these basins have a great influence on the economic layout and the social development of all interested parties. However, because these basins cross borders and serve multiple users, basic data are relatively scarce and inconsistent, which complicates basin water resources management. To meet the basic data requirements of regional river management, the overall technical framework for a remote sensing monitoring and data service system for China's regional river basins was designed in this paper, with a remote-sensing-driven distributed basin hydrologic model developed and integrated within the framework. This prototype system is able to extract most of the model's required land-surface data from multi-source and multi-temporal remote sensing images, to run a distributed basin hydrological simulation model, to carry out various scenario analyses, and to provide data services to decision makers.

  12. Zr-based metal-organic frameworks: design, synthesis, structure, and applications.

    PubMed

    Bai, Yan; Dou, Yibo; Xie, Lin-Hua; Rutledge, William; Li, Jian-Rong; Zhou, Hong-Cai

    2016-04-21

    Among the large family of metal-organic frameworks (MOFs), Zr-based MOFs, which exhibit rich structure types, outstanding stability, intriguing properties and functions, are foreseen as one of the most promising MOF materials for practical applications. Although this specific type of MOF is still in its early stage of development, significant progress has been made in recent years. Herein, advances in Zr-MOFs since 2008 are summarized and reviewed from three aspects: design and synthesis, structure, and applications. Four synthesis strategies implemented in building and/or modifying Zr-MOFs as well as their scale-up preparation under green and industrially feasible conditions are illustrated first. Zr-MOFs with various structural types are then classified and discussed in terms of different Zr-based secondary building units and organic ligands. Finally, applications of Zr-MOFs in catalysis, molecule adsorption and separation, drug delivery, and fluorescence sensing, and as porous carriers are highlighted. Such a review based on a specific type of MOF is expected to provide guidance for the in-depth investigation of MOFs towards practical applications.

  13. TESLA cavity driving with FPGA controller

    NASA Astrophysics Data System (ADS)

    Czarski, Tomasz; Pozniak, Krzysztof; Romaniuk, Ryszard; Simrock, Stefan

    2005-09-01

    The digital control of the TESLA (TeV-Energy Superconducting Linear Accelerator) resonator is presented. The laboratory setup of the CHECHIA cavity at DESY-Hamburg has been driven by an FPGA (Field Programmable Gate Array) technology system. This experiment focused on the general characterization of the cavity and on the proposed control methods. The electrical model of the resonator is taken as the starting point of the analysis. The calibration of the signal channel is considered a key preparation for efficient cavity driving. The identification of the resonator parameters is confirmed as a proper approach for the required performance: driving on resonance during filling and field stabilization during flattop time with reasonable power consumption. The feed-forward and feedback modes were applied successfully for driving the CHECHIA cavity. Representative results of experiments are presented for different levels of the cavity field gradient.

  14. Stego on FPGA: an IWT approach.

    PubMed

    Ramalingam, Balakrishnan; Amirtharajan, Rengarajan; Rayappan, John Bosco Balaguru

    2014-01-01

    A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar-IWT was used to separate the subbands, namely LL, LH, HL, and HH, from 8 × 8 pixel blocks, and the encrypted secret data is hidden in the LH, HL, and HH blocks using Moore and Hilbert space filling curve (SFC) scan patterns. Either the Moore or the Hilbert SFC was chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produces the lowest mean square error (MSE) and the highest peak signal-to-noise ratio (PSNR). The scan pattern chosen for each block is recorded, and this record constitutes the secret key. Our system took 1.6 µs for embedding the data in the coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA).
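
    A one-level 2-D Haar integer wavelet transform of an 8 × 8 block, together with the MSE/PSNR criterion used to pick the scan pattern, can be sketched in software as follows. This is only an illustration of the subband split and the selection rule; the Moore and Hilbert space-filling-curve embedding and the hardware data path are not reproduced, and embed_with_scan is a hypothetical stub supplied by the caller.

      import numpy as np

      def haar_iwt_2d(block):
          """One-level integer Haar (lifting) transform of an 8x8 block -> LL, LH, HL, HH."""
          b = block.astype(np.int32)
          # Transform rows: s = floor((a+b)/2), d = a - b.
          s = (b[:, 0::2] + b[:, 1::2]) >> 1
          d = b[:, 0::2] - b[:, 1::2]
          rows = np.hstack([s, d])
          # Transform columns of the row-transformed block.
          s2 = (rows[0::2, :] + rows[1::2, :]) >> 1
          d2 = rows[0::2, :] - rows[1::2, :]
          ll, hl = s2[:, :4], s2[:, 4:]
          lh, hh = d2[:, :4], d2[:, 4:]
          return ll, lh, hl, hh

      def psnr(orig, stego):
          mse = np.mean((orig.astype(np.float64) - stego.astype(np.float64)) ** 2)
          return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

      def choose_scan(block, secret_bits, embed_with_scan):
          """Pick the scan pattern ('moore' or 'hilbert') that yields the higher PSNR."""
          candidates = {scan: embed_with_scan(block, secret_bits, scan)  # hypothetical embedder
                        for scan in ("moore", "hilbert")}
          return max(candidates, key=lambda s: psnr(block, candidates[s]))

      # Demo: subband split of a random 8x8 block.
      block = np.random.default_rng(0).integers(0, 256, (8, 8))
      ll, lh, hl, hh = haar_iwt_2d(block)
      print(ll.shape, lh.shape, hl.shape, hh.shape)   # each (4, 4)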

  15. Framework for identifying recommended rules and DFM scoring model to improve manufacturability of sub-20nm layout design

    NASA Astrophysics Data System (ADS)

    Pathak, Piyush; Madhavan, Sriram; Malik, Shobhit; Wang, Lynn T.; Capodieci, Luigi

    2012-03-01

    This paper addresses the framework for building critical recommended rules and a methodology for devising scoring models using simulation or silicon data. Recommended rules need to be applied to critical layout configurations (edge- or polygon-based geometric relations), which can cause yield issues depending on layout context and process variability. Determining critical recommended rules is the first step of this framework. Based on process specifications and design rule calculations, recommended rules are characterized by evaluating the manufacturability response to improvements in a layout-dependent parameter. This study is applied to critical 20nm recommended rules. In order to enable the scoring of layouts, this paper also discusses a CAD framework involved in supporting use-models for improving the DFM-compliance of a physical design.

  16. FPGA for Power Control of MSL Avionics

    NASA Technical Reports Server (NTRS)

    Wang, Duo; Burke, Gary R.

    2011-01-01

    A PLGT FPGA (Field Programmable Gate Array) is included in the LCC (Load Control Card), GID (Guidance Interface & Drivers), TMC (Telemetry Multiplexer Card), and PFC (Pyro Firing Card) boards of the Mars Science Laboratory (MSL) spacecraft. (PLGT stands for PFC, LCC, GID, and TMC.) It provides the interface between the backside bus and the power drivers on these boards. The LCC drives power switches to switch power loads, and also relays. The GID drives the thrusters and latch valves, and provides the star-tracker and Sun-sensor interface. The PFC drives pyros, and the TMC receives digital and analog telemetry. The FPGA is implemented both in Xilinx (Spartan 3-400) and in Actel (RTSX72SU, ASX72S). The Xilinx Spartan 3 part is used for the breadboard, the Actel ASX part is used for the EM (Engineering Model), and the pin-compatible, radiation-hardened RTSX part is used for the final EM and flight. The MSL spacecraft uses an FC (Flight Computer) to control power loads, relays, thrusters, latch valves, the Sun-sensor, and the star-tracker, and to read telemetry such as temperature. Commands are sent over a 1553 bus to the MREU (Multi-Mission System Architecture Platform Remote Engineering Unit). The MREU resends them over a remote serial command bus (c-bus) to the LCC, GID, TMC, and PFC. The MREU also sends out telemetry addresses via a remote serial telemetry address bus to the LCC, GID, TMC, and PFC, and the status is returned over the remote serial telemetry data bus.

  17. Active pharmaceutical ingredient (API) production involving continuous processes--a process system engineering (PSE)-assisted design framework.

    PubMed

    Cervera-Padrell, Albert E; Skovby, Tommy; Kiil, Søren; Gani, Rafiqul; Gernaey, Krist V

    2012-10-01

    A systematic framework is proposed for the design of continuous pharmaceutical manufacturing processes. Specifically, the design framework focuses on organic chemistry based, active pharmaceutical ingredient (API) synthetic processes, but could potentially be extended to biocatalytic and fermentation-based products. The method exploits the synergic combination of continuous flow technologies (e.g., microfluidic techniques) and process systems engineering (PSE) methods and tools for faster process design and increased process understanding throughout the whole drug product and process development cycle. The design framework structures the many different and challenging design problems (e.g., solvent selection, reactor design, and design of separation and purification operations), driving the user from the initial drug discovery steps--where process knowledge is very limited--toward the detailed design and analysis. Examples from the literature of PSE methods and tools applied to pharmaceutical process design and novel pharmaceutical production technologies are provided throughout the text, assisting in the accumulation and interpretation of process knowledge. Different criteria are suggested for the selection of batch and continuous processes so that the whole design results in low capital and operational costs as well as low environmental footprint. The design framework has been applied to the retrofit of an existing batch-wise process used by H. Lundbeck A/S to produce an API: zuclopenthixol. Some of its batch operations were successfully converted into continuous mode, obtaining higher yields that allowed a significant simplification of the whole process. The material and environmental footprint of the process--evaluated through the process mass intensity index, that is, kg of material used per kg of product--was reduced to half of its initial value, with potential for further reduction. The case study includes reaction steps typically used by the pharmaceutical

  18. Three Dialogs: A Framework for the Analysis and Assessment of Twenty-First-Century Literacy Practices, and Its Use in the Context of Game Design within "Gamestar Mechanic"

    ERIC Educational Resources Information Center

    Games, Ivan Alex

    2008-01-01

    This article discusses a framework for the analysis and assessment of twenty-first-century language and literacy practices in game and design-based contexts. It presents the framework in the context of game design within "Gamestar Mechanic", an innovative game-based learning environment where children learn the Discourse of game design. It…

  19. Designing Energy Supply Chains with the P-Graph Framework under Cost Constraints and Sustainability Considerations

    EPA Science Inventory

    A computer-aided methodology for designing sustainable supply chains is presented using the P-graph framework to develop supply chain structures which are analyzed using cost, the cost of producing electricity, and two sustainability metrics: ecological footprint and emergy. They...

  20. Exploring a Framework for Professional Development in Curriculum Innovation: Empowering Teachers for Designing Context-Based Chemistry Education

    ERIC Educational Resources Information Center

    Stolk, Machiel J.; De Jong, Onno; Bulte, Astrid M. W.; Pilot, Albert

    2011-01-01

    Involving teachers in early stages of context-based curriculum innovations requires a professional development programme that actively engages teachers in the design of new context-based units. This study considers the implementation of a teacher professional development framework aiming to investigate processes of professional development. The…

  1. A Design Framework for Enhancing Engagement in Student-Centered Learning: Own It, Learn It, and Share It

    ERIC Educational Resources Information Center

    Lee, Eunbae; Hannafin, Michael J.

    2016-01-01

    Student-centered learning (SCL) identifies students as the owners of their learning. While SCL is increasingly discussed in K-12 and higher education, researchers and practitioners lack a current and comprehensive framework to design, develop, and implement SCL. We examine the implications of theory and research-based evidence to inform those who…

  2. Design-Grounded Assessment: A Framework and a Case Study of Web 2.0 Practices in Higher Education

    ERIC Educational Resources Information Center

    Ching, Yu-Hui; Hsu, Yu-Chang

    2011-01-01

    This paper synthesizes three theoretical perspectives, namely sociocultural theory, distributed cognition, and situated cognition, into a framework to guide the design and assessment of Web 2.0 practices in higher education. In addition, this paper presents a case study of Web 2.0 practices. Thirty-seven online graduate students participated in…

  3. An FPGA computing demo core for space charge simulation

    SciTech Connect

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
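
    The table-look-up strategy for the inverse square-root cube, (r²)^(-3/2), can be emulated in software: normalise r² to a leading-one mantissa, index a small table with its top significant bits, and rescale by the exponent. The sketch below is a floating-point illustration of that idea, not the 16-bit fixed-point core itself; the table size and bit widths are assumptions.

      import numpy as np

      MANT_BITS = 9                                   # table addressed by ~9-10 significant bits
      TABLE = np.array([(1.0 + i / 2**MANT_BITS) ** -1.5 for i in range(2**MANT_BITS)])

      def inv_sqrt_cube(r2):
          """Approximate (r2)**-1.5 via a mantissa LUT plus exponent rescaling."""
          e = np.floor(np.log2(r2)).astype(int)        # position of the leading non-zero bit
          m = r2 / np.exp2(e)                          # mantissa in [1, 2)
          idx = ((m - 1.0) * 2**MANT_BITS).astype(int)
          return TABLE[idx] * np.exp2(-1.5 * e)

      def coulomb_force(p1, p2, q1q2=1.0):
          """Pairwise Coulomb force using the LUT-based inverse square-root cube."""
          d = p1 - p2
          r2 = np.dot(d, d)
          return q1q2 * d * inv_sqrt_cube(r2)          # F = q1*q2 * d / r^3

      # Usage: compare the LUT approximation against the exact value.
      p1, p2 = np.array([1.0, 2.0, 0.5]), np.array([0.0, 0.0, 0.0])
      approx = coulomb_force(p1, p2)
      exact = (p1 - p2) * np.dot(p1 - p2, p1 - p2) ** -1.5
      print(approx, exact)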

  4. Cryogenic loss monitors with FPGA TDC signal processing

    SciTech Connect

    Warner, A.; Wu, J.; /Fermilab

    2011-09-01

    Radiation-hard helium gas ionization chambers capable of operating in vacuum at temperatures ranging from 5 K to 350 K have been designed, fabricated and tested, and will be used inside the cryostats at Fermilab's Superconducting Radiofrequency beam test facility. The chamber vessels are made of stainless steel, and all materials used, including seals, are known to be radiation hard and suitable for operation at 5 K. The chambers are designed to measure radiation up to 30 kRad/hr with a sensitivity of approximately 1.9 pA/(Rad/hr). The signal current is measured with a recycling-integrator current-to-frequency converter to achieve the required low-current measurement capability over a wide dynamic range. A novel scheme of using an FPGA-based time-to-digital converter (TDC) to measure the time intervals between pulses output from the recycling integrator is employed to ensure a fast beam loss response along with a current measurement resolution better than 10 bits. This paper describes the results obtained and highlights the processing techniques used.
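
    The measurement principle described above can be summarised in a few lines: the recycling integrator emits a pulse each time a fixed charge packet has accumulated, the FPGA TDC measures the interval between pulses, and the current is the packet charge divided by that interval. In the sketch below, the charge-packet value and the timestamps are purely illustrative assumptions; only the 1.9 pA/(Rad/hr) sensitivity is taken from the abstract.

      # Current and dose-rate estimation from recycling-integrator pulse timestamps.
      Q_PACKET = 1.0e-12          # assumed charge per integrator cycle (coulombs)
      SENSITIVITY = 1.9e-12       # 1.9 pA per (Rad/hr), from the chamber design

      def current_from_pulses(timestamps_s):
          """I = Q_packet / dt for each interval measured by the FPGA TDC."""
          return [Q_PACKET / (t1 - t0) for t0, t1 in zip(timestamps_s, timestamps_s[1:])]

      def dose_rate_rad_per_hr(current_a):
          return current_a / SENSITIVITY

      # Usage with hypothetical TDC timestamps (seconds).
      ts = [0.000, 0.020, 0.041, 0.061, 0.082]
      for i in current_from_pulses(ts):
          print(f"{i*1e12:.1f} pA  ->  {dose_rate_rad_per_hr(i):.1f} Rad/hr")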

  5. Defining, Designing for, and Measuring "Social Constructivist Digital Literacy" Development in Learners: A Proposed Framework

    ERIC Educational Resources Information Center

    Reynolds, Rebecca

    2016-01-01

    This paper offers a newly conceptualized modular framework for digital literacy that defines this concept as a task-driven "social constructivist digital literacy," comprising 6 practice domains grounded in Constructionism and social constructivism: Create, Manage, Publish, Socialize, Research, Surf. The framework articulates possible…

  6. Firmware-only implementation of Time-to-Digital Converter (TDC) in Field-Programmable Gate Array (FPGA)

    SciTech Connect

    Jinyuan Wu; Zonghan Shi; Irena Y Wang

    2003-11-07

    A Time-to-Digital Converter (TDC) implemented in a general-purpose field-programmable gate array (FPGA) for the Fermilab CKM experiment is presented. The TDC uses a delay-chain-and-register-array structure to produce the lower bits in addition to the higher bits from a clock counter. Lacking the direct control available in custom chips, the FPGA implementation of the delay chain and register array structure had to address two major problems: (1) the logic elements used for the delay chain and register array structure must be placed and routed by the FPGA compiler in a predictable manner, to assure uniformity of the TDC binning and short-term stability; (2) the delay variation due to temperature and power supply voltage must be compensated for to assure long-term stability. To solve the first problem, they used the chain structures in existing FPGAs that the vendors designed for general purposes, such as carry chains or logic expansion. To compensate for delay variations, they studied several digital compensation strategies that can be implemented in the same FPGA device. Some bench-top test results are also presented in this document.
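
    Conceptually, such a TDC produces a coarse timestamp from the clock counter and fine bits from a "thermometer" pattern captured in the register array: the number of delay taps the hit traversed before the clock edge. A minimal software model of that decoding is sketched below; the clock period, tap delay and chain length are assumed values, and real designs additionally calibrate the tap delays (for example by code-density tests) to compensate for temperature and voltage drift.

      def decode_thermometer(reg_bits):
          """Count leading 1s captured in the register array (taps the edge passed)."""
          fine = 0
          for b in reg_bits:
              if not b:
                  break
              fine += 1
          return fine

      def tdc_timestamp(coarse_count, reg_bits, clk_period_ns=10.0, tap_delay_ns=0.1):
          """Combine the clock-counter value (higher bits) with the delay-chain fine bits."""
          fine = decode_thermometer(reg_bits)
          # The edge arrived 'fine' tap delays before the sampling clock edge.
          return coarse_count * clk_period_ns - fine * tap_delay_ns

      # Usage: coarse count 42, edge propagated through 7 taps of a 16-tap chain.
      print(tdc_timestamp(42, [1] * 7 + [0] * 9))   # -> 419.3 ns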

  7. A novel real-time resource efficient implementation of Sobel operator-based edge detection on FPGA

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Saini, Anil K.; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2014-12-01

    A new resource-efficient FPGA-based hardware architecture for real-time edge detection using the Sobel operator for video surveillance applications has been proposed. The choice of the Sobel operator is due to its property of counteracting the noise sensitivity of the simple gradient operator. An FPGA is chosen for this implementation due to its flexibility, which allows algorithmic changes in later stages of system development, and its capability to provide real-time performance that is hard to achieve with a general-purpose processor or digital signal processor, while limiting the extensive design work, time and cost required for an application-specific integrated circuit. The proposed architecture uses a single processing element for both the horizontal and vertical gradient computations of the Sobel operator and utilises approximately 38% fewer FPGA resources than the standard Sobel edge detection architecture, while maintaining real-time frame rates for high-definition videos (1920 × 1080 image sizes). The complete system is implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA board.
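
    For reference, the Sobel gradient itself is a pair of 3 × 3 convolutions whose results are combined into an edge magnitude; in the proposed architecture a single processing element is time-shared between the horizontal and vertical kernels. The NumPy sketch below shows the arithmetic only, with an assumed threshold; the streaming line-buffer organisation of the hardware is not modelled.

      import numpy as np
      from scipy.ndimage import convolve

      GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)   # horizontal gradient
      GY = GX.T                                                             # vertical gradient

      def sobel_edges(gray, threshold=128):
          """Edge map from |Gx| + |Gy| (a common hardware-friendly magnitude approximation)."""
          img = gray.astype(np.int32)
          gx = convolve(img, GX, mode="nearest")   # same 3x3 window feeds both kernels
          gy = convolve(img, GY, mode="nearest")
          mag = np.abs(gx) + np.abs(gy)
          return (mag > threshold).astype(np.uint8) * 255

      # Usage on a synthetic frame with a vertical step edge.
      frame = np.zeros((64, 64), dtype=np.uint8)
      frame[:, 32:] = 200
      print(sobel_edges(frame).sum() // 255, "edge pixels")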

  8. Design and construction of porous metal-organic frameworks based on flexible BPH pillars

    SciTech Connect

    Hao, Xiang-Rong; Yang, Guang-sheng; Shao, Kui-Zhan; Su, Zhong-Min; Yuan, Gang; Wang, Xin-Long

    2013-02-15

    Three metal-organic frameworks (MOFs), [Co₂(BPDC)₂(4-BPH)·3DMF]ₙ (1), [Cd₂(BPDC)₂(4-BPH)₂·2DMF]ₙ (2) and [Ni₂(BDC)₂(3-BPH)₂(H₂O)·4DMF]ₙ (3) (H₂BPDC = biphenyl-4,4′-dicarboxylic acid, H₂BDC = terephthalic acid, BPH = bis(pyridinylethylidene)hydrazine and DMF = N,N′-dimethylformamide), have been solvothermally synthesized based on the insertion of heterogeneous BPH pillars. Framework 1 has a 'single-pillared' MOF-5-like motif with inner cage diameters of up to 18.6 Å. Framework 2 has a 'double-pillared' MOF-5-like motif with cage diameters of 19.2 Å, while 3 has a 'double-pillared' 8-connected framework with channel diameters of 11.0 Å. Powder X-ray diffraction (PXRD) shows that 3 is a dynamic porous framework. Graphical abstract: By insertion of flexible BPH pillars based on a 'pillaring' strategy, three metal-organic frameworks are obtained, showing that porous frameworks can be constructed in much greater variety. Highlights: Frameworks 1 and 2 have a MOF-5-like motif. The cube-like cages in 1 and 2 are quite large, comparable to those of IRMOF-10. Framework 1 is in 'single-pillared' mode while 2 is in 'double-pillared' mode. PXRD and gas adsorption analysis show that 3 is a dynamic porous framework.

  9. Improved On-Chip Measurement of Delay in an FPGA or ASIC

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Burke, Gary; Sheldon, Douglas

    2007-01-01

    An improved design has been devised for on-chip circuitry for measuring the delay through a chain of combinational logic elements in a field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC). In the improved design, the delay chain does not include input and output buffers and is not configured as an oscillator. Instead, the delay chain is made part of the signal chain of an on-chip pulse generator. The duration of the pulse is measured on-chip and taken to equal the delay.

  10. Digital Real-Time Multiple Channel Multiple Mode Neutron Flux Estimation on FPGA-based Device

    NASA Astrophysics Data System (ADS)

    Thevenin, Mathieu; Barbot, Loïc; Corre, Gwénolé; Woo, Romuald; Destouches, Christophe; Normand, Stéphane

    2016-02-01

    This paper presents a complete custom, fully digital instrumentation device designed for real-time neutron flux estimation, especially for nuclear reactor in-core measurement using subminiature Fission Chambers (FCs). The entire fully functional, small-footprint design (about 1714 LUTs) is implemented on an FPGA. It enables real-time acquisition and analysis of the neutron flux on multiple channels in both counting mode and Campbelling mode. Experimental results obtained from this brand new device are consistent with simulation results and show good agreement within the measurement uncertainty. This device paves the way for new application perspectives in real-time nuclear reactor monitoring.
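
    The two estimation modes mentioned above can be illustrated on a sampled fission-chamber waveform: counting mode thresholds individual pulses, while Campbelling mode exploits Campbell's theorem, in which the variance of the signal is proportional to the event rate. The NumPy sketch below is schematic only, with an assumed sampling rate, pulse shape, threshold and calibration constant; it does not reflect the FPGA implementation.

      import numpy as np

      FS = 1.0e6        # assumed sampling rate (Hz)

      def counting_mode(signal, threshold):
          """Count rising-edge threshold crossings per second (low-flux regime)."""
          above = signal > threshold
          crossings = np.count_nonzero(above[1:] & ~above[:-1])
          return crossings / (signal.size / FS)

      def campbelling_mode(signal, k_cal):
          """Campbell's theorem: Var(signal) = k * rate, so rate = Var / k (high-flux regime)."""
          return np.var(signal) / k_cal

      # Usage: synthetic pulse train at ~2000 pulses per second plus electronic noise.
      rng = np.random.default_rng(0)
      n = int(FS)                                   # one second of samples
      sig = 0.01 * rng.standard_normal(n)
      for p in rng.choice(n - 10, size=2000, replace=False):
          sig[p:p + 5] += 1.0                       # rectangular 5-sample pulses
      print(round(counting_mode(sig, 0.5)), "counts/s (counting mode)")
      print(round(campbelling_mode(sig, k_cal=5 / FS)), "counts/s (Campbelling mode)")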

  11. From Human Factors to Human Actors to Human Crafters: A Meta-Design Inspired Participatory Framework for Designing in Use

    ERIC Educational Resources Information Center

    Maceli, Monica Grace

    2012-01-01

    Meta-design theory emphasizes that system designers can never anticipate all future uses of their system at design time, when systems are being developed. Rather, end users shape their environments in response to emerging needs at use time. Meta-design theory suggests that systems should therefore be designed to adapt to future conditions in the…

  12. A User-Centered Framework for Deriving A Conceptual Design From User Experiences: Leveraging Personas and Patterns to Create Usable Designs

    NASA Astrophysics Data System (ADS)

    Javahery, Homa; Deichman, Alexander; Seffah, Ahmed; Taleb, Mohamed

    Patterns are a design tool to capture best practices, tackling problems that occur in different contexts. A user interface (UI) design pattern spans several levels of design abstraction ranging from high-level navigation to low-level idioms detailing a screen layout. One challenge is to combine a set of patterns to create a conceptual design that reflects user experiences. In this chapter, we detail a user-centered design (UCD) framework that exploits the novel idea of using personas and patterns together. Personas are used initially to collect and model user experiences. UI patterns are selected based on persona specifications; these patterns are then used as building blocks for constructing conceptual designs. Through the use of a case study, we illustrate how personas and patterns can act as complementary techniques in narrowing the gap between two major steps in UCD: capturing users and their experiences, and building an early design based on that information. As a result of lessons learned from the study and by refining our framework, we define a more systematic process called UX-P (User Experiences to Pattern), with a supporting tool. The process introduces intermediate analytical steps and supports designers in creating usable designs.

  13. High-Speed, Multi-Channel Serial ADC LVDS Interface for Xilinx Virtex-5 FPGA

    NASA Technical Reports Server (NTRS)

    Taylor, Gregory H.

    2012-01-01

    Analog-to-digital converters (ADCs) are used in scientific and communications instruments on all spacecraft. As data rates get higher, and as the transition is made from parallel ADC designs to high-speed, serial, low-voltage differential signaling (LVDS) designs, the need will arise to interface these in field programmable gate arrays (FPGAs). As Xilinx has released the radiation-hardened version of the Virtex-5, this will likely be used in future missions. High-speed serial ADCs send data at very high rates. A de-serializer instantiated in the fabric of the FPGA could not keep up with these high data rates. The Virtex-5 contains primitives designed specifically for high-speed, source-synchronous de-serialization, but as supported by Xilinx, can only support bitwidths of 10. Supporting bit-widths of 12 or more requires the use of the primitives in an undocumented configuration, a non-trivial task. A new SystemVerilog design was written that is simpler and uses fewer hardware resources than the reference design described in Xilinx Application Note XAPP866. It has been shown to work in a Xilinx XC5VSX24OT connected to a MAXIM MAX1438 12-bit ADC using a 50-MHz sample clock. The design can be replicated in the FPGA for multiple ADCs (four instantiations were used for a total of 28 channels).

  14. A software engineering perspective on environmental modeling framework design: The object modeling system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The environmental modeling community has historically been concerned with the proliferation of models and the effort associated with collective model development tasks (e.g., code generation, data provisioning and transformation, etc.). Environmental modeling frameworks (EMFs) have been developed to...

  15. Narrative Means to Preventative Ends: A Narrative Engagement Framework for Designing Prevention Interventions

    PubMed Central

    Miller-Day, Michelle; Hecht, Michael L.

    2013-01-01

    This paper describes a Narrative Engagement Framework (NEF) for guiding communication-based prevention efforts. This framework suggests that personal narratives have distinctive capabilities in prevention. The paper discusses the concept of narrative, links narrative to prevention, and discusses the central role of youth in developing narrative interventions. As illustration, the authors describe how the NEF is applied in the keepin’ it REAL adolescent drug prevention curriculum, pose theoretical directions, and offer suggestions for future work in prevention communication. PMID:23980613

  16. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  17. FPGA implementation of sparse matrix algorithm for information retrieval

    NASA Astrophysics Data System (ADS)

    Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio

    2005-06-01

    Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing accommodates frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned to the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage-space requirements and query-processing time than other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of the text collection in a sparse matrix structure is gaining more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves a substantial efficiency gain over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target the Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
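
    The kernel being accelerated is a compressed sparse row (CSR) matrix-vector multiplication over the term-document matrix, which replaces the sequential inverted-index traversal for query scoring. A minimal Python version of the CSR product is shown below (the independent row loop is what the FPGA parallelises); the tiny matrix and query are illustrative only.

      # CSR storage of a term-document matrix A (rows = documents, cols = terms).
      # indptr[i]:indptr[i+1] delimits the non-zeros of row i.
      indptr  = [0, 2, 4, 7]
      indices = [0, 2, 1, 2, 0, 1, 3]                 # column (term) ids
      data    = [1.0, 2.0, 1.0, 1.0, 3.0, 1.0, 1.0]   # term weights

      def csr_matvec(indptr, indices, data, x):
          """y = A @ x for a CSR matrix A -- the query-scoring kernel."""
          y = [0.0] * (len(indptr) - 1)
          for row in range(len(indptr) - 1):          # rows are independent: parallel on FPGA
              acc = 0.0
              for k in range(indptr[row], indptr[row + 1]):
                  acc += data[k] * x[indices[k]]
              y[row] = acc
          return y

      # Query vector over 4 terms; the scores rank the 3 documents.
      query = [1.0, 0.0, 1.0, 0.0]
      scores = csr_matvec(indptr, indices, data, query)
      print(sorted(range(len(scores)), key=lambda d: -scores[d]), scores)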

  18. LoFASM's FPGA-based Digital Acquisition System

    NASA Astrophysics Data System (ADS)

    Dartez, Louis P.; Jenet, F.; Creighton, T. D.; Ford, A. J.; Hicks, B.; Hinojosa, J.; Kassim, N. E.; Price, R. H.; Stovall, K.; Ray, P. S.; Taylor, G. B.

    2014-01-01

    The Low Frequency All Sky Monitor (LoFASM) is a distributed array of dipole antennas that are sensitive to radio frequencies from 10 to 88 MHz. LoFASM consists of antennas and front end electronics that were originally developed for the Long Wavelength Array (LWA) by the U.S. Naval Research Lab, the University of New Mexico, Virginia Tech, and the Jet Propulsion Laboratory. LoFASM, funded by the U.S. Department of Defense, will initially consist of 4 stations, each consisting of 12 dual-polarization dipole antenna stands. The primary science goals of LoFASM will be the detection and study of low-frequency radio transients, a high priority science goal as deemed by the National Research Council's decadal survey. The data acquisition system for the LoFASM antenna array will use Field Programmable Gate Array (FPGA) technology to implement a real-time full-Stokes spectrometer and data recorder. This poster presents an overview of the current design and digital architecture of a single station of the LoFASM array as well as the status of the entire project.

  19. Exploring a Framework for Professional Development in Curriculum Innovation: Empowering Teachers for Designing Context-Based Chemistry Education

    NASA Astrophysics Data System (ADS)

    Stolk, Machiel J.; de Jong, Onno; Bulte, Astrid M. W.; Pilot, Albert

    2011-05-01

    Involving teachers in early stages of context-based curriculum innovations requires a professional development programme that actively engages teachers in the design of new context-based units. This study considers the implementation of a teacher professional development framework aiming to investigate processes of professional development. The framework is based on Galperin's theory of the internalisation of actions and it is operationalised into a professional development programme to empower chemistry teachers for designing new context-based units. The programme consists of the teaching of an educative context-based unit, followed by the designing of an outline of a new context-based unit. Six experienced chemistry teachers participated in the instructional meetings and practical teaching in their respective classrooms. Data were obtained from meetings, classroom discussions, and observations. The findings indicated that teachers became only partially empowered for designing a new context-based chemistry unit. Moreover, the process of professional development leading to teachers' empowerment was not carried out as intended. It is concluded that the elaboration of the framework needs improvement. The implications for a new programme are discussed.

  20. A Theoretical Framework for Serious Game Design: Exploring Pedagogy, Play and Fidelity and Their Implications for the Design Process

    ERIC Educational Resources Information Center

    Rooney, Pauline

    2012-01-01

    It is widely acknowledged that digital games can provide an engaging, motivating and "fun" experience for students. However an entertaining game does not necessarily constitute a meaningful, valuable learning experience. For this reason, experts espouse the importance of underpinning serious games with a sound theoretical framework which…

  1. Design and implementation of an architectural framework for web portals in a ubiquitous pervasive environment.

    PubMed

    Raza, Muhammad Taqi; Yoo, Seung-Wha; Kim, Ki-Hyung; Joo, Seong-Soon; Jeong, Wun-Cheol

    2009-01-01

    Web Portals function as a single point of access to information on the World Wide Web (WWW). The web portal always contacts the portal's gateway for the information flow that causes network traffic over the Internet. Moreover, it provides real time/dynamic access to the stored information, but not access to the real time information. This inherent functionality of web portals limits their role for resource constrained digital devices in the Ubiquitous era (U-era). This paper presents a framework for the web portal in the U-era. We have introduced the concept of Local Regions in the proposed framework, so that the local queries could be solved locally rather than having to route them over the Internet. Moreover, our framework enables one-to-one device communication for real time information flow. To provide an in-depth analysis, firstly, we provide an analytical model for query processing at the servers for our framework-oriented web portal. At the end, we have deployed a testbed, as one of the world's largest IP based wireless sensor networks testbed, and real time measurements are observed that prove the efficacy and workability of the proposed framework.

  2. Design and Implementation of an Architectural Framework for Web Portals in a Ubiquitous Pervasive Environment

    PubMed Central

    Raza, Muhammad Taqi; Yoo, Seung-Wha; Kim, Ki-Hyung; Joo, Seong-Soon; Jeong, Wun-Cheol

    2009-01-01

    Web Portals function as a single point of access to information on the World Wide Web (WWW). The web portal always contacts the portal’s gateway for the information flow that causes network traffic over the Internet. Moreover, it provides real time/dynamic access to the stored information, but not access to the real time information. This inherent functionality of web portals limits their role for resource constrained digital devices in the Ubiquitous era (U-era). This paper presents a framework for the web portal in the U-era. We have introduced the concept of Local Regions in the proposed framework, so that the local queries could be solved locally rather than having to route them over the Internet. Moreover, our framework enables one-to-one device communication for real time information flow. To provide an in-depth analysis, firstly, we provide an analytical model for query processing at the servers for our framework-oriented web portal. At the end, we have deployed a testbed, as one of the world’s largest IP based wireless sensor networks testbed, and real time measurements are observed that prove the efficacy and workability of the proposed framework. PMID:22346693

  3. Stego on FPGA: An IWT Approach

    PubMed Central

    Ramalingam, Balakrishnan

    2014-01-01

    A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar-IWT was used to separate the subbands, namely LL, LH, HL, and HH, from 8 × 8 pixel blocks, and the encrypted secret data is hidden in the LH, HL, and HH blocks using Moore and Hilbert space filling curve (SFC) scan patterns. Either the Moore or the Hilbert SFC was chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produces the lowest mean square error (MSE) and the highest peak signal-to-noise ratio (PSNR). The scan pattern chosen for each block is recorded, and this record constitutes the secret key. Our system took 1.6 µs for embedding the data in the coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA). PMID:24723794

  4. Design and application of a framework for examining the beliefs and practices of physics teaching assistants

    NASA Astrophysics Data System (ADS)

    Spike, Benjamin T.; Finkelstein, Noah D.

    2016-06-01

    [This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] We present a newly validated and refined framework, TA-PIVOT (TA Practices In and Views Of Teaching), for examining how physics TAs talk about and how they engage in physics teaching. This work builds upon and extends prior efforts to characterize instructors' beliefs and practices by examining both domains in parallel. We present the comprehensive framework (developed from a study of 31 total TAs) and demonstrate its utility in analyzing both interviews and classroom video observations for a sample of eight TAs. We also discuss how this framework may be used to examine variation in beliefs and practices, track the development of beliefs over time, and inform TA preparation.

  5. A Framework for the Design of Effective Graphics for Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.

    1992-01-01

    This proposal presents a visualization framework, based on a data model, that supports the production of effective graphics for scientific visualization. Visual representations are effective only if they augment comprehension of the increasing amounts of data being generated by modern computer simulations. These representations are created by taking into account the goals and capabilities of the scientist, the type of data to be displayed, and software and hardware considerations. This framework is embodied in an assistant-based visualization system to guide the scientist in the visualization process. This will improve the quality of the visualizations and decrease the time the scientist is required to spend in generating the visualizations. I intend to prove that such a framework will create a more productive environment for the analysis and interpretation of large, complex data sets.

  6. A novel pipeline based FPGA implementation of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2014-05-01

    To solve problems when an analytical solution is not available, more and more bio-inspired computation techniques have been applied in recent years. One efficient algorithm is the Genetic Algorithm (GA), which imitates the biological evolution process, finding the solution through the mechanism of "natural selection", where the strong have higher chances to survive. A genetic algorithm is an iterative procedure which operates on a population of individuals called "chromosomes" or "possible solutions" (usually represented by a binary code). The GA performs several processes on the population individuals to produce a new population, as in biological evolution. To provide a high-speed solution, pipelined FPGA hardware implementations are used, with an n-stage pipeline for an n-phase genetic algorithm. FPGA pipeline implementations are constrained by the different execution times of the stages and by the FPGA chip resources. To minimize these difficulties, we propose a bio-inspired technique that modifies the crossover step by using non-identical twins: two of the chosen chromosomes (parents) build up two new chromosomes (children), not only one as in the classical GA. We analyze the contribution of this method to reducing the execution time in asynchronous and synchronous pipelines, and also the possibility of a cheaper FPGA implementation by using smaller populations. The full hardware architecture of an FPGA implementation for our target ALTERA development card is presented and analyzed.
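
    The "non-identical twins" idea, a crossover step that emits two complementary children from each parent pair instead of one, can be written down compactly in software. The sketch below is a generic single-point-crossover illustration with assumed chromosome length, population size and mutation rate; it does not reproduce the pipelined FPGA architecture.

      import random

      def crossover_twins(parent_a, parent_b):
          """Single-point crossover returning two complementary children ('twins')."""
          point = random.randrange(1, len(parent_a))
          child1 = parent_a[:point] + parent_b[point:]
          child2 = parent_b[:point] + parent_a[point:]   # the second, non-identical twin
          return child1, child2

      def mutate(bits, rate=0.01):
          return [b ^ 1 if random.random() < rate else b for b in bits]

      # Toy GA maximising the number of 1-bits in a 16-bit chromosome.
      random.seed(1)
      POP, NBITS = 8, 16
      population = [[random.randint(0, 1) for _ in range(NBITS)] for _ in range(POP)]
      fitness = lambda c: sum(c)
      for _ in range(20):
          population.sort(key=fitness, reverse=True)
          parents = population[: POP // 2]
          children = []
          for a, b in zip(parents[0::2], parents[1::2]):
              c1, c2 = crossover_twins(a, b)             # two children per parent pair
              children += [mutate(c1), mutate(c2)]
          population = parents + children
      print(max(map(fitness, population)), "/", NBITS)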

  7. Performance Evaluation of FPGA-Based Biological Applications

    SciTech Connect

    Storaasli, Olaf O; Yu, Weikuan; Strenski, Dave; Maltby, Jim

    2007-01-01

    At the forefront of recent HPC innovations are Field Programmable Gate Arrays (FPGAs), which promise to accelerate calculations by one or more orders of magnitude. The performance of two Cray XD1 systems with Virtex-II Pro 50 and Virtex-4 LX160 FPGAs was evaluated using a computational biology human genome comparison program. This paper describes scalable, parallel, FPGA-accelerated results for the FASTA application ssearch34, using the Smith-Waterman algorithm for DNA, RNA and protein sequencing contained in the OpenFPGA benchmark suite. Results indicate typical Cray XD1 FPGA speedups of 50x (Virtex-II Pro 50) and 100x (Virtex-4 LX160) compared to a 2.2 GHz Opteron. Similar speedups are expected for the DRC RPU110-L200 modules (Virtex-4 LX200), which fit in an Opteron socket and were selected by Cray for its XT Supercomputers. The FPGA programming challenges, human genome benchmarking, and data verification of results are discussed.
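
    The accelerated kernel is Smith-Waterman local alignment, whose anti-diagonal cells can be computed in parallel on an FPGA. Below is a minimal software version using a simple match/mismatch score and a linear gap penalty; ssearch34 itself uses substitution matrices and affine gaps, so this is only a schematic reference under assumed scoring parameters.

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
          """Return the best local-alignment score between sequences a and b."""
          rows, cols = len(a) + 1, len(b) + 1
          h = [[0] * cols for _ in range(rows)]
          best = 0
          for i in range(1, rows):
              for j in range(1, cols):
                  diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  # Local alignment: scores never drop below zero.
                  h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
                  best = max(best, h[i][j])
          return best

      # Usage
      print(smith_waterman("ACACACTA", "AGCACACA"))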

  8. Novel cascade FPGA accelerator for support vector machines classification.

    PubMed

    Papadonikolakis, Markos; Bouganis, Christos-Savvas

    2012-07-01

    Support vector machines (SVMs) are a powerful machine learning tool, providing state-of-the-art accuracy to many classification problems. However, SVM classification is a computationally complex task, suffering from linear dependencies on the number of the support vectors and the problem's dimensionality. This paper presents a fully scalable field programmable gate array (FPGA) architecture for the acceleration of SVM classification, which exploits the device heterogeneity and the dynamic range diversities among the dataset attributes. An adaptive and fully-customized processing unit is proposed, which utilizes the available heterogeneous resources of a modern FPGA device in an efficient way with respect to the problem's characteristics. The implementation results demonstrate the efficiency of the heterogeneous architecture, presenting a speed-up factor of 2-3 orders of magnitude compared to the CPU implementation. The proposed architecture outperforms other proposed FPGA and graphic processor unit approaches by more than seven times. Furthermore, based on the special properties of the heterogeneous architecture, this paper introduces the first FPGA-oriented cascade SVM classifier scheme, which exploits the FPGA reconfigurability and intensifies the custom-arithmetic properties of the heterogeneous architecture. The results show that the proposed cascade scheme is able to increase the heterogeneous classifier throughput even further, without introducing any penalty on the resource utilization.
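
    The general cascade idea, in which a cheap first stage decides the easy samples and forwards only ambiguous ones to a more expensive classifier, can be prototyped with scikit-learn as a software reference. The margin used to decide what counts as "ambiguous", the toy data, and the kernel choices are assumptions for illustration; the paper's custom-arithmetic FPGA stages are not modelled here.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.standard_normal((400, 2))
      y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)     # non-linear toy problem

      stage1 = SVC(kernel="linear").fit(X, y)                  # cheap first stage
      stage2 = SVC(kernel="rbf", gamma=1.0).fit(X, y)          # accurate second stage

      def cascade_predict(x, margin=0.5):
          """Accept confident stage-1 decisions; forward the rest to stage 2."""
          score = stage1.decision_function(x.reshape(1, -1))[0]
          if abs(score) > margin:
              return int(score > 0), 1                          # decided by stage 1
          return int(stage2.predict(x.reshape(1, -1))[0]), 2    # escalated to stage 2

      stages_used = [cascade_predict(x)[1] for x in X]
      print("fraction escalated to stage 2:", np.mean(np.array(stages_used) == 2))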

  9. An embedded laser marking controller based on ARM and FPGA processors.

    PubMed

    Dongyun, Wang; Xinpiao, Ye

    2014-01-01

    Laser marking is an important branch of laser information processing technology. Existing laser marking machines, based on a PC and the WINDOWS operating system, are large and inconvenient to move; moreover, they cannot work outdoors or in other harsh environments. In order to compensate for the above-mentioned disadvantages, this paper proposes an embedded laser marking controller based on ARM and FPGA processors. Based on the principle of laser galvanometer scanning marking, the hardware and software were designed for the application. Experiments showed that this new embedded laser marking controller controls the galvanometers synchronously and can achieve precise marking.

  10. FPGA-based compression of streaming x-ray photon correlation spectroscopy data

    SciTech Connect

    Madden, Timothy; Jemian, Peter; Narayanan, Surcsh; Sandy, Alec; Sikorski, Marcin; Sprung, Michael; Weizeorick, John

    2011-08-09

    A data acquisition system to perform real-time background subtraction and lower-level-discrimination-based compression of streaming x-ray photon correlation spectroscopy (XPCS) data from a fast charge-coupled device (CCD) area detector has been built and put into service at the Advanced Photon Source (APS) at Argonne National Laboratory. A commercial frame grabber with an on-board field-programmable gate array (FPGA) was used in the design, and it continuously processes 60 frames per second, each consisting of 1,024 x 1,024 pixels with up to 64512 photon hits per frame.
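
    The on-FPGA processing described above, per-pixel background subtraction followed by a lower-level discriminator so that only pixels carrying photon hits are streamed out, amounts to sparsifying each frame. A minimal NumPy sketch of this compression step follows; the discriminator level and the synthetic frame statistics are assumed values.

      import numpy as np

      def compress_frame(raw, dark, lld=25):
          """Background-subtract a frame and keep only pixels above the lower-level discriminator."""
          corrected = raw.astype(np.int32) - dark.astype(np.int32)
          idx = np.flatnonzero(corrected > lld)          # linear indices of photon-hit pixels
          return idx.astype(np.uint32), corrected.ravel()[idx].astype(np.int32)

      # Usage: 1024 x 1024 frame with a few thousand photon hits over a dark background.
      rng = np.random.default_rng(0)
      dark = rng.integers(90, 110, (1024, 1024), dtype=np.uint16)
      raw = dark + rng.integers(0, 5, dark.shape, dtype=np.uint16)   # readout noise
      hits = rng.choice(raw.size, size=5000, replace=False)
      raw.ravel()[hits] += 200                                        # photon hits
      idx, vals = compress_frame(raw, dark)
      print(len(idx), "pixels kept of", raw.size)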

  11. Overview and future developments of the FPGA-based DAQ of COMPASS

    NASA Astrophysics Data System (ADS)

    Bai, Y.; Bodlak, M.; Frolov, V.; Jary, V.; Huber, S.; Konorov, I.; Levit, D.; Novy, J.; Steffen, D.; Virius, M.

    2016-02-01

    COMPASS is a fixed-target experiment at the SPS at CERN dedicated to the study of hadron structure and spectroscopy. Since 2014, a hardware event builder consisting of nine custom designed FPGA-cards replaced the previous online computers increasing compactness and scalability of the DAQ. By buffering data, the system exploits the spill structure of the SPS and averages the maximum on-spill data rate over the whole SPS cycle. From 2016, a crosspoint switch connecting all involved high-speed links shall provide a fully programmable system topology and thus simplifies the compensation for hardware failure and improves load balancing.

  12. Single event upset susceptibility testing of the Xilinx Virtex II FPGA

    NASA Technical Reports Server (NTRS)

    Carmichael, C.; Swift, C.; Yui, G.

    2002-01-01

    Heavy ion testing of the Xilinx Virtex II was conducted on the configuration, block RAM and user flip-flop cells to determine their static single-event upset susceptibility using LETs of 1.2 to 60 MeV·cm²/mg. A software program specifically designed to count errors in the FPGA was used to reveal L1/e values (the LET at which the cross-section is 1/e times the saturation cross-section) and single-event functional-interrupt failures.

  13. An Embedded Laser Marking Controller Based on ARM and FPGA Processors

    PubMed Central

    Dongyun, Wang; Xinpiao, Ye

    2014-01-01

    Laser marking is an important branch of laser information processing technology. Existing laser marking machines, based on a PC and the WINDOWS operating system, are large and inconvenient to move; moreover, they cannot work outdoors or in other harsh environments. In order to compensate for the above-mentioned disadvantages, this paper proposes an embedded laser marking controller based on ARM and FPGA processors. Based on the principle of laser galvanometer scanning marking, the hardware and software were designed for the application. Experiments showed that this new embedded laser marking controller controls the galvanometers synchronously and can achieve precise marking. PMID:24772028

  14. An embedded laser marking controller based on ARM and FPGA processors.

    PubMed

    Dongyun, Wang; Xinpiao, Ye

    2014-01-01

    Laser marking is an important branch of laser information processing technology. Existing laser marking machines based on a PC and the Windows operating system are large and inconvenient to move, and they cannot work outdoors or in other harsh environments. To overcome these disadvantages, this paper proposes an embedded laser marking controller based on ARM and FPGA processors. Based on the principle of laser galvanometer scanning marking, the hardware and software were designed for the application. Experiments showed that the new embedded laser marking controller controls the galvanometers synchronously and achieves precise marking. PMID:24772028

  15. Design of a pseudo-log image transform IP in an HLS-based memory management framework

    NASA Astrophysics Data System (ADS)

    Butt, Shahzad Ahmad; Mancini, Stéphane; Rousseau, Frédéric; Lavagno, Luciano

    2013-02-01

    The pseudo-log image transform is essentially a logarithmic transformation that simulates the distribution of the eye's photoreceptors and finds application in many important areas of real-time image and video processing, such as motion detection and estimation in robots and foveated space-variant cameras. It belongs to a family of non-linear image processing kernels in which memory references are non-linear functions of the loop indices. Non-linear kernels need some form of memory management in order to achieve the required throughput, to minimize on-chip memory, and to maximize possible data reuse. In this paper we present the design of a pseudo-log image processing hardware accelerator IP, integrated with different interpolation filtering techniques, using a memory management framework. The framework can automatically generate a memory hierarchy around the IP and a data transfer controller that facilitates data exchange with main memory. The memory hierarchy reduces on-chip memory requirements, optimizes throughput, and increases data reuse. The design of the IP is fully performed at the algorithmic level in C/C++. The algorithmic description is profiled within the framework to create a customized memory hierarchy, also described at the synthesizable algorithmic level. Finally, high-level synthesis is used to perform hardware design space exploration and performance estimation. Experiments show that the generated memory hierarchy is able to feed the IP with a very high bandwidth even in the presence of long external memory latencies.
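
    A pseudo-log mapping of the kind described, in which output pixels sample the input image at exponentially spaced radii so that resolution is highest at the centre, can be sketched in a few lines; the sketch also makes the non-linear memory-reference pattern visible. The grid sizes and the nearest-neighbour sampling below are a hypothetical illustration chosen for brevity, not the exact kernel or the interpolation filters used by the IP.

        # Hypothetical sketch of a pseudo-log (log-polar style) resampling: the
        # addresses read from the input image are non-linear functions of the
        # output loop indices, which is what motivates the memory hierarchy.
        import numpy as np

        def pseudo_log_transform(img, n_rings=128, n_sectors=256):
            """Resample a grayscale image onto a (radius, angle) grid with log-spaced radii."""
            h, w = img.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            r_max = min(cy, cx)
            # exponentially spaced radii: dense near the centre, sparse at the edge
            radii = r_max * (np.exp(np.linspace(0.0, 1.0, n_rings)) - 1.0) / (np.e - 1.0)
            angles = np.linspace(0.0, 2.0 * np.pi, n_sectors, endpoint=False)
            rr, aa = np.meshgrid(radii, angles, indexing="ij")
            ys, xs = cy + rr * np.sin(aa), cx + rr * np.cos(aa)
            yi = np.clip(np.rint(ys).astype(int), 0, h - 1)   # nearest-neighbour sampling
            xi = np.clip(np.rint(xs).astype(int), 0, w - 1)
            return img[yi, xi]

        out = pseudo_log_transform(np.random.rand(480, 640))
        print(out.shape)   # (128, 256)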

  16. Wicked ID: Conceptual Framework for Considering Instructional Design as a Wicked Problem

    ERIC Educational Resources Information Center

    Becker, Katrin

    2007-01-01

    The process of instructional design has parallels in other design disciplines. Software design is one that has experienced intense attention in the last 30 or so years, and many lessons learned there can be applied to ID. Using software design as a springboard, this concept paper seeks to propose a new approach to ID. It suggests that…

  17. Formal Learning Sequences and Progression in the Studio: A Framework for Digital Design Education

    ERIC Educational Resources Information Center

    Wärnestål, Pontus

    2016-01-01

    This paper examines how to leverage the design studio learning environment throughout long-term Digital Design education in order to support students to progress from tactical, well-defined, device-centric routine design, to confidently design sustainable solutions for strategic, complex, problems for a wide range of devices and platforms in the…

  18. A critically appraised topic review of computer-aided design/computer-aided machining of removable partial denture frameworks.

    PubMed

    Lang, Lisa A; Tulunoglu, Ibrahim

    2014-01-01

    A critically appraised topic (CAT) review is presented on the use of computer-aided design (CAD)/computer-aided machining (CAM) removable partial denture (RPD) frameworks. A systematic search of the literature supporting CAD/CAM RPD systems revealed no randomized clinical trials; hence, the CAT review was performed. A PubMed search yielded 9 articles meeting the inclusion criteria. Each article was characterized by study design and level of evidence. No clinical outcomes research has been published on the use of CAD/CAM RPDs. Low levels of evidence were found in the available literature. Clinical research studies are needed to determine the efficacy of this treatment modality.

  19. Static Numbers to Dynamic Statistics: Designing a Policy-Friendly Social Policy Indicator Framework

    ERIC Educational Resources Information Center

    Ahn, Sang-Hoon; Choi, Young Jun; Kim, Young-Mi

    2012-01-01

    In line with the economic crisis and rapid socio-demographic changes, the interest in "social" and "well-being" indicators has been revived. Social indicator movements of the 1960s resulted in the establishment of social indicator statistical frameworks; that legacy has remained intact in many national governments and international organisations.…

  20. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  1. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  2. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  3. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  4. 50 CFR 86.102 - How did the Service design the National Framework?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... INFRASTRUCTURE GRANT (BIG) PROGRAM Service Completion of the National Framework § 86.102 How did the Service... data set will fulfill informational needs for you to develop your State program plans as called for in... facility and site managers. (1) The nontrailerable boat data set will fulfill the informational needs...

  5. Towards a Framework for Attention Cueing in Instructional Animations: Guidelines for Research and Design

    ERIC Educational Resources Information Center

    de Koning, Bjorn B.; Tabbers, Huib K.; Rikers, Remy M. J. P.; Paas, Fred

    2009-01-01

    This paper examines the transferability of successful cueing approaches from text and static visualization research to animations. Theories of visual attention and learning as well as empirical evidence for the instructional effectiveness of attention cueing are reviewed and, based on Mayer's theory of multimedia learning, a framework was…

  6. Designing a Moderation System. Developing a Qualifications Framework for New Zealand.

    ERIC Educational Resources Information Center

    New Zealand Qualifications Authority, Wellington.

    A "moderation system" is a system intended to help ensure uniform interpretation and application of standards within New Zealand's National Qualifications Framework, which consists of all nationally registered academic and vocational qualifications and the nationally registered unit standards from which they are derived. The process of designing…

  7. National Ecosystem Services Classification System (NESCS): Framework Design and Policy Application

    EPA Science Inventory

    Understanding the ways in which ecosystems provide flows of “services” to humans is critical for decision making in many contexts; however, relationships between natural and human systems are complex. A well-defined framework for classifying ecosystem services is essential for sy...

  8. A Systematic Framework of Virtual Laboratories Using Mobile Agent and Design Pattern Technologies

    ERIC Educational Resources Information Center

    Li, Yi-Hsung; Dow, Chyi-Ren; Lin, Cheng-Min; Chen, Sheng-Chang; Hsu, Fu-Wei

    2009-01-01

    Innovations in network and information technology have transformed traditional classroom lectures into new approaches that have given universities the opportunity to create a virtual laboratory. However, there is no systematic framework in existing approaches for the development of virtual laboratories. Further, developing a virtual laboratory…

  9. The Importance of Theoretical Frameworks and Mathematical Constructs in Designing Digital Tools

    ERIC Educational Resources Information Center

    Trinter, Christine

    2016-01-01

    The increase in availability of educational technologies over the past few decades has not only led to new practice in teaching mathematics but also to new perspectives in research, methodologies, and theoretical frameworks within mathematics education. Hence, the amalgamation of theoretical and pragmatic considerations in digital tool design…

  10. Using the 4MAT Framework to Design a Problem-Based Learning Biostatistics Course

    ERIC Educational Resources Information Center

    Nowacki, Amy S.

    2011-01-01

    The study presents and applies the 4MAT theoretical framework to educational planning to transform a biostatistics course into a problem-based learning experience. Using a four-question approach, described are specific activities/materials utilized at both the class and course levels. Two web-based instruments collected data regarding student…

  11. 78 FR 71435 - Policy Statement on the Scenario Design Framework for Stress Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... Consolidated Assets, 77 FR 29458 (May 17, 2012), available at http://www.federalreserve.gov/bankinforeg... of the Basel II Advanced Capital Framework, 73 FR 44620 (July 31, 2008); The Supervisory Capital..., 76 FR 74631 (Dec. 1, 2011) (codified at 12 CFR 225.8). In the wake of the financial crisis,...

  12. Design and Application of a Framework for Examining the Beliefs and Practices of Physics Teaching Assistants

    ERIC Educational Resources Information Center

    Spike, Benjamin T.; Finkelstein, Noah D.

    2016-01-01

    We present a newly validated and refined framework, TA-PIVOT (TA Practices In and Views Of Teaching), for examining how physics TAs talk about and how they engage in physics teaching. This work builds upon and extends prior efforts to characterize instructors' beliefs and practices by examining both domains in parallel. We present the…

  13. Meta-Design as a Pedagogical Framework for Encouraging Student Agency and Democratizing the Classroom

    ERIC Educational Resources Information Center

    Hethrington, Christopher

    2015-01-01

    As diverse social and economic pressures are applied to post-secondary education, innovative approaches to pedagogical methodology are required. Given that the new norm in both industry and academia is that of constant change, a flexible and responsive approach is required along with a framework that empowers students with the skills to become…

  14. SQER[superscript 3]: An Instructional Framework for Using Scientific Inquiry to Design Classroom Demonstrations

    ERIC Educational Resources Information Center

    Chamely-Wiik, Donna M.; Haky, Jerome E.; Louda, Deborah W.; Romance, Nancy

    2014-01-01

    Classroom demonstrations have been widely used to engage students' interest in chemistry. The challenge, however, is to also involve students in science practices and ensure that the demonstration does not become merely a spectator activity. We have developed a framework for creating pedagogically sound demonstrations that allows for easy…

  15. A Framework for the Design and Implementation of Service-Learning Courses

    ERIC Educational Resources Information Center

    Whitley, Meredith A.; Walsh, David S.

    2014-01-01

    Within the fields of kinesiology and physical education teacher education, there is a growing number of courses and curricula that utilize service-learning as a pedagogical strategy. However, these courses and curricula are often constructed, implemented, and evaluated without a strong framework based on literature in the field, which has led to…

  16. Leading by Design: A Collaborative and Creative Leadership Framework for Dance Integration in P-12 Schools

    ERIC Educational Resources Information Center

    Leonard, Alison E.; Hellenbrand, Leah; McShane-Hellenbrand, Karen

    2014-01-01

    This article presents the Mentorship, Integrated Curriculum, Collaboration, and Scholarship (MICCS) framework as an applicable model for transformative, creative, and curriculum-based K-12 dance education and arts integration. Developed and practiced by the authors--an artist/educator, a classroom teacher, and an arts education scholar and former…

  17. Development of an FPGA-based multipoint laser pyroshock measurement system for explosive bolts.

    PubMed

    Abbas, Syed Haider; Jang, Jae-Kyeong; Lee, Jung-Ryul; Kim, Zaeill

    2016-07-01

    Pyroshock can cause an aerospace structure to fail in its mission by damaging the sensitive electronic equipment responsible for performing decisive operations. A pyroshock is the high-intensity shock wave generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology for measuring a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs), which detect the pyroshock wave generated by an explosive-based pyrotechnic event. Field-programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement, and the firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated by explosive excitation on a 22-mm-thick steel plate, and three LDVs were deployed to capture the pyroshock wave at different points. The captured pyroshocks were displayed as acceleration plots. The results showed that the system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303,741 g. The contribution of this paper is a specialized firmware architecture, implemented in an FPGA, for acquiring a large amount of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests.
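
    The acquisition chain described here amounts to sampling several LDV channels simultaneously and presenting the captured signals as acceleration. Below is a minimal software model of that last step, assuming velocity-output LDVs sampled at a common rate; the channel count is taken from the experiment, while the sample rate, signal scaling, and test waveform are assumptions for illustration.

        # Hypothetical sketch: differentiate simultaneously sampled LDV velocity
        # records to obtain acceleration in g, as in the paper's acceleration plots.
        import numpy as np

        G = 9.80665          # m/s^2 per g
        FS_HZ = 1.0e6        # assumed per-channel sample rate
        N_CHANNELS = 3       # three LDVs, as in the experiment

        def velocity_to_acceleration_g(velocity_m_s):
            """Numerically differentiate LDV velocity records (m/s) to acceleration (g)."""
            accel_m_s2 = np.gradient(velocity_m_s, 1.0 / FS_HZ, axis=-1)
            return accel_m_s2 / G

        # Synthetic stand-in for three simultaneously captured channels
        t = np.arange(0, 0.002, 1.0 / FS_HZ)
        velocities = np.vstack([0.05 * np.sin(2 * np.pi * 20e3 * (ch + 1) * t)
                                for ch in range(N_CHANNELS)])
        accel_g = velocity_to_acceleration_g(velocities)
        print(accel_g.shape, "channel 0 peak-to-peak: %.0f g" % np.ptp(accel_g[0]))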

  18. Development of an FPGA-based multipoint laser pyroshock measurement system for explosive bolts

    NASA Astrophysics Data System (ADS)

    Abbas, Syed Haider; Jang, Jae-Kyeong; Lee, Jung-Ryul; Kim, Zaeill

    2016-07-01

    Pyroshock can cause an aerospace structure to fail in its mission by damaging the sensitive electronic equipment responsible for performing decisive operations. A pyroshock is the high-intensity shock wave generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology for measuring a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs), which detect the pyroshock wave generated by an explosive-based pyrotechnic event. Field-programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement, and the firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated by explosive excitation on a 22-mm-thick steel plate, and three LDVs were deployed to capture the pyroshock wave at different points. The captured pyroshocks were displayed as acceleration plots. The results showed that the system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303,741 g. The contribution of this paper is a specialized firmware architecture, implemented in an FPGA, for acquiring a large amount of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests.

  19. Development of an FPGA-based multipoint laser pyroshock measurement system for explosive bolts.

    PubMed

    Abbas, Syed Haider; Jang, Jae-Kyeong; Lee, Jung-Ryul; Kim, Zaeill

    2016-07-01

    Pyroshock can cause an aerospace structure to fail in its mission by damaging the sensitive electronic equipment responsible for performing decisive operations. A pyroshock is the high-intensity shock wave generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology for measuring a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs), which detect the pyroshock wave generated by an explosive-based pyrotechnic event. Field-programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement, and the firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated by explosive excitation on a 22-mm-thick steel plate, and three LDVs were deployed to capture the pyroshock wave at different points. The captured pyroshocks were displayed as acceleration plots. The results showed that the system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303,741 g. The contribution of this paper is a specialized firmware architecture, implemented in an FPGA, for acquiring a large amount of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests. PMID:27475551

  20. Dual port memory based Heapsort implementation for FPGA

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.

    2011-10-01

    This document presents a proposed implementation of the Heapsort algorithm that utilizes hardware features of modern Field-Programmable Gate Array (FPGA) chips, such as dual-port random access memories (DP RAM), to sort a data stream efficiently. The implemented sorter is able to sort one data record every two clock periods, and this throughput does not depend on the capacity of the sorter (defined as the number of storage cells in the sorter). The mean latency (expressed in sorting cycles, each equal to two clock periods) when sorting the stream of data is equal to the capacity of the sorter. Due to efficient use of FPGA resources (e.g., data are stored mainly in internal block RAMs), the complexity of the sorter is proportional to the logarithm of the sorter capacity; only the required RAM size is linearly proportional to the sorter capacity. The proposed sorter has been tested in simulations and synthesized for real FPGA chips to verify its correctness.
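
    The streaming behaviour claimed here, one record out for every record in once the sorter is full, with latency equal to the sorter capacity, can be modelled in software with an ordinary binary heap. The sketch below is a hypothetical Python model of that behaviour, not the hardware design itself; as with any fixed-capacity stream sorter, the output is fully ordered only when no record arrives more than "capacity" positions out of order.

        # Hypothetical software model of the streaming sorter: once the heap holds
        # `capacity` records, every push of a new record is paired with a pop of
        # the current minimum, so one record leaves for every record that enters.
        import heapq

        def streaming_heapsort(records, capacity):
            heap = []
            for r in records:
                if len(heap) < capacity:
                    heapq.heappush(heap, r)           # fill phase: latency builds up to `capacity`
                else:
                    yield heapq.heappushpop(heap, r)  # steady state: one in, one out
            while heap:
                yield heapq.heappop(heap)             # drain phase at end of stream

        data = [3, 1, 4, 2, 7, 5, 6, 9, 8, 10]              # disorder bounded by the capacity
        print(list(streaming_heapsort(data, capacity=4)))   # [1, 2, ..., 10]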