HyperForest: A high performance multi-processor architecture for real-time intelligent systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.
1997-04-01
Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years has made possible the development of the first generation of Intelligent Systems. Software for second-generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.
Is There Computer Graphics after Multimedia?
ERIC Educational Resources Information Center
Booth, Kellogg S.
Computer graphics has been driven by the desire to generate real-time imagery subject to constraints imposed by the human visual system. The future of computer graphics, when off-the-shelf systems have full multimedia capability and when standard computing engines render imagery faster than real-time, remains to be seen. A dedicated pipeline for…
Real-time range generation for ladar hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Olson, Eric M.; Coker, Charles F.
1996-05-01
Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware and is optimized for generating visible or infrared imagery in real time. A by-product of the rendering process, in the form of a depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to generating the scenes, calculating the range values, and outputting the range data for a LADAR seeker.
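As a rough illustration of the depth-to-range conversion described above, a normalized depth-buffer value can be inverted back to an eye-space distance when the projection is known. The sketch below assumes an OpenGL-style perspective depth mapping and made-up near/far planes; it is not necessarily the formula used at the Eglin facility.

```python
import numpy as np

def depth_buffer_to_range(depth, near, far):
    """Convert normalized depth-buffer values (0..1) to eye-space range (metres).

    Assumes an OpenGL-style perspective depth mapping; the formula used on the
    Onyx Reality Engine may differ.
    """
    z_ndc = 2.0 * depth - 1.0                                   # map [0,1] -> [-1,1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

# Example: a small depth buffer rendered with a 1 m .. 10 km frustum
depth = np.random.rand(4, 4).astype(np.float32)
print(depth_buffer_to_range(depth, near=1.0, far=10_000.0))
```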
Real-time fuzzy inference based robot path planning
NASA Technical Reports Server (NTRS)
Pacini, Peter J.; Teichrow, Jon S.
1990-01-01
This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.
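To make the idea of an off-line fuzzy rulebase with computationally light on-line evaluation concrete, here is a minimal Sugeno-style sketch in Python. The linguistic terms, membership ranges and consequents are invented for illustration and are not the rules of the cited work.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a to a peak at b, falling to c."""
    return float(np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def plan_speed(distance_to_goal, obstacle_proximity):
    """Toy fuzzy rulebase mapping two inputs to a speed command (m/s).

    Rule strength = min of antecedent memberships; output = strength-weighted
    average of crisp consequents. All terms and numbers are hypothetical.
    """
    near_goal = tri(distance_to_goal, -1.0, 0.0, 1.0)
    far_goal  = tri(distance_to_goal, 0.5, 2.0, 3.5)
    clear     = tri(obstacle_proximity, 0.5, 2.0, 3.5)
    blocked   = tri(obstacle_proximity, -1.0, 0.0, 1.0)
    rules = [
        (min(far_goal,  clear),   1.0),   # far from goal and clear  -> fast
        (min(near_goal, clear),   0.3),   # near goal and clear      -> slow
        (min(far_goal,  blocked), 0.2),   # far but blocked          -> creep
        (min(near_goal, blocked), 0.0),   # near and blocked         -> stop
    ]
    total = sum(s for s, _ in rules)
    return sum(s * v for s, v in rules) / total if total > 0 else 0.0

print(plan_speed(distance_to_goal=1.5, obstacle_proximity=1.8))
```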
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.A.
1997-07-01
The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirement for real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.
Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.
Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2018-01-24
Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.
A State-of-the-Art Review of the Real-Time Computer-Aided Study of the Writing Process
ERIC Educational Resources Information Center
Abdel Latif, Muhammad M.
2008-01-01
Writing researchers have developed various methods for investigating the writing process since the 1970s. The early 1980s saw the occurrence of the real-time computer-aided study of the writing process that relies on the protocols generated by recording the computer screen activities as writers compose using the word processor. This article…
Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S
2015-02-09
A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
Real-time computational photon-counting LiDAR
NASA Astrophysics Data System (ADS)
Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles
2018-03-01
The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.
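For orientation, the direct time-of-flight relation underlying such a system is simply range = c·t/2, where t is the photon round-trip time. The short sketch below (with a made-up timing-bin width) shows how timing resolution maps to range resolution; the compressed-sensing image reconstruction itself is not shown.

```python
C = 2.99792458e8   # speed of light (m/s)

def tof_to_range(time_bin_index, bin_width_s):
    """Convert a photon time-of-arrival bin to a round-trip range in metres."""
    return C * (time_bin_index * bin_width_s) / 2.0

# A hypothetical 16 ps timing bin corresponds to about 2.4 mm of range
print(tof_to_range(1, 16e-12) * 1e3, "mm per bin")
```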
Program Helps Generate And Manage Graphics
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
The Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames are saved as text files, in a readable and disclosed format, and are easily retrieved and manipulated by user programs for a wide range of real-time visual-information applications. LCFM is implemented in a frame-based expert system for visual aids in the management of systems. For monitoring, diagnosis, and/or control, diagrams of circuits or systems are brought to "life" by use of designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). The status of systems can be displayed. Written in C++ using the Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.
Real-Time MENTAT programming language and architecture
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.
1989-01-01
Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.
History of visual systems in the Systems Engineering Simulator
NASA Technical Reports Server (NTRS)
Christianson, David C.
1989-01-01
The Systems Engineering Simulator (SES) houses a variety of real-time computer generated visual systems. The earliest machine dates from the mid-1960's and is one of the first real-time graphics systems in the world. The latest acquisition is the state-of-the-art Evans and Sutherland CT6. Between the span of time from the mid-1960's to the late 1980's, tremendous strides have been made in the real-time graphics world. These strides include advances in both software and hardware engineering. The purpose is to explore the history of the development of these real-time computer generated image systems from the first machine to the present. Hardware advances as well as software algorithm changes are presented. This history is not only quite interesting but also provides us with a perspective with which we can look backward and forward.
Real-time stylistic prediction for whole-body human motions.
Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun
2012-01-01
The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.
Automated real-time software development
NASA Technical Reports Server (NTRS)
Jones, Denise R.; Walker, Carrie K.; Turkovich, John J.
1993-01-01
A Computer-Aided Software Engineering (CASE) system has been developed at the Charles Stark Draper Laboratory (CSDL) under the direction of the NASA Langley Research Center. The CSDL CASE tool provides an automated method of generating source code and hard copy documentation from functional application engineering specifications. The goal is to significantly reduce the cost of developing and maintaining real-time scientific and engineering software while increasing system reliability. This paper describes CSDL CASE and discusses demonstrations that used the tool to automatically generate real-time application code.
Real-Time Multiprocessor Programming Language (RTMPL) user's manual
NASA Technical Reports Server (NTRS)
Arpasi, D. J.
1985-01-01
A real-time multiprocessor programming language (RTMPL) has been developed to provide for high-order programming of real-time simulations on systems of distributed computers. RTMPL is a structured, engineering-oriented language. The RTMPL utility supports a variety of multiprocessor configurations and types by generating assembly language programs according to user-specified targeting information. Many programming functions are assumed by the utility (e.g., data transfer and scaling) to reduce the programming chore. This manual describes RTMPL from a user's viewpoint. Source generation, applications, utility operation, and utility output are detailed. An example simulation is generated to illustrate many RTMPL features.
Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor
NASA Astrophysics Data System (ADS)
Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul
2017-05-01
Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
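As a hedged sketch of the depth-layer approach mentioned above, the code below propagates each depth layer to the hologram plane with the angular spectrum method (an FFT-based plane-to-plane propagation) and sums the contributions. The wavelength, pixel pitch, layer distances and the phase-only encoding at the end are assumptions, not the paper's parameters.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a complex field between parallel planes (angular spectrum method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))    # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

# Hypothetical layers: (intensity image masked to one depth bin, distance in metres)
rng = np.random.default_rng(0)
layers = [(rng.random((256, 256)), 0.5 + 0.1 * i) for i in range(3)]
field = sum(angular_spectrum(np.sqrt(a), 633e-9, 8e-6, z) for a, z in layers)
hologram = np.angle(field)        # e.g. a phase-only (kinoform) hologram
```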
Real-time simulation of the TF30-P-3 turbofan engine using a hybrid computer
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Bruton, W. M.
1974-01-01
A real-time, hybrid-computer simulation of the TF30-P-3 turbofan engine was developed. The simulation was primarily analog in nature but used the digital portion of the hybrid computer to perform bivariate function generation associated with the performance of the engine's rotating components. FORTRAN listings and analog patching diagrams are provided. The hybrid simulation was controlled by a digital computer programmed to simulate the engine's standard hydromechanical control. Both steady-state and dynamic data obtained from the digitally controlled engine simulation are presented. Hybrid simulation data are compared with data obtained from a digital simulation provided by the engine manufacturer. The comparisons indicate that the real-time hybrid simulation adequately matches the baseline digital simulation.
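The "bivariate function generation" mentioned above is essentially a two-dimensional table lookup of component performance maps. A minimal sketch using bilinear interpolation is shown below; the map, axes and numbers are fabricated for illustration and are not TF30 data.

```python
import numpy as np

def bilinear_lookup(table, x_grid, y_grid, x, y):
    """Bivariate function generation: bilinear interpolation in a 2D map table."""
    i = int(np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2))
    j = int(np.clip(np.searchsorted(y_grid, y) - 1, 0, len(y_grid) - 2))
    tx = (x - x_grid[i]) / (x_grid[i + 1] - x_grid[i])
    ty = (y - y_grid[j]) / (y_grid[j + 1] - y_grid[j])
    return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])

# Hypothetical fan map: pressure ratio vs corrected speed and corrected flow
speed = np.linspace(0.5, 1.1, 7)
flow = np.linspace(50.0, 250.0, 9)
pr_map = 1.0 + np.outer(speed, flow) / 300.0        # fabricated surface
print(bilinear_lookup(pr_map, speed, flow, 0.87, 173.0))
```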
Real time animation of space plasma phenomena
NASA Technical Reports Server (NTRS)
Jordan, K. F.; Greenstadt, E. W.
1987-01-01
In pursuit of real time animation of computer simulated space plasma phenomena, the code was rewritten for the Massively Parallel Processor (MPP). The program creates a dynamic representation of the global bowshock which is based on actual spacecraft data and designed for three dimensional graphic output. This output consists of time slice sequences which make up the frames of the animation. With the MPP, 16384, 512 or 4 frames can be calculated simultaneously depending upon which characteristic is being computed. The run time was greatly reduced which promotes the rapid sequence of images and makes real time animation a foreseeable goal. The addition of more complex phenomenology in the constructed computer images is now possible and work proceeds to generate these images.
Documentation Driven Development for Complex Real-Time Systems
2004-12-01
This paper presents a novel approach for development of complex real-time systems, called the documentation-driven development (DDD) approach. This... time systems. DDD will also support automated software generation based on a computational model and some relevant techniques. DDD includes two main...stakeholders to be easily involved in development processes and, therefore, significantly improve the agility of software development for complex real-time systems.
NASA Technical Reports Server (NTRS)
Rediess, Herman A.; Ramnath, Rudrapatna V.; Vrable, Daniel L.; Hirvo, David H.; Mcmillen, Lowell D.; Osofsky, Irving B.
1991-01-01
The results are presented of a study to identify potential real time remote computational applications to support monitoring HRV flight test experiments along with definitions of preliminary requirements. A major expansion of the support capability available at Ames-Dryden was considered. The focus is on the use of extensive computation and data bases together with real time flight data to generate and present high level information to those monitoring the flight. Six examples were considered: (1) boundary layer transition location; (2) shock wave position estimation; (3) performance estimation; (4) surface temperature estimation; (5) critical structural stress estimation; and (6) stability estimation.
A Real-Time Phase Vector Display for EEG Monitoring
NASA Technical Reports Server (NTRS)
Finger, Herbert J.; Anliker, James E.; Rimmer, Tamara
1973-01-01
A real-time, computer-based, phase vector display system has been developed which will output a vector whose phase is equal to the delay between a trigger and the peak of a function which is quasi-coherent with respect to the trigger. The system also contains a sliding averager which enables the operator to average successive trials before calculating the phase vector. Data collection, averaging and display generation are performed on a LINC-8 computer. Output displays appear on several X-Y CRT display units and on a kymograph camera/oscilloscope unit which is used to generate photographs of time-varying phase vectors or contourograms of time-varying averages of input functions.
PC graphics generation and management tool for real-time applications
NASA Technical Reports Server (NTRS)
Truong, Long V.
1992-01-01
A graphics tool was designed and developed for easy generation and management of personal computer graphics. It also provides methods and 'run-time' software for many common artificial intelligence (AI) or expert system (ES) applications.
Near real-time traffic routing
NASA Technical Reports Server (NTRS)
Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)
2012-01-01
A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data includes static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.
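For context, once the simulator has produced predicted link travel times, route generation reduces to a shortest-path search over those predictions. The sketch below uses Dijkstra's algorithm on a made-up network; the patent's grid partitioning into primary and buffer zones is not modelled.

```python
import heapq

def fastest_route(graph, origin, destination):
    """Dijkstra shortest path over predicted link travel times (seconds)."""
    dist = {origin: 0.0}
    prev = {}
    heap = [(0.0, origin)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == destination:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, travel_time in graph.get(node, []):
            nd = d + travel_time
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    route, node = [destination], destination      # reconstruct the route
    while node != origin:
        node = prev[node]
        route.append(node)
    return list(reversed(route)), dist[destination]

# Hypothetical network: node -> [(neighbour, predicted travel time in s), ...]
network = {"A": [("B", 120), ("C", 300)], "B": [("C", 90), ("D", 400)],
           "C": [("D", 150)], "D": []}
print(fastest_route(network, "A", "D"))
```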
1991-10-01
Real-Time Operating System, Hide Tokuda, et al., Carnegie Mellon University * MARUTI, Hard Real-Time Operating System, Ashok... Architecture, Fred J. Pollack and Kevin C. Kahn, BiiN 10:00 - 10:20 BREAK 10:20 - 12:20 Session VII - Chair: James G. Smith, ONR * A Real-Time Operating System for... Detailed Description * POSIX: Detailed Description * V: Detailed Description * Real-Time Operating System
Self-Motion Perception: Assessment by Real-Time Computer Generated Animations
NASA Technical Reports Server (NTRS)
Parker, Donald E.
1999-01-01
Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non- verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation? Are reports reliable? Second, do reports confirm expectations based on stimuli to vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?
A low-cost test-bed for real-time landmark tracking
NASA Astrophysics Data System (ADS)
Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher
2007-04-01
A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and an onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in order. However, since each landmark is tracked independently, transferring the system to appropriate parallel hardware would allow targets to be added without significantly diminishing its speed.
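A minimal sketch of the dead-reckoning update implied by the sensor suite above (wheel-encoder distance plus gyro yaw rate) is shown below; scale factors, biases and the accelerometer channel are omitted, and all numbers are invented.

```python
import math

def dead_reckon(pose, encoder_distance, gyro_yaw_rate, dt):
    """One dead-reckoning update from wheel-encoder distance and gyro yaw rate."""
    x, y, heading = pose
    heading += gyro_yaw_rate * dt               # integrate yaw rate (rad)
    x += encoder_distance * math.cos(heading)   # advance along the new heading
    y += encoder_distance * math.sin(heading)
    return (x, y, heading)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                            # 100 steps of 50 ms each
    pose = dead_reckon(pose, encoder_distance=0.05, gyro_yaw_rate=0.1, dt=0.05)
print(pose)
```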
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.
2013-03-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2011-11-15
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Real-time performance monitoring and management system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2007-06-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Real-time physics-based 3D biped character animation using an inverted pendulum model.
Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee
2010-01-01
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
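A hedged sketch of a velocity-driven tracking law of the kind described above (joint torques computed from desired joint angular velocities) is given below; the gain and limits are invented, and the paper's full controller and dynamics simulation are not reproduced.

```python
import numpy as np

def velocity_driven_torques(q_dot, q_dot_desired, gain, torque_limit):
    """Compute joint torques from desired joint angular velocities.

    A minimal stand-in tracking law, tau = k * (w_desired - w), with saturation.
    """
    tau = gain * (q_dot_desired - q_dot)
    return np.clip(tau, -torque_limit, torque_limit)

q_dot         = np.array([0.10, -0.05, 0.00])   # current joint velocities (rad/s)
q_dot_desired = np.array([0.30,  0.00, 0.20])   # e.g. from the inverted-pendulum plan
print(velocity_driven_torques(q_dot, q_dot_desired, gain=40.0, torque_limit=60.0))
```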
Living Color Frame System: PC graphics tool for data visualization
NASA Technical Reports Server (NTRS)
Truong, Long V.
1993-01-01
Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.
Augmented Reality Comes to Physics
ERIC Educational Resources Information Center
Buesing, Mark; Cook, Michael
2013-01-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS and mobile apps, such as…
Optical Interconnection Via Computer-Generated Holograms
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Zhou, Shaomin
1995-01-01
Method of free-space optical interconnection developed for data-processing applications like parallel optical computing, neural-network computing, and switching in optical communication networks. In method, multiple optical connections between multiple sources of light in one array and multiple photodetectors in another array made via computer-generated holograms in electrically addressed spatial light modulators (ESLMs). Offers potential advantages of massive parallelism, high space-bandwidth product, high time-bandwidth product, low power consumption, low cross talk, and low time skew. Also offers advantage of programmability with flexibility of reconfiguration, including variation of strengths of optical connections in real time.
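One classical way to encode complex-valued connection weights as the non-negative real pattern a gray-scale spatial light modulator can display is to interfere them with an off-axis plane-wave reference; the sketch below shows that idea. It is an illustrative stand-in, not necessarily the CGH algorithm of the cited work.

```python
import numpy as np

def encode_cgh(complex_field, carrier_cycles_per_pixel=0.25):
    """Encode complex-valued data as a non-negative real hologram pattern.

    Interference with an off-axis plane-wave reference yields a real pattern
    H = |A + R|^2 >= 0, suitable for a grey-scale SLM.
    """
    ny, nx = complex_field.shape
    x = np.arange(nx)
    reference = np.exp(2j * np.pi * carrier_cycles_per_pixel * x)[None, :]
    a = complex_field / np.max(np.abs(complex_field))          # normalise
    pattern = np.abs(a + reference) ** 2                        # non-negative, real
    return pattern / pattern.max()                              # 0..1 grey scale

# Hypothetical complex connection weights to be written to a grey-scale SLM
rng = np.random.default_rng(1)
weights = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
grey = encode_cgh(weights)
```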
The Intelligibility of Non-Vocoded and Vocoded Semantically Anomalous Sentences.
1985-07-26
then vocoded with a real-time channel vocoder (see Gold and Tierney 5 for program description). The Lincoln Digital Signal Processors (LDSPs) - simple...programmable computers of a Harvard architecture - were used to implement the real-time channel vocoder program. Noise was generated within the...ratio at the input was approximately 0 dB. The important fact to emphasize is that identical vocoding programs were used to generate the Gold and
Real-time fast physical random number generator with a photonic integrated circuit.
Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu
2017-03-20
Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
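As a small illustration of the statistical screening mentioned above, the frequency (monobit) test from NIST SP 800-22 is sketched below; it is only one of the suite's fifteen tests and uses software-generated bits rather than the photonic source.

```python
import math
import random

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test for a bit sequence.

    Returns the p-value; the sequence passes this test if p >= 0.01.
    """
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)                 # map {0,1} -> {-1,+1}
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

bits = [random.getrandbits(1) for _ in range(1_000_000)]
print(monobit_p_value(bits))
```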
Utilization of DIRSIG in support of real-time infrared scene generation
NASA Astrophysics Data System (ADS)
Sanders, Jeffrey S.; Brown, Scott D.
2000-07-01
Real-time infrared scene generation for hardware-in-the-loop testing has traditionally been a difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based sub-models, each of which works in conjunction with the others to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled) and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background or solar path. This detailed environmental modeling greatly increases the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.
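At the core of such first-principles IR radiometry is Planck's law integrated over the sensor band. The sketch below computes an approximate in-band emitted radiance for a facet; the atmospheric, reflected and path terms handled by DIRSIG/MODTRAN are deliberately omitted, and the band and emissivity are assumptions.

```python
import numpy as np

H = 6.62607015e-34    # Planck constant (J s)
C = 2.99792458e8      # speed of light (m/s)
KB = 1.380649e-23     # Boltzmann constant (J/K)

def band_radiance(temperature_k, emissivity, band_um=(3.0, 5.0), samples=200):
    """Approximate in-band emitted radiance (W sr^-1 m^-2) via Planck's law."""
    lam = np.linspace(band_um[0] * 1e-6, band_um[1] * 1e-6, samples)   # metres
    spectral = (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * temperature_k)) - 1)
    return emissivity * np.trapz(spectral, lam)

print(band_radiance(300.0, emissivity=0.95))   # ~300 K target in the MWIR band
```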
Real-time LMR control parameter generation using advanced adaptive synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, R.W.; Mott, J.E.
1990-01-01
The reactor "delta T", the difference between the average core inlet and outlet temperatures, for the liquid-sodium-cooled Experimental Breeder Reactor 2 is empirically synthesized in real time from a multitude of examples of past reactor operation. The real-time empirical synthesis is based on system state analysis (SSA) technology embodied in software on the EBR 2 data acquisition computer. Before the real-time system is put into operation, a selection of reactor plant measurements is made which is predictable over long periods encompassing plant shutdowns, core reconfigurations, core load changes, and plant startups. A serial data link to a personal computer containing SSA software allows the rapid verification of the predictability of these plant measurements via graphical means. After the selection is made, the real-time synthesis provides a fault-tolerant estimate of the reactor delta T accurate to ±1%. 5 refs., 7 figs.
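The SSA-based synthesis itself is not described in detail here; as a loose stand-in, the sketch below fits a linear least-squares model of one plant signal from several correlated measurements recorded during past operation and then evaluates it on a new sample. All signals and numbers are fabricated.

```python
import numpy as np

def fit_synthesis(X_train, y_train):
    """Fit a linear empirical model of one plant signal from several others."""
    A = np.column_stack([X_train, np.ones(len(X_train))])     # add a bias term
    coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return coeffs

def synthesize(coeffs, x_now):
    """Evaluate the fitted model on a new measurement vector."""
    return np.append(x_now, 1.0) @ coeffs

# Hypothetical training data: flow, power, pump speed -> measured delta T
rng = np.random.default_rng(2)
X = rng.random((500, 3))
y = 40.0 * X[:, 1] / (0.5 + X[:, 0]) + rng.normal(0.0, 0.2, 500)
coeffs = fit_synthesis(X, y)
print(synthesize(coeffs, np.array([0.6, 0.8, 0.4])))
```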
Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2
NASA Astrophysics Data System (ADS)
Makar, Robert J.; O'Toole, Brian E.
1998-07-01
An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc. (SGI) Onyx2 has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI-based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI-compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.
Design and implementation of a telecommunication interface for the TAATM/TCV real-time experiment
NASA Technical Reports Server (NTRS)
Nolan, J. D.
1981-01-01
The traffic situation display experiment of the terminal configured vehicle (TCV) research program requires a bidirectional data communications tie line between a computer complex and a display facility. The tie line is used in a real-time environment on the CYBER 175 computer by the terminal area air traffic model (TAATM) simulation program. Aircraft position data are processed by TAATM, with the resultant output sent to the facility for the generation of air traffic situation displays, which are transmitted to a research aircraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Jin, Shuangshuang; Chen, Yousu
This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACK high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.
Computational Modeling and Real-Time Control of Patient-Specific Laser Treatment of Cancer
Fuentes, D.; Oden, J. T.; Diller, K. R.; Hazle, J. D.; Elliott, A.; Shetty, A.; Stafford, R. J.
2014-01-01
An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real-time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging (MRTI). The system is built on what can be referred to as cyberinfrastructure: a complex structure of high-speed network, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on in-vivo canine prostate. Over the course of an 18-minute laser induced thermal therapy (LITT) performed at M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data, and the calibrated models controlled the bioheat transfer to within 5°C of the predetermined treatment plan. The computational arena is in Austin, Texas and managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the on-going treatment. Post-operative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective. PMID:19148754
Computational modeling and real-time control of patient-specific laser treatment of cancer.
Fuentes, D; Oden, J T; Diller, K R; Hazle, J D; Elliott, A; Shetty, A; Stafford, R J
2009-04-01
An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real-time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging. The system is built on what can be referred to as cyberinfrastructure-a complex structure of high-speed network, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on in vivo canine prostate. Over the course of an 18 min laser-induced thermal therapy performed at M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data and the calibrated models controlled the bioheat transfer to within 5 degrees C of the predetermined treatment plan. The computational arena is in Austin, Texas and managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the on-going treatment. Post-operative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective.
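The forward model behind such control is a bioheat equation. As a hedged, one-dimensional illustration, the sketch below takes explicit finite-difference steps of the Pennes bioheat equation with a crude laser source term; the paper's calibrated 3D finite-element model and MRTI feedback loop are not reproduced, and every coefficient is illustrative.

```python
import numpy as np

def bioheat_step(T, dt, dx, k, rho, c, w_b, c_b, T_artery, q_laser):
    """One explicit finite-difference step of the 1D Pennes bioheat equation."""
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                              # crude insulated ends
    dTdt = (k * lap + w_b * c_b * (T_artery - T) + q_laser) / (rho * c)
    return T + dt * dTdt

T = np.full(100, 37.0)                                  # tissue at body temperature
q = np.zeros(100); q[45:55] = 5.0e5                     # hypothetical laser deposition (W/m^3)
for _ in range(2000):                                   # 2000 steps of 1 ms
    T = bioheat_step(T, dt=1e-3, dx=1e-3, k=0.5, rho=1050.0, c=3600.0,
                     w_b=8.0, c_b=3600.0, T_artery=37.0, q_laser=q)
print(T.max())
```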
WE-AB-BRB-12: Nanoscintillator Fiber-Optic Detector System for Microbeam Radiation Therapy Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivera, J; Dooley, J; Chang, S
2015-06-15
Purpose: Microbeam Radiation Therapy (MRT) is an experimental radiation therapy that has demonstrated a higher therapeutic ratio than conventional radiation therapy in animal studies. There are several roadblocks in translating the promising treatment technology to clinical application, one of which is the lack of a real-time, high-resolution dosimeter. Current clinical radiation detectors have poor spatial resolution and, as such, are unsuitable for measuring microbeams with submillimeter-scale widths. Although GafChromic film has high spatial resolution, it lacks the real-time dosimetry capability necessary for MRT preclinical research and potential clinical use. In this work we have demonstrated the feasibility of using a nanoscintillator fiber-optic detector (nanoFOD) system for real-time MRT dosimetry. Methods: A microplanar beam array is generated using an x-ray research irradiator and a custom-made, microbeam-forming collimator. The newest generation nanoFOD has an effective size of 70 µm in the measurement direction and was calibrated against a kV ion chamber (RadCal Accu-Pro) in open field geometry. We have written a computer script that performs automatic data collection with immediate background subtraction. A computer-controlled detector positioning stage is used to precisely measure the microbeam peak dose and beam profile by translating the stage during data collection. We test the new generation nanoFOD system, with increased active scintillation volume, against the previous generation system. Both raw and processed data are time-stamped and recorded to enable future post-processing. Results: The real-time microbeam dosimetry system worked as expected. The new generation dosimeter has approximately double the active volume compared to the previous generation, resulting in an over 900% increase in signal. The active volume of the dosimeter still provided the spatial resolution that meets the Nyquist criterion for our microbeam widths. Conclusion: We have demonstrated that real-time dosimetry of MRT microbeams is feasible using a nanoscintillator fiber-optic detector with an integrated positioning system.
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate. PMID:27070606
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-04-07
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate.
The Traffic Management Advisor
NASA Technical Reports Server (NTRS)
Nedell, William; Erzberger, Heinz; Neuman, Frank
1990-01-01
The traffic management advisor (TMA) comprises algorithms, a graphical interface, and interactive tools for controlling the flow of air traffic into the terminal area. The primary algorithm incorporated in it is a real-time scheduler which generates efficient landing sequences and landing times for arrivals within about 200 n.m. from touchdown. A unique feature of the TMA is its graphical interface, which allows the traffic manager to modify the computer-generated schedules for specific aircraft while allowing the automatic scheduler to continue generating schedules for all other aircraft. The graphical interface also provides convenient methods for monitoring the traffic flow and changing scheduling parameters during real-time operation.
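To illustrate the flavour of such a scheduler, the toy sketch below assigns scheduled landing times from estimated times of arrival under a single minimum-separation constraint; the TMA's actual sequencing logic and aircraft-pair-dependent separations are considerably richer.

```python
def schedule_landings(etas, min_separation_s=90.0):
    """Assign scheduled landing times from estimated times of arrival (ETAs).

    A toy first-come-first-served scheduler enforcing one separation value.
    """
    order = sorted(etas, key=etas.get)                 # sequence by ETA
    schedule, last = {}, None
    for ac in order:
        t = etas[ac] if last is None else max(etas[ac], schedule[last] + min_separation_s)
        schedule[ac] = t
        last = ac
    return schedule

etas = {"AAL12": 0.0, "UAL7": 45.0, "DAL3": 200.0}     # seconds from now (made up)
print(schedule_landings(etas))    # {'AAL12': 0.0, 'UAL7': 90.0, 'DAL3': 200.0}
```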
Real-time dynamics of lattice gauge theories with a few-qubit quantum computer
NASA Astrophysics Data System (ADS)
Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer
2016-06-01
Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Yu, Jeffrey
1990-01-01
Limitations associated with the binary phase-only filter often used in optical correlators are presently circumvented in the writing of complex-valued data on a gray-scale spatial light modulator through the use of a computer-generated hologram (CGH) algorithm. The CGH encodes complex-valued data into nonnegative real CGH data in such a way that it may be encoded in any of the available gray-scale spatial light modulators. A CdS liquid-crystal light valve is used for the complex-valued CGH encoding; computer simulations and experimental results are compared, and the use of such a CGH filter as the synapse hologram in a holographic optical neural net is discussed.
Method of locating underground mines fires
Laage, Linneas; Pomroy, William
1992-01-01
An improved method of locating an underground mine fire by comparing the pattern of measured combustion product arrival times at detector locations with a real-time, computer-generated array of simulated patterns. A number of electronic fire detection devices are linked through telemetry to a control station on the surface. The mine's ventilation is modeled on a digital computer using network analysis software. The time required to locate a fire consists of the time required to model the mine's ventilation, generate the arrival time array, scan the array, and match measured arrival time patterns to the simulated patterns.
Self-motion perception: assessment by real-time computer-generated animations
NASA Technical Reports Server (NTRS)
Parker, D. E.; Phillips, J. O.
2001-01-01
We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials; whereas, slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animations procedure permits use of naive, untrained subjects who lack a vocabulary for reporting motion perception and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.
NASA Technical Reports Server (NTRS)
West, M. E.
1992-01-01
A real-time estimation filter which reduces sensitivity to system variations and reduces the amount of preflight computation is developed for the instrument pointing subsystem (IPS). The IPS is a three-axis stabilized platform developed to point various astronomical observation instruments aboard the shuttle. Currently, the IPS utilizes a linearized Kalman filter (LKF), with premission-defined gains, to compensate for system drifts and accumulated attitude errors. Since the a priori gains are generated for an expected system, variations result in a suboptimal estimation process. This report compares the performance of three real-time estimation filters with the current LKF implementation. An extended Kalman filter and a second-order Kalman filter are developed to account for the system nonlinearities, while a linear Kalman filter implementation assumes that the nonlinearities are negligible. The performance of each of the four estimation filters is compared with respect to accuracy, stability, settling time, robustness, and computational requirements. It is shown that, for the current IPS pointing requirements, the linear Kalman filter provides improved robustness over the LKF with lower computational requirements than the two real-time nonlinear estimation filters.
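For reference, one predict/update cycle of a generic linear Kalman filter, the simplest of the variants compared in the report, is sketched below; the state vector, models and noise covariances are toy values, not those of the IPS.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1                                   # toy attitude-error / drift-rate model
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
Q, R = 1e-6 * np.eye(2), np.array([[1e-4]])
for z in [0.01, 0.012, 0.015]:             # simulated attitude-error measurements
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)
```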
Real-time simulation of thermal shadows with EMIT
NASA Astrophysics Data System (ADS)
Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul
2016-05-01
Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high fidelity simulations capable of stimulating the sensors in real-time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real-time. EMIT is able to render radiance images in full 32-bit floating point precision using state of the art computer graphics cards and advanced shader programs. An important functionality of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a thermal balance calculation in four dimensions (3D geometry plus time, reaching up to several hours into the past). In this paper we show the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude our paper with the practical use of EMIT in a missile HWIL simulation.
Computer-Aided Software Engineering - An approach to real-time software development
NASA Technical Reports Server (NTRS)
Walker, Carrie K.; Turkovich, John J.
1989-01-01
A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.
Real-time operation without a real-time operating system for instrument control and data acquisition
NASA Astrophysics Data System (ADS)
Klein, Randolf; Poglitsch, Albrecht; Fumi, Fabio; Geis, Norbert; Hamidouche, Murad; Hoenle, Rainer; Looney, Leslie; Raab, Walfried; Viehhauser, Werner
2004-09-01
We are building the Field-Imaging Far-Infrared Line Spectrometer (FIFI LS) for the US-German airborne observatory SOFIA. The detector read-out system is driven by a clock signal at a certain frequency. This signal has to be provided, and all other sub-systems have to work synchronously to this clock. The data generated by the instrument have to be received by a computer in a timely manner. Usually these requirements are met with a real-time operating system (RTOS). In this presentation we show how we meet these demands differently, avoiding the stiffness of an RTOS. Digital I/O-cards with a large buffer separate the asynchronously working computers from the synchronously working instrument. The advantage is that the data processing computers do not need to process the data in real-time; it is sufficient that they can process the incoming data stream on average. But since the data are read in synchronously, the problem of relating commands and responses (data) has to be solved: the data arrive at a fixed rate. The receiving I/O-card buffers the data until the computer can access them. To relate the data to commands sent previously, the data are tagged by counters in the read-out electronics. These counters count the system's heartbeat and signals derived from it. The heartbeat and control signals synchronous with the heartbeat are sent by an I/O-card working as a pattern generator. Its buffer is continuously programmed with a pattern which is clocked out on the control lines. A counter in the I/O-card keeps track of the number of pattern words clocked out. By reading this counter, the computer knows the state of the instrument and the meaning of the data that will arrive with a certain time tag.
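A minimal sketch of the counter-tagging idea, assuming hypothetical names and a software stand-in for the I/O-card buffer: the synchronous side stamps each frame with the heartbeat count at which it was produced, and the asynchronous computer later uses that tag to relate frames back to the commands that were active.

    from collections import deque

    # Illustrative sketch of relating asynchronously processed data frames to
    # commands via heartbeat counter tags; not the actual FIFI LS software.
    buffer = deque()        # stands in for the I/O card's hardware buffer
    command_log = {}        # heartbeat count -> command active at that time

    def pattern_generator(count, command):
        """Synchronous side: record which command was active at a given
        heartbeat count, and tag the produced data frame with that count."""
        command_log[count] = command
        buffer.append({"tag": count, "data": f"frame@{count}"})

    def process_available_frames():
        """Asynchronous side: drain whatever has accumulated (fast enough on
        average) and use the tag to recover the corresponding command."""
        while buffer:
            frame = buffer.popleft()
            cmd = command_log.get(frame["tag"], "<unknown>")
            print(f"frame {frame['data']} belongs to command '{cmd}'")

    for n, cmd in enumerate(["idle", "idle", "scan", "scan", "chop"]):
        pattern_generator(n, cmd)
    process_available_frames()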
NASA Astrophysics Data System (ADS)
Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido
2004-09-01
One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is computing power, a direct consequence of the increasing complexity of the digital algorithms necessary for control signal generation. For this specific task many specialized, non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution may be provided by the integration of the real-time and off-line architectures, resulting in a hybrid control system architecture based on standard, available components that seeks to combine the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration may be provided by linking the two different architectures through the standard Ethernet network, whose data transfer speed has been increasing greatly in recent years, using the TCP/IP, UDP and raw Ethernet protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performance. Finally, we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.
Non-harmful insertion of data mimicking computer network attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neil, Joshua Charles; Kent, Alexander; Hash, Jr, Curtis Lee
Non-harmful data mimicking computer network attacks may be inserted in a computer network. Anomalous real network connections may be generated between a plurality of computing systems in the network. Data mimicking an attack may also be generated. The generated data may be transmitted between the plurality of computing systems using the real network connections and measured to determine whether an attack is detected.
A Real-time Strategy Agent Framework and Strategy Classifier for Computer Generated Forces
2012-06-01
Apollo experience report: Real-time display system
NASA Technical Reports Server (NTRS)
Sullivan, C. J.; Burbank, L. W.
1976-01-01
The real time display system used in the Apollo Program is described; the systematic organization of the system, which resulted from hardware/software trade-offs and the establishment of system criteria, is emphasized. Each basic requirement of the real time display system was met by a separate subsystem. The computer input multiplexer subsystem, the plotting display subsystem, the digital display subsystem, and the digital television subsystem are described. Also described are the automated display design and the generation of precision photographic reference slides required for the three display subsystems.
NASA Astrophysics Data System (ADS)
Jenkins, David R.; Basden, Alastair; Myers, Richard M.
2018-05-01
We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0kHz with less than 20μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966Hz, the maximum frame-rate of the camera, with jitter remaining below 20μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.
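Many AO real-time controllers spend most of their cycle on a large matrix-vector multiply of wavefront-sensor slopes with a precomputed reconstruction matrix, which is why core count and memory bandwidth dominate; the sketch below times such a multiply with NumPy using scaled-down, assumed dimensions, and is not the benchmark pipeline used in the paper.

    import numpy as np, time

    # Rough illustration of the matrix-vector multiply at the heart of many AO
    # RTC pipelines; the dimensions are assumed and deliberately scaled down.
    n_slopes, n_actuators = 10000, 5000
    R = np.random.rand(n_actuators, n_slopes).astype(np.float32)  # reconstructor
    s = np.random.rand(n_slopes).astype(np.float32)               # WFS slopes

    t0 = time.perf_counter()
    for _ in range(10):
        commands = R @ s          # DM command vector for one frame
    dt = (time.perf_counter() - t0) / 10
    print(f"one reconstruction: {dt*1e3:.2f} ms "
          f"({2*R.size/dt/1e9:.1f} GFLOP/s effective)")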
Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun
2014-01-01
In order to control the cascaded H-bridge (CHB) converter with a staircase modulation strategy in a real-time manner, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for a three-phase CHB converter is proposed in this paper. It costs little computation time and memory, and it has two steps. In the first step, a hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC) is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles are used to train an ANN, and the well-designed ANN can then generate optimal switching angles in a real-time manner. Compared with previous real-time algorithms, the proposed algorithm is suitable for a wider range of modulation index and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable DC voltage sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) when subjected to DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed and experimental results verify its correctness.
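A toy sketch of the second step, assuming a placeholder analytic curve in place of the HPSO-TVAC results: a small one-hidden-layer network is fitted offline to map modulation index to switching angles, so that angle generation at run time reduces to one cheap forward pass.

    import numpy as np

    # Toy illustration of fitting a small neural network to precomputed
    # "optimal" switching angles as a function of modulation index. The target
    # angles here are a placeholder analytic curve, not HPSO-TVAC results.
    rng = np.random.default_rng(0)
    m = np.linspace(0.5, 1.0, 200)[:, None]              # modulation index samples
    angles = np.column_stack([np.arcsin(np.clip(m[:, 0] * f, 0, 1))
                              for f in (0.3, 0.6, 0.9)])  # placeholder targets (rad)

    W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)
    lr = 0.05
    for _ in range(5000):                                 # plain gradient descent on MSE
        h = np.tanh(m @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - angles
        gW2 = h.T @ err / len(m); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)
        gW1 = m.T @ dh / len(m); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    def angles_for(mi):                                   # real-time lookup: one forward pass
        h = np.tanh(np.array([[mi]]) @ W1 + b1)
        return (h @ W2 + b2).ravel()

    print(np.degrees(angles_for(0.8)))   # switching angles in degrees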
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required during "run-time", as found in many typical modern artificial intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of "real-time" visual information applications. For example, LCFM can be implemented in a frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or controlling purposes, circuit or systems diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus status of the system itself can be displayed. The Living Color Frame Maker is user friendly with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.
Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors
López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena
2013-01-01
This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so, to determine the most suitable implementation platform, an analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, the generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
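To make the range-profile stage concrete, the sketch below forms a high-resolution range profile by inverse FFT of synthetic stepped-frequency returns; the radar parameters and scatterer positions are invented for illustration, and the motion compensation and ISAR formation stages are omitted.

    import numpy as np

    # Illustrative high-resolution range profile (HRRP) from stepped-frequency
    # returns; all parameters are synthetic, chosen only to show the IFFT-based
    # range compression step discussed in the paper.
    c = 3e8
    f = 9e9 + np.arange(128) * 2e6            # 128 frequency steps, 2 MHz apart
    ranges = np.array([10.0, 12.5, 15.2])     # scatterer down-range positions (m)
    amps = np.array([1.0, 0.6, 0.8])

    echo = sum(a * np.exp(-1j * 4 * np.pi * f * r / c) for a, r in zip(amps, ranges))
    profile = np.abs(np.fft.ifft(echo))       # range compression by IFFT

    dr = c / (2 * (f[-1] - f[0]))             # range resolution, ~0.59 m here
    peaks = np.argsort(profile)[-3:]
    print("range bins of strongest returns:", sorted(peaks),
          "bin size ~", round(dr, 2), "m")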
Optimal guidance with obstacle avoidance for nap-of-the-earth flight
NASA Technical Reports Server (NTRS)
Pekelsma, Nicholas J.
1988-01-01
The development of automatic guidance is discussed for helicopter Nap-of-the-Earth (NOE) and near-NOE flight. It deals with algorithm refinements relating to automated real-time flight path planning and to mission planning. With regard to path planning, it relates rotorcraft trajectory characteristics to the NOE computation scheme and addresses real-time computing issues and both ride quality issues and pilot-vehicle interfaces. The automated mission planning algorithm refinements include route optimization, automatic waypoint generation, interactive applications, and provisions for integrating the results into the real-time path planning software. A microcomputer based mission planning workstation was developed and is described. Further, the application of Defense Mapping Agency (DMA) digital terrain to both the mission planning workstation and to automatic guidance is both discussed and illustrated.
Benchmarking real-time RGBD odometry for light-duty UAVs
NASA Astrophysics Data System (ADS)
Willis, Andrew R.; Sahawneh, Laith R.; Brink, Kevin M.
2016-06-01
This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real-time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. Discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users information needed to better leverage RGBD odometry within the constraints of their systems.
Accelerated computer generated holography using sparse bases in the STFT domain.
Blinder, David; Schelkens, Peter
2018-01-22
Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU, and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.
Robot Control Through Brain Computer Interface For Patterns Generation
NASA Astrophysics Data System (ADS)
Belluomo, P.; Bucolo, M.; Fortuna, L.; Frasca, M.
2011-09-01
A Brain Computer Interface (BCI) system processes and translates neuronal signals, which mainly come from EEG instruments, into commands for controlling electronic devices. This system can allow people with motor disabilities to control external devices through the real-time modulation of their brain waves. In this context, an EEG-based BCI system that allows creative luminous artistic representations is presented here. The system, which has been designed and realized in our laboratory, interfaces the BCI2000 platform, performing real-time analysis of EEG signals, with a pair of moving luminescent twin robots. Experiments are also presented.
Minimalist design of a robust real-time quantum random number generator
NASA Astrophysics Data System (ADS)
Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.
2015-08-01
We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
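The paper's extractor is a precomputed look-up table; as a simpler stand-in, the sketch below applies a von Neumann-style comparison to pairs of simulated inter-click intervals to obtain unbiased bits, illustrating the kind of lightweight post-processing involved.

    import random

    # Illustration of turning raw detector-click time intervals into unbiased
    # bits. The paper uses a look-up-table extractor; here a simpler von
    # Neumann-style comparison of interval pairs stands in for it.
    def extract_bits(intervals):
        bits = []
        for t1, t2 in zip(intervals[::2], intervals[1::2]):
            if t1 < t2:
                bits.append(0)
            elif t1 > t2:
                bits.append(1)        # equal intervals are discarded
        return bits

    # Simulated exponential waiting times between single-photon detector clicks.
    raw = [random.expovariate(1.0) for _ in range(20)]
    print(extract_bits(raw))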
Stilwell, Daniel J; Bishop, Bradley E; Sylvester, Caleb A
2005-08-01
An approach to real-time trajectory generation for platoons of autonomous vehicles is developed from well-known control techniques for redundant robotic manipulators. The partially decentralized structure of this approach permits each vehicle to independently compute its trajectory in real-time using only locally generated information and low-bandwidth feedback generated by a system exogenous to the platoon. Our work is motivated by applications for which communications bandwidth is severely limited, such as platoons of autonomous underwater vehicles. The communication requirements for our trajectory generation approach are independent of the number of vehicles in the platoon, enabling platoons composed of a large number of vehicles to be coordinated despite limited communication bandwidth.
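A centralized toy version of the underlying redundant-manipulator idea is sketched below: the platoon-level task (here, centroid tracking) is distributed over the vehicles with a pseudoinverse, and a null-space term pursues a secondary objective. The one-dimensional setting, gains, and objectives are assumptions, and the paper's decentralized, low-bandwidth structure is not reproduced.

    import numpy as np

    # Toy pseudoinverse-based task distribution: the "task" is the platoon
    # centroid, under-determined by the individual vehicle positions, so the
    # pseudoinverse spreads the task velocity across vehicles while a
    # null-space term handles a secondary objective (spreading vehicles apart).
    n = 4                                     # vehicles moving on a line
    x = np.array([0.0, 1.0, 2.0, 3.0])        # vehicle positions
    J = np.full((1, n), 1.0 / n)              # centroid = (1/n) * sum(x)
    centroid_ref = 10.0

    for _ in range(200):                      # simple kinematic integration
        e = centroid_ref - J @ x              # task error (centroid tracking)
        v_task = np.linalg.pinv(J) @ (0.5 * e)
        spread = x - x.mean()                 # secondary objective gradient
        v_null = (np.eye(n) - np.linalg.pinv(J) @ J) @ (0.1 * spread)
        x = x + 0.1 * (v_task + v_null)

    print("final positions:", np.round(x, 2),
          "centroid:", round((J @ x).item(), 2))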
Enabling a high throughput real time data pipeline for a large radio telescope array with GPUs
NASA Astrophysics Data System (ADS)
Edgar, R. G.; Clark, M. A.; Dale, K.; Mitchell, D. A.; Ord, S. M.; Wayth, R. B.; Pfister, H.; Greenhill, L. J.
2010-10-01
The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exa-scale facilities.
Aggregated channels network for real-time pedestrian detection
NASA Astrophysics Data System (ADS)
Ghorban, Farzin; Marín, Javier; Su, Yu; Colombo, Alessandro; Kummert, Anton
2018-04-01
Convolutional neural networks (CNNs) have demonstrated their superiority in numerous computer vision tasks, yet their computational cost is prohibitive for many real-time applications such as pedestrian detection, which is usually performed on low-consumption hardware. In order to alleviate this drawback, most strategies focus on using a two-stage cascade approach. Essentially, in the first stage a fast method generates a significant but reduced number of high quality proposals that later, in the second stage, are evaluated by the CNN. In this work, we propose a novel detection pipeline that further benefits from the two-stage cascade strategy. More concretely, the enriched and subsequently compressed features used in the first stage are reused as the CNN input. As a consequence, a simpler network architecture, adapted to such small input sizes, allows us to achieve real-time performance and obtain results close to the state of the art while running significantly faster without the use of a GPU. In particular, considering that the proposed pipeline runs at frame rate, the achieved performance is highly competitive. We furthermore demonstrate that the proposed pipeline can on its own serve as an effective proposal generator.
Gilson, Nicholas D; Ng, Norman; Pavey, Toby G; Ryde, Gemma C; Straker, Leon; Brown, Wendy J
2016-11-01
This efficacy study assessed the added impact real time computer prompts had on a participatory approach to reduce occupational sedentary exposure and increase physical activity. Quasi-experimental. 57 Australian office workers (mean [SD]; age=47 [11] years; BMI=28 [5] kg/m²; 46 men) generated a menu of 20 occupational 'sit less and move more' strategies through participatory workshops, and were then tasked with implementing strategies for five months (July-November 2014). During implementation, a sub-sample of workers (n=24) used a chair sensor/software package (Sitting Pad) that gave real time prompts to interrupt desk sitting. Baseline and intervention sedentary behaviour and physical activity (GENEActiv accelerometer; mean work time percentages), and minutes spent sitting at desks (Sitting Pad; mean total time and longest bout) were compared between non-prompt and prompt workers using a two-way ANOVA. Workers spent close to three quarters of their work time sedentary, mostly sitting at desks (mean [SD]; total desk sitting time=371 [71] min/day; longest bout spent desk sitting=104 [43] min/day). Intervention effects were four times greater in workers who used real time computer prompts (8% decrease in work time sedentary behaviour and increase in light intensity physical activity; p<0.01). Respective mean differences between baseline and intervention total time spent sitting at desks, and the longest bout spent desk sitting, were 23 and 32 min/day lower in prompt than in non-prompt workers (p<0.01). In this sample of office workers, real time computer prompts facilitated the impact of a participatory approach on reductions in occupational sedentary exposure, and increases in physical activity. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Srivatsan, Raghavachari; Downing, David R.
1987-01-01
Discussed are the development and testing of a real-time takeoff performance monitoring algorithm. The algorithm is made up of two segments: a pretakeoff segment and a real-time segment. One-time inputs of ambient conditions and airplane configuration information are used in the pretakeoff segment to generate scheduled performance data for that takeoff. The real-time segment uses the scheduled performance data generated in the pretakeoff segment, runway length data, and measured parameters to monitor the performance of the airplane throughout the takeoff roll. Airplane and engine performance deficiencies are detected and annunciated. An important feature of this algorithm is the one-time estimation of the runway rolling friction coefficient. The algorithm was tested using a six-degree-of-freedom airplane model in a computer simulation. Results from a series of sensitivity analyses are also included.
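A heavily simplified point-mass illustration of the one-time friction estimate: measured acceleration early in the roll is combined with an assumed thrust and drag model to back out the rolling friction coefficient, which then feeds the rest of the prediction. All numbers and the model itself are illustrative, not the algorithm of the report.

    # Simplified point-mass sketch of a one-time rolling friction estimate.
    # All values are illustrative; the actual algorithm and airplane model
    # used in the report are far more detailed.
    g = 9.81
    mass = 60000.0          # kg
    thrust = 180000.0       # N, assumed constant at low speed
    drag = 5000.0           # N, aerodynamic drag (and small lift effect), assumed known
    a_measured = 2.55       # m/s^2, from accelerometers early in the ground roll

    # thrust - drag - mu * mass * g = mass * a  =>  solve once for mu
    mu = (thrust - drag - mass * a_measured) / (mass * g)
    print(f"estimated rolling friction coefficient: {mu:.3f}")

    # The estimate then feeds the monitor: predicted acceleration at the same
    # condition should match the measurement, and later deviations are flagged.
    a_predicted = (thrust - drag - mu * mass * g) / mass
    assert abs(a_predicted - a_measured) < 1e-9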
Large holographic displays for real-time applications
NASA Astrophysics Data System (ADS)
Schwerdtner, A.; Häussler, R.; Leister, N.
2008-02-01
Holography is generally accepted as the ultimate approach to displaying three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now, two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from achieving a satisfactory laboratory prototype, not to mention a marketable one. The reason is the small cell pitch of a CGH, resulting in a huge number of hologram cells and a very high computational load for encoding the CGH. For a long time these seemingly inevitable technological hurdles were not cleared, limiting the use of holography to special applications such as optical filtering, interference, beam forming, digital holography for capturing the 3-D shape of objects, and others. SeeReal Technologies has developed a new approach for real-time capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper will show that today's state-of-the-art reconfigurable Spatial Light Modulators (SLM), especially today's feasible LCD panels, are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. To achieve this, the original holographic concept of containing information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution and thus the computational load by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches has been built and was demonstrated at the SID conference and exhibition 2007 and at several other events.
Variable Generation Power Forecasting as a Big Data Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haupt, Sue Ellen; Kosovic, Branko
2016-10-10
To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day-ahead planning and real-time operations, the power from wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.
Real-time distributed video coding for 1K-pixel visual sensor networks
NASA Astrophysics Data System (ADS)
Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian
2016-07-01
Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.
NCAR's Experimental Real-time Convection-allowing Ensemble Prediction System
NASA Astrophysics Data System (ADS)
Schwartz, C. S.; Romine, G. S.; Sobash, R.; Fossell, K.
2016-12-01
Since April 2015, the National Center for Atmospheric Research's (NCAR's) Mesoscale and Microscale Meteorology (MMM) Laboratory, in collaboration with NCAR's Computational Information Systems Laboratory (CISL), has been producing daily, real-time, 10-member, 48-hr ensemble forecasts with 3-km horizontal grid spacing over the conterminous United States (http://ensemble.ucar.edu). These computationally-intensive, next-generation forecasts are produced on the Yellowstone supercomputer, have been embraced by both amateur and professional weather forecasters, are widely used by NCAR and university researchers, and receive considerable attention on social media. Initial conditions are supplied by NCAR's Data Assimilation Research Testbed (DART) software and the forecast model is NCAR's Weather Research and Forecasting (WRF) model; both WRF and DART are community tools. This presentation will focus on cutting-edge research results leveraging the ensemble dataset, including winter weather predictability, severe weather forecasting, and power outage modeling. Additionally, the unique design of the real-time analysis and forecast system and computational challenges and solutions will be described.
Physics-based real time ground motion parameter maps: the Central Mexico example
NASA Astrophysics Data System (ADS)
Ramirez Guzman, L.; Contreras Ruiz Esparza, M. G.; Quiroz Ramirez, A.; Carrillo Lucia, M. A.; Perez Yanez, C.
2013-12-01
We present the use of near real time ground motion simulations in the generation of ground motion parameter maps for Central Mexico. Simple algorithmic approaches for predicting ground motion parameters of interest to civil protection and risk engineering are based on the use of observed instrumental values, reported macroseismic intensities and their correlations, and ground motion prediction equations (GMPEs). A remarkable example of the use of this approach is the worldwide Shakemap generation program of the United States Geological Survey (USGS). Nevertheless, simple approaches rely strongly on the availability of instrumental and macroseismic intensity reports, as well as on the accuracy of the GMPEs and the site effect amplification calculation. In regions where information is scarce, the GMPEs, which are reference values in a mean sense, provide most of the ground motion information together with site effect amplification computed with simple parametric approaches (e.g., the use of Vs30), and such estimates have proven to be elusive. Here we propose an approach that includes physics-based ground motion predictions (PBGMP) corrected by instrumental information using a Bayesian Kriging approach (Kitanidis, 1983) and apply it to the central region of Mexico. The method assumes: 1) the availability of a large database of low and high frequency Green's functions developed for the region of interest, using fully three-dimensional and representative one-dimensional models, 2) enough real time data to obtain the centroid moment tensor and a slip rate function, and 3) a computational infrastructure that can be used to compute the source parameters and generate broadband synthetics in near real time, which will be combined with recorded instrumental data. By using a recently developed velocity model of Central Mexico and an efficient finite element octree-based implementation, we generate a database of source-receiver Green's functions, valid to 0.5 Hz, that covers 160 km x 300 km x 700 km of Mexico, including a large portion of the Pacific Mexican subduction zone. A subset of the velocity and strong ground motion data available in real time is processed to obtain the source parameters and to generate broadband ground motions on a dense grid (10 km x 10 km cells). These are later interpolated with instrumental values using a Bayesian Kriging method. Peak ground velocity and acceleration, as well as SA (T=0.1, 0.5, 1 and 2 s) maps, are generated for a small set of medium to large magnitude Mexican earthquakes (Mw=5 to 7.4). We evaluate each map by comparing against stations not considered in the computation.
Real-time Nyquist signaling with dynamic precision and flexible non-integer oversampling.
Schmogrow, R; Meyer, M; Schindler, P C; Nebendahl, B; Dreschmann, M; Meyer, J; Josten, A; Hillerkuss, D; Ben-Ezra, S; Becker, J; Koos, C; Freude, W; Leuthold, J
2014-01-13
We demonstrate two efficient processing techniques for Nyquist signals, namely computation of signals using dynamic precision as well as arbitrary rational oversampling factors. With these techniques along with massively parallel processing it becomes possible to generate and receive high data rate Nyquist signals with flexible symbol rates and bandwidths, a feature which is highly desirable for novel flexgrid networks. We achieved maximum bit rates of 252 Gbit/s in real-time.
Multisite Testing of the Discrete Address Beacon System (DABS).
1981-07-01
... downlink messages from an airborne transponder, in addition to performing the lockout function. Each sensor may provide surveillance and ... a distributed computer system containing 36 minicomputers, most of which are organized into groups (or ensembles) of four computers interfaced to a local data bus ... computer subsystem, which monitors in real time all communication and aircraft position and velocity. Depending upon the means used for scenario generation, ...
Continuous, real time microwave plasma element sensor
Woskov, Paul P.; Smatlak, Donna L.; Cohn, Daniel R.; Wittle, J. Kenneth; Titus, Charles H.; Surma, Jeffrey E.
1995-01-01
Microwave-induced plasma for continuous, real time trace element monitoring under harsh and variable conditions. The sensor includes a source of high power microwave energy and a shorted waveguide made of a microwave conductive, refractory material communicating with the source of the microwave energy to generate a plasma. The high power waveguide is constructed to be robust in a hot, hostile environment. It includes an aperture for the passage of gases to be analyzed and a spectrometer is connected to receive light from the plasma. Provision is made for real time in situ calibration. The spectrometer disperses the light, which is then analyzed by a computer. The sensor is capable of making continuous, real time quantitative measurements of desired elements, such as the heavy metals lead and mercury.
Modern Methods for fast generation of digital holograms
NASA Astrophysics Data System (ADS)
Tsang, P. W. M.; Liu, J. P.; Cheung, K. W. K.; Poon, T.-C.
2010-06-01
With the advancement of computers, digital holography (DH) has become an area of interest that has gained much popularity. Research findings derived from this technology enable holograms representing three dimensional (3-D) scenes to be acquired with optical means, or generated with numerical computation. In both cases, the holograms are in the form of numerical data that can be recorded, transmitted, and processed with digital techniques. On top of that, the availability of high capacity digital storage and wide-band communication technologies also points to the emergence of real time video holographic systems, enabling animated 3-D contents to be encoded as holographic data and distributed via existing media. At present, development in DH has reached a reasonable degree of maturity, but at the same time the heavy computation involved imposes difficulty in practical applications. In this paper, a summary of a number of successful accomplishments that have been made recently in overcoming this problem is presented. Subsequently, we propose an economical framework that is suitable for real time generation and transmission of holographic video signals over existing distribution media. The proposed framework includes an aspect of extending the depth range of the object scene, which is important for the display of large-scale objects.
Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, Naresh; Baone, Chaitanya; Veda, Santosh
2014-12-31
Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future, and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real-time will become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real-time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, expectations have been rising for more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.
Fast dictionary generation and searching for magnetic resonance fingerprinting.
Jun Xie; Mengye Lyu; Jian Zhang; Hui, Edward S; Wu, Ed X; Ze Wang
2017-07-01
A super-fast dictionary generation and searching (DGS) algorithm was developed for MR parameter quantification using magnetic resonance fingerprinting (MRF). MRF is a new technique for simultaneously quantifying multiple MR parameters using one temporally resolved MR scan, but it has a multiplicative computation complexity, resulting in a large burden of dictionary generation, storage, and retrieval, which can easily become intractable for state-of-the-art computers. Based on a retrospective analysis of the dictionary matching objective function, a multi-scale, ZOOM-like DGS algorithm, dubbed MRF-ZOOM, was proposed. MRF ZOOM is quasi-parameter-separable, so the multiplicative computation complexity is broken into an additive one. Evaluations showed that MRF ZOOM was hundreds or thousands of times faster than the original MRF parameter quantification method, even without counting the dictionary generation time. Using real data, it yielded nearly the same results as the original method. MRF ZOOM provides a super-fast solution for MR parameter quantification.
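A toy illustration of the multi-scale idea, using an invented two-exponential signal in place of a Bloch-simulated fingerprint: rather than searching a dense dictionary over the full parameter grid, the search matches on a coarse grid and repeatedly zooms in around the best entry.

    import numpy as np

    # Coarse-to-fine "zoom" search over two parameters; the signal model is a
    # stand-in for a Bloch-simulated fingerprint and the grids are illustrative.
    t = np.linspace(0.01, 3.0, 100)
    def fingerprint(p1, p2):
        s = np.exp(-t / p1) + 0.5 * np.exp(-t / p2)
        return s / np.linalg.norm(s)

    true = fingerprint(1.3, 0.25)                     # "measured" signal
    lo, hi = np.array([0.1, 0.05]), np.array([3.0, 1.0])
    for level in range(5):                            # successive zoom levels
        p1s = np.linspace(lo[0], hi[0], 9)
        p2s = np.linspace(lo[1], hi[1], 9)
        scores = [(abs(fingerprint(a, b) @ true), a, b) for a in p1s for b in p2s]
        _, best1, best2 = max(scores)                 # best inner-product match
        best = np.array([best1, best2])
        step = (hi - lo) / 8
        lo = np.maximum(0.01, best - step)            # shrink the search box
        hi = best + step
    print(f"estimated parameters: p1={best1:.3f}, p2={best2:.3f}")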
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations.
Software algorithm and hardware design for real-time implementation of new spectral estimator
2014-01-01
Background Real-time spectral analyzers can be difficult to implement for PC computer-based systems because of the potential for high computational cost, and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis, and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement, please see the Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214
Nonlinear Real-Time Optical Signal Processing.
1984-10-01
USCIPI Report 1130, University of Southern California. Final Technical Report, April 15, 1981 - June 30, 1984: Nonlinear Real-Time Optical Signal Processing. A.A. Sawchuk, Principal Investigator; T.C. Strand and A.R. Tanguay, Jr. October 1, 1984. ... logic system. A computer generated hologram fabricated on an e-beam system serves as a beamsteering interconnection element. A completely
Beck, J R; Fung, K; Lopez, H; Mongero, L B; Argenziano, M
2015-01-01
Delayed perfusionist identification of and reaction to abnormal clinical situations has been reported to contribute to increased mortality and morbidity. The use of automated data acquisition and compliance safety alerts has been widely accepted in many industries and its use may improve operator performance. A study was conducted to evaluate the reaction time of perfusionists with and without the use of compliance alerts. A compliance alert is a computer-generated pop-up banner on a pump-mounted computer screen to notify the user of clinical parameters outside of a predetermined range. A proctor monitored and recorded the time from an alert until the perfusionist recognized the parameter was outside the desired range. Group 1 included 10 cases utilizing compliance alerts. Group 2 included 10 cases with the primary perfusionist blinded to the compliance alerts. In Group 1, 97 compliance alerts were identified and, in Group 2, 86 alerts were identified. The average reaction time in the group using compliance alerts was 3.6 seconds. The average reaction time in the group not using the alerts was nearly ten times longer than in the group using computer-assisted, real-time data feedback. Some believe that real-time computer data acquisition and feedback improves perfusionist performance and may allow clinicians to identify and rectify potentially dangerous situations. © The Author(s) 2014.
V-Man Generation for 3-D Real Time Animation. Chapter 5
NASA Technical Reports Server (NTRS)
Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang
2007-01-01
The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real-time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms coming from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon the unique set of skills manufactured during character creation. The key to the system is the automated creation of realistic V-Men, which does not require the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.
Computational Tools and Facilities for the Next-Generation Analysis and Design Environment
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)
1997-01-01
This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.
Engineering study for the functional design of a multiprocessor system
NASA Technical Reports Server (NTRS)
Miller, J. S.; Vandever, W. H.; Stanten, S. F.; Avakian, A. E.; Kosmala, A. L.
1972-01-01
The results are presented of a study to generate a functional system design of a multiprocessing computer system capable of satisfying the computational requirements of a space station. These data management system requirements were specified to include: (1) real time control, (2) data processing and storage, (3) data retrieval, and (4) remote terminal servicing.
Analytic double product integrals for all-frequency relighting.
Wang, Rui; Pan, Minghao; Chen, Weifeng; Ren, Zhong; Zhou, Kun; Hua, Wei; Bao, Hujun
2013-07-01
This paper presents a new technique for real-time relighting of static scenes with all-frequency shadows from complex lighting and highly specular reflections from spatially varying BRDFs. The key idea is to depict the boundaries of visible regions using piecewise linear functions, and convert the shading computation into double product integrals—the integral of the product of lighting and BRDF on visible regions. By representing lighting and BRDF with spherical Gaussians and approximating their product using Legendre polynomials locally in visible regions, we show that such double product integrals can be evaluated in an analytic form. Given the precomputed visibility, our technique computes the visibility boundaries on the fly at each shading point, and performs the analytic integral to evaluate the shading color. The result is a real-time all-frequency relighting technique for static scenes with dynamic, spatially varying BRDFs, which can generate more accurate shadows than the state-of-the-art real-time PRT methods.
Araki, Hiromitsu; Takada, Naoki; Niwase, Hiroaki; Ikawa, Shohei; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2015-12-01
We propose real-time time-division color electroholography using a single graphics processing unit (GPU) and a simple synchronization system of reference light. To facilitate real-time time-division color electroholography, we developed a light emitting diode (LED) controller with a universal serial bus (USB) module and the drive circuit for reference light. A one-chip RGB LED connected to a personal computer via an LED controller was used as the reference light. A single GPU calculates three computer-generated holograms (CGHs) suitable for red, green, and blue colors in each frame of a three-dimensional (3D) movie. After CGH calculation using a single GPU, the CPU can synchronize the CGH display with the color switching of the one-chip RGB LED via the LED controller. Consequently, we succeeded in real-time time-division color electroholography for a 3D object consisting of around 1000 points per color when an NVIDIA GeForce GTX TITAN was used as the GPU. Furthermore, we implemented the proposed method in various GPUs. The experimental results showed that the proposed method was effective for various GPUs.
2008-02-27
between the PHY layer and, for example, a host PC computer. The PC wants to generate and receive a sequence of data packets. The PC may also want to send...the testbed is quite similar. Given the intense computational requirements of SVD and other matrix-mode operations needed to support eigen spreading a...platform for real time operation. This task is probably the major challenge in the development of the testbed. All compute-intensive tasks will be
Program Aids Visualization Of Data
NASA Technical Reports Server (NTRS)
Truong, L. V.
1995-01-01
The Living Color Frame System (LCFS) computer program was developed to solve some problems that arise in connection with the generation of real-time graphical displays of numerical data and of the statuses of systems. The need for a program like LCFS arises because computer graphics are often applied for better understanding and interpretation of data under observation, and these graphics become more complicated when animation is required during run time. LCFS eliminates the need for custom graphical-display software for application programs. Written in Turbo C++.
Real-time global illumination on mobile device
NASA Astrophysics Data System (ADS)
Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.
2014-02-01
We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al.1 and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC, due to the limited computing resources in mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
Real-time stereo matching using orthogonal reliability-based dynamic programming.
Gong, Minglun; Yang, Yee-Hong
2007-03-01
A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for approximately 60%-80% of pixels at a rate of approximately 10-20 frames per second. If needed, the algorithm can be configured for generating full density disparity maps.
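The paper's orthogonal reliability-based dynamic programming runs on graphics hardware; the sketch below is only a minimal single-scanline DP stereo pass in NumPy, with an assumed absolute-difference data cost and a linear smoothness penalty, to illustrate the kind of cost aggregation and path backtracking involved.

```python
import numpy as np

def scanline_dp_disparity(left_row, right_row, max_disp=16, smooth=0.1):
    """Minimal single-scanline dynamic-programming stereo sketch.

    Data cost is the absolute intensity difference; a linear penalty on
    disparity jumps between neighbouring pixels enforces smoothness.
    """
    w = len(left_row)
    data = np.full((w, max_disp), np.inf)
    for x in range(w):
        for d in range(max_disp):
            if x - d >= 0:
                data[x, d] = abs(float(left_row[x]) - float(right_row[x - d]))

    cost = np.empty_like(data)
    back = np.zeros((w, max_disp), dtype=int)
    cost[0] = data[0]
    for x in range(1, w):
        for d in range(max_disp):
            trans = cost[x - 1] + smooth * np.abs(np.arange(max_disp) - d)
            back[x, d] = int(np.argmin(trans))
            cost[x, d] = data[x, d] + trans[back[x, d]]

    disp = np.zeros(w, dtype=int)               # backtrack the cheapest path
    disp[-1] = int(np.argmin(cost[-1]))
    for x in range(w - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp
```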
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television avoiding psychological effects. To create truly fascinating three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The depth calculation method from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision is proposed and verified.
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradient (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD) and automatic target recognition (ATR). However, HOG feature extraction is unsuitable for direct hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be completed within one pixel period by these computing units.
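A software sketch of the HOG computation that such an FPGA pipeline implements: centred gradients, unsigned orientation binning, and per-cell magnitude histograms. The cell size and bin count below are common defaults, not values taken from the paper, and the arctangent and square-root calls are exactly the operations the hardware design simplifies.

```python
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Minimal HOG sketch: gradients, orientation binning, per-cell histograms.

    A hardware pipeline would replace arctan and sqrt with lookup tables or
    other simplified approximations, as discussed in the abstract above.
    """
    img = image.astype(np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]           # centred horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]           # centred vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation in [0, 180)

    h, w = img.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins), dtype=np.float32)
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            b = bin_idx[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()      # accumulate gradient magnitude per bin
    return hist
```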
Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Duong, Vu A.
2012-01-01
A novel cognitive computing architecture is conceptualized for processing multiple channels of multi-modal sensory data streams simultaneously, and fusing the information in real time to generate intelligent reaction sequences. This unique architecture is capable of assimilating parallel data streams that could be analog, digital, synchronous/asynchronous, and could be programmed to act as a knowledge synthesizer and/or an "intelligent perception" processor. In this architecture, the bio-inspired models of visual pathway and olfactory receptor processing are combined as processing components, to achieve the composite function of "searching for a source of food while avoiding the predator." The architecture is particularly suited for scene analysis from visual data and odorant.
Continuous, real time microwave plasma element sensor
Woskov, P.P.; Smatlak, D.L.; Cohn, D.R.; Wittle, J.K.; Titus, C.H.; Surma, J.E.
1995-12-26
Microwave-induced plasma is described for continuous, real time trace element monitoring under harsh and variable conditions. The sensor includes a source of high power microwave energy and a shorted waveguide made of a microwave conductive, refractory material communicating with the source of the microwave energy to generate a plasma. The high power waveguide is constructed to be robust in a hot, hostile environment. It includes an aperture for the passage of gases to be analyzed and a spectrometer is connected to receive light from the plasma. Provision is made for real time in situ calibration. The spectrometer disperses the light, which is then analyzed by a computer. The sensor is capable of making continuous, real time quantitative measurements of desired elements, such as the heavy metals lead and mercury. 3 figs.
Computational model for behavior shaping as an adaptive health intervention strategy.
Berardi, Vincent; Carretero-González, Ricardo; Klepeis, Neil E; Ghanipoor Machiani, Sahar; Jahangiri, Arash; Bellettiere, John; Hovell, Melbourne
2018-03-01
Adaptive behavioral interventions that automatically adjust in real-time to participants' changing behavior, environmental contexts, and individual history are becoming more feasible as the use of real-time sensing technology expands. This development is expected to improve shortcomings associated with traditional behavioral interventions, such as the reliance on imprecise intervention procedures and limited/short-lived effects. However, the adaptation strategies of such just-in-time adaptive interventions (JITAIs) often lack a theoretical foundation, and increasing the theoretical fidelity of a trial has been shown to increase effectiveness. This research explores the use of shaping, a well-known process from behavioral theory for engendering or maintaining a target behavior, as a JITAI adaptation strategy. A computational model of behavior dynamics and operant conditioning was modified to incorporate the construct of behavior shaping by adding the ability to vary, over time, the range of behaviors that were reinforced when emitted. Digital experiments were performed with this updated model for a range of parameters in order to identify the behavior-shaping features that optimally generated target behavior. Narrowing the range of reinforced behaviors continuously in time led to better outcomes than a discrete narrowing of the reinforcement window. Rapid narrowing followed by more moderate decreases in window size was more effective in generating target behavior than the inverse scenario. The computational shaping model represents an effective tool for investigating JITAI adaptation strategies. Model parameters must now be translated from the digital domain to real-world experiments so that model findings can be validated.
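A toy simulation, under assumptions not taken from the paper, contrasting continuous versus discrete narrowing of the reinforcement window; it only illustrates the shaping idea of reinforcing responses that fall inside a window centred on the target behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_shaping(narrowing="continuous", steps=2000, target=1.0,
                     w0=1.0, w_final=0.05, learn=0.02):
    """Toy shaping simulation: behavior drifts toward a reinforced response only
    when that response falls inside the current reinforcement window.

    The window shrinks from w0 to w_final either continuously or in four
    discrete jumps; all parameter values here are illustrative, not fitted.
    """
    behavior = 0.0
    for t in range(steps):
        frac = t / steps
        if narrowing == "continuous":
            w = w0 + (w_final - w0) * frac
        else:  # discrete: four equal-sized jumps
            w = w0 + (w_final - w0) * (np.floor(frac * 4) / 4)
        response = behavior + rng.normal(0.0, 0.2)     # variability in emitted behavior
        if abs(response - target) <= w:                # reinforce responses inside the window
            behavior += learn * (response - behavior)
    return behavior

print(simulate_shaping("continuous"), simulate_shaping("discrete"))
```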
Svoboda, David; Ulman, Vladimir
2017-01-01
The proper analysis of biological microscopy images is an important and complex task. Therefore, it requires verification of all steps involved in the process, including image segmentation and tracking algorithms. It is generally better to verify algorithms with computer-generated ground truth datasets, which, compared to manually annotated data, nowadays have reached high quality and can be produced in large quantities even for 3D time-lapse image sequences. Here, we propose a novel framework, called MitoGen, which is capable of generating ground truth datasets with fully 3D time-lapse sequences of synthetic fluorescence-stained cell populations. MitoGen shows biologically justified cell motility, shape and texture changes as well as cell divisions. Standard fluorescence microscopy phenomena such as photobleaching, blur with real point spread function (PSF), and several types of noise, are simulated to obtain realistic images. The MitoGen framework is scalable in both space and time. MitoGen generates visually plausible data that shows good agreement with real data in terms of image descriptors and mean square displacement (MSD) trajectory analysis. Additionally, it is also shown in this paper that four publicly available segmentation and tracking algorithms exhibit similar performance on both real and MitoGen-generated data. The implementation of MitoGen is freely available.
Characterization of real-time computers
NASA Technical Reports Server (NTRS)
Shin, K. G.; Krishna, C. M.
1984-01-01
A real-time system consists of a computer controller and controlled processes. Despite the synergistic relationship between these two components, they have been traditionally designed and analyzed independently of and separately from each other; namely, computer controllers by computer scientists/engineers and controlled processes by control scientists. As a remedy for this problem, in this report real-time computers are characterized by performance measures based on computer controller response time that are: (1) congruent to the real-time applications, (2) able to offer an objective comparison of rival computer systems, and (3) experimentally measurable/determinable. These measures, unlike others, provide the real-time computer controller with a natural link to controlled processes. In order to demonstrate their utility and power, these measures are first determined for example controlled processes on the basis of control performance functionals. They are then used for two important real-time multiprocessor design applications - the number-power tradeoff and fault-masking and synchronization.
Associative architecture for image processing
NASA Astrophysics Data System (ADS)
Adar, Rutie; Akerib, Avidan
1997-09-01
This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on a patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz and a 0.6 micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.
Recent achievements in real-time computational seismology in Taiwan
NASA Astrophysics Data System (ADS)
Lee, S.; Liang, W.; Huang, B.
2012-12-01
Real-time computational seismology is now achievable, but it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and a moment tensor inversion (CMT) technique. A real-time online earthquake simulation service (ROS) is also open to researchers and for public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes of an earthquake occurring (RMT obtains point-source information in less than 120 s; ROS completes a 3D simulation in less than 3 minutes). All of these computational results are now posted on the internet in real time. For more information, please visit the real-time computational seismology earthquake report webpage (RCS).
Hybrid techniques for the digital control of mechanical and optical systems
NASA Astrophysics Data System (ADS)
Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido
2004-07-01
One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is computing power, a direct consequence of the increasing complexity of the digital algorithms necessary for control signal generation. For this specific task many specialised non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution is the integration of the real-time and off-line architectures into a hybrid control system based on standard available components, combining the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration may be achieved by linking the two architectures through a standard Ethernet network, whose data transfer speed has been increasing greatly in recent years, using the TCP/IP and UDP protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performance. Finally, we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3 m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.
Real-time modeling and simulation of distribution feeder and distributed resources
NASA Astrophysics Data System (ADS)
Singh, Pawan
The analysis of electrical systems dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis to improve the electrical system. Real-time computer simulations can answer many what-if scenarios in an existing or proposed power system. In this thesis, the standard IEEE 13-node distribution feeder is developed and validated on the real-time platform OPAL-RT. The concept and the challenges of real-time simulation are studied and addressed. Distributed energy resources, including commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on the real-time platform. A microgrid encompasses a portion of an electric power distribution system located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. The microgrid can operate in grid-connected and islanded modes, and both operating modes are studied in the last chapter. Finally, a simple microgrid controller for energy management and protection of the microgrid is modeled and simulated on the real-time platform.
Improved Air Combat Awareness; with AESA and Next-Generation Signal Processing
2002-09-01
competence network, building techniques, software development environment, communication, computer architecture, modeling, real-time programming, radar ... memory access, skewed load and store, 3.2 GB/s bandwidth; performance: 400 MFLOPS. Runtime environment: custom runtime routines, driver routines, hardware
Fault recovery for real-time, multi-tasking computer system
NASA Technical Reports Server (NTRS)
Hess, Richard (Inventor); Kelly, Gerald B. (Inventor); Rogers, Randy (Inventor); Stange, Kent A. (Inventor)
2011-01-01
System and methods for providing a recoverable real time multi-tasking computer system are disclosed. In one embodiment, a system comprises a real time computing environment, wherein the real time computing environment is adapted to execute one or more applications and wherein each application is time and space partitioned. The system further comprises a fault detection system adapted to detect one or more faults affecting the real time computing environment and a fault recovery system, wherein upon the detection of a fault the fault recovery system is adapted to restore a backup set of state variables.
Real-time simulation of the retina allowing visualization of each processing stage
NASA Astrophysics Data System (ADS)
Teeters, Jeffrey L.; Werblin, Frank S.
1991-08-01
The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina, and using a special-purpose image processing computer (PIPE) to implement the model in real time. Processing in the model is organized into stages corresponding to computations performed by each retinal cell type. The final stage is the transient (change detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output which is displayed on a TV monitor. By changing the retina cell driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina which are fed into the brain. The dynamical aspects make these patterns very different from those generated by the common DOG (Difference of Gaussian) model of receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
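A rough sketch of two of the stages described above: a difference-of-Gaussians (DOG) spatial stage followed by a rectified, change-detecting (transient) stage. It is a static caricature of the model, not the cell-by-cell neurophysiological simulation run on the PIPE hardware, and the sigma values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_like_stage(frame, prev_frame, sigma_c=1.0, sigma_s=3.0):
    """Sketch of a DOG spatial stage followed by a transient temporal stage.

    frame, prev_frame: consecutive grayscale camera frames as 2D arrays.
    Returns the centre-surround response and a rectified change signal.
    """
    def dog(img):
        center = gaussian_filter(img.astype(float), sigma_c)
        surround = gaussian_filter(img.astype(float), sigma_s)
        return center - surround                      # centre-surround antagonism

    current = dog(frame)
    transient = current - dog(prev_frame)             # temporal change of the DOG response
    return current, np.maximum(transient, 0.0)        # rectified, ganglion-like output
```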
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.; Palikonda, R.; Smith, W. L., Jr.; Spangenberg, D.
2016-12-01
The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) processes and derives near real-time (NRT) global cloud products from operational geostationary satellite imager datasets. These products are used in NRT to improve forecast models and aircraft icing warnings, and to support aircraft field campaigns. Next-generation satellites, such as the Japanese Himawari-8 and the upcoming NOAA GOES-R, present challenges for NRT data processing and product dissemination due to the increase in temporal and spatial resolution; the volume of data is expected to increase approximately ten-fold. This increase in data volume will require additional IT resources to keep up with the processing demands of NRT requirements, and these resources are not readily available due to cost and other technical limitations. To anticipate and meet these computing resource requirements, we have employed a hybrid cloud computing environment to augment the generation of SatCORPS products. This paper will describe the workflow to ingest, process, and distribute SatCORPS products and the technologies used. Lessons learned from working on both AWS Cloud and GovCloud will be discussed: benefits, similarities, and differences that could impact the decision to use cloud computing and storage. A detailed cost analysis will be presented. In addition, future cloud utilization, parallelization, and architecture layout will be discussed for GOES-R.
NASA Astrophysics Data System (ADS)
Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer
2018-01-01
The volume of data being collected, analyzed, and stored has exploded in recent years, particularly in relation to activity on cloud computing platforms, and large-scale data processing, analysis, and storage models such as cloud computing are increasingly widely used. Today, the major challenge is how to monitor and control these massive amounts of data and perform analysis in real time at scale. Traditional methods and model systems are unable to cope with these quantities of data in real time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms and Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during fine-grained monitoring and yields the most effective and cost-saving fault repair through three control steps: (I) data collection; (II) analysis engine; and (III) decision engine. We found that running this methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.
Reactor transient control in support of PFR/TREAT TUCOP experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, D.R.; Larsen, G.R.; Harrison, L.J.
1984-01-01
Unique energy deposition and experiment control requirements posed by the PFR/TREAT series of transient undercooling/overpower (TUCOP) experiments resulted in equally unique TREAT reactor operations. New reactor control computer algorithms were written and used with the TREAT reactor control computer system to perform such functions as early power burst generation (based on test train flow conditions), burst generation produced by a step insertion of reactivity following a controlled power ramp, and shutdown (SCRAM) initiators based on both test train conditions and energy deposition. Specialized hardware was constructed to simulate test train inputs to the control computer system so that computer algorithms could be tested in real time without irradiating the experiment.
Augmented Reality Comes to Physics
NASA Astrophysics Data System (ADS)
Buesing, Mark; Cook, Michael
2013-04-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS and mobile apps, such as Tagwhat and Star Chart (a must for astronomy class). The yellow line marking first downs in a televised football game2 and the enhanced puck that makes televised hockey easier to follow3 both use augmented reality to do the job.
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations. PMID:29326579
Efficient scatter model for simulation of ultrasound images from computed tomography data
NASA Astrophysics Data System (ADS)
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was improved, allowing a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, with a performance of up to 55 fps achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
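A hedged sketch of the scatter model as described: multiplicative noise applied to an echogenicity map derived from CT, then convolved with a point spread function. The Gaussian PSF and noise strength below are placeholders for the transducer-specific, tailored PSFs used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def speckle_from_ct_slice(ct_slice, psf, noise_strength=0.5, seed=0):
    """Sketch of the scatter model: multiplicative noise on an echogenicity map,
    convolved with a point spread function (PSF).

    ct_slice is assumed to be already mapped to acoustic echogenicity; in
    practice the PSF depends on the transducer and is anisotropic.
    """
    rng = np.random.default_rng(seed)
    noise = 1.0 + noise_strength * rng.standard_normal(ct_slice.shape)
    scatter = ct_slice * noise                        # multiplicative scatterer field
    return fftconvolve(scatter, psf, mode="same")     # blur with the system PSF

# A separable Gaussian PSF stands in for a real transducer response here
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 / 18.0 + y**2 / 4.0))
```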
Real-time computer-generated hologram by means of liquid-crystal television spatial light modulator
NASA Technical Reports Server (NTRS)
Mok, Fai; Psaltis, Demetri; Diep, Joseph; Liu, Hua-Kuang
1986-01-01
The usefulness of an inexpensive liquid-crystal television (LCTV) as a spatial light modulator for coherent optical processing in the writing and reconstruction of a single computer-generated hologram has been demonstrated. The thickness nonuniformities of the LCTV screen were examined in a Mach-Zehnder interferometer, and the phase distortions were successfully removed using a technique in which the LCTV screen was submerged in a liquid gate filled with an index-matching nonconductive mineral oil with a refractive index of about 1.45.
Extracting Depth From Motion Parallax in Real-World and Synthetic Displays
NASA Technical Reports Server (NTRS)
Hecht, Heiko; Kaiser, Mary K.; Aiken, William; Null, Cynthia H. (Technical Monitor)
1994-01-01
In psychophysical studies on human sensitivity to visual motion parallax (MP), the use of computer displays is pervasive. However, a number of potential problems are associated with such displays: cue conflicts arise when observers accommodate to the screen surface, and observer head and body movements are often not reflected in the displays. We investigated observers' sensitivity to depth information in MP (slant, depth order, relative depth) using various real-world displays and their computer-generated analogs. Angle judgments of real-world stimuli were consistently superior to judgments that were based on computer-generated stimuli. Similar results were found for perceived depth order and relative depth. Perceptual competence of observers tends to be underestimated in research that is based on computer generated displays. Such findings cannot be generalized to more realistic viewing situations.
Computational solutions to large-scale data management and analysis
Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.
2011-01-01
Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155
De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.
2012-01-01
Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
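A generic sketch of the radial-basis-function step described in the Methods above: RBF weights are fitted off-line to precomputed input/output pairs, and the trained network is then evaluated cheaply at run time. The inputs, outputs and kernel width here are generic placeholders, not the paper's finite-element training data.

```python
import numpy as np

def rbf_train(X, Y, centers, sigma=1.0, reg=1e-6):
    """Fit radial-basis-function weights mapping inputs X (n, d) to outputs Y (n, m).

    In the system described above the inputs would be prescribed nodal
    displacements and the outputs precomputed deformation fields and forces.
    """
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-np.square(dists) / (2 * sigma**2))              # design matrix
    W = np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(centers)), Phi.T @ Y)
    return W

def rbf_eval(x, centers, W, sigma=1.0):
    """Cheap real-time evaluation: one distance computation per neuron."""
    phi = np.exp(-np.square(np.linalg.norm(x[None, :] - centers, axis=1)) / (2 * sigma**2))
    return phi @ W
```

The off-line/on-line split is the design point: the expensive finite-element solves happen once, while the run-time cost scales only with the number of neurons, which is what makes kilohertz-rate haptic updates plausible.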
In-flight thrust determination on a real-time basis
NASA Technical Reports Server (NTRS)
Ray, R. J.; Carpenter, T.; Sandlin, T.
1984-01-01
A real-time computer program was implemented on an F-15 jet fighter to monitor the in-flight performance of a Digital Electronic Engine Control (DEEC) F100 engine. The application of two gas generator methods to calculate in-flight thrust in real time is described. A comparison was made between the actual results and those predicted by an engine model simulation. The percent difference between the two methods was compared to the predicted uncertainty based on instrumentation and model uncertainty, and agreed closely with the results found during altitude facility testing. Data were obtained from acceleration runs at various altitudes at maximum power settings with and without afterburner. Real-time in-flight thrust measurement was a major advancement for flight test productivity and was accomplished with no loss in accuracy over previous post-flight methods.
On Real-Time Systems Using Local Area Networks.
1987-07-01
July 1987, CS-TR-1892. On Real-Time Systems Using Local Area Networks. Shem-Tov Levi and Satish K. Tripathi, Department of Computer Science ... constraints and the clock systems that feed the time to real-time systems. A model for real-time systems based on LAN communication is presented in
Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm.
Tizzoni, Michele; Bajardi, Paolo; Poletto, Chiara; Ramasco, José J; Balcan, Duygu; Gonçalves, Bruno; Perra, Nicola; Colizza, Vittoria; Vespignani, Alessandro
2012-12-13
Mathematical and computational models for infectious diseases are increasingly used to support public-health decisions; however, their reliability is currently under debate. Real-time forecasts of epidemic spread using data-driven models have been hindered by the technical challenges posed by parameter estimation and validation. Data gathered for the 2009 H1N1 influenza crisis represent an unprecedented opportunity to validate real-time model predictions and define the main success criteria for different approaches. We used the Global Epidemic and Mobility Model to generate stochastic simulations of epidemic spread worldwide, yielding (among other measures) the incidence and seeding events at a daily resolution for 3,362 subpopulations in 220 countries. Using a Monte Carlo Maximum Likelihood analysis, the model provided an estimate of the seasonal transmission potential during the early phase of the H1N1 pandemic and generated ensemble forecasts for the activity peaks in the northern hemisphere in the fall/winter wave. These results were validated against the real-life surveillance data collected in 48 countries, and their robustness assessed by focusing on 1) the peak timing of the pandemic; 2) the level of spatial resolution allowed by the model; and 3) the clinical attack rate and the effectiveness of the vaccine. In addition, we studied the effect of data incompleteness on the prediction reliability. Real-time predictions of the peak timing are found to be in good agreement with the empirical data, showing strong robustness to data that may not be accessible in real time (such as pre-exposure immunity and adherence to vaccination campaigns), but that affect the predictions for the attack rates. The timing and spatial unfolding of the pandemic are critically sensitive to the level of mobility data integrated into the model. Our results show that large-scale models can be used to provide valuable real-time forecasts of influenza spreading, but they require high-performance computing. The quality of the forecast depends on the level of data integration, thus stressing the need for high-quality data in population-based models, and of progressive updates of validated available empirical knowledge to inform these models.
Luján, J L; Crago, P E
2004-11-01
Neuroprosthetic systems can be used to restore hand grasp and wrist control in individuals with C5/C6 spinal cord injury. A computer-based system was developed for the implementation, tuning and clinical assessment of neuroprosthetic controllers, using off-the-shelf hardware and software. The computer system turned a Pentium III PC running Windows NT into a non-dedicated, real-time system for the control of neuroprostheses. Software execution (written using the high-level programming languages LabVIEW and MATLAB) was divided into two phases: training and real-time control. During the training phase, the computer system collected input/output data by stimulating the muscles and measuring the muscle outputs in real time, analysed the recorded data, generated a set of training data and trained an artificial neural network (ANN)-based controller. During real-time control, the computer system stimulated the muscles using stimulus pulsewidths predicted by the ANN controller in response to a sampled input from an external command source, to provide independent control of hand grasp and wrist posture. System timing was stable, reliable and capable of providing muscle stimulation at frequencies up to 24 Hz. To demonstrate the application of the test-bed, an ANN-based controller was implemented with three inputs and two independent channels of stimulation. The ANN controller's ability to control hand grasp and wrist angle independently was assessed by quantitative comparison of the outputs of the stimulated muscles with a set of desired grasp or wrist postures determined by the command signal. Controller performance results were mixed, but the platform provided the tools to implement and assess future controller designs.
Interconnection requirements in avionic systems
NASA Astrophysics Data System (ADS)
Vergnolle, Claude; Houssay, Bruno
1991-04-01
The future aircraft generation will have thousands of smart electromagnetic sensors distributed all over. Each sensor is connected by fiber links to the main-frame computer in charge of real-time signal correlation. Such a computer must be compactly built and massively parallel: it needs 3D optical free-space interconnects between neighbouring boards and reconfigurable interconnects via a holographic backplane. The optical interconnect facilities will also be used to build a fault-tolerant computer through large redundancy.
Real time 3D structural and Doppler OCT imaging on graphics processing units
NASA Astrophysics Data System (ADS)
Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr
2013-03-01
In this report the application of graphics processing unit (GPU) programming for real-time 3D Fourier-domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flow in capillary vessels, is presented. Generally, the time needed to process FdOCT data on the main processor of the computer (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. The 3D imaging in the same mode, for volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper the software architecture, the organization of the threads and the optimizations applied are described. For illustration, screen shots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and the human eye in vivo are presented.
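A minimal NumPy sketch of the per-frame processing chain described above: an FFT of each spectrum yields the structural depth profile, and the phase difference between consecutive A-scans yields a Doppler estimate. The GPU implementation parallelizes exactly these steps; no resampling, dispersion compensation, or windowing is shown here.

```python
import numpy as np

def fdoct_structural_and_doppler(spectra):
    """Sketch of FdOCT processing: FFT per A-scan for structure, phase
    difference between consecutive A-scans for a Doppler (flow) estimate.

    spectra: (n_ascans, n_pixels) background-subtracted fringe data.
    """
    depth = np.fft.fft(spectra, axis=1)[:, : spectra.shape[1] // 2]
    structural = 20.0 * np.log10(np.abs(depth) + 1e-12)     # log-intensity image
    doppler = np.angle(depth[1:] * np.conj(depth[:-1]))     # phase shift between A-scans
    return structural, doppler
```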
A tool for modeling concurrent real-time computation
NASA Technical Reports Server (NTRS)
Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.
1990-01-01
Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands use of knowledge-based problem solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment) powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic for the last two decades, considering the sizeable amount of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a unique technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour is generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus addressing the problem of realistic AR systems. PMID:25268480
Scalable Multiprocessor for High-Speed Computing in Space
NASA Technical Reports Server (NTRS)
Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard
2004-01-01
A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. Hard real-time applications are applications, like real-time radar signal processing, in which the data to be processed are generated at hundreds of pulses per second, each pulse requiring millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
Real-time simulation of three-dimensional shoulder girdle and arm dynamics.
Chadwick, Edward K; Blana, Dimitra; Kirsch, Robert F; van den Bogert, Antonie J
2014-07-01
Electrical stimulation is a promising technology for the restoration of arm function in paralyzed individuals. Control of the paralyzed arm under electrical stimulation, however, is a challenging problem that requires advanced controllers and command interfaces for the user. A real-time model describing the complex dynamics of the arm would allow user-in-the-loop type experiments where the command interface and controller could be assessed. Real-time models of the arm previously described have not included the ability to model the independently controlled scapula and clavicle, limiting their utility for clinical applications of this nature. The goal of this study therefore was to evaluate the performance and mechanical behavior of a real-time, dynamic model of the arm and shoulder girdle. The model comprises seven segments linked by eleven degrees of freedom and actuated by 138 muscle elements. Polynomials were generated to describe the muscle lines of action to reduce computation time, and an implicit, first-order Rosenbrock formulation of the equations of motion was used to increase simulation step-size. The model simulated flexion of the arm faster than real time, simulation time being 92% of actual movement time on standard desktop hardware. Modeled maximum isometric torque values agreed well with values from the literature, showing that the model simulates the moment-generating behavior of a real human arm. The speed of the model enables experiments where the user controls the virtual arm and receives visual feedback in real time. The ability to optimize potential solutions in simulation greatly reduces the burden on the user during development.
FPGA cluster for high-performance AO real-time control system
NASA Astrophysics Data System (ADS)
Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.
2006-06-01
Whilst the high throughput and low latency requirements of next-generation AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long-term solution with high performance on throughput and excellent predictability on latency. Moreover, FPGA devices have highly capable programmable interfacing, which leads to more highly integrated systems. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, the memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, needs to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.
Cross-bispectrum computation and variance estimation
NASA Technical Reports Server (NTRS)
Lii, K. S.; Helland, K. N.
1981-01-01
A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
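A sketch of the direct, segment-averaged cross-bispectrum estimate B_xyz(f1, f2) = E[X(f1) Y(f2) Z*(f1 + f2)] discussed above. For simplicity it fills the whole frequency grid rather than exploiting the symmetry relations that restrict computation to part of the right-half-plane, and it omits windowing and the rectangular frequency-domain averaging used to reduce variance.

```python
import numpy as np

def cross_bispectrum(x, y, z, seg_len=256):
    """Direct segment-averaged estimate of the cross-bispectrum of three
    real, equally long time series x, y, z."""
    n_seg = len(x) // seg_len
    B = np.zeros((seg_len, seg_len), dtype=complex)
    f = np.arange(seg_len)
    F1, F2 = np.meshgrid(f, f, indexing="ij")
    for s in range(n_seg):
        sl = slice(s * seg_len, (s + 1) * seg_len)
        X, Y, Z = (np.fft.fft(v[sl]) for v in (x, y, z))
        # accumulate X(f1) * Y(f2) * conj(Z(f1 + f2)), with circular indexing
        B += X[F1] * Y[F2] * np.conj(Z[(F1 + F2) % seg_len])
    return B / n_seg
```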
Compact quantum random number generator based on superluminescent light-emitting diodes
NASA Astrophysics Data System (ADS)
Wei, Shihai; Yang, Jie; Fan, Fan; Huang, Wei; Li, Dashuang; Xu, Bingjie
2017-12-01
By measuring the amplified spontaneous emission (ASE) noise of superluminescent light-emitting diodes, we propose and realize a practical quantum random number generator (QRNG). In the QRNG, after the detection and amplification of the ASE noise, the data acquisition and randomness extraction, which are integrated in a field programmable gate array (FPGA), are both implemented in real time, and the final random bit sequences are delivered to a host computer at a real-time generation rate of 1.2 Gbps. Further, to achieve compactness, all the components of the QRNG are integrated on three independent printed circuit boards with a compact design, and the QRNG is packed in a small enclosure sized 140 mm × 120 mm × 25 mm. The final random bit sequences can pass all the NIST-STS and DIEHARD tests.
Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro
2012-09-10
We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
NASA Astrophysics Data System (ADS)
Baca, Michael J.
1990-09-01
A system to display images generated by the Naval Postgraduate School Infrared Search and Target Designation (a modified AN/SAR-8 Advanced Development Model) in near real time was developed using a 33 MHz NIC computer as the central controller. This computer was enhanced with a Data Translation DT2861 Frame Grabber for image processing and an interface board designed and constructed at NPS to provide synchronization between the IRSTD and Frame Grabber. Images are displayed in false color in a video raster format on a 512 by 480 pixel resolution monitor. Using FORTRAN, programs have been written to acquire, unscramble, expand and display a 3 deg sector of data. The time line for acquisition, processing and display has been analyzed and repetition periods of less than four seconds for successive screen displays have been achieved. This represents a marked improvement over previous methods necessitating slower Direct Memory Access transfers of data into the Frame Grabber. Recommendations are made for further improvements to enhance the speed and utility of images produced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan
2016-01-01
The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation over the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
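A minimal sketch of the Parareal iteration described above: a cheap coarse propagator sweeps the whole interval, fine propagators run on the sub-intervals (in parallel in a real implementation), and the coarse prediction is corrected each iteration. The propagators here are arbitrary callables; in the paper the coarse one would use reduced generator models.

```python
import numpy as np

def parareal(f_coarse, f_fine, u0, t0, t1, n_slices=10, n_iter=3):
    """Minimal Parareal sketch.

    f_coarse / f_fine: propagators mapping (u, t_start, t_end) -> u_end.
    Returns the slice boundaries and the corrected states at those boundaries.
    """
    ts = np.linspace(t0, t1, n_slices + 1)
    U = [u0]
    for k in range(n_slices):                          # initial coarse sweep
        U.append(f_coarse(U[-1], ts[k], ts[k + 1]))

    for _ in range(n_iter):
        G_old = [f_coarse(U[k], ts[k], ts[k + 1]) for k in range(n_slices)]
        F = [f_fine(U[k], ts[k], ts[k + 1]) for k in range(n_slices)]   # parallelizable
        U_new = [u0]
        for k in range(n_slices):
            g_new = f_coarse(U_new[-1], ts[k], ts[k + 1])
            U_new.append(g_new + F[k] - G_old[k])      # Parareal correction
        U = U_new
    return ts, U
```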
Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating
NASA Astrophysics Data System (ADS)
Chen, Liangji; Guo, Guangsong; Li, Huiying
2017-07-01
NURBS (Non-Uniform Rational B-Spline) curves are widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in the CNC (Computer Numerical Control) system we are developing. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. According to the feedrate and a Taylor series expansion, the servo-control signals of the 5 axes are obtained for each interpolation cycle. Then, the generation procedure of NC (Numerical Control) code with the presented method is introduced and the method of integrating the interpolator into our CNC system is given. The servo-control structure of the CNC system is also introduced. The illustration indicates that the proposed method can enhance machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.
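A sketch of the feedrate-based Taylor interpolation step mentioned above, keeping only the first-order term u_{k+1} ≈ u_k + V·Ts / |C'(u_k)|. A real 5-axis interpolator applies this to both the tool-tip and tool-axis NURBS curves and adds higher-order and feedrate-limiting corrections; the circular test path below is a hypothetical stand-in for a NURBS curve.

```python
import numpy as np

def taylor_feedrate_step(u, c_prime, feedrate, ts):
    """One first-order Taylor interpolation step along a parametric tool path C(u).

    c_prime(u) returns dC/du at the current parameter; feedrate is the desired
    tool speed and ts the interpolation cycle time.
    """
    speed = np.linalg.norm(c_prime(u))       # |dC/du| at the current parameter
    return u + feedrate * ts / speed

# Hypothetical circular test path of radius 50 mm standing in for a NURBS tool-tip curve
c_prime = lambda u: np.array([-50.0 * np.sin(u), 50.0 * np.cos(u), 0.0])
u, trace = 0.0, []
for _ in range(100):
    u = taylor_feedrate_step(u, c_prime, feedrate=20.0, ts=0.001)   # mm/s, s
    trace.append(u)
```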
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed, and selected results from high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.
SOFTWARE DESIGN FOR REAL-TIME SYSTEMS.
Real-time computer systems and real-time computations are defined for the purposes of this report. The design of software for real-time systems is discussed, employing the concept that all real-time systems belong to one of two types, classified according to the type of control program used: Pre-assigned Iterative Cycle and Real-time Queueing. The two types of real-time systems are described in general, with supplemental
MICROPROCESSOR-BASED DATA-ACQUISITION SYSTEM FOR A BOREHOLE RADAR.
Bradley, Jerry A.; Wright, David L.
1987-01-01
An efficient microprocessor-based system is described that permits real-time acquisition, stacking, and digital recording of data generated by a borehole radar system. Although the system digitizes, stacks, and records independently of a computer, it is interfaced to a desktop computer for program control over system parameters such as sampling interval, number of samples, number of times the data are stacked prior to recording on nine-track tape, and for graphics display of the digitized data. The data can be transferred to the desktop computer during recording, or it can be played back from a tape at a later time. Using the desktop computer, the operator observes results while recording data and generates hard-copy graphics in the field. Thus, the radar operator can immediately evaluate the quality of data being obtained, modify system parameters, study the radar logs before leaving the field, and rerun borehole logs if necessary. The system has proven to be reliable in the field and has increased productivity both in the field and in the laboratory.
Kerman Photovoltaic Power Plant R&D data collection computer system operations and maintenance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosen, P.B.
1994-06-01
The Supervisory Control and Data Acquisition (SCADA) system at the Kerman PV Plant monitors 52 analog, 44 status, 13 control, and 4 accumulator data points in real-time. A Remote Terminal Unit (RTU) polls 7 peripheral data acquisition units that are distributed throughout the plant once every second, and stores all analog, status, and accumulator points that have changed since the last scan. The R&D Computer, which is connected to the SCADA RTU via an RS-232 serial link, polls the RTU once every 5-7 seconds and records any values that have changed since the last scan. A SCADA software package called RealFlex runs on the R&D computer and stores all updated data values taken from the RTU, along with a time-stamp for each, in a historical real-time database. From this database, averages of all analog data points and snapshots of all status points are generated every 10 minutes and appended to a daily file. These files are downloaded via modem by PVUSA/Davis staff every day, and the data is placed into the PVUSA database.
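A minimal sketch of the 10-minute averaging step described above, assuming the scanned analog points are logged to a time-stamped CSV file; the column names and file layout are assumptions, not the RealFlex format.

```python
# Resample time-stamped analog points to 10-minute means and append them to a
# daily file. The "timestamp" column name and CSV layout are assumptions.
import pandas as pd

def append_ten_minute_averages(scan_csv, daily_csv):
    df = pd.read_csv(scan_csv, parse_dates=["timestamp"], index_col="timestamp")
    averages = df.resample("10min").mean()        # 10-minute averages of analog points
    averages.to_csv(daily_csv, mode="a", header=False)
    return averages
```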
Real-time implementation of an interactive jazz accompaniment system
NASA Astrophysics Data System (ADS)
Deshpande, Nikhil
Modern computational algorithms and digital signal processing (DSP) are able to combine with human performers without forced or predetermined structure in order to create dynamic and real-time accompaniment systems. With modern computing power and intelligent algorithm layout and design, it is possible to achieve more detailed auditory analysis of live music. Using this information, computer code can follow and predict how a human's musical performance evolves, and use this to react in a musical manner. This project builds a real-time accompaniment system to perform together with live musicians, with a focus on live jazz performance and improvisation. The system utilizes a new polyphonic pitch detector and embeds it in an Ableton Live system - combined with Max for Live - to perform elements of audio analysis, generation, and triggering. The system also relies on tension curves and information rate calculations from the Creative Artificially Intuitive and Reasoning Agent (CAIRA) system to help understand and predict human improvisation. These metrics are vital to the core system and allow for extrapolated audio analysis. The system is able to react dynamically to a human performer, and can successfully accompany the human as an entire rhythm section.
Real-Time Optimization and Control of Next-Generation Distribution Infrastructure
NREL Grid Modernization
This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.
Real-Time Detection of In-flight Aircraft Damage
Blair, Brenton; Lee, Herbert K. H.; Davies, Misty
2017-10-02
When there is damage to an aircraft, it is critical to be able to quickly detect and diagnose the problem so that the pilot can attempt to maintain control of the aircraft and land it safely. We develop methodology for real-time classification of flight trajectories to be able to distinguish between an undamaged aircraft and five different damage scenarios. Principal components analysis allows a lower-dimensional representation of multi-dimensional trajectory information in time. Random Forests provide a computationally efficient approach with sufficient accuracy to be able to detect and classify the different scenarios in real-time. We demonstrate our approach by classifying realizations of a 45 degree bank angle generated from the Generic Transport Model flight simulator in collaboration with NASA.
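A minimal sketch of the PCA-plus-Random-Forest pipeline described above, using scikit-learn. The synthetic arrays stand in for the Generic Transport Model trajectories and damage labels, which are not reproduced here.

```python
# Reduce flight-trajectory features with PCA and classify with a Random Forest.
# Synthetic data stands in for the Generic Transport Model realizations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 200))          # 600 trajectories, 200 time samples each
y = rng.integers(0, 6, size=600)         # undamaged + five damage scenarios

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(PCA(n_components=10), RandomForestClassifier(n_estimators=100))
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```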
High-Resolution Large-Field-of-View Three-Dimensional Hologram Display System and Method Thereof
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Mintz, Frederick W. (Inventor); Tsou, Peter (Inventor); Bryant, Nevin A. (Inventor)
2001-01-01
A real-time, dynamic, free-space virtual reality, 3-D image display system is enabled by using a unique form of Aerogel as the primary display medium. A preferred embodiment of this system comprises a 3-D mosaic topographic map which is displayed by fusing four projected hologram images. In this embodiment, four holographic images are projected from four separate holograms. Each holographic image subtends a quadrant of the 4π solid angle. By fusing these four holographic images, a static 3-D image such as a featured terrain map would be visible for 360 deg in the horizontal plane and 180 deg in the vertical plane. An input, either acquired by a 3-D image sensor or generated by computer animation, is first converted into a 2-D computer-generated hologram (CGH). This CGH is then downloaded into a large liquid crystal (LC) panel. A laser projector illuminates the CGH-filled LC panel and generates and displays a real 3-D image in the Aerogel matrix.
Forecasting techno-social systems: how physics and computing help to fight off global pandemics
NASA Astrophysics Data System (ADS)
Vespignani, Alessandro
2010-03-01
The crucial issue when planning adequate public health interventions to mitigate the spread and impact of epidemics is risk evaluation and forecast. This amounts to anticipating where, when and how strongly the epidemic will strike. In the last decade, advances in computer performance, data acquisition, statistical physics and complex network theory have allowed the generation of sophisticated simulations on supercomputer infrastructures to anticipate the spreading pattern of a pandemic. For the first time, we are in the position of generating real-time forecasts of epidemic spreading. I will review the history of the current H1N1 pandemic, the major road-blocks the community has faced in its containment and mitigation, and how physics and computing provide predictive tools that help us battle epidemics.
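For readers unfamiliar with the building blocks of such epidemic simulations, the sketch below integrates a deterministic SIR compartmental model. The parameters are illustrative assumptions, and real forecasting frameworks couple such dynamics with mobility and contact-network data.

```python
# Deterministic SIR model integrated with a simple Euler scheme; illustrative
# parameters only. S, I, R are fractions of the population.
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, days=200, dt=0.1):
    s, i, r = s0, i0, 0.0
    history = []
    for step in range(int(days / dt)):
        new_inf = beta * s * i * dt      # new infections this step
        new_rec = gamma * i * dt         # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((step * dt, s, i, r))
    return np.array(history)

peak_day, peak_i = max(((t, i) for t, s, i, r in simulate_sir()), key=lambda x: x[1])
print(f"epidemic peak near day {peak_day:.0f}, prevalence {peak_i:.3f}")
```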
1988-03-01
Kernel System (GKS). This combination of hardware and software allows real-time generation of maps using DMA digitized data. [Ref. 4: p. 44, 46]
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
Automatic mathematical modeling for real time simulation program (AI application)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1989-01-01
A methodology is described for automatic mathematical modeling and generating simulation models. The major objective was to create a user friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and finally, to document the model automatically.
GPU-based efficient realistic techniques for bleeding and smoke generation in surgical simulators.
Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu
2010-12-01
In actual surgery, smoke and bleeding due to cauterization processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated the effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, visual update must be performed at at least 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or simulated using highly simplified techniques, since other computationally intensive processes compete for the available Central Processing Unit (CPU) resources. In this study we developed a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators, which outsources the computations to the graphical processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. The smoke and bleeding simulation were implemented as part of a laparoscopic adjustable gastric banding (LAGB) simulator. For the bleeding simulation, the original implementation using the shader did not incur noticeable overhead. However, for smoke generation, an input/output (I/O) bottleneck was observed and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations were as good as in real surgery (median rating of 4 on a 5-point Likert scale). Based on the performance results and subject study, both bleeding and smoke simulations were concluded to be efficient, highly realistic and well suited to VR-based surgical simulators. Copyright © 2010 John Wiley & Sons, Ltd.
GPU-based Efficient Realistic Techniques for Bleeding and Smoke Generation in Surgical Simulators
Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu
2010-01-01
Background In actual surgery, smoke and bleeding due to cautery processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, visual updates must be performed at a rate of at least 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or simulated using highly simplified techniques, since other computationally intensive processes compete for the available CPU resources. Methods In this work, we develop a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators which outsources the computations to the graphical processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. Results The smoke and bleeding simulation were implemented as part of a Laparoscopic Adjustable Gastric Banding (LAGB) simulator. For the bleeding simulation, the original implementation using the shader did not incur noticeable overhead. However, for smoke generation, an I/O (Input/Output) bottleneck was observed and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Conclusions Based on the performance results and subject study, both bleeding and smoke simulations were concluded to be efficient, highly realistic and well suited to VR-based surgical simulators. PMID:20878651
NASA Astrophysics Data System (ADS)
Jelinek, H. J.
1986-01-01
This is the Final Report of Electronic Design Associates on its Phase I SBIR project. The purpose of the project is to develop a method for correcting helium speech, as experienced in diver-to-surface communication. The goal of the Phase I study was to design, prototype, and evaluate a real-time helium speech corrector system based upon digital signal processing techniques. The general approach was to develop hardware (an IBM PC board) to digitize helium speech and software (a LAMBDA-computer-based simulation) to translate the speech. As planned in the study proposal, this initial prototype may now be used to assess the expected performance of a self-contained real-time system that uses an identical algorithm. The Final Report details the work carried out to produce the prototype system. Four major project tasks were completed: a signal processing scheme for converting helium speech to normal-sounding speech was generated; the scheme was simulated on a general-purpose (LAMBDA) computer, with actual helium speech supplied to the simulation and converted speech generated; an IBM PC based 14-bit data input/output board was designed and built; and a bibliography of references on speech processing was generated.
[Magical and physical reality].
Kállai, János
In postmodern countries, computer-generated virtual reality provides new perceptual domains in which evaluating real and unreal content poses an essential challenge for both children and adults. The expectation of perceiving unreal content that contradicts common-sense experience becomes seductive for most people, and the time spent in front of screens that emit this magical reality gradually rises. The sudden advance in the generation of alternative realities demands that we recall the basic principles of psychological reality testing and the mechanisms that produce a distinction between fantasy and reality in both the healthy and the pathological mind. A frame of reference usually restrains thinking. This review contains two parts: the first focuses on the historical aspect of magical and physical reality, and the second, to be published in a subsequent issue, will evaluate the boundary between the self and another person from the point of view of psychopathological phenomena. This analysis will focus on how the boundary of the self behaves in physically real and magical computer-generated environments.
Neal, Robert E; Kavnoudias, Helen; Thomson, Kenneth R
2015-06-01
Irreversible electroporation (IRE) ablation uses a series of brief electric pulses to create nanoscale defects in cell membranes, killing the cells. It has shown promise in numerous soft-tissue tumor applications. Larger voltages between electrodes will increase ablation volume, but exceeding electrical limits may risk damage to the patient, cause ineffective therapy delivery, or require generator restart. Monitoring electrical current for these conditions in real-time enables managing these risks. This capacity is not presently available in clinical IRE generators. We describe a system using a Tektronix TCP305 AC/DC Current Probe connected to a TCPA300 AC/DC Current Probe Amplifier, which is read on a computer using a Protek DSO-2090 USB computer-interfacing oscilloscope. Accuracy of the system was tested with a resistor circuit and by comparing measured currents with final outputs from the NanoKnife clinical electroporation pulse generator. Accuracy of measured currents was 1.64 ± 2.4 % relative to calculations for the resistor circuit and averaged 0.371 ± 0.977 % deviation from the NanoKnife. During clinical pulse delivery, the system offers real-time evaluation of IRE procedure progress and enables a number of methods for identifying approaching issues from electrical behavior of therapy delivery, facilitating protocol changes before encountering therapy delivery issues. This system can monitor electrical currents in real-time without altering the electric pulses or modifying the pulse generator. This facilitates delivering electric pulse protocols that remain within the optimal range of electrical currents-sufficient strength for clinically relevant ablation volumes, without the risk of exceeding safe electric currents or causing inadequate ablation.
Building an organic computing device with multiple interconnected brains
Pais-Vieira, Miguel; Chiuffa, Gabriela; Lebedev, Mikhail; Yadav, Amol; Nicolelis, Miguel A. L.
2015-01-01
Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains. Brainets worked by concurrently recording the extracellular electrical activity generated by populations of cortical neurons distributed across multiple rats chronically implanted with multi-electrode arrays. Cortical neuronal activity was recorded and analyzed in real time, and then delivered to the somatosensory cortices of other animals that participated in the Brainet using intracortical microstimulation (ICMS). Using this approach, different Brainet architectures solved a number of useful computational problems, such as discrete classification, image processing, storage and retrieval of tactile information, and even weather forecasting. Brainets consistently performed at the same or higher levels than single rats in these tasks. Based on these findings, we propose that Brainets could be used to investigate animal social behaviors as well as a test bed for exploring the properties and potential applications of organic computers. PMID:26158615
An assessment of the potential of PFEM-2 for solving long real-time industrial applications
NASA Astrophysics Data System (ADS)
Gimenez, Juan M.; Ramajo, Damián E.; Márquez Damián, Santiago; Nigro, Norberto M.; Idelsohn, Sergio R.
2017-07-01
The latest generation of the particle finite element method (PFEM-2) is a numerical method based on the Lagrangian formulation of the equations, which presents advantages in terms of robustness and efficiency over classical Eulerian methodologies when certain kinds of flows are simulated, especially those where convection plays an important role. These situations are often encountered in real engineering problems, where very complex geometries and operating conditions require very large and long computations. The advantages that parallelism has brought to computational fluid dynamics, making computations with very fine spatial discretizations affordable, are well known. However, parallelizing over time is not yet possible, despite the effort being dedicated to space-time formulations. In this sense, PFEM-2 adds a valuable feature: its strong stability with little loss of accuracy provides an interesting way of satisfying real-life computation needs. After having already demonstrated in previous publications its ability to achieve academic solutions with a good compromise between accuracy and efficiency, in this work the method is revisited and employed to solve several nonacademic problems of technological interest that fall into this category. Simulations concerning oil-water separation, wastewater treatment, metallurgical foundries, and safety assessment are presented. These cases are selected due to their particular requirements of long simulation times and/or intensive interface treatment. Thus, large time-steps may be employed with PFEM-2 without compromising the accuracy and robustness of the simulation, as would occur with Eulerian alternatives, showing the potential of the methodology for solving not only academic tests but also real engineering problems.
Systolic Processor Array For Recognition Of Spectra
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Peterson, John C.
1995-01-01
Spectral signatures of materials are detected and identified quickly. The Spectral Analysis Systolic Processor Array (SPA2) is relatively inexpensive and satisfies the need to analyze the large, complex volume of multispectral data generated by imaging spectrometers to extract desired information; the computational performance needed to do this in real time exceeds that of current supercomputers. It locates highly similar segments or contiguous subsegments in two different spectra at a time, comparing sampled spectra from instruments with a database of spectral signatures of known materials. It computes and reports scores that express degrees of similarity between sampled and database spectra.
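A minimal sketch of the spectrum-matching computation described above, assuming a normalized-correlation similarity score. The synthetic spectra and database are stand-ins, and a systolic array would evaluate these scores in parallel rather than in a Python loop.

```python
# Score a sampled spectrum against a database of known signatures using a
# normalized correlation, and report the best match. Spectra are synthetic.
import numpy as np

def similarity(sample, reference):
    """Normalized correlation between two spectra (1.0 = identical shape)."""
    s = (sample - sample.mean()) / sample.std()
    r = (reference - reference.mean()) / reference.std()
    return float(np.dot(s, r) / len(s))

rng = np.random.default_rng(1)
database = {f"material_{k}": rng.random(128) for k in range(5)}   # known signatures
sampled = database["material_3"] + 0.05 * rng.normal(size=128)    # noisy observation

scores = {name: similarity(sampled, ref) for name, ref in database.items()}
print(max(scores, key=scores.get), scores)
```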
Zrenner, Christoph; Eytan, Danny; Wallach, Avner; Thier, Peter; Marom, Shimon
2010-01-01
Distinct modules of the neural circuitry interact with each other and (through the motor-sensory loop) with the environment, forming a complex dynamic system. Neuro-prosthetic devices seeking to modulate or restore CNS function need to interact with the information flow at the level of neural modules electrically, bi-directionally and in real-time. A set of freely available generic tools is presented that allow computationally demanding multi-channel short-latency bi-directional interactions to be realized in in vivo and in vitro preparations using standard PC data acquisition and processing hardware and software (Mathworks Matlab and Simulink). A commercially available 60-channel extracellular multi-electrode recording and stimulation set-up connected to an ex vivo developing cortical neuronal culture is used as a model system to validate the method. We demonstrate how complex high-bandwidth (>10 MBit/s) neural recording data can be analyzed in real-time while simultaneously generating specific complex electrical stimulation feedback with deterministically timed responses at sub-millisecond resolution. PMID:21060803
NASA Astrophysics Data System (ADS)
Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.
2007-04-01
AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120Hz while frame locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.
Liu, Gangjun; Zhang, Jun; Yu, Lingfeng; Xie, Tuqiang; Chen, Zhongping
2010-01-01
With the increase of the A-line speed of optical coherence tomography (OCT) systems, real-time processing of acquired data has become a bottleneck. The shared-memory parallel computing technique is used to process OCT data in real time. The real-time processing power of a quad-core personal computer (PC) is analyzed. It is shown that the quad-core PC could provide real-time OCT data processing ability of more than 80K A-lines per second. A real-time, fiber-based, swept source polarization-sensitive OCT system with 20K A-line speed is demonstrated with this technique. The real-time 2D and 3D polarization-sensitive imaging of chicken muscle and pig tendon is also demonstrated. PMID:19904337
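A minimal sketch of shared-memory parallel A-line processing in the spirit of the approach described above: each spectral A-line is windowed and Fourier transformed to a depth profile, with A-lines distributed across CPU cores. The array sizes and synthetic data are assumptions, not the cited system's parameters.

```python
# Distribute per-A-line windowing and FFT across CPU cores with a process pool.
import numpy as np
from multiprocessing import Pool

N_SAMPLES = 1024          # spectral samples per A-line (illustrative)

def process_aline(spectrum):
    """Window and FFT one A-line; return the magnitude depth profile."""
    windowed = spectrum * np.hanning(N_SAMPLES)
    return np.abs(np.fft.rfft(windowed))

if __name__ == "__main__":
    alines = np.random.rand(2000, N_SAMPLES)          # one batch of raw A-lines
    with Pool() as pool:
        profiles = pool.map(process_aline, alines, chunksize=100)
    print(len(profiles), profiles[0].shape)
```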
NASA Technical Reports Server (NTRS)
Fijany, A.; Roberts, J. A.; Jain, A.; Man, G. K.
1993-01-01
Part 1 of this paper presented the requirements for the real-time simulation of the Cassini spacecraft along with some discussion of the DARTS algorithm. Here, in Part 2, we discuss the development and implementation of the parallel/vectorized DARTS algorithm and architecture for real-time simulation. Development of fast algorithms and architectures for real-time hardware-in-the-loop simulation of spacecraft dynamics is motivated by the fact that it represents a hard real-time problem, in the sense that the correctness of the simulation depends on both the numerical accuracy and the exact timing of the computation. For a given model fidelity, the computation should be completed within a predefined time period. Further reduction in computation time allows increasing the fidelity of the model (i.e., inclusion of more flexible modes) and the integration routine.
Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals
NASA Astrophysics Data System (ADS)
Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei
2018-01-01
Real-time and accurate measurement of intermediate frequency signals based on microprocessors is difficult due to the computational complexity and limited time constraints. In this paper, a fast and precise methodology based on a sigma-delta modulator is designed and implemented by first generating the twiddle factors using a recursive scheme. This scheme requires no multiplications and only half the number of addition operations compared with conventional methods such as the DFT and the Fast Fourier Transform, by using the discrete Fourier transform (DFT) and combining the Rife algorithm with Fourier coefficient interpolation. Experimentally, at a sampling frequency of 10 MHz, real-time frequency measurements of intermediate-frequency narrowband signals have a measurement mean squared error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and short calculation time.
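A minimal sketch of FFT-peak estimation refined with Rife's two-point interpolation, the kind of coarse-plus-fine frequency estimate the abstract combines with Fourier coefficient interpolation. The sampling rate and test tone are illustrative assumptions, and the recursive twiddle-factor generation and sigma-delta front end are not reproduced.

```python
# Coarse FFT peak search followed by Rife two-point interpolation between the
# peak bin and its larger neighbour; illustrative single-tone test signal.
import numpy as np

def rife_frequency(x, fs):
    """Estimate a single-tone frequency from samples x at sample rate fs."""
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spectrum[1:-1])) + 1           # coarse peak bin
    # Pick the larger neighbour and interpolate between the two bins.
    if spectrum[k + 1] >= spectrum[k - 1]:
        delta = spectrum[k + 1] / (spectrum[k] + spectrum[k + 1])
    else:
        delta = -spectrum[k - 1] / (spectrum[k] + spectrum[k - 1])
    return (k + delta) * fs / n

fs, f0 = 10e6, 1.234567e6
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f0 * t)
print(f"estimated {rife_frequency(x, fs):.1f} Hz vs true {f0:.1f} Hz")
```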
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavisca, M.J.; Khatib-Rahbar, M.; Esmaili, H.
2002-07-01
The Accident Diagnostic, Analysis and Management (ADAM) computer code has been developed as a tool for on-line applications to accident diagnostics, simulation, management and training. ADAM's severe accident simulation capabilities incorporate a balance of mechanistic, phenomenologically based models with simple parametric approaches for elements including (but not limited to) thermal hydraulics; heat transfer; fuel heatup, meltdown, and relocation; fission product release and transport; combustible gas generation and combustion; and core-concrete interaction. The overall model is defined by a relatively coarse spatial nodalization of the reactor coolant and containment systems and is advanced explicitly in time. The result is to enable much faster than real time (i.e., 100 to 1000 times faster than real time on a personal computer) applications to on-line investigations and/or accident management training. Other features of the simulation module include provision for activation of water injection, including the Engineered Safety Features, as well as other mechanisms for the assessment of accident management and recovery strategies and the evaluation of PSA success criteria. The accident diagnostics module of ADAM uses on-line access to selected plant parameters (as measured by plant sensors) to compute the thermodynamic state of the plant, and to predict various margins to safety (e.g., times to pressure vessel saturation and steam generator dryout). Rule-based logic is employed to classify the measured data as belonging to one of a number of likely scenarios based on symptoms, and a number of 'alarms' are generated to signal the state of the reactor and containment. This paper will address the features and limitations of ADAM with particular focus on accident simulation and management. (authors)
Neuromorphic VLSI vision system for real-time texture segregation.
Shimonomura, Kazuhiro; Yagi, Tetsuya
2008-10-01
The visual system of the brain can perceive an external scene in real-time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field-programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations by mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real-time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.
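A minimal sketch of orientation-based texture segregation in the spirit of the model above: an image is filtered with two orthogonally oriented Gabor kernels and the rectified, smoothed responses are compared pixel by pixel. The kernel parameters and synthetic test image are assumptions; the multi-chip system computes these responses in analog and FPGA hardware rather than in software.

```python
# Filter an image with two orthogonal Gabor kernels and label each pixel by
# the dominant orientation response. Parameters and test image are illustrative.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def gabor_kernel(theta, freq=0.2, sigma=4.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def segregate(image):
    """Return 1 where vertical-stripe structure dominates, 0 where horizontal."""
    resp0 = gaussian_filter(np.abs(convolve(image, gabor_kernel(0.0))), 6)
    resp90 = gaussian_filter(np.abs(convolve(image, gabor_kernel(np.pi / 2))), 6)
    return (resp0 > resp90).astype(np.uint8)

# Synthetic test: left half vertical stripes, right half horizontal stripes.
xx, yy = np.meshgrid(np.arange(128), np.arange(128))
img = np.where(xx < 64, np.sin(0.4 * np.pi * xx), np.sin(0.4 * np.pi * yy))
labels = segregate(img)
print(labels[:, :64].mean(), labels[:, 64:].mean())   # ~1 on the left, ~0 on the right
```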
NASA Astrophysics Data System (ADS)
Onken, Jeffrey
This dissertation introduces a multidisciplinary framework for enabling future research and analysis of alternatives for control centers for real-time operations of safety-critical systems. The multidisciplinary framework integrates functional and computational models that describe the dynamics in fundamental concepts of previously disparate engineering and psychology research disciplines, such as group performance and processes, supervisory control, situation awareness, events and delays, and expertise. The application in this dissertation is the real-time operations within the NASA Mission Control Center (MCC) in Houston, TX. This dissertation operationalizes the framework into a model and simulation, which simulates the functional and computational models in the framework according to user-configured scenarios for a NASA human-spaceflight mission. The model and simulation generates data according to the effectiveness of the mission-control team in supporting the completion of mission objectives and in detecting, isolating, and recovering from anomalies. Accompanying the multidisciplinary framework is a proof of concept, which demonstrates the feasibility of such a framework. The proof of concept demonstrates that variability occurs where expected based on the models. It also demonstrates that the data generated from the model and simulation are useful for analyzing and comparing MCC configuration alternatives, because an investigator can give a diverse set of scenarios to the simulation and compare the output in detail to inform decisions about the effect of MCC configurations on mission operations performance.
Robust Real-Time Musculoskeletal Modeling Driven by Electromyograms.
Durandau, Guillaume; Farina, Dario; Sartori, Massimo
2018-03-01
Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man-machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.
Compute Element and Interface Box for the Hazard Detection System
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Khanoyan, Garen; Stern, Ryan A.; Some, Raphael R.; Bailey, Erik S.; Carson, John M.; Vaughan, Geoffrey M.; Werner, Robert A.; Salomon, Phil M.; Martin, Keith E.;
2013-01-01
The Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is building a sensor that enables a spacecraft to evaluate autonomously a potential landing area to generate a list of hazardous and safe landing sites. It will also provide navigation inputs relative to those safe sites. The Hazard Detection System Compute Element (HDS-CE) box combines a field-programmable gate array (FPGA) board for sensor integration and timing, with a multicore computer board for processing. The FPGA does system-level timing and data aggregation, and acts as a go-between, removing the real-time requirements from the processor and labeling events with a high resolution time. The processor manages the behavior of the system, controls the instruments connected to the HDS-CE, and services the "heavy lifting" computational requirements for analyzing the potential landing spots.
Summary: Experimental validation of real-time fault-tolerant systems
NASA Technical Reports Server (NTRS)
Iyer, R. K.; Choi, G. S.
1992-01-01
Testing and validation of real-time systems is always difficult to perform since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no better substitute for results based on actual measurements and experimentation. Such results are essential for developing a rational basis for evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to external instrumentation that can be hooked up to the system under test, and this is quite a difficult, if not impossible, task for a complex system. Also, to set up such experiments for measurements, physical hardware must exist. On the other hand, a simulation approach allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real-time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source-level in real-time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:
• Data acquisition
• Head position estimation
• Source localization
• Real-time source estimation
This work explains the technical details and validates each of these steps.
Optimization of dynamic soaring maneuvers to enhance endurance of a versatile UAV
NASA Astrophysics Data System (ADS)
Mir, Imran; Maqsood, Adnan; Akhtar, Suhail
2017-06-01
Dynamic soaring is a process of acquiring energy available in atmospheric wind shears and is commonly exhibited by soaring birds performing long-distance flights. This paper aims to demonstrate a viable algorithm which can be implemented in a near real-time environment to formulate optimal trajectories for dynamic soaring maneuvers for a small-scale Unmanned Aerial Vehicle (UAV). The objective is to harness maximum energy from atmospheric wind shear to improve loiter time for Intelligence, Surveillance and Reconnaissance (ISR) missions. Three-dimensional point-mass UAV equations of motion and a linear wind gradient profile are used to model the flight dynamics. Utilizing UAV states, controls, operational constraints, and initial and terminal conditions that enforce a periodic flight, the dynamic soaring problem is formulated as an optimal control problem. Optimized trajectories of the maneuver are subsequently generated employing pseudospectral techniques for distinct UAV performance parameters. The discussion also encompasses the requirement for generating optimal trajectories for dynamic soaring in a real-time environment and the ability of the proposed algorithm to produce solutions quickly. Coupled with the fact that dynamic soaring is all about immediately utilizing the available energy from the wind shear encountered, the proposed algorithm promises viability for practical on-board implementations requiring computation of trajectories in near real time.
Cluster Computing for Embedded/Real-Time Systems
NASA Technical Reports Server (NTRS)
Katz, D.; Kepner, J.
1999-01-01
Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.
NASA Astrophysics Data System (ADS)
Moliner, L.; Correcher, C.; Gimenez-Alventosa, V.; Ilisie, V.; Alvarez, J.; Sanchez, S.; Rodríguez-Alvarez, M. J.
2017-11-01
Nowadays, with the increase of the computational power of modern computers together with the state-of-the-art reconstruction algorithms, it is possible to obtain Positron Emission Tomography (PET) images in practically real time. These facts open the door to new applications such as radio-pharmaceuticals tracking inside the body or the use of PET for image-guided procedures, such as biopsy interventions, among others. This work is a proof of concept that aims to improve the user experience with real time PET images. Fixed, incremental, overlapping, sliding and hybrid windows are the different statistical combinations of data blocks used to generate intermediate images in order to follow the path of the activity in the Field Of View (FOV). To evaluate these different combinations, a point source is placed in a dedicated breast PET device and moved along the FOV. These acquisitions are reconstructed according to the different statistical windows, resulting in a smoother transition of positions for the image reconstructions that use the sliding and hybrid window.
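A minimal sketch of how consecutive data blocks can be grouped into some of the statistical windows named above (fixed, incremental, sliding); each group would be handed to the reconstruction to produce one intermediate image. The block counts and window sizes are illustrative assumptions.

```python
# Group consecutive acquisition blocks into the statistical windows used to
# build intermediate images; indices and sizes are illustrative.
def fixed_windows(n_blocks, size):
    return [list(range(i, i + size)) for i in range(0, n_blocks - size + 1, size)]

def incremental_windows(n_blocks):
    return [list(range(0, i + 1)) for i in range(n_blocks)]

def sliding_windows(n_blocks, size):
    return [list(range(i, i + size)) for i in range(n_blocks - size + 1)]

print(fixed_windows(6, 2))        # [[0, 1], [2, 3], [4, 5]]
print(incremental_windows(4))     # [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]
print(sliding_windows(5, 3))      # [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
```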
Secure and Lightweight Cloud-Assisted Video Reporting Protocol over 5G-Enabled Vehicular Networks.
Nkenyereye, Lewis; Kwon, Joonho; Choi, Yoon-Ho
2017-09-23
In the vehicular networks, the real-time video reporting service is used to send the recorded videos in the vehicle to the cloud. However, when facilitating the real-time video reporting service in the vehicular networks, the usage of the fourth generation (4G) long term evolution (LTE) was proved to suffer from latency while the IEEE 802.11p standard does not offer sufficient scalability for such a congested environment. To overcome those drawbacks, the fifth-generation (5G)-enabled vehicular network is considered as a promising technology for empowering the real-time video reporting service. In this paper, we note that security and privacy related issues should also be carefully addressed to boost the early adoption of 5G-enabled vehicular networks. There exist a few research works for secure video reporting service in 5G-enabled vehicular networks. However, their usage is limited because of public key certificates and expensive pairing operations. Thus, we propose a secure and lightweight protocol for cloud-assisted video reporting service in 5G-enabled vehicular networks. Compared to the conventional public key certificates, the proposed protocol achieves entities' authorization through anonymous credential. Also, by using lightweight security primitives instead of expensive bilinear pairing operations, the proposed protocol minimizes the computational overhead. From the evaluation results, we show that the proposed protocol takes the smaller computation and communication time for the cryptographic primitives than that of the well-known Eiza-Ni-Shi protocol.
A High Performance Computer Architecture for Embedded And/Or Multi-Computer Applications
1990-09-01
commercially available, real-time operating system. CHOICES and ARTS are real-time operating systems developed at the University of Illinois and CMU, respectively. Selection of a real-time operating system will be made in the next phase of the project.
Development of IR imaging system simulator
NASA Astrophysics Data System (ADS)
Xiang, Xinglang; He, Guojing; Dong, Weike; Dong, Lu
2017-02-01
To overcome the disadvantages of traditional semi-physical simulation and injection simulation equipment in the performance evaluation of infrared imaging systems (IRIS), a low-cost and reconfigurable IRIS simulator, which can simulate the realistic physical process of infrared imaging, is proposed to test and evaluate the performance of the IRIS. According to the theoretical simulation framework and the theoretical models of the IRIS, the architecture of the IRIS simulator is constructed. The 3D scenes are generated and the infrared atmospheric transmission effects are simulated in real time on the computer using OGRE technology. The physical effects of the IRIS are classified as signal response, modulation transfer and noise characteristics, and they are simulated in real time on a single-board signal processing platform based on an FPGA core processor using high-speed parallel computation methods.
FPGA-Based Real-Time Embedded System for RISS/GPS Integrated Navigation
Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd
2012-01-01
Navigation algorithms integrating measurements from multi-sensor systems overcome the problems that arise from using GPS navigation systems in standalone mode. Algorithms which integrate the data from a 2D low-cost reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, along with a GPS receiver via a Kalman filter have proved to be worthy in providing a consistent and more reliable navigation solution compared to standalone GPS receivers. It has also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows the algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse-per-second signal generated from the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real-time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm. PMID:22368460
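A minimal sketch of the prediction-correction structure of such a Kalman-filter integration, reduced to one dimension: odometer speed propagates the position estimate between GPS fixes, and each fix corrects it. The noise values, trajectory and outage are illustrative assumptions, not the cited RISS/GPS mechanization.

```python
# One-dimensional Kalman filter fusing odometer-propagated position with noisy
# GPS fixes, including a simulated GPS outage; all values are illustrative.
import numpy as np

def fuse(odo_speed, gps_pos, dt=1.0, q=0.5, r=4.0):
    """odo_speed and gps_pos are per-epoch measurements; returns fused positions."""
    x, p, fused = 0.0, 1.0, []
    for v, z in zip(odo_speed, gps_pos):
        x, p = x + v * dt, p + q                 # predict with odometer speed
        if not np.isnan(z):                      # GPS may drop out (NaN)
            k = p / (p + r)                      # Kalman gain
            x, p = x + k * (z - x), (1.0 - k) * p
        fused.append(x)
    return np.array(fused)

true_pos = np.cumsum(np.full(20, 2.0))                       # 2 m/s ground truth
gps = true_pos + np.random.default_rng(0).normal(0, 2, 20)   # noisy GPS fixes
gps[8:12] = np.nan                                           # simulated GPS outage
print(fuse(np.full(20, 2.0), gps).round(1))
```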
NASA Astrophysics Data System (ADS)
Pfister, Olivier
2017-05-01
When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3000) and present an extension to combining this frequency-tagged with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
On-board computational efficiency in real time UAV embedded terrain reconstruction
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis
2014-05-01
In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAV). Specifications for constructing those UAVs are highly diverse, with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, real-time processing capabilities, etc. In this work, a hexacopter UAV is employed for near real-time terrain mapping. The main challenge addressed is to retain a low-cost flying platform with real-time processing capabilities. The UAV weight limitation affecting the overall flight time makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a highly capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as the Omap3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general-purpose processors, such as the ARM11, as well as specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, according to the frame rate required, additional image processing may concurrently take place, such as image rectification and object detection. Lastly, the onboard positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground-truth geodetic measurements in order to assess the accuracy limitations of the overall process. It is shown that with our proposed novel system, there is much potential in computational efficiency on board and in optimized time constraints.
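A minimal sketch of the stereo triangulation at the core of on-board terrain reconstruction: a disparity map is converted to depth with depth = focal_length × baseline / disparity. The camera parameters and disparity values are illustrative assumptions.

```python
# Convert a stereo disparity map (pixels) into depth (metres); illustrative
# focal length and baseline, with invalid (zero) disparities mapped to NaN.
import numpy as np

def disparity_to_depth(disparity_px, focal_px=800.0, baseline_m=0.3):
    """Depth in metres for each pixel; zero disparity means no measurement."""
    depth = np.full_like(disparity_px, np.nan, dtype=float)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

disparities = np.array([[12.0, 8.0], [0.0, 4.0]])    # pixels
print(disparity_to_depth(disparities))                # metres; NaN where invalid
```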
NASA Astrophysics Data System (ADS)
Salathé, Yves; Kurpiers, Philipp; Karg, Thomas; Lang, Christian; Andersen, Christian Kraglund; Akin, Abdulkadir; Krinner, Sebastian; Eichler, Christopher; Wallraff, Andreas
2018-03-01
Quantum computing architectures rely on classical electronics for control and readout. Employing classical electronics in a feedback loop with the quantum system allows us to stabilize states, correct errors, and realize specific feedforward-based quantum computing and communication schemes such as deterministic quantum teleportation. These feedback and feedforward operations are required to be fast compared to the coherence time of the quantum system to minimize the probability of errors. We present a field-programmable-gate-array-based digital signal processing system capable of real-time quadrature demodulation, a determination of the qubit state, and a generation of state-dependent feedback trigger signals. The feedback trigger is generated with a latency of 110 ns with respect to the timing of the analog input signal. We characterize the performance of the system for an active qubit initialization protocol based on the dispersive readout of a superconducting qubit and discuss potential applications in feedback and feedforward algorithms.
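The abstract names the processing steps but not their implementation; the sketch below shows, in software and with illustrative parameters, the chain it describes: quadrature demodulation of the readout trace, projection onto a discrimination axis, and a threshold test that yields the feedback trigger. In the actual system these steps run in FPGA fabric at fixed latency.

```python
import numpy as np

def demodulate(samples, f_if, fs):
    """Digital quadrature demodulation: project the record onto cos/sin at the intermediate frequency."""
    t = np.arange(len(samples)) / fs
    i = np.mean(samples * np.cos(2 * np.pi * f_if * t))
    q = np.mean(samples * -np.sin(2 * np.pi * f_if * t))
    return i, q

def qubit_state(i, q, threshold, axis=(1.0, 0.0)):
    """Project (I, Q) onto the discrimination axis and compare with a calibrated threshold."""
    return int(i * axis[0] + q * axis[1] > threshold)

# Illustrative numbers only: 25 MHz IF sampled at 1 GS/s for 400 ns.
fs, f_if = 1e9, 25e6
t = np.arange(400) / fs
trace = 0.6 * np.cos(2 * np.pi * f_if * t + 0.3)     # stand-in for a readout trace
i, q = demodulate(trace, f_if, fs)
trigger = qubit_state(i, q, threshold=0.1)            # 1 -> issue the state-dependent feedback pulse
print(i, q, trigger)
```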
Reconfigurable optical interconnections via dynamic computer-generated holograms
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Zhou, Shaomin (Inventor)
1994-01-01
A system is proposed for optically providing one-to-many irregular interconnections, and strength-adjustable many-to-many irregular interconnections which may be provided with strengths (weights) w(sub ij) using multiple laser beams which address multiple holograms and means for combining the beams modified by the holograms to form multiple interconnections, such as a cross-bar switching network. The optical means for interconnection is based on entering a series of complex computer-generated holograms on an electrically addressed spatial light modulator for real-time reconfigurations, thus providing flexibility for interconnection networks for large-scale practical use. By employing multiple sources and holograms, the number of interconnection patterns achieved is increased greatly.
The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2
NASA Technical Reports Server (NTRS)
Kusmanoff, Antone; Martin, Nancy L.
1989-01-01
In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift to distributed computing from centralized computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises as to how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstations. The concepts related to real time data processing are analyzed and systems are displayed which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria and principles for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.
NASA Astrophysics Data System (ADS)
Kulisek, J. A.; Schweppe, J. E.; Stave, S. C.; Bernacki, B. E.; Jordan, D. V.; Stewart, T. N.; Seifert, C. E.; Kernan, W. J.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this challenge, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements without the need for human analyst intervention. The method can be calibrated using radiation transport simulations along with data from previous flights over areas for which the isotopic composition need not be known. Over the examined measured and simulated data sets, the method generated accurate background estimates even in the presence of a strong 60Co source. The potential to track large and abrupt changes in background spectral shape and magnitude was demonstrated. The method can be implemented fairly easily in most modern computing languages and environments.
ISPATOM: A Generic Real-Time Data Processing Tool Without Programming
NASA Technical Reports Server (NTRS)
Dershowitz, Adam
2007-01-01
Information Sharing Protocol Advanced Tool of Math (ISPATOM) is an application program allowing for the streamlined generation of comps, which subscribe to streams of incoming telemetry data, perform any necessary computations on the data, then send the data to other programs for display and/or further processing in NASA mission control centers. Heretofore, the development of comps was difficult, expensive, and time-consuming: each comp was custom written manually, in a low-level computing language, by a programmer attempting to follow requirements of flight controllers. ISPATOM enables a flight controller who is not a programmer to write a comp by simply typing in one or more equation(s) at a command line or retrieving the equation(s) from a text file. ISPATOM then subscribes to the necessary input data, performs all of the necessary computations, and sends out the results. It sends out new results whenever the input data change. The use of equations in ISPATOM is no more difficult than entering equations in a spreadsheet. The time involved in developing a comp is thus limited to the time taken to decide on the necessary equations. Thus, ISPATOM is a real-time dynamic calculator.
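ISPATOM's own source is not shown in the abstract; the toy sketch below only illustrates the idea of a "comp" as an equation over named telemetry symbols that is re-evaluated and republished whenever one of its inputs changes. The class, expression, and unit conversion are hypothetical.

```python
# Toy sketch (not ISPATOM itself): a "comp" as an equation over named telemetry
# symbols that is re-evaluated whenever one of its inputs changes.
class Comp:
    def __init__(self, name, expression, publish):
        self.name = name
        self.expression = expression            # e.g. "cabin_p * 0.1450377" (psi from kPa)
        self.code = compile(expression, name, "eval")
        self.publish = publish                  # callback standing in for the downstream display stream
        self.inputs = {}

    def on_telemetry(self, symbol, value):
        self.inputs[symbol] = value
        try:
            result = eval(self.code, {"__builtins__": {}}, dict(self.inputs))
        except NameError:
            return                              # not all of the comp's inputs have arrived yet
        self.publish(self.name, result)

comp = Comp("cabin_pressure_psi", "cabin_p * 0.1450377",
            publish=lambda n, v: print(n, round(v, 2)))
comp.on_telemetry("cabin_p", 101.3)             # new sample arrives -> comp recomputes and publishes
```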
Visual based laser speckle pattern recognition method for structural health monitoring
NASA Astrophysics Data System (ADS)
Park, Kyeongtaek; Torbol, Marco
2017-04-01
This study performed the system identification of a target structure by analyzing the laser speckle pattern taken by a camera. The laser speckle pattern is generated by the diffuse reflection of the laser beam on a rough surface of the target structure. The camera, equipped with a red filter, records the scattered speckle particles of the laser light in real time and the raw speckle image of the pixel data is fed to the graphics processing unit (GPU) in the system. The algorithm for laser speckle contrast analysis (LASCA) computes the laser speckle contrast images and the laser speckle flow images. The k-means clustering algorithm is used to classify the pixels in each frame and the clusters' centroids, which function as virtual sensors, track the displacement between different frames in the time domain. The fast Fourier transform (FFT) and the frequency domain decomposition (FDD) compute the modal properties of the structure: natural frequencies and damping ratios. This study takes advantage of the large scale computational capability of the GPU. The algorithm is written in Compute Unified Device Architecture (CUDA C), which allows the processing of speckle images in real time.
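The LASCA contrast calculation itself is standard even though the paper runs it on a GPU in CUDA C; as a CPU-side reference, a minimal sketch of the spatial speckle contrast K = sigma/mean over a sliding window is shown below, using a synthetic frame in place of real camera data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, window=7):
    """Spatial speckle contrast K = sigma / mean over a sliding window (standard LASCA form)."""
    frame = frame.astype(float)
    mean = uniform_filter(frame, window)
    mean_sq = uniform_filter(frame ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)   # guard against tiny negative values from rounding
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# Synthetic stand-in for one raw speckle frame.
rng = np.random.default_rng(0)
frame = rng.gamma(shape=4.0, scale=32.0, size=(64, 64))
K = speckle_contrast(frame)
print(K.mean())
```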
Renkawitz, Tobias; Tingart, Markus; Grifka, Joachim; Sendtner, Ernst; Kalteis, Thomas
2009-09-01
This article outlines the scientific basis and a state-of-the-art application of computer-assisted orthopedic surgery in total hip arthroplasty (THA) and provides a future perspective on this technology. Computer-assisted orthopedic surgery in primary THA has the potential to couple 3D simulations with real-time evaluations of surgical performance, which has brought these developments from the research laboratory all the way to clinical use. Nonimage- or imageless-based navigation systems without the need for additional pre- or intra-operative image acquisition have stood the test to significantly reduce the variability in positioning the acetabular component and have shown precise measurement of leg length and offset changes during THA. More recently, computer-assisted orthopedic surgery systems have opened a new frontier for accurate surgical practice in minimally invasive, tissue-preserving THA. The future generation of imageless navigation systems will switch from simple measurement tasks to real navigation tools. These software algorithms will consider the cup and stem as components of a coupled biomechanical system, navigating the orthopedic surgeon to find an optimized complementary component orientation rather than target values intraoperatively, and are expected to have a high impact on clinical practice and postoperative functionality in modern THA.
A Cloud-Based Infrastructure for Near-Real-Time Processing and Dissemination of NPP Data
NASA Astrophysics Data System (ADS)
Evans, J. D.; Valente, E. G.; Chettri, S. S.
2011-12-01
We are building a scalable cloud-based infrastructure for generating and disseminating near-real-time data products from a variety of geospatial and meteorological data sources, including the new National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP). Our approach relies on linking Direct Broadcast and other data streams to a suite of scientific algorithms coordinated by NASA's International Polar-Orbiter Processing Package (IPOPP). The resulting data products are directly accessible to a wide variety of end-user applications, via industry-standard protocols such as OGC Web Services, Unidata Local Data Manager, or OPeNDAP, using open source software components. The processing chain employs on-demand computing resources from Amazon.com's Elastic Compute Cloud and NASA's Nebula cloud services. Our current prototype targets short-term weather forecasting, in collaboration with NASA's Short-term Prediction Research and Transition (SPoRT) program and the National Weather Service. Direct Broadcast is especially crucial for NPP, whose current ground segment is unlikely to deliver data quickly enough for short-term weather forecasters and other near-real-time users. Direct Broadcast also allows full local control over data handling, from the receiving antenna to end-user applications: this provides opportunities to streamline processes for data ingest, processing, and dissemination, and thus to make interpreted data products (Environmental Data Records) available to practitioners within minutes of data capture at the sensor. Cloud computing lets us grow and shrink computing resources to meet large and rapid fluctuations in data availability (twice daily for polar orbiters) - and similarly large fluctuations in demand from our target (near-real-time) users. This offers a compelling business case for cloud computing: the processing or dissemination systems can grow arbitrarily large to sustain near-real time data access despite surges in data volumes or user demand, but that computing capacity (and hourly costs) can be dropped almost instantly once the surge passes. Cloud computing also allows low-risk experimentation with a variety of machine architectures (processor types; bandwidth, memory, and storage capacities, etc.) and of system configurations (including massively parallel computing patterns). Finally, our service-based approach (in which user applications invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored products on demand. To maximize the usefulness and impact of our technology, we have emphasized open, industry-standard software interfaces. We are also using and developing open source software to facilitate the widespread adoption of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sources.
Viewing ISS Data in Real Time via the Internet
NASA Technical Reports Server (NTRS)
Myers, Gerry; Chamberlain, Jim
2004-01-01
EZStream is a computer program that enables authorized users at diverse terrestrial locations to view, in real time, data generated by scientific payloads aboard the International Space Station (ISS). The only computation/communication resource needed for use of EZStream is a computer equipped with standard Web-browser software and a connection to the Internet. EZStream runs in conjunction with the TReK software, described in a prior NASA Tech Briefs article, that coordinates multiple streams of data for the ground communication system of the ISS. EZStream includes server components that interact with TReK within the ISS ground communication system and client components that reside in the users' remote computers. Once an authorized client has logged in, a server component of EZStream pulls the requested data from a TReK application-program interface and sends the data to the client. Future EZStream enhancements will include (1) extensions that enable the server to receive and process arbitrary data streams on its own and (2) a Web-based graphical-user-interface-building subprogram that enables a client who lacks programming expertise to create customized display Web pages.
Progress in using real-time GPS for seismic monitoring of the Cascadia megathrust
NASA Astrophysics Data System (ADS)
Szeliga, W. M.; Melbourne, T. I.; Santillan, V. M.; Scrivner, C.; Webb, F.
2014-12-01
We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman filter based, on-line stream editor. Positions are estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built streaming software. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a REST-ful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, vector displacement, and contoured peak ground displacement. We have also implemented continuous estimation of finite fault slip along the Cascadia megathrust using an NIF approach. The resulting continuous slip distributions are combined with pre-computed tsunami Green's functions to generate real-time tsunami run-up estimates for the entire Cascadia coastal margin. This Java-based front-end is available for download through the PANGA website. We currently analyze 80 PBO and PANGA stations along the Cascadia margin and are gearing up to process all 400+ real-time stations operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim. In addition, we are developing methodologies to combine our real-time solutions with those from Scripps Institution of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
GLAS Spacecraft Pointing Study
NASA Technical Reports Server (NTRS)
Born, George H.; Gold, Kenn; Ondrey, Michael; Kubitschek, Dan; Axelrad, Penina; Komjathy, Attila
1998-01-01
Science requirements for the GLAS mission demand that the laser altimeter be pointed to within 50 m of the location of the previous repeat ground track. The satellite will be flown in a repeat orbit of 182 days. Operationally, the required pointing information will be determined on the ground using the nominal ground track, to which pointing is desired, and the current propagated orbit of the satellite as inputs to the roll computation algorithm developed by CCAR. The roll profile will be used to generate a set of fit coefficients which can be uploaded on a daily basis and used by the on-board attitude control system. In addition, an algorithm has been developed for computation of the associated command quaternions which will be necessary when pointing at targets of opportunity. It may be desirable in the future to perform the roll calculation in an autonomous real-time mode on-board the spacecraft. GPS can provide near real-time tracking of the satellite, and the nominal ground track can be stored in the on-board computer. It will be necessary to choose the spacing of this nominal ground track to meet storage requirements in the on-board environment. Several methods for generating the roll profile from a sparse reference ground track are presented.
Single frequency GPS measurements in real-time artificial satellite orbit determination
NASA Astrophysics Data System (ADS)
Chiaradia, A. P. M.; Kuga, H. K.; Prado, A. F. B. A.
2003-07-01
A simplified and compact algorithm with low computational cost, providing an accuracy of around tens of meters, is developed in this work for artificial satellite orbit determination in real time and on board. The state estimation method is the extended Kalman filter. Cowell's method is used to propagate the state vector, through a simple Runge-Kutta numerical integrator of fourth order with fixed step size. The modeled forces are due to the geopotential up to order and degree 50 of the JGM-2 model. To time-update the state error covariance matrix, a simplified force model is considered. In other words, in computing the state transition matrix, the effect of J2 (Earth flattening) is considered analytically, which dramatically reduces the processing time. In the measurement model, the single frequency GPS pseudorange is used, considering the effects of the ionospheric delay, clock offsets of the GPS and user satellites, and relativistic effects. To validate this model, real data are used from the Topex/Poseidon satellite and the results are compared with the Topex/Poseidon Precision Orbit Ephemeris (POE) generated by NASA/JPL, for several test cases. It is concluded that this compact algorithm enables accuracies of tens of meters with such a simplified force model, an analytical approach for computing the transition matrix, and a cheap GPS receiver providing single frequency pseudorange measurements.
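A minimal sketch of the propagation step is given below: one classical fourth-order Runge-Kutta step with a fixed step size, here driven by point-mass gravity only; the paper's force model adds the JGM-2 geopotential up to order and degree 50, and the initial state shown is illustrative.

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def two_body(_, state):
    """Point-mass gravity only; the full force model would add geopotential terms."""
    r, v = state[:3], state[3:]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate((v, a))

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step with fixed step size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative near-circular low Earth orbit state (position in m, velocity in m/s).
state = np.array([7.0e6, 0.0, 0.0, 0.0, 7.546e3, 0.0])
for _ in range(60):                  # propagate 60 s with a 1 s fixed step
    state = rk4_step(two_body, 0.0, state, 1.0)
print(state[:3])
```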
Real time ray tracing based on shader
NASA Astrophysics Data System (ADS)
Gui, JiangHeng; Li, Min
2017-07-01
Ray tracing is a rendering algorithm that generates an image by tracing light rays onto an image plane; it can simulate complicated optical phenomena such as refraction, depth of field and motion blur. Compared with rasterization, ray tracing can achieve more realistic rendering results, but at greater computational cost; even simple scene rendering can consume a great deal of time. With the improvement in GPU performance and the advent of the programmable rendering pipeline, complicated algorithms can also be implemented directly on the shader. This paper therefore proposes a new method that implements ray tracing directly on the fragment shader, mainly including surface intersection, importance sampling and progressive rendering. With the help of the GPU's powerful throughput capability, real-time rendering of simple scenes can be achieved.
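As a language-neutral illustration of the per-pixel work the fragment shader performs, here is a small Python sketch of ray-sphere intersection plus progressive accumulation of jittered samples; the scene, jitter, and shading are placeholder assumptions, and a real implementation would live in shader code.

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive hit distance of a ray with a sphere, or None if it misses."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0 else None

# Progressive rendering: each pass adds one jittered sample per pixel and the
# running average converges toward the noise-free value for that pixel.
accumulated, n_passes = 0.0, 0
rng = np.random.default_rng(1)
for _ in range(64):
    jitter = rng.normal(scale=1e-3, size=3)
    direction = np.array([0.0, 0.0, -1.0]) + jitter
    direction /= np.linalg.norm(direction)
    hit = intersect_sphere(np.zeros(3), direction, np.array([0.0, 0.0, -3.0]), 1.0)
    sample = 1.0 if hit is not None else 0.0     # trivial "shading": white sphere, black background
    n_passes += 1
    accumulated += (sample - accumulated) / n_passes
print(accumulated)
```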
Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera
NASA Astrophysics Data System (ADS)
Trichopoulos, Georgios C.; Sertel, Kubilay
2015-07-01
We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.
Computing Quantitative Characteristics of Finite-State Real-Time Systems
1994-05-04
Current methods for verifying real-time systems are essentially decision procedures that establish whether the system model satisfies a given ... specification. We present a general method for computing quantitative information about finite-state real-time systems. We have developed algorithms that ... our technique can be extended to a more general representation of real-time systems, namely, timed transition graphs. The algorithms presented in this
A New Real-Time Fault Detection Methodology for Systems Under Test. Phase 1
NASA Technical Reports Server (NTRS)
Johnson, Roger W.; Jayaram, Sanjay; Hull, Richard A.
1998-01-01
This research is focused on the identification and demonstration of critical technology innovations that will be applied to various applications, viz. automated machine health monitoring (HM), real-time data analysis, and control of Systems Under Test (SUT). This new innovation, using a High Fidelity Dynamic Model-based Simulation (HFDMS) approach, will be used to implement a real-time monitoring, Test and Evaluation (T&E) methodology that includes the transient behavior of the system under test. The unique element of this process control technique is the use of high fidelity, computer generated dynamic models to replicate the behavior of actual Systems Under Test (SUT). It will provide a dynamic simulation capability that becomes the reference truth model, from which comparisons are made with the actual raw/conditioned data from the test elements.
Scalable Prediction of Energy Consumption using Incremental Time Series Clustering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Noor, Muhammad Usman
2013-10-09
Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
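The paper's affinity score is not reproduced in the abstract; the sketch below shows only a generic incremental-clustering skeleton (nearest-centroid assignment with a distance threshold and a running-mean centroid update) as a stand-in for the idea of clustering arriving load profiles without reprocessing history.

```python
import numpy as np

# Generic incremental clustering of fixed-length daily load profiles: assign each new
# series to the nearest centroid if it is close enough, otherwise open a new cluster.
# This is a stand-in for the paper's method; its affinity score is not reproduced here.
def assign(series, centroids, counts, threshold):
    series = np.asarray(series, dtype=float)
    if centroids:
        dists = [np.linalg.norm(series - c) for c in centroids]
        k = int(np.argmin(dists))
        if dists[k] <= threshold:
            counts[k] += 1
            centroids[k] += (series - centroids[k]) / counts[k]   # running mean update
            return k
    centroids.append(series.copy())
    counts.append(1)
    return len(centroids) - 1

centroids, counts = [], []
for profile in ([1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [10.0, 9.0, 8.0]):
    print(assign(profile, centroids, counts, threshold=1.0))
```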
A Method for Generating Reduced Order Linear Models of Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1997-01-01
For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other is based on design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models that are small enough for control analysis and design.
NASA Astrophysics Data System (ADS)
Mozaffari, Ahmad; Vajedi, Mahyar; Azad, Nasser L.
2015-06-01
The main proposition of the current investigation is to develop a computational intelligence-based framework which can be used for the real-time estimation of the optimum battery state-of-charge (SOC) trajectory in plug-in hybrid electric vehicles (PHEVs). The estimated SOC trajectory can then be employed for an intelligent power management to significantly improve the fuel economy of the vehicle. The devised intelligent SOC trajectory builder takes advantage of the upcoming route information preview to achieve the lowest possible total cost of electricity and fossil fuel. To reduce the complexity of real-time optimization, the authors propose an immune system-based clustering approach which allows categorizing the route information into a predefined number of segments. The intelligent real-time optimizer is also inspired by interactions in biological immune systems, and is called the artificial immune algorithm (AIA). The objective function of the optimizer is derived from a computationally efficient artificial neural network (ANN) which is trained by a database obtained from a high-fidelity model of the vehicle built in the Autonomie software. The simulation results demonstrate that the integration of the immune-inspired clustering tool, AIA and ANN will result in a powerful framework which can generate a near global optimum SOC trajectory for the baseline vehicle, that is, the Toyota Prius PHEV. The outcomes of the current investigation prove that by taking advantage of intelligent approaches, it is possible to design a computationally efficient and powerful SOC trajectory builder for the intelligent power management of PHEVs.
Interactive physically-based sound simulation
NASA Astrophysics Data System (ADS)
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual, properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real-time on large, complex 3D scenes.
Shared-resource computing for small research labs.
Ackerman, M J
1982-04-01
A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.
Calibration strategy and optics for ARGOS at the LBT
NASA Astrophysics Data System (ADS)
Schwab, Christian; Peter, Diethard; Aigner, Simon
2010-07-01
Effective calibration procedures play an important role for the efficiency and performance of astronomical instrumentation. We report on the calibration scheme for ARGOS, the Laser Guide Star (LGS) facility at the LBT. An artificial light source is used to feign the real laser beacons and perform extensive testing of the system, independent of the time of day and weather conditions, thereby greatly enhancing the time available for engineering. Fibre optics and computer generated holograms (CGHs) are used to generate the necessary wavefront. We present the optomechanical design, and discuss the expected accuracy, as well as tolerances in assembly and alignment.
Challenges in Real-Time Prediction of Infectious Disease: A Case Study of Dengue in Thailand
Lauer, Stephen A.; Sakrejda, Krzysztof; Iamsirithaworn, Sopon; Hinjoy, Soawapak; Suangtho, Paphanij; Suthachana, Suthanun; Clapham, Hannah E.; Salje, Henrik; Cummings, Derek A. T.; Lessler, Justin
2016-01-01
Epidemics of communicable diseases place a huge burden on public health infrastructures across the world. Producing accurate and actionable forecasts of infectious disease incidence at short and long time scales will improve public health response to outbreaks. However, scientists and public health officials face many obstacles in trying to create such real-time forecasts of infectious disease incidence. Dengue is a mosquito-borne virus that annually infects over 400 million people worldwide. We developed a real-time forecasting model for dengue hemorrhagic fever in the 77 provinces of Thailand. We created a practical computational infrastructure that generated multi-step predictions of dengue incidence in Thai provinces every two weeks throughout 2014. These predictions show mixed performance across provinces, out-performing seasonal baseline models in over half of provinces at a 1.5 month horizon. Additionally, to assess the degree to which delays in case reporting make long-range prediction a challenging task, we compared the performance of our real-time predictions with predictions made with fully reported data. This paper provides valuable lessons for the implementation of real-time predictions in the context of public health decision making. PMID:27304062
Challenges in Real-Time Prediction of Infectious Disease: A Case Study of Dengue in Thailand.
Reich, Nicholas G; Lauer, Stephen A; Sakrejda, Krzysztof; Iamsirithaworn, Sopon; Hinjoy, Soawapak; Suangtho, Paphanij; Suthachana, Suthanun; Clapham, Hannah E; Salje, Henrik; Cummings, Derek A T; Lessler, Justin
2016-06-01
Epidemics of communicable diseases place a huge burden on public health infrastructures across the world. Producing accurate and actionable forecasts of infectious disease incidence at short and long time scales will improve public health response to outbreaks. However, scientists and public health officials face many obstacles in trying to create such real-time forecasts of infectious disease incidence. Dengue is a mosquito-borne virus that annually infects over 400 million people worldwide. We developed a real-time forecasting model for dengue hemorrhagic fever in the 77 provinces of Thailand. We created a practical computational infrastructure that generated multi-step predictions of dengue incidence in Thai provinces every two weeks throughout 2014. These predictions show mixed performance across provinces, out-performing seasonal baseline models in over half of provinces at a 1.5 month horizon. Additionally, to assess the degree to which delays in case reporting make long-range prediction a challenging task, we compared the performance of our real-time predictions with predictions made with fully reported data. This paper provides valuable lessons for the implementation of real-time predictions in the context of public health decision making.
Use of cloud computing technology in natural hazard assessment and emergency management
NASA Astrophysics Data System (ADS)
Webley, P. W.; Dehn, J.
2015-12-01
During a natural hazard event, the most up-to-date data needs to be in the hands of those on the front line. Decision support system tools can be developed to provide access to pre-made outputs to quickly assess the hazard and potential risk. However, with the ever-growing availability of new satellite data as well as ground and airborne data generated in real time, there is a need to analyze the large volumes of data in an easy-to-access and effective environment. With the growth in the use of cloud computing, where the analysis and visualization system can grow with the needs of the user, these facilities can be used to provide this real-time analysis. Think of a central command center uploading the data to the cloud compute system and then researchers in the field connecting to a web-based tool to view the newly acquired data. New data can be added by any user and then viewed instantly by anyone else in the organization through the cloud computing interface. This provides the ideal tool for collaborative data analysis, hazard assessment and decision making. We present the rationale for developing a cloud computing system and illustrate how this tool can be developed for use in real-time environments. Users would have access to an interactive online image analysis tool without the need for specific remote sensing software on their local system, thereby increasing their understanding of the ongoing hazard and mitigating its impact on the surrounding region.
An Operationally Based Vision Assessment Simulator for Domes
NASA Technical Reports Server (NTRS)
Archdeacon, John; Gaska, James; Timoner, Samson
2012-01-01
The Operational Based Vision Assessment (OBVA) simulator was designed and built by NASA and the United States Air Force (USAF) to provide the Air Force School of Aerospace Medicine (USAFSAM) with a scientific testing laboratory to study human vision and testing standards in an operationally relevant environment. This paper describes the general design objectives and implementation characteristics of the simulator visual system being created to meet these requirements. A key design objective for the OBVA research simulator is to develop a real-time computer image generator (IG) and display subsystem that can display and update at 120 frames per second (design target), or at a minimum, 60 frames per second, with minimal transport delay using commercial off-the-shelf (COTS) technology. There are three key parts of the OBVA simulator that are described in this paper: i) the real-time computer image generator, ii) the various COTS technology used to construct the simulator, and iii) the spherical dome display and real-time distortion correction subsystem. We describe the various issues, possible COTS solutions, and remaining problem areas identified by NASA and the USAF while designing and building the simulator for future vision research. We also describe the critically important relationship of the physical display components, including distortion correction for the dome, consistent with an objective of minimizing latency in the system. The performance of the automatic calibration system used in the dome is also described. Various recommendations for possible future implementations are also discussed.
Stochastic Multi-Timescale Power System Operations With Variable Wind Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongyu; Krad, Ibrahim; Florita, Anthony
This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
ALMA Correlator Real-Time Data Processor
NASA Astrophysics Data System (ADS)
Pisano, J.; Amestica, R.; Perez, J.
2005-10-01
The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system, the Correlator Data Processor (CDP), consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node, all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam
2017-10-01
A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.
HPC enabled real-time remote processing of laparoscopic surgery
NASA Astrophysics Data System (ADS)
Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.
2016-03-01
Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system. The video streams generate approximately 360 megabytes of data per second. Processing this large stream of data in real time on a bedside PC, in a single- or dual-node setup, has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also enable reliability using replication of computation. We will securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance and high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time medical image processing and laparoscopic data.
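A quick back-of-the-envelope check of the budget implied by the quoted numbers, for orientation only:

```python
# ~360 MB/s of video at 30 frames per second leaves ~12 MB and ~33 ms per frame,
# which must cover transfer to the remote cluster, processing, and the return trip.
stream_rate_mb_s = 360.0
fps = 30.0
frame_size_mb = stream_rate_mb_s / fps          # ~12 MB, consistent with the quoted 11.9 MB
frame_deadline_ms = 1000.0 / fps                # ~33.3 ms round-trip budget per frame
print(frame_size_mb, frame_deadline_ms)
```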
NASA Technical Reports Server (NTRS)
Clukey, Steven J.
1991-01-01
The real time Dynamic Data Acquisition and Processing System (DDAPS) is described which provides the capability for the simultaneous measurement of velocity, density, and total temperature fluctuations. The system of hardware and software is described in context of the wind tunnel environment. The DDAPS replaces both a recording mechanism and a separate data processing system. DDAPS receives input from hot wire anemometers. Amplifiers and filters condition the signals with computer controlled modules. The analog signals are simultaneously digitized and digitally recorded on disk. Automatic acquisition collects necessary calibration and environment data. Hot wire sensitivities are generated and applied to the hot wire data to compute fluctuations. The presentation of the raw and processed data is accomplished on demand. The interface to DDAPS is described along with the internal mechanisms of DDAPS. A summary of operations relevant to the use of the DDAPS is also provided.
Friedenberg, David A; Bouton, Chad E; Annetta, Nicholas V; Skomrock, Nicholas; Mingming Zhang; Schwemmer, Michael; Bockbrader, Marcia A; Mysiw, W Jerry; Rezai, Ali R; Bresler, Herbert S; Sharma, Gaurav
2016-08-01
Recent advances in Brain Computer Interfaces (BCIs) have created hope that one day paralyzed patients will be able to regain control of their paralyzed limbs. As part of an ongoing clinical study, we have implanted a 96-electrode Utah array in the motor cortex of a paralyzed human. The array generates almost 3 million data points from the brain every second. This presents several big data challenges towards developing algorithms that should not only process the data in real-time (for the BCI to be responsive) but also be robust to temporal variations and non-stationarities in the sensor data. We demonstrate an algorithmic approach to analyze such data and present a novel method to evaluate such algorithms. We present our methodology with examples of decoding human brain data in real-time to inform a BCI.
Lee, Kyung-Min; Uhm, Gi-Soo; Cho, Jin-Hyoung; McNamara, James A.
2013-01-01
Objective The purpose of this study was to evaluate the effectiveness of the use of Reference Ear Plug (REP) during cone-beam computed tomography (CBCT) scan for the generation of lateral cephalograms from CBCT scan data. Methods Two CBCT scans were obtained from 33 adults. One CBCT scan was acquired using conventional methods, and the other scan was acquired with the use of REP. Virtual lateral cephalograms created from each CBCT image were traced and compared with tracings of the real cephalograms obtained from the same subject. Results CBCT scan with REP resulted in a smaller discrepancy between real and virtual cephalograms. In comparing the real and virtual cephalograms, no measurements significantly differed from real cephalogram values in case of CBCT scan with REP, whereas many measurements significantly differed in the case of CBCT scan without REP. Conclusion Measurements from CBCT-generated cephalograms are more similar to those from real cephalograms when REP are used during CBCT scan. Thus, the use of REP is suggested during CBCT scan to generate accurate virtual cephalograms from CBCT scan data. PMID:23671830
Online frequency estimation with applications to engine and generator sets
NASA Astrophysics Data System (ADS)
Manngård, Mikael; Böling, Jari M.
2017-07-01
Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper aims at presenting computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and the Goertzel filter is presented, and two filter banks, consisting of (i) sliding time window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results both in simulation studies and in a case study using angular speed measurements of the crankshaft of a marine diesel engine-generator set.
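For reference, a minimal sketch of the standard Goertzel recursion for a single DFT bin is shown below; the tone frequency, sample rate, and block length are illustrative, not taken from the engine-generator case study.

```python
import numpy as np

def goertzel_power(x, k, n):
    """Power of DFT bin k of an n-point block using the Goertzel recursion."""
    w = 2.0 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x[:n]:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Illustrative check: a 50 Hz tone sampled at 1 kHz should peak at bin k = 10 of a 200-point block.
fs, n = 1000.0, 200
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50.0 * t)
print(goertzel_power(x, k=10, n=n), goertzel_power(x, k=11, n=n))
```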
Quantum random number generation for loophole-free Bell tests
NASA Astrophysics Data System (ADS)
Mitchell, Morgan; Abellan, Carlos; Amaya, Waldimar
2015-05-01
We describe the generation of quantum random numbers at multi-Gbps rates, combined with real-time randomness extraction, to give very high purity random numbers based on quantum events at most tens of ns in the past. The system satisfies the stringent requirements of quantum non-locality tests that aim to close the timing loophole. We describe the generation mechanism using spontaneous-emission-driven phase diffusion in a semiconductor laser, digitization, and extraction by parity calculation using multi-GHz logic chips. We pay special attention to experimental proof of the quality of the random numbers and analysis of the randomness extraction. In contrast to widely-used models of randomness generators in the computer science literature, we argue that randomness generation by spontaneous emission can be extracted from a single source.
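The paper's extractor runs as a parity calculation on multi-GHz logic; purely as an illustration of the principle, the sketch below collapses blocks of deliberately biased raw bits to their parity and shows the bias shrinking. The block length and bias are arbitrary assumptions, not the paper's operating parameters.

```python
import numpy as np

def parity_extract(raw_bits, block):
    """Collapse each block of raw bits to its parity (XOR), trading rate for purity."""
    raw_bits = np.asarray(raw_bits, dtype=np.uint8)
    usable = len(raw_bits) - len(raw_bits) % block
    blocks = raw_bits[:usable].reshape(-1, block)
    return np.bitwise_xor.reduce(blocks, axis=1)

# Illustrative only: biased raw bits (p = 0.6) become nearly unbiased after 8-bit parity.
rng = np.random.default_rng(2)
raw = (rng.random(80_000) < 0.6).astype(np.uint8)
out = parity_extract(raw, block=8)
print(raw.mean(), out.mean())
```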
Real-time data flow and product generating for GNSS
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Caissy, Mark
2004-01-01
The last IGS workshop, with the theme 'Towards Real-Time', resulted in the design of a prototype for real-time data sharing within the IGS. A prototype real-time network is being established that will serve as a test bed for real-time activities within the IGS. We review the developments of the prototype and discuss some of the existing methods and related products of real-time GNSS systems. Recommendations are made concerning real-time data distribution and product generation.
NASA Technical Reports Server (NTRS)
Sainsbury-Carter, J. B.; Conaway, J. H.
1973-01-01
The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real time interaction plus automatic data generation reduces the nominal 6 to 10 week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard copy plots of the model labeled with GRID or element IDs are also available. General purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.
Real-time seismic monitoring and functionality assessment of a building
Celebi, M.
2005-01-01
This paper presents recent developments and approaches (using GPS technology and real-time double-integration) to obtain displacements and, in turn, drift ratios, in real-time or near real-time to meet the needs of the engineering and user community in seismic monitoring and assessing the functionality and damage condition of structures. Drift ratios computed in near real-time allow technical assessment of the damage condition of a building. Relevant parameters, such as the type of connections and story structural characteristics (including geometry) are used in computing drifts corresponding to several pre-selected threshold stages of damage. Thus, drift ratios determined from real-time monitoring can be compared to pre-computed threshold drift ratios. The approaches described herein can be used for performance evaluation of structures and can be considered as building health-monitoring applications.
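As a minimal illustration of the comparison described above, the sketch below computes an interstory drift ratio and checks it against pre-set threshold stages; the threshold values and story dimensions are illustrative assumptions, not the paper's calibrated values.

```python
def drift_ratio(disp_upper_cm, disp_lower_cm, story_height_cm):
    """Interstory drift ratio: relative displacement of adjacent floors over the story height."""
    return abs(disp_upper_cm - disp_lower_cm) / story_height_cm

def damage_state(ratio, thresholds=((0.002, "none"), (0.005, "minor"), (0.015, "moderate"))):
    """Compare a computed drift ratio with pre-computed threshold stages (illustrative values)."""
    state = "severe"
    for limit, label in reversed(thresholds):
        if ratio <= limit:
            state = label
    return state

# Example: 1.2 cm relative displacement over a 350 cm story.
r = drift_ratio(12.4, 11.2, 350.0)
print(r, damage_state(r))
```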
Recent advances to obtain real - Time displacements for engineering applications
Celebi, M.
2005-01-01
This paper presents recent developments and approaches (using GPS technology and real-time double-integration) to obtain displacements and, in turn, drift ratios, in real-time or near real-time to meet the needs of the engineering and user community in seismic monitoring and assessing the functionality and damage condition of structures. Drift ratios computed in near real-time allow technical assessment of the damage condition of a building. Relevant parameters, such as the type of connections and story structural characteristics (including geometry) are used in computing drifts corresponding to several pre-selected threshold stages of damage. Thus, drift ratios determined from real-time monitoring can be compared to pre-computed threshold drift ratios. The approaches described herein can be used for performance evaluation of structures and can be considered as building health-monitoring applications.
A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream
Ying Wah, Teh
2014-01-01
Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. The density-based method is a prominent class in clustering data streams. It has the ability to detect arbitrary shape clusters, to handle outliers, and it does not need the number of clusters in advance. Therefore, a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time applications of IoT devices. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets. PMID:25110753
A fast density-based clustering algorithm for real-time Internet of Things stream.
Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut
2014-01-01
Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. The density-based method is a prominent class in clustering data streams. It has the ability to detect arbitrary shape clusters, to handle outliers, and it does not need the number of clusters in advance. Therefore, a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time applications of IoT devices. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets.
NASA Astrophysics Data System (ADS)
Bouchpan-Lerust-Juéry, L.
2007-08-01
Current and next generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy ...) which must meet high standards of reliability and predictability as well as safety. All these requirements demand a considerable amount of effort and cost from the space software industry. In its first part, this paper presents a free, open-source integrated solution for developing RTAI applications covering analysis, design, simulation and direct implementation using code generation based on open-source tools; in its second part it summarises the suggested approach, its results and the conclusions for further work.
Reconfigurable Optical Interconnections Via Dynamic Computer-Generated Holograms
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Zhou, Shao-Min (Inventor)
1996-01-01
A system is presented for optically providing one-to-many irregular interconnections, and strength-adjustable many-to-many irregular interconnections which may be provided with strengths (weights) w(sub ij) using multiple laser beams which address multiple holograms and means for combining the beams modified by the holograms to form multiple interconnections, such as a cross-bar switching network. The optical means for interconnection is based on entering a series of complex computer-generated holograms on an electrically addressed spatial light modulator for real-time reconfigurations, thus providing flexibility for interconnection networks for large-scale practical use. By employing multiple sources and holograms, the number of interconnection patterns achieved is increased greatly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, L.; Rao, N.D.
1983-04-01
This paper presents a new method for optimal dispatch of real and reactive power generation which is based on a Cartesian coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by the reduced gradient technique and a penalty factor approach, respectively. The advantage of this classification is the reduction in the size of the equality constraint model, leading to lower storage requirements. The rectangular coordinate formulation results in an exact equality constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and needs to be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is a solution of the economic dispatch problem that is faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-a-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster, and requires 20-30 percent less storage compared to the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model and lower computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.
Hasegawa, Hiroaki; Mihara, Yoshiyuki; Ino, Kenji; Sato, Jiro
2014-11-01
The purpose of this study was to evaluate the radiation dose reduction to patients and radiologists in computed tomography (CT) guided examinations for the thoracic region using CT fluoroscopy. Image quality evaluation of the real-time filtered back-projection (RT-FBP) images and the real-time adaptive iterative dose reduction (RT-AIDR) images was carried out on noise and artifacts that were considered to affect the CT fluoroscopy. The image standard deviation was improved in the fluoroscopy setting with less than 30 mA at 120 kV. With regard to the evaluation of artifact visibility and the amount generated by the needle attached to the chest phantom, there was no significant difference between the RT-FBP images with 120 kV, 20 mA and the RT-AIDR images with low-dose conditions (greater than 80 kV, 30 mA and less than 120 kV, 20 mA). The results suggest that it is possible to reduce the radiation dose by approximately 34% at the maximum using RT-AIDR while maintaining image quality equivalent to the RT-FBP images with 120 kV, 20 mA.
NASA Astrophysics Data System (ADS)
Suarez, Hernan; Zhang, Yan R.
2015-05-01
New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, hardware implementations of adaptive pulse compression for real-time transceiver optimization are presented; they are based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through the high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
A Distributed Computing Network for Real-Time Systems
1980-11-03
NUSC Technical Document 5932, 3 November 1980. A Distributed Computing Network for Real-Time Systems, Gordon E. Morrison, Combat Control... megabit, 10 megabit, and 20 megabit networks. These values are well within the state-of-the-art and are typical for real-time systems similar to
On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery
Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang
2018-01-01
With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585
On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.
Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang
2018-04-25
With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.
Sensewheel: an adjunct to wheelchair skills training
Taylor, Stephen J.G.; Holloway, Catherine
2016-01-01
The purpose of this Letter was to investigate the influence of real-time verbal feedback to optimise push arc during over ground manual wheelchair propulsion. Ten healthy non-wheelchair users pushed a manual wheelchair for a distance of 25 m on level paving, initially with no feedback and then with real-time verbal feedback aimed at controlling push arc within a range of 85°–100°. The real-time feedback was provided by a physiotherapist walking behind the wheelchair, viewing real-time data on a tablet personal computer received from the Sensewheel, a lightweight instrumented wheelchair wheel. The real-time verbal feedback enabled the participants to significantly increase their push arc. This increase in push arc resulted in a non-significant reduction in push rate and a significant increase in peak force application. The intervention enabled participants to complete the task at a higher mean velocity using significantly fewer pushes. This was achieved via a significant increase in the power generated during the push phase. This Letter identifies that a lightweight instrumented wheelchair wheel such as the Sensewheel is a useful adjunct to wheelchair skills training. Targeting the optimisation of push arc resulted in beneficial changes in propulsion technique. PMID:28008362
A Distributed Computing Network for Real-Time Systems.
1980-11-03
[Garbled report documentation page] Naval Underwater Systems Center, Newport, RI. A Distributed Computing Network for Real-Time Systems, Gordon E. Morrison. Technical Document TD 5932, November 1980.
Real Time Computer Graphics From Body Motion
NASA Astrophysics Data System (ADS)
Fisher, Scott; Marion, Ann
1983-10-01
This paper focuses on the recent emergence and development of real-time, computer-aided body tracking technologies and their use in combination with various computer graphics imaging techniques. The convergence of these technologies in our research results in an interactive display environment in which multiple representations of a given body motion can be displayed in real time. Specific reference to entertainment applications is described in the development of a real-time, interactive stage set in which dancers can 'draw' with their bodies as they move through the space of the stage or manipulate virtual elements of the set with their gestures.
Real-Time, Sensor-Based Computing in the Laboratory.
ERIC Educational Resources Information Center
Badmus, O. O.; And Others
1996-01-01
Demonstrates the importance of Real-Time, Sensor-Based (RTSB) computing and how it can be easily and effectively integrated into university student laboratories. Describes the experimental processes, the process instrumentation and process-computer interface, the computer and communications systems, and typical software. Provides much technical…
Detecting dominant motion patterns in crowds of pedestrians
NASA Astrophysics Data System (ADS)
Saqib, Muhammad; Khan, Sultan Daud; Blumenstein, Michael
2017-02-01
As the population of the world increases, urbanization generates crowding situations which pose challenges to public safety and security. Manual analysis of crowded situations is a tedious job and usually prone to errors. In this paper, we propose a novel technique of crowd analysis, the aim of which is to detect different dominant motion patterns in real-time videos. A motion field is generated by computing the dense optical flow. The motion field is then divided into blocks. For each block, we adopt an Intra-clustering algorithm for detecting different flows within the block. Later on, we employ Inter-clustering for clustering the flow vectors among different blocks. We evaluate the performance of our approach on different real-time videos. The experimental results show that our proposed method is capable of detecting distinct motion patterns in crowded videos. Moreover, our algorithm outperforms state-of-the-art methods.
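As a rough illustration of the pipeline described above (dense optical flow, block partitioning, per-block clustering of flow vectors), the sketch below uses OpenCV's Farneback flow and plain k-means; the block size, magnitude threshold, and the use of k-means in place of the paper's Intra-/Inter-clustering algorithms are assumptions, and the opencv-python and scikit-learn packages are assumed dependencies.

```python
# Sketch of the front end described in the abstract: dense optical flow,
# block partitioning, and per-block clustering of flow vectors.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def block_motion_patterns(prev_gray, next_gray, block=32, k=2, min_mag=0.5):
    """Return the dominant flow vectors found in each block of the motion field."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    patterns = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vecs = flow[by:by + block, bx:bx + block].reshape(-1, 2)
            vecs = vecs[np.linalg.norm(vecs, axis=1) > min_mag]  # drop static pixels
            if len(vecs) < k:
                continue
            km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(vecs)
            patterns[(by, bx)] = km.cluster_centers_   # dominant flows in this block
    return patterns
```

A second clustering pass over the per-block centers (merging similar dominant flows across neighbouring blocks) would then play the role the abstract assigns to inter-clustering.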
Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera
NASA Astrophysics Data System (ADS)
Endo, Yutaka; Wakunami, Koki; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ichihashi, Yasuyuki; Yamamoto, Kenji; Ito, Tomoyoshi
2015-12-01
This paper shows the process used to calculate a computer-generated hologram (CGH) for real scenes under natural light using a commercial portable plenoptic camera. In the CGH calculation, a light field captured with the commercial plenoptic camera is converted into a complex amplitude distribution. Then the converted complex amplitude is propagated to a CGH plane. We tested both numerical and optical reconstructions of the CGH and showed that the CGH calculation from captured data with the commercial plenoptic camera was successful.
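The abstract does not specify the propagation step, so the sketch below uses the angular spectrum method as one common way to propagate a converted complex amplitude to a CGH plane; the wavelength, pixel pitch, propagation distance, and the toy input field are arbitrary example values rather than the authors' settings.

```python
# Angular-spectrum propagation of a complex amplitude to a hologram plane,
# as one common way to realise the "propagate to a CGH plane" step.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a sampled complex field by `distance` (all units in metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    # Evanescent components (arg <= 0) are suppressed by the mask.
    H = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy complex amplitude standing in for the converted light-field data.
    field = rng.random((256, 256)) * np.exp(1j * 2 * np.pi * rng.random((256, 256)))
    cgh_plane = angular_spectrum_propagate(field, 532e-9, 8e-6, 0.05)
    phase_hologram = np.angle(cgh_plane)       # e.g. a kinoform-style phase CGH
    print(phase_hologram.shape, phase_hologram.dtype)
```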
Adaptation of Control Center Software to Commerical Real-Time Display Applications
NASA Technical Reports Server (NTRS)
Collier, Mark D.
1994-01-01
NASA-Marshall Space Flight Center (MSFC) is currently developing an enhanced Huntsville Operation Support Center (HOSC) system designed to support multiple spacecraft missions. The Enhanced HOSC is based upon a distributed computing architecture using graphic workstation hardware and industry standard software including POSIX, X Windows, Motif, TCP/IP, and ANSI C. Southwest Research Institute (SwRI) is currently developing a prototype of the Display Services application for this system. Display Services provides the capability to generate and operate real-time data-driven graphic displays. This prototype is a highly functional application designed to allow system end users to easily generate complex data-driven displays. The prototype is easy to use, flexible, highly functional, and portable. Although this prototype is being developed for NASA-MSFC, the general-purpose real-time display capability can be reused in similar mission and process control environments. This includes any environment depending heavily upon real-time data acquisition and display. Reuse of the prototype will be a straightforward transition because the prototype is portable, is designed to add new display types easily, has a user interface which is separated from the application code, and is very independent of the specifics of NASA-MSFC's system. Reuse of this prototype in other environments is an excellent alternative to creation of a new custom application, or, for environments with a large number of users, to purchasing a COTS package.
Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B
2010-06-01
To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17 to 0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
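A structural sketch of the coefficient-optimization loop is given below; the warp and projector are crude 2-D stand-ins for the paper's registration-derived DVFs and cone-beam projector, the Powell optimizer is an arbitrary choice, and there is no GPU acceleration, so this only illustrates the overall flow. NumPy and SciPy are assumed to be available.

```python
# Structural sketch of PCA-coefficient optimisation for single-projection
# reconstruction: adjust coefficients so the projection of the warped
# reference matches the measured projection.
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def warp(reference, dvf):
    """Apply a displacement vector field (2, H, W) to a 2-D reference image."""
    h, w = reference.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.array([yy + dvf[0], xx + dvf[1]])
    return map_coordinates(reference, coords, order=1, mode="nearest")

def project(volume):
    """Toy 'projection': integrate along one axis."""
    return volume.sum(axis=0)

def reconstruct(reference, mean_dvf, eig_dvfs, measured_projection, n_coeff=3):
    def cost(c):
        dvf = mean_dvf + np.tensordot(c, eig_dvfs[:n_coeff], axes=1)
        return np.sum((project(warp(reference, dvf)) - measured_projection) ** 2)
    res = minimize(cost, np.zeros(n_coeff), method="Powell")
    best_dvf = mean_dvf + np.tensordot(res.x, eig_dvfs[:n_coeff], axes=1)
    return warp(reference, best_dvf), best_dvf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((32, 32))
    mean_dvf = np.zeros((2, 32, 32))
    eig_dvfs = rng.normal(0, 0.5, size=(3, 2, 32, 32))    # toy PCA eigen-DVFs
    true_c = np.array([1.0, -0.5, 0.2])
    target = project(warp(ref, mean_dvf + np.tensordot(true_c, eig_dvfs, axes=1)))
    volume, dvf = reconstruct(ref, mean_dvf, eig_dvfs, target)
    print(volume.shape)
```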
Williams, Ian; Constandinou, Timothy G.
2014-01-01
Accurate models of proprioceptive neural patterns could one day play an important role in the creation of an intuitive proprioceptive neural prosthesis for amputees. This paper looks at combining efficient implementations of biomechanical and proprioceptor models in order to generate signals that mimic human muscular proprioceptive patterns for future experimental work in prosthesis feedback. A neuro-musculoskeletal model of the upper limb with 7 degrees of freedom and 17 muscles is presented and generates real time estimates of muscle spindle and Golgi Tendon Organ neural firing patterns. Unlike previous neuro-musculoskeletal models, muscle activation and excitation levels are unknowns in this application and an inverse dynamics tool (static optimization) is integrated to estimate these variables. A proprioceptive prosthesis will need to be portable and this is incompatible with the computationally demanding nature of standard biomechanical and proprioceptor modeling. This paper uses and proposes a number of approximations and optimizations to make real time operation on portable hardware feasible. Finally technical obstacles to mimicking natural feedback for an intuitive proprioceptive prosthesis, as well as issues and limitations with existing models, are identified and discussed. PMID:25009463
Real-time liquid-crystal atmosphere turbulence simulator with graphic processing unit.
Hu, Lifa; Xuan, Li; Li, Dayu; Cao, Zhaoliang; Mu, Quanquan; Liu, Yonggang; Peng, Zenghui; Lu, Xinghai
2009-04-27
To generate time-evolving atmosphere turbulence in real time, a phase-generating method for our liquid-crystal (LC) atmosphere turbulence simulator (ATS) is derived based on the Fourier series (FS) method. A real matrix expression for generating turbulence phases is given and calculated with a graphic processing unit (GPU), the GeForce 8800 Ultra. A liquid crystal on silicon (LCOS) with 256x256 pixels is used as the turbulence simulator. The total time to generate a turbulence phase is about 7.8 ms for calculation and readout with the GPU. A parallel processing method of calculating and sending a picture to the LCOS is used to improve the simulating speed of our LC ATS. Therefore, the real-time turbulence phase-generation frequency of our LC ATS is up to 128 Hz. To our knowledge, it is the highest speed used to generate a turbulence phase in real time.
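The paper evaluates a Fourier-series matrix expression on a GPU; the CPU-side sketch below instead uses the closely related FFT-based Kolmogorov phase-screen recipe, with an example Fried parameter and grid, so the normalization and spectrum details should be taken as assumptions rather than the authors' formulation.

```python
# FFT-based Kolmogorov phase-screen generation (a common CPU-side analogue of
# the Fourier-series turbulence phases the paper computes on a GPU).
import numpy as np

def kolmogorov_phase_screen(n=256, pixel_scale=0.01, r0=0.1, seed=None):
    """Return an n x n phase screen in radians; pixel_scale and r0 in metres."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=pixel_scale)
    FX, FY = np.meshgrid(fx, fx)
    f = np.sqrt(FX**2 + FY**2)
    f[0, 0] = np.inf                       # remove the undefined piston term
    # Kolmogorov phase power spectral density.
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    df = 1.0 / (n * pixel_scale)
    cn = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(psd) * df
    return np.real(np.fft.ifft2(cn)) * n * n   # undo the 1/N^2 factor of ifft2

screen = kolmogorov_phase_screen(256, seed=0)
print(screen.shape, float(screen.std()))
```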
Next Generation Real-Time Systems: Investigating the Potential of Partial-Solution Tasks.
1994-12-01
insufficient for dealing with the complexities of next-generation real-time systems. New methods of intelligent control must be developed for guaranteeing... on-time task completion for real-time systems that are faced with unpredictable and dynamically changing requirements. Implementing real-time... tasks by experimentally measuring the change in performance of 11 simulated real-time systems when converted from all-or-nothing tasks to partial
Synchrophasor-Assisted Prediction of Stability/Instability of a Power System
NASA Astrophysics Data System (ADS)
Saha Roy, Biman Kumar; Sinha, Avinash Kumar; Pradhan, Ashok Kumar
2013-05-01
This paper presents a technique for real-time prediction of stability/instability of a power system based on synchrophasor measurements obtained from phasor measurement units (PMUs) at generator buses. For stability assessment the technique makes use of system severity indices developed using bus voltage magnitude obtained from PMUs and generator electrical power. Generator power is computed using system information and PMU information like voltage and current phasors obtained from PMU. System stability/instability is predicted when the indices exceeds a threshold value. A case study is carried out on New England 10-generator, 39-bus system to validate the performance of the technique.
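The severity indices themselves are not defined in the abstract; the sketch below only shows the kind of computation involved: generator electrical power obtained from PMU voltage and current phasors, combined with bus-voltage information into an index that is compared against a threshold. The particular index and threshold here are invented placeholders, not the paper's indices.

```python
# Sketch: compute generator electrical power from PMU voltage/current phasors
# and flag instability when a simple severity index exceeds a threshold.
import numpy as np

def electrical_power(v_phasor, i_phasor):
    """Complex power S = V * conj(I); returns (P, Q) per bus."""
    s = v_phasor * np.conj(i_phasor)
    return s.real, s.imag

def severity_index(v_mag, v_nominal, p_now, p_prefault):
    # Illustrative stand-in: worst voltage dip plus worst relative power swing.
    voltage_dip = np.maximum(0.0, (v_nominal - v_mag) / v_nominal)
    power_swing = np.abs(p_now - p_prefault) / np.maximum(np.abs(p_prefault), 1e-6)
    return voltage_dip.max() + power_swing.max()

def predict_unstable(v_phasors, i_phasors, v_nominal, p_prefault, threshold=0.8):
    p_now, _ = electrical_power(v_phasors, i_phasors)
    idx = severity_index(np.abs(v_phasors), v_nominal, p_now, p_prefault)
    return idx > threshold, idx

# Example with two generator buses (per-unit phasors).
v = np.array([1.00 * np.exp(1j * 0.10), 0.82 * np.exp(1j * 0.55)])
i = np.array([0.90 * np.exp(-1j * 0.20), 1.40 * np.exp(1j * 0.05)])
print(predict_unstable(v, i, v_nominal=1.0, p_prefault=np.array([0.9, 0.9])))
```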
Compact time- and space-integrating SAR processor: performance analysis
NASA Astrophysics Data System (ADS)
Haney, Michael W.; Levy, James J.; Michael, Robert R., Jr.; Christensen, Marc P.
1995-06-01
Progress made during the previous 12 months toward the fabrication and test of a flight demonstration prototype of the acousto-optic time- and space-integrating real-time SAR image formation processor is reported. Compact, rugged, and low-power analog optical signal processing techniques are used for the most computationally taxing portions of the SAR imaging problem to overcome the size and power consumption limitations of electronic approaches. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results reported for this year include tests of a laboratory version of the RAPID SAR concept on phase history data generated from real SAR high-resolution imagery; a description of the new compact 2D acousto-optic scanner, with a 2D space-bandwidth product approaching 10^6 spots, specified and procured from NEOS Technologies during the last year; and a design and layout of the optical module portion of the flight-worthy prototype.
Spectral analysis method and sample generation for real time visualization of speech
NASA Astrophysics Data System (ADS)
Hobohm, Klaus
A method for translating speech signals into optical models, characterized by high sound discrimination and learnability and designed to provide to deaf persons a feedback towards control of their way of speaking, is presented. Important properties of speech production and perception processes and organs involved in these mechanisms are recalled in order to define requirements for speech visualization. It is established that the spectral representation of time, frequency and amplitude resolution of hearing must be fair and continuous variations of acoustic parameters of speech signal must be depicted by a continuous variation of images. A color table was developed for dynamic illustration and sonograms were generated with five spectral analysis methods such as Fourier transformations and linear prediction coding. For evaluating sonogram quality, test persons had to recognize consonant/vocal/consonant words and an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept of a real time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
Advanced computer architecture for large-scale real-time applications.
DOT National Transportation Integrated Search
1973-04-01
Air traffic control automation is identified as a crucial problem which provides a complex, real-time computer application environment. A novel computer architecture in the form of a pipeline associative processor is conceived to achieve greater perf...
AER synthetic generation in hardware for bio-inspired spiking systems
NASA Astrophysics Data System (ADS)
Linares-Barranco, Alejandro; Linares-Barranco, Bernabe; Jimenez-Moreno, Gabriel; Civit-Balcells, Anton
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate 'events' according to their activity levels. More active neurons generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip multi-layered AER systems it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for test and debugging of complex AER systems. This paper addresses the problem of converting, in a computer, a conventional frame-based video stream into the spike-event-based representation AER. There exist several proposed software methods for synthetic generation of AER for bio-inspired systems. This paper presents a hardware implementation of one method, which is based on Linear-Feedback-Shift-Register (LFSR) pseudo-random number generation. The sequence of events generated by this hardware, which follows a Poisson distribution like a biological neuron, has been reconstructed using two AER integrator cells. The error of reconstruction for a set of images that produce different traffic loads of events on the AER bus is used as the evaluation criterion. A VHDL description of the method, which includes the Xilinx PCI Core, has been implemented and tested using a general-purpose PCI-AER board. This PCI-AER board has been developed by the authors, and uses a Spartan II 200 FPGA. This system for AER synthetic generation is capable of transforming frames of 64x64 pixels, received through a standard computer PCI bus, at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second.
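A software analogue of the LFSR-based synthetic generation can be sketched as follows: a pixel emits an address event whenever a pseudo-random number falls below its normalized intensity, so brighter pixels fire more often with Poisson-like statistics. The 16-bit LFSR tap positions and the rate-scaling constant below are illustrative assumptions, not the hardware design described in the paper.

```python
# Software sketch of LFSR-based synthetic AER generation: in each time slot a
# pixel emits an address event when a pseudo-random number falls below its
# normalised intensity, so brighter pixels fire more often (Poisson-like).
import numpy as np

class LFSR16:
    def __init__(self, seed=0xACE1):
        self.state = seed & 0xFFFF

    def next(self):
        # Fibonacci LFSR with taps 16, 14, 13, 11 (a maximal-length choice).
        bit = ((self.state >> 0) ^ (self.state >> 2) ^
               (self.state >> 3) ^ (self.state >> 5)) & 1
        self.state = (self.state >> 1) | (bit << 15)
        return self.state

def frame_to_events(frame, n_slots=1000, seed=0xACE1):
    """frame: 2-D uint8 image; returns a list of (slot, y, x) address events."""
    lfsr = LFSR16(seed)
    norm = frame.astype(float) / 255.0
    events = []
    for slot in range(n_slots):
        for (y, x), p in np.ndenumerate(norm):
            if p > 0 and (lfsr.next() / 0xFFFF) < p * 0.05:  # 0.05 scales event rate
                events.append((slot, y, x))
    return events

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200          # a bright patch should dominate the event stream
print(len(frame_to_events(img, n_slots=200)))
```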
Atmospheric optical calibration system
Hulstrom, Roland L.; Cannon, Theodore W.
1988-01-01
An atmospheric optical calibration system is provided to compare actual atmospheric optical conditions to standard atmospheric optical conditions on the basis of aerosol optical depth, relative air mass, and diffuse horizontal skylight to global horizontal photon flux ratio. An indicator can show the extent to which the actual conditions vary from standard conditions. Aerosol scattering and absorption properties, diffuse horizontal skylight to global horizontal photon flux ratio, and precipitable water vapor determined on a real-time basis from optical and pressure measurements are also used to generate a computer spectral model and for correcting actual performance response of a photovoltaic device to standard atmospheric optical condition response on a real-time basis as the device is being tested in actual outdoor conditions.
NASA Technical Reports Server (NTRS)
Scaffidi, C. A.; Stocklin, F. J.; Feldman, M. B.
1971-01-01
An L-band telemetry system designed to provide the capability of near-real-time processing of calibration data is described. The system also provides the capability of performing computerized spacecraft simulations, with the aircraft as a data source, and evaluating the network response. The salient characteristics of a telemetry analysis and simulation program (TASP) are discussed, together with the results of TASP testing. The results of the L-band system testing have successfully demonstrated the capability of near-real-time processing of telemetry test data, the control of the ground-received signal to within ±0.5 dB, and the computer generation of test signals.
Computation offloading for real-time health-monitoring devices.
Kalantarian, Haik; Sideris, Costas; Tuan Le; Hosseini, Anahita; Sarrafzadeh, Majid
2016-08-01
Among the major challenges in the development of real-time wearable health monitoring systems is to optimize battery life. One of the major techniques with which this objective can be achieved is computation offloading, in which portions of computation can be partitioned between the device and other resources such as a server or cloud. In this paper, we describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data between the wearable device and mobile application as a function of desired classification accuracy.
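The partitioning policy is not spelled out in the abstract; the sketch below illustrates one plausible decision rule of the kind described: choose, among a few predefined partitions, the cheapest one whose estimated classification accuracy meets the current target. All partition names and energy/accuracy numbers are placeholder estimates, not values from the paper.

```python
# Illustrative sketch of a dynamic offloading decision: per window of sensor
# data, pick the cheapest partition whose expected accuracy meets the target.
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    expected_accuracy: float   # estimated offline for this partitioning
    device_energy_mj: float    # cost of on-device computation
    radio_energy_mj: float     # cost of transmitting what this partition sends

    @property
    def total_energy(self):
        return self.device_energy_mj + self.radio_energy_mj

PARTITIONS = [
    Partition("classify_on_device",    0.88, 9.0, 0.5),
    Partition("features_then_offload", 0.93, 4.0, 2.5),
    Partition("offload_raw_samples",   0.95, 1.0, 8.0),
]

def choose_partition(target_accuracy):
    feasible = [p for p in PARTITIONS if p.expected_accuracy >= target_accuracy]
    pool = feasible if feasible else PARTITIONS          # fall back to best effort
    return min(pool, key=lambda p: (p.total_energy, -p.expected_accuracy))

for target in (0.85, 0.92, 0.99):
    p = choose_partition(target)
    print(target, "->", p.name, f"{p.total_energy:.1f} mJ")
```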
Application of technology developed for flight simulation at NASA. Langley Research Center
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1991-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.
Distributed and collaborative synthetic environments
NASA Technical Reports Server (NTRS)
Bajaj, Chandrajit L.; Bernardini, Fausto
1995-01-01
Fast graphics workstations and increased computing power, together with improved interface technologies, have created new and diverse possibilities for developing and interacting with synthetic environments. A synthetic environment system is generally characterized by input/output devices that constitute the interface between the human senses and the synthetic environment generated by the computer; and a computation system running a real-time simulation of the environment. A basic need of a synthetic environment system is that of giving the user a plausible reproduction of the visual aspect of the objects with which he is interacting. The goal of our Shastra research project is to provide a substrate of geometric data structures and algorithms which allow the distributed construction and modification of the environment, efficient querying of objects attributes, collaborative interaction with the environment, fast computation of collision detection and visibility information for efficient dynamic simulation and real-time scene display. In particular, we address the following issues: (1) A geometric framework for modeling and visualizing synthetic environments and interacting with them. We highlight the functions required for the geometric engine of a synthetic environment system. (2) A distribution and collaboration substrate that supports construction, modification, and interaction with synthetic environments on networked desktop machines.
Controlling total spot power from holographic laser by superimposing a binary phase grating.
Liu, Xiang; Zhang, Jian; Gan, Yu; Wu, Liying
2011-04-25
By superimposing a tunable binary phase grating with a conventional computer-generated hologram, the total power of multiple holographic 3D spots can be easily controlled, by changing the phase depth of the grating, with high accuracy to any desired power value for real-time optical manipulation without extra power loss. Simulation and experiment results indicate that a resolution of 0.002 can be achieved at a lower time cost for normalized total spot power.
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential of low-power hardware implementation. We are proposing a feature extraction method, not requiring any calibration, based on first and second derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
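A minimal sketch of derivative-based feature extraction is shown below: the extrema of the first and second discrete derivatives of each aligned waveform are used as features and then clustered. The specific features retained and the use of k-means for the clustering stage are assumptions for illustration; scikit-learn is an assumed dependency.

```python
# Sketch of derivative-based spike feature extraction: first and second
# discrete derivatives of each spike waveform, a few extrema kept as
# features, then clustering into putative units.
import numpy as np
from sklearn.cluster import KMeans

def derivative_features(spikes):
    """spikes: (n_spikes, n_samples) array of aligned waveforms."""
    d1 = np.diff(spikes, n=1, axis=1)        # first derivative (N-1 samples)
    d2 = np.diff(spikes, n=2, axis=1)        # second derivative (N-2 samples)
    return np.column_stack([
        d1.max(axis=1), d1.min(axis=1),      # extrema of the first derivative
        d2.max(axis=1), d2.min(axis=1),      # extrema of the second derivative
    ])

def sort_spikes(spikes, n_units=3):
    feats = derivative_features(spikes)
    return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(feats)

# Toy demo: two synthetic spike shapes plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 48)
shapes = [np.exp(-((t - 0.3) / 0.05) ** 2), -np.exp(-((t - 0.5) / 0.08) ** 2)]
spikes = np.array([shapes[i % 2] + rng.normal(0, 0.05, t.size) for i in range(200)])
print(np.bincount(sort_spikes(spikes, n_units=2)))
```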
Arranging computer architectures to create higher-performance controllers
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1988-01-01
Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.
Interactions with Virtual People: Do Avatars Dream of Digital Sheep?. Chapter 6
NASA Technical Reports Server (NTRS)
Slater, Mel; Sanchez-Vives, Maria V.
2007-01-01
This paper explores another form of artificial entity, one without physical embodiment. We use the term virtual characters for a type of interactive object that has become familiar in computer games and within virtual reality applications. We refer to these as avatars: three-dimensional graphical objects in more-or-less human form which can interact with humans. Sometimes such avatars will be representations of real humans who are interacting together within a shared networked virtual environment; other times the representations will be of entirely computer-generated characters. Unlike other authors, who reserve the term agent for entirely computer-generated characters and avatar for virtual embodiments of real people, the same term is used here for both. This is because avatars and agents are on a continuum; the question is where their behaviour originates. At the extremes the behaviour is either completely computer generated or comes only from tracking of a real person. However, not every aspect of a real person can be tracked (every eyebrow move, every blink, every breath); rather, real tracking data would be supplemented by inferred behaviours which are programmed based on the available information as to what the real human is doing and her/his underlying emotional and psychological state. Hence there is always some programmed behaviour; it is only a matter of how much. In any case the same underlying problem remains: how can the human character be portrayed in such a manner that its actions are believable and have an impact on the real people with whom it interacts? This paper has three main parts. In the first part we will review some evidence that suggests that humans react with appropriate affect in their interactions with virtual human characters, or with other humans who are represented as avatars. This is so in spite of the fact that the representational fidelity is relatively low. Our evidence will be from the realm of psychotherapy, where virtual social situations are created that do test whether people react appropriately within these situations. We will also consider some experiments on face-to-face virtual communications between people in the same shared virtual environments. The second part will try to give some clues about why this might happen, taking into account modern theories of perception from neuroscience. The third part will include some speculations about the future development of the relationship between people and virtual people. We will suggest that a more likely scenario than the world becoming populated by physically embodied virtual people (robots, androids) is that in the relatively near future we will interact more and more in our everyday lives with virtual people: bank managers, shop assistants, instructors, and so on. What is happening in the movies with computer-graphic-generated individuals and entire crowds may move into the space of everyday life.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
Low-cost real-time infrared scene generation for image projection and signal injection
NASA Astrophysics Data System (ADS)
Buford, James A., Jr.; King, David E.; Bowden, Mark H.
1998-07-01
As cost becomes an increasingly important factor in the development and testing of Infrared sensors and flight computer/processors, the need for accurate hardware-in-the- loop (HWIL) simulations is critical. In the past, expensive and complex dedicated scene generation hardware was needed to attain the fidelity necessary for accurate testing. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective replacements for dedicated scene generators. These new scene generators are mainly constructed from commercial-off-the-shelf (COTS) hardware and software components. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a dynamic IR scene generator (IRSG) built around COTS hardware and software. The IRSG is used to provide dynamic inputs to an IR scene projector for in-band seeker testing and for direct signal injection into the seeker or processor electronics. AMCOM MRDEC has developed a second generation IRSG, namely IRSG2, using the latest Silicon Graphics Incorporated (SGI) Onyx2 with Infinite Reality graphics. As reported in previous papers, the SGI Onyx Reality Engine 2 is the platform of the original IRSG that is now referred to as IRSG1. IRSG1 has been in operation and used daily for the past three years on several IR projection and signal injection HWIL programs. Using this second generation IRSG, frame rates have increased from 120 Hz to 400 Hz and intensity resolution from 12 bits to 16 bits. The key features of the IRSGs are real time missile frame rates and frame sizes, dynamic missile-to-target(s) viewpoint updated each frame in real-time by a six-degree-of- freedom (6DOF) system under test (SUT) simulation, multiple dynamic objects (e.g. targets, terrain/background, countermeasures, and atmospheric effects), latency compensation, point-to-extended source anti-aliased targets, and sensor modeling effects. This paper provides a comparison between the IRSG1 and IRSG2 systems and focuses on the IRSG software, real time features, and database development tools.
A multitasking, multisinked, multiprocessor data acquisition front end
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, R.; Au, R.; Molen, A.V.
1989-10-01
The authors have developed a generalized data acquisition front end system which is based on MC68020 processors running a commercial real-time kernel (pSOS), and implemented primarily in a high-level language (C). This system has been attached to the back end on-line computing system at NSCL via our high-performance ETHERNET protocol. Data may be simultaneously sent to any number of back end systems. Fixed-fraction sampling along links to back end computing is also supported. A nonprocedural program generator simplifies the development of experiment-specific code.
NASA Technical Reports Server (NTRS)
Warner, James E.; Zubair, Mohammad; Ranjan, Desh
2017-01-01
This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
Trajectory generation for an on-road autonomous vehicle
NASA Astrophysics Data System (ADS)
Horst, John; Barbera, Anthony
2006-05-01
We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute path length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally, warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle in lane safely and efficiently within speed and acceleration maximums. The algorithm functions in the context of other autonomous driving functions within a carefully designed vehicle control hierarchy.
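The one-dimensional stage can be illustrated with a trapezoidal speed profile sampled at a uniform time step along path length; mapping these samples onto the line-segment/arc path (the warping, translation, and rotation steps) is omitted here, and the speed, acceleration, and time-step values are example numbers rather than the paper's limits.

```python
# Sketch of the one-dimensional stage: generate (t, s, v, a) samples along the
# path length with a trapezoidal speed profile at a uniform time step, which
# would then be mapped onto the line-segment/arc path in real space.
import math

def trapezoid_trajectory(path_length, v_max, a_max, dt=0.05):
    """Return a list of (t, s, v, a) samples from rest to rest."""
    # Distance needed to reach v_max; fall back to a triangular profile if the
    # path is too short to reach cruise speed.
    d_acc = v_max ** 2 / (2 * a_max)
    v_peak = math.sqrt(a_max * path_length) if 2 * d_acc > path_length else v_max
    t_acc = v_peak / a_max
    d_cruise = max(0.0, path_length - v_peak ** 2 / a_max)
    t_cruise = d_cruise / v_peak
    t_total = 2 * t_acc + t_cruise

    samples, t = [], 0.0
    while t <= t_total + 1e-9:
        if t < t_acc:                              # accelerate
            a, v, s = a_max, a_max * t, 0.5 * a_max * t * t
        elif t < t_acc + t_cruise:                 # cruise
            a, v = 0.0, v_peak
            s = 0.5 * v_peak * t_acc + v_peak * (t - t_acc)
        else:                                      # decelerate
            td = t - t_acc - t_cruise
            a = -a_max
            v = v_peak - a_max * td
            s = 0.5 * v_peak * t_acc + d_cruise + v_peak * td - 0.5 * a_max * td * td
        samples.append((t, min(s, path_length), max(v, 0.0), a))
        t += dt
    return samples

traj = trapezoid_trajectory(path_length=30.0, v_max=8.0, a_max=2.0)
print(len(traj), traj[-1])
```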
ASSURED CLOUD COMPUTING UNIVERSITY CENTER OFEXCELLENCE (ACC UCOE)
2018-01-18
[Report documentation page; only topic fragments are recoverable] Research topics include infrastructure security, design of algorithms and techniques for real-time assuredness in cloud computing, and map-reduce task assignment with data locality.
The expanded role of computers in Space Station Freedom real-time operations
NASA Technical Reports Server (NTRS)
Crawford, R. Paul; Cannon, Kathleen V.
1990-01-01
The challenges that NASA and its international partners face in their real-time operation of the Space Station Freedom necessitate an increased role on the part of computers. In building the operational concepts concerning the role of the computer, the Space Station program is using lessons learned experience from past programs, knowledge of the needs of future space programs, and technical advances in the computer industry. The computer is expected to contribute most significantly in real-time operations by forming a versatile operating architecture, a responsive operations tool set, and an environment that promotes effective and efficient utilization of Space Station Freedom resources.
Versatile analog pulse height computer performs real-time arithmetic operations
NASA Technical Reports Server (NTRS)
Brenner, R.; Strauss, M. G.
1967-01-01
Multipurpose analog pulse height computer performs real-time arithmetic operations on relatively fast pulses. This computer can be used for identification of charged particles, pulse shape discrimination, division of signals from position sensitive detectors, and other on-line data reduction techniques.
Intelligent robots for planetary exploration and construction
NASA Technical Reports Server (NTRS)
Albus, James S.
1992-01-01
Robots capable of practical applications in planetary exploration and construction will require real-time sensory-interactive goal-directed control systems. A reference model architecture based on the NIST Real-time Control System (RCS) for real-time intelligent control systems is suggested. RCS partitions the control problem into four basic elements: behavior generation (or task decomposition), world modeling, sensory processing, and value judgment. It clusters these elements into computational nodes that have responsibility for specific subsystems, and arranges these nodes in hierarchical layers such that each layer has characteristic functionality and timing. Planetary exploration robots should have mobility systems that can safely maneuver over rough surfaces at high speeds. Walking machines and wheeled vehicles with dynamic suspensions are candidates. The technology of sensing and sensory processing has progressed to the point where real-time autonomous path planning and obstacle avoidance behavior is feasible. Map-based navigation systems will support long-range mobility goals and plans. Planetary construction robots must have high strength-to-weight ratios for lifting and positioning tools and materials in six degrees-of-freedom over large working volumes. A new generation of cable-suspended Stewart platform devices and inflatable structures are suggested for lifting and positioning materials and structures, as well as for excavation, grading, and manipulating a variety of tools and construction machinery.
The GPU implementation of micro - Doppler period estimation
NASA Astrophysics Data System (ADS)
Yang, Liyuan; Wang, Junling; Bi, Ran
2018-03-01
To address the high computational complexity and the lack of real-time performance in processing wideband radar echo signals, this paper designs a program, based on a CPU-GPU heterogeneous parallel structure, to improve the performance of real-time micro-motion feature extraction. Firstly, we discuss the principle of the micro-Doppler effect generated by the rolling of the scattering points on an orbiting satellite, and analyse how a Kalman filter can be used to compensate the translational motion of the tumbling satellite and how joint time-frequency analysis and the inverse Radon transform can be used to extract the micro-motion features from the compensated echo. Secondly, the advantages of the GPU in terms of real-time processing and the working principle of CPU-GPU heterogeneous parallelism are analysed, and a GPU-based program flow to extract the micro-motion features from the radar echo signal of a rolling satellite is designed. At the end of the article the extraction results are given to verify the correctness of the program and algorithm.
NASA Astrophysics Data System (ADS)
Pavlovic, Radenko; Chen, Jack; Beaulieu, Paul-Andre; Anselmo, David; Gravel, Sylvie; Moran, Mike; Menard, Sylvain; Davignon, Didier
2014-05-01
A wildfire emissions processing system has been developed to incorporate near-real-time emissions from wildfires and large prescribed burns into Environment Canada's real-time GEM-MACH air quality (AQ) forecast system. Since the GEM-MACH forecast domain covers Canada and most of the U.S.A., including Alaska, fire location information is needed for both of these large countries. During AQ model runs, emissions from individual fire sources are injected into elevated model layers based on plume-rise calculations and then transport and chemistry calculations are performed. This "on the fly" approach to the insertion of the fire emissions provides flexibility and efficiency since on-line meteorology is used and computational overhead in emissions pre-processing is reduced. GEM-MACH-FireWork, an experimental wildfire version of GEM-MACH, was run in real-time mode for the summers of 2012 and 2013 in parallel with the normal operational version. 48-hour forecasts were generated every 12 hours (at 00 and 12 UTC). Noticeable improvements in the AQ forecasts for PM2.5 were seen in numerous regions where fire activity was high. Case studies evaluating model performance for specific regions and computed objective scores will be included in this presentation. Using the lessons learned from the last two summers, Environment Canada will continue to work towards the goal of incorporating near-real-time intermittent wildfire emissions into the operational air quality forecast system.
Communicability across evolving networks.
Grindrod, Peter; Parsons, Mark C; Higham, Desmond J; Estrada, Ernesto
2011-04-01
Many natural and technological applications generate time-ordered sequences of networks, defined over a fixed set of nodes; for example, time-stamped information about "who phoned who" or "who came into contact with who" arise naturally in studies of communication and the spread of disease. Concepts and algorithms for static networks do not immediately carry through to this dynamic setting. For example, suppose A and B interact in the morning, and then B and C interact in the afternoon. Information, or disease, may then pass from A to C, but not vice versa. This subtlety is lost if we simply summarize using the daily aggregate network given by the chain A-B-C. However, using a natural definition of a walk on an evolving network, we show that classic centrality measures from the static setting can be extended in a computationally convenient manner. In particular, communicability indices can be computed to summarize the ability of each node to broadcast and receive information. The computations involve basic operations in linear algebra, and the asymmetry caused by time's arrow is captured naturally through the noncommutativity of matrix-matrix multiplication. Illustrative examples are given for both synthetic and real-world communication data sets. We also discuss the use of the new centrality measures for real-time monitoring and prediction.
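A small sketch of this computation, following the commonly used dynamic-communicability formulation (a product of resolvents of the time-ordered adjacency matrices, with row and column sums as broadcast and receive scores), is given below; the downweighting parameter and the interpretation of the sums are assumed from that formulation rather than quoted from the abstract.

```python
# Sketch of dynamic communicability: multiply, in time order, the resolvents
# (I - a*A_k)^(-1) of the adjacency matrices, so that A-B in the morning
# followed by B-C in the afternoon lets information flow A -> C but not C -> A.
import numpy as np

def dynamic_communicability(adjacency_sequence, a=0.1):
    """adjacency_sequence: iterable of (n, n) symmetric 0/1 arrays, time-ordered."""
    mats = [np.asarray(A, dtype=float) for A in adjacency_sequence]
    n = mats[0].shape[0]
    # a must stay below 1/rho(A_k) for every snapshot for each resolvent series.
    rho_max = max(np.max(np.abs(np.linalg.eigvals(A))) for A in mats)
    assert a * rho_max < 1.0, "choose a smaller downweighting parameter a"
    Q = np.eye(n)
    for A in mats:                       # order matters: time's arrow
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    broadcast = Q.sum(axis=1)            # how well each node sends information
    receive = Q.sum(axis=0)              # how well each node collects information
    return Q, broadcast, receive

# Morning: A-B interact; afternoon: B-C interact (nodes 0, 1, 2).
A_morning = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
A_afternoon = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]])
Q, b, r = dynamic_communicability([A_morning, A_afternoon], a=0.3)
print(np.round(Q, 3))   # Q[0, 2] > 0 (A can reach C) while Q[2, 0] stays at 0
```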
Hard-real-time resource management for autonomous spacecraft
NASA Technical Reports Server (NTRS)
Gat, E.
2000-01-01
This paper describes tickets, a computational mechanism for hard-real-time autonomous resource management. Autonomous spacecraft control can be considered abstractly as a computational process whose outputs are spacecraft commands.
Real time software for a heat recovery steam generator control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdes, R.; Delgadillo, M.A.; Chavez, R.
1995-12-31
This paper is addressed to the development and successful implementation of real-time software for the Heat Recovery Steam Generator (HRSG) control system of a Combined Cycle Power Plant. The real-time software for the HRSG control system physically resides in a Control and Acquisition System (SAC) which is a component of a distributed control system (DCS). The SAC is a programmable controller. The DCS installed at the Gomez Palacio power plant in Mexico accomplishes the functions of logic, analog and supervisory control. The DCS is based on microprocessors and the architecture consists of workstations operating as a Man-Machine Interface (MMI), linked to SAC controllers by means of a communication system. The HRSG real-time software is composed of an operating system, drivers, dedicated computer programs and application computer programs. The operating system used for the development of this software was the MultiTasking Operating System (MTOS). The application software developed at IIE for the HRSG control system basically consisted of a set of digital algorithms for the regulation of the main process variables at the HRSG. By using the multitasking feature of MTOS, the algorithms are executed pseudo-concurrently. In this way, the application programs continuously use the resources of the operating system to perform their functions through a uniform service interface. The application software of the HRSG consists of three tasks, each of them with dedicated responsibilities. The drivers were developed for the handling of hardware resources of the SAC controller which in turn allows signal acquisition and data communication with an MMI. The dedicated programs were developed for hardware diagnostics, task initializations, access to the data base and fault tolerance. The application software and the dedicated software for the HRSG control system were developed using the C programming language due to compactness, portability and efficiency.
Development of On-line Wildfire Emissions for the Operational Canadian Air Quality Forecast System
NASA Astrophysics Data System (ADS)
Pavlovic, R.; Menard, S.; Chen, J.; Anselmo, D.; Paul-Andre, B.; Gravel, S.; Moran, M. D.; Davignon, D.
2013-12-01
An emissions processing system has been developed to incorporate near-real-time emissions from wildfires and large prescribed burns into Environment Canada's real-time GEM-MACH air quality (AQ) forecast system. Since the GEM-MACH forecast domain covers Canada and most of the USA, including Alaska, fire location information is needed for both of these large countries. Near-real-time satellite data are obtained and processed separately for the two countries for organizational reasons. Fire location and fuel consumption data for Canada are provided by the Canadian Forest Service's Canadian Wild Fire Information System (CWFIS) while fire location and emissions data for the U.S. are provided by the SMARTFIRE (Satellite Mapping Automated Reanalysis Tool for Fire Incident Reconciliation) system via the on-line BlueSky Gateway. During AQ model runs, emissions from individual fire sources are injected into elevated model layers based on plume-rise calculations and then transport and chemistry calculations are performed. This 'on the fly' approach to the insertion of emissions provides greater flexibility since on-line meteorology is used and reduces computational overhead in emission pre-processing. An experimental wildfire version of GEM-MACH was run in real-time mode for the summers of 2012 and 2013. 48-hour forecasts were generated every 12 hours (at 00 and 12 UTC). Noticeable improvements in the AQ forecasts for PM2.5 were seen in numerous regions where fire activity was high. Case studies evaluating model performance for specific regions, computed objective scores, and subjective evaluations by AQ forecasters will be included in this presentation. Using the lessons learned from the last two summers, Environment Canada will continue to work towards the goal of incorporating near-real-time intermittent wildfire emissions within the operational air quality forecast system.
A reflection on grid generation in the 90s: Trends, needs, and influences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, J.F.
In a book called Art and Physics: Parallel Visions in Space, Time & Light, I read that Newton made reference to "the glory of geometry". This book goes on to point out that the development of perspective was a milestone in the history of art, suddenly opening the 2D canvas to the 3D world. In fact, Renaissance parents urged their children to become professional perspectivists because the skill was in such demand. Grid generation has analogously moved computational simulation from squares and circles into the real world. And, although I didn't suggest the field to my kids, there has been some demand for a few such folks. But our measure of real success is actually in reducing that demand.
On Parallelizing Single Dynamic Simulation Using HPC Techniques and APIs of Commercial Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diao, Ruisheng; Jin, Shuangshuang; Howell, Frederic
Time-domain simulations are heavily used in today's planning and operation practices to assess power system transient stability and post-transient voltage/frequency profiles following severe contingencies to comply with industry standards. Because of the increased modeling complexity, it is several times slower than real time for state-of-the-art commercial packages to complete a dynamic simulation for a large-scale model. With the growing stochastic behavior introduced by emerging technologies, the power industry has seen a growing need for performing security assessment in real time. This paper presents a parallel implementation framework to speed up a single dynamic simulation by leveraging the existing stability model library in commercial tools through their application programming interfaces (APIs). Several high performance computing (HPC) techniques are explored, such as parallelizing the calculation of generator current injection, identifying fast linear solvers for network solution, and parallelizing data outputs when interacting with APIs in the commercial package, TSAT. The proposed method has been tested on a WECC planning base case with detailed synchronous generator models and exhibits outstanding scalable performance with sufficient accuracy.
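The per-generator current injection step mentioned above is naturally data-parallel, since each machine's injection depends only on its own internal state and terminal voltage. The sketch below illustrates the idea with Python's standard process pool and a classical machine model; the generator parameters and the current_injection helper are illustrative assumptions, not the TSAT API used in the paper.

    # Illustrative sketch (not the TSAT API): per-generator current injections
    # computed in parallel with the standard library process pool.
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def current_injection(gen):
        """Classical machine model: I = (E' - V_t) / (j X'd)."""
        e_int = gen["emf"] * np.exp(1j * gen["delta"])      # internal EMF phasor
        return (e_int - gen["v_term"]) / (1j * gen["xdp"])  # injected current

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        generators = [
            {"emf": 1.05, "delta": rng.uniform(0.0, 0.5),
             "v_term": 1.0 + 0.0j, "xdp": 0.3}
            for _ in range(1000)
        ]
        with ProcessPoolExecutor() as pool:
            injections = list(pool.map(current_injection, generators, chunksize=100))
        print(sum(injections))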
Time domain localization technique with sparsity constraint for imaging acoustic sources
NASA Astrophysics Data System (ADS)
Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain
2017-09-01
This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem, the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performances of the technique. High resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
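Orthogonal matching pursuit, one of the two sparsity constraints cited above, can be summarized in a few lines: greedily pick the dictionary column most correlated with the current residual, then re-fit the amplitudes on the selected support. The NumPy sketch below uses a random propagation matrix and two synthetic sources as stand-ins for the authors' array data.

    # Minimal orthogonal matching pursuit sketch (NumPy only); the dictionary G,
    # measurement vector y and sparsity level are placeholders.
    import numpy as np

    def omp(G, y, n_sources):
        """Greedily select columns of G that best explain y."""
        residual = y.copy()
        support = []
        for _ in range(n_sources):
            # pick the atom most correlated with the current residual
            idx = int(np.argmax(np.abs(G.T @ residual)))
            support.append(idx)
            # re-fit amplitudes on the selected support by least squares
            amp, *_ = np.linalg.lstsq(G[:, support], y, rcond=None)
            residual = y - G[:, support] @ amp
        x = np.zeros(G.shape[1])
        x[support] = amp
        return x

    rng = np.random.default_rng(1)
    G = rng.standard_normal((64, 256))           # 64 measurements, 256 grid points
    x_true = np.zeros(256); x_true[[10, 200]] = [1.0, 0.5]
    y = G @ x_true + 0.01 * rng.standard_normal(64)
    print(np.nonzero(omp(G, y, 2))[0])           # should recover positions 10 and 200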
ERIC Educational Resources Information Center
Snyder, Herbert; Kurtze, Douglas
1992-01-01
Discusses the use of chaos, or nonlinear dynamics, for investigating computer-mediated communication. A comparison between real, human-generated data from a computer network and similarly constructed random-generated data is made, and mathematical procedures for determining chaos are described. (seven references) (LRW)
Electro-optical processing of phased array data
NASA Technical Reports Server (NTRS)
Casasent, D.
1973-01-01
An on-line spatial light modulator for application as the input transducer for a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with real-time radar data: the input data returns were recorded on the input crystal and processed by the optical system, and the output-plane pattern was digitized, thresholded, and output to a display and to storage in the computer memory. The correlation of theoretical and experimental results is discussed.
Forecasting Occurrences of Activities.
Minor, Bryan; Cook, Diane J
2017-07-01
While activity recognition has been shown to be valuable for pervasive computing applications, less work has focused on techniques for forecasting the future occurrence of activities. We present an activity forecasting method to predict the time that will elapse until a target activity occurs. This method generates an activity forecast using a regression tree classifier and offers an advantage over sequence prediction methods in that it can predict expected time until an activity occurs. We evaluate this algorithm on real-world smart home datasets and provide evidence that our proposed approach is most effective at predicting activity timings.
Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms
NASA Astrophysics Data System (ADS)
Samanta, A.; Todd, L. A.
A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. A comprehensive evaluation of algorithms for the environmental application of tomography requires the use of a battery of test concentration data before field implementation, which models reality and tests the limits of the algorithms.
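The ART-family algorithms named above share the same basic update: sweep over the beam paths and correct the concentration grid so that each path integral matches its measurement. The sketch below shows an unweighted Kaczmarz-style sweep (close in spirit to ART1) on a synthetic path-length matrix; the grid size, relaxation factor and non-negativity step are assumptions for illustration, not the study's settings.

    # Unweighted algebraic reconstruction (ART1-style Kaczmarz sweeps) on a
    # synthetic open-path geometry.
    import numpy as np

    def art(A, b, n_sweeps=50, relax=0.5):
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):            # one update per beam path
                ai = A[i]
                denom = ai @ ai
                if denom > 0:
                    x += relax * (b[i] - ai @ x) / denom * ai
            np.maximum(x, 0, out=x)                # concentrations cannot be negative
        return x

    rng = np.random.default_rng(2)
    x_true = rng.random(25)                        # 5x5 concentration grid
    A = (rng.random((40, 25)) > 0.6).astype(float) # 40 synthetic beam paths
    b = A @ x_true
    x_rec = art(A, b)
    print(np.round(np.abs(x_rec - x_true).mean(), 3))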
NASA Technical Reports Server (NTRS)
Stieber, Michael E.
1989-01-01
A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation essentially is an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.
Resource Limitation Issues In Real-Time Intelligent Systems
NASA Astrophysics Data System (ADS)
Green, Peter E.
1986-03-01
This paper examines resource limitation problems that can occur in embedded AI systems which have to run in real-time. It does this by examining two case studies. The first is a system which acoustically tracks low-flying aircraft and has the problem of interpreting a high volume of often ambiguous input data to produce a model of the system's external world. The second is a robotics problem in which the controller for a robot arm has to dynamically plan the order in which to pick up pieces from a conveyer belt and sort them into bins. In this case the system starts with a continuously changing model of its environment and has to select which action to perform next. This latter case emphasizes the issues in designing a system which must operate in an uncertain and rapidly changing environment. The first system uses a distributed HEARSAY methodology running on multiple processors. It is shown, in this case, how the combinatorial growth of possible interpretations of the input data can require large and unpredictable amounts of computer resources for data interpretation. Techniques are presented which achieve real-time operation by limiting the combinatorial growth of alternate hypotheses and processing those hypotheses that are most likely to lead to meaningful interpretation of the input data. The second system uses a decision tree approach to generate and evaluate possible plans of action. It is shown how the combinatorial growth of possible alternate plans can, as in the previous case, require large and unpredictable amounts of computer time to evaluate and select from amongst the alternatives. The use of approximate decisions to limit the amount of computer time needed is discussed. The concept of using incremental evidence is then introduced, and it is shown how this can be used as the basis of systems that can combine heuristic and approximate evidence in making real-time decisions.
NASA Technical Reports Server (NTRS)
Meyer, Donald; Uchenik, Igor
2007-01-01
The PPC750 Performance Monitor (Perfmon) is a computer program that helps the user to assess the performance characteristics of application programs running under the Wind River VxWorks real-time operating system on a PPC750 computer. Perfmon generates a user-friendly interface and collects performance data by use of performance registers provided by the PPC750 architecture. It processes and presents run-time statistics on a per-task basis over a repeating time interval (typically, several seconds or minutes) specified by the user. When the Perfmon software module is loaded with the user's software modules, it is available for use through Perfmon commands, without any modification of the user's code and at negligible performance penalty. Per-task run-time performance data made available by Perfmon include percentage time, number of instructions executed per unit time, dispatch ratio, stack high water mark, and level-1 instruction and data cache miss rates. The performance data are written to a file specified by the user or to the serial port of the computer.
NASA Astrophysics Data System (ADS)
Boakye-Boateng, Nasir Abdulai
The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the required tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems as a basis for performing more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform's real-time simulation model: an average SimpowerSystems DFIG wind turbine, and a detailed DFIG-based wind turbine using ARTEMiS components. The platform model implemented here consists of a high voltage transmission system with four integrated wind farm models consisting in total of 65 DFIG-based wind turbines, and it was developed and tested on OPAL-RT's eMEGASim Real-Time Digital Simulator.
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS will take full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption and extensive applications.
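To make the underlying problem concrete, the sketch below shows the discrete Poisson equation for seamless cloning solved with plain Jacobi relaxation on the CPU; each pixel update reads only its four neighbours, which is exactly the structure a GPU solver distributes across threads. The patch sizes, iteration count and random test images are placeholders, and this is not the MDGS algorithm itself.

    # Jacobi relaxation for gradient-domain cloning: solve laplacian(f) =
    # laplacian(src) inside the region, with f pinned to dst on the border.
    import numpy as np

    def seamless_clone_jacobi(src, dst, n_iter=2000):
        lap = (-4 * src
               + np.roll(src, 1, 0) + np.roll(src, -1, 0)
               + np.roll(src, 1, 1) + np.roll(src, -1, 1))   # guidance-field divergence
        f = dst.copy()
        for _ in range(n_iter):
            neigh = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                     + np.roll(f, 1, 1) + np.roll(f, -1, 1))
            f_new = (neigh - lap) / 4.0
            f[1:-1, 1:-1] = f_new[1:-1, 1:-1]                 # keep the border pinned to dst
        return f

    rng = np.random.default_rng(3)
    src = rng.random((64, 64))
    dst = rng.random((64, 64))
    print(seamless_clone_jacobi(src, dst).shape)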
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.; Koppen, Sandra V.
2008-01-01
This report describes the design of the test articles and monitoring systems developed to characterize the response of a fault-tolerant computer communication system when stressed beyond the theoretical limits for guaranteed correct performance. A high-intensity radiated electromagnetic field (HIRF) environment was selected as the means of injecting faults, as such environments are known to have the potential to cause arbitrary and coincident common-mode fault manifestations that can overwhelm redundancy management mechanisms. The monitors generate stimuli for the systems-under-test (SUTs) and collect data in real-time on the internal state and the response at the external interfaces. A real-time health assessment capability was developed to support the automation of the test. A detailed description of the nature and structure of the collected data is included. The goal of the report is to provide insight into the design and operation of these systems, and to serve as a reference document for use in post-test analyses.
A digital waveguide-based approach for Clavinet modeling and synthesis
NASA Astrophysics Data System (ADS)
Gabrielli, Leonardo; Välimäki, Vesa; Penttinen, Henri; Squartini, Stefano; Bilbao, Stefan
2013-12-01
The Clavinet is an electromechanical musical instrument produced in the mid-twentieth century. As is the case for other vintage instruments, it is subject to aging and requires great effort to be maintained or restored. This paper reports analyses conducted on a Hohner Clavinet D6 and proposes a computational model to faithfully reproduce the Clavinet sound in real time, from tone generation to the emulation of the electronic components. The string excitation signal model is physically inspired and represents a cheap solution in terms of both computational resources and especially memory requirements (compared, e.g., to sample playback systems). Pickups and amplifier models have been implemented which enhance the natural character of the sound with respect to previous work. A model has been implemented on a real-time software platform, Pure Data, capable of a 10-voice polyphony with low latency on an embedded device. Finally, subjective listening tests conducted using the current model are compared to previous tests showing slightly improved results.
Fast Human Detection for Intelligent Monitoring Using Surveillance Visible Sensors
Ko, Byoung Chul; Jeong, Mira; Nam, JaeYeal
2014-01-01
Human detection using visible surveillance sensors is an important and challenging task for intruder detection and safety management. The biggest barrier to real-time human detection is the computational time required for dense image scaling and for scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale using each level's adaptive region-of-interest (ROI). To estimate the image-scaling level, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and the divide-and-conquer algorithm. Furthermore, adaptive ROIs are arranged per image scale to provide a different search area. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed show a better performance than those of other related methods. PMID:25393782
Mobile autonomous robotic apparatus for radiologic characterization
Dudar, Aed M.; Ward, Clyde R.; Jones, Joel D.; Mallet, William R.; Harpring, Larry J.; Collins, Montenius X.; Anderson, Erin K.
1999-01-01
A mobile robotic system that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot due to pre-programmed paths. The operators may revise the preselected movements of the robotic system by ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds of from 1/2-inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console.
Mobile autonomous robotic apparatus for radiologic characterization
Dudar, A.M.; Ward, C.R.; Jones, J.D.; Mallet, W.R.; Harpring, L.J.; Collins, M.X.; Anderson, E.K.
1999-08-10
A mobile robotic system is described that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot due to pre-programmed paths. The operators may revise the preselected movements of the robotic system by ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds of from 1/2-inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console. 4 figs.
A Framework to Improve Energy Efficient Behaviour at Home through Activity and Context Monitoring
García, Óscar; Alonso, Ricardo S.; Corchado, Juan M.
2017-01-01
Real-time Localization Systems have been postulated as one of the most appropriate technologies for the development of applications that provide customized services. These systems provide us with the ability to locate and trace users and, among other features, they help identify behavioural patterns and habits. Moreover, the implementation of policies that will foster energy saving in homes is a complex task that involves the use of this type of system. Although there are multiple proposals in this area, the implementation of frameworks that combine technologies and use Social Computing to influence user behaviour has not yet reached any significant savings in terms of energy. In this work, the CAFCLA framework (Context-Aware Framework for Collaborative Learning Applications) is used to develop a recommendation system for home users. The proposed system integrates a Real-Time Localization System and Wireless Sensor Networks, making it possible to develop applications that work under the umbrella of Social Computing. The implementation of an experimental use case aided efficient energy use, achieving savings of 17%. Moreover, the conducted case study pointed to the possibility of attaining good energy consumption habits in the long term. This can be done thanks to the system's real-time and historical localization, tracking and contextual data, based on which customized recommendations are generated. PMID:28758987
LIBRARY INFORMATION PROCESSING USING AN ON-LINE, REAL-TIME COMPUTER SYSTEM.
ERIC Educational Resources Information Center
HOLZBAUR, FREDERICK W.; FARRIS, EUGENE H.
Direct man-machine communication is now possible through on-line, real-time typewriter terminals directly connected to computers. These terminal systems permit the operator, whether order clerk, cataloger, reference librarian or typist, to interact with the computer in manipulating data stored within it. The IBM Administrative Terminal System…
Simulator for Microlens Planet Surveys
NASA Astrophysics Data System (ADS)
Ipatov, Sergei I.; Horne, Keith; Alsubai, Khalid A.; Bramich, Daniel M.; Dominik, Martin; Hundertmark, Markus P. G.; Liebig, Christine; Snodgrass, Colin D. B.; Street, Rachel A.; Tsapras, Yiannis
2014-04-01
We summarize the status of a computer simulator for microlens planet surveys. The simulator generates synthetic light curves of microlensing events observed with specified networks of telescopes over specified periods of time. Particular attention is paid to models for sky brightness and seeing, calibrated by fitting to data from the OGLE survey and RoboNet observations in 2011. Time intervals during which events are observable are identified by accounting for positions of the Sun and the Moon, and other restrictions on telescope pointing. Simulated observations are then generated for an algorithm that adjusts target priorities in real time with the aim of maximizing planet detection zone area summed over all the available events. The exoplanet detection capability of observations was compared for several telescopes.
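A synthetic microlensing light curve of the kind such a simulator produces can be generated from the standard point-source point-lens magnification A(u) = (u^2 + 2) / (u sqrt(u^2 + 4)). The sketch below builds one noiseless curve; the event timescale, impact parameter and baseline flux are arbitrary illustrative values, and no planetary perturbation, blending or photometric noise model is included.

    # Point-source point-lens (Paczynski) light curve for one synthetic event.
    import numpy as np

    def pspl_magnification(t, t0, u0, tE):
        """Magnification A(u) for impact parameter u(t)."""
        u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
        return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

    t = np.linspace(-30.0, 30.0, 601)            # days relative to the peak
    mag = pspl_magnification(t, t0=0.0, u0=0.1, tE=20.0)
    baseline_flux = 1000.0
    flux = baseline_flux * mag                   # noiseless source flux
    print(float(mag.max()), float(flux.max()))   # peak magnification ~10 for u0 = 0.1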
High-speed real-time animated displays on the ADAGE (trademark) RDS 3000 raster graphics system
NASA Technical Reports Server (NTRS)
Kahlbaum, William M., Jr.; Ownbey, Katrina L.
1989-01-01
Techniques which may be used to increase the animation update rate of real-time computer raster graphic displays are discussed. They were developed on the ADAGE RDS 3000 graphic system in support of the Advanced Concepts Simulator at the NASA Langley Research Center. These techniques involve the use of a special purpose parallel processor, for high-speed character generation. The description of the parallel processor includes the Barrel Shifter which is part of the hardware and is the key to the high-speed character rendition. The final result of this total effort was a fourfold increase in the update rate of an existing primary flight display from 4 to 16 frames per second.
Atmospheric optical calibration system
Hulstrom, R.L.; Cannon, T.W.
1988-10-25
An atmospheric optical calibration system is provided to compare actual atmospheric optical conditions to standard atmospheric optical conditions on the basis of aerosol optical depth, relative air mass, and diffuse horizontal skylight to global horizontal photon flux ratio. An indicator can show the extent to which the actual conditions vary from standard conditions. Aerosol scattering and absorption properties, diffuse horizontal skylight to global horizontal photon flux ratio, and precipitable water vapor determined on a real-time basis for optical and pressure measurements are also used to generate a computer spectral model and for correcting actual performance response of a photovoltaic device to standard atmospheric optical condition response on a real-time basis as the device is being tested in actual outdoor conditions. 7 figs.
NASA Astrophysics Data System (ADS)
Chen, Chen; Hao, Huiyan; Jafari, Roozbeh; Kehtarnavaz, Nasser
2017-05-01
This paper presents an extension to our previously developed fusion framework [10] involving a depth camera and an inertial sensor in order to improve its view invariance aspect for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is considered in order to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision making probability. The experimental results applied to a multi-view human action dataset show that this weighted extension improves the recognition performance by about 5% over equally weighted fusion deployed in our previous fusion framework.
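The weighted decision-level fusion described above reduces, in its simplest form, to a convex combination of the two classifiers' per-class probabilities. The toy class scores and the 0.6/0.4 weights below are placeholders, not the weights learned in the paper.

    # Weighted fusion of a depth-feature classifier and an inertial-feature classifier.
    import numpy as np

    def fuse(p_depth, p_inertial, w_depth=0.6):
        """Convex combination of the two classifiers' class probabilities."""
        p = w_depth * p_depth + (1.0 - w_depth) * p_inertial
        return int(np.argmax(p)), p

    p_depth    = np.array([0.10, 0.70, 0.20])    # e.g. favours action class 1
    p_inertial = np.array([0.30, 0.40, 0.30])
    label, p = fuse(p_depth, p_inertial)
    print(label, np.round(p, 2))                 # 1 [0.18 0.58 0.24]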
Atmospheric optical calibration system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulstrom, R.L.; Cannon, T.W.
1988-10-25
An atmospheric optical calibration system is provided to compare actual atmospheric optical conditions to standard atmospheric optical conditions on the basis of aerosol optical depth, relative air mass, and diffuse horizontal skylight to global horizontal photon flux ratio. An indicator can show the extent to which the actual conditions vary from standard conditions. Aerosol scattering and absorption properties, diffuse horizontal skylight to global horizontal photon flux ratio, and precipitable water vapor determined on a real-time basis for optical and pressure measurements are also used to generate a computer spectral model and for correcting actual performance response of a photovoltaic device to standard atmospheric optical condition response on a real-time basis as the device is being tested in actual outdoor conditions. 7 figs.
NASA Astrophysics Data System (ADS)
Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey
2018-02-01
At the present stage of computer technology development it is possible to study the properties and processes in complex systems at molecular and even atomic levels, for example, by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example, GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software that provides automatic and unified monitoring of the computations. A complex computational task can be performed over different HPC systems. It requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also a nontrivial problem, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the management of those calculations, and presents the part of its concept aimed at initial data generation on HPC systems.
More About Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
Edmonds, Iarina
2007-01-01
A document presents some additional information on the subject matter of "Integrated Hardware and Software for No-Loss Computing" (NPO-42554), which appears elsewhere in this issue of NASA Tech Briefs. To recapitulate: The hardware and software designs of a developmental parallel computing system are integrated to effectuate a concept of no-loss computing (NLC). The system is designed to reconfigure an application program such that it can be monitored in real time and further reconfigured to continue a computation in the event of failure of one of the computers. The design provides for (1) a distributed class of NLC computation agents, denoted introspection agents, that effects hierarchical detection of anomalies; (2) enhancement of the compiler of the parallel computing system to cause generation of state vectors that can be used to continue a computation in the event of a failure; and (3) activation of a recovery component when an anomaly is detected.
Large holographic 3D display for real-time computer-generated holography
NASA Astrophysics Data System (ADS)
Häussler, R.; Leister, N.; Stolle, H.
2017-06-01
SeeReal's concept of real-time holography is based on Sub-Hologram encoding and tracked Viewing Windows. This solution leads to a significant reduction of pixel count and computation effort compared to conventional holography concepts. Since the first presentation of the concept, improved full-color holographic displays have been built with dedicated components. The hologram is encoded on a spatial light modulator that is a sandwich of a phase-modulating and an amplitude-modulating liquid-crystal display and that modulates both the amplitude and the phase of light. Further components are based on holographic optical elements for light collimation and focusing which are exposed in photopolymer films. Camera photographs show that only the depth region on which the focus of the camera lens is set is in focus while the other depth regions are out of focus. These photographs demonstrate that the 3D scene is reconstructed in depth and that accommodation of the eye lenses is supported. Hence, the display is a solution to overcome the accommodation-convergence conflict that is inherent in stereoscopic 3D displays. The main components, progress and results of the holographic display with 300 mm x 200 mm active area are described. Furthermore, photographs of holographically reconstructed 3D scenes are shown.
A preliminary design for flight testing the FINDS algorithm
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.
1986-01-01
This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms resulting in a near real-time execution speed. Finally, a new failure detection strategy was developed resulting in a significant improvement in the detection time performance. In particular, low level MLS, IMU and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and the rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point mass equations of motion. All of the results have been demonstrated by using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.
NASA Astrophysics Data System (ADS)
Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Maeda, Yuki; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2016-09-01
Parallel calculations of large-pixel-count computer-generated holograms (CGHs) are suitable for multiple-graphics processing unit (multi-GPU) cluster systems. However, it is not easy for a multi-GPU cluster system to accomplish fast CGH calculations when CGH transfers between PCs are required. In these cases, the CGH transfer between the PCs becomes a bottleneck. Usually, this problem occurs only in multi-GPU cluster systems with a single spatial light modulator. To overcome this problem, we propose a simple method using the InfiniBand network. The computational speed of the proposed method using 13 GPUs (NVIDIA GeForce GTX TITAN X) was more than 3000 times faster than that of a CPU (Intel Core i7 4770) when the number of three-dimensional (3-D) object points exceeded 20,480. In practice, we achieved ˜40 tera floating point operations per second (TFLOPS) when the number of 3-D object points exceeded 40,960. Our proposed method was able to reconstruct a real-time movie of a 3-D object comprising 95,949 points.
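At its core, each hologram pixel in a point-source CGH is a sum of spherical-wave contributions from all 3-D object points, and it is this per-pixel summation that the GPU cluster splits up. The NumPy sketch below computes one small hologram on the CPU; the resolution, pixel pitch, wavelength and random object cloud are illustrative, and the inter-PC transfer problem the paper solves is not modelled here.

    # Point-source computer-generated hologram: accumulate cos(k*r) per pixel.
    import numpy as np

    wavelength = 532e-9                          # metres
    pitch = 8e-6                                 # SLM pixel pitch
    nx = ny = 256                                # small hologram for the sketch
    k = 2 * np.pi / wavelength

    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)

    rng = np.random.default_rng(4)
    pts = rng.uniform([-1e-3, -1e-3, 0.08], [1e-3, 1e-3, 0.12], size=(500, 3))

    hologram = np.zeros((ny, nx))
    for px, py, pz in pts:                       # one object point at a time
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        hologram += np.cos(k * r)
    print(hologram.shape, float(hologram.std()))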
Real-Time Monitoring of Scada Based Control System for Filling Process
NASA Astrophysics Data System (ADS)
Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi
2008-10-01
This paper presents a design for real-time monitoring of a filling system using Supervisory Control and Data Acquisition (SCADA). The monitoring of the production process is described in real time using Visual Basic.Net programming under Visual Studio 2005 software, without SCADA software. The software integrators are programmed to get the required information for the configuration screens. Simulation of the components is displayed on the computer screen using a parallel port between the computer and the filling devices. The programs for real-time simulation of the filling process from the pure drinking water industry are provided.
A real-time digital computer program for the simulation of a single rotor helicopter
NASA Technical Reports Server (NTRS)
Houck, J. A.; Gibson, L. H.; Steinmetz, G. G.
1974-01-01
A computer program was developed for the study of a single-rotor helicopter on the Langley Research Center real-time digital simulation system. Descriptions of helicopter equations and data, program subroutines (including flow charts and listings), real-time simulation system routines, and program operation are included. Program usage is illustrated by standard check cases and a representative flight case.
Algorithms for Haptic Rendering of 3D Objects
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam
2003-01-01
Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
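One widely used building block of haptic rendering is a penalty-based force response: when the haptic probe penetrates a virtual surface, a restoring force proportional to the penetration depth is sent back to the device. The sketch below shows this for a sphere; the stiffness constant and geometry are arbitrary illustrative values, and this is not the specific algorithm of the cited work.

    # Penalty-based contact force for a spherical virtual object.
    import numpy as np

    def sphere_force(probe_pos, center, radius, stiffness=800.0):
        """Push the haptic probe out along the surface normal when it penetrates."""
        offset = probe_pos - center
        dist = np.linalg.norm(offset)
        depth = radius - dist
        if depth <= 0.0 or dist == 0.0:
            return np.zeros(3)                   # no contact, no force
        normal = offset / dist
        return stiffness * depth * normal        # F = k * penetration * n

    print(sphere_force(np.array([0.0, 0.0, 0.045]),
                       np.array([0.0, 0.0, 0.0]), radius=0.05))
    # roughly [0, 0, 4] newtons for 5 mm penetration at k = 800 N/m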
NASA Astrophysics Data System (ADS)
Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.
2016-10-01
This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise short cuts to ensure that the games run smoothly in real time to create an immersive effect. Whilst these short cuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.
Real-time estimation of BDS/GPS high-rate satellite clock offsets using sequential least squares
NASA Astrophysics Data System (ADS)
Fu, Wenju; Yang, Yuanxi; Zhang, Qin; Huang, Guanwen
2018-07-01
The real-time precise satellite clock product is one of the key prerequisites for real-time Precise Point Positioning (PPP). The accuracy of the 24-hour predicted satellite clock product with 15 min sampling interval and an update of 6 h provided by the International GNSS Service (IGS) is only 3 ns, which cannot meet the needs of all real-time PPP applications. The real-time estimation of high-rate satellite clock offsets is an efficient method for improving the accuracy. In this paper, a sequential least squares method to estimate real-time satellite clock offsets at a high sample rate is proposed; computational speed is improved by applying an optimized sparse matrix operation to compute the normal equation and by using special measures to take full advantage of modern computer power. The method is first applied to the BeiDou Navigation Satellite System (BDS) and provides real-time estimation with a 1 s sample rate. The results show that the amount of time taken to process a single epoch is about 0.12 s using 28 stations. The Standard Deviation (STD) and Root Mean Square (RMS) of the real-time estimated BDS satellite clock offsets are 0.17 ns and 0.44 ns respectively when compared to German Research Center for Geosciences (GFZ) final clock products. The positioning performance of the real-time estimated satellite clock offsets is evaluated. The RMSs of the real-time BDS kinematic PPP in east, north, and vertical components are 7.6 cm, 6.4 cm and 19.6 cm respectively. The method is also applied to the Global Positioning System (GPS) with a 10 s sample rate, and the computational time of most epochs is less than 1.5 s with 75 stations. The STD and RMS of the real-time estimated GPS satellite clocks are 0.11 ns and 0.27 ns, respectively. Accuracies of 5.6 cm, 2.6 cm and 7.9 cm in east, north, and vertical components are achieved for the real-time GPS kinematic PPP.
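Sequential least squares of the kind described here accumulates each new epoch of observations into the normal equations and re-solves for the clock parameters, so the cost per epoch stays bounded. The dense NumPy sketch below shows only the bookkeeping; the paper's speed comes from sparse normal-equation operations, and the observation model, epoch size and noise level here are synthetic.

    # Sequential least squares: fold one epoch at a time into the normal equations.
    import numpy as np

    def sls_update(N, w, A, y, sigma=1.0):
        """Add one epoch of observations y = A x + noise and re-solve."""
        W = np.eye(len(y)) / sigma**2
        N += A.T @ W @ A                         # information (normal) matrix
        w += A.T @ W @ y                         # information vector
        return np.linalg.solve(N, w)             # current parameter estimate

    n_params = 4
    N = np.eye(n_params) * 1e-6                  # weak prior keeps N invertible
    w = np.zeros(n_params)
    rng = np.random.default_rng(5)
    x_true = rng.standard_normal(n_params)
    for _ in range(20):                          # 20 one-second epochs
        A = rng.standard_normal((8, n_params))
        y = A @ x_true + 0.001 * rng.standard_normal(8)
        x_hat = sls_update(N, w, A, y)
    print(np.round(x_hat - x_true, 4))           # estimate converges to the truth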
Wireless Sensor Network Metrics for Real-Time Systems
2009-05-20
to compute the probability of end-to-end packet delivery as a function of latency, the expected radio energy consumption on the nodes from relaying... schedules for WSNs. Particularly, we focus on the impact scheduling has on path diversity, using short repeating schedules and Greedy Maximal Matching...a greedy algorithm for constructing a mesh routing topology. Finally, we study the implications of using distributed scheduling schemes to generate
NASA Astrophysics Data System (ADS)
Boisson, Guillaume; Kerbiriou, Paul; Drazic, Valter; Bureller, Olivier; Sabater, Neus; Schubert, Arno
2014-03-01
Generating depth maps along with video streams is valuable for Cinema and Television production. Thanks to the improvements of depth acquisition systems, the challenge of fusion between depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining on-the-fly depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps generated in this way are relevant both in uniform and textured areas, without holes due to occlusions or structured light shadows. Our GPU implementation reaches 20 fps for generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is high quality and suitable for 3D reconstruction or virtual view synthesis.
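Stripped of the hierarchical machinery, the energy being minimized combines a stereo matching cost with a term penalizing disagreement with the registered Kinect disparity. The per-pixel winner-take-all toy below illustrates that combination; the cost volume, Kinect disparities and weight lambda are random placeholders rather than the paper's actual criterion or optimizer.

    # Per-pixel fusion of a stereo matching cost volume with Kinect disparities.
    import numpy as np

    rng = np.random.default_rng(6)
    h, w, n_disp = 48, 64, 32
    match_cost = rng.random((h, w, n_disp))       # stereo matching cost volume
    kinect_disp = rng.integers(0, n_disp, (h, w)) # registered Kinect disparities
    lam = 0.3                                     # weight of the Kinect consistency term

    disp_candidates = np.arange(n_disp)
    consistency = np.abs(disp_candidates[None, None, :] - kinect_disp[..., None])
    energy = match_cost + lam * consistency / n_disp
    disp_map = energy.argmin(axis=2)              # winner-take-all per pixel
    print(disp_map.shape, disp_map.min(), disp_map.max())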
Real-Time Embedded High Performance Computing: Communications Scheduling.
1995-06-01
A real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes... adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the... real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real
An Empirical Generative Framework for Computational Modeling of Language Acquisition
ERIC Educational Resources Information Center
Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon
2010-01-01
This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…
Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity.
Zander, Thorsten O; Krol, Laurens R; Birbaumer, Niels P; Gramann, Klaus
2016-12-27
The effectiveness of today's human-machine interaction is limited by a communication bottleneck as operators are required to translate high-level concepts into a machine-mandated sequence of instructions. In contrast, we demonstrate effective, goal-oriented control of a computer system without any form of explicit communication from the human operator. Instead, the system generated the necessary input itself, based on real-time analysis of brain activity. Specific brain responses were evoked by violating the operators' expectations to varying degrees. The evoked brain activity demonstrated detectable differences reflecting congruency with or deviations from the operators' expectations. Real-time analysis of this activity was used to build a user model of those expectations, thus representing the optimal (expected) state as perceived by the operator. Based on this model, which was continuously updated, the computer automatically adapted itself to the expectations of its operator. Further analyses showed this evoked activity to originate from the medial prefrontal cortex and to exhibit a linear correspondence to the degree of expectation violation. These findings extend our understanding of human predictive coding and provide evidence that the information used to generate the user model is task-specific and reflects goal congruency. This paper demonstrates a form of interaction without any explicit input by the operator, enabling computer systems to become neuroadaptive, that is, to automatically adapt to specific aspects of their operator's mindset. Neuroadaptive technology significantly widens the communication bottleneck and has the potential to fundamentally change the way we interact with technology.
Bacterial computing: a form of natural computing and its applications.
Lahoz-Beltra, Rafael; Navarro, Jorge; Marijuán, Pedro C
2014-01-01
The capability to establish adaptive relationships with the environment is an essential characteristic of living cells. Both bacterial computing and bacterial intelligence are two general traits manifested along adaptive behaviors that respond to surrounding environmental conditions. These two traits have generated a variety of theoretical and applied approaches. Since the different systems of bacterial signaling and the different ways of genetic change are better known and more carefully explored, the whole adaptive possibilities of bacteria may be studied under new angles. For instance, there appear instances of molecular "learning" along the mechanisms of evolution. More concretely, and looking specifically at the time dimension, the bacterial mechanisms of learning and evolution appear as two different and related mechanisms for adaptation to the environment; in somatic time the former and in evolutionary time the latter. In the present chapter, the possible application of both kinds of mechanisms to prokaryotic molecular computing schemes, as well as to the solution of real-world problems, will be reviewed.
Bacterial computing: a form of natural computing and its applications
Lahoz-Beltra, Rafael; Navarro, Jorge; Marijuán, Pedro C.
2014-01-01
The capability to establish adaptive relationships with the environment is an essential characteristic of living cells. Both bacterial computing and bacterial intelligence are two general traits manifested along adaptive behaviors that respond to surrounding environmental conditions. These two traits have generated a variety of theoretical and applied approaches. Since the different systems of bacterial signaling and the different ways of genetic change are better known and more carefully explored, the whole adaptive possibilities of bacteria may be studied under new angles. For instance, there appear instances of molecular “learning” along the mechanisms of evolution. More concretely, and looking specifically at the time dimension, the bacterial mechanisms of learning and evolution appear as two different and related mechanisms for adaptation to the environment; in somatic time the former and in evolutionary time the latter. In the present chapter, the possible application of both kinds of mechanisms to prokaryotic molecular computing schemes, as well as to the solution of real-world problems, will be reviewed. PMID:24723912
NASA Astrophysics Data System (ADS)
McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.
2010-01-01
In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.
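A minimal illustration of Hopfield-based novelty detection of this kind: colour signatures seen before are stored as attractors with a Hebbian rule, and an incoming signature is flagged as novel when the network's relaxed state differs noticeably from the input. The binarized 64-bin signatures, the relaxation schedule and the implicit threshold below are assumptions for the sketch, not the parameters used on the wearable or phone platforms.

    # Toy Hopfield-style novelty test on binarized colour signatures.
    import numpy as np

    def train(patterns):
        """Hebbian weights from bipolar (+1/-1) patterns, zero diagonal."""
        W = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(W, 0.0)
        return W / len(patterns)

    def novelty(W, p, n_steps=10):
        """Fraction of bits that change when the network relaxes the pattern."""
        s = p.copy()
        for _ in range(n_steps):
            s = np.where(W @ s >= 0, 1, -1)
        return float(np.mean(s != p))

    rng = np.random.default_rng(7)
    familiar = [np.where(rng.random(64) > 0.5, 1, -1) for _ in range(3)]
    W = train(familiar)
    print(novelty(W, familiar[0]))                            # near 0: familiar signature
    print(novelty(W, np.where(rng.random(64) > 0.5, 1, -1)))  # larger: candidate novelty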
NASA Astrophysics Data System (ADS)
Burba, G. G.; Johnson, D.; Velgersdyk, M.; Beaty, K.; Forgione, A.; Begashaw, I.; Allyn, D.
2015-12-01
Significant increases in data generation and computing power in recent years have greatly improved spatial and temporal flux data coverage on multiple scales, from a single station to continental flux networks. At the same time, operating budgets for flux teams and stations infrastructure are getting ever more difficult to acquire and sustain. With more stations and networks, larger data flows from each station, and smaller operating budgets, modern tools are needed to effectively and efficiently handle the entire process. This would help maximize time dedicated to answering research questions, and minimize time and expenses spent on data processing, quality control and station management. Cross-sharing the stations with external institutions may also help leverage available funding, increase scientific collaboration, and promote data analyses and publications. FluxSuite, a new advanced tool combining hardware, software and web-service, was developed to address these specific demands. It automates key stages of flux workflow, minimizes day-to-day site management, and modernizes the handling of data flows:
- Each next-generation station measures all parameters needed for flux computations.
- The field microcomputer calculates final fully-corrected flux rates in real time, including computation-intensive Fourier transforms, spectra, co-spectra, multiple rotations, stationarity, footprint, etc.
- Final fluxes, radiation, weather and soil data are merged into a single quality-controlled file.
- Multiple flux stations are linked into an automated time-synchronized network.
- The flux network manager, or PI, can see all stations in real time, including fluxes, supporting data, automated reports, and email alerts.
- The PI can assign rights, and allow or restrict access to stations and data: selected stations can be shared via rights-managed access internally or with external institutions.
- Researchers without stations could form "virtual networks" for specific projects by collaborating with PIs from different actual networks.
This presentation provides detailed examples of FluxSuite as currently utilized by two large flux networks in China (National Academy of Sciences & Agricultural Academy of Sciences), and by smaller networks with stations in the USA, Germany, Ireland, Malaysia and other locations around the globe.
Near real-time digital holographic microscope based on GPU parallel computing
NASA Astrophysics Data System (ADS)
Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan
2018-01-01
A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) and digital holographic microscopy are combined. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with parallel computing technology based on CUDA, so it is especially suitable for measurements of particle fields at micrometer and nanometer scales. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at micrometer scale, and the average velocity error is lower than 10%. With the graphics processing unit (GPU), the computing time for the 100 reconstruction planes (512×512 grids) is less than 120 ms, while it is 4.9 s using the traditional CPU-based reconstruction method; the reconstruction speed has thus been increased roughly 40-fold. In other words, the system can handle holograms at 8.3 frames per second, and near real-time measurement and display of the particle velocity field are realized. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved through further optimization of software and hardware.
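Each of those reconstruction planes is typically obtained by FFT-based propagation of the recorded hologram, and it is this per-plane work that the CUDA version parallelizes. The sketch below shows a single-plane angular spectrum reconstruction on the CPU; the hologram, wavelength, pixel pitch and propagation distance are synthetic placeholders rather than the instrument's actual parameters.

    # Single-plane angular spectrum propagation of a recorded hologram.
    import numpy as np

    def angular_spectrum(field, wavelength, pitch, z):
        """Propagate a complex field by distance z (metres)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z)                  # transfer function (evanescent waves cut)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    rng = np.random.default_rng(8)
    hologram = rng.random((512, 512))            # stand-in for a recorded intensity pattern
    recon = angular_spectrum(hologram.astype(complex), 632.8e-9, 3.45e-6, 5e-3)
    print(np.abs(recon).mean())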
A multi-GPU real-time dose simulation software framework for lung radiotherapy.
Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A
2012-09-01
Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and are performed in a pipelined manner. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and back-end patient database repository is also discussed. Real-time simulation of the delivered dose is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with CPU-based commercial software, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.
Epistasis analysis using artificial intelligence.
Moore, Jason H; Hill, Doug P
2015-01-01
Here we introduce artificial intelligence (AI) methodology for detecting and characterizing epistasis in genetic association studies. The ultimate goal of our AI strategy is to analyze genome-wide genetics data as a human would using sources of expert knowledge as a guide. The methodology presented here is based on computational evolution, which is a type of genetic programming. The ability to generate interesting solutions while at the same time learning how to solve the problem at hand distinguishes computational evolution from other genetic programming approaches. We provide a general overview of this approach and then present a few examples of its application to real data.
Real-time wideband cylindrical holographic surveillance system
Sheen, David M.; McMakin, Douglas L.; Hall, Thomas E.; Severtsen, Ronald H.
1999-01-01
A wideband holographic cylindrical surveillance system including a transceiver for generating a plurality of electromagnetic waves; antenna for transmitting the electromagnetic waves toward a target at a plurality of predetermined positions in space; the transceiver also receiving and converting electromagnetic waves reflected from the target to electrical signals at a plurality of predetermined positions in space; a computer for processing the electrical signals to obtain signals corresponding to a holographic reconstruction of the target; and a display for displaying the processed information to determine nature of the target. The computer has instructions to apply Fast Fourier Transforms and obtain a three dimensional cylindrical image.
Real-time wideband holographic surveillance system
Sheen, David M.; Collins, H. Dale; Hall, Thomas E.; McMakin, Douglas L.; Gribble, R. Parks; Severtsen, Ronald H.; Prince, James M.; Reid, Larry D.
1996-01-01
A wideband holographic surveillance system including a transceiver for generating a plurality of electromagnetic waves; antenna for transmitting the electromagnetic waves toward a target at a plurality of predetermined positions in space; the transceiver also receiving and converting electromagnetic waves reflected from the target to electrical signals at a plurality of predetermined positions in space; a computer for processing the electrical signals to obtain signals corresponding to a holographic reconstruction of the target; and a display for displaying the processed information to determine nature of the target. The computer has instructions to apply a three dimensional backward wave algorithm.
Real-time wideband holographic surveillance system
Sheen, D.M.; Collins, H.D.; Hall, T.E.; McMakin, D.L.; Gribble, R.P.; Severtsen, R.H.; Prince, J.M.; Reid, L.D.
1996-09-17
A wideband holographic surveillance system including a transceiver for generating a plurality of electromagnetic waves; antenna for transmitting the electromagnetic waves toward a target at a plurality of predetermined positions in space; the transceiver also receiving and converting electromagnetic waves reflected from the target to electrical signals at a plurality of predetermined positions in space; a computer for processing the electrical signals to obtain signals corresponding to a holographic reconstruction of the target; and a display for displaying the processed information to determine nature of the target. The computer has instructions to apply a three dimensional backward wave algorithm. 28 figs.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
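As a rough illustration of the offline/online split described above, the sketch below precomputes reduced operators at two sampled parameter points and builds a ROM at an unsampled point by interpolating them. Entrywise linear interpolation is used only as the simplest stand-in for the matrix-manifold interpolation of the paper; the basis, operators and parameter values are made up for illustration.

```python
import numpy as np

def offline_rom(A_full, V):
    """Offline: project a full-order operator onto a reduced basis V (n x r)."""
    return V.T @ A_full @ V                     # r x r reduced operator

def online_rom(mu, mu0, mu1, A_r0, A_r1):
    """Online: build a ROM at an unsampled parameter mu by interpolating the
    stored reduced operators.  Entrywise linear interpolation is a simplified
    stand-in for matrix-manifold interpolation, and it only touches r x r
    matrices, which is what makes the online phase cheap."""
    w = (mu - mu0) / (mu1 - mu0)
    return (1.0 - w) * A_r0 + w * A_r1

# Illustrative sizes: full order n = 2000, reduced order r = 10.
rng = np.random.default_rng(0)
n, r = 2000, 10
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
A0 = rng.standard_normal((n, n))        # full operator at parameter mu0
A1 = rng.standard_normal((n, n))        # full operator at parameter mu1
A_r0, A_r1 = offline_rom(A0, V), offline_rom(A1, V)
A_r = online_rom(0.3, 0.0, 1.0, A_r0, A_r1)     # queried parameter mu = 0.3
```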
Automating quantum experiment control
NASA Astrophysics Data System (ADS)
Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.
2017-03-01
The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Proffitt, Dennis R.
1992-01-01
Recent developments in microelectronics have encouraged the use of 3D data bases to create compelling volumetric renderings of graphical objects. However, even with the computational capabilities of current-generation graphical systems, real-time displays of such objects are difficult, particularly when dynamic spatial transformations are involved. In this paper we discuss a type of visual stimulus (the stereokinetic effect display) that is computationally far less complex than a true three-dimensional transformation but yields an equally compelling depth impression, often perceptually indiscriminable from the true spatial transformation. Several possible applications for this technique are discussed (e.g., animating contour maps and air traffic control displays so as to evoke accurate depth percepts).
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Seldner, K.; Cwynar, D. S.
1977-01-01
A real time, hybrid computer simulation of a turbofan engine is described. Controls research programs involving that engine are supported by the simulation. The real time simulation is shown to match the steady state and transient performance of the engine over a wide range of flight conditions and power settings. The simulation equations, FORTRAN listing, and analog patching diagrams are included.
Time-recovering PCI-AER interface for bio-inspired spiking systems
NASA Astrophysics Data System (ADS)
Paz-Vicente, R.; Linares-Barranco, A.; Cascado, D.; Vicente, S.; Jimenez, G.; Civit, A.
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic inter-chip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time-multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Neurons generate 'events' according to their activity levels: more active neurons generate more events per unit time and access the inter-chip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems it is absolutely necessary to have a computer interface that allows (a) AER inter-chip traffic to be read into the computer and visualized on screen, and (b) a sequence of events to be injected at some point of the AER structure. This is necessary for testing and debugging complex AER systems. This paper presents a PCI-to-AER interface that dispatches a sequence of events received from the PCI bus, with embedded timing information establishing when each event is to be delivered. A set of specialized state machines has been introduced to recover from the time delays introduced by the asynchronous AER bus. On the input channel, the interface captures events, assigns a timestamp, and delivers them through the PCI bus to MATLAB applications. It has been implemented in real-time hardware using VHDL and has been tested on a PCI-AER board, developed by the authors, that includes a Spartan II 200 FPGA. The demonstration hardware is currently capable of sending and receiving events at a peak rate of 8.3 Mevents/s and a typical rate of 1 Mevents/s.
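A toy software model of the interface's two directions is sketched below: replaying a stored sequence of (timestamp, address) events at the right times, and timestamping captured events for the host. The data layout and timing calls are assumptions for illustration; the real interface does this in FPGA hardware, not in Python.

```python
import time

def dispatch_events(events, send):
    """Play back AER events whose timestamps (seconds, relative to the start
    of playback) say when each address event must appear on the bus."""
    t0 = time.perf_counter()
    for timestamp, address in sorted(events):
        # Busy-wait until the event is due, then hand it to the bus driver.
        while time.perf_counter() - t0 < timestamp:
            pass
        send(address)

def capture_event(address, t0, log):
    """Attach a timestamp to an incoming address event, as the input channel
    does before forwarding events to the host."""
    log.append((time.perf_counter() - t0, address))

# Usage: send three events to a stand-in bus driver and capture one.
bus, log = [], []
dispatch_events([(0.000, 17), (0.001, 42), (0.002, 17)], bus.append)
capture_event(42, time.perf_counter(), log)
```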
Dombert, Beate; Mokros, Andreas; Brückner, Eva; Schlegl, Verena; Antfolk, Jan; Bäckström, Anna; Zappalà, Angelo; Osterheider, Michael; Santtila, Pekka
2013-12-01
The implicit assessment of pedophilic sexual interest through viewing-time methods necessitates visual stimuli. There are grave ethical and legal concerns against using pictures of real children, however. The present report is a summary of findings on a new set of 108 computer-generated stimuli. The images vary in terms of gender (female/male), explicitness (naked/clothed), and physical maturity (prepubescent, pubescent, and adult) of the persons depicted. A series of three studies tested the internal and external validity of the picture set. Studies 1 and 2 yielded good-to-high estimates of observer agreement with regard to stimulus maturity levels by two methods (categorization and paired comparison). Study 3 extended these findings with regard to judgments made by convicted child sexual offenders.
Real-time optical image processing techniques
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang
1988-01-01
Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing; in this case, the SBP of the LCLV is sufficient, but the uniformity of the LCLV needs improvement. Applications pursued include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real time by using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear, multiple-sensory neural processing has been suggested.
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
Dynamic Clamp in Cardiac and Neuronal Systems Using RTXI
Ortega, Francis A.; Butera, Robert J.; Christini, David J.; White, John A.; Dorval, Alan D.
2016-01-01
The injection of computer-simulated conductances through the dynamic clamp technique has allowed researchers to probe the intercellular and intracellular dynamics of cardiac and neuronal systems with great precision. By coupling computational models to biological systems, dynamic clamp has become a proven tool in electrophysiology with many applications, such as generating hybrid networks in neurons or simulating channelopathies in cardiomyocytes. While its applications are broad, the approach is straightforward: synthesizing traditional patch clamp, computational modeling, and closed-loop feedback control to simulate a cellular conductance. Here, we present two example applications: artificial blocking of the inward rectifier potassium current in a cardiomyocyte and coupling of a biological neuron to a virtual neuron through a virtual synapse. The design and implementation of the necessary software to administer these dynamic clamp experiments can be difficult. In this chapter, we provide an overview of designing and implementing a dynamic clamp experiment using the Real-Time eXperiment Interface (RTXI), an open-source software system tailored for real-time biological experiments. We present two ways to achieve this using RTXI’s modular format, through the creation of a custom user-made module and through existing modules found in RTXI’s online library. PMID:25023319
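The feedback law at the heart of a dynamic clamp step is a single equation: the command current equals the simulated conductance times the driving force, I = g_sim (V_m − E_rev), injected with the sign that mimics the channel. Below is a plain-Python sketch of one loop iteration with the data acquisition stubbed out; it is not RTXI module code, and the conductance and reversal potential are illustrative.

```python
def dynamic_clamp_step(read_vm, write_current, g_sim, e_rev):
    """One iteration of a dynamic clamp loop: measure the membrane potential,
    compute the current a conductance g_sim would carry, and command the
    amplifier to inject it."""
    v_m = read_vm()                      # measured membrane potential (V)
    i_ion = g_sim * (v_m - e_rev)        # ionic current of the simulated
                                         # channel (A), outward positive
    write_current(-i_ion)                # inject the opposite sign so the cell
                                         # behaves as if the channel were present
    return v_m, i_ion

# Stand-in I/O for illustration; a real experiment reads a DAQ at >= 10 kHz.
v_trace = iter([-0.070, -0.065, -0.060])     # volts
injected = []
for _ in range(3):
    dynamic_clamp_step(lambda: next(v_trace), injected.append,
                       g_sim=2e-9, e_rev=-0.090)   # 2 nS, E_rev = -90 mV
```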
Zamani, Majid; Demosthenous, Andreas
2014-07-01
Next generation neural interfaces for upper-limb (and other) prostheses aim to develop implantable interfaces for one or more nerves, each interface having many neural signal channels that work reliably in the stump without harming the nerves. To achieve real-time multi-channel processing it is important to integrate spike sorting on-chip to overcome limitations in transmission bandwidth. This requires computationally efficient algorithms for feature extraction and clustering suitable for low-power hardware implementation. This paper describes a new feature extraction method for real-time spike sorting based on extrema analysis (namely positive peaks and negative peaks) of spike shapes and their discrete derivatives at different frequency bands. Employing simulation across different datasets, the accuracy and computational complexity of the proposed method are assessed and compared with other methods. The average classification accuracy of the proposed method in conjunction with online sorting (O-Sort) is 91.6%, outperforming all the other methods tested with the O-Sort clustering algorithm. The proposed method offers a better tradeoff between classification error and computational complexity, making it a particularly strong choice for on-chip spike sorting.
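To make the extrema idea concrete, the sketch below builds a small feature vector from the positive and negative peaks of a spike waveform and of its discrete derivative. The exact feature set, band decomposition and normalization in the paper may differ; this is only the general shape of the computation.

```python
import numpy as np

def extrema_features(spike):
    """Spike-sorting features from extrema of the waveform and of its first
    discrete derivative: peak values plus the samples where they occur."""
    d = np.diff(spike)
    return np.array([
        spike.max(), spike.min(),            # positive / negative peaks
        np.argmax(spike), np.argmin(spike),  # their positions (samples)
        d.max(), d.min(),                    # extrema of the derivative
    ], dtype=float)

# Two toy spike shapes give clearly separable feature vectors.
t = np.arange(32)
spike_a = np.exp(-((t - 10) ** 2) / 8.0) - 0.4 * np.exp(-((t - 18) ** 2) / 20.0)
spike_b = 0.6 * np.exp(-((t - 14) ** 2) / 6.0) - 0.8 * np.exp(-((t - 20) ** 2) / 10.0)
features = np.vstack([extrema_features(spike_a), extrema_features(spike_b)])
```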
NASA Astrophysics Data System (ADS)
Nagel, David J.
2000-11-01
The coordinated exploitation of modern communication, micro-sensor and computer technologies makes it possible to give global reach to our senses. Web-cameras for vision, web-microphones for hearing and web-'noses' for smelling, plus the abilities to sense many factors we cannot ordinarily perceive, are either available or will be soon. Applications include (1) determination of weather and environmental conditions on dense grids or over large areas, (2) monitoring of energy usage in buildings, (3) sensing the condition of hardware in electrical power distribution and information systems, (4) improving process control and other manufacturing, (5) development of intelligent terrestrial, marine, aeronautical and space transportation systems, (6) managing the continuum of routine security monitoring, diverse crises and military actions, and (7) medicine, notably the monitoring of the physiology and living conditions of individuals. Some of the emerging capabilities, such as the ability to measure remotely the conditions inside of people in real time, raise interesting social concerns centered on privacy issues. Methods for sensor data fusion and designs for human-computer interfaces are both crucial for the full realization of the potential of pervasive sensing. Computer-generated virtual reality, augmented with real-time sensor data, should be an effective means for presenting information from distributed sensors.
Parallel processor for real-time structural control
NASA Astrophysics Data System (ADS)
Tise, Bert L.
1993-07-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
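The per-sample workload such a controller executes is a fixed set of multiply/accumulates, e.g. a discrete state-space update x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k]. A minimal sketch of that inner loop follows; the matrices and signal values are illustrative, not taken from the paper.

```python
import numpy as np

def control_step(x, u, A, B, C, D):
    """One sample of a discrete state-space controller: sensor inputs u
    arrive from the A/D side, actuator outputs y go to the D/A side."""
    y = C @ x + D @ u            # output sent to the actuators
    x_next = A @ x + B @ u       # state update for the next sample
    return x_next, y

# Illustrative 2-state, 1-input, 1-output controller.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

x = np.zeros((2, 1))
for u_k in (1.0, 0.5, 0.0):                  # a short sensor sequence
    x, y = control_step(x, np.array([[u_k]]), A, B, C, D)
```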
CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs.
Johnson, Micah K; Dale, Kevin; Avidan, Shai; Pfister, Hanspeter; Freeman, William T; Matusik, Wojciech
2011-09-01
Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.
Human sense utilization method on real-time computer graphics
NASA Astrophysics Data System (ADS)
Maehara, Hideaki; Ohgashi, Hitoshi; Hirata, Takao
1997-06-01
We are developing an adjustment method for real-time computer graphics that technologically exploits human sensibility, so that the graphics give the audience the various impressions intended by the producer. In general, producing real-time computer graphics requires much adjustment of parameters such as 3D object models, their motions, attributes, view angle, and parallax, so that the graphics give the audience the intended effects, such as a sense of material reality or a sense of experience. It is also known that adjusting these parameters by trial and error is costly. A graphics producer often evaluates his graphics in order to improve them; for example, a scene may lack a 'sense of speed' or need more 'sense of settling down.' On the other hand, we can learn how the parameters of the computer graphics affect such senses by statistically analyzing several samples of computer graphics that evoke different senses. Paying attention to these two facts, we designed an adjustment method in which degrees of sense are input to the computer. Using this method, real-time computer graphics can be adjusted more effectively than by the conventional trial-and-error approach.
Boat, wake, and wave real-time simulation
NASA Astrophysics Data System (ADS)
Świerkowski, Leszek; Gouthas, Efthimios; Christie, Chad L.; Williams, Owen M.
2009-05-01
We describe the extension of our real-time scene generation software VIRSuite to include the dynamic simulation of small boats and their wakes within an ocean environment. Extensive use has been made of the programmability available in the current generation of GPUs. We have demonstrated that real-time simulation is feasible, even including such complexities as dynamical calculation of the boat motion, wake generation and calculation of an FFT-generated sea state.
Design of an FPGA-Based Algorithm for Real-Time Solutions of Statistics-Based Positioning
DeWitt, Don; Johnson-Williams, Nathan G.; Miyaoka, Robert S.; Li, Xiaoli; Lockhart, Cate; Lewellen, Tom K.; Hauck, Scott
2010-01-01
We report on the implementation of an algorithm and hardware platform to allow real-time processing of the statistics-based positioning (SBP) method for continuous miniature crystal element (cMiCE) detectors. The SBP method allows an intrinsic spatial resolution of ~1.6 mm FWHM to be achieved using our cMiCE design. Previous SBP solutions have required a postprocessing procedure due to the computation and memory intensive nature of SBP. This new implementation takes advantage of a combination of algebraic simplifications, conversion to fixed-point math, and a hierarchical search technique to greatly accelerate the algorithm. For the presented seven stage, 127 × 127 bin LUT implementation, these algorithm improvements result in a reduction from >7 × 106 floating-point operations per event for an exhaustive search to < 5 × 103 integer operations per event. Simulations show nearly identical FWHM positioning resolution for this accelerated SBP solution, and positioning differences of <0.1 mm from the exhaustive search solution. A pipelined field programmable gate array (FPGA) implementation of this optimized algorithm is able to process events in excess of 250 K events per second, which is greater than the maximum expected coincidence rate for an individual detector. In contrast with all detectors being processed at a centralized host, as in the current system, a separate FPGA is available at each detector, thus dividing the computational load. These methods allow SBP results to be calculated in real-time and to be presented to the image generation components in real-time. A hardware implementation has been developed using a commercially available prototype board. PMID:21197135
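A rough sketch of the coarse-to-fine search idea follows: rather than scoring every bin of the look-up table, score a sparse grid, keep the best cell, and refine inside it. The Gaussian-likelihood statistic, table sizes and fan-out below are illustrative simplifications of the SBP method, and the floating-point code ignores the fixed-point conversion used in the FPGA.

```python
import numpy as np

def hierarchical_search(event, mean_lut, var_lut, levels=3, fanout=4):
    """Coarse-to-fine positioning over a (bins x bins x n_sensors) look-up
    table: at each level only a sparse grid of candidate bins is scored with
    a Gaussian log-likelihood, then the window shrinks around the best
    candidate instead of scanning the full grid exhaustively."""
    bins = mean_lut.shape[0]
    lo_x, hi_x, lo_y, hi_y = 0, bins, 0, bins
    best_xy = (0, 0)
    for _ in range(levels):
        xs = np.unique(np.linspace(lo_x, hi_x - 1, fanout).astype(int))
        ys = np.unique(np.linspace(lo_y, hi_y - 1, fanout).astype(int))
        best = np.inf
        for x in xs:
            for y in ys:
                score = np.sum((event - mean_lut[x, y]) ** 2 / var_lut[x, y])
                if score < best:
                    best, best_xy = score, (x, y)
        span_x = max(1, (hi_x - lo_x) // fanout)
        span_y = max(1, (hi_y - lo_y) // fanout)
        lo_x, hi_x = max(0, best_xy[0] - span_x), min(bins, best_xy[0] + span_x + 1)
        lo_y, hi_y = max(0, best_xy[1] - span_y), min(bins, best_xy[1] + span_y + 1)
    return best_xy

# Illustrative smooth LUT: 64 'sensors', each responding with a Gaussian
# centred on its own position in a 127 x 127 field.
xs_b, ys_b = np.meshgrid(np.arange(127), np.arange(127), indexing="ij")
centres = np.stack(np.meshgrid(np.linspace(0, 126, 8),
                               np.linspace(0, 126, 8)), -1).reshape(-1, 2)
mean_lut = np.stack([np.exp(-((xs_b - cx) ** 2 + (ys_b - cy) ** 2) / (2 * 20.0 ** 2))
                     for cx, cy in centres], axis=-1)
var_lut = np.full_like(mean_lut, 0.01)
rng = np.random.default_rng(1)
event = mean_lut[60, 80] + rng.normal(0.0, 0.02, 64)
x_hat, y_hat = hierarchical_search(event, mean_lut, var_lut)   # near (60, 80)
```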
Random graph models for dynamic networks
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Moore, Cristopher; Newman, Mark E. J.
2017-10-01
Recent theoretical work on the modeling of network structure has focused primarily on networks that are static and unchanging, but many real-world networks change their structure over time. There exist natural generalizations to the dynamic case of many static network models, including the classic random graph, the configuration model, and the stochastic block model, where one assumes that the appearance and disappearance of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. Here we give an introduction to this class of models, showing for instance how one can compute their equilibrium properties. We also demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data using the method of maximum likelihood. This allows us, for example, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate these methods with a selection of applications, both to computer-generated test networks and real-world examples.
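A minimal simulation of the edge-level model described above: each node pair carries an independent two-state Markov chain in which an absent edge appears at rate λ and a present edge disappears at rate μ, so the equilibrium probability that an edge exists is λ/(λ+μ). The sketch uses small discrete time steps as an approximation of the continuous-time process; the rates and network size are arbitrary.

```python
import numpy as np

def simulate_dynamic_graph(n, lam, mu, dt, steps, rng):
    """Discrete-time approximation of independent edge birth/death processes:
    an absent edge appears with probability lam*dt per step and a present
    edge disappears with probability mu*dt per step."""
    adj = np.zeros((n, n), dtype=bool)
    iu = np.triu_indices(n, k=1)
    density = []
    for _ in range(steps):
        r = rng.random(len(iu[0]))
        present = adj[iu]
        new = np.where(present, r > mu * dt, r < lam * dt)
        adj[iu] = new
        adj[iu[1], iu[0]] = new               # keep the graph undirected
        density.append(new.mean())
    return np.array(density)

rng = np.random.default_rng(0)
lam, mu = 0.2, 0.8                            # appearance / disappearance rates
dens = simulate_dynamic_graph(50, lam, mu, dt=0.1, steps=2000, rng=rng)
# The long-run edge density should approach lam / (lam + mu) = 0.2.
print(dens[-200:].mean())
```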
Ghoshhajra, Brian B; Takx, Richard A P; Stone, Luke L; Girard, Erin E; Brilakis, Emmanouil S; Lombardi, William L; Yeh, Robert W; Jaffer, Farouc A
2017-06-01
The purpose of this study was to demonstrate the feasibility of real-time fusion of coronary computed tomography angiography (CTA) centreline and arterial wall calcification with x-ray fluoroscopy during chronic total occlusion (CTO) percutaneous coronary intervention (PCI). Patients undergoing CTO PCI were prospectively enrolled. Pre-procedural CT scans were integrated with conventional coronary fluoroscopy using prototype software. We enrolled 24 patients who underwent CTO PCI using the prototype CT fusion software, and 24 consecutive CTO PCI patients without CT guidance served as a control group. Mean age was 66 ± 11 years, and 43/48 patients were men. Real-time CTA fusion during CTO PCI provided additional information regarding coronary arterial calcification and tortuosity that generated new insights into antegrade wiring, antegrade dissection/reentry, and retrograde wiring during CTO PCI. Overall CTO success rates and procedural outcomes remained similar between the two groups, despite a trend toward higher complexity in the fusion CTA group. This study demonstrates that real-time automated co-registration of coronary CTA centreline and calcification onto live fluoroscopic images is feasible and provides new insights into CTO PCI, and in particular, antegrade dissection reentry-based CTO PCI. • Real-time semi-automated fusion of CTA/fluoroscopy is feasible during CTO PCI. • CTA fusion data can be toggled on/off as desired during CTO PCI • Real-time CT calcium and centreline overlay could benefit antegrade dissection/reentry-based CTO PCI.
Usefulness of image morphing techniques in cancer treatment by conformal radiotherapy
NASA Astrophysics Data System (ADS)
Atoui, Hussein; Sarrut, David; Miguet, Serge
2004-05-01
Conformal radiotherapy is a cancer treatment technique that targets high-energy X-rays to tumors with minimal exposure to surrounding healthy tissues. Irradiation ballistics is calculated based on an initial 3D Computerized Tomography (CT) scan. At every treatment session, the random positioning of the patient, compared to the reference position defined by the initial 3D CT scan, can generate treatment inaccuracies. Positioning errors potentially predispose to dangerous exposure of healthy tissues as well as insufficient irradiation of the tumor. A proposed solution would be the use of portal images generated by Electronic Portal Imaging Devices (EPID). Portal images (PI) allow a comparison with reference images retained by physicians, namely Digitally Reconstructed Radiographs (DRRs). At present, physicians must estimate patient positional errors by visual inspection; however, this may be inaccurate and time-consuming. The automation of this task has been the subject of much research. Unfortunately, the intensive use of DRRs and the high computing time required have prevented real-time implementation. We are currently investigating a new method for DRR generation that calculates intermediate DRRs by 2D deformation of previously computed DRRs. We approach this investigation with a morphing-based technique named mesh warping.
Real-time Tsunami Inundation Prediction Using High Performance Computers
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.
2014-12-01
Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are being actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines, for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of the linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation, improving the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational load of solving the non-linear shallow water equations for inundation prediction is large, it has become tractable through recent developments in high performance computing. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m; the resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the earthquake took about 2 minutes, which would be sufficient for practical tsunami inundation prediction. In the presentation, the computational performance of our faster-than-real-time tsunami inundation model will be shown, and a preferable tsunami wave source analysis for accurate inundation prediction will also be discussed.
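The numerical core of such a model can be illustrated with the linear shallow water equations on a staggered grid, advanced with the leap-frog-style update common in tsunami codes: velocity from the free-surface slope, then surface elevation from the flux divergence. The sketch below is one-dimensional, with constant depth and reflective ends, i.e. a drastic simplification of the nested, nonlinear inundation model in the abstract; all parameters are illustrative.

```python
import numpy as np

def shallow_water_1d(eta0, depth, dx, dt, steps, g=9.81):
    """Staggered-grid update for the linear shallow water equations: the
    velocity u lives between eta points; each step updates u from the
    free-surface slope, then eta from the flux divergence."""
    eta = eta0.copy()
    u = np.zeros(len(eta) - 1)                 # velocity between eta points
    for _ in range(steps):
        u -= g * dt / dx * (eta[1:] - eta[:-1])
        eta[1:-1] -= depth * dt / dx * (u[1:] - u[:-1])
    return eta

# Illustrative setup: 100 km domain, 4 km depth, a 1 m Gaussian hump.
dx, depth = 1000.0, 4000.0
x = np.arange(0.0, 100000.0, dx)
eta0 = np.exp(-((x - 50000.0) ** 2) / (2 * 5000.0 ** 2))
dt = 0.5 * dx / np.sqrt(9.81 * depth)          # respect the CFL limit
eta = shallow_water_1d(eta0, depth, dx, dt, steps=200)
```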
Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU
NASA Astrophysics Data System (ADS)
Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.
2007-03-01
In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
NASA Astrophysics Data System (ADS)
Savastano, Giorgio; Komjathy, Attila; Verkhoglyadova, Olga; Mazzoni, Augusto; Crespi, Mattia; Wei, Yong; Mannucci, Anthony J.
2017-04-01
It is well known that tsunamis can produce gravity waves that propagate up to the ionosphere generating disturbed electron densities in the E and F regions. These ionospheric disturbances can be studied in detail using ionospheric total electron content (TEC) measurements collected by continuously operating ground-based receivers from the Global Navigation Satellite Systems (GNSS). Here, we present results using a new approach, named VARION (Variometric Approach for Real-Time Ionosphere Observation), and estimate slant TEC (sTEC) variations in a real-time scenario. Using the VARION algorithm we compute TEC variations at 56 GPS receivers in Hawaii as induced by the 2012 Haida Gwaii tsunami event. We observe TEC perturbations with amplitudes of up to 0.25 TEC units and traveling ionospheric perturbations (TIDs) moving away from the earthquake epicenter at an approximate speed of 316 m/s. We perform a wavelet analysis to analyze localized variations of power in the TEC time series and we find perturbation periods consistent with a tsunami typical deep ocean period. Finally, we present comparisons with the real-time tsunami MOST (Method of Splitting Tsunami) model produced by the NOAA Center for Tsunami Research and we observe variations in TEC that correlate in time and space with the tsunami waves.
Savastano, Giorgio; Komjathy, Attila; Verkhoglyadova, Olga; Mazzoni, Augusto; Crespi, Mattia; Wei, Yong; Mannucci, Anthony J.
2017-01-01
It is well known that tsunamis can produce gravity waves that propagate up to the ionosphere generating disturbed electron densities in the E and F regions. These ionospheric disturbances can be studied in detail using ionospheric total electron content (TEC) measurements collected by continuously operating ground-based receivers from the Global Navigation Satellite Systems (GNSS). Here, we present results using a new approach, named VARION (Variometric Approach for Real-Time Ionosphere Observation), and estimate slant TEC (sTEC) variations in a real-time scenario. Using the VARION algorithm we compute TEC variations at 56 GPS receivers in Hawaii as induced by the 2012 Haida Gwaii tsunami event. We observe TEC perturbations with amplitudes of up to 0.25 TEC units and traveling ionospheric perturbations (TIDs) moving away from the earthquake epicenter at an approximate speed of 316 m/s. We perform a wavelet analysis to analyze localized variations of power in the TEC time series and we find perturbation periods consistent with a tsunami typical deep ocean period. Finally, we present comparisons with the real-time tsunami MOST (Method of Splitting Tsunami) model produced by the NOAA Center for Tsunami Research and we observe variations in TEC that correlate in time and space with the tsunami waves. PMID:28429754
Savastano, Giorgio; Komjathy, Attila; Verkhoglyadova, Olga; Mazzoni, Augusto; Crespi, Mattia; Wei, Yong; Mannucci, Anthony J
2017-04-21
It is well known that tsunamis can produce gravity waves that propagate up to the ionosphere generating disturbed electron densities in the E and F regions. These ionospheric disturbances can be studied in detail using ionospheric total electron content (TEC) measurements collected by continuously operating ground-based receivers from the Global Navigation Satellite Systems (GNSS). Here, we present results using a new approach, named VARION (Variometric Approach for Real-Time Ionosphere Observation), and estimate slant TEC (sTEC) variations in a real-time scenario. Using the VARION algorithm we compute TEC variations at 56 GPS receivers in Hawaii as induced by the 2012 Haida Gwaii tsunami event. We observe TEC perturbations with amplitudes of up to 0.25 TEC units and traveling ionospheric perturbations (TIDs) moving away from the earthquake epicenter at an approximate speed of 316 m/s. We perform a wavelet analysis to analyze localized variations of power in the TEC time series and we find perturbation periods consistent with a tsunami typical deep ocean period. Finally, we present comparisons with the real-time tsunami MOST (Method of Splitting Tsunami) model produced by the NOAA Center for Tsunami Research and we observe variations in TEC that correlate in time and space with the tsunami waves.
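The variometric idea can be sketched as follows: epoch-to-epoch differences of the geometry-free carrier-phase combination cancel the constant ambiguity and leave the change in slant TEC, scaled by a factor built from the two carrier frequencies. The snippet below uses the standard GPS L1/L2 frequencies and toy phase series; it is a simplification of the full VARION processing (no cycle-slip handling, no leveling, no mapping to vertical TEC).

```python
import numpy as np

F1, F2 = 1575.42e6, 1227.60e6          # GPS L1 / L2 frequencies (Hz)
ALPHA = F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))   # metres -> electrons/m^2

def stec_variations(phase_l1_m, phase_l2_m):
    """Epoch-to-epoch slant TEC variations (in TEC units) from carrier-phase
    observations expressed in metres.  Differencing the geometry-free
    combination between consecutive epochs removes the constant ambiguity."""
    l_gf = np.asarray(phase_l1_m) - np.asarray(phase_l2_m)
    return ALPHA * np.diff(l_gf) / 1e16          # 1 TECU = 1e16 el/m^2

# Toy series: a 0.05 m ramp in the geometry-free combination over 5 epochs.
l1 = np.array([0.00, 0.01, 0.02, 0.03, 0.04, 0.05]) + 2.0e7
l2 = np.full(6, 2.0e7)
print(stec_variations(l1, l2))        # a small positive dTEC at each epoch
```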
RTSPM: real-time Linux control software for scanning probe microscopy.
Chandrasekhar, V; Mehta, M M
2013-01-01
Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution for capturing the output voltage signal that carries grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices for accelerating the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj
2008-03-01
The real-time requirement means that the simulation should be able to follow the actions of the user, who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires development of appropriate hardware and software to connect the medical instrumentarium with the computer, and to operate the computer through the connected instrumentarium and sophisticated multimedia interfaces.
NASA Technical Reports Server (NTRS)
Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.
1989-01-01
The findings of a preliminary investigation by Southwest Research Institute (SwRI) in simulation host computer concepts is presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.
Effects of computing time delay on real-time control systems
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Cui, Xianzhong
1988-01-01
The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter requirement arises from the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
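A toy illustration of the delay problem follows: the same discrete PI controller drives a first-order plant, once with its output applied immediately and once applied several samples late, standing in for the computing time delay. The plant, gains and delay are invented for illustration and are unrelated to the robot-trajectory example of the paper.

```python
def run_loop(delay_samples, steps=200, dt=0.01, a=1.0, b=1.0,
             kp=8.0, ki=20.0, setpoint=1.0):
    """Simulate x' = -a*x + b*u with a discrete PI controller whose output
    reaches the plant delay_samples sampling periods after it is computed."""
    x, integ, err_sq = 0.0, 0.0, 0.0
    pending = [0.0] * delay_samples      # outputs computed but not yet applied
    for _ in range(steps):
        e = setpoint - x
        integ += e * dt
        pending.append(kp * e + ki * integ)   # control computed this sample...
        u = pending.pop(0)                    # ...applied delay_samples later
        x += dt * (-a * x + b * u)
        err_sq += e * e * dt
    return err_sq

print(run_loop(0), run_loop(5))   # the delayed loop accumulates more tracking error
```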
Use of high performance networks and supercomputers for real-time flight simulation
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1993-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.
High performance real-time flight simulation at NASA Langley
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1994-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide high bandwidth, low latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.
Role of the BIPM in UTC Dissemination to the Real Time User
NASA Technical Reports Server (NTRS)
Quinn, T. J.; Thomas, C.
1996-01-01
The generation and dissemination of International Atomic Time (TAI) and Coordinated Universal Time (UTC) are explicitly mentioned in the list of principal tasks of the Bureau International des Poids et Mesures (BIPM) that appears in the Comptes Rendus of the 18th Conférence Générale des Poids et Mesures, in 1987. These time scales are used as the ultimate reference in the most demanding scientific applications and must, therefore, be of the best metrological quality in terms of reliability, long-term stability, and conformity of the scale interval with the second, the unit of time of the International System of Units. To meet these requirements, the readings of the atomic clocks, spread all over the world, that are used as basic timing data for TAI and UTC generation must be combined in the most efficient way possible. In particular, taking full advantage of the quality of each contributing clock calls for observation of its performance over a sufficiently long time. At present, the computation treats data in blocks covering two months. TAI and UTC are thus deferred-time scales that cannot be immediately available to real-time users. The BIPM can, nevertheless, be of help to real-time users. The predictability of UTC is a fundamental attribute of the scale for institutions responsible for the dissemination of real-time time scales. It allows them to improve their local representations of UTC and thus implement a more thorough steering of the time scales diffused in real time. With a view to improving the predictability of UTC, the BIPM examines in detail timing techniques and basic theories in order to propose alternative solutions for timing algorithms. This, coupled with a recent improvement of timing data, makes UTC more stable and thus more predictable. At a more practical level, effort is being devoted to putting in place automatic procedures for reducing the time needed for data collection and treatment: monthly results are already available ten days earlier than before.
Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment
ERIC Educational Resources Information Center
He, Aiguo
2011-01-01
Computer system is useful for improving real time and interactive distance education activities. Especially in the case that a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions over the virtual classrooms. The problem is that within…
1990-04-23
Target configuration -- Operating System: CSC-developed Ada Real-Time Operating System (ARTOS) for bare machine environments; Memory Size: 4 MB. Subject terms: Ada programming language, Ada... Test Method: testing of the MC Ada V1.2.beta / Concurrent Computer Corporation compiler and the CSC-developed Ada Real-Time Operating System (ARTOS) for bare machine environments.
Design of teleoperation system with a force-reflecting real-time simulator
NASA Technical Reports Server (NTRS)
Hirata, Mitsunori; Sato, Yuichi; Nagashima, Fumio; Maruyama, Tsugito
1994-01-01
We developed a force-reflecting teleoperation system that uses a real-time graphic simulator. This system eliminates the effects of communication time delays in remote robot manipulation. The simulator provides the operator with predictive display and feedback of computed contact forces through a six-degree of freedom (6-DOF) master arm on a real-time basis. With this system, peg-in-hole tasks involving round-trip communication time delays of up to a few seconds were performed at three support levels: a real image alone, a predictive display with a real image, and a real-time graphic simulator with computed-contact-force reflection and a predictive display. The experimental results indicate the best teleoperation efficiency was achieved by using the force-reflecting simulator with two images. The shortest work time, lowest sensor maximum, and a 100 percent success rate were obtained. These results demonstrate the effectiveness of simulated-force-reflecting teleoperation efficiency.
"Virtual Cockpit Window" for a Windowless Aerospacecraft
NASA Technical Reports Server (NTRS)
Abernathy, Michael F.
2003-01-01
A software system processes navigational and sensory information in real time to generate a three-dimensional-appearing image of the external environment for viewing by crewmembers of a windowless aerospacecraft. The design of the particular aerospacecraft (the X-38) is such that the addition of a real transparent cockpit window to the airframe would have resulted in unacceptably large increases in weight and cost. When exerting manual control, an aircrew needs to see terrain, obstructions, and other features around the aircraft in order to land safely. The X-38 is capable of automated landing, but even when this capability is utilized, the crew still needs to view the external environment. From the very beginning of the United States space program, crews have expressed profound dislike for windowless vehicles. The well-being of an aircrew is considerably promoted by a three-dimensional view of terrain and obstructions. The present software system was developed to satisfy the need for such a view. In conjunction with a computer and display equipment that weigh less than would a real transparent window, this software system thus provides a virtual cockpit window. The key problem in the development of this software system was to create a realistic three-dimensional perspective view that is updated in real time. The problem was solved by building upon a pre-existing commercial program, LandForm C3, that combines the speed of flight-simulator software with the power of geographic-information-system software to generate real-time, three-dimensional-appearing displays of terrain and other features of flight environments. In the development of the present software, the pre-existing program was modified to enable it to utilize real-time information on the position and attitude of the aerospacecraft to generate a view of the external world as it would appear to a person looking out through a window in the aerospacecraft. The development included innovations in realistic horizon-limit modeling, three-dimensional stereographic display, and interfaces for utilization of data from inertial-navigation devices, Global Positioning System receivers, and laser rangefinders.
Developing an Effective and Efficient Real Time Strategy Agent for Use as a Computer Generated Force
2010-03-01
Assembly, Integration, and Test Methods for Operationally Responsive Space Satellites
2010-03-01
... like assembly and vibration tests, to ensure there have been no failures induced by the activities. External thermal control blankets and radiator ... configuration of the satellite post-vibration test and adds time to the process. Thermal blanketing is not realistic with current technology or ... patterns for thermal blankets and radiator tape. The computer aided drawing (CAD) solid model was used to generate patterns that were cut and applied real ...
Versatile single-chip event sequencer for atomic physics experiments
NASA Astrophysics Data System (ADS)
Eyler, Edward
2010-03-01
A very inexpensive dsPIC microcontroller with internal 32-bit counters is used to produce a flexible timing signal generator with up to 16 TTL-compatible digital outputs, with a time resolution and accuracy of 50 ns. This time resolution is easily sufficient for event sequencing in typical experiments involving cold atoms or laser spectroscopy. This single-chip device is capable of triggered operation and can also function as a sweeping delay generator. With one additional chip it can also concurrently produce accurately timed analog ramps, and another one-chip addition allows real-time control from an external computer. Compared to an FPGA-based digital pattern generator, this design is slower but simpler and more flexible, and it can be reprogrammed using ordinary `C' code without special knowledge. I will also describe the use of the same microcontroller with additional hardware to implement a digital lock-in amplifier and PID controller for laser locking, including a simple graphics-based control unit. This work is supported in part by the NSF.
Quantum information processing with a travelling wave of light
NASA Astrophysics Data System (ADS)
Serikawa, Takahiro; Shiozawa, Yu; Ogawa, Hisashi; Takanashi, Naoto; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira
2018-02-01
We exploit quantum information processing on a traveling wave of light, expecting freedom from thermal noise, easy coupling to fiber communication, and potentially high operation speed. Although optical memories are technically challenging, we have an alternative approach to apply multi-step operations on traveling light, that is, continuous-variable one-way computation. So far our achievements include generation of a one-million-mode entangled chain in the time domain, mode engineering of nonlinear resource states, and real-time nonlinear feedforward. Although they are implemented with free-space optics, we are also investigating photonic integration and performed quantum teleportation with a passive linear waveguide chip as a demonstration of entangling, measurement, and feedforward. We also suggest a loop-based architecture as another model of continuous-variable computing.
Sun, P C; Fainman, Y
1990-09-01
An optical processor for real-time generation of the Wigner distribution of complex amplitude functions is introduced. The phase conjugation of the input signal is accomplished by a highly efficient self-pumped phase conjugator based on a 45°-cut barium titanate photorefractive crystal. Experimental results on the real-time generation of Wigner distribution slices for complex amplitude two-dimensional optical functions are presented and discussed.
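For reference, the quantity this processor generates is the standard Wigner distribution of a complex amplitude. The sketch below gives the textbook one-dimensional definition (the processor itself produces two-dimensional slices optically); the conjugate factor is what the self-pumped phase conjugator supplies. This is the general definition, not a description of the specific optical layout:

```latex
% Wigner distribution of a complex amplitude f(x); the factor f^{*} is the
% phase-conjugate replica supplied by the photorefractive conjugator.
W_f(x,\nu) \;=\; \int_{-\infty}^{\infty}
  f\!\left(x+\tfrac{x'}{2}\right)\,
  f^{*}\!\left(x-\tfrac{x'}{2}\right)\,
  e^{-i\,2\pi\nu x'}\,\mathrm{d}x'
```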
Gutiérrez-Maldonado, José; Wiederhold, Brenda K; Riva, Giuseppe
2016-02-01
Transdisciplinary efforts for further elucidating the etiology of eating and weight disorders and improving the effectiveness of the available evidence-based interventions are imperative at this time. Recent studies indicate that computer-generated graphic environments (virtual reality, VR) can integrate and extend existing treatments for eating and weight disorders (EWDs). Future possibilities for VR to improve actual approaches include its use for altering in real time the experience of the body (embodiment) and as a cue exposure tool for reducing food craving.
Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G
1999-01-01
An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
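As a point of reference, the RLS algorithm mentioned above follows the standard recursive update with a forgetting factor. The sketch below is a generic, minimal Python illustration of that update on toy data; the paper's actual implementation runs in Simulink on EEG-derived parameters, and the variable names, forgetting factor, and initialization here are assumptions.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam.

    w : current weight vector, P : inverse correlation matrix,
    x : new input vector, d : new observed sample.
    Returns updated (w, P) and the a-priori prediction error.
    """
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)        # gain vector
    e = d - float(w.T @ x)                 # a-priori error
    w = w + k * e                          # weight update
    P = (P - k @ x.T @ P) / lam            # inverse-correlation update
    return w, P, e

# toy usage: track a slowly drifting 2-tap model from streaming samples
rng = np.random.default_rng(0)
n_taps = 2
w = np.zeros((n_taps, 1))
P = np.eye(n_taps) * 1e3                   # large initial covariance
for t in range(500):
    x = rng.standard_normal(n_taps)
    d = 0.8 * x[0] - 0.3 * x[1] + 0.01 * rng.standard_normal()
    w, P, e = rls_update(w, P, x, d)
```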
Computer soundcard as an AC signal generator and oscilloscope for the physics laboratory
NASA Astrophysics Data System (ADS)
Sinlapanuntakul, Jinda; Kijamnajsuk, Puchong; Jetjamnong, Chanthawut; Chotikaprakhan, Sutharat
2018-01-01
The purpose of this paper is to develop both an AC signal generator and a dual-channel oscilloscope based on a standard personal computer equipped with a sound card, for use in the fundamental physics laboratory and in introductory electronics classes. The setup turns the computer into a two-channel measurement device that provides the sample rate, simultaneous sampling, frequency range, filtering, and other capabilities essential for amplitude, phase, and frequency measurements of AC signals. The AC signal is generated simultaneously from the same sound card output in a variety of waveforms, such as sine, square, triangle, saw-tooth, pulse, swept sine, and white noise. These features convert an inexpensive PC sound card into a powerful instrument that allows students to measure physical phenomena with their own PCs, either at home or at the university. Graphical user interface software was developed for control and analysis, including facilities for data recording, signal processing, and real-time measurement display. The result is an expanded self-learning utility for students in electronics, covering both AC and DC circuits as well as sound and vibration experiments.
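The generator-plus-oscilloscope idea can be sketched on any PC sound card. The snippet below is only illustrative and assumes the third-party Python library sounddevice; the authors' own GUI software is not publicly specified, and the sample rate, test tone, and analysis shown are arbitrary choices.

```python
# Minimal sketch of a sound-card signal generator plus "oscilloscope" readback.
import numpy as np
import sounddevice as sd   # assumed third-party PortAudio binding

fs = 44100                     # sample rate (Hz)
duration = 1.0                 # seconds
f0 = 440.0                     # test-tone frequency (Hz)

t = np.arange(int(fs * duration)) / fs
stimulus = 0.5 * np.sin(2 * np.pi * f0 * t)          # AC signal generator output

# Play the stimulus on the sound-card output while recording the input channel:
recorded = sd.playrec(stimulus.astype(np.float32), samplerate=fs,
                      channels=1, blocking=True)

# Oscilloscope side: estimate amplitude and dominant frequency of what came back.
spectrum = np.abs(np.fft.rfft(recorded[:, 0]))
freqs = np.fft.rfftfreq(len(recorded), 1 / fs)
print("peak amplitude:", recorded.max())
print("dominant frequency (Hz):", freqs[spectrum.argmax()])
```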
Measurement and analysis of workload effects on fault latency in real-time systems
NASA Technical Reports Server (NTRS)
Woodbury, Michael H.; Shin, Kang G.
1990-01-01
The authors demonstrate the need to address fault latency in highly reliable real-time control computer systems. It is noted that the effectiveness of all known recovery mechanisms is greatly reduced in the presence of multiple latent faults. The presence of multiple latent faults increases the possibility of multiple errors, which could result in coverage failure. The authors present experimental evidence indicating that the duration of fault latency is dependent on workload. A synthetic workload generator is used to vary the workload, and a hardware fault injector is applied to inject transient faults of varying durations. This method makes it possible to derive the distribution of fault latency duration. Experimental results obtained from the fault-tolerant multiprocessor at the NASA Airlab are presented and discussed.
Real-Time Classification of Hand Motions Using Ultrasound Imaging of Forearm Muscles.
Akhlaghi, Nima; Baker, Clayton A; Lahlou, Mohamed; Zafar, Hozaifah; Murthy, Karthik G; Rangwala, Huzefa S; Kosecka, Jana; Joiner, Wilsaan M; Pancrazio, Joseph J; Sikdar, Siddhartha
2016-08-01
Surface electromyography (sEMG) has been the predominant method for sensing electrical activity for a number of applications involving muscle-computer interfaces, including myoelectric control of prostheses and rehabilitation robots. Ultrasound imaging for sensing mechanical deformation of functional muscle compartments can overcome several limitations of sEMG, including the inability to differentiate between deep contiguous muscle compartments, low signal-to-noise ratio, and lack of a robust graded signal. The objective of this study was to evaluate the feasibility of real-time graded control using a computationally efficient method to differentiate between complex hand motions based on ultrasound imaging of forearm muscles. Dynamic ultrasound images of the forearm muscles were obtained from six able-bodied volunteers and analyzed to map muscle activity based on the deformation of the contracting muscles during different hand motions. Each participant performed 15 different hand motions, including digit flexion, different grips (i.e., power grasp and pinch grip), and grips in combination with wrist pronation. During the training phase, we generated a database of activity patterns corresponding to different hand motions for each participant. During the testing phase, novel activity patterns were classified using a nearest neighbor classification algorithm based on that database. The average classification accuracy was 91%. Real-time image-based control of a virtual hand showed an average classification accuracy of 92%. Our results demonstrate the feasibility of using ultrasound imaging as a robust muscle-computer interface. Potential clinical applications include control of multiarticulated prosthetic hands, stroke rehabilitation, and fundamental investigations of motor control and biomechanics.
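The classification step described above can be sketched as a plain nearest-neighbour lookup over flattened activity maps. The code below is an illustration under assumed array shapes and labels, not the authors' implementation; the random data simply stands in for ultrasound-derived activity patterns.

```python
# Nearest-neighbour classification of activity maps against a training database.
import numpy as np

def build_database(training_maps, labels):
    """training_maps: list of 2-D activity maps; labels: motion label per map."""
    X = np.stack([m.ravel() for m in training_maps])
    y = np.asarray(labels)
    return X, y

def classify(activity_map, X, y):
    """Return the label of the nearest training pattern (Euclidean distance)."""
    v = activity_map.ravel()
    distances = np.linalg.norm(X - v, axis=1)
    return y[np.argmin(distances)]

# toy usage with random data standing in for ultrasound activity maps
rng = np.random.default_rng(1)
train = [rng.random((64, 64)) for _ in range(30)]
labels = [i % 15 for i in range(30)]          # 15 hand motions, as in the study
X, y = build_database(train, labels)
print(classify(train[7] + 0.01 * rng.random((64, 64)), X, y))   # expect label 7
```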
Software Design for Real-Time Systems on Parallel Computers: Formal Specifications.
1996-04-01
This research investigated the important issues related to the analysis and design of real-time systems targeted to parallel architectures. In...particular, the software specification models for real-time systems on parallel architectures were evaluated. A survey of current formal methods for...uniprocessor real-time systems specifications was conducted to determine their extensibility in specifying real-time systems on parallel architectures. In
Modeling a Wireless Network for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Yaprak, Ece; Lamouri, Saad
2000-01-01
This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computer and remote instruments with the high performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.
Real-Time-Simulation of IEEE-5-Bus Network on OPAL-RT-OP4510 Simulator
NASA Astrophysics Data System (ADS)
Atul Bhandakkar, Anjali; Mathew, Lini, Dr.
2018-03-01
Real-time simulator tools offer high-performance computing technology and are widely used for the design and improvement of electrical systems. With the advancement of software tools such as MATLAB/SIMULINK, with its Real-Time Workshop (RTW) and Real-Time Windows Target (RTWT), real-time simulators are used extensively in industry, education, and research institutions. OPAL-RT-OP4510 is a real-time simulator used in both industry and academia. In this paper, the real-time simulation of the IEEE-5-Bus network is carried out by means of the OPAL-RT-OP4510 with a CRO and other hardware. The performance of the network is observed with the introduction of faults at various locations. The waveforms of voltage, current, and active and reactive power are observed in the MATLAB simulation environment and on the CRO. Load Flow Analysis (LFA) of the IEEE-5-Bus network is also computed using the MATLAB/Simulink powergui load flow tool.
Real-time WAMI streaming target tracking in fog
NASA Astrophysics Data System (ADS)
Chen, Yu; Blasch, Erik; Chen, Ning; Deng, Anna; Ling, Haibin; Chen, Genshe
2016-05-01
Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and text data is highly desired for many mission-critical emergency or security applications. Cloud Computing has been considered promising for achieving big data integration from multi-modal sources. In many mission-critical tasks, however, powerful Cloud technology cannot satisfy the tight latency tolerance, as the servers are located far from the sensing platform; indeed, there is no guaranteed connection in emergency situations. Therefore, data processing, information fusion, and decision making are required to be executed on-site (i.e., near the data collection). Fog Computing, a recently proposed extension and complement to Cloud Computing, enables computing on-site without outsourcing jobs to a remote Cloud. In this work, we have investigated the feasibility of processing streaming WAMI in the Fog for real-time, online, uninterrupted target tracking. Using a single-target tracking algorithm, we studied the performance of a Fog Computing prototype. The experimental results are very encouraging and validate the effectiveness of our Fog approach in achieving real-time frame rates.
Real-time, autonomous precise satellite orbit determination using the global positioning system
NASA Astrophysics Data System (ADS)
Goldstein, David Ben
2000-10-01
The desire for autonomously generated, rapidly available, and highly accurate satellite ephemeris is growing with the proliferation of constellations of satellites and the cost and overhead of ground tracking resources. Autonomous Orbit Determination (OD) may be done on the ground in a post-processing mode or in real time on board a satellite and may be accomplished days, hours, or immediately after observations are processed. The Global Positioning System (GPS) is now widely used as an alternative to ground tracking resources to supply observation data for satellite positioning and navigation. GPS is accurate, inexpensive, provides continuous coverage, and is an excellent choice for autonomous systems. In an effort to estimate precise satellite ephemeris in real time on board a satellite, the Goddard Space Flight Center (GSFC) created the GPS Enhanced OD Experiment (GEODE) flight navigation software. This dissertation offers alternative methods and improvements to GEODE to increase on-board autonomy and real-time total position accuracy and precision without increasing computational burden. First, GEODE is modified to include a Gravity Acceleration Approximation Function (GAAF) to replace the traditional spherical harmonic representation of the gravity field. Next, an ionospheric correction method called Differenced Range Versus Integrated Doppler (DRVID) is applied to correct for ionospheric errors in the GPS measurements used in GEODE. Then, Dynamic Model Compensation (DMC) is added to estimate unmodeled and/or mismodeled forces in the dynamic model and to provide an alternative process noise variance-covariance formulation. Finally, a Genetic Algorithm (GA) is implemented in the form of Genetic Model Compensation (GMC) to optimize DMC forcing noise parameters. Application of GAAF, DRVID and DMC improved GEODE's position estimates by 28.3% when applied to GPS/MET data collected in the presence of Selective Availability (SA), 17.5% when SA is removed from the GPS/MET data and 10.8% on SA-free TOPEX data. Position estimates with RSS errors below 1 meter are now achieved using SA-free TOPEX data. DRVID causes an increase in computational burden while GAAF and DMC reduce computational burden. The net effect of applying GAAF, DRVID and DMC is an improvement in GEODE's accuracy/precision without an increase in computational burden.
Segment Fixed Priority Scheduling for Self Suspending Real Time Tasks
2016-08-11
Segment-Fixed Priority Scheduling for Self-Suspending Real-Time Tasks. Junsung Kim, Department of Electrical and Computer Engineering, Carnegie... 2.1 Application of a Multi-Segment Self-Suspending Real-Time Task Model... 3 Fixed Priority Scheduling... Figure 2: A multi-segment self-suspending real-time task model
Teleoperation with virtual force feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.J.
1993-08-01
In this paper we describe an algorithm for generating virtual forces in a bilateral teleoperator system. The virtual forces are generated from a world model and are used to provide real-time obstacle avoidance and guidance capabilities. The algorithm requires that the slave's tool and every object in the environment be decomposed into convex polyhedral primitives. Intrusion distance and extraction vectors are then derived at every time step by applying Gilbert's polyhedra distance algorithm, which has been adapted for the task. This information is then used to determine the compression and location of nonlinear virtual spring-dampers whose total force is summed and applied to the manipulator/teleoperator system. Experimental results validate the whole approach, showing that it is possible to compute the algorithm and generate realistic, useful pseudo-forces for a bilateral teleoperator system using standard VME bus hardware.
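The spring-damper idea can be illustrated compactly: given an intrusion depth and an extraction direction for one tool/obstacle pair, a stiffening spring plus a damper produces a repulsive force, and the per-pair forces are summed. The sketch below uses made-up gains and a cubic stiffening law; it is a hedged illustration of the final force-generation step, not the paper's algorithm.

```python
# Illustrative nonlinear virtual spring-damper force from an intrusion measurement.
import numpy as np

def virtual_force(d, d_dot, n, k=200.0, k_nl=5000.0, b=5.0):
    """d: intrusion depth (m, >0 when penetrating), d_dot: its rate of change,
    n: unit vector pointing out of the obstacle. Returns a 3-D force vector."""
    if d <= 0.0:
        return np.zeros(3)                    # no contact, no virtual force
    spring = k * d + k_nl * d ** 3            # stiffening (nonlinear) spring
    damper = b * max(d_dot, 0.0)              # damp only while penetrating deeper
    return (spring + damper) * np.asarray(n, dtype=float)

# forces from all primitive pairs would be summed and applied to the teleoperator
total = virtual_force(0.01, 0.2, [0.0, 0.0, 1.0]) + virtual_force(0.0, 0.0, [1, 0, 0])
print(total)
```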
Dynamical aspects of behavior generation under constraints
Harter, Derek; Achunala, Srinivas
2007-01-01
Dynamic adaptation is a key feature of brains, helping to maintain the quality of their performance in the face of increasingly difficult constraints. How to achieve high-quality performance under demanding real-time conditions is an important question in the study of cognitive behaviors. Animals and humans are embedded in and constrained by their environments. Our goal is to improve the understanding of the dynamics of the interacting brain–environment system by studying human behaviors when completing constrained tasks and by modeling the observed behavior. In this article we present results of experiments with humans performing tasks on the computer under variable time and resource constraints. We compare various models of behavior generation in order to describe the observed human performance. Finally, we speculate on mechanisms by which chaotic neurodynamics can contribute to the generation of flexible human behaviors under constraints. PMID:19003514
Real Time Metrics and Analysis of Integrated Arrival, Departure, and Surface Operations
NASA Technical Reports Server (NTRS)
Sharma, Shivanjli; Fergus, John
2017-01-01
A real-time dashboard was developed to present users with notifications and integrated information regarding airport surface operations. The dashboard supplements capabilities and tools that incorporate arrival, departure, and surface air-traffic operations concepts in a NextGen environment. As trajectory-based departure scheduling and collaborative decision making tools are introduced to reduce delays and uncertainties in taxi and climb operations across the National Airspace System, users across a number of roles benefit from a real-time system that enables common situational awareness. In addition to shared situational awareness, the dashboard offers the ability to compute real-time metrics and analysis to inform users about the capacity, predictability, and efficiency of the system as a whole. This paper describes the architecture of the real-time dashboard as well as an initial set of metrics computed on operational data. The potential impact of the real-time dashboard is studied at the site identified for initial deployment and demonstration in 2017: Charlotte Douglas International Airport. Analysis and metrics computed in real time illustrate the opportunity to provide common situational awareness and to inform users of metrics across delay, throughput, taxi time, and airport capacity. In addition, common awareness of delays and the impact of takeoff and departure restrictions stemming from traffic flow management initiatives is explored. The potential of the real-time tool to inform the predictability and efficiency of a trajectory-based departure scheduling system is also discussed.
RighTime: A real time clock correcting program for MS-DOS-based computer systems
NASA Technical Reports Server (NTRS)
Becker, G. Thomas
1993-01-01
A computer program is described which effectively eliminates the shortcomings of the DOS system clock in PC/AT-class computers. RighTime is a small, sophisticated memory-resident program that automatically corrects both the DOS system clock and the hardware 'CMOS' real time clock (RTC) in real time. RighTime learns what corrections are required without operator interaction beyond the occasional accurate time set. Both warm (power on) and cool (power off) errors are corrected, usually yielding better than one part per million accuracy in the typical desktop computer with no additional hardware, and RighTime increases the system clock resolution from approximately 0.0549 second to 0.01 second. Program tools are also available which allow visualization of RighTime's actions, verification of its performance, display of its history log, and which provide data for graphing of the system clock behavior. The program has found application in a wide variety of industries, including astronomy, satellite tracking, communications, broadcasting, transportation, public utilities, manufacturing, medicine, and the military.
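The abstract does not disclose RighTime's correction algorithm, but the general idea of software clock correction can be sketched: estimate a drift rate from occasional accurate time sets and apply it between sets. Everything in the snippet below (a single combined drift rate, the update rule, the example numbers) is an illustrative assumption rather than a description of RighTime itself.

```python
# Minimal sketch of learning a clock drift rate from occasional accurate time sets.
class DriftCorrectedClock:
    def __init__(self):
        self.offset = 0.0      # seconds to add to the raw clock reading
        self.rate = 0.0        # drift in seconds per second of raw time
        self.last_set = None   # (raw, true) pair from the last accurate set

    def read(self, raw):
        """Return corrected time for a raw system-clock reading (seconds)."""
        if self.last_set is None:
            return raw + self.offset
        return raw + self.offset + self.rate * (raw - self.last_set[0])

    def accurate_set(self, raw, true):
        """Operator supplies an accurate time; update offset and drift rate."""
        if self.last_set is not None:
            elapsed = raw - self.last_set[0]
            if elapsed > 0:
                residual = true - self.read(raw)      # error accumulated since last set
                self.rate += residual / elapsed        # fold it into the rate estimate
        self.offset = true - raw
        self.last_set = (raw, true)

clock = DriftCorrectedClock()
clock.accurate_set(raw=0.0, true=0.0)
clock.accurate_set(raw=3600.0, true=3600.9)   # clock lost 0.9 s over an hour
print(clock.read(7200.0))                     # ~7201.8 once the rate is learned
```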
Modeling and Simulation of the Economics of Mining in the Bitcoin Market.
Cocco, Luisanna; Marchesi, Michele
2016-01-01
On January 3, 2009, Satoshi Nakamoto gave rise to the "Bitcoin Blockchain", creating the first block of the chain by hashing on his computer's central processing unit (CPU). Since then, the hash calculations to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA, and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some "stylized facts" found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit root property, the fat tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network.
A COTS-Based Replacement Strategy for Aging Avionics Computers
2001-12-01
(Figure labels only: Communication Control Unit; COTS microprocessor; real-time operating system; native code objects and threads; legacy function; virtual component environment; context-switch thunk; add-in replacement.)
BENEFITS OF SEWERAGE SYSTEM REAL-TIME CONTROL
Real-time control (RTC) is a custom-designed computer-assisted management system for a specific urban sewerage network that is activated during a wet-weather flow event. Though uses of RTC systems started in the mid-1960s, recent developments in computers, telecommunication, in...
The Temporal Dimension of Linguistic Prediction
ERIC Educational Resources Information Center
Chow, Wing Yee
2013-01-01
This thesis explores how predictions about upcoming language inputs are computed during real-time language comprehension. Previous research has demonstrated humans' ability to use rich contextual information to compute linguistic prediction during real-time language comprehension, and it has been widely assumed that contextual information can…
Overview of fast algorithm in 3D dynamic holographic display
NASA Astrophysics Data System (ADS)
Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian
2013-08-01
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed and computed in real time to generate the hologram in a 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing the memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) approaches based on the point-based method, and fully analytical and one-step methods based on the polygon-based method. In this presentation, we review various fast algorithms based on the point-based and polygon-based methods, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method based on 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient at saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
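To make the point-based/look-up-table idea concrete, the sketch below precomputes the zone-plate fringe of an on-axis point once per depth and then shifts and accumulates it for every object point, which is the basic strategy that the C-LUT, S-LUT, and N-LUT variants compress further. The pixel pitch, wavelength, and wrap-around shifting are illustrative simplifications, not the paper's exact formulation.

```python
# Point-based hologram computation with a simple per-depth look-up table.
import numpy as np

wavelength = 532e-9        # m
pitch = 8e-6               # hologram pixel pitch, m
N = 512                    # hologram is N x N pixels

x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)

def fringe_table(depths):
    """Precompute the on-axis point fringe for each depth (the look-up table)."""
    table = {}
    for z in depths:
        r = np.sqrt(X ** 2 + Y ** 2 + z ** 2)
        table[z] = np.cos(2 * np.pi * r / wavelength)
    return table

def hologram(points, table):
    """points: (ix, iy, z, amplitude) with ix, iy in pixels; shift-and-add fringes."""
    H = np.zeros((N, N))
    for ix, iy, z, a in points:
        H += a * np.roll(np.roll(table[z], iy, axis=0), ix, axis=1)
    return H

table = fringe_table(depths=[0.1, 0.2])
H = hologram([(40, -30, 0.1, 1.0), (-60, 10, 0.2, 0.5)], table)
```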
Parallel, distributed and GPU computing technologies in single-particle electron microscopy
Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-01-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined. PMID:19564686
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2016-09-01
With the increasing use of different types of auctions in market design, modeling participants' behaviors to evaluate the market structure is one of the main topics in studies of deregulated power industries. In this article, we apply an optimal bidding behavior approach to the Iranian wholesale electricity market, a restructured electric power industry, and model how the participants bid in the spot electricity market. The problem is formulated analytically using the Nash equilibrium concept, composed of large numbers of players having discrete and very large strategy spaces. Then, we compute and draw the supply curve of the competitive market, in which all generators' proposed prices are equal to their marginal costs, and the supply curve of the real market, in which the pricing mechanism is pay-as-bid. We finally calculate the lost welfare or inefficiency of the Nash equilibrium and the real market by comparing their supply curves with the competitive curve. We examine 3 cases on November 24 (2 cases) and July 24 (1 case), 2012. It is observed that in the Nash equilibrium on November 24, with a demand of 23,487 MW, there are 212 allowed plants for the first case (plants are allowed to choose any quantity of generation, except one that must be equal to maximum power), and the economic efficiency or social welfare of the Nash equilibrium is 2.77 times that of the real market. In addition, there are 184 allowed plants for the second case (plants must offer their maximum power at different prices), and the efficiency or social welfare of the Nash equilibrium is 3.6 times that of the real market. On July 24, with a demand of 42,421 MW, all 370 plants must generate maximum energy due to the high electricity demand, and the economic efficiency or social welfare of the Nash equilibrium is about 2 times that of the real market.
Lange, Nicholas D.; Thomas, Rick P.; Davelaar, Eddy J.
2012-01-01
The pre-decisional process of hypothesis generation is a ubiquitous cognitive faculty that we continually employ in an effort to understand our environment and thereby support appropriate judgments and decisions. Although we are beginning to understand the fundamental processes underlying hypothesis generation, little is known about how various temporal dynamics, inherent in real world generation tasks, influence the retrieval of hypotheses from long-term memory. This paper presents two experiments investigating three data acquisition dynamics in a simulated medical diagnosis task. The results indicate that the mere serial order of data, data consistency (with previously generated hypotheses), and mode of responding influence the hypothesis generation process. An extension of the HyGene computational model endowed with dynamic data acquisition processes is forwarded and explored to provide an account of the present data. PMID:22754547
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
Detecting chaos in irregularly sampled time series.
Kulp, C W
2013-09-01
Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus, making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented, which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
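The first step of such an analysis, replacing the DFT with the Lomb-Scargle periodogram for irregular samples, can be sketched with SciPy. The snippet below only computes the spectrum of a toy irregularly sampled signal; the chaos-detection criterion applied to that spectrum is described in the paper and is not reproduced here.

```python
# Power spectrum of an irregularly sampled series via the Lomb-Scargle periodogram.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 800))        # irregular sample times
y = np.sin(2 * np.pi * 0.7 * t) + 0.3 * rng.standard_normal(t.size)

freqs_hz = np.linspace(0.01, 2.0, 2000)
omega = 2 * np.pi * freqs_hz                     # lombscargle expects angular freqs
power = lombscargle(t, y - y.mean(), omega)

print("dominant frequency (Hz):", freqs_hz[np.argmax(power)])   # ~0.7
```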
CUDA-based real time surgery simulation.
Liu, Youquan; De, Suvranu
2008-01-01
In this paper we present a general software platform that enables real-time surgery simulation on the newly available compute unified device architecture (CUDA) from NVIDIA. CUDA-enabled GPUs harness the power of 128 processors, which allows data-parallel computations. Compared to the previous generation of GPGPU, it is significantly more flexible, with a C language interface. We report implementations of both collision detection and the consequent deformation computation algorithms. Our test results indicate that CUDA enables a twenty-times speedup for collision detection and about a fifteen-times speedup for deformation computation on an Intel Core 2 Quad 2.66 GHz machine with a GeForce 8800 GTX.
An introduction to real-time graphical techniques for analyzing multivariate data
NASA Astrophysics Data System (ADS)
Friedman, Jerome H.; McDonald, John Alan; Stuetzle, Werner
1987-08-01
Orion I is a graphics system used to study applications of computer graphics - especially interactive motion graphics - in statistics. Orion I is the newest of a family of "Prim" systems, whose most striking common feature is the use of real-time motion graphics to display three dimensional scatterplots. Orion I differs from earlier Prim systems through the use of modern and relatively inexpensive raster graphics and microprocessor technology. It also delivers more computing power to its user; Orion I can perform more sophisticated real-time computations than were possible on previous such systems. We demonstrate some of Orion I's capabilities in our film: "Exploring data with Orion I".
Machine vision for real time orbital operations
NASA Technical Reports Server (NTRS)
Vinz, Frank L.
1988-01-01
Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed, such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
Real-time multiple human perception with color-depth cameras on a mobile robot.
Zhang, Hao; Reardon, Christopher; Parker, Lynne E
2013-10-01
The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude with the observation that by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.
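As an illustration of the ground-plane removal step, the sketch below fits a plane to the point cloud with a basic RANSAC loop and discards points near it. The thresholds, iteration count, and toy cloud are assumptions; the paper's full pipeline (depth-of-interest candidates, the detector cascade, and the tracking DAG) is not shown.

```python
# Basic RANSAC plane fit and removal for a 3-D point cloud.
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.03, rng=None):
    """points: (N, 3) array. Returns (normal, d) of the best plane n.x + d = 0."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                               # degenerate (collinear) sample
        n = n / norm
        d = -n @ p[0]
        inliers = int((np.abs(points @ n + d) < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

def remove_plane(points, model, tol=0.03):
    n, d = model
    return points[np.abs(points @ n + d) >= tol]   # keep only off-plane points

# toy cloud: a flat floor plus a "person" blob above it
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-2, 2, 5000), rng.uniform(-2, 2, 5000),
                         0.01 * rng.standard_normal(5000)])
person = rng.normal([0.5, 0.0, 0.9], 0.15, size=(800, 3))
cloud = np.vstack([floor, person])
candidates = remove_plane(cloud, ransac_plane(cloud))
print(len(candidates))     # roughly the number of "person" points
```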
Perceptual factors that influence use of computer enhanced visual displays
NASA Technical Reports Server (NTRS)
Littman, David; Boehm-Davis, Debbie
1993-01-01
This document is the final report for the NASA/Langley contract entitled 'Perceptual Factors that Influence Use of Computer Enhanced Visual Displays.' The document consists of two parts. The first part contains a discussion of the problem to which the grant was addressed, a brief discussion of work performed under the grant, and several issues suggested for follow-on work. The second part, presented as Appendix I, contains the annual report produced by Dr. Ann Fulop, the Postdoctoral Research Associate who worked on-site in this project. The main focus of this project was to investigate perceptual factors that might affect a pilot's ability to use computer generated information that is projected into the same visual space that contains information about real world objects. For example, computer generated visual information can identify the type of an attacking aircraft, or its likely trajectory. Such computer generated information must not be so bright that it adversely affects a pilot's ability to perceive other potential threats in the same volume of space. Or, perceptual attributes of computer generated and real display components should not contradict each other in ways that lead to problems of accommodation and, thus, distance judgments. The purpose of the research carried out under this contract was to begin to explore the perceptual factors that contribute to effective use of these displays.
Finite-Element Methods for Real-Time Simulation of Surgery
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay
2003-01-01
Two finite-element methods have been developed for mathematical modeling of the time-dependent behaviors of deformable objects and, more specifically, the mechanical responses of soft tissues and organs in contact with surgical tools. These methods may afford the computational efficiency needed to satisfy the requirement to obtain computational results in real time for simulating surgical procedures as described in Simulation System for Training in Laparoscopic Surgery (NPO-21192) on page 31 in this issue of NASA Tech Briefs. Simulation of the behavior of soft tissue in real time is a challenging problem because of the complexity of soft-tissue mechanics. The responses of soft tissues are characterized by nonlinearities and by spatial inhomogeneities and rate and time dependences of material properties. Finite-element methods seem promising for integrating these characteristics of tissues into computational models of organs, but they demand much central-processing-unit (CPU) time and memory, and the demand increases with the number of nodes and degrees of freedom in a given finite-element model. Hence, as finite-element models become more realistic, it becomes more difficult to compute solutions in real time. In both of the present methods, one uses approximate mathematical models trading some accuracy for computational efficiency and thereby increasing the feasibility of attaining real-time update rates. The first of these methods is based on modal analysis. In this method, one reduces the number of differential equations by selecting only the most significant vibration modes of an object (typically, a suitable number of the lowest-frequency modes) for computing deformations of the object in response to applied forces.
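The modal-analysis reduction referred to above follows the standard recipe for a linear finite-element model: project the equations of motion onto the r lowest-frequency mode shapes so that only r cheap scalar equations remain. The sketch below states that reduction in its textbook form (damping omitted, mass-normalized modes assumed); the specific tissue models in the work above add nonlinear and rate-dependent terms not captured here.

```latex
% Linear FE model M \ddot{u} + K u = f(t), projected onto the r lowest-frequency
% mode shapes \Phi_r = [\phi_1, ..., \phi_r] with K\phi_i = \omega_i^2 M \phi_i and
% mass-normalized modes, giving r decoupled scalar equations cheap to integrate
% in real time:
u(t) \approx \Phi_r\, q(t), \qquad
\ddot{q}_i(t) + \omega_i^2\, q_i(t) = \phi_i^{\mathsf{T}} f(t), \quad i = 1, \dots, r .
```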
A GPU-based mipmapping method for water surface visualization
NASA Astrophysics Data System (ADS)
Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan
2018-03-01
Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide range of water surface with good image quality both near and far from the viewpoint. The method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, adjusting their resolution with respect to the distance from the viewpoint and reducing the computing cost. Lighting effects are computed based on shadow mapping, Snell's law, and the Fresnel term. The rendering pipeline utilizes a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes the water surface with good image quality at real-time frame rates.
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Young, Steven D.
2005-01-01
In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao
2016-07-15
We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is noted for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
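Toeplitz-matrix hashing itself is a standard randomness extractor: an m by n binary Toeplitz matrix, defined by n + m - 1 seed bits, multiplies a block of n raw bits over GF(2) to produce m nearly uniform bits. The sketch below shows that core operation in Python; the block sizes are arbitrary, and the paper's pipelined FPGA implementation and its entropy-based choice of m are not reproduced.

```python
# Toeplitz-hashing randomness extraction over GF(2).
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_extract(raw_bits, seed_bits, m):
    """raw_bits: length-n 0/1 array; seed_bits: length n+m-1; m: output length."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    # first column (length m) and first row (length n) define the Toeplitz matrix
    T = toeplitz(seed_bits[:m], seed_bits[m - 1:])
    return (T @ raw_bits) % 2                      # matrix-vector product over GF(2)

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, 1024)                     # raw (imperfect) sampled bits
seed = rng.integers(0, 2, 1024 + 768 - 1)          # seed chosen once, then reused
out = toeplitz_extract(raw, seed, m=768)           # 768 extracted output bits
```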
A real-time computer for monitoring a rapid-scanning Fourier spectrometer
NASA Technical Reports Server (NTRS)
Michel, G.
1973-01-01
A real-time Fourier computer has been designed and tested as part of the Lunar and Planetary Laboratory's program of airborne infrared astronomy using Fourier spectroscopy. The value and versatility of this device are demonstrated with specific examples of laboratory and in-flight applications.
A generic flexible and robust approach for intelligent real-time video-surveillance systems
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit
2004-05-01
In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE 1394) and which can compress (MPEG-4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised for playing back, displaying, and processing video flows efficiently for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study of indoor surveillance.
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
Location-Based Augmented Reality for Mobile Learning: Algorithm, System, and Implementation
ERIC Educational Resources Information Center
Tan, Qing; Chang, William; Kinshuk
2015-01-01
AR technology can be considered as mainly consisting of two aspects: identification of a real-world object and display of computer-generated digital content related to the identified real-world object. The technical challenge of mobile AR is to identify the real-world object that the mobile device's camera is aimed at. In this paper, we will present a…
NASA Astrophysics Data System (ADS)
Kalivarapu, Vijay K.; Serrate, Ciro; Hadimani, Ravi L.
2017-05-01
Transcranial Magnetic Stimulation (TMS) is a non-invasive procedure that uses time-varying short pulses of magnetic fields to stimulate nerve cells in the brain. In this method, a magnetic field generator ("TMS coil") produces small electric fields in the region of the brain via electromagnetic induction. This technique can be used to excite or inhibit firing of neurons, which can then be used for treatment of various neurological disorders such as Parkinson's disease, stroke, migraine, and depression. It is however challenging to focus the induced electric field from TMS coils onto smaller regions of the brain. Since electric and magnetic fields are governed by the laws of electromagnetism, it is possible to numerically simulate and visualize these fields to accurately determine the site of maximum stimulation and also to develop TMS coils that can focus the fields on the targeted regions. However, current software to compute and visualize these fields is not real-time and can work for only one position/orientation of the TMS coil, severely limiting its usage. This paper describes the development of an application that computes magnetic flux densities (h-fields) and visualizes their distribution for different TMS coil positions/orientations in real time using GPU shaders. The application is developed for desktop, commodity VR (HTC Vive), and fully immersive VR CAVE™ systems, for use by researchers, scientists, and medical professionals to quickly and effectively view the distribution of h-fields from MRI brain scans.
Real-Time Fourier Synthesis of Ensembles with Timbral Interpolation
NASA Astrophysics Data System (ADS)
Haken, Lippold
1990-01-01
In Fourier synthesis, natural musical sounds are produced by summing time-varying sinusoids. Sounds are analyzed to find the amplitude and frequency characteristics for their sinusoids; interpolation between the characteristics of several sounds is used to produce intermediate timbres. An ensemble can be synthesized by summing all the sinusoids for several sounds, but in practice it is difficult to perform such computations in real time. To solve this problem on inexpensive hardware, it is useful to take advantage of the masking effects of the auditory system. By avoiding the computations for perceptually unimportant sinusoids, and by employing other computation reduction techniques, a large ensemble may be synthesized in real time on the Platypus signal processor. Unlike existing computation reduction techniques, the techniques described in this thesis do not sacrifice independent fine control over the amplitude and frequency characteristics of each sinusoid.
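The core of Fourier (additive) synthesis with timbral interpolation can be sketched compactly: each partial is a sinusoid with time-varying amplitude and frequency envelopes, interpolation blends the envelopes of two analysed sounds, and the output is the sum of the partials. The snippet below is a plain Python illustration with toy envelopes; the masking-based pruning and other Platypus-specific computation reductions described in the thesis are not shown.

```python
# Additive synthesis of time-varying sinusoids with simple timbral interpolation.
import numpy as np

fs = 44100
dur = 1.0
t = np.arange(int(fs * dur)) / fs

def partial(amp_env, freq_env):
    """One time-varying sinusoid: integrate frequency to get phase, then scale."""
    phase = 2 * np.pi * np.cumsum(freq_env) / fs
    return amp_env * np.sin(phase)

def synthesize(partials):
    """Sum a list of (amplitude-envelope, frequency-envelope) pairs."""
    return sum(partial(a, f) for a, f in partials)

def interpolate(timbre_a, timbre_b, alpha):
    """Blend two analysed timbres partial-by-partial (alpha = 0 gives a, 1 gives b)."""
    return [((1 - alpha) * a1 + alpha * a2, (1 - alpha) * f1 + alpha * f2)
            for (a1, f1), (a2, f2) in zip(timbre_a, timbre_b)]

# toy timbres: three decaying harmonics of 220 Hz vs. a detuned, brighter set
env = np.exp(-3 * t)
timbre_a = [(env / (k + 1), np.full_like(t, 220.0 * (k + 1))) for k in range(3)]
timbre_b = [(env / (k + 1) ** 0.5, np.full_like(t, 223.0 * (k + 1))) for k in range(3)]
signal = synthesize(interpolate(timbre_a, timbre_b, 0.5))
```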
Miller, Thomas Martin; de Wet, Wouter C.; Patton, Bruce W.
2015-10-28
In this study, a computational assessment of the variation in terrestrial neutron and photon background from extraterrestrial sources is presented. The motivation of this assessment is to evaluate the practicality of developing a tool or database to estimate background in real time (or near–real time) during an experimental measurement or even to predict the background for future measurements. The extraterrestrial source focused on during this assessment is naturally occurring galactic cosmic rays (GCRs). The MCNP6 transport code was used to perform the computational assessment. However, the GCR source available in MCNP6 was not used. Rather, models developed and maintained by NASA were used to generate the GCR sources. The largest variation in both neutron and photon background spectra was found to be caused by changes in elevation on Earth's surface, which can be as large as an order of magnitude. All other perturbations produced background variations on the order of a factor of 3 or less. The most interesting finding was that ~80% and 50% of terrestrial background neutrons and photons, respectively, are generated by interactions in Earth's surface and other naturally occurring and man-made objects near a detector of particles from extraterrestrial sources and their progeny created in Earth's atmosphere. In conclusion, this assessment shows that it will be difficult to estimate the terrestrial background from extraterrestrial sources without a good understanding of a detector's surroundings. Therefore, estimating or predicting background in a measurement environment like a mobile random search will be difficult.
TORC3: Token-ring clearing heuristic for currency circulation
NASA Astrophysics Data System (ADS)
Humes, Carlos, Jr.; Lauretto, Marcelo S.; Nakano, Fábio; Pereira, Carlos A. B.; Rafare, Guilherme F. G.; Stern, Julio Michael
2012-10-01
Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a real time heuristic procedure, demanding modest computational resources, and able to completely shield the clearing operation against the participating agents' risk of default.
Real-time wideband cylindrical holographic surveillance system
Sheen, D.M.; McMakin, D.L.; Hall, T.E.; Severtsen, R.H.
1999-01-12
A wideband holographic cylindrical surveillance system is disclosed including a transceiver for generating a plurality of electromagnetic waves; an antenna for transmitting the electromagnetic waves toward a target at a plurality of predetermined positions in space; the transceiver also receiving and converting electromagnetic waves reflected from the target to electrical signals at a plurality of predetermined positions in space; a computer for processing the electrical signals to obtain signals corresponding to a holographic reconstruction of the target; and a display for displaying the processed information to determine the nature of the target. The computer has instructions to apply Fast Fourier Transforms and obtain a three-dimensional cylindrical image. 13 figs.
Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu
2018-04-20
A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.
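For orientation, the following is a brute-force reference computation of a Fresnel CGH from a small 3D point cloud, not the paper's parallel method: each hologram pixel accumulates the paraxial spherical-wave phase from every object point. Sizes, pitch, and wavelength are illustrative.

```python
import numpy as np

def fresnel_cgh(points, amplitudes, nx, ny, pitch, wavelength):
    """Brute-force Fresnel CGH: sum the spherical-wave contribution of each
    3D object point at every hologram pixel.  points: list of (x, y, z) in m."""
    k = 2.0 * np.pi / wavelength
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        # Fresnel (paraxial) approximation of the spherical wave from the point
        r = pz + ((X - px) ** 2 + (Y - py) ** 2) / (2.0 * pz)
        field += a * np.exp(1j * k * r)
    return np.angle(field)   # phase-only hologram pattern

hologram = fresnel_cgh(points=[(0.0, 0.0, 0.2), (1e-3, 0.0, 0.25)],
                       amplitudes=[1.0, 0.8],
                       nx=512, ny=512, pitch=8e-6, wavelength=532e-9)
```

At gigapixel scale this nested loop is exactly the cost that motivates the communication-avoiding parallel decomposition described in the abstract.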
A real-time spike sorting method based on the embedded GPU.
Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng
2017-07-01
Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphics processing units (GPUs) with high parallel computing capability might provide an alternative solution for meeting the increasing real-time computational demands of spike sorting. This study reports a method of real-time spike sorting through the compute unified device architecture (CUDA) implemented on an embedded GPU (NVIDIA Jetson Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each processing step, the method was further optimized within the GPU thread and memory model. Our results show that the GPU-based classifier on the TK1 is 37.92 times faster than the MATLAB-based classifier on a PC while achieving the same accuracy. The high-performance computing features of the embedded GPU demonstrated in our study suggest that embedded GPUs provide a promising platform for real-time neural signal processing.
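The pipeline named in the abstract (PCA features followed by K-means clustering) can be sketched on the CPU with scikit-learn; this is only an illustration of the algorithmic steps, not the CUDA implementation, and the synthetic waveforms are invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(waveforms, n_components=3, n_units=3):
    """waveforms: (n_spikes, n_samples) array of aligned spike snippets.
    Returns cluster labels assigning each spike to a putative unit."""
    features = PCA(n_components=n_components).fit_transform(waveforms)
    return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(features)

# Toy data: two synthetic spike shapes plus noise
rng = np.random.default_rng(0)
shape_a = np.exp(-((np.arange(32) - 10) ** 2) / 8.0)
shape_b = -0.8 * np.exp(-((np.arange(32) - 18) ** 2) / 20.0)
waves = np.vstack([shape_a + 0.05 * rng.standard_normal(32) for _ in range(100)] +
                  [shape_b + 0.05 * rng.standard_normal(32) for _ in range(100)])
labels = sort_spikes(waves, n_units=2)
```

The GPU version parallelizes the projection and the distance computations across spikes, which is what yields the reported speedup.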
Just-in-time adaptive classifiers-part II: designing the classifier.
Alippi, Cesare; Roveri, Manuel
2008-12-01
Aging effects, environmental changes, thermal drifts, and soft and hard faults affect physical systems by changing their nature and behavior over time. To cope with process evolution, adaptive solutions must be envisaged to track its dynamics; in this direction, adaptive classifiers are generally designed by assuming the stationarity hypothesis for the process generating the data, with very few results addressing nonstationary environments. This paper proposes a methodology based on k-nearest neighbor (k-NN) classifiers for designing adaptive classification systems able to react to changing conditions just-in-time (JIT), i.e., exactly when it is needed. k-NN classifiers have been selected for their computation-free training phase, the possibility of easily estimating the model complexity k, and the possibility of keeping the computational complexity of the classifier under control through suitable data reduction mechanisms. A JIT classifier requires a temporal detection of a (possible) process deviation (an aspect tackled in a companion paper) followed by an adaptive management of the knowledge base (KB) of the classifier to cope with the process change. The novelty of the proposed approach resides in the general framework supporting the real-time update of the KB of the classification system in response to novel information coming from the process, both in stationary conditions (accuracy improvement) and in nonstationary ones (process tracking), and in providing a suitable estimate of k. It is shown that the classification system grants consistency once the change brings the process generating the data into a new stationary state, as is the case in many real applications.
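A minimal sketch of the core idea, not the paper's full methodology: a k-NN classifier whose knowledge base is appended to in stationary conditions and truncated when an external change detector fires. The class name, the fixed k, and the trimming policy are all assumptions made for illustration.

```python
import numpy as np
from collections import deque

class JITKNN:
    """Toy just-in-time k-NN: the knowledge base (KB) grows with fresh labeled
    samples and is truncated when a change in the data-generating process is
    signalled by an external detector (detection itself is not modelled here)."""
    def __init__(self, k=3, max_kb=1000):
        self.k, self.kb = k, deque(maxlen=max_kb)

    def update(self, x, y):                    # accuracy improvement in stationarity
        self.kb.append((np.asarray(x, float), y))

    def react_to_change(self, keep_last=50):   # process tracking after a detection
        self.kb = deque(list(self.kb)[-keep_last:], maxlen=self.kb.maxlen)

    def predict(self, x):
        xs = np.array([p for p, _ in self.kb])
        ys = [label for _, label in self.kb]
        idx = np.argsort(np.linalg.norm(xs - np.asarray(x, float), axis=1))[:self.k]
        votes = [ys[i] for i in idx]
        return max(set(votes), key=votes.count)
```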
Importance of Vibronic Effects in the UV-Vis Spectrum of the 7,7,8,8-Tetracyanoquinodimethane Anion.
Tapavicza, Enrico; Furche, Filipp; Sundholm, Dage
2016-10-11
We present a computational method for simulating vibronic absorption spectra in the ultraviolet-visible (UV-vis) range and apply it to the 7,7,8,8-tetracyanoquinodimethane anion (TCNQ - ), which has been used as a ligand in black absorbers. Gaussian broadening of vertical electronic excitation energies of TCNQ - from linear-response time-dependent density functional theory produces only one band, which is qualitatively incorrect. Thus, the harmonic vibrational modes of the two lowest doublet states were computed, and the vibronic UV-vis spectrum was simulated using the displaced harmonic oscillator approximation, the frequency-shifted harmonic oscillator approximation, and the full Duschinsky formalism. An efficient real-time generating function method was implemented to avoid the exponential complexity of conventional Franck-Condon approaches to vibronic spectra. The obtained UV-vis spectra for TCNQ - agree well with experiment; the Duschinsky rotation is found to have only a minor effect on the spectrum. Born-Oppenheimer molecular dynamics simulations combined with calculations of the electronic excitation energies for a large number of molecular structures were also used for simulating the UV-vis spectrum. The Born-Oppenheimer molecular dynamics simulations yield a broadening of the energetically lowest peak in the absorption spectrum, but additional vibrational bands present in the experimental and simulated quantum harmonic oscillator spectra are not observed in the molecular dynamics simulations. Our results underline the importance of vibronic effects for the UV-vis spectrum of TCNQ - , and they establish an efficient method for obtaining vibronic spectra using a combination of linear-response time-dependent density functional theory and a real-time generating function approach.
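For context, the time-domain generating-function route to a vibronic line shape in the displaced harmonic oscillator approximation at zero temperature is commonly written as below; the notation is standard and not taken from the paper, and the damping gamma is a phenomenological broadening.

```latex
\sigma(\omega) \;\propto\; \omega\,\mathrm{Re}\!\int_{0}^{\infty}\! dt\;
  e^{\,i(\omega-\omega_{00})t-\gamma t}\,
  \exp\!\Big[\sum_{j} S_j\big(e^{-i\omega_j t}-1\big)\Big],
\qquad S_j=\tfrac{1}{2}\Delta_j^{2},
```

where $\omega_{00}$ is the adiabatic 0-0 transition energy, $\omega_j$ are the harmonic frequencies, $\Delta_j$ the dimensionless mode displacements, and $S_j$ the Huang-Rhys factors. Evaluating one time integral per spectrum avoids enumerating Franck-Condon factors, which is the exponential cost the abstract refers to.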
Cryotherapy simulator for localized prostate cancer.
Hahn, James K; Manyak, Michael J; Jin, Ge; Kim, Dongho; Rewcastle, John; Kim, Sunil; Walsh, Raymond J
2002-01-01
Cryotherapy is a treatment modality that uses a technique to selectively freeze tissue and thereby cause controlled tissue destruction. The procedure involves placement of multiple small diameter probes through the perineum into the prostate tissue at selected spatial intervals. Transrectal ultrasound is used to properly position the cylindrical probes before activation of the liquid Argon cooling element, which lowers the tissue temperature below -40 degrees Centigrade. Tissue effect is monitored by transrectal ultrasound changes as well as thermocouples placed in the tissue. The computer-based cryotherapy simulation system mimics the major surgical steps involved in the procedure. The simulated real-time ultrasound display is generated from 3-D ultrasound datasets where the interaction of the ultrasound with the instruments as well as the frozen tissue is simulated by image processing. The thermal and mechanical simulations of the tissue are done using a modified finite-difference/finite-element method optimized for real-time performance. The simulator developed is a part of a comprehensive training program, including a computer-based learning system and hands-on training program with a proctor, designed to familiarize the physician with the technique and equipment involved.
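As a rough illustration of the thermal side of such a simulator, the following is an explicit finite-difference step for 2D heat diffusion with cryoprobe cells clamped at -40 degrees C. It is a sketch under assumed parameters only; the actual simulator also handles latent heat of freezing, 3D geometry, and real-time optimizations.

```python
import numpy as np

def diffuse_step(T, alpha, dx, dt, probe_mask, probe_temp=-40.0):
    """One explicit finite-difference step of 2D heat diffusion.
    T: temperature field (deg C), alpha: thermal diffusivity (m^2/s),
    probe_mask: boolean array marking cryoprobe cells held at probe_temp."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx ** 2
    T_new = T + alpha * dt * lap
    T_new[probe_mask] = probe_temp            # Dirichlet condition at the probes
    return T_new

T = np.full((128, 128), 37.0)                  # body temperature
mask = np.zeros_like(T, dtype=bool); mask[60:68, 60:68] = True
dx, alpha = 1e-3, 1.4e-7                       # 1 mm grid, tissue-like diffusivity
dt = 0.2 * dx ** 2 / alpha                     # explicit stability: dt <= dx^2/(4*alpha)
for _ in range(100):
    T = diffuse_step(T, alpha, dx, dt, mask)
```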
Real-time sensing of optical alignment
NASA Technical Reports Server (NTRS)
Stier, Mark T.; Wissinger, Alan B.
1988-01-01
The Large Deployable Reflector and other future segmented optical systems may require autonomous, real-time alignment of their optical surfaces. Researchers have developed gratings located directly on a mirror surface to provide interferometric sensing of the location and figure of the mirror. The grating diffracts a small portion of the incident beam to a diffractive focus where the designed diagnostics can be performed. Mirrors with diffraction gratings were fabricated in two separate ways. The formation of a holographic grating over the entire surface of a mirror, thereby forming a Zone Plate Mirror (ZPM) is described. Researchers have also used computer-generated hologram (CGH) patches for alignment and figure sensing of mirrors. When appropriately illuminated, a grid of patches spread over a mirror segment will yield a grid of point images at a wavefront sensor, with the relative location of the points providing information on the figure and location of the mirror. A particular advantage of using the CGH approach is that the holographic patches can be computed, fabricated, and replicated on a mirror segment in a mass production 1-g clean room environment.
NASA Technical Reports Server (NTRS)
1997-01-01
Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.
Notification of real-time clinical alerts generated by pharmacy expert systems.
Miller, J. E.; Reichley, R. M.; McNamee, L. A.; Steib, S. A.; Bailey, T. C.
1999-01-01
We developed and implemented a strategy for notifying clinical pharmacists of alerts generated in real-time by two pharmacy expert systems: one for drug dosing and the other for adverse drug event prevention. Display pagers were selected as the preferred notification method and a concise, yet readable, format for displaying alert data was developed. This combination of real-time alert generation and notification via display pagers was shown to be efficient and effective in a 30-day trial. PMID:10566374
Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.
Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh
2011-01-01
We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input.
Collection and processing of data from a phase-coherent meteor radar
NASA Technical Reports Server (NTRS)
Backof, C. A., Jr.; Bowhill, S. A.
1974-01-01
An analysis of the measurement accuracy requirement of a high resolution meteor radar for observing short period, atmospheric waves is presented, and a system which satisfies the requirements is described. A medium-scale, real-time computer is programmed to perform all echo recognition and coordinate measurement functions. The measurement algorithms are exercised on noisy data generated by a program which simulates the hardware system, in order to find the effects of noise on the measurement accuracies.
Development of automation and robotics for space via computer graphic simulation methods
NASA Technical Reports Server (NTRS)
Fernandez, Ken
1988-01-01
A robot simulation system has been developed to perform automation and robotics system design studies. The system uses a procedure-oriented solid modeling language to produce a model of the robotic mechanism. The simulator generates the kinematics, inverse kinematics, dynamics, control, and real-time graphic simulations needed to evaluate the performance of the model. Simulation examples are presented, including simulation of the Space Station and the design of telerobotics for the Orbital Maneuvering Vehicle.
Rapid Sequencing of Complete env Genes from Primary HIV-1 Samples.
Laird Smith, Melissa; Murrell, Ben; Eren, Kemal; Ignacio, Caroline; Landais, Elise; Weaver, Steven; Phung, Pham; Ludka, Colleen; Hepler, Lance; Caballero, Gemma; Pollner, Tristan; Guo, Yan; Richman, Douglas; Poignard, Pascal; Paxinos, Ellen E; Kosakovsky Pond, Sergei L; Smith, Davey M
2016-07-01
The ability to study rapidly evolving viral populations has been constrained by the read length of next-generation sequencing approaches and the sampling depth of single-genome amplification methods. Here, we develop and characterize a method using Pacific Biosciences' Single Molecule, Real-Time (SMRT®) sequencing technology to sequence multiple, intact full-length human immunodeficiency virus-1 env genes amplified from viral RNA populations circulating in blood, and provide computational tools for analyzing and visualizing these data.
NASA Technical Reports Server (NTRS)
Clukey, Steven J.
1988-01-01
The high speed Dynamic Data Acquisition System (DDAS) is described which provides the capability for the simultaneous measurement of velocity, density, and total temperature fluctuations. The system of hardware and software is described in the context of the wind tunnel environment. The DDAS replaces both a recording mechanism and a separate data processing system. The data acquisition and data reduction processes have been combined within DDAS. DDAS receives input from hot wires and anemometers, amplifies and filters the signals with computer controlled modules, and converts the analog signals to digital with real-time simultaneous digitization followed by digital recording on disk or tape. Automatic acquisition (either from a computer link to an existing wind tunnel acquisition system, or from data acquisition facilities within DDAS) collects necessary calibration and environment data. The generation of hot wire sensitivities is done in DDAS, as is the application of sensitivities to the hot wire data to generate turbulence quantities. The presentation of the raw and processed data, in terms of root mean square values of velocity, density and temperature, and the processing of the spectral data are accomplished on demand in near-real-time with DDAS. A comprehensive description of the interface to the DDAS and of the internal mechanisms will be presented. A summary of operations relevant to the use of the DDAS will be provided.
Multi-resolution model-based traffic sign detection and tracking
NASA Astrophysics Data System (ADS)
Marinas, Javier; Salgado, Luis; Camplani, Massimo
2012-06-01
In this paper we propose an innovative approach to tackle the problem of traffic sign detection using a computer vision algorithm and taking into account real-time operation constraints, trying to establish intelligent strategies to simplify as much as possible the algorithm complexity and to speed up the process. Firstly, a set of candidates is generated according to a color segmentation stage, followed by a region analysis strategy, where spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Taking into consideration time constraints, efficiency is achieved two-fold: on the one hand, a multi-resolution strategy is adopted for segmentation, where global operations are applied only to low-resolution images, increasing the resolution to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs. Namely, the tracking of objects of interest allows inhibition areas to be generated, i.e. areas where no new traffic signs are expected to appear due to the existence of a traffic sign in the neighborhood. The proposed solution has been tested with real sequences in both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.
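A color segmentation stage of this kind might look like thresholding in HSV space for the red borders of warning and prohibition signs; the following NumPy sketch uses invented thresholds and a synthetic frame, not the authors' parameters, and only shows how a coarse pass over a low-resolution image seeds candidate regions.

```python
import numpy as np

def red_sign_candidates(hsv):
    """hsv: (H, W, 3) array with hue in [0, 180) and saturation/value in
    [0, 255] (OpenCV convention).  Returns a boolean mask of reddish,
    saturated pixels that seed candidate regions for the later region
    analysis and tracking stages."""
    h, s, v = hsv[..., 0].astype(int), hsv[..., 1], hsv[..., 2]
    red_hue = (h < 10) | (h > 170)          # red wraps around the hue circle
    return red_hue & (s > 100) & (v > 60)

# Synthetic stand-in for a camera frame already converted to HSV
hsv_image = np.dstack([np.random.randint(0, 180, (480, 640)),
                       np.random.randint(0, 256, (480, 640)),
                       np.random.randint(0, 256, (480, 640))]).astype(np.uint8)
low_res = hsv_image[::4, ::4]                # coarse pass over the whole frame
mask = red_sign_candidates(low_res)          # full resolution only near tracked signs
```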
Reducing adaptive optics latency using Xeon Phi many-core processors
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah
2015-11-01
The next generation of Extremely Large Telescopes (ELTs) for astronomy will rely heavily on the performance of their adaptive optics (AO) systems. Real-time control is at the heart of the critical technologies that will enable telescopes to deliver the best possible science and will require a very significant extrapolation from current AO hardware existing for 4-10 m telescopes. Investigating novel real-time computing architectures and testing their eligibility against anticipated challenges is one of the main priorities of technology development for the ELTs. This paper investigates the suitability of the Intel Xeon Phi, which is a commercial off-the-shelf hardware accelerator. We focus on wavefront reconstruction performance, implementing a straightforward matrix-vector multiplication (MVM) algorithm. We present benchmarking results of the Xeon Phi on a real-time Linux platform, both as a standalone processor and integrated into an existing real-time controller (RTC). Performance of single and multiple Xeon Phis are investigated. We show that this technology has the potential of greatly reducing the mean latency and variations in execution time (jitter) of large AO systems. We present both a detailed performance analysis of the Xeon Phi for a typical E-ELT first-light instrument along with a more general approach that enables us to extend to any AO system size. We show that systematic and detailed performance analysis is an essential part of testing novel real-time control hardware to guarantee optimal science results.
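The reconstruction kernel benchmarked in such studies is a matrix-vector multiply; the toy NumPy timing loop below illustrates how mean latency and jitter can be measured. The matrix dimensions are purely illustrative, not the E-ELT instrument sizes, and NumPy on a CPU is of course not the Xeon Phi implementation.

```python
import time
import numpy as np

n_slopes, n_actuators = 9232, 4656          # illustrative sizes only
R = np.random.rand(n_actuators, n_slopes).astype(np.float32)   # control matrix
s = np.random.rand(n_slopes).astype(np.float32)                # WFS slope vector

latencies = []
for _ in range(200):
    t0 = time.perf_counter()
    a = R @ s                                # wavefront reconstruction: one MVM
    latencies.append(time.perf_counter() - t0)

lat = np.array(latencies[10:])               # discard warm-up iterations
print(f"mean {lat.mean()*1e3:.3f} ms, jitter (std) {lat.std()*1e3:.3f} ms")
```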
A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1998-01-01
Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure; all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.
Faust, Oliver; Yu, Wenwei; Rajendra Acharya, U
2015-03-01
The concept of real-time is very important, as it deals with the realizability of computer based health care systems. In this paper we review biomedical real-time systems with a meta-analysis on computational complexity (CC), delay (Δ) and speedup (Sp). During the review we found that, in the majority of papers, the term real-time is part of the thesis indicating that a proposed system or algorithm is practical. However, these papers were not considered for detailed scrutiny. Our detailed analysis focused on papers which support their claim of achieving real-time, with a discussion on CC or Sp. These papers were analyzed in terms of processing system used, application area (AA), CC, Δ, Sp, implementation/algorithm (I/A) and competition. The results show that the ideas of parallel processing and algorithm delay were only recently introduced and journal papers focus more on Algorithm (A) development than on implementation (I). Most authors compete on big O notation (O) and processing time (PT). Based on these results, we adopt the position that the concept of real-time will continue to play an important role in biomedical systems design. We predict that parallel processing considerations, such as Sp and algorithm scaling, will become more important.
Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data
NASA Astrophysics Data System (ADS)
Chierici, F.; Embriaco, D.; Morucci, S.
2017-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on the real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test the tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by the Tohoku earthquake on March 11th 2011, using data recorded by several tide gauges scattered all over the Pacific area.
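The following toy sketch shows the general shape of such a detector (band-pass the sea-level series in the tsunami period band, then threshold the filtered amplitude); the band edges, threshold, sampling rate and synthetic signals are invented for illustration and this is not the TDA algorithm itself.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def tsunami_detector(level, fs, band=(1/3600.0, 1/100.0), thresh=0.05):
    """Toy detector: causally band-pass the sea-level series (metres) in the
    tsunami period band and flag samples exceeding `thresh`.  fs is the
    sampling rate in Hz; band edges and threshold are illustrative."""
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, level - level.mean())
    return filtered, np.abs(filtered) > thresh

t = np.arange(0.0, 6 * 3600.0, 15.0)          # 15 s sampling, 6 hours
fs = 1.0 / 15.0
tide = 0.8 * np.sin(2 * np.pi * t / (12.42 * 3600.0))        # semi-diurnal tide
packet = 0.3 * np.sin(2 * np.pi * (t - 3 * 3600.0) / 1200.0) \
         * np.exp(-((t - 3 * 3600.0) / 1800.0) ** 2)          # synthetic tsunami
filtered, alarm = tsunami_detector(tide + packet, fs)
print("first alarm at t =", t[np.argmax(alarm)] if alarm.any() else None, "s")
```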
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is needed both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by the interposition of the soft tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing and require less invasive equipment. The system consists of an optically tracked single-transducer US probe, a pulser/receiver, an FPGA-based board responsible for logic control command generation and for real-time signal processing, and three custom-made boards (signal acquisition, blanking and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
The VLBA correlator: Real-time in the distributed era
NASA Technical Reports Server (NTRS)
Wells, D. C.
1992-01-01
The correlator is the signal processing engine of the Very Long Baseline Array (VLBA). Radio signals are recorded on special wideband (128 Mb/s) digital recorders at the 10 telescopes, with sampling times controlled by hydrogen maser clocks. The magnetic tapes are shipped to the Array Operations Center in Socorro, New Mexico, where they are played back simultaneously into the correlator. Real-time software and firmware control the playback drives to achieve synchronization, compute models of the wavefront delay, control the numerous modules of the correlator, and record FITS files of the fringe visibilities at the back end of the correlator. In addition to the more than 3000 custom VLSI chips which handle the massive data flow of the signal processing, the correlator contains a total of more than 100 programmable computers with 8-, 16- and 32-bit CPUs. Code is downloaded into the front-end CPUs depending on operating mode. Low-level code is assembly language, high-level code is C running under an RT OS. We use VxWorks on Motorola MVME147 CPUs. Code development is on a complex of SPARC workstations connected to the RT CPUs by Ethernet. The overall management of the correlation process is dependent on a database management system. We use Ingres running on a Sparcstation-2. We transfer logging information from the database of the VLBA Monitor and Control System to our database using Ingres/NET. Job scripts are computed and transferred to the real-time computers using NFS, and correlation job execution logs and status flow back by the same route. Operator status and control displays use windows on workstations, interfaced to the real-time processes by network protocols. The extensive network protocol support provided by VxWorks is invaluable. The VLBA Correlator's dependence on network protocols is an example of the radical transformation of the real-time world over the past five years. Real-time is becoming more like conventional computing. Paradoxically, 'conventional' computing is also adopting practices from the real-time world: semaphores, shared memory, light-weight threads, and concurrency. This appears to be a convergence of thinking.
The design of real time infrared image generation software based on Creator and Vega
NASA Astrophysics Data System (ADS)
Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu
2013-09-01
To meet the requirements for realistic, real-time dynamic infrared imagery in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are briefly introduced, and the main features of the Vega development environment are analyzed. Methods for infrared modeling and background are presented, the design flow chart of the real-time IR image generation software and the functions of the TMM Tool, the MAT Tool and the sensor module are explained, and the real-time aspects of the software design are addressed.
A 3D visualization and simulation of the individual human jaw.
Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo
2003-01-01
A new biomechanical three-dimensional (3D) model of the human mandible based on a computer-generated virtual model is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation offers movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, where the muscular forces acting through the occlusion and the condylar surfaces are in a state of 3D equilibrium. In the model, all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw together with the force level necessary for chewing, in a kind of mandibular balance that prevents dislocation and loading of nonarticular tissues. The work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), combined in a single low-cost, easy-to-operate object package.
Real time eye tracking using Kalman extended spatio-temporal context learning
NASA Astrophysics Data System (ADS)
Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu
2017-06-01
Real time eye tracking has numerous applications in human computer interaction, such as mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system is an extension of Spatio-Temporal context learning through Kalman Filtering. Spatio-Temporal Context Learning offers state of the art accuracy in general object tracking but its performance suffers due to object occlusion. Addition of the Kalman filter allows the proposed method to model the dynamics of the motion of the eye and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time by eye movements.
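The Kalman component of such a tracker can be sketched as a constant-velocity filter over the pupil-centre pixel coordinates; during a blink the update step is skipped and the prediction carries the estimate forward. This is only the generic filter, not the paper's spatio-temporal context learner, and the noise parameters are illustrative.

```python
import numpy as np

class EyeKalman:
    """Constant-velocity Kalman filter over pixel state (x, y, vx, vy)."""
    def __init__(self, dt=1/30.0, q=1e-2, r=4.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)
        self.Q = q * np.eye(4)              # process noise
        self.R = r * np.eye(2)              # measurement noise (detector jitter)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0

    def step(self, z=None):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update only when a measurement is available (skip during blinks)
        if z is not None:
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                   # estimated pupil-centre position
```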
Towards real-time remote processing of laparoscopic video
NASA Astrophysics Data System (ADS)
Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.
2015-03-01
Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
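A quick sanity check of the numbers quoted in the abstract, as a small worked example: the per-frame payload implied by the aggregate stream rate, and the network throughput needed to ship an uncompressed frame to a remote cluster and back within the 1/30 s budget, ignoring processing time.

```python
frame_rate = 30                      # frames per second
stream_rate_mb = 360.0               # megabytes per second (from the abstract)
frame_mb = stream_rate_mb / frame_rate
print(f"per-frame payload: {frame_mb:.1f} MB")        # ~12 MB, matching the 11.9 MB figure

budget_s = 1.0 / frame_rate          # each frame must go out and come back
round_trip_mb = 2 * frame_mb         # uncompressed, both directions
required_gbps = round_trip_mb * 8 / 1000 / budget_s
print(f"network throughput needed (no processing time): {required_gbps:.1f} Gb/s")
```

The result, on the order of several gigabits per second, is why the authors look to software-defined high-speed networks rather than ordinary hospital connectivity.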
The IRGen infrared data base modeler
NASA Technical Reports Server (NTRS)
Bernstein, Uri
1993-01-01
IRGen is a modeling system which creates three-dimensional IR data bases for real-time simulation of thermal IR sensors. Starting from a visual data base, IRGen computes the temperature and radiance of every data base surface with a user-specified thermal environment. The predicted gray shade of each surface is then computed from the user specified sensor characteristics. IRGen is based on first-principles models of heat transport and heat flux sources, and it accurately simulates the variations of IR imagery with time of day and with changing environmental conditions. The starting point for creating an IRGen data base is a visual faceted data base, in which every facet has been labeled with a material code. This code is an index into a material data base which contains surface and bulk thermal properties for the material. IRGen uses the material properties to compute the surface temperature at the specified time of day. IRGen also supports image generator features such as texturing and smooth shading, which greatly enhance image realism.
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities in a dynamical system are present and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy which ensures low computational cost and small error in order to deal with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through a better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of the Proper Orthogonal Decomposition (POD).
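For context only, the conventional snapshot-based route that PSS improves upon can be sketched as below: build a reduced basis from solution snapshots via POD (an SVD), then Galerkin-project the operator onto that basis. The energy criterion and function names are illustrative, and this is not the paper's PSS procedure, which avoids the POD step.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """snapshots: (n_dof, n_snapshots) matrix of solution states.
    Returns an orthonormal reduced basis capturing `energy` of the variance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]                       # columns span the reduced space

def reduce_operator(A, Phi):
    """Galerkin projection of a linear operator A onto the reduced basis Phi."""
    return Phi.T @ A @ Phi                # r x r reduced operator

# Usage: approximate the full state as x ~ Phi @ q, with q solved in the
# reduced (r-dimensional) system.
```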
Theoretical Framework for Integrating Distributed Energy Resources into Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Jianming; Wu, Di; Kalsi, Karanjit
This paper focuses on developing a novel theoretical framework for effective coordination and control of a large number of distributed energy resources in distribution systems in order to more reliably manage the future U.S. electric power grid under the high penetration of renewable generation. The proposed framework provides a systematic view of the overall structure of the future distribution systems along with the underlying information flow, functional organization, and operational procedures. It is characterized by the features of being open, flexible and interoperable with the potential to support dynamic system configuration. Under the proposed framework, the energy consumption of various DERs is coordinated and controlled in a hierarchical way by using market-based approaches. The real-time voltage control is simultaneously considered to complement the real power control in order to keep nodal voltages stable within acceptable ranges during real time. In addition, computational challenges associated with the proposed framework are also discussed with recommended practices.
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of systems of numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and the near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
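A compact simulated-annealing mapper of task-graph vertices onto processors is sketched below. The cost function here is only a simple makespan-plus-communication proxy, a deliberate simplification of the non-analytical cost the paper describes; the problem data and schedule are illustrative.

```python
import math, random

def anneal_mapping(comp, comm, n_proc, iters=20000, t0=10.0, alpha=0.9995):
    """comp[i]: computation cost of task i; comm[(i, j)]: traffic between
    tasks i and j.  Returns a task -> processor assignment and its cost."""
    tasks = list(range(len(comp)))
    assign = [random.randrange(n_proc) for _ in tasks]

    def cost(a):
        loads = [0.0] * n_proc
        for i in tasks:
            loads[a[i]] += comp[i]
        cut = sum(w for (i, j), w in comm.items() if a[i] != a[j])
        return max(loads) + cut              # load-balance proxy + communication penalty

    c, T = cost(assign), t0
    for _ in range(iters):
        i, p = random.choice(tasks), random.randrange(n_proc)
        old, assign[i] = assign[i], p        # propose moving task i to processor p
        c_new = cost(assign)
        if c_new <= c or random.random() < math.exp((c - c_new) / T):
            c = c_new                        # accept the move
        else:
            assign[i] = old                  # reject and roll back
        T *= alpha                           # cool the temperature
    return assign, c

mapping, final_cost = anneal_mapping([3, 1, 4, 1, 5],
                                     {(0, 1): 2, (1, 2): 1, (3, 4): 3}, n_proc=2)
```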
A Fast and Flexible Method for Meta-Map Building for ICP-Based SLAM
NASA Astrophysics Data System (ADS)
Kurian, A.; Morin, K. W.
2016-06-01
Recent developments in LiDAR sensors make mobile mapping fast and cost-effective. These sensors generate a large amount of data, which in turn improves the coverage and details of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. If we have good GNSS coverage, building a map is a well-addressed problem. But in an indoor environment, we have limited GNSS reception and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. In order to speed up the correspondence finding step, a dual kd-tree and circular buffer architecture is proposed. We have shown that the proposed method can run in real time and has excellent navigation accuracy characteristics.
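As a point of reference, a plain voxel-grid sub-sampling pass (one centroid per occupied voxel) is sketched below; the paper's strategy is more selective about preserving key features, but this shows the kind of point-count reduction that keeps ICP and the kd-tree tractable. Voxel size and the synthetic cloud are illustrative.

```python
import numpy as np

def voxel_downsample(points, voxel=0.10):
    """points: (N, 3) array in metres.  Keep the centroid of each occupied
    voxel, cutting the point count handed to ICP and the kd-tree."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

cloud = np.random.rand(200000, 3) * 20.0      # synthetic 20 m cube of points
reduced = voxel_downsample(cloud, voxel=0.25)
```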
Hardware design and implementation of fast DOA estimation method based on multicore DSP
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics including high performance computing, low power consumption, large-capacity data storage and high speed data transmission, which make it able to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time DOA estimation.
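For reference, the standard (complex-valued) MUSIC pseudospectrum for a uniform linear array is sketched below; the paper's real-valued variant, which is what reduces the complexity, is not reproduced here, and the array geometry and angle grid are assumptions.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """X: (n_antennas, n_snapshots) complex baseband snapshots of a uniform
    linear array with element spacing d (in wavelengths).  Returns the MUSIC
    pseudospectrum over the candidate angles (degrees)."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
    En = eigvecs[:, : m - n_sources]                # noise-subspace eigenvectors
    p = np.empty(len(angles))
    for i, ang in enumerate(angles):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(np.radians(ang)))
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, p

# Peaks of the pseudospectrum p correspond to the estimated directions of arrival.
```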
A Scheduling Algorithm for Replicated Real-Time Tasks
NASA Technical Reports Server (NTRS)
Yu, Albert C.; Lin, Kwei-Jay
1991-01-01
We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerant requirements. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, the redundancy and masking techniques are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task with the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
Real-World Neuroimaging Technologies
2013-05-10
The system enables long-term wear, with up to 10 consecutive hours of operation time. Its wireless technologies, light weight (200 g), and dry sensors support monitoring of brain activity in real-world scenarios. INDEX TERMS: behavioral science, biomarkers, body sensor networks, brain-computer interfaces, data acquisition, electroencephalography monitoring, translational …
NASA Technical Reports Server (NTRS)
Kyle, R. G.
1972-01-01
Information transfer between the operator and computer-generated display systems is an area where the human factors engineer discovers little useful design data relating human performance to system effectiveness. This study utilized a computer-driven, cathode-ray-tube graphic display to quantify human response speed in a sequential information processing task. The performance criterion was response time to sixteen cell elements of a square matrix display. A stimulus signal instruction specified selected cell locations by both row and column identification. An equally probable number code, from one to four, was assigned at random to the sixteen cells of the matrix and correspondingly required one of four matched keyed-response alternatives. The display format corresponded to a sequence of diagnostic system maintenance events that enabled the operator to verify prime system status, engage backup redundancy for failed subsystem components, and exercise alternate decision-making judgements. The experimental task bypassed the skilled decision-making element and computer processing time, in order to determine a lower bound on the basic response speed for a given stimulus/response hardware arrangement.
Parallel processor for real-time structural control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tise, B.L.
1992-01-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up-tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An Open Windows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
Existing real-time monitoring systems for multiple physiological parameters suffer from problems such as insufficient server capacity for physiological data storage and analysis, which means that data consistency cannot be guaranteed, poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new cloud-computing-based solution for multiple physiological parameters, with clustered back-end data storage and processing. Through our studies, batch processing for longitudinal analysis of patients' historical data was introduced. The work covered resource virtualization at the IaaS layer of the cloud platform, the construction of a real-time computing platform at the PaaS layer, the reception and analysis of data streams at the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result is real-time transmission, storage and analysis of large amounts of physiological information. The simulation test results showed that the remote multiple physiological parameter monitoring system based on the cloud platform has obvious advantages in processing time and load balancing over the traditional server model. This architecture solves the problems of long turnaround time, poor real-time analysis performance, lack of extensibility and other issues that exist in traditional remote medical services, and provides technical support for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode of home health monitoring with multiple wirelessly monitored physiological parameters.
Piao, Jin-Chun; Kim, Shin-Dug
2017-01-01
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143
Input Devices and Interaction Techniques for VR-Enhanced Medicine
NASA Astrophysics Data System (ADS)
Gallo, Luigi; Pietro, Giuseppe De
Virtual Reality (VR) technologies make it possible to reproduce faithfully real life events in computer-generated scenarios. This approach has the potential to simplify the way people solve problems, since they can take advantage of their real life experiences while interacting in synthetic worlds.
NASA Technical Reports Server (NTRS)
Bordano, Aldo; Uhde-Lacovara, JO; Devall, Ray; Partin, Charles; Sugano, Jeff; Doane, Kent; Compton, Jim
1993-01-01
The Navigation, Control and Aeronautics Division (NCAD) at NASA-JSC is exploring ways of producing Guidance, Navigation and Control (GN&C) flight software faster, better, and cheaper. To achieve these goals, NCAD established two hardware/software facilities that take an avionics design project from initial inception through high fidelity real-time hardware-in-the-loop testing. Commercially available software products are used to develop the GN&C algorithms in block diagram form and then automatically generate source code from these diagrams. A high fidelity real-time hardware-in-the-loop laboratory provides users with the capability to analyze mass memory usage within the targeted flight computer, verify hardware interfaces, and conduct system-level verification, performance, and acceptance testing, as well as mission verification using reconfigurable and mission-unique data. To evaluate these concepts and tools, NCAD embarked on a project to build a real-time 6 DOF simulation of the Soyuz Assured Crew Return Vehicle flight software. To date, a productivity increase of 185 percent has been seen over traditional NASA methods for developing flight software.
Self-balanced real-time photonic scheme for ultrafast random number generation
NASA Astrophysics Data System (ADS)
Li, Pu; Guo, Ya; Guo, Yanqiang; Fan, Yuanlong; Guo, Xiaomin; Liu, Xianglian; Shore, K. Alan; Dubrova, Elena; Xu, Bingjie; Wang, Yuncai; Wang, Anbang
2018-06-01
We propose a real-time self-balanced photonic method for extracting ultrafast random numbers from broadband randomness sources. In place of electronic analog-to-digital converters (ADCs), the balanced photo-detection technology is used to directly quantize optically sampled chaotic pulses into a continuous random number stream. Benefitting from ultrafast photo-detection, our method can efficiently eliminate the generation rate bottleneck from electronic ADCs which are required in nearly all the available fast physical random number generators. A proof-of-principle experiment demonstrates that using our approach 10 Gb/s real-time and statistically unbiased random numbers are successfully extracted from a bandwidth-enhanced chaotic source. The generation rate achieved experimentally here is being limited by the bandwidth of the chaotic source. The method described has the potential to attain a real-time rate of 100 Gb/s.
Real-time determination of the worst tsunami scenario based on Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya
2016-04-01
In recent years, real-time tsunami inundation forecasting has been developed with the advances of dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the greatest sources of uncertainty. An uncertain tsunami source model risks underestimating tsunami height, the extent of the inundation zone, and damage. Tsunami source inversion using observed seismic, geodetic and tsunami data is the most effective way to avoid underestimation, but acquiring the observed data takes additional time, and this limitation makes it difficult to complete real-time tsunami inundation forecasting within sufficient lead time. Rather than waiting for precise tsunami observations, from a disaster management point of view we aim to determine the worst tsunami source scenario, for use in real-time tsunami inundation forecasting and mapping, using the seismic information of Earthquake Early Warning (EEW) that can be obtained immediately after the event is triggered. After an earthquake occurs, JMA's EEW estimates magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter and scaling law, we determine possible tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e. time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g. Tsushima et al., 2014). The scenario analysis of our method consists of the following two steps. (1) Searching the worst-scenario range by calculating 90 scenarios with various strikes and fault positions; from the maximum tsunami heights of the 90 scenarios, we determine a narrower strike range that causes high tsunami heights in the area of concern. (2) Calculating 900 scenarios with different strike, dip, length, width, depth and fault position, with the strike limited to the range obtained from the 90-scenario calculation; from these 900 scenarios, we determine the worst tsunami scenarios from a disaster management point of view, such as the one with the shortest travel time and the one with the highest water level. The method was applied to a hypothetical earthquake and shown to effectively search for the worst tsunami source scenario in real time, for use as an input to real-time tsunami inundation forecasting.
Exploiting the Dynamics of Soft Materials for Machine Learning.
Nakajima, Kohei; Hauser, Helmut; Li, Tao; Pfeifer, Rolf
2018-06-01
Soft materials are increasingly utilized for various purposes in many engineering applications. These materials have been shown to perform a number of functions that were previously difficult to implement using rigid materials. Here, we argue that the diverse dynamics generated by actuating soft materials can be effectively used for machine learning purposes. This is demonstrated using a soft silicone arm through a technique of multiplexing, which enables the rich transient dynamics of the soft materials to be fully exploited as a computational resource. The computational performance of the soft silicone arm is examined through two standard benchmark tasks. Results show that the soft arm compares well to or even outperforms conventional machine learning techniques under multiple conditions. We then demonstrate that this system can be used for the sensory time series prediction problem for the soft arm itself, which suggests its immediate applicability to a real-world machine learning problem. Our approach, on the one hand, represents a radical departure from traditional computational methods, whereas on the other hand, it fits nicely into a more general perspective of computation by way of exploiting the properties of physical materials in the real world.
NASA Astrophysics Data System (ADS)
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
According to the requirements of real-time performance, reliability and safety for aerospace experiments, a single-center cloud computing technology application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on the analysis of the test results, a preliminary conclusion is obtained: a cloud computing platform can be applied to compute-intensive aerospace experiment workloads, whereas for I/O-intensive workloads the traditional physical machine is recommended.
High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.
Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent
2016-08-01
Spiking neural networks (SNN) are the latest generation of neural networks, which try to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is more efficiently exploited. Liquid state machines (LSM) have arisen as a strategic technique to implement recurrent designs of SNN with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSM by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.
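As a rough illustration of the reservoir idea underlying LSMs (though not of the Boolean-gate, probabilistic-computing hardware described above), the sketch below drives a fixed random recurrent network with an input series and trains only a linear readout for one-step-ahead forecasting; all sizes and constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, washout = 200, 100

# Fixed random recurrent "liquid" (echo-state-style stand-in); only the
# readout layer is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

u = np.sin(0.2 * np.arange(2000))[:, None]        # toy input series
target = np.roll(u, -1, axis=0)                   # one-step-ahead prediction

x = np.zeros((n_res, 1))
states = []
for t in range(len(u)):
    x = np.tanh(W @ x + W_in @ u[t:t + 1].T)      # reservoir transient dynamics
    states.append(x.ravel())
X = np.array(states)[washout:]
Y = target[washout:]

# Ridge-regression readout.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
print("one-step prediction MSE:", float(np.mean((pred - Y) ** 2)))
```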
Dorval, A D; Christini, D J; White, J A
2001-10-01
We describe a system for real-time control of biological and other experiments. This device, based around the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form, or more accurately as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the worldwide web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
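A dynamic-clamp iteration is conceptually simple: read the membrane potential, update the gating state of the modeled conductance, and command the injection of I = g(V)·(E_rev − V). The sketch below shows one such update for a single first-order conductance with made-up parameters; in a real system the loop would be driven by the data-acquisition hardware rather than the synthetic voltages used here.

```python
import numpy as np

dt = 5e-5                          # 20 kHz update rate
g_max, e_rev = 10e-9, -80e-3       # 10 nS artificial K-type conductance, -80 mV
tau, v_half, k = 5e-3, -40e-3, 5e-3
n = 0.0                            # gating variable

def n_inf(v):
    # Steady-state activation (Boltzmann curve), purely illustrative.
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def dynamic_clamp_step(v_membrane):
    """One real-time iteration: update gating, return the command current."""
    global n
    n += dt * (n_inf(v_membrane) - n) / tau    # first-order gating kinetics
    g = g_max * n
    return g * (e_rev - v_membrane)            # current injected into the cell

# Offline check with a synthetic voltage sweep instead of DAQ hardware.
for v in np.linspace(-70e-3, 0.0, 5):
    print(f"V={v*1e3:5.1f} mV -> I={dynamic_clamp_step(v)*1e12:7.2f} pA")
```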
Using computers to overcome math-phobia in an introductory course in musical acoustics
NASA Astrophysics Data System (ADS)
Piacsek, Andrew A.
2002-11-01
In recent years, the desktop computer has acquired the signal processing and visualization capabilities once obtained only with expensive specialized equipment. With the appropriate A/D card and software, a PC can behave like an oscilloscope, a real-time signal analyzer, a function generator, and a synthesizer, with both audio and visual outputs. In addition, the computer can be used to visualize specific wave behavior, such as superposition and standing waves, refraction, dispersion, etc. These capabilities make the computer an invaluable tool for teaching basic acoustic principles to students with very poor math skills. In this paper I describe my approach to teaching the introductory-level Physics of Musical Sound at Central Washington University, in which very few science students enroll. Emphasis is placed on how visualization with computers can help students appreciate and apply quantitative methods for analyzing sound.
NASA Technical Reports Server (NTRS)
Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)
1995-01-01
NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer-generated imagery and synthetic vision. This research is made possible in part by a full-color, wide field-of-view Helmet Mounted Display (HMD) system that provides high-performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system that was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit will be discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).
Fast and accurate mock catalogue generation for low-mass galaxies
NASA Astrophysics Data System (ADS)
Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe
2016-06-01
We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.
Computer aiding for low-altitude helicopter flight
NASA Technical Reports Server (NTRS)
Swenson, Harry N.
1991-01-01
A computer-aiding concept for low-altitude helicopter flight was developed and evaluated in a real-time piloted simulation. The concept included an optimal-control trajectory-generation algorithm based on dynamic programming, and a head-up display (HUD) presentation of a pathway-in-the-sky, a phantom aircraft, and a flight-path vector/predictor symbol. The trajectory-generation algorithm uses knowledge of the global mission requirements, a digital terrain map, aircraft performance capabilities, and advanced navigation information to determine a trajectory between mission waypoints that minimizes threat exposure by seeking valleys. The pilot evaluation was conducted at NASA Ames Research Center's Sim Lab facility in both the fixed-base Interchangeable Cab (ICAB) simulator and the moving-base Vertical Motion Simulator (VMS) by pilots representing NASA, the U.S. Army, and the U.S. Air Force. The pilots manually tracked the trajectory generated by the algorithm utilizing the HUD symbology. They were able to perform the tracking tasks satisfactorily while maintaining a high degree of awareness of the outside world.
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
In order to satisfy the requirements of real-time performance and generality, a laser target simulator in a semi-physical simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem to ensure the real-time performance of the system, combined with a reflective memory network to complete real-time tasks such as calculating the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process. At the same time, we use LabWindows/CVI to build a graphical interface and to complete the non-real-time tasks of the simulation process, such as man-machine interaction and the display and storage of the simulation data, which run under the Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is accomplished. The experimental results show that this system has strong real-time performance, high stability, and high simulation accuracy, together with good human-computer interaction.
Pizzolato, Claudio; Lloyd, David G.; Barrett, Rod S.; Cook, Jill L.; Zheng, Ming H.; Besier, Thor F.; Saxby, David J.
2017-01-01
Musculoskeletal tissues respond to optimal mechanical signals (e.g., strains) through anabolic adaptations, while mechanical signals above and below optimal levels cause tissue catabolism. If an individual's physical behavior could be altered to generate optimal mechanical signaling to musculoskeletal tissues, then targeted strengthening and/or repair would be possible. We propose new bioinspired technologies to provide real-time biofeedback of relevant mechanical signals to guide training and rehabilitation. In this review we provide a description of how wearable devices may be used in conjunction with computational rigid-body and continuum models of musculoskeletal tissues to produce real-time estimates of localized tissue stresses and strains. It is proposed that these bioinspired technologies will facilitate a new approach to physical training that promotes tissue strengthening and/or repair through optimal tissue loading. PMID:29093676
NASA Astrophysics Data System (ADS)
Mikkili, Suresh; Panda, Anup Kumar; Prattipati, Jayanthi
2015-06-01
Nowadays, researchers want to develop their models in a real-time environment. Simulation tools have been widely used for the design and improvement of electrical systems since the mid-twentieth century. The evolution of simulation tools has progressed in step with the evolution of computing technologies. In recent years, computing technologies have improved dramatically in performance and become widely available at a steadily decreasing cost. Consequently, simulation tools have also seen dramatic performance gains and steady cost decreases. Researchers and engineers now have access to affordable, high-performance simulation tools that were previously too cost-prohibitive for all but the largest manufacturers. This work introduces a specific class of digital simulator known as a real-time simulator by answering the questions "what is real-time simulation", "why is it needed" and "how does it work". The latest trend in real-time simulation consists of exporting simulation models to FPGAs. In this article, the steps involved in implementing a model from MATLAB to real time are provided in detail.
NASA Astrophysics Data System (ADS)
Savastano, Giorgio; Komjathy, Attila; Verkhoglyadova, Olga; Wei, Yong; Mazzoni, Augusto; Crespi, Mattia
2017-04-01
Tsunamis can produce gravity waves that propagate up to the ionosphere generating disturbed electron densities in the E and F regions. These ionospheric disturbances are studied in detail using ionospheric total electron content (TEC) measurements collected by continuously operating ground-based receivers from the Global Navigation Satellite Systems (GNSS). Here, we present results using a new approach, named VARION (Variometric Approach for Real-Time Ionosphere Observation), and for the first time, we estimate slant TEC (sTEC) variations in a real-time scenario from GPS and Galileo constellations. Specifically, we study the 2016 New Zealand tsunami event using GNSS receivers with multi-constellation tracking capabilities located in the Pacific region. We compare sTEC estimates obtained using GPS and Galileo constellations. The efficiency of the real-time sTEC estimation using the VARION algorithm has been demonstrated for the 2012 Haida Gwaii tsunami event. TEC variations induced by the tsunami event are computed using 56 GPS receivers in Hawai'i. We observe TEC perturbations with amplitudes up to 0.25 TEC units and traveling ionospheric disturbances moving away from the epicenter at a speed of about 316 m/s. We present comparisons with the real-time tsunami model MOST (Method of Splitting Tsunami) provided by the NOAA Center for Tsunami Research. We observe variations in TEC that correlate well in time and space with the propagating tsunami waves. We conclude that the integration of different satellite constellations is a crucial step forward to increasing the reliability of real-time tsunami detection systems using ground-based GNSS receivers as an augmentation to existing tsunami early warning systems.
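The variometric idea can be sketched in a few lines: epoch-to-epoch differences of the geometry-free carrier-phase combination are scaled to sTEC changes and accumulated, so no ambiguity or bias estimation is needed as long as no cycle slips occur. The constants below are the GPS L1/L2 frequencies, the phase series is synthetic, and the scaling follows the standard 40.3 m^3 s^-2 ionospheric term rather than the exact VARION formulation.

```python
import numpy as np

F1, F2 = 1575.42e6, 1227.60e6            # GPS L1/L2 frequencies [Hz]
K = F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))   # metres of L_GF -> electrons/m^2

rng = np.random.default_rng(2)
t = np.arange(0, 900, 30.0)              # 30 s epochs, 15 min of data
true_stec = 0.25e16 * np.sin(2 * np.pi * t / 600.0)        # 0.25 TECU wave
l_gf = true_stec / K + rng.normal(0, 1e-3, t.size)         # geometry-free phase [m]

d_stec = K * np.diff(l_gf)               # epoch-to-epoch sTEC change [el/m^2]
stec_variation = np.concatenate([[0.0], np.cumsum(d_stec)]) / 1e16   # TECU
print("peak-to-peak sTEC variation: %.2f TECU" % np.ptp(stec_variation))
```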
NASA Astrophysics Data System (ADS)
Adnan, F. A.; Romlay, F. R. M.; Shafiq, M.
2018-04-01
Owing to the advent of Industry 4.0, the need to further evaluate the processes applied in additive manufacturing, particularly the computational process of slicing, is non-trivial. This paper evaluates a real-time slicing algorithm for slicing an STL-formatted computer-aided design (CAD) model. A line-plane intersection equation is applied to perform the slicing procedure at any given height. The application of this algorithm has been found to provide a better computational time regardless of the number of facets in the STL model. The performance of the algorithm is evaluated by comparing the computational times for different geometries.
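The line-plane intersection at the heart of such a slicer can be sketched directly: for each STL facet, find the two edges that cross the plane z = const and interpolate the crossing points. The toy facets below are made up, and degenerate cases (vertices lying exactly on the plane) are ignored for brevity.

```python
import numpy as np

def slice_triangle(tri, z):
    """Intersect one STL facet (3x3 array of vertices) with the plane z=const.
    Returns the two segment endpoints, or None if the plane misses the facet."""
    pts = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        p, q = tri[a], tri[b]
        if (p[2] - z) * (q[2] - z) < 0:          # edge crosses the plane
            s = (z - p[2]) / (q[2] - p[2])       # line-plane intersection parameter
            pts.append(p + s * (q - p))
    return np.array(pts) if len(pts) == 2 else None

def slice_mesh(facets, z):
    """Collect contour segments for all facets at the given build height."""
    return [seg for tri in facets if (seg := slice_triangle(tri, z)) is not None]

# Toy mesh: two facets of a wedge (hypothetical data).
facets = np.array([
    [[0, 0, 0], [10, 0, 0], [0, 0, 10]],
    [[0, 0, 0], [0, 10, 0], [0, 0, 10]],
], dtype=float)
print(slice_mesh(facets, z=2.5))
```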
Synchronization and fault-masking in redundant real-time systems
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, K. G.; Butler, R. W.
1983-01-01
A real-time computer may fail because of massive component failures or because it does not respond quickly enough to satisfy real-time requirements. An increase in redundancy - a conventional means of improving reliability - can improve the former but can, in some cases, degrade the latter considerably because of the overhead associated with redundancy management, namely the time delay resulting from synchronization and voting/interactive-consistency techniques. The implications for reliability of synchronization and voting/interactive-consistency algorithms in N-modular clusters are considered. All these studies were carried out in the context of real-time applications. As a demonstrative example, we have analyzed results from experiments conducted at the NASA Airlab on the Software Implemented Fault Tolerance (SIFT) computer. This analysis indicates that in most real-time applications it is better to employ hardware synchronization instead of software synchronization and not to allow reconfiguration.
Method for laser-based two-dimensional navigation system in a structured environment
Boultinghouse, Karlan D.; Schoeneman, J. Lee; Tise, Bertice L.
1989-01-01
A low power, narrow laser beam, generated by a laser carried by a mobile vehicle, is rotated about a vertical reference axis as the vehicle navigates within a structured environment. At least three stationary retroreflector elements are located at known positions, preferably at the periphery of the structured environment, with one of the elements having a distinctive retroreflection. The projected rotating beam traverses each retroreflector in succession, and the corresponding retroreflections are received at the vehicle and focused on a photoelectric cell to generate corresponding electrical signals. The signal caused by the distinctive retroreflection serves as an angle-measurement datum. An angle encoder coupled to the apparatus rotating the projected laser beam provides the angular separation from this datum of the lines connecting the mobile reference axis to successive retroreflectors. These real-time angular data are used, together with the known locations of the retroreflectors, to trigonometrically compute, using three-point resection, the exact real-time location of the mobile reference axis (hence the navigating vehicle) vis-a-vis the structured environment, e.g., in terms of two-dimensional Cartesian coordinates associated with the environment.
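The resection step can be illustrated with a small numerical solver: given known beacon coordinates and the three measured angles, solve for the vehicle's position and heading. This sketch uses least squares rather than the closed-form trigonometric three-point resection described above, and the beacon layout and pose are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Known retroreflector positions (hypothetical layout, metres).
beacons = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])

def measured_angles(pose):
    """Angles of each beacon seen by the rotating beam, relative to the
    vehicle's angular datum (the distinctive retroreflector defines zero)."""
    x, y, heading = pose
    return np.arctan2(beacons[:, 1] - y, beacons[:, 0] - x) - heading

def resect(angles, guess=(5.0, 5.0, 0.0)):
    """Numerical resection: recover (x, y, heading) from the three angles."""
    def residual(pose):
        d = measured_angles(pose) - angles
        return np.arctan2(np.sin(d), np.cos(d))   # wrap residuals to [-pi, pi]
    return least_squares(residual, guess).x

true_pose = np.array([8.0, 4.0, 0.3])
angles = measured_angles(true_pose)
print(resect(angles))   # recovers approximately [8.0, 4.0, 0.3]
```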
NASA Technical Reports Server (NTRS)
Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward
2007-01-01
A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ("State charts" here signifies graphical descriptions of the states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around, and can be used to exercise the state-chart behavior of, the generated source code. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which it is required to interact.
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Real-time 3D human capture system for mixed-reality art and entertainment.
Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu
2005-01-01
A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaborating system, we also describe an application of the system in art and entertainment, named Magic Land, which is a mixed reality environment where captured avatars of humans and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The result of the user study not only emphasizes the benefits, but also addresses some issues of these technologies.
The indexed time table approach for planning and acting
NASA Technical Reports Server (NTRS)
Ghallab, Malik; Alaoui, Amine Mounir
1989-01-01
A representation of symbolic temporal relations, called IxTeT, is discussed that is both powerful enough at the reasoning level for tasks such as plan generation, refinement and modification, and efficient enough for dealing with real-time constraints in action monitoring and reactive planning. Such a representation for dealing with time is needed in a teleoperated space robot. After a brief survey of known approaches, the proposed representation is shown to be computationally efficient for managing a large database of temporal relations. Reactive planning with IxTeT is described and exemplified through the problem of mission planning and modification for a simple surveying satellite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu
2016-06-21
One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra, where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.
Fast parallel approach for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2009-12-01
Two-dimensional fast Gabor transform algorithms are useful for real-time applications due to the high computational complexity of the traditional 2-D complex-valued discrete Gabor transform (CDGT). This paper presents two block time-recursive algorithms for 2-D DHT-based real-valued discrete Gabor transform (RDGT) and its inverse transform and develops a fast parallel approach for the implementation of the two algorithms. The computational complexity of the proposed parallel approach is analyzed and compared with that of the existing 2-D CDGT algorithms. The results indicate that the proposed parallel approach is attractive for real time image processing.
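For reference, a brute-force 2-D discrete Hartley transform (the building block of the DHT-based RDGT) can be written directly from its definition with the cas kernel, cas(t) = cos(t) + sin(t); the check against the FFT relation H = Re(F) − Im(F) applies to this "cas of a sum" definition. This is purely illustrative and ignores the fast block time-recursive structure proposed in the paper.

```python
import numpy as np

def cas(t):
    return np.cos(t) + np.sin(t)

def dht2(f):
    """Direct 2-D DHT: H(u,v) = sum_{x,y} f(x,y) cas(2*pi*(u*x/M + v*y/N)).
    Brute force, O(M^2 N^2); illustrative only, not a fast algorithm."""
    M, N = f.shape
    x, y, u, v = np.arange(M), np.arange(N), np.arange(M), np.arange(N)
    phase = 2 * np.pi * (np.einsum('u,x->ux', u, x)[:, None, :, None] / M
                         + np.einsum('v,y->vy', v, y)[None, :, None, :] / N)
    return np.einsum('xy,uvxy->uv', f, cas(phase))

img = np.random.default_rng(3).random((8, 8))
H = dht2(img)
F = np.fft.fft2(img)
print(np.allclose(H, F.real - F.imag))   # True: DHT from the real/imag FFT parts
```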
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate joint movements such as elbow motion. With multiple lightweight and compliant actuators, joint movements can also be generated efficiently. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to cope with this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed a practical approach because of its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and we applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derives a torque distribution strategy for the hybrid actuator while dealing with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Using Real-Time Visual Feedback to Improve Posture at Computer Workstations
ERIC Educational Resources Information Center
Sigurdsson, Sigurdur O.; Austin, John
2008-01-01
The purpose of the current study was to examine the effects of a multicomponent intervention that included discrimination training, real-time visual feedback, and self-monitoring on postural behavior at a computer workstation in a simulated office environment. Using a nonconcurrent multiple baseline design across 8 participants, the study assessed…
Nebula: reconstruction and visualization of scattering data in reciprocal space.
Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H
2015-04-01
Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second to minute time-scales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.
A new ChainMail approach for real-time soft tissue simulation.
Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2016-07-03
This paper presents a new ChainMail method for real-time soft tissue simulation. This method enables the use of different material properties for chain elements to accommodate various materials. Based on the ChainMail bounding region, a new time-saving scheme is developed to improve computational efficiency for isotropic materials. The proposed method also conserves volume and strain energy. Experimental results demonstrate that the proposed ChainMail method can not only accommodate isotropic, anisotropic and heterogeneous materials but also model incompressibility and relaxation behaviors of soft tissues. Further, the proposed method can achieve real-time computational performance.
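The core ChainMail idea, that a displaced element drags its neighbours only as far as needed to keep link lengths within bounds and that propagation stops as soon as the constraints are satisfied, can be shown in one dimension. The bounds and chain below are arbitrary; real implementations work on 2-D/3-D meshes with shear constraints and, as in the method above, per-element material properties.

```python
import numpy as np

def chainmail_1d(x, moved, new_pos, d_min=0.8, d_max=1.2):
    """Minimal 1-D ChainMail sketch: after the 'moved' element is displaced,
    neighbours shift only as far as needed to keep every link length within
    [d_min, d_max]; propagation stops where the constraints already hold."""
    x = x.copy()
    x[moved] = new_pos
    for i in range(moved + 1, len(x)):            # propagate to the right
        gap = x[i] - x[i - 1]
        if gap > d_max:
            x[i] = x[i - 1] + d_max
        elif gap < d_min:
            x[i] = x[i - 1] + d_min
        else:
            break                                  # constraint satisfied: stop
    for i in range(moved - 1, -1, -1):            # propagate to the left
        gap = x[i + 1] - x[i]
        if gap > d_max:
            x[i] = x[i + 1] - d_max
        elif gap < d_min:
            x[i] = x[i + 1] - d_min
        else:
            break
    return x

chain = np.arange(10, dtype=float)                 # unit spacing at rest
print(chainmail_1d(chain, moved=4, new_pos=6.0))
```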
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
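A compact version of this separation can be sketched as follows: form snapshot matrices from consecutive frames, compute one SVD and one eigendecomposition of the projected propagator, and assign modes with near-zero continuous-time frequency to the background. The toy video, rank truncation, and frequency threshold below are made up for illustration.

```python
import numpy as np

def dmd_background(frames, dt=1.0, tol=1e-2):
    """DMD-style background/foreground split of a video cube (n_frames, h, w)."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).T.astype(float)       # columns are frames
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = max(1, int(np.sum(s > 1e-10 * s[0])))       # crude rank truncation
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s     # low-rank propagator
    lam, W = np.linalg.eig(A_tilde)
    Phi = X2 @ Vh.conj().T / s @ W                  # DMD modes
    omega = np.log(lam.astype(complex)) / dt        # continuous-time frequencies
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]
    t = np.arange(n) * dt
    dynamics = b[:, None] * np.exp(omega[:, None] * t[None, :])
    bg = np.abs(omega) < tol                        # near-zero frequency = background
    background = (Phi[:, bg] @ dynamics[bg]).real
    foreground = X - background
    return background.T.reshape(n, h, w), foreground.T.reshape(n, h, w)

# Toy video: static ramp background plus a moving bright blob.
frames = np.tile(np.linspace(0, 1, 32 * 32).reshape(32, 32), (20, 1, 1))
for k in range(20):
    frames[k, 5 + k % 10, 5:9] += 3.0
bg, fg = dmd_background(frames)
print(bg.shape, fg.shape)
```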
Terrain modeling for real-time simulation
NASA Astrophysics Data System (ADS)
Devarajan, Venkat; McArthur, Donald E.
1993-10-01
There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar polygon model of the terrain. The real-time system then computes, for each pixel of the output image, the address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons which may be displayed for each scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD) and then to select LODs for display as a function of range. This approach can lead both to anomalies in the displayed scene and to inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, the visibility of some nodes drops below the limit of perception and those nodes may be deleted, while new nodes must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm which are useful for optimizing system performance with a limited display capability.
NASA Technical Reports Server (NTRS)
Bargar, Robin
1995-01-01
The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.
A real-time chirp-coded imaging system with tissue attenuation compensation.
Ramalli, A; Guidi, F; Boni, E; Tortoli, P
2015-07-01
In ultrasound imaging, pulse compression methods based on the transmission (TX) of long coded pulses and matched receive filtering can be used to improve the penetration depth while preserving the axial resolution (coded imaging). The performance of most of these methods is affected by the frequency-dependent attenuation of tissue, which causes mismatch of the receiver filter. This, together with the additional computational load involved, has probably limited the implementation of pulse compression methods in real-time imaging systems so far. In this paper, a real-time, low-computational-cost coded-imaging system operating on the beamformed and demodulated data received by a linear array probe is presented. The system has been implemented by extending the firmware and the software of the ULA-OP research platform. In particular, pulse compression is performed by exploiting the computational resources of a single digital signal processor. Each image line is produced in less than 20 μs, so that, e.g., 192-line frames can be generated at up to 200 fps. Although the system may work with a large class of codes, this paper focuses on testing linear frequency-modulated chirps. The new system has been used to experimentally investigate the effects of tissue attenuation so that the design of the receive compression filter can be guided accordingly. Tests made with different chirp signals confirm that, although the attainable compression gain in attenuating media is lower than the theoretical value expected for a given TX time-bandwidth product (BT), good SNR gains can be obtained. For example, by using a chirp signal having BT=19, a 13 dB compression gain has been measured. By adapting the frequency band of the receiver to the band of the received echo, the signal-to-noise ratio and the penetration depth have been further increased, as shown by real-time tests conducted on phantoms and in vivo. In particular, a 2.7 dB SNR increase has been measured with a novel attenuation compensation scheme, which only requires shifting the demodulation frequency by 1 MHz. The proposed method is characterized by its simplicity and ease of implementation. Copyright © 2015 Elsevier B.V. All rights reserved.
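The basic chirp compression step (before any attenuation compensation) can be sketched as transmit-replica matched filtering: correlate the received line with the time-reversed chirp. Sampling rate, chirp parameters, and echo positions below are invented, and the sketch works on raw RF samples rather than the beamformed, demodulated data of the real system.

```python
import numpy as np

fs = 40e6                       # sampling rate [Hz]
T, f0, f1 = 10e-6, 2e6, 6e6     # 10 us chirp sweeping 2-6 MHz (BT = 40)
t = np.arange(0, T, 1 / fs)
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))

# Received line: two point echoes at different depths, buried in noise.
rng = np.random.default_rng(5)
rx = rng.normal(0, 0.3, 4096)
for delay, amp in ((800, 1.0), (2100, 0.4)):
    rx[delay:delay + chirp.size] += amp * chirp

matched = chirp[::-1]                            # matched-filter impulse response
compressed = np.convolve(rx, matched, mode='same')
print("strongest compressed echo near sample", int(np.argmax(np.abs(compressed))))
```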
NASA Astrophysics Data System (ADS)
Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling
2014-10-01
Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, which is often overlooked in existing real-time 3-D shape measurement systems. To this end, here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. In addition, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
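The pre-computed pixel-mapping idea can be sketched with a one-parameter radial distortion model: build, once, a lookup table giving for every corrected pixel the distorted pixel it should be read from, so each incoming frame is corrected by a single gather. The camera parameters below are fabricated, and nearest-neighbour sampling is used instead of interpolation for brevity.

```python
import numpy as np

H, W = 480, 640
cx, cy, f, k1 = W / 2, H / 2, 600.0, -0.25     # made-up intrinsics and distortion

# Offline step: for every pixel of the corrected image, find its source pixel.
yy, xx = np.mgrid[0:H, 0:W].astype(float)
xn, yn = (xx - cx) / f, (yy - cy) / f           # normalized ideal coordinates
r2 = xn**2 + yn**2
xd, yd = xn * (1 + k1 * r2), yn * (1 + k1 * r2) # forward radial distortion model
map_x = np.clip(np.rint(xd * f + cx), 0, W - 1).astype(np.intp)
map_y = np.clip(np.rint(yd * f + cy), 0, H - 1).astype(np.intp)

def undistort(frame):
    """Real-time step: one nearest-neighbour gather through the stored LUT."""
    return frame[map_y, map_x]

frame = np.random.default_rng(6).random((H, W))
print(undistort(frame).shape)
```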
Improving the Capture and Re-Use of Data with Wearable Computers
NASA Technical Reports Server (NTRS)
Pfarr, Barbara; Fating, Curtis C.; Green, Daniel; Powers, Edward I. (Technical Monitor)
2001-01-01
At the Goddard Space Flight Center, members of the Real-Time Software Engineering Branch are developing a wearable, wireless, voice-activated computer for use in a wide range of crosscutting space applications that would benefit from having instant Internet, network, and computer access with complete mobility and hands-free operations. These applications can be applied across many fields and disciplines including spacecraft fabrication, integration and testing (including environmental testing), and astronaut on-orbit control and monitoring of experiments with ground based experimenters. To satisfy the needs of NASA customers, this wearable computer needs to be connected to a wireless network, to transmit and receive real-time video over the network, and to receive updated documents via the Internet or NASA servers. The voice-activated computer, with a unique vocabulary, will allow the users to access documentation in a hands free environment and interact in real-time with remote users. We will discuss wearable computer development, hardware and software issues, wireless network limitations, video/audio solutions and difficulties in language development.
A heterogeneous hierarchical architecture for real-time computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skroch, D.A.; Fornaro, R.J.
The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.
Development of a support software system for real-time HAL/S applications
NASA Technical Reports Server (NTRS)
Smith, R. S.
1984-01-01
Methodologies employed in defining and implementing a software support system for the HAL/S computer language for real-time operations on the Shuttle are detailed. Attention is also given to the management and validation techniques used during software development and software maintenance. Utilities developed to support the real-time operating conditions are described. With the support system being produced on Cyber computers and executable code then processed through Cyber or PDP machines, the support system has a production level status and can serve as a model for other software development projects.
Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi
2013-01-01
This paper presents a pipelined VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in the computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse and inverse-square-root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept of data-flow updating to assist the pipelined VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sampling rate of the raw EEG data is 128 Hz. The core size of the SVD processor is 580×580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power during each execution of the 8-channel EEG system.
"One-Stop Shopping" for Ocean Remote-Sensing and Model Data
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook
2006-01-01
OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an ocean model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as a MySQL database, a Java web server (Apache/Tomcat), the Live Access Server (LAS), interactive graphics with Java applets on the client side and MatLab/GMT on the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D ocean model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, Binary, etc. The interactive visualization is provided by the graphics software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an ocean model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables, and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server for
Detecting strain in birefringent materials using spectral polarimetry
NASA Technical Reports Server (NTRS)
Garner, Harold R. (Inventor); Ragucci, Anthony J. (Inventor); Cisar, Alan J. (Inventor); Huebschman, Michael L. (Inventor)
2010-01-01
A method, computer program product and system for analyzing multispectral images from a plurality of regions of birefringent material, such as a polymer film, using polarized light and a corresponding polar analyzer to identify differential strain in the birefringent material. For example, the birefringent material may be low-density polyethylene (LDPE), high-density polyethylene (HDPE), polypropylene, polyethylene terephthalate (PET), polyvinyl chloride (PVC), polyvinylidene chloride, polyester, nylon, or cellophane film. Optionally, the method includes generating a real-time quantitative strain map.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Molthan, Andrew; Zavodsky, Bradley T.; Case, Jonathan L.; LaFontaine, Frank J.; Srikishen, Jayanthi
2010-01-01
The NASA Short-term Prediction Research and Transition Center (SPoRT)'s new "Weather in a Box" resources will provide weather research and forecast modeling capabilities for real-time application. Model output will provide additional forecast guidance and research into the impacts of new NASA satellite data sets and software capabilities. By combining several research tools and satellite products, SPoRT can generate model guidance that is strongly influenced by unique NASA contributions.
Acousto-ultrasonic system for the inspection of composite armored vehicles
NASA Astrophysics Data System (ADS)
Godinez, Valery F.; Carlos, Mark F.; Delamere, Michael; Hoch, William; Fotopoulos, Christos; Dai, Weiming; Raju, Basavaraju B.
2001-04-01
In this paper the design and implementation of a unique acousto-ultrasonics system for the inspection of composite armored vehicles is discussed. The system includes a multi-sensor probe with a position-tracking device mounted on a computer controlled scanning bridge. The system also includes an arbitrary waveform generator with a multiplexer and a multi-channel acoustic emission board capable of simultaneously collecting and processing up to four acoustic signals in real time. C-scans of an armored vehicle panel with defects are presented.
Crew procedures development techniques
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Hawk, M. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.
1975-01-01
The study developed requirements, designed, developed, checked out and demonstrated the Procedures Generation Program (PGP). The PGP is a digital computer program which provides a computerized means of developing flight crew procedures based on crew action in the shuttle procedures simulator. In addition, it provides a real time display of procedures, difference procedures, performance data and performance evaluation data. Reconstruction of displays is possible post-run. Data may be copied, stored on magnetic tape and transferred to the document processor for editing and documentation distribution.
Rapid Sequencing of Complete env Genes from Primary HIV-1 Samples
Eren, Kemal; Ignacio, Caroline; Landais, Elise; Weaver, Steven; Phung, Pham; Ludka, Colleen; Hepler, Lance; Caballero, Gemma; Pollner, Tristan; Guo, Yan; Richman, Douglas; Poignard, Pascal; Paxinos, Ellen E.; Kosakovsky Pond, Sergei L.
2016-01-01
The ability to study rapidly evolving viral populations has been constrained by the read length of next-generation sequencing approaches and the sampling depth of single-genome amplification methods. Here, we develop and characterize a method using Pacific Biosciences’ Single Molecule, Real-Time (SMRT®) sequencing technology to sequence multiple, intact full-length human immunodeficiency virus-1 env genes amplified from viral RNA populations circulating in blood, and provide computational tools for analyzing and visualizing these data. PMID:29492273
NASA Astrophysics Data System (ADS)
Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart
2016-04-01
Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement has so far not been possible at a microscopic level. Such measurement would allow better individualized treatment planning and improve the design of manufactured devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a large number of different anatomic shapes and the corresponding effects of different devices, would require a fast and reliable method with high-throughput assessment at low cost. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. On-line flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. those produced by a light-field camera capturing 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light-field image sequences. Averaging across a sequence of single, double or triple shots of flashed images enables reconstruction of the real-time corpuscular flow through the vessel system before and after device placement. This approach could enable 3D insight into microscopic flow within blood vessels and aneurysms at submillimeter resolution. We present an approach that allows real-time assessment of 3D particle flow by high-speed light-field image analysis, including a solution that addresses the high computational load of the image processing. The imaging set-up accomplishes fast and reliable PIV analysis in transparent 3D models of brain aneurysms at low cost. High-throughput microscopic flow assessment of different shapes of brain aneurysms, as required for patient-specific device designs, may therefore become possible.
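The PIV measurement itself reduces, per interrogation window, to locating the peak of the cross-correlation between the two exposures; a minimal FFT-based version is sketched below with a synthetic particle image and a known shift. Real plenoptic/light-field processing adds depth reconstruction on top of this step.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the mean particle displacement between two interrogation
    windows via FFT cross-correlation (the basic 2-D PIV step)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy - win_a.shape[0] // 2, dx - win_a.shape[1] // 2

# Toy exposures: a random particle image shifted by a known amount.
rng = np.random.default_rng(7)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))   # ~ (3, -2)
```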
NASA Astrophysics Data System (ADS)
Hernandez, C.
2010-09-01
The weakness of small-island electrical grids is a handicap for electricity generation from renewable energy sources. With the intention of maximizing the installation of photovoltaic generators in the Canary Islands, there arises the need to develop a solar forecasting system that makes it possible to know in advance the amount of PV-generated electricity that will be fed into the grid from the PV power plants installed on the island. The forecasting tools need to be fed with real weather data in "real time" from remote weather stations. Nevertheless, the transfer of these data to the computation servers is very complicated with the old point-to-point telecommunication systems, which allow neither the transfer of data from several remote weather stations simultaneously nor a high sampling frequency of weather parameters, owing to the slowness of the connection. This project has developed a telecommunications infrastructure that allows sensor-equipped remote stations to send data from their sensors, once every minute and simultaneously, to the computation server running the solar forecasting numerical models. For this purpose, the Canary Islands Institute of Technology has added a sophisticated communications network to its 30 weather stations measuring irradiation at strategic sites: areas with high penetration of photovoltaic generation or with the potential to host future grid-connected photovoltaic power plants. In each of the stations, instruments have been installed to measure irradiance on an inclined silicon cell, global radiation on a horizontal surface, and ambient temperature. Mobile telephone devices have been installed and programmed in each of the weather stations, which allow the transfer of their data using the UMTS service offered by the local telephone operator. Every minute, the computer server running the numerical weather forecasting models receives data from 120 instruments distributed over the 30 radiometric stations. As a result, there now exists a stable, flexible, safe and economical infrastructure of radiometric stations and telecommunications that allows, on the one hand, data to be obtained in real time from all 30 remote weather stations and, on the other hand, communication with the stations in order to reprogram them and carry out maintenance work.
Wyman, Alyssa Jayne; Vyse, Stuart
2008-07-01
The authors asked 52 college students (38 women, 14 men, M age = 19.3 years, SD = 1.3 years) to identify their personality summaries by using a computer-generated astrological natal chart when presented with 1 true summary and 1 bogus one. Similarly, the authors asked participants to identify their true personality profile from real and bogus summaries that the authors derived from the NEO Five-Factor Inventory (NEO-FFI; P. T. Costa Jr. & R. R. McCrae, 1985). Participants identified their real NEO-FFI profiles at a greater-than-chance level but were unable to identify their real astrological summaries. The authors observed a P. T. Barnum effect in the accuracy ratings of both psychological and astrological measures but did not find differences between the odd-numbered (i.e., favorable) signs and the even-numbered (i.e., unfavorable) signs.
Real-Time Detection and Tracking of Multiple People in Laser Scan Frames
NASA Astrophysics Data System (ADS)
Cui, J.; Song, X.; Zhao, H.; Zha, H.; Shibasaki, R.
This chapter presents an approach to detect and track multiple people robustly in real time using laser scan frames. The detection and tracking of people in real time is a problem that arises in a variety of different contexts. Examples include intelligent surveillance for security purposes, scene analysis for service robots, and crowd behavior analysis for human behavior studies. Over the last several years, an increasing number of laser-based people-tracking systems have been developed on both mobile robotics platforms and fixed platforms using one or multiple laser scanners. It has been shown that processing laser scanner data makes the tracker much faster and more robust than a vision-only based one in complex situations. In this chapter, we present a novel robust tracker to detect and track multiple people in a crowded, open area in real time. First, raw data are obtained that measure the two legs of each person at a height of 16 cm above the ground, using multiple registered laser scanners. A stable feature is extracted using the accumulated distribution of successive laser frames. In this way, the noise that generates split and merged measurements is smoothed well, and the pattern of rhythmically swinging legs is utilized to extract each leg. Second, a probabilistic tracking model is presented, and a sequential inference process using the Bayesian rule is described. The sequential inference process is difficult to compute analytically, so two strategies are presented to simplify the computation. In the case of independent tracking, the Kalman filter is used with a more efficient measurement likelihood model based on a region-coherency property. Finally, to deal with trajectory fragments, we present a concise approach to fuse a small amount of visual information from a synchronized video camera with the laser data. Evaluation with real data shows that the proposed method is robust and effective, achieving a significant improvement over existing laser-based trackers.
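For the independent-tracking case mentioned above, a constant-velocity Kalman filter over a person's (x, y) centre is the standard building block; the sketch below runs one such filter on synthetic leg-centre measurements, with noise levels chosen arbitrarily rather than taken from the chapter.

```python
import numpy as np

# Constant-velocity model for one tracked person's centre (illustrative values).
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.05 * np.eye(4)              # process noise (walking accelerations)
R = 0.02 * np.eye(2)              # laser measurement noise

x = np.zeros(4)                   # state: [x, y, vx, vy]
P = np.eye(4)

def kalman_step(z):
    """One predict/update cycle for a leg-centre measurement z = (x, y)."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (z - H @ x)                            # update
    P = (np.eye(4) - K @ H) @ P
    return x[:2]

rng = np.random.default_rng(8)
for k in range(20):
    z = np.array([0.5 * k * dt, 0.2 * k * dt]) + rng.normal(0, 0.05, 2)
    est = kalman_step(z)
print("final position estimate:", est)
```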
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we determine whether the system, as designed, meets the required
Apollo experience report: Real-time auxiliary computing facility development
NASA Technical Reports Server (NTRS)
Allday, C. E.
1972-01-01
The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.
Flight code validation simulator
NASA Astrophysics Data System (ADS)
Sims, Brent A.
1996-05-01
An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.